The Learning Brain

“There’s a slogan in education,” says McGovern Investigator John Gabrieli. “The first three years are learning to read, and after that you read to learn.”

For John Gabrieli, learning to read represents one of the most important milestones in a child’s life. Except, that is, when a child can’t. Children who cannot learn to read adequately by first grade have a 90 percent chance of still reading poorly in fourth grade, and 75 percent odds of struggling in high school. For the estimated 10 percent of schoolchildren with a reading disability, that struggle often comes with a host of other social and emotional challenges: anxiety, damaged self-esteem, and increased risk for poverty and, eventually, encounters with the criminal justice system.

Most reading interventions focus on classical dyslexia, which is essentially a coding problem—trouble moving letters into sound patterns in the brain. But other factors, such as inadequate vocabulary and lack of practice opportunities, hinder reading too. The diagnosis can be subjective, and for those who are diagnosed, the standard treatments help only some students. “Every teacher knows half to two-thirds have a good response, the other third don’t,” Gabrieli says. “It’s a mystery. And amazingly there’s been almost no progress on that.”

For the last two decades, Gabrieli has sought to unravel the neuroscience behind learning and reading disabilities and, ultimately, convert that understanding into new and better education
interventions—a sort of translational medicine for the classroom.

The Home Effect

In 2011, when Julia Leonard was a research assistant in Gabrieli’s lab, she planned to go into pediatrics. But she became drawn to the lab’s education projects and decided to stay on as
a graduate student to learn more. By 2015, she had helped coauthor a landmark study with postdoc Allyson Mackey that sought neural markers of the academic “achievement gap” separating higher socioeconomic status (SES) children from their disadvantaged peers. It was the first study to connect SES-linked differences in brain structure with educational outcomes. Specifically, they found that children from wealthier backgrounds had thicker cortex in certain brain regions, which correlated with better academic achievement.

“Being a doctor is a really awesome and powerful career,” she says. “But I was more curious about the research that could cause bigger changes in children’s lives.”

Leonard collaborated with Rachel Romeo, another graduate student in the Gabrieli lab who wanted to understand the powerful effect of SES on the developing brain. Romeo brought a distinctive background in speech pathology and literacy, fields in which she’d observed wealthier students progressing more quickly than their disadvantaged peers.

Their research is revealing a fascinating picture. In a 2017 study, Romeo compared how reading-disabled children from low and high SES backgrounds fared after an intensive summer reading intervention. Low SES children in the intervention improved most in their reading, and MRI scans revealed their brains also underwent greater structural changes in response to the intervention. Higher SES children did not appear to change much, either in skill or brain structure.

“In the few studies that have looked at SES effects on treatment outcomes,” Romeo says, “the research suggests that higher SES kids would show the most improvement. We were surprised to
find that this wasn’t true.” She suspects that the midsummer timing of the intervention may account for this. Lower SES kids’ performance often suffers most during a “summer slump,”
and they would therefore have the greatest potential to improve from an intervention at this time.

However, in another study this year, Leonard uncovered unique brain differences in lower-SES children. Only among lower-SES children was better reasoning ability associated with thicker
cortex in a key part of the brain. Same behavior, different neural signatures.

“So this becomes a really interesting basic science question,” Leonard says. “Does the brain support cognition the same way across everyone, or does it differ based on how you grow up?”

Not a One-Size-Fits-All

Critics of such “educational neuroscience” have highlighted the lack of useful interventions produced by this research. Gabrieli agrees that so far, little has emerged. “The painful thing is the slowness of this work. It’s mind-boggling,” Gabrieli admits. Every intervention must clear all the usual human-subjects research requirements, plus coordination with schools, parents, teachers, and so on. “It’s a huge process to do even the smallest intervention,” he explains. Partly because of that, the field is still relatively new.

But he disagrees with the idea that nothing will come from this research. Gabrieli’s lab previously identified neural markers in children who would go on to develop reading disabilities. These markers could even predict who would or would not respond to standard treatments that focus on phonetic letter-sound coding.

Romeo and Leonard’s work suggests that varied etiologies underlie reading disabilities, which may be the key. “For so long people have thought that reading disorders were just a unitary construct: kids are bad at reading, so let’s fix that with a one-size-fits-all treatment,” Romeo says.

Such findings may ultimately help resource-strapped schools target existing phonetic training to the students it can actually help, rather than enrolling all struggling readers in the same program only to see some still fail.

Think Spaces

At the Oliver Hazard Perry School, a public K-8 school located on the South Boston waterfront, teachers like Colleen Labbe have begun to independently navigate similar problems as they try
to reach their own struggling students.

“A lot of times we look at assessments and put students in intervention groups like phonics,” Labbe says. “But it’s important to also ask what is happening for these students on their way to school and at home.”

For Labbe and Perry Principal Geoffrey Rose, brain science has proven transformative. They’ve embraced literature on neuroplasticity—the idea that brains can change if teachers find the right combination of intervention and circumstances, like the low-SES students who benefited in Romeo and Leonard’s study.

“A big myth is that the brain can’t grow and change, and if you can’t reach that student, you pass them off,” Labbe says.

The science has also been empowering to her students, validating their own powers of self-change. “I tell the kids, we’re going to build the goop!” she says, referring to the brain’s ability to make new connections.

“All kids can learn,” Rose agrees. “But the flip of that is, can all kids do school?” His job, he says, is to make sure they can.

The classrooms at Perry are a mix of students from different cultures and socioeconomic backgrounds, so he and Labbe have focused on helping teachers find ways to connect with these children and help them manage their stresses so they are ready to learn. Teachers here are armed with “scaffolds”—digestible neuro- and cognitive science aids culled from Rose’s postdoctoral studies at Boston College’s Professional School Administrator Program for school leaders. These encourage teachers to be more aware of cultural differences and tendencies in themselves and their students, so they can connect better.

There are also “Think Spaces” tucked into classroom corners. “Take a deep breath and be calm,” read posters at these soothing stations, which are equipped with de-stressing tools, like squeezable balls, play-dough, and meditation-inspiring sparkle wands. It sounds trivial, yet studies have shown that poverty-linked stressors like food and home insecurity take a toll on emotion and memory-linked brain areas like the amygdala and hippocampus.

In fact, a new study by Clemens Bauer, a postdoc in Gabrieli’s lab, argues that mindfulness training can calm amygdala hyperactivity, lower self-perceived stress, and boost attention. His study was conducted with children enrolled in a Boston charter school.

Taking these combined approaches, Labbe says, she’s seen one of her students rise from struggling at the lowest levels of instruction to thriving by year’s end. Labbe’s focus on understanding the girl’s stressors, her family environment, and the social and emotional support she really needed was key. “Now she knows she can do it,” Labbe says.

Rose and Labbe only wish they could better bridge the gap between educators like themselves and brain scientists like Gabrieli. To help forge these connections, Rose recently visited Gabrieli’s lab and looks forward to future collaborations. Brain research will provide critical insights into teaching strategy, he says, but the gap is still wide.

From Lab to Classroom

“I’m hugely impressed by principals and teachers who are passionately interested in understanding the brain,” Gabrieli says. Fortunately, new efforts are bridging educators and scientists.

This March, Gabrieli and the MIT Integrated Learning Initiative—MITili, which he also directs—announced a $30 million grant from the Chan Zuckerberg Initiative for a collaboration
between MIT, the Harvard Graduate School of Education, and Florida State University.

The grant aims to translate some of Gabrieli’s work into more classrooms. Specifically, he hopes to produce better diagnostics that can identify children at risk for dyslexia and other learning
disabilities before they even learn to read.

He also hopes to provide rudimentary diagnostics that identify the source of each child’s struggle, be it classic dyslexia, lack of home support, stress, or a combination of factors. That, in turn,
could guide treatment: standard phonetic care for some children, or alternatives such as social support akin to Labbe’s efforts, reading practice, or simply vocabulary-boosting conversation time with adults.

“We want to get every kid to be an adequate reader by the end of the third grade,” Gabrieli says. “That’s the ultimate goal for me: to help all children become learners.”

How music lessons can improve language skills

Many studies have shown that musical training can enhance language skills. However, it was unknown whether music lessons improve general cognitive ability, leading to better language proficiency, or whether their effect is more specific to language processing.

A new study from MIT has found that piano lessons have a very specific effect on kindergartners’ ability to distinguish different pitches, which translates into an improvement in discriminating between spoken words. However, the piano lessons did not appear to confer any benefit for overall cognitive ability, as measured by IQ, attention span, and working memory.

“The children didn’t differ in the more broad cognitive measures, but they did show some improvements in word discrimination, particularly for consonants. The piano group showed the best improvement there,” says Robert Desimone, director of MIT’s McGovern Institute for Brain Research and the senior author of the paper.

The study, performed in Beijing, suggests that musical training is at least as beneficial for language skills as extra reading lessons, and possibly more so. The school where the study was performed has continued to offer piano lessons to students, and the researchers hope their findings could encourage other schools to keep or enhance their music offerings.

Yun Nan, an associate professor at Beijing Normal University, is the lead author of the study, which appears in the Proceedings of the National Academy of Sciences the week of June 25.

Other authors include Li Liu, Hua Shu, and Qi Dong, all of Beijing Normal University; Eveline Geiser, a former MIT research scientist; Chen-Chen Gong, an MIT research associate; and John Gabrieli, the Grover M. Hermann Professor in Health Sciences and Technology, a professor of brain and cognitive sciences, and a member of MIT’s McGovern Institute for Brain Research.

Benefits of music

Previous studies have shown that on average, musicians perform better than nonmusicians on tasks such as reading comprehension, distinguishing speech from background noise, and rapid auditory processing. However, most of these studies have been done by asking people about their past musical training. The MIT researchers wanted to perform a more controlled study in which they could randomly assign children to receive music lessons or not, and then measure the effects.

They decided to perform the study at a school in Beijing, along with researchers from the IDG/McGovern Institute at Beijing Normal University, in part because education officials there were interested in studying the value of music education versus additional reading instruction.

“If children who received music training did as well or better than children who received additional academic instruction, that could be a justification for why schools might want to continue to fund music,” Desimone says.

The 74 children participating in the study were divided into three groups: one that received 45-minute piano lessons three times a week; one that received extra reading instruction for the same period of time; and one that received neither intervention. All children were 4 or 5 years old and spoke Mandarin as their native language.

After six months, the researchers tested the children on their ability to discriminate words based on differences in vowels, consonants, or tone (many Mandarin words differ only in tone). Better word discrimination usually corresponds with better phonological awareness — the awareness of the sound structure of words, which is a key component of learning to read.

Children who had piano lessons showed a significant advantage over children in the extra reading group in discriminating between words that differ by one consonant. Children in both the piano group and extra reading group performed better than children who received neither intervention when it came to discriminating words based on vowel differences.

The researchers also used electroencephalography (EEG) to measure brain activity and found that children in the piano group had stronger responses than the other children when they listened to a series of tones of different pitch. This suggests that a greater sensitivity to pitch differences is what helped the children who took piano lessons to better distinguish different words, Desimone says.

“That’s a big thing for kids in learning language: being able to hear the differences between words,” he says. “They really did benefit from that.”

In tests of IQ, attention, and working memory, the researchers did not find any significant differences among the three groups of children, suggesting that the piano lessons did not confer any improvement on overall cognitive function.

Aniruddh Patel, a professor of psychology at Tufts University, says the findings also address the important question of whether purely instrumental musical training can enhance speech processing.

“This study answers the question in the affirmative, with an elegant design that directly compares the effect of music and language instruction on young children. The work specifically relates behavioral improvements in speech perception to the neural impact of musical training, which has both theoretical and real-world significance,” says Patel, who was not involved in the research.

Educational payoff

Desimone says he hopes the findings will help to convince education officials who are considering abandoning music classes in schools not to do so.

“There are positive benefits to piano education in young kids, and it looks like for recognizing differences between sounds including speech sounds, it’s better than extra reading. That means schools could invest in music and there will be generalization to speech sounds,” Desimone says. “It’s not worse than giving extra reading to the kids, which is probably what many schools are tempted to do — get rid of the arts education and just have more reading.”

Desimone now hopes to delve further into the neurological changes caused by music training. One way to do that is to perform EEG tests before and after a single intense music lesson to see how the brain’s activity has been altered.

The research was funded by the National Natural Science Foundation of China, the Beijing Municipal Science and Technology Commission, the Interdiscipline Research Funds of Beijing Normal University, and the Fundamental Research Funds for the Central Universities.

Yanny or Laurel?

“Yanny” or “Laurel?” Discussion around this auditory version of “The Dress” has divided the internet this week.

In this video, brain and cognitive science PhD students Dana Boebinger and Kevin Sitek, both members of the McGovern Institute, unpack the science — and settle the debate. The upshot? Our brain is faced with a myriad of sensory cues that it must process and make sense of simultaneously. Hearing is no exception, and two brains can sometimes “translate” soundwaves in very different ways.

Nancy Kanwisher receives 2018 Heineken Prize

Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience at MIT, has been named a recipient of the 2018 Heineken Prize — the Netherlands’ most prestigious scientific prize — for her work on the functional organization of the human brain.

Kanwisher, who is a professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research, uses neuroimaging to study the functional organization of the human brain. Over the last 20 years her lab has played a central role in the identification of regions of the human brain that are engaged in particular components of perception and cognition. Many of these regions are very specifically engaged in a single mental function such as perceiving faces, places, bodies, or words, or understanding the meanings of sentences or the mental states of others. These regions form a “neural portrait of the human mind,” according to Kanwisher, who has assembled dozens of videos for the general public on her website, NancysBrainTalks.

“Nancy Kanwisher is an exceptionally innovative and influential researcher in cognitive neuropsychology and the neurosciences,” according to the Royal Netherlands Academy of Arts and Sciences, the organization that selects the prizewinners. “She is being recognized with the 2018 C.L. de Carvalho-Heineken Prize for Cognitive Science for her highly original, meticulous and cogent research on the functional organization of the human brain.”

Kanwisher is among five international scientists who have been recognized by the academy with the biennial award. The other winners include biomedical scientist Peter Carmeliet of the University of Leuven, biologist Paul Hebert of the University of Guelph, historian John R. McNeill of Georgetown University, and biophysicist Xiaowei Zhuang of Harvard University.

The Heineken Prizes, each worth $200,000, are named after Henry P. Heineken (1886-1971) and Alfred H. Heineken (1923-2002); Charlene de Carvalho-Heineken (b. 1954) chairs the Dr H.P. Heineken Foundation and the Alfred Heineken Fondsen Foundation, which fund the prizes. The laureates are selected by juries assembled by the academy and made up of leading Dutch and foreign scientists and scholars.

The Heineken Prizes will be presented at an award ceremony on Sept. 27 in Amsterdam.

Engineering intelligence

Go is an ancient board game that demands not only strategy and logic, but intuition, creativity, and subtlety—in other words, it’s a game of quintessentially human abilities. Or so it seemed, until Google’s DeepMind AI program, AlphaGo, roundly defeated the world’s top Go champion.

But ask AlphaGo to read social cues or interpret what another person is thinking, and it wouldn’t know where to start. It wouldn’t even understand that it didn’t know where to start. Outside of its game-playing milieu, the program is as smart as a rock.

“The problem of intelligence is the greatest problem in science,” says Tomaso Poggio, Eugene McDermott Professor of Brain and Cognitive Sciences at the McGovern Institute. One reason why? We still don’t really understand intelligence in ourselves.

Right now, most advanced AI development is led by industry giants like Facebook, Google, Tesla, and Apple, with an emphasis on engineering and computation and very little work on the human brain. That approach has yielded enormous breakthroughs, including Siri and Alexa, ever-better autonomous cars, and AlphaGo.

But as Poggio points out, the algorithms behind most of these incredible technologies come right out of past neuroscience research–deep learning networks and reinforcement learning. “So it’s a good bet,” Poggio says, “that one of the next breakthroughs will also come from neuroscience.”

Five years ago, Poggio and a host of researchers at MIT and beyond took that bet when they applied for and won a $25 million Science and Technology Center award from the National Science Foundation to form the Center for Brains, Minds and Machines. The goal of the center was to take those computational approaches and blend them with basic, curiosity-driven research in neuroscience and cognition. They would knock down the divisions that traditionally separated these fields, and not only unlock the secrets of human intelligence and develop smarter AIs, but found an entirely new field—the science and engineering of intelligence.

A collaborative foundation

CBMM is a sprawling research initiative headquartered at the McGovern Institute, encompassing faculty at Harvard, Johns Hopkins, Rockefeller and Stanford; over a dozen industry collaborators including Siemens, Google, Toyota, Microsoft, Schlumberger and IBM; and partner institutions such as Howard University, Wellesley College and the University of Puerto Rico. The effort has already churned out 397 publications and has just been renewed for five more years and another $25 million.

For the first few years, collaboration in such a complex center posed a challenge. Research efforts were still divided into traditional silos—one research thrust for cognitive science, another for computation, and so on. But as the center grew, colleagues found themselves talking more and a new common language emerged. Immersed in each other’s research, the divisions began to fade.

“It became more than just a center in name,” says Matthew Wilson, associate director of CBMM and the Sherman Fairchild Professor of Neuroscience in MIT’s Department of Brain and Cognitive Sciences (BCS). “It really was trying to drive a new way of thinking about research, motivated by this shared vision that all the participants had.”

New questioning

Today, the center is structured around four interconnected modules grounded in the problem of visual intelligence—vision, because it is the best understood and most easily traced of our senses. The first module, co-directed by Poggio himself, unravels the visual operations that begin within the first few milliseconds of visual recognition, as information travels through the eye to the visual cortex. Gabriel Kreiman, who studies visual comprehension at Harvard Medical School and Children’s Hospital, leads the second module, which takes on the subsequent events: the brain directs the eye where to go next, determines what it is seeing and what to pay attention to, and integrates this information into the holistic picture of the world that we experience. His research questions have grown as a result of CBMM’s cross-disciplinary influence.

Leyla Isik, a postdoc in Kreiman’s lab, is now tackling one of his new research initiatives: social intelligence. “So much of what we do and see as humans is social interaction between people. But even the best machines have trouble with it,” she explains.

To reveal the underlying computations of social intelligence, Isik is using data gathered from epilepsy patients as they watch full-length movies. (Certain patients spend several weeks before surgery with monitoring electrodes in their brains, providing a rare opportunity for scientists to see inside the brain of a living, thinking human.) Isik hopes to pick out reliable patterns in their neural activity that indicate when a patient is processing certain social cues, such as faces. “It’s a pretty big challenge, so to start out we’ve tried to simplify the problem a little bit and just look at basic social visual phenomena,” she explains.

In true CBMM spirit, Isik is co-advised by another McGovern investigator, Nancy Kanwisher, who helps lead CBMM’s third module with BCS Professor of Computational Cognitive Science, Josh Tenenbaum. That module picks up where the second leaves off, asking still deeper questions about how the brain understands complex scenes, and how infants and children develop the ability to piece together the physics and psychology of new events. In Kanwisher’s lab, instead of a stimulus-heavy movie, Isik shows simple stick figures to subjects in an MRI scanner. She’s looking for specific regions of the brain that engage only when the subjects view the “social interactions” between the figures. “I like the approach of tackling this problem both from very controlled experiments as well as something that’s much more naturalistic in terms of what people and machines would see,” Isik explains.

Built-in teamwork

Such complementary approaches are the norm at CBMM. Postdocs and graduate students are required to have at least two advisors in two different labs. The NSF money is even assigned directly to postdoc and graduate student projects. This ensures that collaborations are baked into the center, Wilson explains. “If the idea is to create a new field in the science of intelligence, you can’t continue to support work the way it was done in the old fields—you have to create a new model.”

In other labs, students and postdocs blend imaging with cognitive science to understand how the brain represents physics—like the mass of an object it sees. Or they’re combining human, primate, mouse and computational experiments to better understand how the living brain represents new objects it encounters, and then building algorithms to test the resulting theories.

Boris Katz’s lab is in the fourth and final module, which focuses on figuring out how the brain’s visual intelligence ties into higher-level thinking, like goal planning, language, and abstract concepts. One project, led by MIT research scientist Andrei Barbu and Yen-Ling Kuo, in collaboration with Harvard cognitive scientist Liz Spelke, is attempting to uncover how humans and machines devise plans to navigate around complex and dangerous environments.

“CBMM gives us the opportunity to close the loop between machine learning, cognitive science, and neuroscience,” says Barbu. “The cognitive science informs better machine learning, which helps us understand how humans behave and that in turn points the way toward understanding the structure of the brain. All of this feeds back into creating more capable machines.”

A new field

Every summer, CBMM heads down to Woods Hole, Massachusetts, to deliver an intensive crash course on the science of intelligence to graduate students from across the country. It’s one of many education initiatives designed to spread CBMM’s approach and key to the goal of establishing a new field. The students who come to learn from these courses often find it as transformative as the CBMM faculty did when the center began.

Candace Ross was an undergraduate at Howard University when she got her first taste of CBMM at a summer course, where she worked with Kreiman on modeling human memory with machine learning algorithms. “It was the best summer of my life,” she says. “There were so many concepts I didn’t know about and didn’t understand. We’d get back to the dorm at night and just sit around talking about science.”

Ross loved it so much that she spent a second summer at CBMM, and is now a third-year graduate student working with Katz and Barbu, teaching computers how to use vision and language to learn more like children. She’s since gone back to the summer programs, now as a teaching assistant. “CBMM is a research center,” says Ellen Hildreth, a computer scientist at Wellesley College who coordinates CBMM’s education programs. “But it also fosters a strong commitment to education, and that effort is helping to create a community of researchers around this new field.”

Quest for intelligence

CBMM has far to go in its mission to understand the mind, but there is good reason to believe that what CBMM started will continue well beyond the NSF-funded ten years.

This February, MIT announced a new institute-wide initiative called the MIT Intelligence Quest, or MIT IQ. It’s a massive interdisciplinary push to study human intelligence and create new tools based on that knowledge. It is also, says McGovern Institute Director Robert Desimone, a sign of the institute’s faith in what CBMM itself has so far accomplished. “The fact that MIT has made this big commitment in this area is an endorsement of the kind of view we’ve been promoting through CBMM,” he says.

MIT IQ consists of two linked entities: “The Core” and “The Bridge.” CBMM is part of the Core, which will advance the science and engineering of both human and machine intelligence. “This combination is unique to MIT,” explains Poggio, “and is designed to win not only Turing but also Nobel prizes.”

And more than that, points out BCS Department Head Jim DiCarlo, it’s also a return to CBMM’s very first mission. Before CBMM began, Poggio and a few other MIT scientists had tested the waters with a small, Institute-funded collaboration called the Intelligence Initiative (I^2), which welcomed all types of intelligence research, even business and organizational intelligence. MIT IQ re-opens that broader door. “In practice, we want to build a bigger tent now around the science of intelligence,” DiCarlo says.

For his part, Poggio finds the name particularly apt. “Because it is going to be a long-term quest,” he says. “Remember, if I’m right, this is the greatest problem in science. Understanding the mind is understanding the very tool we use to try to solve every other problem.”

The quest to understand intelligence

McGovern investigators study intelligence to answer a practical question for both educators and computer scientists: can intelligence be improved?

A nine-year-old girl, a contestant on a game show, is standing on stage. On a screen in front of her, there appears a twelve-digit number followed by a six-digit number. Her challenge is to divide the two numbers as fast as possible.

The timer begins. She is racing against three other contestants, two from China and one, like her, from Japan. Whoever answers first wins, but only if the answer is correct.

The show, called “The Brain,” is wildly popular in China, and attracts players who display their memory and concentration skills much the way American athletes demonstrate their physical skills in shows like “American Ninja Warrior.” After a few seconds, the girl slams the timer and gives the correct answer, faster than most people could have entered the numbers on a calculator.

The camera pans to a team of expert judges, including McGovern Director Robert Desimone, who had arrived in Nanjing just a few hours earlier. Desimone shakes his head in disbelief. The task appears to make extraordinary demands on working memory and rapid processing, but the girl explains that she solves it by visualizing an abacus in her mind—something she has practiced intensively.

The show raises an age-old question: What is intelligence, exactly?

The study of intelligence has a long and sometimes contentious history, but recently, neuroscientists have begun to dissect intelligence to understand the neural roots of the distinct cognitive skills that contribute to it. One key question is whether these skills can be improved individually with training and, if so, whether those improvements translate into overall intelligence gains. This research has practical implications for multiple domains, from brain science to education to artificial intelligence.

“The problem of intelligence is one of the great problems in science,” says Tomaso Poggio, a McGovern investigator and an expert on machine learning. “If we make progress in understanding intelligence, and if that helps us make progress in making ourselves smarter or in making machines that help us think better, we can solve all other problems more easily.”

Brain training 101

Many studies have reported positive results from brain training, and there is now a thriving industry devoted to selling tools and games such as Lumosity and BrainHQ. Yet the science behind brain training to improve intelligence remains controversial.

A case in point is the “n-back” working memory task, in which subjects view a rapid sequence of letters or visual patterns and must report whether the current item matches the one presented n items earlier: one back, two back, and so on, with difficulty rising as n increases. The field of brain training received a boost in 2008 when a widely discussed study claimed that a few weeks of training on a challenging version of this task could boost fluid intelligence, the ability to solve novel problems. The report generated excitement and optimism when it first appeared, but several subsequent attempts to reproduce the findings have been unsuccessful.
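To make the task concrete, here is a minimal sketch of the n-back logic in Python. This is a hypothetical illustration, not the stimulus code used in any of these studies:

```python
import random

def score_n_back(stimuli, n):
    """For each position, a 'match' means the item equals the one
    presented n positions earlier; subjects must report these matches."""
    return [i >= n and stimuli[i] == stimuli[i - n] for i in range(len(stimuli))]

# A 2-back block over letter stimuli; harder versions raise n.
random.seed(0)
letters = [random.choice("ABCD") for _ in range(15)]
for letter, is_match in zip(letters, score_n_back(letters, n=2)):
    print(letter, "match" if is_match else "")
```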

Among those unable to confirm the result was McGovern Investigator John Gabrieli, who recruited 60 young adults and trained them forty minutes a day for four weeks on an n-back task similar to that of the original study.

Six months later, Gabrieli re-evaluated the participants. “They got amazingly better at the difficult task they practiced. We have great imaging data showing changes in brain activation as they performed the task from before to after,” says Gabrieli. “And yet, that didn’t help them do better on any other cognitive abilities we could measure, and we measured a lot of things.”

The results don’t completely rule out the value of n-back training, says Gabrieli. It may be more effective in children, or in populations with a lower average intelligence than the individuals (mostly college students) who were recruited for Gabrieli’s study. The prospect that training might help disadvantaged individuals holds strong appeal. “If you could raise the cognitive abilities of a child with autism, or a child who is struggling in school, the data tells us that their life would be a step better,” says Gabrieli. “It’s something you would wish for people, especially for those where something is holding them back from the expression of their other abilities.”

Music for the brain

The concept of early intervention is now being tested by Desimone, who has teamed with Chinese colleagues at the recently-established IDG/McGovern Institute at Beijing Normal University to explore the effect of music training on the cognitive abilities of young children.

The researchers recruited 100 children at a neighborhood kindergarten in Beijing, and provided them with a semester-long intervention, randomly assigning children either to music training or (as a control) to additional reading instruction. Unlike the so-called “Mozart Effect,” a scientifically unsubstantiated claim that passive listening to music increases intelligence, the new study requires active learning through daily practice. Several smaller studies have reported cognitive benefits from music training, and Desimone finds the idea plausible given that musical cognition involves several mental functions that are also implicated in intelligence. The study is nearly complete, and results are expected to emerge within a few months. “We’re also collecting data on brain activity, so if we see improvements in the kids who had music training, we’ll also be able to ask about its neural basis,” says Desimone. The results may also have immediate practical implications, since the study design reflects decisions that schools must make in determining how children spend their time. “Many schools are deciding to cut their arts and music programs to make room for more instruction in academic core subjects, so our study is relevant to real questions schools are facing.”

Intelligent classrooms

In another school-based study, Gabrieli’s group recently raised questions about the benefits of “teaching to the test.” In this study, postdoc Amy Finn evaluated over 1,300 eighth-graders in the Boston public schools, some enrolled at traditional schools and others at charter schools that emphasize improving standardized test scores. The researchers wanted to find out whether raised test scores were accompanied by improvement in cognitive skills that are linked to intelligence. (Charter school students are selected by lottery, meaning that any results are unlikely to reflect preexisting differences between the two groups of students.) As expected, charter school students showed larger improvements in test scores, relative to their scores from four years earlier. But when Finn and her colleagues measured key aspects of intelligence, such as working memory, processing speed, and reasoning, they found no difference between the students who enrolled in charter schools and those who did not. “You can look at these skills as the building blocks of cognition. They are useful for reasoning in a novel situation, an ability that is really important for learning,” says Finn. “It’s surprising that school practices that increase achievement don’t also increase these building blocks.”

Gabrieli remains optimistic that it will eventually be possible to design scientifically based interventions that can raise children’s abilities. Allyson Mackey, a postdoc in his lab, is studying the use of games to exercise cognitive skills in a classroom setting. As a graduate student at the University of California, Berkeley, Mackey had studied the effects of games such as “Chocolate Fix,” in which players match shapes and flavors, represented by color, to positions in a grid based on hints, such as, “the upper left position is strawberry.”

These games gave children practice at thinking through and solving novel problems, and at the end of Mackey’s study, the students—from second through fourth grades—showed improved measures of skills associated with intelligence. “Our results suggest that these cognitive skills are specifically malleable, although we don’t yet know what the active ingredients were in this program,” says Mackey, who speaks of the interventions as if they were drugs, with dosages, efficacies and potentially synergistic combinations to be explored. Mackey is now working to identify the most promising interventions—those that boost cognitive abilities, work well in the classroom, and are engaging for kids—to try in Boston charter schools. “It’s just the beginning of a three-year process to methodically test interventions to see if they work,” she says.

Brain training…for machines

While Desimone, Gabrieli, and their colleagues look for ways to raise human intelligence, Poggio, who directs the MIT-based Center for Brains, Minds and Machines, is trying to endow computers with more human-like intelligence. Computers can already match human performance on some specific tasks such as chess. Programs such as Apple’s “Siri” can mimic human speech interpretation, not perfectly but well enough to be useful. Computer vision programs are approaching human performance at rapid object recognition, and one such system, developed by one of Poggio’s former postdocs, is now being used to assist car drivers. “The last decade has been pretty magical for intelligent computer systems,” says Poggio.

Like children, these intelligent systems learn from past experience. But compared to humans or other animals, machines tend to be very slow learners. For example, the visual system for automobiles was trained by presenting it with millions of images—traffic light, pedestrian, and so on—that had already been labeled by humans. “You would never present so many examples to a child,” says Poggio. “One of our big challenges is to understand how to make algorithms in computers learn with many fewer examples, to make them learn more like children do.”

To accomplish this and other goals of machine intelligence, Poggio suspects that the work being done by Desimone, Gabrieli and others to understand the neural basis of intelligence will be critical. But he is not expecting any single breakthrough that will make everything fall into place. “A century ago,” he says, “scientists pondered the problem of life, as if ‘life’—what we now call biology—were just one problem. The science of intelligence is like biology. It’s a lot of problems, and a lot of breakthroughs will have to come before a machine appears that is as intelligent as we are.”

Study finds early signatures of the social brain

Humans use an ability known as theory of mind every time they make inferences about someone else’s mental state — what the other person believes, what they want, or why they are feeling happy, angry, or scared.

Behavioral studies have suggested that children begin succeeding at a key measure of this ability, known as the false-belief task, around age 4. However, a new study from MIT has found that the brain network that controls theory of mind has already formed in children as young as 3.

The MIT study is the first to use functional magnetic resonance imaging (fMRI) to scan the brains of children as young as age 3 as they perform a task requiring theory of mind — in this case, watching a short animated movie involving social interactions between two characters.

“The brain regions involved in theory-of-mind reasoning are behaving like a cohesive network, with similar responses to the movie, by age 3, which is before kids tend to pass explicit false-belief tasks,” says Hilary Richardson, an MIT graduate student and the lead author of the study.

Rebecca Saxe, an MIT professor of brain and cognitive sciences and an associate member of MIT’s McGovern Institute for Brain Research, is the senior author of the paper, which appears in the March 12 issue of Nature Communications. Other authors are Indiana University graduate student Grace Lisandrelli and Wellesley College undergraduate Alexa Riobueno-Naylor.

Thinking about others

In 2003, Saxe first showed that theory of mind is seated in a brain region known as the right temporo-parietal junction (TPJ). The TPJ coordinates with other regions, including several parts of the prefrontal cortex, to form a network that is active when people think about the mental states of others.

The most commonly used test of theory of mind is the false-belief test, which probes whether the subject understands that other people may have beliefs that are not true. A classic example is the Sally-Anne test, in which a child is asked where Sally will look for a marble that she believes is in her own basket, but that Anne has moved to a different spot while Sally wasn’t looking. To pass, the subject must reply that Sally will look where she thinks the marble is (in her basket), not where it actually is.

Until now, neuroscientists had assumed that theory-of-mind studies involving fMRI brain scans could only be done with children at least 5 years of age, because the children need to be able to lie still in a scanner for about 20 minutes, listen to a series of stories, and answer questions about them.

Richardson wanted to study children younger than that, so that she could delve into what happens in the brain’s theory-of-mind network before the age of 5. To do that, she and Saxe came up with a new experimental protocol, which calls for scanning children while they watch a short movie that includes simple social interactions between two characters.

The animated movie they chose, called “Partly Cloudy,” has a plot that lends itself well to the experiment. It features Gus, a cloud who produces baby animals, and Peck, a stork whose job is to deliver the babies. Gus and Peck have some tense moments in their friendship because Gus produces baby alligators and porcupines, which are difficult to deliver, while other clouds create kittens and puppies. Peck is attacked by some of the fierce baby animals, and he isn’t sure if he wants to keep working for Gus.

“It has events that make you think about the characters’ mental states and events that make you think about their bodily states,” Richardson says.

The researchers spent about four years gathering data from 122 children ranging in age from 3 to 12 years. They scanned the entire brain, focusing on two distinct networks that have been well-characterized in adults: the theory-of-mind network and another network known as the pain matrix, which is active when thinking about another person’s physical state.

They also scanned 33 adults as they watched the movie so that they could identify scenes that provoke responses in either of those two networks. These scenes were dubbed theory-of-mind events and pain events. Scans of children revealed that even in 3-year-olds, the theory-of-mind and pain networks responded preferentially to the same events that the adult brains did.

“We see early signatures of this theory-of-mind network being wired up, so the theory-of-mind brain regions which we studied in adults are already really highly correlated with one another in 3-year-olds,” Richardson says.

The researchers also found that the responses in 3-year-olds were not as strong as in adults but gradually became stronger in the older children they scanned.

Patterns of development

The findings offer support for an existing hypothesis that says children develop theory of mind even before they can pass explicit false-belief tests, and that it continues to develop as they get older. Theory of mind encompasses many abilities, including more difficult skills such as understanding irony and assigning blame, which tend to develop later.

Another hypothesis is that children undergo a fairly sudden development of theory of mind around the age of 4 or 5, reflected by their success in the false-belief test. The MIT data, which do not show any dramatic changes in brain activity when children begin to succeed at the false-belief test, do not support that theory.

“Scientists have focused really intensely on the changes in children’s theory of mind that happen around age 4, when children get a better understanding of how people can have wrong or biased or misinformed beliefs,” Saxe says. “But really important changes in how we think about other minds happen long before, and long after, this famous landmark. Even 2-year-olds try to figure out why different people like different things — this might be why they get so interested in talking about everybody’s favorite colors. And even 9-year-olds are still learning about irony and negligence. Theory of mind seems to undergo a very long continuous developmental process, both in kids’ behaviors and in their brains.”

Now that the researchers have data on the typical trajectory of theory of mind development, they hope to scan the brains of autistic children to see whether there are any differences in how their theory-of-mind networks develop. Saxe’s lab is also studying children whose first exposure to language was delayed, to test the effects of early language on the development of theory of mind.

The research was funded by the National Science Foundation, the National Institutes of Health, and the David and Lucile Packard Foundation.

Back-and-forth exchanges boost children’s brain response to language

A landmark 1995 study found that children from higher-income families hear about 30 million more words during their first three years of life than children from lower-income families. This “30-million-word gap” correlates with significant differences in tests of vocabulary, language development, and reading comprehension.

MIT cognitive scientists have now found that conversation between an adult and a child appears to change the child’s brain, and that this back-and-forth conversation is actually more critical to language development than the word gap. In a study of children between the ages of 4 and 6, they found that differences in the number of “conversational turns” accounted for a large portion of the differences in brain physiology and language skills that they found among the children. This finding applied to children regardless of parental income or education.

The findings suggest that parents can have considerable influence over their children’s language and brain development by simply engaging them in conversation, the researchers say.

“The important thing is not just to talk to your child, but to talk with your child. It’s not just about dumping language into your child’s brain, but to actually carry on a conversation with them,” says Rachel Romeo, a graduate student at Harvard and MIT and the lead author of the paper, which appears in the Feb. 14 online edition of Psychological Science.

Using functional magnetic resonance imaging (fMRI), the researchers identified differences in the brain’s response to language that correlated with the number of conversational turns. In children who experienced more conversation, Broca’s area, a part of the brain involved in speech production and language processing, was much more active while they listened to stories. This brain activation then predicted children’s scores on language assessments, fully explaining the income-related differences in children’s language skills.

“The really novel thing about our paper is that it provides the first evidence that family conversation at home is associated with brain development in children. It’s almost magical how parental conversation appears to influence the biological growth of the brain,” says John Gabrieli, the Grover M. Hermann Professor in Health Sciences and Technology, a professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Beyond the word gap

Before this study, little was known about how the “word gap” might translate into differences in the brain. The MIT team set out to find these differences by comparing the brain scans of children from different socioeconomic backgrounds.

As part of the study, the researchers used a system called Language Environment Analysis (LENA) to record every word spoken or heard by each child. Parents who agreed to have their children participate in the study were told to have their children wear the recorder for two days, from the time they woke up until they went to bed.

The recordings were then analyzed by a computer program that yielded three measurements: the number of words spoken by the child, the number of words spoken to the child, and the number of times that the child and an adult took a “conversational turn” — a back-and-forth exchange initiated by either one.
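LENA’s detection algorithms are proprietary, but a rough sketch conveys the turn-counting idea. In the toy Python below, the data format, speaker labels, and the 5-second window are all assumptions made for illustration, not LENA’s actual method:

```python
# Hypothetical simplified turn counter; LENA's real algorithm is proprietary.
# Each utterance is a (speaker, start_time_in_seconds) pair.
def count_turns(utterances, max_gap=5.0):
    turns = 0
    for (prev_spk, prev_t), (spk, t) in zip(utterances, utterances[1:]):
        # Count a conversational turn whenever the speaker changes
        # within max_gap seconds, regardless of who initiated.
        if spk != prev_spk and (t - prev_t) <= max_gap:
            turns += 1
    return turns

session = [("adult", 0.0), ("child", 2.1), ("adult", 4.0),
           ("adult", 30.0), ("child", 33.5)]
print(count_turns(session))  # -> 3
```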

The researchers found that the number of conversational turns correlated strongly with the children’s scores on standardized tests of language skill, including vocabulary, grammar, and verbal reasoning. The number of conversational turns also correlated with more activity in Broca’s area, when the children listened to stories while inside an fMRI scanner.

These correlations were much stronger than those between the number of words heard and language scores, and between the number of words heard and activity in Broca’s area.

This result aligns with other recent findings, Romeo says, “but there’s still a popular notion that there’s this 30-million-word gap, and we need to dump words into these kids — just talk to them all day long, or maybe sit them in front of a TV that will talk to them. However, the brain data show that it really seems to be this interactive dialogue that is more strongly related to neural processing.”

The researchers believe interactive conversation gives children more of an opportunity to practice their communication skills, including the ability to understand what another person is trying to say and to respond in an appropriate way.

While children from higher-income families were exposed to more language on average, children from lower-income families who experienced a high number of conversational turns had language skills and Broca’s area brain activity similar to those of children who came from higher-income families.

“In our analysis, the conversational turn-taking seems like the thing that makes a difference, regardless of socioeconomic status. Such turn-taking occurs more often in families from a higher socioeconomic status, but children coming from families with lesser income or parental education showed the same benefits from conversational turn-taking,” Gabrieli says.

Taking action

The researchers hope their findings will encourage parents to engage their young children in more conversation. Although this study was done in children ages 4 to 6, this type of turn-taking can be practiced with much younger children too, by making sounds or faces back and forth, they say.

“One of the things we’re excited about is that it feels like a relatively actionable thing because it’s specific. That doesn’t mean it’s easy for less educated families, under greater economic stress, to have more conversation with their child. But at the same time, it’s a targeted, specific action, and there may be ways to promote or encourage that,” Gabrieli says.

Roberta Golinkoff, a professor of education at the University of Delaware School of Education, says the new study presents an important finding that adds to the evidence that it’s not just the number of words children hear that is significant for their language development.

“You can talk to a child until you’re blue in the face, but if you’re not engaging with the child and having a conversational duet about what the child is interested in, you’re not going to give the child the language processing skills that they need,” says Golinkoff, who was not involved in the study. “If you can get the child to participate, not just listen, that will allow the child to have a better language outcome.”

The MIT researchers now hope to study the effects of possible interventions that incorporate more conversation into young children’s lives. These could include technological assistance, such as computer programs that can converse or electronic reminders to parents to engage their children in conversation.

The research was funded by the Walton Family Foundation, the National Institute of Child Health and Human Development, a Harvard Mind Brain Behavior Grant, and a gift from David Pun Chan.

How badly do you want something? Babies can tell

Babies as young as 10 months can assess how much someone values a particular goal by observing how hard they are willing to work to achieve it, according to a new study from MIT and Harvard University.

This ability requires integrating information about both the costs of obtaining a goal and the benefit gained by the person seeking it, suggesting that babies acquire very early an intuition about how people make decisions.

“Infants are far from experiencing the world as a ‘blooming, buzzing confusion,’” says lead author Shari Liu, referring to a description by philosopher and psychologist William James about a baby’s first experience of the world. “They interpret people’s actions in terms of hidden variables, including the effort [people] expend in producing those actions, and also the value of the goals those actions achieve.”

“This study is an important step in trying to understand the roots of common-sense understanding of other people’s actions. It shows quite strikingly that in some sense, the basic math that is at the heart of how economists think about rational choice is very intuitive to babies who don’t know math, don’t speak, and can barely understand a few words,” says Josh Tenenbaum, a professor in MIT’s Department of Brain and Cognitive Sciences, a core member of the joint MIT-Harvard Center for Brains, Minds and Machines (CBMM), and one of the paper’s authors.

Tenenbaum helped to direct the research team along with Elizabeth Spelke, a professor of psychology at Harvard University and CBMM core member, in whose lab the research was conducted. Liu, the paper’s lead author, is a graduate student at Harvard. CBMM postdoc Tomer Ullman is also an author of the paper, which appears in the Nov. 23 online edition of Science.

Calculating value

Previous research has shown that adults and older children can infer someone’s motivations by observing how much effort that person exerts toward obtaining a goal.

The Harvard/MIT team wanted to learn more about how and when this ability develops. Babies expect people to be consistent in their preferences and to be efficient in how they achieve their goals, previous studies have found. The question posed in this study was whether babies can combine what they know about a person’s goal and the effort required to obtain it, to calculate the value of that goal.

To answer that question, the researchers showed 10-month-old infants animated videos in which an “agent,” a cartoon character shaped like a bouncing ball, tries to reach a certain goal (another cartoon character). In one of the videos, the agent has to leap over walls of varying height to reach the goal. First, the babies saw the agent jump over a low wall and then refuse to jump over a medium-height wall. Next, the agent jumped over the medium-height wall to reach a different goal, but refused to jump over a high wall to reach that goal.

The babies were then shown a scene in which the agent could choose between the two goals, with no obstacles in the way. An adult or older child would assume the agent would choose the second goal, because the agent had worked harder to reach that goal in the video seen earlier. The researchers found that 10-month-olds also reached this conclusion: When the agent was shown choosing the first goal, infants looked at the scene longer, indicating that they were surprised by that outcome. (Length of looking time is commonly used to measure surprise in studies of infants.)

The researchers found the same results when babies watched the agent perform the same set of actions with two other types of effort: climbing ramps of varying incline and jumping across gaps of varying width.

“Across our experiments, we found that babies looked longer when the agent chose the thing it had exerted less effort for, showing that they infer the amount of value that agents place on goals from the amount of effort that they take toward these goals,” Liu says.

The findings suggest that infants are able to calculate how much another person values something based on how much effort they put into getting it.

“This paper is not the first to suggest that idea, but its novelty is that it shows this is true in much younger babies than anyone has seen. These are preverbal babies, who themselves are not actively doing very much, yet they appear to understand other people’s actions in this sophisticated, quantitative way,” says Tenenbaum, who is also affiliated with MIT’s Computer Science and Artificial Intelligence Laboratory.

Studies of infants can reveal deep commonalities in the ways that we think throughout our lives, suggests Spelke. “Abstract, interrelated concepts like cost and value — concepts at the center both of our intuitive psychology and of utility theory in philosophy and economics — may originate in an early-emerging system by which infants understand other people’s actions,” she says.

The study shows, for the first time, that “preverbal infants can look at the world like economists,” says Gergely Csibra, a professor of cognitive science at Central European University in Hungary. “They do not simply calculate the costs and benefits of others’ actions (this had been demonstrated before), but relate these terms onto each other. In other words, they apply the well-known logic that all of us rely on when we try to assess someone’s preferences: The harder she tries to achieve something, the more valuable is the expected reward to her when she succeeds.”

Modeling intelligence

Over the past 10 years, scientists have developed computer models that come close to replicating how adults and older children incorporate different types of input to infer other people’s goals, intentions, and beliefs. For this study, the researchers built on that work, especially work by Julian Jara-Ettinger PhD ’16, who studied similar questions in preschool-age children. The researchers developed a computer model that can predict what 10-month-old babies would infer about an agent’s goals after observing the agent’s actions. This new model also posits an ability to calculate “work” (or total force applied over a distance) as a measure of the cost of actions, which the researchers believe babies are able to do on some intuitive level.
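To make that cost-benefit logic concrete, here is a minimal sketch, under invented assumptions rather than the published model: treat the physical work of a jump (mass times gravity times wall height) as the cost of an action, and bound each goal’s value by the costs the agent accepted and refused.

```python
# A minimal sketch of inferring value from effort, assuming work against
# gravity (W = m * g * h) as the cost of a jump. Heights, the unit mass,
# and the bounding rule are illustrative, not the published model.

G = 9.8       # gravitational acceleration (m/s^2)
MASS = 1.0    # agent's mass (arbitrary units)

def jump_cost(height_m):
    """Work needed to clear a wall of the given height."""
    return MASS * G * height_m

# Observations from the videos: (wall height in meters, did the agent jump?)
goal_A = [(0.2, True), (0.5, False)]   # jumped the low wall, refused the medium wall
goal_B = [(0.5, True), (1.0, False)]   # jumped the medium wall, refused the high wall

def value_bounds(observations):
    """A goal's value is at least the largest accepted cost
    and below the smallest refused cost."""
    accepted = [jump_cost(h) for h, jumped in observations if jumped]
    refused = [jump_cost(h) for h, jumped in observations if not jumped]
    return max(accepted), min(refused)

lo_A, hi_A = value_bounds(goal_A)
lo_B, hi_B = value_bounds(goal_B)
print(f"value of goal A lies in [{lo_A:.1f}, {hi_A:.1f})")   # [2.0, 4.9)
print(f"value of goal B lies in [{lo_B:.1f}, {hi_B:.1f})")   # [4.9, 9.8)
# B's lower bound meets A's upper bound, so goal B must be valued more,
# which is the inference the infants appear to make.
```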

“Babies of this age seem to understand basic ideas of Newtonian mechanics, before they can talk and before they can count,” Tenenbaum says. “They’re putting together an understanding of forces, including things like gravity, and they also have some understanding of the usefulness of a goal to another person.”

Building this type of model is an important step toward developing artificial intelligence that replicates human behavior more accurately, the researchers say.

“We have to recognize that we’re very far from building AI systems that have anything like the common sense even of a 10-month-old,” Tenenbaum says. “But if we can understand in engineering terms the intuitive theories that even these young infants seem to have, that hopefully would be the basis for building machines that have more human-like intelligence.”

Still unanswered are the questions of exactly how and when these intuitive abilities arise in babies.

“Do infants start with a completely blank slate, and somehow they’re able to build up this sophisticated machinery? Or do they start with some rudimentary understanding of goals and beliefs, and then build up the sophisticated machinery? Or is it all just built in?” Ullman says.

The researchers hope that studies of even younger babies, perhaps as young as 3 months old, and computational models of learning intuitive theories that the team is also developing, may help to shed light on these questions.

This project was funded by the National Science Foundation through the Center for Brains, Minds, and Machines, which is based at MIT’s McGovern Institute for Brain Research and led by MIT and Harvard.

A sense of timing

The ability to measure time and to control the timing of actions is critical for almost every aspect of behavior. Yet the mechanisms by which our brains process time are still largely mysterious.

We experience time on many different scales—from milliseconds to years—but of particular interest is the middle range, the scale of seconds over which we perceive time directly, and over which many of our actions and thoughts unfold.

“We speak of a sense of time, yet unlike our other senses there is no sensory organ for time,” says McGovern Investigator Mehrdad Jazayeri. “It seems to come entirely from within. So if we understand time, we should be getting close to understanding mental processes.”

Singing in the brain

Emily Mackevicius comes to work in the early morning because that’s when her birds are most likely to sing. A graduate student in the lab of McGovern Investigator Michale Fee, she is studying zebra finches, songbirds that learn to sing by copying their fathers. Bird song involves a complex and precisely timed set of movements, and Mackevicius, who plays the cello in her spare time, likens it to musical performance. “With every phrase, you have to learn a sequence of finger movements and bowing movements, and put it all together with exact timing. The birds are doing something very similar with their vocal muscles.”

A typical zebra finch song lasts about one second, and consists of several syllables, produced at a rate similar to the syllables in human speech. Each song syllable involves a precisely timed sequence of muscle commands, and understanding how the bird’s brain generates this sequence is a central goal for Fee’s lab. Birds learn their songs naturally, without any need for explicit training, making birdsong an ideal model for understanding the complex action sequences that represent the fundamental “building blocks” of behavior.

Some years ago Fee and colleagues made a surprising discovery that has shaped their thinking ever since. Within a part of the bird brain called HVC, they found neurons that fire a single short burst of pulses at exactly the same point on every repetition of the song. Each burst lasts about a hundredth of a second, and different neurons fire at different times within the song. With about 20,000 neurons in HVC, it was easy to imagine that there would be specific neurons active at every point in the song, meaning that each time point could be represented by the activity of a handful of individual neurons.

Proving this was not easy—“we had to wait about ten years for the technology to catch up,” says Fee—but they finally succeeded last year, when students Tatsuo Okubo and Galen Lynch analyzed recordings from hundreds of individual HVC neurons, and found that they do indeed fire in a fixed sequence, covering the entire song period.

“We think it’s like a row of falling dominoes,” says Fee. “The neurons are connected to each other so that when one fires it triggers the next one in the chain.” It’s an appealing model, because it’s easy to see how a chain of activity could control complex action sequences, simply by connecting individual time-stamp neurons to downstream motor neurons. With the correct connections, each movement is triggered at the right time in the sequence. Fee believes these motor connections are learned through trial and error—like babies babbling as they learn to speak—and a separate project in his lab aims to understand how this learning occurs.

But the domino metaphor also raises another question: who sets up the dominoes in the first place? Mackevicius and Okubo, along with summer student Hannah Payne, set out to answer this question, asking how HVC becomes wired to produce these precisely timed chain reactions.

Mackevicius, who studied math as an undergraduate before turning to neuroscience, developed computer simulations of the HVC neuronal network, and Okubo ran experiments to test the predictions, recording from young birds at different stages in the learning process. “We found that setting up a chain is surprisingly easy,” says Mackevicius. “If we start with a randomly connected network, and some realistic assumptions about the ‘plasticity rules’ by which synapses change with repeated use, these chains emerge spontaneously. All you need is to give them a push—like knocking over the first domino.”
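The flavor of that result can be sketched in a few dozen lines. The toy model below is not the lab’s published simulation: it assumes winner-take-all bursting, a bare-bones Hebbian rule, and a cap on each neuron’s total incoming synaptic weight, with every parameter invented for illustration.

```python
# A toy sketch (not the lab's published model) of chain formation in a
# randomly connected network under a Hebbian rule with synaptic competition.
import numpy as np

rng = np.random.default_rng(0)

N = 60          # neurons in the toy network
K = 3           # neurons bursting at each time step (sparse activity)
T = 15          # time steps per "song" rendition
TRIALS = 400    # repeated renditions
ETA = 0.05      # learning rate
CAP = 2.0       # cap on each neuron's summed incoming weight

W = rng.uniform(0.0, 0.1, size=(N, N))   # W[i, j]: weight from neuron j to i
np.fill_diagonal(W, 0.0)
seed = np.arange(K)                      # the "first domino", driven externally

for trial in range(TRIALS):
    prev = seed.copy()
    fired = set(seed.tolist())           # each neuron bursts once per rendition
    for t in range(1, T):
        drive = W[:, prev].sum(axis=1)   # input from the just-active group
        drive[list(fired)] = -np.inf     # crude refractoriness
        nxt = np.argsort(drive)[-K:]     # the K most strongly driven cells burst
        for j in prev:                   # Hebbian step: wire prev -> next
            W[nxt, j] += ETA
        totals = W.sum(axis=1, keepdims=True)   # heterosynaptic competition:
        W *= np.minimum(1.0, CAP / np.maximum(totals, 1e-9))
        fired.update(nxt.tolist())
        prev = nxt

# After training, "knocking over the first domino" replays a fixed sequence:
prev, fired, order = seed.copy(), set(seed.tolist()), [sorted(seed.tolist())]
for t in range(1, T):
    drive = W[:, prev].sum(axis=1)
    drive[list(fired)] = -np.inf
    prev = np.argsort(drive)[-K:]
    fired.update(prev.tolist())
    order.append(sorted(prev.tolist()))
print(order)   # groups of neurons firing in a stereotyped order
```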

Their results also suggested how a young bird learns to produce different syllables, as it progresses from repetitive babbling to a more adult-like song. “At first, there’s just one big burst of neural activity, but as the song becomes more complex, the activity gradually spreads out in time and splits into different sequences, each controlling a different syllable. It’s as if you started with lots of dominoes all clumped together, and then gradually they became sorted into different rows.”

Does something similar happen in the human brain? “It seems very likely,” says Fee. “Many of our movements are precisely timed—think about speaking a sentence or playing a musical instrument or delivering a tennis serve. Even our thoughts often happen in sequences. Things happen faster in birds than in mammals, but we suspect the underlying mechanisms will be very similar.”

Speed control

One floor above the Fee lab, Mehrdad Jazayeri is also studying how time controls actions, using humans and monkeys rather than birds. Like Fee, Jazayeri comes from an engineering background, and his goal is to understand, with an engineer’s level of detail, how we perceive time and use it flexibly to control our actions.

To begin to answer this question, Jazayeri trained monkeys to remember time intervals of a few seconds or less, and to reproduce them by pressing a button or making an eye movement at the correct time after a visual cue appeared on a screen. He then recorded brain activity as the monkeys performed this task, to find out how the brain measures elapsed time. “There were two prominent ideas in the field,” he explains. “One idea was that there is an internal clock, and that the brain can somehow count the accumulating ticks. Another class of models had proposed that there are multiple oscillators that come in and out of phase at different times.”

When they examined the recordings, however, the results did not fit either model. Despite searching across multiple brain areas, Jazayeri and his colleagues found no sign of ticking or oscillations. Instead, their recordings revealed complex patterns of activity, distributed across populations of neurons; moreover, as the monkey produced longer or shorter intervals, these activity patterns were stretched or compressed in time, to fit the overall duration of each interval. In other words, says Jazayeri, the brain circuits were able to adjust the speed with which neural signals evolve over time. He compares it to a group of musicians performing a complex piece of music. “Each player has their own part, which they can play faster or slower depending on the overall tempo of the music.”
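The stretching idea is easy to illustrate: hold the shape of the activity pattern fixed as a function of normalized time, and vary only the speed at which it is traversed. The pattern below is arbitrary and purely illustrative; real data involve whole populations of neurons.

```python
# A toy illustration of temporal scaling: one fixed activity pattern,
# traversed at different speeds to span different intervals.
import numpy as np

def pattern(phase):
    """Arbitrary firing-rate profile over normalized time in [0, 1]."""
    return np.sin(2 * np.pi * phase) ** 2

for duration in (0.5, 1.0, 2.0):            # interval lengths in seconds
    t = np.linspace(0.0, duration, 6)       # real time within the interval
    print(duration, np.round(pattern(t / duration), 2))
# Every row prints the same rates: the pattern is identical, and only the
# speed at which the network moves through it changes.
```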

Ready-set-go

Jazayeri is also using time as a window onto a broader question—how our perceptions and decisions are shaped by past experience. “It’s one of the great questions in neuroscience, but it’s not easy to study. One of the great advantages of studying timing is that it’s easy to measure precisely, so we can frame our questions in precise mathematical ways.”

The starting point for this work was a deceptively simple task, which Jazayeri calls “Ready-Set-Go.” In this task, the subject is given the first two beats of a regular rhythm (“Ready, Set”) and must then generate the third beat (“Go”) at the correct time. To perform this task, the brain must measure the duration between Ready and Set and then immediately reproduce it.

Humans can do this fairly accurately, but not perfectly—their response times are imprecise, presumably because there is some “noise” in the neural signals that convey timing information within the brain. In the face of this uncertainty, the optimal strategy (known mathematically as Bayesian inference) is to bias time estimates toward prior expectations, and this is exactly what happened in Jazayeri’s experiments. If the intervals in previous trials were shorter, people tended to underestimate the next interval; if they were longer, people tended to overestimate it. In other words, people use their memory to improve their time estimates.
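In simplified form, this bias is a precision-weighted average of the noisy measurement and the prior. The sketch below assumes, purely for illustration, Gaussian noise and a Gaussian prior over intervals; the numbers are invented, not taken from the experiments.

```python
# A simplified sketch of the Bayesian bias, assuming Gaussian measurement
# noise and a Gaussian prior over intervals (both assumptions illustrative).
PRIOR_MEAN, PRIOR_SD = 0.85, 0.15   # seconds; learned from earlier trials
NOISE_SD = 0.10                     # noise in the measured interval

def estimate(measured):
    """Posterior mean: a precision-weighted blend of measurement and prior."""
    w = PRIOR_SD**2 / (PRIOR_SD**2 + NOISE_SD**2)   # weight on the measurement
    return w * measured + (1 - w) * PRIOR_MEAN

for sample in (0.60, 0.85, 1.10):
    print(f"{sample:.2f}s sample -> {estimate(sample):.2f}s estimate")
# Short samples are reproduced too long and long samples too short:
# estimates regress toward the prior mean, the bias seen in the task.
```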

Monkeys can also learn this task and show similar biases, providing an opportunity to study how the brain establishes and stores these prior expectations, and how these expectations influence subsequent behavior. Again, Jazayeri and colleagues recorded from large numbers of neurons during the task. The resulting activity patterns are complex and not easily described in words, but in mathematical terms the activity forms a geometric structure known as a manifold. “Think of it as a curved surface, analogous to a cylinder,” he says. “In the past, people could not see it because they could only record from one or a few neurons at a time. We have to measure activity across large numbers of neurons simultaneously if we want to understand the workings of the system.”

Computing time

To interpret their data, Jazayeri and his team often turn to computer models based on artificial neural networks. “These models are a powerful tool in our work because we can fully reverse-engineer them and gain insight into the underlying mechanisms,” he explains. His lab has now succeeded in training a recurrent neural network that can perform the Ready-Set-Go task, and they have found that the model develops a manifold similar to the real brain data. This has led to the intriguing conjecture that memory of past experiences can be embedded in the structure of the manifold.
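The article does not specify the network’s architecture or training setup, but the general recipe can be sketched with an off-the-shelf recurrent network on synthetic Ready-Set-Go trials. Everything below (PyTorch, a vanilla RNN, pulse inputs, a ramping target) is an assumption for illustration, not the lab’s implementation.

```python
# A rough sketch of training a recurrent network on a Ready-Set-Go-like
# task: pulses mark Ready and Set, and the target is a ramp that peaks at
# the correct Go time. All design choices here are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)
STEPS = 60   # 1.5 s of simulated time at 25 ms resolution

def make_trial(ts):
    """Input: Ready pulse at t=0, Set pulse at t=ts. Target: ramp ending at Go."""
    x = torch.zeros(STEPS, 1)
    x[0, 0] = 1.0
    x[ts, 0] = 1.0
    y = torch.zeros(STEPS, 1)
    go = min(2 * ts, STEPS - 1)                  # Go falls one interval after Set
    y[ts:go + 1, 0] = torch.linspace(0, 1, go - ts + 1)
    return x, y

class Timer(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.RNN(1, hidden, nonlinearity="tanh")
        self.out = nn.Linear(hidden, 1)
    def forward(self, x):
        h, _ = self.rnn(x.unsqueeze(1))          # (steps, batch=1, hidden)
        return self.out(h).squeeze(1)

model = Timer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    ts = int(torch.randint(10, 25, (1,)))        # sample interval, in time steps
    x, y = make_trial(ts)
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
# Inspecting the hidden states h across many trials is where structure
# like the manifold described above would be sought.
```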

“We haven’t connected all the dots,” Jazayeri says, “but I suspect that many questions about brain and behavior will find their answers in the geometry and dynamics of neural activity.” His long-term ambition is to develop predictive models of brain function. As an analogy, he says, think of a pendulum. “If we know its current state—its position and speed—we can predict with complete confidence what it will do next, and how it will respond to a perturbation. We don’t have anything like that for the brain—nobody has been able to do that, not even for the simplest brain functions. But that’s where we’d eventually like to be.”
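The pendulum’s predictability is easy to demonstrate in code: given the current state (angle and angular velocity), plain textbook physics rolls the system forward. The sketch below is just that textbook simulation, nothing brain-specific.

```python
# The pendulum analogy in code: knowing the state (angle, angular velocity)
# lets you roll the system forward deterministically.
import math

G, L, DT = 9.8, 1.0, 0.001   # gravity (m/s^2), pendulum length (m), time step (s)

def step(theta, omega):
    """One Euler step of the pendulum's equation of motion."""
    alpha = -(G / L) * math.sin(theta)   # angular acceleration
    return theta + omega * DT, omega + alpha * DT

theta, omega = 0.3, 0.0      # current state: position and speed
for _ in range(2000):        # predict the next two seconds
    theta, omega = step(theta, omega)
print(f"angle after 2 s: {theta:.3f} rad")
# Nothing comparable yet exists for even the simplest brain functions.
```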

A clock within the brain?

It is not yet clear how the mechanisms studied by Fee and Jazayeri are related. “We talk together often, but we are still guessing how the pieces fit together,” says Fee. But one thing they both agree on is the lack of evidence for any central clock within the brain. “Most people have this intuitive feeling that time is a unitary thing, and that there must be some central clock inside our head, coordinating everything like the conductor of the orchestra or the clock inside your computer,” says Jazayeri. “Even many experts in the field believe this, but we don’t think it’s right.” Rather, his work and Fee’s both point to the existence of separate circuits for different time-related behaviors, such as singing. If there is no clock, how do the different systems work together to create our apparently seamless perception of time? “It’s still a big mystery,” says Jazayeri. “Questions like that are what make neuroscience so interesting.”