How badly do you want something? Babies can tell

Babies as young as 10 months can assess how much someone values a particular goal by observing how hard they are willing to work to achieve it, according to a new study from MIT and Harvard University.

This ability requires integrating information about both the costs of obtaining a goal and the benefit gained by the person seeking it, suggesting that babies acquire very early an intuition about how people make decisions.

“Infants are far from experiencing the world as a ‘blooming, buzzing confusion,’” says lead author Shari Liu, referring to a description by philosopher and psychologist William James about a baby’s first experience of the world. “They interpret people’s actions in terms of hidden variables, including the effort [people] expend in producing those actions, and also the value of the goals those actions achieve.”

“This study is an important step in trying to understand the roots of common-sense understanding of other people’s actions. It shows quite strikingly that in some sense, the basic math that is at the heart of how economists think about rational choice is very intuitive to babies who don’t know math, don’t speak, and can barely understand a few words,” says Josh Tenenbaum, a professor in MIT’s Department of Brain and Cognitive Sciences, a core member of the joint MIT-Harvard Center for Brains, Minds and Machines (CBMM), and one of the paper’s authors.

Tenenbaum helped to direct the research team along with Elizabeth Spelke, a professor of psychology at Harvard University and CBMM core member, in whose lab the research was conducted. Liu, the paper’s lead author, is a graduate student at Harvard. CBMM postdoc Tomer Ullman is also an author of the paper, which appears in the Nov. 23 online edition of Science.

Calculating value

Previous research has shown that adults and older children can infer someone’s motivations by observing how much effort that person exerts toward obtaining a goal.

The Harvard/MIT team wanted to learn more about how and when this ability develops. Babies expect people to be consistent in their preferences and to be efficient in how they achieve their goals, previous studies have found. The question posed in this study was whether babies can combine what they know about a person’s goal and the effort required to obtain it, to calculate the value of that goal.

To answer that question, the researchers showed 10-month-old infants animated videos in which an “agent,” a cartoon character shaped like a bouncing ball, tries to reach a certain goal (another cartoon character). In one of the videos, the agent has to leap over walls of varying height to reach the goal. First, the babies saw the agent jump over a low wall and then refuse to jump over a medium-height wall. Next, the agent jumped over the medium-height wall to reach a different goal, but refused to jump over a high wall to reach that goal.

The babies were then shown a scene in which the agent could choose between the two goals, with no obstacles in the way. An adult or older child would assume the agent would choose the second goal, because the agent had worked harder to reach that goal in the video seen earlier. The researchers found that 10-month-olds also reached this conclusion: When the agent was shown choosing the first goal, infants looked at the scene longer, indicating that they were surprised by that outcome. (Length of looking time is commonly used to measure surprise in studies of infants.)

The researchers found the same results when babies watched the agents perform the same set of actions with two different types of effort: climbing ramps of varying incline and jumping across gaps of varying width.

“Across our experiments, we found that babies looked longer when the agent chose the thing it had exerted less effort for, showing that they infer the amount of value that agents place on goals from the amount of effort that they take toward these goals,” Liu says.

The findings suggest that infants are able to calculate how much another person values something based on how much effort they put into getting it.

“This paper is not the first to suggest that idea, but its novelty is that it shows this is true in much younger babies than anyone has seen. These are preverbal babies, who themselves are not actively doing very much, yet they appear to understand other people’s actions in this sophisticated, quantitative way,” says Tenenbaum, who is also affiliated with MIT’s Computer Science and Artificial Intelligence Laboratory.

Studies of infants can reveal deep commonalities in the ways that we think throughout our lives, suggests Spelke. “Abstract, interrelated concepts like cost and value — concepts at the center both of our intuitive psychology and of utility theory in philosophy and economics — may originate in an early-emerging system by which infants understand other people’s actions,” she says.

The study shows, for the first time, that “preverbal infants can look at the world like economists,” says Gergely Csibra, a professor of cognitive science at Central European University in Hungary. “They do not simply calculate the costs and benefits of others’ actions (this had been demonstrated before), but relate these terms onto each other. In other words, they apply the well-known logic that all of us rely on when we try to assess someone’s preferences: The harder she tries to achieve something, the more valuable is the expected reward to her when she succeeds.”

Modeling intelligence

Over the past 10 years, scientists have developed computer models that come close to replicating how adults and older children incorporate different types of input to infer other people’s goals, intentions, and beliefs. For this study, the researchers built on that work, especially work by Julian Jara-Ettinger PhD ’16, who studied similar questions in preschool-age children. The researchers developed a computer model that can predict what 10-month-old babies would infer about an agent’s goals after observing the agent’s actions. This new model also posits an ability to calculate “work” (or total force applied over a distance) as a measure of the cost of actions, which the researchers believe babies are able to do on some intuitive level.
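The study's published model is more elaborate, but the core cost-benefit inference can be illustrated with a toy sketch. The snippet below is a hypothetical simplification, not the authors' code: it assumes an agent acts only when a goal's value exceeds the cost of the action (softened by decision noise), and inverts that rule to ask, given which efforts were accepted and refused, how much each goal must be worth. All numbers are invented for illustration.

```python
import numpy as np

def likelihood_of_choice(value, cost, accepted, beta=5.0):
    """Probability of an observed accept/refuse decision, under a softened
    version of the rule 'act only if the goal's value exceeds the cost'."""
    p_accept = 1.0 / (1.0 + np.exp(-beta * (value - cost)))
    return p_accept if accepted else 1.0 - p_accept

def posterior_over_value(observations, value_grid):
    """Combine independent accept/refuse observations into a posterior over
    the hidden value of a goal, starting from a flat prior."""
    post = np.ones_like(value_grid)
    for cost, accepted in observations:
        post *= np.array([likelihood_of_choice(v, cost, accepted)
                          for v in value_grid])
    return post / post.sum()

values = np.linspace(0.0, 3.0, 301)   # hypothetical range of goal values

# Goal A: the agent cleared a low wall (cost 1) but refused a medium wall (cost 2).
# Goal B: the agent cleared the medium wall (cost 2) but refused a high wall (cost 3).
post_A = posterior_over_value([(1.0, True), (2.0, False)], values)
post_B = posterior_over_value([(2.0, True), (3.0, False)], values)

print("expected value of goal A:", round(float(np.sum(values * post_A)), 2))
print("expected value of goal B:", round(float(np.sum(values * post_B)), 2))
# Goal B's posterior mean is higher, so an observer reasoning this way should
# expect the agent to choose goal B when both goals are freely available.
```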

“Babies of this age seem to understand basic ideas of Newtonian mechanics, before they can talk and before they can count,” Tenenbaum says. “They’re putting together an understanding of forces, including things like gravity, and they also have some understanding of the usefulness of a goal to another person.”

Building this type of model is an important step toward developing artificial intelligence that replicates human behavior more accurately, the researchers say.

“We have to recognize that we’re very far from building AI systems that have anything like the common sense even of a 10-month-old,” Tenenbaum says. “But if we can understand in engineering terms the intuitive theories that even these young infants seem to have, that hopefully would be the basis for building machines that have more human-like intelligence.”

Still unanswered are the questions of exactly how and when these intuitive abilities arise in babies.

“Do infants start with a completely blank slate, and somehow they’re able to build up this sophisticated machinery? Or do they start with some rudimentary understanding of goals and beliefs, and then build up the sophisticated machinery? Or is it all just built in?” Ullman says.

The researchers hope that studies of even younger babies, perhaps as young as 3 months old, together with computational models of how intuitive theories are learned, which the team is also developing, may help shed light on these questions.

This project was funded by the National Science Foundation through the Center for Brains, Minds, and Machines, which is based at MIT’s McGovern Institute for Brain Research and led by MIT and Harvard.

A sense of timing

The ability to measure time and to control the timing of actions is critical for almost every aspect of behavior. Yet the mechanisms by which our brains process time are still largely mysterious.

We experience time on many different scales—from milliseconds to years—but of particular interest is the middle range, the scale of seconds over which we perceive time directly, and over which many of our actions and thoughts unfold.

“We speak of a sense of time, yet unlike our other senses there is no sensory organ for time,” says McGovern Investigator Mehrdad Jazayeri. “It seems to come entirely from within. So if we understand time, we should be getting close to understanding mental processes.”

Singing in the brain

Emily Mackevicius comes to work in the early morning because that’s when her birds are most likely to sing. A graduate student in the lab of McGovern Investigator Michale Fee, she is studying zebra finches, songbirds that learn to sing by copying their fathers. Bird song involves a complex and precisely timed set of movements, and Mackevicius, who plays the cello in her spare time, likens it to musical performance. “With every phrase, you have to learn a sequence of finger movements and bowing movements, and put it all together with exact timing. The birds are doing something very similar with their vocal muscles.”

A typical zebra finch song lasts about one second, and consists of several syllables, produced at a rate similar to the syllables in human speech. Each song syllable involves a precisely timed sequence of muscle commands, and understanding how the bird’s brain generates this sequence is a central goal for Fee’s lab. Birds learn it naturally without any need for training, making it an ideal model for understanding the complex action sequences that represent the fundamental “building blocks” of behavior.

Some years ago Fee and colleagues made a surprising discovery that has shaped their thinking ever since. Within a part of the bird brain called HVC, they found neurons that fire a single short burst of pulses at exactly the same point on every repetition of the song. Each burst lasts about a hundredth of a second, and different neurons fire at different times within the song. With about 20,000 neurons in HVC, it was easy to imagine that there would be specific neurons active at every point in the song, meaning that each time point could be represented by the activity of a handful of individual neurons.

Proving this was not easy—“we had to wait about ten years for the technology to catch up,” says Fee—but they finally succeeded last year, when students Tatsuo Okubo and Galen Lynch analyzed recordings from hundreds of individual HVC neurons, and found that they do indeed fire in a fixed sequence, covering the entire song period.

“We think it’s like a row of falling dominoes,” says Fee. “The neurons are connected to each other so that when one fires it triggers the next one in the chain.” It’s an appealing model, because it’s easy to see how a chain of activity could control complex action sequences, simply by connecting individual time-stamp neurons to downstream motor neurons. With the correct connections, each movement is triggered at the right time in the sequence. Fee believes these motor connections are learned through trial and error—like babies babbling as they learn to speak—and a separate project in his lab aims to understand how this learning occurs.
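The "row of falling dominoes" picture can be made concrete in a few lines of code. The sketch below is a deliberate cartoon rather than the Fee lab's model: binary neurons, one chain link per time step, and made-up motor wiring, just to show how connecting time-stamp neurons to downstream motor units fixes when each movement occurs.

```python
import numpy as np

N_CHAIN, N_MOTOR, T = 10, 3, 12

# Chain connectivity: neuron i excites neuron i + 1 (the next "domino").
W_chain = np.zeros((N_CHAIN, N_CHAIN))
for i in range(N_CHAIN - 1):
    W_chain[i + 1, i] = 1.0

# Hypothetical learned motor wiring: motor unit 0 listens to chain neuron 2,
# unit 1 to neuron 5, unit 2 to neuron 8, so each movement has a fixed time.
W_motor = np.zeros((N_MOTOR, N_CHAIN))
W_motor[0, 2] = W_motor[1, 5] = W_motor[2, 8] = 1.0

activity = np.zeros(N_CHAIN)
activity[0] = 1.0                                 # knock over the first domino

for t in range(T):
    motor = W_motor @ activity                    # movements triggered right now
    print(f"t={t:2d}  active chain neurons {np.flatnonzero(activity)}"
          f"  motor units {np.flatnonzero(motor)}")
    activity = (W_chain @ activity > 0.5).astype(float)   # pass the burst along
```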

But the domino metaphor also raises another question: who sets up the dominoes in the first place? Mackevicius and Okubo, along with summer student Hannah Payne, set out to answer this question, asking how HVC becomes wired to produce these precisely timed chain reactions.

Mackevicius, who studied math as an undergraduate before turning to neuroscience, developed computer simulations of the HVC neuronal network, and Okubo ran experiments to test the predictions, recording from young birds at different stages in the learning process. “We found that setting up a chain is surprisingly easy,” says Mackevicius. “If we start with a randomly connected network, and some realistic assumptions about the ‘plasticity rules’ by which synapses change with repeated use, we found that these chains emerge spontaneously. All you need is to give them a push—like knocking over the first domino.”
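A toy version of that idea, under loudly simplified assumptions (deterministic winner-take-all recruitment, a Hebbian strengthening step followed by normalization of each neuron's total input to mimic synaptic competition, invented sizes and learning rates), shows how repeated practice can carve a chain out of an initially random network:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 12
W = rng.uniform(0.0, 0.1, size=(N, N))            # weak random connectivity
np.fill_diagonal(W, 0.0)

for rendition in range(200):                       # repeated "practice"
    active, recruited = 0, {0}                     # seed neuron gets the push
    for step in range(N - 1):
        inputs = W[:, active].copy()
        inputs[list(recruited)] = -np.inf          # each neuron joins the chain once
        nxt = int(np.argmax(inputs))               # strongest follower wins
        W[nxt, active] += 0.05                     # Hebbian strengthening
        W[nxt, :] /= W[nxt, :].sum()               # competition among inputs to nxt
        recruited.add(nxt)
        active = nxt

# After training, following the strongest outgoing connection from each neuron
# recovers a single chain that visits every neuron in a fixed order.
order, node = [0], 0
for _ in range(N - 1):
    node = int(np.argmax(W[:, node]))
    order.append(node)
print("learned firing order:", order)
```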

Their results also suggested how a young bird learns to produce different syllables, as it progresses from repetitive babbling to a more adult-like song. “At first, there’s just one big burst of neural activity, but as the song becomes more complex, the activity gradually spreads out in time and splits into different sequences, each controlling a different syllable. It’s as if you started with lots of dominoes all clumped together, and then gradually they become sorted into different rows.”

Does something similar happen in the human brain? “It seems very likely,” says Fee. “Many of our movements are precisely timed—think about speaking a sentence or performing a musical instrument or delivering a tennis serve. Even our thoughts often happen in sequences. Things happen faster in birds than mammals, but we suspect the underlying mechanisms will be very similar.”

Speed control

One floor above the Fee lab, Mehrdad Jazayeri is also studying how time controls actions, using humans and monkeys rather than birds. Like Fee, Jazayeri comes from an engineering background, and his goal is to understand, with an engineer’s level of detail, how we perceive time and use it flexibly to control our actions.

To begin to answer this question, Jazayeri trained monkeys to remember time intervals of a few seconds or less, and to reproduce them by pressing a button or making an eye movement at the correct time after a visual cue appeared on a screen. He then recorded brain activity as the monkeys performed this task, to find out how the brain measures elapsed time. “There were two prominent ideas in the field,” he explains. “One idea was that there is an internal clock, and that the brain can somehow count the accumulating ticks. Another class of models had proposed that there are multiple oscillators that come in and out of phase at different times.”

When they examined the recordings, however, the results did not fit either model. Despite searching across multiple brain areas, Jazayeri and his colleagues found no sign of ticking or oscillations. Instead, their recordings revealed complex patterns of activity, distributed across populations of neurons; moreover, as the monkey produced longer or shorter intervals, these activity patterns were stretched or compressed in time, to fit the overall duration of each interval. In other words, says Jazayeri, the brain circuits were able to adjust the speed with which neural signals evolve over time. He compares it to a group of musicians performing a complex piece of music. “Each player has their own part, which they can play faster or slower depending on the overall tempo of the music.”
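One way to picture this stretching and compressing is to let each neuron's firing profile be a fixed function of the fraction of the interval that has elapsed, so that producing a longer interval simply means traversing the same population trajectory more slowly. A minimal illustration of that idea (invented tuning curves, not a fit to the recordings):

```python
import numpy as np

def population_activity(duration_ms, n_neurons=5, dt_ms=10):
    """Firing profiles that depend only on the fraction of the interval elapsed,
    so the same trajectory is traced out faster or slower as duration changes."""
    t = np.arange(0, duration_ms, dt_ms)
    phase = t / duration_ms                        # 0 at the start, 1 at the end
    centers = np.linspace(0.1, 0.9, n_neurons)     # each neuron peaks at one phase
    rates = np.exp(-0.5 * ((phase[:, None] - centers) / 0.1) ** 2)
    return rates, t

short_rates, t_short = population_activity(600)    # a 600 ms interval
long_rates, t_long = population_activity(1200)     # a 1200 ms interval

# Neuron 2 peaks halfway through the interval in both cases: at ~300 ms for the
# short interval and ~600 ms for the long one, i.e., the pattern is stretched.
print("short-interval peak of neuron 2:", t_short[np.argmax(short_rates[:, 2])], "ms")
print("long-interval peak of neuron 2: ", t_long[np.argmax(long_rates[:, 2])], "ms")
```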

Ready-set-go

Jazayeri is also using time as a window onto a broader question—how our perceptions and decisions are shaped by past experience. “It’s one of the great questions in neuroscience, but it’s not easy to study. One of the great advantages of studying timing is that it’s easy to measure precisely, so we can frame our questions in precise mathematical ways.”

The starting point for this work was a deceptively simple task, which Jazayeri calls “Ready-Set-Go.” In this task, the subject is given the first two beats of a regular rhythm (“Ready, Set”) and must then generate the third beat (“Go”) at the correct time. To perform this task, the brain must measure the duration between Ready and Set and then immediately reproduce it.

Humans can do this fairly accurately, but not perfectly—their responses are imprecise, presumably because there is some “noise” in the neural signals that convey timing information within the brain. In the face of this uncertainty, the optimal strategy (known mathematically as Bayesian inference) is to bias the time estimates toward prior expectations, and this is exactly what happened in Jazayeri’s experiments. If the intervals in previous trials were shorter, people tended to underestimate the next interval, whereas if the previous intervals were longer, they tended to overestimate it. In other words, people use their memory of earlier trials to improve their current time estimates.
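That bias is exactly what a Bayes-least-squares estimator predicts. The sketch below uses invented numbers rather than the parameters of the actual experiments: the measurement of an interval is corrupted by noise that grows with the interval, the prior reflects the range of intervals seen on recent trials, and the estimate (the posterior mean) is pulled toward the middle of that range.

```python
import numpy as np

def bls_estimate(measured_ms, prior_lo, prior_hi, weber=0.15):
    """Posterior-mean estimate of a time interval from one noisy measurement.
    Noise is 'scalar': its standard deviation grows with the interval itself."""
    grid = np.linspace(200, 1600, 2801)            # candidate true intervals (ms)
    prior = ((grid >= prior_lo) & (grid <= prior_hi)).astype(float)
    sigma = weber * grid
    likelihood = np.exp(-0.5 * ((measured_ms - grid) / sigma) ** 2) / sigma
    posterior = prior * likelihood
    posterior /= posterior.sum()
    return float(np.sum(grid * posterior))

# The same 800 ms measurement is judged differently depending on whether the
# preceding trials contained shorter (500-800 ms) or longer (800-1100 ms) intervals.
print("800 ms measured, short context ->", round(bls_estimate(800, 500, 800)), "ms")
print("800 ms measured, long context  ->", round(bls_estimate(800, 800, 1100)), "ms")
```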

Monkeys can also learn this task and show similar biases, providing an opportunity to study how the brain establishes and stores these prior expectations, and how these expectations influence subsequent behavior. Again, Jazayeri and colleagues recorded from large numbers of neurons during the task. The resulting activity patterns are complex and not easily described in words, but in mathematical terms the activity forms a geometric structure known as a manifold. “Think of it as a curved surface, analogous to a cylinder,” he says. “In the past, people could not see it because they could only record from one or a few neurons at a time. We have to measure activity across large numbers of neurons simultaneously if we want to understand the workings of the system.”

Computing time

To interpret their data, Jazayeri and his team often turn to computer models based on artificial neural networks. “These models are a powerful tool in our work because we can fully reverse-engineer them and gain insight into the underlying mechanisms,” he explains. His lab has now succeeded in training a recurrent neural network that can perform the Ready-Set-Go task, and they have found that the model develops a manifold similar to the real brain data. This has led to the intriguing conjecture that memory of past experiences can be embedded in the structure of the manifold.

Jazayeri concludes: “We haven’t connected all the dots, but I suspect that many questions about brain and behavior will find their answers in the geometry and dynamics of neural activity.” Jazayeri’s long-term ambition is to develop predictive models of brain function. As an analogy, he says, think of a pendulum. “If we know its current state—its position and speed—we can predict with complete confidence what it will do next, and how it will respond to a perturbation. We don’t have anything like that for the brain—nobody has been able to do that, not even the simplest brain functions. But that’s where we’d eventually like to be.”

A clock within the brain?

It is not yet clear how the mechanisms studied by Fee and Jazayeri are related. “We talk together often, but we are still guessing how the pieces fit together,” says Fee. But one thing they both agree on is the lack of evidence for any central clock within the brain. “Most people have this intuitive feeling that time is a unitary thing, and that there must be some central clock inside our head, coordinating everything like the conductor of the orchestra or the clock inside your computer,” says Jazayeri. “Even many experts in the field believe this, but we don’t think it’s right.” Rather, his work and Fee’s both point to the existence of separate circuits for different time-related behaviors, such as singing. If there is no clock, how do the different systems work together to create our apparently seamless perception of time? “It’s still a big mystery,” says Jazayeri. “Questions like that are what make neuroscience so interesting.”


Mehrdad Jazayeri to join McGovern Institute faculty

We are pleased to announce the appointment of Mehrdad Jazayeri as an Investigator at the McGovern Institute for Brain Research. He will join the institute in January 2013, with a faculty appointment as assistant professor in MIT’s Department of Brain and Cognitive Sciences.

Complex behaviors rely on a combination of sensory evidence, prior experience and knowledge about potential costs and benefits. Jazayeri’s research is focused on the neural mechanisms that enable the brain to integrate these internal and external cues and to produce flexible goal-directed behavior.

In his dissertation work with J. Anthony Movshon at New York University, Jazayeri asked how the brain uses unreliable sensory signals to make probabilistic inferences. His work led to a simple computational scheme that explained how information in visual cortical maps is used for a variety of visual perceptual tasks. Later, as a Helen Hay Whitney postdoctoral fellow, he began to investigate the role of prior experience in perception. Working in the laboratory of Michael Shadlen at the University of Washington, he used a simple timing task to show that humans exploit their prior experience of temporal regularities to make better estimates of time intervals. Grounded in a rigorous mathematical framework — Bayesian estimation — this work provided a detailed model for quantifying how measurements, prior expectations and internal goals influence timing behavior.

Jazayeri then turned to monkey electrophysiology to study how neurons process timing information and how they combine sensory cues with prior experience. For this work, he taught monkeys to reproduce time intervals, as if keeping the beat in music. The animals were provided with beats 1 and 2 and were rewarded for producing a third beat at the correct time. By recording from sensorimotor neurons in the parietal cortex during this task, Jazayeri showed that the pattern of activity is very different during the measurement and production phases of the task, even though the interval is the same. Moreover, he found that the response dynamics of parietal neurons were shaped not only by the immediate time cues but also by the intervals monkeys had encountered in preceding trials.

Building on his previous work, Jazayeri will pursue two long-term research themes at MIT. One line of research will examine how brain circuits measure and produce time, an ability that is crucial for mental capacities such as learning causes and effects, “intuitive physics,” and sequencing thoughts and actions. The other line of research will exploit timing tasks to understand the neural basis of sensorimotor integration, a key component of cognitive functions such as deliberation and probabilistic reasoning.

Understanding complex behaviors such as flexible timing or sensorimotor integration requires methods for manipulating the activity of specific structures and circuits within the brain. Optogenetics, the ability to control brain activity using light, has emerged as a powerful tool for such studies. In a recent collaboration with Greg Horwitz at the University of Washington, Jazayeri reported the first successful application of optogenetics to evoke a behavioral response in primates. Motivated by this proof-of-principle experiment, Jazayeri plans to combine the traditional tools of psychophysics and electrophysiology with optogenetic manipulations to characterize the circuits that control timing and sensorimotor integration in the primate brain.

Originally from Iran, Jazayeri obtained his B.Sc. in Electrical Engineering from Sharif University of Technology in Tehran. He received his PhD from New York University, where he studied with J. Anthony Movshon, winning the Dean’s award for the most outstanding dissertation in the university. After graduating, he was awarded a Helen Hay Whitney fellowship to join the laboratory of Michael Shadlen at the University of Washington, where he has been since 2007.

McGovern Institute to present inaugural Edward M. Scolnick Prize in Neuroscience Research

The Edward M. Scolnick Prize in Neuroscience Research will be awarded on Friday, April 23, at the McGovern Institute at MIT, a leading research and teaching institute committed to advancing understanding of the human mind and communications. According to Dr. Phillip A. Sharp, Director of the Institute, this annual research prize will recognize outstanding discoveries or significant advances in the field of neuroscience.

The inaugural prize will be presented to Dr. Masakazu Konishi, Bing Professor of Behavioral Biology at the California Institute of Technology. As part of the day’s events, Dr. Konishi will present a free public lecture, “Non-linear steps to high stimulus selectivity in different sensory systems,” at 1:30 PM on Friday, April 23, at MIT (Building E25, Room 111). Following the lecture, the McGovern Institute will host an invitation-only reception and dinner honoring Dr. Konishi at the MIT Faculty Club. Speakers for the evening award presentation include Dr. Sharp; Patrick J. McGovern, Founder and Chairman of International Data Group (IDG) and trustee of MIT and the Institute; Edward Scolnick, former President of Merck Research Laboratories; and Torsten Wiesel, President Emeritus of Rockefeller University.

“I am pleased, on behalf of the McGovern Institute, to recognize the important work that Dr. Mark Konishi is doing,” said Dr. Sharp. “Dr. Konishi is being recognized for his fundamental discoveries concerning mechanisms in the brain for sound location such as a neural topographic map of auditory space. Through a combination of his discoveries, the positive influence of his rigorous approach, and the cadre of young scientists he has mentored and trained, Dr. Konishi has improved our knowledge of how the brain works, and the future of neuroscience research. Mark is truly a leader, and well-deserving of this prestigious honor.”

Dr. Konishi received his B.S. and M.S. degrees from Hokkaido University in Sapporo, Japan, and his doctorate from the University of California, Berkeley in 1963. After holding positions at the University of Tübingen and the Max Planck Institute in Germany, Dr. Konishi returned to the United States, where he worked at the University of Wisconsin and Princeton University before coming to the California Institute of Technology in 1975 as Professor of Biology. He has been the Bing Professor of Behavioral Biology at Caltech since 1980. With scores of publications dating back to 1971, and as the recipient of fourteen previous awards, Dr. Konishi has forged a deserved reputation as an outstanding investigator.

Among his many findings, Dr. Konishi is known for his fundamental discoveries concerning sound location by the barn owl and the song system in the bird. He discovered that in the inferior colliculus of the brain of the barn owl there is a map of auditory space and he identified the computational principles and the neural mechanisms that underlie the workings of the map.

The creation of the Edward M. Scolnick Prize was announced last year, with the first presentation scheduled for 2004. The prize consists of a $50,000 award and will be given annually to an outstanding leader in the international neuroscience research community. The McGovern Institute will host a public lecture by Dr. Konishi in the spring of 2004, followed by an award presentation ceremony.

The award is named in honor of Dr. Edward M. Scolnick, who stepped down as President of Merck Research Laboratories in December 2002, after holding Merck & Co., Inc.’s top research post for 17 years. During his tenure, Dr. Scolnick led the discovery, development and introduction of 29 new medicines and vaccines. While many of the medicines and vaccines have contributed to improving patient health, some have revolutionized the ways in which certain diseases are treated.

About the McGovern Institute at MIT

The McGovern Institute at MIT is a research and teaching institute committed to advancing human understanding and communications. The goal of the McGovern Institute is to investigate and ultimately understand the biological basis of all higher brain function in humans. The McGovern Institute conducts integrated research in neuroscience, genetic and cellular neurobiology, cognitive science, computation, and related areas.

By determining how the brain works, from the level of gene expression in individual neurons to the interrelationships among complex neural networks, the McGovern Institute works to improve human health, discover the basis of learning and recognition, and enhance education and communication. Its research contributes to basic knowledge of the fundamental mysteries of human awareness, decisions, and actions.