How badly do you want something? Babies can tell

Babies as young as 10 months can assess how much someone values a particular goal by observing how hard they are willing to work to achieve it, according to a new study from MIT and Harvard University.

This ability requires integrating information about both the cost of obtaining a goal and the benefit it brings to the person seeking it, suggesting that babies acquire an intuition about how people make decisions very early in life.

“Infants are far from experiencing the world as a ‘blooming, buzzing confusion,’” says lead author Shari Liu, referring to a description by philosopher and psychologist William James about a baby’s first experience of the world. “They interpret people’s actions in terms of hidden variables, including the effort [people] expend in producing those actions, and also the value of the goals those actions achieve.”

“This study is an important step in trying to understand the roots of common-sense understanding of other people’s actions. It shows quite strikingly that in some sense, the basic math that is at the heart of how economists think about rational choice is very intuitive to babies who don’t know math, don’t speak, and can barely understand a few words,” says Josh Tenenbaum, a professor in MIT’s Department of Brain and Cognitive Sciences, a core member of the joint MIT-Harvard Center for Brains, Minds and Machines (CBMM), and one of the paper’s authors.

Tenenbaum helped to direct the research team along with Elizabeth Spelke, a professor of psychology at Harvard University and CBMM core member, in whose lab the research was conducted. Liu, the paper’s lead author, is a graduate student at Harvard. CBMM postdoc Tomer Ullman is also an author of the paper, which appears in the Nov. 23 online edition of Science.

Calculating value

Previous research has shown that adults and older children can infer someone’s motivations by observing how much effort that person exerts toward obtaining a goal.

The Harvard/MIT team wanted to learn more about how and when this ability develops. Previous studies have found that babies expect people to be consistent in their preferences and efficient in pursuing their goals. The question posed in this study was whether babies can combine what they know about a person’s goal with the effort required to obtain it to calculate the value of that goal.

To answer that question, the researchers showed 10-month-old infants animated videos in which an “agent,” a cartoon character shaped like a bouncing ball, tries to reach a certain goal (another cartoon character). In one of the videos, the agent has to leap over walls of varying height to reach the goal. First, the babies saw the agent jump over a low wall and then refuse to jump over a medium-height wall. Next, the agent jumped over the medium-height wall to reach a different goal, but refused to jump over a high wall to reach that goal.

The babies were then shown a scene in which the agent could choose between the two goals, with no obstacles in the way. An adult or older child would assume the agent would choose the second goal, because the agent had worked harder to reach that goal in the video seen earlier. The researchers found that 10-month-olds also reached this conclusion: When the agent was shown choosing the first goal, infants looked at the scene longer, indicating that they were surprised by that outcome. (Length of looking time is commonly used to measure surprise in studies of infants.)
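The logic the infants appear to use can be captured in a few lines of interval arithmetic. The sketch below (with arbitrary placeholder costs) is an illustration of the reasoning, not the authors’ model: each accept or refuse decision bounds the value the agent places on a goal.

```python
def value_bounds(decisions):
    """decisions: (cost, accepted) pairs observed for one goal.
    Returns (lower, upper) bounds on the value the agent places on it."""
    lower, upper = 0.0, float("inf")
    for cost, accepted in decisions:
        if accepted:                     # paid this cost -> value >= cost
            lower = max(lower, cost)
        else:                            # refused this cost -> value < cost
            upper = min(upper, cost)
    return lower, upper

# Placeholder costs: low wall = 1, medium wall = 2, high wall = 3.
goal_a = value_bounds([(1, True), (2, False)])  # jumped low, refused medium
goal_b = value_bounds([(2, True), (3, False)])  # jumped medium, refused high
```

Because refusing a cost implies the value falls below it, goal A’s value lies below 2 while goal B’s lies at or above 2, so a free choice of goal A is the surprising outcome.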

The researchers found the same results when babies watched the agents perform the same set of actions with two different types of effort: climbing ramps of varying incline and jumping across gaps of varying width.

“Across our experiments, we found that babies looked longer when the agent chose the thing it had exerted less effort for, showing that they infer the amount of value that agents place on goals from the amount of effort that they take toward these goals,” Liu says.

The findings suggest that infants are able to calculate how much another person values something based on how much effort they put into getting it.

“This paper is not the first to suggest that idea, but its novelty is that it shows this is true in much younger babies than anyone has seen. These are preverbal babies, who themselves are not actively doing very much, yet they appear to understand other people’s actions in this sophisticated, quantitative way,” says Tenenbaum, who is also affiliated with MIT’s Computer Science and Artificial Intelligence Laboratory.

Studies of infants can reveal deep commonalities in the ways that we think throughout our lives, suggests Spelke. “Abstract, interrelated concepts like cost and value — concepts at the center both of our intuitive psychology and of utility theory in philosophy and economics — may originate in an early-emerging system by which infants understand other people’s actions,” she says.

The study shows, for the first time, that “preverbal infants can look at the world like economists,” says Gergely Csibra, a professor of cognitive science at Central European University in Hungary. “They do not simply calculate the costs and benefits of others’ actions (this had been demonstrated before), but relate these terms onto each other. In other words, they apply the well-known logic that all of us rely on when we try to assess someone’s preferences: The harder she tries to achieve something, the more valuable is the expected reward to her when she succeeds.”

Modeling intelligence

Over the past 10 years, scientists have developed computer models that come close to replicating how adults and older children incorporate different types of input to infer other people’s goals, intentions, and beliefs. For this study, the researchers built on that work, especially work by Julian Jara-Ettinger PhD ’16, who studied similar questions in preschool-age children. The researchers developed a computer model that can predict what 10-month-old babies would infer about an agent’s goals after observing the agent’s actions. This new model also posits an ability to calculate “work” (or total force applied over a distance) as a measure of the cost of actions, which the researchers believe babies are able to do on some intuitive level.
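As a concrete stand-in for that cost measure, the work done against gravity to clear a wall of height h is m·g·h. The snippet below is illustrative only; the mass and wall heights are invented, not taken from the study.

```python
G = 9.8  # gravitational acceleration, m/s^2

def jump_cost(mass_kg, wall_height_m):
    """Work done against gravity to clear a wall: W = m * g * h."""
    return mass_kg * G * wall_height_m

# Hypothetical 0.1 kg agent facing walls of 0.1, 0.2, and 0.3 m:
cost_low, cost_med, cost_high = (jump_cost(0.1, h) for h in (0.1, 0.2, 0.3))
# Clearing the medium wall for goal B but refusing it for goal A
# implies value(B) >= cost_med > value(A).
```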

“Babies of this age seem to understand basic ideas of Newtonian mechanics, before they can talk and before they can count,” Tenenbaum says. “They’re putting together an understanding of forces, including things like gravity, and they also have some understanding of the usefulness of a goal to another person.”

Building this type of model is an important step toward developing artificial intelligence that replicates human behavior more accurately, the researchers say.

“We have to recognize that we’re very far from building AI systems that have anything like the common sense even of a 10-month-old,” Tenenbaum says. “But if we can understand in engineering terms the intuitive theories that even these young infants seem to have, that hopefully would be the basis for building machines that have more human-like intelligence.”

Still unanswered are the questions of exactly how and when these intuitive abilities arise in babies.

“Do infants start with a completely blank slate, and somehow they’re able to build up this sophisticated machinery? Or do they start with some rudimentary understanding of goals and beliefs, and then build up the sophisticated machinery? Or is it all just built in?” Ullman says.

The researchers hope that studies of even younger babies, perhaps as young as 3 months old, along with computational models of how intuitive theories are learned, which the team is also developing, may help to shed light on these questions.

This project was funded by the National Science Foundation through the Center for Brains, Minds and Machines, which is based at MIT’s McGovern Institute for Brain Research and led by MIT and Harvard.

Stress can lead to risky decisions

Making decisions is not always easy, especially when choosing between two options that have both positive and negative elements, such as deciding between a job with a high salary but long hours, and a lower-paying job that allows for more leisure time.

MIT neuroscientists have now discovered that making decisions in this type of situation, known as a cost-benefit conflict, is dramatically affected by chronic stress. In a study of rats and mice, they found that stressed animals were far likelier to choose high-risk, high-payoff options.

The researchers also found that impairments of a specific brain circuit underlie this abnormal decision making, and they showed that they could restore normal behavior by manipulating this circuit. If a method for tuning this circuit in humans were developed, it could help patients with disorders such as depression, addiction, and anxiety, which often feature poor decision-making.

“One exciting thing is that by doing this very basic science, we found a microcircuit of neurons in the striatum that we could manipulate to reverse the effects of stress on this type of decision making. This to us is extremely promising, but we are aware that so far these experiments are in rats and mice,” says Ann Graybiel, an Institute Professor at MIT and member of the McGovern Institute for Brain Research.

Graybiel is the senior author of the paper, which appears in Cell on Nov. 16. The paper’s lead author is Alexander Friedman, a McGovern Institute research scientist.

Hard decisions

In 2015, Graybiel, Friedman, and their colleagues first identified the brain circuit involved in decision making that involves cost-benefit conflict. The circuit begins in the medial prefrontal cortex, which is responsible for mood control, and extends into clusters of neurons called striosomes, which are located in the striatum, a region associated with habit formation, motivation, and reward reinforcement.

In that study, the researchers trained rodents to run a maze that offered a choice between two options: highly concentrated chocolate milk, which they like, paired with bright light, which they don’t like; or weaker chocolate milk under dimmer light. By inhibiting the connection between cortical neurons and striosomes, using a technique known as optogenetics, they found that they could transform the rodents’ preference for lower-risk, lower-payoff choices into a preference for bigger payoffs despite their bigger costs.

In the new study, the researchers performed a similar experiment without optogenetic manipulations. Instead, they exposed the rodents to a short period of stress every day for two weeks.

Before experiencing stress, normal rats and mice would choose to run toward the maze arm with dimmer light and weaker chocolate milk about half the time. The researchers gradually increased the concentration of chocolate milk found in the dimmer side, and as they did so, the animals began choosing that side more frequently.

However, when chronically stressed rats and mice were put in the same situation, they continued to choose the bright light/better chocolate milk side even as the chocolate milk concentration greatly increased on the dimmer side. This was the same behavior the researchers saw in rodents that had the prefrontal cortex-striosome circuit disrupted optogenetically.

“The result is that the animal ignores the high cost and chooses the high reward,” Friedman says.
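One way to picture that failure of integration is as a weighted utility in which chronic stress collapses the weight placed on cost. This is a cartoon for intuition only, not the circuit model from the paper, and all numbers are arbitrary.

```python
def choose(options, cost_weight):
    """Pick the option with the highest net utility = benefit - w * cost."""
    return max(options, key=lambda o: o["benefit"] - cost_weight * o["cost"])

maze = [
    {"name": "dim light, weaker chocolate milk",    "benefit": 5, "cost": 1},
    {"name": "bright light, richer chocolate milk", "benefit": 6, "cost": 4},
]

normal   = choose(maze, cost_weight=1.0)  # cost counts -> low-risk arm
stressed = choose(maze, cost_weight=0.1)  # cost ignored -> high-payoff arm
```

With the cost weight intact the dim arm wins (5 − 1 > 6 − 4); with it collapsed, the bright arm’s larger reward dominates.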

The findings help to explain how stress contributes to substance abuse and may worsen mental disorders, says Amy Arnsten, a professor of neuroscience and psychology at the Yale University School of Medicine, who was not involved in the research.

“Stress is ubiquitous, for both humans and animals, and its effects on brain and behavior are of central importance to the understanding of both normal function and neuropsychiatric disease. It is both pernicious and ironic that chronic stress can lead to impulsive action; in many clinical cases, such as drug addiction, impulsivity is likely to worsen patterns of behavior that produce the stress in the first place, inducing a vicious cycle,” Arnsten wrote in a commentary accompanying the Cell paper, co-authored by Daeyeol Lee and Christopher Pittenger of the Yale University School of Medicine.

Circuit dynamics

The researchers believe that this circuit integrates information about the good and bad aspects of possible choices, helping the brain to produce a decision. Normally, when the circuit is turned on, neurons of the prefrontal cortex activate certain neurons called high-firing interneurons, which then suppress striosome activity.

When the animals are stressed, these circuit dynamics shift and the cortical neurons fire too late to inhibit the striosomes, which then become overexcited. This results in abnormal decision making.

“Somehow this prior exposure to chronic stress controls the integration of good and bad,” Graybiel says. “It’s as though the animals had lost their ability to balance excitation and inhibition in order to settle on reasonable behavior.”

Once this shift occurs, it remains in effect for months, the researchers found. However, they were able to restore normal decision making in the stressed mice by using optogenetics to stimulate the high-firing interneurons, thereby suppressing the striosomes. This suggests that the prefronto-striosome circuit remains intact following chronic stress and could potentially be susceptible to manipulations that would restore normal behavior in human patients whose disorders lead to abnormal decision making.

“This state change could be reversible, and it’s possible in the future that you could target these interneurons and restore the excitation-inhibition balance,” Friedman says.

The research was funded by the National Institutes of Health/National Institute of Mental Health, the CHDI Foundation, the Defense Advanced Research Projects Agency and the U.S. Army Research Office, the Bachmann-Strauss Dystonia and Parkinson Foundation, the William N. and Bernice E. Bumpus Foundation, Michael Stiefel, the Saks Kavanaugh Foundation, and John Wasserlein and Lucille Braun.

Next-generation optogenetic molecules control single neurons

Researchers at MIT and Paris Descartes University have developed a new optogenetic technique that sculpts light to target individual cells bearing engineered light-sensitive molecules, so that individual neurons can be precisely stimulated.

Until now, it has been challenging to use optogenetics to target single cells with such precise control over both the timing and location of the activation. This new advance paves the way for studies of how individual cells, and connections among those cells, generate specific behaviors such as initiating a movement or learning a new skill.

“Ideally what you would like to do is play the brain like a piano. You would want to control neurons independently, rather than having them all march in lockstep the way traditional optogenetics works, but which normally the brain doesn’t do,” says Ed Boyden, an associate professor of brain and cognitive sciences and biological engineering at MIT, and a member of MIT’s Media Lab and McGovern Institute for Brain Research.

The new technique relies on a new type of light-sensitive protein that can be embedded in neuron cell bodies, combined with holographic light-shaping that can focus light on a single cell.

Boyden and Valentina Emiliani, a research director at France’s National Center for Scientific Research (CNRS) and director of the Neurophotonics Laboratory at Paris Descartes University, are the senior authors of the study, which appears in the Nov. 13 issue of Nature Neuroscience. The lead authors are MIT postdoc Or Shemesh and CNRS postdocs Dimitrii Tanese and Valeria Zampini.

Precise control

More than 10 years ago, Boyden and his collaborators first pioneered the use of light-sensitive proteins known as microbial opsins to manipulate neuron electrical activity. These opsins can be embedded into the membranes of neurons, and when they are exposed to certain wavelengths of light, they silence or stimulate the cells.

Over the past decade, scientists have used this technique to study how populations of neurons behave during brain tasks such as memory recall or habit formation. Traditionally, many cells are targeted simultaneously because the light shining into the brain strikes a relatively large area. However, as Boyden points out, neurons may have different functions even when they are near each other.

“Two adjacent cells can have completely different neural codes. They can do completely different things, respond to different stimuli, and play different activity patterns during different tasks,” he says.

To achieve independent control of single cells, the researchers combined two new advances: a localized, more powerful opsin and an optimized holographic light-shaping microscope.

For the opsin, the researchers used a protein called CoChR, which the Boyden lab discovered in 2014. They chose this molecule because it generates a very strong electric current in response to light (about 10 times stronger than that produced by channelrhodopsin-2, the first protein used for optogenetics).

They fused CoChR to a small protein that directs the opsin into the cell bodies of neurons and away from axons and dendrites, which extend from the neuron body. This helps to prevent crosstalk between neurons, since light that activates one neuron can also strike axons and dendrites of other neurons that intertwine with the target neuron.

Boyden then worked with Emiliani to combine this approach with a light-stimulation technique that she had previously developed, known as two-photon computer-generated holography (CGH). This can be used to create three-dimensional sculptures of light that envelop a target cell.

Traditional holography is based on reproducing, with light, the shape of a specific object, in the absence of that original object. This is achieved by creating an “interferogram” that contains the information needed to reconstruct an object that was previously illuminated by a reference beam. In computer-generated holography, the interferogram is calculated by a computer without the need for any original object. Years ago, Emiliani’s research group demonstrated that, combined with two-photon excitation, CGH can be used to refocus laser light to precisely illuminate a cell or a defined group of cells in the brain.
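Computing such an interferogram is a phase-retrieval problem: find a phase pattern at the illumination plane whose far field matches the desired intensity. A standard textbook method for this, and not necessarily the one used in this work, is the iterative Gerchberg-Saxton algorithm, sketched here with NumPy for a single target spot.

```python
import numpy as np

def gerchberg_saxton(target_amplitude, n_iter=50, seed=0):
    """Find a phase mask whose far-field (Fourier-plane) intensity
    approximates |target_amplitude|^2, by alternating projections."""
    rng = np.random.default_rng(seed)
    field = np.exp(1j * rng.uniform(0, 2 * np.pi, target_amplitude.shape))
    for _ in range(n_iter):
        far = np.fft.fft2(field)
        # Impose the desired amplitude at the target (far-field) plane.
        far = target_amplitude * np.exp(1j * np.angle(far))
        field = np.fft.ifft2(far)
        # Impose uniform illumination amplitude at the hologram plane.
        field = np.exp(1j * np.angle(field))
    return np.angle(field)

# Target: one bright spot, standing in for a single neuron's cell body.
target = np.zeros((64, 64))
target[20, 30] = 1.0
phase = gerchberg_saxton(target)
achieved = np.abs(np.fft.fft2(np.exp(1j * phase))) ** 2
```

After a few iterations the far-field intensity concentrates at the target pixel; shaping the three-dimensional light sculptures described above requires further constraints beyond this sketch.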

In the new study, by combining this approach with new opsins that cluster in the cell body, the researchers showed they could stimulate individual neurons with not only precise spatial control but also great control over the timing of the stimulation. When they target a specific neuron, it responds consistently every time, with variability that is less than one millisecond, even when the cell is stimulated many times in a row.

“For the first time ever, we can bring the precision of single-cell control toward the natural timescales of neural computation,” Boyden says.

Mapping connections

Using this technique, the researchers were able to stimulate a single neuron in a brain slice and then measure the responses of the cells connected to it. This paves the way for possible diagramming of the connections of the brain, and for analyzing how those connections change in real time as the brain performs a task or learns a new skill.

One possible experiment, Boyden says, would be to stimulate neurons connected to each other to try to figure out if one is controlling the others or if they are all receiving input from a far-off controller.

“It’s an open question,” he says. “Is a given function being driven from afar, or is there a local circuit that governs the dynamics and spells out the exact chain of command within a circuit? If you can catch that chain of command in action and then use this technology to prove that that’s actually a causal link of events, that could help you explain how a sensation, or movement, or decision occurs.”

As a step toward that type of study, the researchers now plan to extend this approach into living animals. They are also working on improving their targeting molecules and developing high-current opsins that can silence neuron activity.

Kirill Volynski, a professor at the Institute of Neurology at University College London, who was not involved in the research, plans to use the new technology in his studies of diseases caused by mutations of proteins involved in synaptic communication between neurons.

“This gives us a very nice tool to study those mutations and those disorders,” Volynski says. “We expect this to enable a major improvement in the specificity of stimulating neurons that have mutated synaptic proteins.”

The research was funded by the National Institutes of Health, France’s National Research Agency, the Simons Foundation for the Social Brain, the Human Frontiers Science Program, John Doerr, the Open Philanthropy Project, the Howard Hughes Medical Institute, and the Defense Advanced Research Projects Agency.


Researchers engineer CRISPR to edit single RNA letters in human cells

The Broad Institute and MIT scientists who first harnessed CRISPR for mammalian genome editing have engineered a new molecular system for efficiently editing RNA in human cells. RNA editing, which can alter gene products without making changes to the genome, has profound potential as a tool for both research and disease treatment.

In a paper published today in Science, senior author Feng Zhang and his team describe the new CRISPR-based system, called RNA Editing for Programmable A to I Replacement, or “REPAIR.” The system can change single RNA nucleotides in mammalian cells in a programmable and precise fashion, giving it the ability to reverse disease-causing mutations at the RNA level, along with other potential therapeutic and basic-science applications.

“The ability to correct disease-causing mutations is one of the primary goals of genome editing,” says Zhang, a core institute member of the Broad Institute, an investigator at the McGovern Institute, and the James and Patricia Poitras ’63 Professor in Neuroscience and associate professor in the departments of Brain and Cognitive Sciences and Biological Engineering at MIT. “So far, we’ve gotten very good at inactivating genes, but actually recovering lost protein function is much more challenging. This new ability to edit RNA opens up more potential opportunities to recover that function and treat many diseases, in almost any kind of cell.”

REPAIR targets individual RNA letters, or nucleosides, switching adenosines to inosines (read as guanosines by the cell). These letters are involved in single-base changes known to regularly cause disease in humans. In human disease, a mutation from G to A is extremely common; such alterations have been implicated in, for example, cases of focal epilepsy, Duchenne muscular dystrophy, and Parkinson’s disease. REPAIR can reverse the impact of any pathogenic G-to-A mutation regardless of its surrounding nucleotide sequence, and has the potential to operate in any cell type.

Unlike the permanent changes to the genome required for DNA editing, RNA editing offers a safer, more flexible way to make corrections in the cell. “REPAIR can fix mutations without tampering with the genome, and because RNA naturally degrades, it’s a potentially reversible fix,” explains co-first author David Cox, a graduate student in Zhang’s lab.

To create REPAIR, the researchers systematically profiled the CRISPR-Cas13 enzyme family for potential “editor” candidates (unlike Cas9, the Cas13 proteins target and cut RNA). They selected an enzyme from Prevotella bacteria, called PspCas13b, which was the most effective at inactivating RNA. The team engineered a deactivated variant of PspCas13b that still binds to specific stretches of RNA but lacks its “scissor-like” activity, and fused it to a protein called ADAR2, which changes the letters A to I in RNA transcripts.

In REPAIR, the deactivated Cas13b enzyme seeks out a target sequence of RNA, and the ADAR2 element performs the base conversion without cutting the transcript or relying on any of the cell’s native machinery.
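At the sequence level, the outcome of an edit is easy to picture: one adenosine in the transcript becomes an inosine, which downstream machinery reads as guanosine. The toy functions below illustrate only that outcome, not the Cas13b-ADAR2 mechanism, and the sequence is made up.

```python
def repair_edit(transcript, target_site):
    """Convert the adenosine (A) at target_site to inosine ('I'),
    which downstream machinery reads as guanosine (G)."""
    if transcript[target_site] != "A":
        raise ValueError("REPAIR edits adenosines only")
    return transcript[:target_site] + "I" + transcript[target_site + 1:]

def read_by_ribosome(transcript):
    """The cell interprets inosine as guanosine."""
    return transcript.replace("I", "G")

# A made-up transcript carrying a pathogenic G-to-A change at position 5:
mutant = "AUGGCAUCC"                   # position 5 should be G, not A
edited = repair_edit(mutant, 5)        # -> "AUGGCIUCC"
corrected = read_by_ribosome(edited)   # -> "AUGGCGUCC"
```

Because the edit touches the transcript rather than the genome, the change decays along with the RNA, matching the reversibility described above.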

The team further modified the editing system to improve its specificity, reducing detectable off-target edits from 18,385 to only 20 in the whole transcriptome. The upgraded incarnation, REPAIRv2, consistently achieved the desired edit in 20 to 40 percent — and up to 51 percent — of a targeted RNA without signs of significant off-target activity. “The success we had engineering this system is encouraging, and there are clear signs REPAIRv2 can be evolved even further for more robust activity while still maintaining specificity,” says Omar Abudayyeh, co-first author and a graduate student in Zhang’s lab. Cox and Abudayyeh are both students in the Harvard-MIT Program in Health Sciences and Technology.

To demonstrate REPAIR’s therapeutic potential, the team synthesized the pathogenic mutations that cause Fanconi anemia and X-linked nephrogenic diabetes insipidus, introduced them into human cells, and successfully corrected these mutations at the RNA level. To push the therapeutic prospects further, the team plans to improve REPAIRv2’s efficiency and to package it into a delivery system appropriate for introducing REPAIRv2 into specific tissues in animal models.

The researchers are also working on additional tools for other types of nucleotide conversions. “There’s immense natural diversity in these enzymes,” says co-first author Jonathan Gootenberg, a graduate student in both Zhang’s lab and the lab of Broad core institute member Aviv Regev. “We’re always looking to harness the power of nature to carry out these changes.”

Zhang, along with the Broad Institute and MIT, plans to share the REPAIR system widely. As with earlier CRISPR tools, the groups will make this technology freely available for academic research via the Zhang lab’s page on the plasmid-sharing website Addgene, through which the Zhang lab has already shared reagents more than 42,000 times with researchers at more than 2,200 labs in 61 countries, accelerating research around the world.

This research was funded, in part, by the National Institutes of Health and the Poitras Center for Affective Disorders Research.

A sense of timing

The ability to measure time and to control the timing of actions is critical for almost every aspect of behavior. Yet the mechanisms by which our brains process time are still largely mysterious.

We experience time on many different scales—from milliseconds to years—but of particular interest is the middle range, the scale of seconds over which we perceive time directly, and over which many of our actions and thoughts unfold.

“We speak of a sense of time, yet unlike our other senses there is no sensory organ for time,” says McGovern Investigator Mehrdad Jazayeri. “It seems to come entirely from within. So if we understand time, we should be getting close to understanding mental processes.”

Singing in the brain

Emily Mackevicius comes to work in the early morning because that’s when her birds are most likely to sing. A graduate student in the lab of McGovern Investigator Michale Fee, she is studying zebra finches, songbirds that learn to sing by copying their fathers. Bird song involves a complex and precisely timed set of movements, and Mackevicius, who plays the cello in her spare time, likens it to musical performance. “With every phrase, you have to learn a sequence of finger movements and bowing movements, and put it all together with exact timing. The birds are doing something very similar with their vocal muscles.”

A typical zebra finch song lasts about one second, and consists of several syllables, produced at a rate similar to the syllables in human speech. Each song syllable involves a precisely timed sequence of muscle commands, and understanding how the bird’s brain generates this sequence is a central goal for Fee’s lab. Birds learn it naturally without any need for training, making it an ideal model for understanding the complex action sequences that represent the fundamental “building blocks” of behavior.

Some years ago Fee and colleagues made a surprising discovery that has shaped their thinking ever since. Within a part of the bird brain called HVC, they found neurons that fire a single short burst of pulses at exactly the same point on every repetition of the song. Each burst lasts about a hundredth of a second, and different neurons fire at different times within the song. With about 20,000 neurons in HVC, it was easy to imagine that there would be specific neurons active at every point in the song, meaning that each time point could be represented by the activity of a handful of individual neurons.

Proving this was not easy—“we had to wait about ten years for the technology to catch up,” says Fee—but they finally succeeded last year, when students Tatsuo Okubo and Galen Lynch analyzed recordings from hundreds of individual HVC neurons, and found that they do indeed fire in a fixed sequence, covering the entire song period.

“We think it’s like a row of falling dominoes,” says Fee. “The neurons are connected to each other so that when one fires it triggers the next one in the chain.” It’s an appealing model, because it’s easy to see how a chain of activity could control complex action sequences, simply by connecting individual time-stamp neurons to downstream motor neurons. With the correct connections, each movement is triggered at the right time in the sequence. Fee believes these motor connections are learned through trial and error—like babies babbling as they learn to speak—and a separate project in his lab aims to understand how this learning occurs.
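The dominoes picture corresponds to a simple feedforward chain: each time-stamp neuron drives the next, and motor commands hang off particular links. A minimal sketch, with all numbers and motor labels invented for illustration:

```python
N = 20       # chain of 20 time-stamp neurons
DT_MS = 10   # one chain step ~10 ms (illustrative)

# Chain connectivity: neuron i drives neuron i + 1.
next_neuron = {i: i + 1 for i in range(N - 1)}

# Motor wiring: a few chain neurons trigger downstream "muscle" commands.
motor_map = {3: "open beak", 9: "tense syrinx", 15: "exhale"}

def run_song(start=0):
    """Knock over the first domino and record when each neuron fires."""
    events, t, cur = [], 0, start
    while cur is not None:
        events.append((t * DT_MS, cur, motor_map.get(cur)))
        cur = next_neuron.get(cur)
        t += 1
    return events

song = run_song()
# Each neuron fires exactly once, at a fixed offset into the song.
```

Knocking over neuron 0 yields one burst per neuron at a fixed time, which mirrors the fixed firing sequence Okubo and Lynch recorded in HVC.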

But the domino metaphor also raises another question: who sets up the dominoes in the first place? Mackevicius and Okubo, along with summer student Hannah Payne, set out to answer this question, asking how HVC becomes wired to produce these precisely timed chain reactions.

Mackevicius, who studied math as an undergraduate before turning to neuroscience, developed computer simulations of the HVC neuronal network, and Okubo ran experiments to test the predictions, recording from young birds at different stages in the learning process. “We found that setting up a chain is surprisingly easy,” says Mackevicius. “If we start with a randomly connected network, and some realistic assumptions about the ‘plasticity rules’ by which synapses change with repeated use, we found that these chains emerge spontaneously. All you need is to give them a push—like knocking over the first domino.”
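A drastically simplified version of that simulation fits in a few lines: start from random weights, repeatedly “push” the first neuron, let a winner-take-all rule pick each successor, and apply Hebbian strengthening with normalization. This toy only echoes the flavor of the published model; every parameter and rule detail here is invented.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 30
w = rng.random((N, N))        # random initial connectivity
np.fill_diagonal(w, 0.0)

def playback(w, noise=0.0):
    """Drive neuron 0, then follow the strongest (optionally noisy) links.
    Neurons are refractory: each fires at most once per bout."""
    seq = [0]
    while len(seq) < N:
        drive = w[seq[-1]] + noise * rng.random(N)
        drive[seq] = -np.inf            # already-fired neurons stay silent
        seq.append(int(np.argmax(drive)))
    return seq

for _ in range(300):                    # training: push the chain repeatedly
    seq = playback(w, noise=0.3)
    for i, j in zip(seq, seq[1:]):
        w[i, j] += 0.2                  # Hebbian: strengthen the used link
        w[i] /= w[i].sum()              # normalization: outputs compete

chain = playback(w)                     # noise-free playback after learning
```

After training, noise-free playback follows one entrenched, repeatable sequence through the network, the toy counterpart of a self-organized chain.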

Their results also suggested how a young bird learns to produce different syllables, as it progresses from repetitive babbling to a more adult-like song. “At first, there’s just one big burst of neural activity, but as the song becomes more complex, the activity gradually spreads out in time and splits into different sequences, each controlling a different syllable. It’s as if you started with lots of dominos all clumped together, and then gradually they become sorted into different rows.”

Does something similar happen in the human brain? “It seems very likely,” says Fee. “Many of our movements are precisely timed—think about speaking a sentence or performing a musical instrument or delivering a tennis serve. Even our thoughts often happen in sequences. Things happen faster in birds than mammals, but we suspect the underlying mechanisms will be very similar.”

Speed control

One floor above the Fee lab, Mehrdad Jazayeri is also studying how time controls actions, using humans and monkeys rather than birds. Like Fee, Jazayeri comes from an engineering background, and his goal is to understand, with an engineer’s level of detail, how we perceive time and use it flexibly to control our actions.

To begin to answer this question, Jazayeri trained monkeys to remember time intervals of a few seconds or less, and to reproduce them by pressing a button or making an eye movement at the correct time after a visual cue appeared on a screen. He then recorded brain activity as the monkeys performed this task, to find out how the brain measures elapsed time. “There were two prominent ideas in the field,” he explains. “One idea was that there is an internal clock, and that the brain can somehow count the accumulating ticks. Another class of models had proposed that there are multiple oscillators that come in and out of phase at different times.”

When they examined the recordings, however, the results did not fit either model. Despite searching across multiple brain areas, Jazayeri and his colleagues found no sign of ticking or oscillations. Instead, their recordings revealed complex patterns of activity, distributed across populations of neurons; moreover, as the monkey produced longer or shorter intervals, these activity patterns were stretched or compressed in time, to fit the overall duration of each interval. In other words, says Jazayeri, the brain circuits were able to adjust the speed with which neural signals evolve over time. He compares it to a group of musicians performing a complex piece of music. “Each player has their own part, which they can play faster or slower depending on the overall tempo of the music.”
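The stretching and compression Jazayeri describes can be pictured with a toy trajectory (illustrative code with made-up parameters, not the study's data): each "neuron" fires a burst at a fixed phase of the interval, so the same pattern plays out faster or slower depending on the total duration, like musicians changing tempo.

```python
import numpy as np

# A fixed "neural trajectory": each of 5 toy neurons fires a Gaussian
# burst at its own phase of the interval (a stand-in for the population
# patterns described above).
phases = np.linspace(0, 1, 5)

def trajectory(duration_ms, dt=1.0):
    """Play the same pattern, stretched or compressed to fit the interval."""
    t = np.arange(0, duration_ms, dt) / duration_ms   # normalized time 0..1
    return np.exp(-((t[:, None] - phases[None, :]) ** 2) / (2 * 0.05 ** 2))

short = trajectory(500)    # 500 ms interval
slow = trajectory(1000)    # 1000 ms interval: same pattern at half speed

# Burst order and shape are identical; only the time axis is rescaled
print(short.shape, slow.shape)
```

The order of the bursts and the shape of the pattern are unchanged between the two intervals; only the speed at which the trajectory unfolds differs.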

Ready-set-go

Jazayeri is also using time as a window onto a broader question—how our perceptions and decisions are shaped by past experience. “It’s one of the great questions in neuroscience, but it’s not easy to study. One of the great advantages of studying timing is that it’s easy to measure precisely, so we can frame our questions in precise mathematical ways.”

The starting point for this work was a deceptively simple task, which Jazayeri calls “Ready-Set-Go.” In this task, the subject is given the first two beats of a regular rhythm (“Ready, Set”) and must then generate the third beat (“Go”) at the correct time. To perform this task, the brain must measure the duration between Ready and Set and then immediately reproduce it.

Humans can do this fairly accurately, but not perfectly—their response times are imprecise, presumably because there is some “noise” in the neural signals that convey timing information within the brain. In the face of this uncertainty, the optimal strategy (known mathematically as Bayesian inference) is to bias the time estimates based on prior expectations, and this is exactly what happened in Jazayeri’s experiments. If the intervals in previous trials were shorter, people tended to underestimate the next interval, whereas if the previous intervals were longer, they tended to overestimate it. In other words, people use their memory to improve their time estimates.
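The Bayesian logic here is easy to make concrete. In the sketch below (with made-up numbers, assuming Gaussian measurement noise and a Gaussian prior over intervals), the optimal estimate is a weighted average of the noisy measurement and the mean of previously experienced intervals, so the same 800 ms measurement is judged shorter after a history of short intervals and longer after a history of long ones.

```python
def bayes_estimate(measured, prior_mean, prior_sd=100.0, noise_sd=80.0):
    """Posterior mean for a Gaussian prior and Gaussian measurement noise."""
    w = prior_sd**2 / (prior_sd**2 + noise_sd**2)   # weight on the measurement
    return w * measured + (1 - w) * prior_mean

# Same 800 ms measurement, but different histories of past trials:
after_short = bayes_estimate(800, prior_mean=600)    # pulled below 800 ms
after_long = bayes_estimate(800, prior_mean=1000)    # pulled above 800 ms
print(after_short, after_long)
```

The noisier the measurement relative to the prior, the stronger the pull toward the prior mean, which is the bias pattern seen in the behavioral data.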

Monkeys can also learn this task and show similar biases, providing an opportunity to study how the brain establishes and stores these prior expectations, and how these expectations influence subsequent behavior. Again, Jazayeri and colleagues recorded from large numbers of neurons during the task. The resulting activity patterns are complex and not easily described in words, but in mathematical terms, the activity forms a geometric structure known as a manifold. “Think of it as a curved surface, analogous to a cylinder,” he says. “In the past, people could not see it because they could only record from one or a few neurons at a time. We have to measure activity across large numbers of neurons simultaneously if we want to understand the workings of the system.”

Computing time

To interpret their data, Jazayeri and his team often turn to computer models based on artificial neural networks. “These models are a powerful tool in our work because we can fully reverse-engineer them and gain insight into the underlying mechanisms,” he explains. His lab has now succeeded in training a recurrent neural network that can perform the Ready-Set-Go task, and they have found that the model develops a manifold similar to the real brain data. This has led to the intriguing conjecture that memory of past experiences can be embedded in the structure of the manifold.

Jazayeri concludes: “We haven’t connected all the dots, but I suspect that many questions about brain and behavior will find their answers in the geometry and dynamics of neural activity.” Jazayeri’s long-term ambition is to develop predictive models of brain function. As an analogy, he says, think of a pendulum. “If we know its current state—its position and speed—we can predict with complete confidence what it will do next, and how it will respond to a perturbation. We don’t have anything like that for the brain—nobody has been able to do that, not even the simplest brain functions. But that’s where we’d eventually like to be.”

A clock within the brain?

It is not yet clear how the mechanisms studied by Fee and Jazayeri are related. “We talk together often, but we are still guessing how the pieces fit together,” says Fee. But one thing they both agree on is the lack of evidence for any central clock within the brain. “Most people have this intuitive feeling that time is a unitary thing, and that there must be some central clock inside our head, coordinating everything like the conductor of the orchestra or the clock inside your computer,” says Jazayeri. “Even many experts in the field believe this, but we don’t think it’s right.” Rather, his work and Fee’s both point to the existence of separate circuits for different time-related behaviors, such as singing. If there is no clock, how do the different systems work together to create our apparently seamless perception of time? “It’s still a big mystery,” says Jazayeri. “Questions like that are what make neuroscience so interesting.”


Ten researchers from MIT and Broad receive NIH Director’s Awards

The High-Risk, High-Reward Research (HRHR) program, supported by the National Institutes of Health (NIH) Common Fund, has awarded 86 grants to scientists with unconventional approaches to major challenges in biomedical and behavioral research. Ten of the awardees are affiliated with MIT and the Broad Institute of MIT and Harvard.

The NIH typically supports research projects, not individual scientists, but the HRHR program identifies specific researchers with innovative ideas to address gaps in biomedical research. The program issues four types of awards annually — the Pioneer Award, the New Innovator Award, the Transformative Research Award and the Early Independence Award — to “high-caliber investigators whose ideas stretch the boundaries of our scientific knowledge.”

Four researchers who are affiliated with either MIT or the Broad Institute received this year’s New Innovator Awards, which support “unusually innovative research” from early career investigators. They are:

  • Paul Blainey, an MIT assistant professor of biological engineering and a core member of the Broad Institute, is an expert in microanalysis systems for studies of individual molecules and cells. The award will fund the establishment of a new technology that enables advanced readout from living cells.
  • Kevin Esvelt, an associate professor of media arts and sciences at MIT’s Media Lab, invents new ways to study and influence the evolution of ecosystems. Esvelt plans to use the NIH grant to develop powerful “daisy drive” systems for more precise genetic alterations of wild organisms. Such an intervention has the potential to serve as a powerful weapon against malaria, Zika, Lyme disease, and many other infectious diseases.
  • Evan Macosko is an associate member of the Broad Institute who develops molecular techniques to more deeply understand the function of cellular specialization in the nervous system. Macosko’s award will fund a novel technology, Slide-seq, which enables genome-wide expression analysis of brain tissue sections at single-cell resolution.
  • Gabriela Schlau-Cohen, an MIT assistant professor of chemistry, combines tools from chemistry, optics, biology, and microscopy to develop new approaches to study the dynamics of biological systems. Her award will be used to fund the development of a new nanometer-distance assay that directly accesses protein motion with unprecedented spatiotemporal resolution under physiological conditions.

Recipients of the Early Independence Award include three Broad Institute Fellows. The award provides “exceptional junior scientists” with an opportunity to skip traditional postdoctoral training and move immediately into independent research positions.

  • Ahmed Badran is a Broad Institute Fellow who studies the function of ribosomes and the control of protein synthesis. Ribosomes are important targets for antibiotics, and the NIH award will support the development of a new technology platform for probing ribosome function within living cells.
  • Fei Chen, a Broad Institute Fellow who is also a research affiliate at MIT’s McGovern Institute for Brain Research, has pioneered novel molecular and microscopy tools to illuminate biological pathways and function. He will use one of these tools, expansion microscopy, to explore the molecular basis of glioblastomas, an aggressive form of brain cancer.
  • Hilary Finucane, a Broad Institute Fellow who recently received her PhD from MIT’s Department of Mathematics, develops computational methods for analyzing biological data. She plans to develop methods to analyze large-scale genomic data to identify disease-relevant cell types and tissues, a necessary first step for understanding molecular mechanisms of disease.

Among the recipients of the NIH’s Pioneer Awards are Kay Tye, an assistant professor of brain and cognitive sciences at MIT and a member of MIT’s Picower Institute for Learning and Memory, and Feng Zhang, the James and Patricia Poitras ’63 Professor in Neuroscience, an associate professor of brain and cognitive sciences and biological engineering at MIT, a core member of the Broad Institute, and an investigator at MIT’s McGovern Institute for Brain Research. Recipients of this award are challenged to pursue “groundbreaking, high-impact approaches to a broad area of biomedical or behavioral science.” Tye, who studies the brain mechanisms underlying emotion and behavior, will use her award to look at the neural representation of social homeostasis and social rank. Zhang, who pioneered the gene-editing technology known as CRISPR, plans to develop a suite of tools designed to achieve precise genome surgery for repairing disease-causing changes in DNA.

Ed Boyden, an associate professor of brain and cognitive sciences and biological engineering at MIT, and a member of MIT’s Media Lab and McGovern Institute for Brain Research, is a recipient of the Transformative Research Award. This award promotes “cross-cutting, interdisciplinary approaches that could potentially create or challenge existing paradigms.” Boyden, who develops new strategies for understanding and engineering brain circuits, will use the grant to develop high-speed 3-D imaging of neural activity.

This year, the NIH issued a total of 12 Pioneer Awards, 55 New Innovator Awards, 8 Transformative Research Awards, and 11 Early Independence Awards. The awards total $263 million and represent contributions from the NIH Common Fund; National Institute of General Medical Sciences; National Institute of Mental Health; National Center for Complementary and Integrative Health; and National Institute of Dental and Craniofacial Research.

“I continually point to this program as an example of the creative and revolutionary research NIH supports,” said NIH Director Francis S. Collins. “The quality of the investigators and the impact their research has on the biomedical field is extraordinary.”

Gene-editing technology developer Feng Zhang awarded $500,000 Lemelson-MIT Prize

Feng Zhang, a pioneer of the revolutionary CRISPR gene-editing technology, TAL effector proteins, and optogenetics, is the recipient of the 2017 $500,000 Lemelson-MIT Prize, the largest cash prize for invention in the United States. Zhang is a core member of the Broad Institute of MIT and Harvard, an investigator at the McGovern Institute for Brain Research, the James and Patricia Poitras Professor in Neuroscience at MIT, and associate professor in the departments of Brain and Cognitive Sciences and Biological Engineering at MIT.

Zhang and his team were the first to develop and demonstrate successful methods for using an engineered CRISPR-Cas9 system to edit genomes in living mouse and human cells, and they have turned CRISPR technology into a practical and shareable collection of tools for robust gene editing and epigenomic manipulation. CRISPR, short for Clustered Regularly Interspaced Short Palindromic Repeats, has been harnessed by Zhang and his team as a groundbreaking gene-editing tool that is simple and versatile to use. A key tenet of Zhang’s approach is to encourage further development and research through open sharing of tools and scientific collaboration. Zhang believes that wide use of CRISPR-based tools will further our understanding of biology, allowing scientists to identify genetic differences that contribute to diseases and, eventually, provide the basis for new therapeutic techniques.

Zhang’s lab has trained thousands of researchers to use CRISPR technology, and since 2013 he has shared over 40,000 plasmid samples with labs around the world both directly and through the nonprofit Addgene, enabling wide use of his CRISPR tools in their research.

Zhang began working in a gene therapy laboratory at the age of 16 and has played key roles in the development of multiple technologies. Prior to harnessing CRISPR-Cas9, Zhang engineered microbial TAL effectors (TALEs) for use in mammalian cells, working with colleagues at Harvard University, authoring multiple publications on the subject and becoming a co-inventor on several patents on TALE-based technologies. Zhang was also a key member of the team at Stanford University that harnessed microbial opsins for developing optogenetics, which uses light signals and light-sensitive proteins to monitor and control activity in brain cells. This technology can help scientists understand how cells in the brain affect mental and neurological illnesses. Zhang has co-authored multiple publications on optogenetics and is a co-inventor on several patents related to this technology.

Zhang’s numerous scientific discoveries and inventions, as well as his commitment to mentorship and collaboration, earned him the Lemelson-MIT Prize, which honors outstanding mid-career inventors who improve the world through technological invention and demonstrate a commitment to mentorship in science, technology, engineering and mathematics (STEM).

“Feng’s creativity and dedication to problem-solving impressed us,” says Stephanie Couch, executive director of the Lemelson-MIT Program. “Beyond the breadth of his own accomplishments, Feng and his lab have also helped thousands of scientists across the world access the new technology to advance their own scientific discoveries.”

“It is a tremendous honor to receive the Lemelson-MIT Prize and to join the company of so many incredibly impactful inventors who have won this prize in years past,” says Zhang. “Invention has always been a part of my life; I think about new problems every day and work to solve them creatively. This prize is a testament to the passionate work of my team and the support of my family, teachers, colleagues and counterparts around the world.”

The $500,000 prize, which bears no restrictions in how it can be used, is made possible through the support of The Lemelson Foundation, the world’s leading funder of invention in service of social and economic change.

“We are thrilled to honor Dr. Zhang, who we commend for his advancements in genetics, and more importantly, his willingness to share his discoveries to advance the work of others around the world,” says Dorothy Lemelson, chair of The Lemelson Foundation. “Zhang’s work is inspiring a new generation of inventors to tackle the biggest problems of our time.”

Zhang will speak at EmTech MIT, the annual conference on emerging technologies hosted by MIT Technology Review at the MIT Media Lab on Tuesday, Nov. 7.

The Lemelson-MIT Program is now seeking nominations for the 2018 $500,000 Lemelson-MIT Prize. Please contact the Lemelson-MIT Program at awards-lemelson@mit.edu for more information or visit the Lemelson-MIT Prize website.

Studies help explain link between autism, severe infection during pregnancy

Mothers who experience an infection severe enough to require hospitalization during pregnancy are at higher risk of having a child with autism. Two new studies from MIT and the University of Massachusetts Medical School shed more light on this phenomenon and identify possible approaches to preventing it.

In research on mice, the researchers found that the composition of bacterial populations in the mother’s digestive tract can influence whether maternal infection leads to autistic-like behaviors in offspring. They also discovered the specific brain changes that produce these behaviors.

“We identified a very discrete brain region that seems to be modulating all the behaviors associated with this particular model of neurodevelopmental disorder,” says Gloria Choi, the Samuel A. Goldblith Career Development Assistant Professor of Brain and Cognitive Sciences and a member of MIT’s McGovern Institute for Brain Research.

If further validated in human studies, the findings could offer a possible way to reduce the risk of autism, which would involve blocking the function of certain strains of bacteria found in the maternal gut, the researchers say.

Choi and Jun Huh, formerly an assistant professor at UMass Medical School who is now a faculty member at Harvard Medical School, are the senior authors of both papers, which appear in Nature on Sept. 13. MIT postdoc Yeong Shin Yim is the first author of one paper, and UMass Medical School visiting scholars Sangdoo Kim and Hyunju Kim are the lead authors of the other.

Reversing symptoms

A 2010 study that included all children born in Denmark between 1980 and 2005 found that severe viral infections during the first trimester of pregnancy were linked to a threefold increase in autism risk, and serious bacterial infections during the second trimester were linked with a 1.42-fold increase in risk. These infections included influenza, viral gastroenteritis, and severe urinary tract infections.

Similar effects have been described in mouse models of maternal inflammation, and in a 2016 Science paper, Choi and Huh found that a type of immune cells known as Th17 cells, and their effector molecule, called IL-17, are responsible for this effect in mice. IL-17 then interacts with receptors found on brain cells in the developing fetus, leading to irregularities that the researchers call “patches” in certain parts of the cortex.

In one of the new papers, the researchers set out to learn more about these patches and to determine if they were responsible for the behavioral abnormalities seen in those mice, which include repetitive behavior and impaired sociability.

The researchers found that the patches are most common in a part of the brain known as S1DZ. Part of the somatosensory cortex, this region is believed to be responsible for proprioception, or sensing where the body is in space. In these patches, populations of cells called interneurons, which express a protein called parvalbumin, are reduced. Interneurons are responsible for controlling the balance of excitation and inhibition in the brain, and the changes the researchers observed in the cortical patches were associated with overexcitation in S1DZ.

When the researchers restored normal levels of brain activity in this area, they were able to reverse the behavioral abnormalities. They were also able to induce the behaviors in otherwise normal mice by overstimulating neurons in S1DZ.

The researchers also discovered that S1DZ sends messages to two other brain regions: the temporal association area of the cortex and the striatum. When the researchers inhibited the neurons connected to the temporal association area, they were able to reverse the sociability deficits. When they inhibited the neurons connected to the striatum, they were able to halt the repetitive behaviors.

Microbial factors

In the second Nature paper, the researchers delved into some of the additional factors that influence whether or not a severe infection leads to autism. Not all mothers who experience severe infection end up having a child with autism, and similarly, not all of the mice in the maternal inflammation model develop behavioral abnormalities.

“This suggests that inflammation during pregnancy is just one of the factors. It needs to work with additional factors to lead all the way to that outcome,” Choi says.

A key clue was that when immune systems in some of the pregnant mice were stimulated, they began producing IL-17 within a day. “Normally it takes three to five days, because IL-17 is produced by specialized immune cells and they require time to differentiate,” Huh says. “We thought that perhaps this cytokine is being produced not from differentiating immune cells, but rather from pre-existing immune cells.”

Previous studies in mice and humans have found populations of Th17 cells in the intestines of healthy individuals. These cells, which help to protect the host from harmful microbes, are thought to be produced after exposure to particular types of harmless bacteria that associate with the epithelium.

The researchers found that only the offspring of mice with one specific type of harmless bacteria, known as segmented filamentous bacteria, had behavioral abnormalities and cortical patches. When the researchers killed those bacteria with antibiotics, the mice produced normal offspring.

“This data strongly suggests that perhaps certain mothers who happen to carry these types of Th17 cell-inducing bacteria in their gut may be susceptible to this inflammation-induced condition,” Huh says.

Humans can also carry strains of gut bacteria known to drive production of Th17 cells, and the researchers plan to investigate whether the presence of these bacteria is associated with autism.

Sarah Gaffen, a professor of rheumatology and clinical immunology at the University of Pittsburgh, says the study clearly demonstrates the link between IL-17 and the neurological effects seen in the mouse offspring. “It’s rare for things to fit into such a clear model, where you can identify a single molecule that does what you predicted,” says Gaffen, who was not involved in the study.

The research was funded by the Simons Foundation Autism Research Initiative, the Simons Center for the Social Brain at MIT, the Howard Hughes Medical Institute, Robert Buxton, the National Research Foundation of Korea, the Searle Scholars Program, a Pew Scholarship for Biomedical Sciences, the Kenneth Rainin Foundation, the National Institutes of Health, and the Hock E. Tan and K. Lisa Yang Center for Autism Research.

Robotic system monitors specific neurons

Recording electrical signals from inside a neuron in the living brain can reveal a great deal of information about that neuron’s function and how it coordinates with other cells in the brain. However, performing this kind of recording is extremely difficult, so only a handful of neuroscience labs around the world do it.

To make this technique more widely available, MIT engineers have now devised a way to automate the process, using a computer algorithm that analyzes microscope images and guides a robotic arm to the target cell.

This technology could allow more scientists to study single neurons and learn how they interact with other cells to enable cognition, sensory perception, and other brain functions. Researchers could also use it to learn more about how neural circuits are affected by brain disorders.

“Knowing how neurons communicate is fundamental to basic and clinical neuroscience. Our hope is this technology will allow you to look at what’s happening inside a cell, in terms of neural computation, or in a disease state,” says Ed Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT, and a member of MIT’s Media Lab and McGovern Institute for Brain Research.

Boyden is the senior author of the paper, which appears in the Aug. 30 issue of Neuron. The paper’s lead author is MIT graduate student Ho-Jun Suk.

Precision guidance

For more than 30 years, neuroscientists have been using a technique known as patch clamping to record the electrical activity of cells. This method, which involves bringing a tiny, hollow glass pipette in contact with the cell membrane of a neuron, then opening up a small pore in the membrane, usually takes a graduate student or postdoc several months to learn. Learning to perform this on neurons in the living mammalian brain is even more difficult.

There are two types of patch clamping: a “blind” (not image-guided) method, which is limited because researchers cannot see where the cells are and can only record from whatever cell the pipette encounters first, and an image-guided version that allows a specific cell to be targeted.

Five years ago, Boyden and colleagues at MIT and Georgia Tech, including co-author Craig Forest, devised a way to automate the blind version of patch clamping. They created a computer algorithm that could guide the pipette to a cell based on measurements of a property called electrical impedance — which reflects how difficult it is for electricity to flow out of the pipette. If there are no cells around, electricity flows and impedance is low. When the tip hits a cell, electricity can’t flow as well and impedance goes up.

Once the pipette detects a cell, it can stop moving instantly, preventing it from poking through the membrane. A vacuum pump then applies suction to form a seal with the cell’s membrane. Then, the electrode can break through the membrane to record the cell’s internal electrical activity.
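That detect-and-stop logic can be sketched schematically. The code below uses hypothetical thresholds and helper functions, not the published implementation: the pipette advances step by step until the impedance jumps above baseline, which signals contact with a cell and halts the descent immediately.

```python
# Schematic sketch of the impedance-based "blind" autopatcher loop
# (illustrative values throughout; the real system adds seal formation
# and break-in stages after detection).
def autopatch(measure_impedance, step_down, baseline, threshold=1.3):
    """Lower the pipette until impedance jumps, signaling cell contact."""
    for depth in range(1000):                  # hard limit on travel
        z = measure_impedance()
        if z > threshold * baseline:           # impedance jump => on a cell
            return depth                       # stop instantly, report depth
        step_down()                            # otherwise advance one step
    return None                                # no cell found

# Toy demo: impedance is flat until the pipette tip reaches a cell at
# step 42 (made-up numbers, in megohms)
position = {"d": 0}
def fake_impedance():
    return 5.0 if position["d"] < 42 else 8.0
def fake_step():
    position["d"] += 1

print(autopatch(fake_impedance, fake_step, baseline=5.0))  # -> 42
```

Stopping at the first impedance jump is what prevents the pipette from poking through the membrane before suction is applied.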

The researchers achieved very high accuracy using this technique, but it still could not be used to target a specific cell. For most studies, neuroscientists have a particular cell type they would like to learn about, Boyden says.

“It might be a cell that is compromised in autism, or is altered in schizophrenia, or a cell that is active when a memory is stored. That’s the cell that you want to know about,” he says. “You don’t want to patch a thousand cells until you find the one that is interesting.”

To enable this kind of precise targeting, the researchers set out to automate image-guided patch clamping. This technique is difficult to perform manually because, although the scientist can see the target neuron and the pipette through a microscope, he or she must compensate for the fact that nearby cells will move as the pipette enters the brain.

“It’s almost like trying to hit a moving target inside the brain, which is a delicate tissue,” Suk says. “For machines it’s easier because they can keep track of where the cell is, they can automatically move the focus of the microscope, and they can automatically move the pipette.”

By combining several image-processing techniques, the researchers came up with an algorithm that guides the pipette to within about 25 microns of the target cell. At that point, the system begins to rely on a combination of imagery and impedance, which is more accurate at detecting contact between the pipette and the target cell than either signal alone.

The researchers imaged the cells with two-photon microscopy, a commonly used technique that uses a pulsed laser to send infrared light into the brain, lighting up cells that have been engineered to express a fluorescent protein.

Using this automated approach, the researchers were able to successfully target and record from two types of cells — a class of interneurons, which relay messages between other neurons, and a set of excitatory neurons known as pyramidal cells. They achieved a success rate of about 20 percent, which is comparable to the performance of highly trained scientists performing the process manually.

Unraveling circuits

This technology paves the way for in-depth studies of the behavior of specific neurons, which could shed light on both their normal functions and how they go awry in diseases such as Alzheimer’s or schizophrenia. For example, the interneurons that the researchers studied in this paper have been previously linked with Alzheimer’s. A recent study of mice, led by Li-Huei Tsai, director of MIT’s Picower Institute for Learning and Memory, and conducted in collaboration with Boyden, reported that inducing a specific frequency of brain wave oscillation in interneurons in the hippocampus could help to clear amyloid plaques similar to those found in Alzheimer’s patients.

“You really would love to know what’s happening in those cells,” Boyden says. “Are they signaling to specific downstream cells, which then contribute to the therapeutic result? The brain is a circuit, and to understand how a circuit works, you have to be able to monitor the components of the circuit while they are in action.”

This technique could also enable studies of fundamental questions in neuroscience, such as how individual neurons interact with each other as the brain makes a decision or recalls a memory.

Bernardo Sabatini, a professor of neurobiology at Harvard Medical School, says he is interested in adapting this technique to use in his lab, where students spend a great deal of time recording electrical activity from neurons growing in a lab dish.

“It’s silly to have amazingly intelligent students doing tedious tasks that could be done by robots,” says Sabatini, who was not involved in this study. “I would be happy to have robots do more of the experimentation so we can focus on the design and interpretation of the experiments.”

To help other labs adopt the new technology, the researchers plan to put the details of their approach on their web site, autopatcher.org.

Other co-authors include Ingrid van Welie, Suhasa Kodandaramaiah, and Brian Allen. The research was funded by Jeremy and Joyce Wertheimer, the National Institutes of Health (including the NIH Single Cell Initiative and the NIH Director’s Pioneer Award), the HHMI-Simons Faculty Scholars Program, and the New York Stem Cell Foundation-Robertson Award.