Polina Anikeeva and Feng Zhang awarded 2018 Vilcek Prize

Polina Anikeeva, the Class of 1942 Associate Professor in the Department of Materials Science and Engineering and associate director of the Research Laboratory of Electronics, and Feng Zhang, the James and Patricia Poitras ’63 Professor in Neuroscience at the McGovern Institute, have each been awarded a 2018 Vilcek Prize for Creative Promise in Biomedical Science. Awarded annually by the Vilcek Foundation, the $50,000 prizes recognize younger immigrants who have demonstrated exceptional promise early in their careers.

“The Vilcek Prizes were established in appreciation of the immigrants who chose to dedicate their vision and talent to bettering American society,” says Rick Kinsel, president of the Vilcek Foundation. “This year’s prizewinners honor and continue that legacy with works of astounding, revolutionary importance.”

Polina Anikeeva, who was born in the former Soviet Union, earned her PhD in materials science and engineering at MIT in 2009 and now runs her own bioelectronics lab in the same department, focused on the development of materials and devices that enable recording and manipulation of signaling processes within the nervous system. The Vilcek Foundation recognizes Anikeeva for “fashioning ingenious solutions to long-standing challenges in biomedical engineering,” including the design of therapeutic devices for conditions such as Parkinson’s disease and spinal cord injury.

Feng Zhang, who is also a core member of the Broad Institute and an associate professor in the departments of Brain and Cognitive Sciences and Biological Engineering, is being recognized for his role in advancing optogenetics (a method for controlling brain activity with light) and developing molecular tools to edit the genome. Thanks to his leadership in inventing precise and efficient gene-editing technologies using CRISPR, Zhang’s work has resulted in a “growing array of applications, such as uncovering the genetic underpinnings of diseases, ushering in gene therapies to cure heritable diseases, and improving agriculture.” Zhang’s family immigrated to the United States from China when he was 11 years of age.

Anikeeva and Zhang will be among eight Vilcek prizewinners honored at an awards gala in New York City in April 2018.

The Vilcek Foundation was established in 2000 by Jan and Marica Vilcek, immigrants from the former Czechoslovakia. The mission of the foundation, to honor the contributions of immigrants to the United States and to foster appreciation of the arts and sciences, was inspired by the couple’s respective careers in biomedical science and art history, as well as their personal experiences and appreciation of the opportunities they received as newcomers to this country.

Institute launches the MIT Intelligence Quest

MIT today announced the launch of the MIT Intelligence Quest, an initiative to discover the foundations of human intelligence and drive the development of technological tools that can positively influence virtually every aspect of society.

The announcement was first made in a letter MIT President L. Rafael Reif sent to the Institute community.

At a time of rapid advances in intelligence research across many disciplines, the Intelligence Quest will encourage researchers to investigate the societal implications of their work as they pursue hard problems lying beyond the current horizon of what is known.

Some of these advances may be foundational in nature, involving new insight into human intelligence, and new methods to allow machines to learn effectively. Others may be practical tools for use in a wide array of research endeavors, such as disease diagnosis, drug discovery, materials and manufacturing design, automated systems, synthetic biology, and finance.

“Today we set out to answer two big questions,” says President Reif. “How does human intelligence work, in engineering terms? And how can we use that deep grasp of human intelligence to build wiser and more useful machines, to the benefit of society?”

MIT Intelligence Quest: The Core and The Bridge

MIT is poised to lead this work through two linked entities within MIT Intelligence Quest. One of them, “The Core,” will advance the science and engineering of both human and machine intelligence. A key output of this work will be machine-learning algorithms. At the same time, MIT Intelligence Quest seeks to advance our understanding of human intelligence by using insights from computer science.

The second entity, “The Bridge,” will be dedicated to the application of MIT discoveries in natural and artificial intelligence to all disciplines, and it will host state-of-the-art tools from industry and research labs worldwide.

The Bridge will provide a variety of assets to the MIT community, including intelligence technologies, platforms, and infrastructure; education for students, faculty, and staff about AI tools; rich and unique data sets; technical support; and specialized hardware.

Along with developing and advancing the technologies of intelligence, MIT Intelligence Quest researchers will also investigate the societal and ethical implications of advanced analytical and predictive tools. There are already active projects and groups at the Institute investigating autonomous systems, media and information quality, labor markets and the work of the future, innovation and the digital economy, and the role of AI in the legal system.

In all its activities, MIT Intelligence Quest is intended to take advantage of — and strengthen — the Institute’s culture of collaboration. MIT Intelligence Quest will connect and amplify existing excellence across labs and centers already engaged in intelligence research. It will also establish shared, central spaces conducive to group work, and its resources will directly support research.

“Our quest is meant to power world-changing possibilities,” says Anantha Chandrakasan, dean of the MIT School of Engineering and Vannevar Bush Professor of Electrical Engineering and Computer Science. Chandrakasan, in collaboration with Provost Martin Schmidt and all four of MIT’s other school deans, has led the development and establishment of MIT Intelligence Quest.

“We imagine preventing deaths from cancer by using deep learning for early detection and personalized treatment,” Chandrakasan continues. “We imagine artificial intelligence in sync with, complementing, and assisting our own intelligence. And we imagine every scientist and engineer having access to human-intelligence-inspired algorithms that open new avenues of discovery in their fields. Researchers across our campus want to push the boundaries of what’s possible.”

Engaging energetically with partners

In order to power MIT Intelligence Quest and achieve results that are consistent with its ambitions, the Institute will raise financial support through corporate sponsorship and philanthropic giving.

MIT Intelligence Quest will build on the model that was established with the MIT–IBM Watson AI Lab, which was announced in September 2017. MIT researchers will collaborate with each other and with industry on challenges that range in scale from the very broad to the very specific.

“In the short time since we began our collaboration with IBM, the lab has garnered tremendous interest inside and outside MIT, and it will be a vital part of MIT Intelligence Quest,” says President Reif.

John E. Kelly III, IBM senior vice president for cognitive solutions and research, says, “To take on the world’s greatest challenges and seize its biggest opportunities, we need to rapidly advance both AI technology and our understanding of human intelligence. Building on decades of collaboration — including our extensive joint MIT–IBM Watson AI Lab — IBM and MIT will together shape a new agenda for intelligence research and its applications. We are proud to be a cornerstone of this expanded initiative.”

MIT will seek to establish additional entities within MIT Intelligence Quest, in partnership with corporate and philanthropic organizations.

Why MIT

MIT has been on the frontier of intelligence research since the 1950s, when pioneers Marvin Minsky and John McCarthy helped establish the field of artificial intelligence.

MIT now has over 200 principal investigators whose research bears directly on intelligence. Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the MIT Department of Brain and Cognitive Sciences (BCS) — along with the McGovern Institute for Brain Research and the Picower Institute for Learning and Memory — collaborate on a range of projects. MIT is also home to the National Science Foundation–funded Center for Brains, Minds and Machines (CBMM) — the only national center of its kind.

Four years ago, MIT launched the Institute for Data, Systems, and Society (IDSS) with a mission of promoting data science, particularly in the context of social systems. It is anticipated that faculty and students from IDSS will play a critical role in this initiative.

Faculty from across the Institute will participate in the initiative, including researchers in the Media Lab, the Operations Research Center, the Sloan School of Management, the School of Architecture and Planning, and the School of Humanities, Arts, and Social Sciences.

“Our quest will amount to a journey taken together by all five schools at MIT,” says Provost Schmidt. “Success will rest on a shared sense of purpose and a mix of contributions from a wide variety of disciplines. I’m excited by the new thinking we can help unlock.”

At the heart of MIT Intelligence Quest will be collaboration among researchers in human and artificial intelligence.

“To revolutionize the field of artificial intelligence, we should continue to look to the roots of intelligence: the brain,” says James DiCarlo, department head and Peter de Florez Professor of Neuroscience in the Department of Brain and Cognitive Sciences. “By working with engineers and artificial intelligence researchers, human intelligence researchers can build models of the brain systems that produce intelligent behavior. The time is now, as model building at the scale of those brain systems is now possible. Discovering how the brain works in the language of engineers will not only lead to transformative AI — it will also illuminate entirely new ways to repair, educate, and augment our own minds.”

Daniela Rus, the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT, and director of CSAIL, agrees. MIT researchers, she says, “have contributed pioneering and visionary solutions for intelligence since the beginning of the field, and are excited to make big leaps to understand human intelligence and to engineer significantly more capable intelligent machines. Understanding intelligence will give us the knowledge to understand ourselves and to create machines that will support us with cognitive and physical work.”

David Siegel, who earned a PhD in computer science at MIT in 1991 while pursuing research at MIT’s Artificial Intelligence Laboratory, and who is a member of the MIT Corporation and an advisor to the MIT Center for Brains, Minds, and Machines, has been integral to the vision and formation of MIT Intelligence Quest and will continue to help shape the effort. “Understanding human intelligence is one of the greatest scientific challenges,” he says, “one that helps us understand who we are while meaningfully advancing the field of artificial intelligence.” Siegel is co-chairman and a founder of Two Sigma Investments, LP.

The fruits of research

MIT Intelligence Quest will thus provide a platform for long-term research, encouraging the foundational advances of the future. At the same time, MIT professors and researchers may develop technologies with near-term value, leading to new kinds of collaborations with existing companies — and to new companies.

Some such entrepreneurial efforts could be supported by The Engine, an Institute initiative launched in October 2016 to support startup companies pursuing particularly ambitious goals.

Other innovations stemming from MIT Intelligence Quest could be absorbed into the innovation ecosystem surrounding the Institute — in Kendall Square, Cambridge, and the Boston metropolitan area. MIT is located in close proximity to a world-leading nexus of biotechnology and medical-device research and development, as well as a cluster of leading-edge technology firms that study and deploy machine intelligence.

MIT also has roots in centers of innovation elsewhere in the United States and around the world, through faculty research projects, institutional and industry collaborations, and the activities and leadership of its alumni. MIT Intelligence Quest will seek to connect to innovative companies and individuals who share MIT’s passion for work in intelligence.

Eric Schmidt, former executive chairman of Alphabet, has helped MIT form the vision for MIT Intelligence Quest. “Imagine the good that can be done by putting novel machine-learning tools in the hands of those who can make great use of them,” he says. “MIT Intelligence Quest can become a fount of exciting new capabilities.”

“I am thrilled by today’s news,” says President Reif. “Drawing on MIT’s deep strengths and signature values, culture, and history, MIT Intelligence Quest promises to make important contributions to understanding the nature of intelligence, and to harnessing it to make a better world.”

“MIT is placing a bet,” he says, “on the central importance of intelligence research to meeting the needs of humanity.”

Ultrathin needle can deliver drugs directly to the brain

MIT researchers have devised a miniaturized system that can deliver tiny quantities of medicine to brain regions as small as 1 cubic millimeter. This type of targeted dosing could make it possible to treat diseases that affect very specific brain circuits, without interfering with the normal function of the rest of the brain, the researchers say.

Using this device, which consists of several tubes contained within a needle about as thin as a human hair, the researchers can deliver one or more drugs deep within the brain, with very precise control over how much drug is given and where it goes. In a study of rats, they found that they could deliver targeted doses of a drug that affects the animals’ motor function.

“We can infuse very small amounts of multiple drugs compared to what we can do intravenously or orally, and also manipulate behavioral changes through drug infusion,” says Canan Dagdeviren, the LG Electronics Career Development Assistant Professor of Media Arts and Sciences and the lead author of the paper, which appears in the Jan. 24 issue of Science Translational Medicine.

“We believe this tiny microfabricated device could have tremendous impact in understanding brain diseases, as well as providing new ways of delivering biopharmaceuticals and performing biosensing in the brain,” says Robert Langer, the David H. Koch Institute Professor at MIT and one of the paper’s senior authors.

Michael Cima, the David H. Koch Professor of Engineering in the Department of Materials Science and Engineering and a member of MIT’s Koch Institute for Integrative Cancer Research, is also a senior author of the paper.

Targeted action

Drugs used to treat brain disorders often interact with brain chemicals called neurotransmitters or the cell receptors that interact with neurotransmitters. Examples include L-dopa, a dopamine precursor used to treat Parkinson’s disease, and Prozac, used to boost serotonin levels in patients with depression. However, these drugs can have side effects because they act throughout the brain.

“One of the problems with central nervous system drugs is that they’re not specific, and if you’re taking them orally they go everywhere. The only way we can limit the exposure is to just deliver to a cubic millimeter of the brain, and in order to do that, you have to have extremely small cannulas,” Cima says.

The MIT team set out to develop a miniaturized cannula (a thin tube used to deliver medicine) that could target very small areas. Using microfabrication techniques, the researchers constructed tubes with diameters of about 30 micrometers and lengths up to 10 centimeters. These tubes are contained within a stainless steel needle with a diameter of about 150 micrometers. “The device is very stable and robust, and you can place it anywhere that you are interested,” Dagdeviren says.

The researchers connected the cannulas to small pumps that can be implanted under the skin. Using these pumps, the researchers showed that they could deliver tiny doses (hundreds of nanoliters) into the brains of rats. In one experiment, they delivered a drug called muscimol to a brain region called the substantia nigra, which is located deep within the brain and helps to control movement.

Previous studies have shown that muscimol induces symptoms similar to those seen in Parkinson’s disease. The researchers were able to generate those effects, which include stimulating the rats to continually turn in a clockwise direction, using their miniaturized delivery needle. They also showed that they could halt the Parkinsonian behavior by delivering a dose of saline through a different channel, to wash the drug away.

“Since the device can be customizable, in the future we can have different channels for different chemicals, or for light, to target tumors or neurological disorders such as Parkinson’s disease or Alzheimer’s,” Dagdeviren says.

This device could also make it easier to deliver potential new treatments for behavioral neurological disorders such as addiction or obsessive compulsive disorder, which may be caused by specific disruptions in how different parts of the brain communicate with each other.

“Even if scientists and clinicians can identify a therapeutic molecule to treat neural disorders, there remains the formidable problem of how to deliver the therapy to the right cells — those most affected in the disorder. Because the brain is so structurally complex, new accurate ways to deliver drugs or related therapeutic agents locally are urgently needed,” says Ann Graybiel, an MIT Institute Professor and a member of MIT’s McGovern Institute for Brain Research, who is also an author of the paper.

Measuring drug response

The researchers also showed that they could incorporate an electrode into the tip of the cannula, which can be used to monitor how neurons’ electrical activity changes after drug treatment. They are now working on adapting the device so it can also be used to measure chemical or mechanical changes that occur in the brain following drug treatment.

The cannulas can be fabricated in nearly any length or thickness, making it possible to adapt them for use in brains of different sizes, including the human brain, the researchers say.

“This study provides proof-of-concept experiments, in large animal models, that a small, miniaturized device can be safely implanted in the brain and provide miniaturized control of the electrical activity and function of single neurons or small groups of neurons. The impact of this could be significant in focal diseases of the brain, such as Parkinson’s disease,” says Antonio Chiocca, neurosurgeon-in-chief and chairman of the Department of Neurosurgery at Brigham and Women’s Hospital, who was not involved in the research.

The research was funded by the National Institutes of Health and the National Institute of Biomedical Imaging and Bioengineering.

The Beautiful Brain: The Drawings of Santiago Ramón y Cajal

Opening May 3, 2018

Santiago Ramón y Cajal made transformative discoveries about the anatomy of the brain and nervous system, work that led to his receiving a Nobel Prize in 1906. This founder of modern neuroscience was also an exceptional artist. His drawings of the brain were not only beautiful, but also astounding in their capacity to convey the details of brain structure and function.

The Beautiful Brain: The Drawings of Santiago Ramón y Cajal at the MIT Museum is part of a traveling exhibit that will include approximately 80 of Cajal’s drawings, many of which have rarely been seen before in the U.S.

These historical works will be complemented by a contemporary exhibition of neuroscience visualizations that are leading to new insights, aided by technologies, many pioneered here at MIT’s McGovern Institute, that allow increasingly detailed and precise understanding.

The exhibit is scheduled to open on May 3, 2018.


The Beautiful Brain: The Drawings of Santiago Ramón y Cajal was developed by the Frederick R. Weisman Art Museum, University of Minnesota, with the CSIC’s Cajal Institute, Madrid, Spain.

This exhibition is generously supported by the Associate Provost for the Arts, Philip Khoury. Additional support has been provided by the Council for the Arts at MIT.

School of Science Infinite Kilometer Awards for 2017

The MIT School of Science has announced the 2017 winners of the Infinite Kilometer Award. The Infinite Kilometer Award was established in 2012 to highlight and reward the extraordinary — but often underrecognized — work of the school’s research staff and postdocs.

Recipients of the award are exceptional contributors to their research programs. In many cases, they are also deeply committed to their local or global MIT community, and are frequently involved in mentoring and advising their junior colleagues, participating in the school’s educational programs, making contributions to the MIT Postdoctoral Association, or contributing to some other facet of the MIT community.

In addition to a monetary award, honorees and their colleagues, friends, and family are invited to a celebratory lunch in May.

The 2017 Infinite Kilometer winners are:

Rodrigo Garcia, McGovern Institute for Brain Research;

Lydia Herzel, Department of Biology;

Yutaro Iiyama, Laboratory for Nuclear Science;

Kendrick Jones, Picower Institute for Learning and Memory;

Matthew Musgrave, Laboratory for Nuclear Science;

Cody Siciliano, Picower Institute for Learning and Memory;

Peter Sudmant, Department of Biology;

Ashley Watson, Picower Institute for Learning and Memory.

The School of Science is also currently accepting nominations for its Infinite Mile Awards. Nominations are due by Feb. 16 and all School of Science employees are eligible. Infinite Mile Awards will be presented with the Infinite Kilometer Awards this spring.

Warm Wishes for 2018!

This year, we hope you enjoy “Postcards from the Brain” — an illustrative journey featuring brain regions studied by McGovern researchers.

For a closer look at these postcards, including a description of how our researchers are studying these particular regions of the brain, please visit our image gallery.

Listening to neurons

When McGovern Investigator Mark Harnett gets a text from his collaborator at Massachusetts General Hospital, it’s time to stock up on Red Bull and coffee.

Because very soon—sometimes within a few hours—a chunk of living human brain will arrive at the lab, marking the start of an epic session recording the brain’s internal dialogue. And it continues non-stop until the neurons die.

“That first time, we went for 54 hours straight,” Harnett says.

Now two years old, his lab is trying to answer fundamental questions about how the brain’s basic calculations lead to the experience of daily life. Most neuroscientists consider the neuron to be the brain’s basic computational unit, but Harnett is focusing on the internal workings of individual neurons, and in particular, the role of dendrites, the elaborate branching structures that are the most distinctive feature of these cells.

Years ago, scientists viewed dendrites as essentially passive structures, receiving neurochemical information that they translated into electrical signals and sent to the cell body, or soma. The soma was the calculator, summing up the data and deciding whether or not to produce an output signal, known as an action potential. Now though, evidence has accumulated showing dendrites to be capable of processing information themselves, leading to a new and more expansive view in which each individual neuron contains multiple computational elements.

Due to the enormous technical challenge such work demands, however, scientists still don’t fully understand the biophysical mechanisms behind dendritic computations.

They understand even less how these mechanisms operate in and contribute to an awake, thinking brain—or how well the mouse models that have defined the field translate to the vastly more powerful computational abilities of the human brain.

Harnett is in an ideal position to untangle some of these questions, owing to a rare combination of the technology and skills needed to record from dendrites—a feat in itself—as well as access to animals and human tissue, and a lab eager for a challenge.

Human interest

Most previous research on dendrites has been done in rats or mice, and Harnett’s collaboration with MGH addresses a deceptively simple question: are the brain cells of rodents really equivalent to those of humans?

Researchers have generally assumed that they are similar, but no one has studied the question in depth. It is known, however, that human dendrites are longer and more structurally complex, and Harnett suspects that these shape differences may reflect the existence of additional computational mechanisms.

To investigate this question, Harnett reached out to Sydney Cash, a neurologist at MGH and Harvard Medical School. Cash was intrigued. He’d been studying epilepsy patients with electrodes implanted in their brains to locate seizures before brain surgery, and he was seeing odd quirks in his data. The neurons seemed to be more connected than animal data would suggest, but he had no way to investigate. “And so I thought this collaboration would be fantastic,” he says. “The amazing electrophysiology that Mark’s group can do would be able to give us that insight into the behavior of these individual human neurons.”

So Cash arranged for Harnett to receive tissue from the brains of patients undergoing lobe resections—removal of chunks of tissue associated with seizures, which often works for patients for whom other treatments have failed.

Logistics were challenging—how to get a living piece of brain from one side of the Charles River to the other before it dies? Harnett initially wanted to use a drone; the legal department shot down that idea. Then he wanted to preserve the delicate tissue in bubbling oxygenated solution. But carting cylinders of hazardous compressed gas around the city was also a non-starter. “So, on the first one, we said to heck with it, we’ll just see if it works at all,” Harnett says. “We threw the brain into a bottle of ice-cold solution, screwed the top on, and told an Uber driver to go fast.”

When the cargo reaches the lab, the team starts the experiments immediately to collect as much data as possible before the neurons fail. This process involves the kind of arduous work that Harnett’s first graduate student, Lou Beaulieu-Laroche, relishes. Indeed, it’s why the young Quebecois wanted to join Harnett’s lab in the first place. “Every time I get to do this recording, I get so excited I don’t even need to sleep,” he says.

First, Beaulieu-Laroche places the precious tissue into a nutrient solution, carefully slicing it at the correct angle to reveal the neurons of interest. Then he begins patch clamp recordings, placing a tiny glass pipette to the surface of a single neuron in order to record its electrical activity. Most labs patch the larger soma; few can successfully patch the far finer dendrites. Beaulieu-Laroche can record two locations on a single dendrite simultaneously.

“It’s tricky experiment on top of tricky experiment,” Harnett says. “If you don’t succeed at every step, you get nothing out of it.” But do it right, and it’s a human neuron laid bare, whirring calculations visible in real time.

The lab has collected samples from just seven surgeries so far, but a fascinating picture is emerging. For instance, spikes of activity in some human dendrites don’t seem to show up in the main part of the cell, a peculiar decoupling mice don’t show. What it means is still unclear, but it may be a sign of Harnett’s theorized intermediary computations between the distant dendrites and the cell body.

“It could be that the dendrite network of a human neuron is a little more complicated—maybe a little bit smarter,” Beaulieu-Laroche speculates. “And maybe that contributes to our intelligence.”

Active questioning

The human work is inherently limited to studying cells in a dish, and that gets to Harnett’s real focus. “A huge amount of time and effort has been spent identifying what dendrites are capable of doing in brain slices,” he says. Far less effort has gone into studying what they do in the behaving brain. It’s like exhaustively examining a set of tires on a car without ever testing its performance on the road.

To get at this problem, Harnett studies spatial navigation in mice, a task that requires the mouse brain to combine information about vision, motion, and self-orientation into a holistic experience. Scientists don’t know how this integration happens, but Harnett thinks it is an ideal test bed for exploring how dendritic processes contribute to complex behavioral computations. “We know the different types of information must eventually converge, but we think each type could be processed separately in the dendrites before being combined in the cell body,” he says.

The difficult part is catching neurons in the act of computing. This requires a two-pronged approach combining fine-grained dendritic biophysics—like what Beaulieu-Laroche does in human cells—with behavioral studies and imaging in awake mice.

Marie-Sophie van der Goes, Harnett’s second graduate student, took up the challenge when she joined the lab in early 2016. From previous work, she knew spatial integration happened in a structure called the retrosplenial cortex (RSC), but the region was not well studied.

“We didn’t know where the information entering the RSC came from, or how it was organized,” she explains.

She and laboratory technician Derrick Barnagian used reverse tracing methods to identify inputs to the RSC, and teamed up with postdoc Mathieu Lafourcade to figure out how that information was organized and processed. Vision, motor and orientation systems are all connected to the region, as expected, but the inputs are segregated, with visual and motor information, for example, arriving at different locations within the dendritic tree. According to the patch clamp data, this is likely to be very important, since different dendrites appear to process information in different ways.

The next step for Van der Goes will be to record from neurons as mice perform a navigation task in a virtual maze. Two other postdocs, Jakob Voigts and Lukas Fischer, have already begun looking at similar questions. Working with mice genetically engineered so that their neurons light up when activated, the researchers implant a small glass window in the skull, directly over the RSC. Peering in with a two-photon microscope, they can watch, in real time, the activity of individual neurons and dendrites, as the animal processes different stimuli, including visual cues, sugar-water reward, and the sensation of its feet running along the ground.

It’s not a perfect system; the mouse’s head has to be held absolutely still for the scope to work. For now, they use a virtual reality maze and treadmill, although thanks to an ingenious rig Voigts invented, the set-up is poised to undergo a key improvement to make it feel more life-like for the mouse, and thus more accurate for the researchers.

Human questions

As much as the lab has accomplished so far, Harnett considers the people his greatest achievement. “Lab culture’s critical, in my opinion,” Harnett says. “How it manifests can really affect who wants to join your particular pirate crew.”

And his lab, he says, “is a wonderful environment and my team is incredibly successful in getting hard things to work.”

Everyone works on each other’s projects, coming in on Friday nights and weekend mornings, while ongoing jokes, lab memes, and shared meals bind the team together. Even Harnett prefers to bring his laptop to the crowded student and postdoc office rather than work in his own spacious quarters. With only three Americans in the lab—including Harnett—the space is rich in languages and friendly jabs. Canadian Beaulieu-Laroche says France-born Lafourcade speaks French like his grandmother; Lafourcade insists he speaks the best French—and the best Spanish. “But the Germans never speak German,” he wonders.

And there’s another uniting factor as well—a passion for asking big questions in life. Perhaps it is because many of the lab members are internationally educated and have studied more philosophy and literature than a typical science student. “Marie randomly dropped a Marcus Aurelius quote on me the other day,” Harnett says. He’d been flabbergasted. “But then I wondered, what is it about the fact that they’ve ended up here and we work together so incredibly well? I think it’s that we all think about this stuff—it gives us a shared humanism in the laboratory.”

How the brain keeps time

Timing is critical for playing a musical instrument, swinging a baseball bat, and many other activities. Neuroscientists have come up with several models of how the brain achieves its exquisite control over timing, the most prominent being that there is a centralized clock, or pacemaker, somewhere in the brain that keeps time for the entire brain.

However, a new study from MIT researchers provides evidence for an alternative timekeeping system that relies on the neurons responsible for producing a specific action. Depending on the time interval required, these neurons compress or stretch out the steps they take to generate the behavior at a specific time.

“What we found is that it’s a very active process. The brain is not passively waiting for a clock to reach a particular point,” says Mehrdad Jazayeri, the Robert A. Swanson Career Development Professor of Life Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

MIT postdoc Jing Wang and former postdoc Devika Narain are the lead authors of the paper, which appears in the Dec. 4 issue of Nature Neuroscience. Graduate student Eghbal Hosseini is also an author of the paper.

Flexible control

One of the earliest models of timing control, known as the clock accumulator model, suggested that the brain has an internal clock or pacemaker that keeps time for the rest of the brain. A later variation of this model suggested that instead of using a central pacemaker, the brain measures time by tracking the synchronization between different brain wave frequencies.

Although these clock models are intuitively appealing, Jazayeri says, “they don’t match well with what the brain does.”

No one has found evidence for a centralized clock, and Jazayeri and others wondered if parts of the brain that control behaviors that require precise timing might perform the timing function themselves. “People now question why would the brain want to spend the time and energy to generate a clock when it’s not always needed. For certain behaviors you need to do timing, so perhaps the parts of the brain that subserve these functions can also do timing,” he says.

To explore this possibility, the researchers recorded neuron activity from three brain regions in animals as they performed a task at two different time intervals — 850 milliseconds or 1,500 milliseconds.

The researchers found a complicated pattern of neural activity during these intervals. Some neurons fired faster, some fired slower, and some that had been oscillating began to oscillate faster or slower. However, the researchers’ key discovery was that no matter the neurons’ response, the rate at which they adjusted their activity depended on the time interval required.

At any point in time, a collection of neurons is in a particular “neural state,” which changes over time as each individual neuron alters its activity in a different way. To execute a particular behavior, the entire system must reach a defined end state. The researchers found that the neurons always traveled the same trajectory from their initial state to this end state, no matter the interval. The only thing that changed was the rate at which the neurons traveled this trajectory.

When the interval required was longer, this trajectory was “stretched,” meaning the neurons took more time to evolve to the final state. When the interval was shorter, the trajectory was compressed.

“What we found is that the brain doesn’t change the trajectory when the interval changes, it just changes the speed with which it goes from the initial internal state to the final state,” Jazayeri says.
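
To make that picture concrete, the sketch below (illustrative Python with made-up firing-rate curves, not the study’s data) traverses a single trajectory through neural state space at two speeds: producing a 1,500-millisecond interval visits the same states as producing an 850-millisecond one, just more slowly.

```python
import numpy as np

# Illustrative sketch with made-up curves (not data from the study): the same
# neural trajectory traversed at different speeds. A "neural state" is a point
# in firing-rate space, parameterized by a phase that runs from 0 (initial
# state) to 1 (the end state that triggers the behavior).

def trajectory(phase, n_neurons=3):
    """Hypothetical firing rates of n_neurons as a function of phase (0 to 1)."""
    return np.stack([np.sin(np.pi * phase * (i + 1)) for i in range(n_neurons)], axis=-1)

def states_over_time(interval_ms, dt_ms=50.0):
    """Sample the trajectory at a speed set by the desired interval."""
    t = np.arange(0.0, interval_ms + dt_ms, dt_ms)
    phase = t / interval_ms          # a longer interval means slower phase advance
    return t, trajectory(phase)

t_short, states_short = states_over_time(850.0)    # compressed trajectory
t_long, states_long = states_over_time(1500.0)     # stretched trajectory

# Both runs trace the same path in state space and reach the same end state;
# only the time taken to travel along the path differs.
print(np.allclose(states_short[-1], states_long[-1]))  # True
```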

Dean Buonomano, a professor of behavioral neuroscience at the University of California at Los Angeles, says that the study “provides beautiful evidence that timing is a distributed process in the brain — that is, there is no single master clock.”

“This work also supports the notion that the brain does not tell time using a clock-like mechanism, but rather relies on the dynamics inherent to neural circuits, and that as these dynamics increase and decrease in speed, animals move more quickly or slowly,” adds Buonomano, who was not involved in the research.

Neural networks

The researchers focused their study on a brain loop that connects three regions: the dorsomedial frontal cortex, the caudate, and the thalamus. They found this distinctive neural pattern in the dorsomedial frontal cortex, which is involved in many cognitive processes, and the caudate, which is involved in motor control, inhibition, and some types of learning. However, in the thalamus, which relays motor and sensory signals, they found a different pattern: Instead of altering the speed of their trajectory, many of the neurons simply increased or decreased their firing rate, depending on the interval required.

Jazayeri says this finding is consistent with the possibility that the thalamus is instructing the cortex on how to adjust its activity to generate a certain interval.

The researchers also created a computer model to help them further understand this phenomenon. They began with a model of hundreds of neurons connected together in random ways, and then trained it to perform the same interval-producing task they had used to train animals, offering no guidance on how the model should perform the task.

They found that these neural networks ended up using the same strategy that they observed in the animal brain data. A key discovery was that this strategy only works if some of the neurons have nonlinear activity — that is, the strength of their output doesn’t constantly increase as their input increases. Instead, as they receive more input, their output increases at a slower rate.
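
The sketch below is a minimal illustration of such a network, not the authors’ published model: a randomly connected recurrent network whose units pass their activity through a saturating function, so their output grows more slowly as their input grows.

```python
import numpy as np

# Minimal illustration (not the authors' published model): a randomly connected
# recurrent network whose units have a saturating nonlinearity, the property the
# study found was needed for trajectories to stretch or compress in time.

rng = np.random.default_rng(0)
n = 200                                        # number of model neurons
W = rng.normal(0, 1.0 / np.sqrt(n), (n, n))    # random recurrent weights

def saturating(x):
    # tanh grows roughly linearly for small input but sublinearly for large input
    return np.tanh(x)

def run(context_input, steps, dt=0.01, tau=0.1):
    """Evolve the network under a constant context input (e.g., which interval to produce)."""
    x = np.zeros(n)
    states = []
    for _ in range(steps):
        x = x + (dt / tau) * (-x + W @ saturating(x) + context_input)
        states.append(saturating(x).copy())
    return np.array(states)

# Training (omitted here) would tune W and the context inputs so that the network
# reaches its end state after 850 ms for one input and after 1,500 ms for the other.
states = run(context_input=rng.normal(0, 0.1, n), steps=150)
print(states.shape)  # (150, 200)
```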

Jazayeri now hopes to explore further how the brain generates the neural patterns seen during varying time intervals, and also how our expectations influence our ability to produce different intervals.

The research was funded by the Rubicon Grant from the Netherlands Scientific Organization, the National Institutes of Health, the Sloan Foundation, the Klingenstein Foundation, the Simons Foundation, the Center for Sensorimotor Neural Engineering, and the McGovern Institute.

How badly do you want something? Babies can tell

Babies as young as 10 months can assess how much someone values a particular goal by observing how hard they are willing to work to achieve it, according to a new study from MIT and Harvard University.

This ability requires integrating information about both the costs of obtaining a goal and the benefit gained by the person seeking it, suggesting that babies acquire very early an intuition about how people make decisions.

“Infants are far from experiencing the world as a ‘blooming, buzzing confusion,’” says lead author Shari Liu, referring to a description by philosopher and psychologist William James about a baby’s first experience of the world. “They interpret people’s actions in terms of hidden variables, including the effort [people] expend in producing those actions, and also the value of the goals those actions achieve.”

“This study is an important step in trying to understand the roots of common-sense understanding of other people’s actions. It shows quite strikingly that in some sense, the basic math that is at the heart of how economists think about rational choice is very intuitive to babies who don’t know math, don’t speak, and can barely understand a few words,” says Josh Tenenbaum, a professor in MIT’s Department of Brain and Cognitive Sciences, a core member of the joint MIT-Harvard Center for Brains, Minds and Machines (CBMM), and one of the paper’s authors.

Tenenbaum helped to direct the research team along with Elizabeth Spelke, a professor of psychology at Harvard University and CBMM core member, in whose lab the research was conducted. Liu, the paper’s lead author, is a graduate student at Harvard. CBMM postdoc Tomer Ullman is also an author of the paper, which appears in the Nov. 23 online edition of Science.

Calculating value

Previous research has shown that adults and older children can infer someone’s motivations by observing how much effort that person exerts toward obtaining a goal.

The Harvard/MIT team wanted to learn more about how and when this ability develops. Babies expect people to be consistent in their preferences and to be efficient in how they achieve their goals, previous studies have found. The question posed in this study was whether babies can combine what they know about a person’s goal and the effort required to obtain it, to calculate the value of that goal.

To answer that question, the researchers showed 10-month-old infants animated videos in which an “agent,” a cartoon character shaped like a bouncing ball, tries to reach a certain goal (another cartoon character). In one of the videos, the agent has to leap over walls of varying height to reach the goal. First, the babies saw the agent jump over a low wall and then refuse to jump over a medium-height wall. Next, the agent jumped over the medium-height wall to reach a different goal, but refused to jump over a high wall to reach that goal.

The babies were then shown a scene in which the agent could choose between the two goals, with no obstacles in the way. An adult or older child would assume the agent would choose the second goal, because the agent had worked harder to reach that goal in the video seen earlier. The researchers found that 10-month-olds also reached this conclusion: When the agent was shown choosing the first goal, infants looked at the scene longer, indicating that they were surprised by that outcome. (Length of looking time is commonly used to measure surprise in studies of infants.)

The researchers found the same results when babies watched the agents perform the same set of actions with two different types of effort: climbing ramps of varying incline and jumping across gaps of varying width.

“Across our experiments, we found that babies looked longer when the agent chose the thing it had exerted less effort for, showing that they infer the amount of value that agents place on goals from the amount of effort that they take toward these goals,” Liu says.

The findings suggest that infants are able to calculate how much another person values something based on how much effort they put into getting it.

“This paper is not the first to suggest that idea, but its novelty is that it shows this is true in much younger babies than anyone has seen. These are preverbal babies, who themselves are not actively doing very much, yet they appear to understand other people’s actions in this sophisticated, quantitative way,” says Tenenbaum, who is also affiliated with MIT’s Computer Science and Artificial Intelligence Laboratory.

Studies of infants can reveal deep commonalities in the ways that we think throughout our lives, suggests Spelke. “Abstract, interrelated concepts like cost and value — concepts at the center both of our intuitive psychology and of utility theory in philosophy and economics — may originate in an early-emerging system by which infants understand other people’s actions,” she says.

The study shows, for the first time, that “preverbal infants can look at the world like economists,” says Gergely Csibra, a professor of cognitive science at Central European University in Hungary. “They do not simply calculate the costs and benefits of others’ actions (this had been demonstrated before), but relate these terms onto each other. In other words, they apply the well-known logic that all of us rely on when we try to assess someone’s preferences: The harder she tries to achieve something, the more valuable is the expected reward to her when she succeeds.”

Modeling intelligence

Over the past 10 years, scientists have developed computer models that come close to replicating how adults and older children incorporate different types of input to infer other people’s goals, intentions, and beliefs. For this study, the researchers built on that work, especially work by Julian Jara-Ettinger PhD ’16, who studied similar questions in preschool-age children. The researchers developed a computer model that can predict what 10-month-old babies would infer about an agent’s goals after observing the agent’s actions. This new model also posits an ability to calculate “work” (or total force applied over a distance) as a measure of the cost of actions, which the researchers believe babies are able to do on some intuitive level.
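
As a rough illustration of that cost calculation (hypothetical numbers, not the model’s actual parameters), the work needed to clear each obstacle can be computed as force applied over a distance, and the value an agent places on a goal can then be bounded by the largest cost it was willing to pay:

```python
# Rough illustration with hypothetical numbers (not the study's model parameters):
# treating "work" (force applied over a distance) as the cost of an action, and
# inferring the value of a goal from the largest cost the agent was willing to pay.

MASS = 1.0      # arbitrary mass for the animated agent
GRAVITY = 9.8   # m/s^2

def work_to_jump(wall_height_m):
    """Work against gravity needed to clear a wall of the given height."""
    return MASS * GRAVITY * wall_height_m

# Observed behavior, as in the videos: the agent jumped the medium wall for goal B,
# but for goal A it jumped only the low wall and refused the medium one.
largest_cost_paid = {
    "goal A": work_to_jump(0.3),   # low wall (heights are made up)
    "goal B": work_to_jump(0.6),   # medium wall
}

# The agent must value each goal at least as much as the cost it paid for it,
# so an observer infers that goal B is worth more than goal A.
preferred = max(largest_cost_paid, key=largest_cost_paid.get)
print(preferred, largest_cost_paid)
```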

“Babies of this age seem to understand basic ideas of Newtonian mechanics, before they can talk and before they can count,” Tenenbaum says. “They’re putting together an understanding of forces, including things like gravity, and they also have some understanding of the usefulness of a goal to another person.”

Building this type of model is an important step toward developing artificial intelligence that replicates human behavior more accurately, the researchers say.

“We have to recognize that we’re very far from building AI systems that have anything like the common sense even of a 10-month-old,” Tenenbaum says. “But if we can understand in engineering terms the intuitive theories that even these young infants seem to have, that hopefully would be the basis for building machines that have more human-like intelligence.”

Still unanswered are the questions of exactly how and when these intuitive abilities arise in babies.

“Do infants start with a completely blank slate, and somehow they’re able to build up this sophisticated machinery? Or do they start with some rudimentary understanding of goals and beliefs, and then build up the sophisticated machinery? Or is it all just built in?” Ullman says.

The researchers hope that studies of even younger babies, perhaps as young as 3 months old, and computational models of learning intuitive theories that the team is also developing, may help to shed light on these questions.

This project was funded by the National Science Foundation through the Center for Brains, Minds, and Machines, which is based at MIT’s McGovern Institute for Brain Research and led by MIT and Harvard.

Stress can lead to risky decisions

Making decisions is not always easy, especially when choosing between two options that have both positive and negative elements, such as deciding between a job with a high salary but long hours, and a lower-paying job that allows for more leisure time.

MIT neuroscientists have now discovered that making decisions in this type of situation, known as a cost-benefit conflict, is dramatically affected by chronic stress. In a study of mice, they found that stressed animals were far likelier to choose high-risk, high-payoff options.

The researchers also found that impairments of a specific brain circuit underlie this abnormal decision making, and they showed that they could restore normal behavior by manipulating this circuit. If a method for tuning this circuit in humans were developed, it could help patients with disorders such as depression, addiction, and anxiety, which often feature poor decision-making.

“One exciting thing is that by doing this very basic science, we found a microcircuit of neurons in the striatum that we could manipulate to reverse the effects of stress on this type of decision making. This to us is extremely promising, but we are aware that so far these experiments are in rats and mice,” says Ann Graybiel, an Institute Professor at MIT and member of the McGovern Institute for Brain Research.

Graybiel is the senior author of the paper, which appears in Cell on Nov. 16. The paper’s lead author is Alexander Friedman, a McGovern Institute research scientist.

Hard decisions

In 2015, Graybiel, Friedman, and their colleagues first identified the brain circuit involved in decision making that involves cost-benefit conflict. The circuit begins in the medial prefrontal cortex, which is responsible for mood control, and extends into clusters of neurons called striosomes, which are located in the striatum, a region associated with habit formation, motivation, and reward reinforcement.

In that study, the researchers trained rodents to run a maze in which they had to choose between one option that included highly concentrated chocolate milk, which they like, along with bright light, which they don’t like, and an option with dimmer light but weaker chocolate milk. By inhibiting the connection between cortical neurons and striosomes, using a technique known as optogenetics, they found that they could transform the rodents’ preference for lower-risk, lower-payoff choices to a preference for bigger payoffs despite their bigger costs.

In the new study, the researchers performed a similar experiment without optogenetic manipulations. Instead, they exposed the rodents to a short period of stress every day for two weeks.

Before experiencing stress, normal rats and mice would choose to run toward the maze arm with dimmer light and weaker chocolate milk about half the time. The researchers gradually increased the concentration of chocolate milk found in the dimmer side, and as they did so, the animals began choosing that side more frequently.

However, when chronically stressed rats and mice were put in the same situation, they continued to choose the bright light/better chocolate milk side even as the chocolate milk concentration greatly increased on the dimmer side. This was the same behavior the researchers saw in rodents that had the prefrontal cortex-striosome circuit disrupted optogenetically.

“The result is that the animal ignores the high cost and chooses the high reward,” Friedman says.

The findings help to explain how stress contributes to substance abuse and may worsen mental disorders, says Amy Arnsten, a professor of neuroscience and psychology at the Yale University School of Medicine, who was not involved in the research.

“Stress is ubiquitous, for both humans and animals, and its effects on brain and behavior are of central importance to the understanding of both normal function and neuropsychiatric disease. It is both pernicious and ironic that chronic stress can lead to impulsive action; in many clinical cases, such as drug addiction, impulsivity is likely to worsen patterns of behavior that produce the stress in the first place, inducing a vicious cycle,” Arnsten wrote in a commentary accompanying the Cell paper, co-authored by Daeyeol Lee and Christopher Pittenger of the Yale University School of Medicine.

Circuit dynamics

The researchers believe that this circuit integrates information about the good and bad aspects of possible choices, helping the brain to produce a decision. Normally, when the circuit is turned on, neurons of the prefrontal cortex activate certain neurons called high-firing interneurons, which then suppress striosome activity.

When the animals are stressed, these circuit dynamics shift and the cortical neurons fire too late to inhibit the striosomes, which then become overexcited. This results in abnormal decision making.
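
A toy timing sketch of that shift (our illustration, not a published model): if the interneurons’ inhibition arrives early, striosome output stays low; if stress delays it, the striosomes are left overexcited.

```python
import numpy as np

# Toy timing sketch (our illustration, not a published model): striosome output
# stays low only if the interneurons' inhibition arrives early enough; a
# stress-induced delay leaves the striosomes overexcited.

def striosome_output(drive, inhibition_delay_ms, window_ms=50):
    """Summed striosome activity over a window, given excitatory drive and delayed inhibition."""
    t = np.arange(window_ms)
    excitation = np.full(window_ms, drive)
    inhibition = np.where(t >= inhibition_delay_ms, 0.9 * drive, 0.0)
    return float(np.sum(excitation - inhibition))

print(striosome_output(drive=1.0, inhibition_delay_ms=5))    # normal: mostly suppressed
print(striosome_output(drive=1.0, inhibition_delay_ms=40))   # after chronic stress: overexcited
```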

“Somehow this prior exposure to chronic stress controls the integration of good and bad,” Graybiel says. “It’s as though the animals had lost their ability to balance excitation and inhibition in order to settle on reasonable behavior.”

Once this shift occurs, it remains in effect for months, the researchers found. However, they were able to restore normal decision making in the stressed mice by using optogenetics to stimulate the high-firing interneurons, thereby suppressing the striosomes. This suggests that the prefronto-striosome circuit remains intact following chronic stress and could potentially be susceptible to manipulations that would restore normal behavior in human patients whose disorders lead to abnormal decision making.

“This state change could be reversible, and it’s possible in the future that you could target these interneurons and restore the excitation-inhibition balance,” Friedman says.

The research was funded by the National Institutes of Health/National Institute for Mental Health, the CHDI Foundation, the Defense Advanced Research Projects Agency and the U.S. Army Research Office, the Bachmann-Strauss Dystonia and Parkinson Foundation, the William N. and Bernice E. Bumpus Foundation, Michael Stiefel, the Saks Kavanaugh Foundation, and John Wasserlein and Lucille Braun.