Engineering intelligence

Go is an ancient board game that demands not only strategy and logic, but intuition, creativity, and subtlety—in other words, it’s a game of quintessentially human abilities. Or so it seemed, until Google’s DeepMind AI program, AlphaGo, roundly defeated the world’s top Go champion.

But ask AlphaGo to read social cues or interpret what another person is thinking and it wouldn’t know where to start. It wouldn’t even understand that it didn’t know where to start. Outside of its game-playing milieu, AlphaGo is as smart as a rock.

“The problem of intelligence is the greatest problem in science,” says Tomaso Poggio, Eugene McDermott Professor of Brain and Cognitive Sciences at the McGovern Institute. One reason why? We still don’t really understand intelligence in ourselves.

Right now, most advanced AI development is led by industry giants like Facebook, Google, Tesla and Apple, with an emphasis on engineering and computation and very little attention to human studies. That approach has yielded enormous breakthroughs, including Siri and Alexa, ever-better autonomous cars and AlphaGo.

But as Poggio points out, the algorithms behind most of these incredible technologies—deep learning networks and reinforcement learning—come right out of past neuroscience research. “So it’s a good bet,” Poggio says, “that one of the next breakthroughs will also come from neuroscience.”

Five years ago, Poggio and a host of researchers at MIT and beyond took that bet when they applied for and won a $25 million Science and Technology Center award from the National Science Foundation to form the Center for Brains, Minds and Machines (CBMM). The goal of the center was to take those computational approaches and blend them with basic, curiosity-driven research in neuroscience and cognition. They would knock down the divisions that traditionally separated these fields, and not only unlock the secrets of human intelligence and develop smarter AIs, but establish an entirely new field—the science and engineering of intelligence.

A collaborative foundation

CBMM is a sprawling research initiative headquartered at the McGovern Institute, encompassing faculty at Harvard, Johns Hopkins, Rockefeller and Stanford; over a dozen industry collaborators including Siemens, Google, Toyota, Microsoft, Schlumberger and IBM; and partner institutions such as Howard University, Wellesley College and the University of Puerto Rico. The effort has already churned out 397 publications and has just been renewed for five more years and another $25 million.

For the first few years, collaboration in such a complex center posed a challenge. Research efforts were still divided into traditional silos—one research thrust for cognitive science, another for computation, and so on. But as the center grew, colleagues found themselves talking more and a new common language emerged. Immersed in each other’s research, the divisions began to fade.

“It became more than just a center in name,” says Matthew Wilson, associate director of CBMM and the Sherman Fairchild Professor of Neuroscience at MIT’s Department of Brain and Cognitive Sciences (BCS). “It really was trying to drive a new way of thinking about research and motivating intellectual curiosity that was motivated by this shared vision that all the participants had.”

New questioning

Today, the center is structured around four interconnected modules grounded in the problem of visual intelligence—vision, because it is the best understood and most easily traced of our senses. The first module, co-directed by Poggio himself, unravels the visual operations that begin within the first few milliseconds of visual recognition, as information travels through the eye to the visual cortex. Gabriel Kreiman, who studies visual comprehension at Harvard Medical School and Children’s Hospital, leads the second module, which takes on what happens next: the brain directs the eye where to go, determines what it is seeing and what to pay attention to, and integrates this information into the holistic picture of the world that we experience. His research questions have grown as a result of CBMM’s cross-disciplinary influence.

Leyla Isik, a postdoc in Kreiman’s lab, is now tackling one of his new research initiatives: social intelligence. “So much of what we do and see as humans are social interactions between people. But even the best machines have trouble with it,” she explains.

To reveal the underlying computations of social intelligence, Isik is using data gathered from epilepsy patients as they watch full-length movies. (Certain patients spend several weeks before surgery with monitoring electrodes in their brains, providing a rare opportunity for scientists to see inside the brain of a living, thinking human.) Isik hopes to pick out reliable patterns in their neural activity that indicate when a patient is processing certain social cues, such as faces. “It’s a pretty big challenge, so to start out we’ve tried to simplify the problem a little bit and just look at basic social visual phenomena,” she explains.

In true CBMM spirit, Isik is co-advised by another McGovern investigator, Nancy Kanwisher, who helps lead CBMM’s third module with Josh Tenenbaum, BCS Professor of Computational Cognitive Science. That module picks up where the second leaves off, asking still deeper questions about how the brain understands complex scenes, and how infants and children develop the ability to piece together the physics and psychology of new events. In Kanwisher’s lab, instead of a stimulus-heavy movie, Isik shows simple stick figures to subjects in an MRI scanner. She’s looking for specific regions of the brain that engage only when the subjects view the “social interactions” between the figures. “I like the approach of tackling this problem both from very controlled experiments as well as something that’s much more naturalistic in terms of what people and machines would see,” Isik explains.

Built-in teamwork

Such complementary approaches are the norm at CBMM. Postdocs and graduate students are required to have at least two advisors in two different labs. The NSF money is even assigned directly to postdoc and graduate student projects. This ensures that collaborations are baked into the center, Wilson explains. “If the idea is to create a new field in the science of intelligence, you can’t continue to support work the way it was done in the old fields—you have to create a new model.”

In other labs, students and postdocs blend imaging with cognitive science to understand how the brain represents physics—like the mass of an object it sees. Or they’re combining human, primate, mouse and computational experiments to better understand how the living brain represents new objects it encounters, and then building algorithms to test the resulting theories.

Boris Katz’s lab is in the fourth and final module, which focuses on figuring out how the brain’s visual intelligence ties into higher-level thinking, like goal planning, language, and abstract concepts. One project, led by MIT research scientist Andrei Barbu and Yen-Ling Kuo, in collaboration with Harvard cognitive scientist Liz Spelke, is attempting to uncover how humans and machines devise plans to navigate around complex and dangerous environments.

“CBMM gives us the opportunity to close the loop between machine learning, cognitive science, and neuroscience,” says Barbu. “The cognitive science informs better machine learning, which helps us understand how humans behave and that in turn points the way toward understanding the structure of the brain. All of this feeds back into creating more capable machines.”

A new field

Every summer, CBMM heads down to Woods Hole, Massachusetts, to deliver an intensive crash course on the science of intelligence to graduate students from across the country. It’s one of many education initiatives designed to spread CBMM’s approach, and a key part of the effort to establish a new field. The students who come to these courses often find them as transformative as the CBMM faculty did when the center began.

Candace Ross was an undergraduate at Howard University when she got her first taste of CBMM at a summer course with Kreiman trying to model human memory in machine learning algorithms. “It was the best summer of my life,” she says. “There were so many concepts I didn’t know about and didn’t understand. We’d get back to the dorm at night and just sit around talking about science.”

Ross loved it so much that she spent a second summer at CBMM, and is now a third-year graduate student working with Katz and Barbu, teaching computers how to use vision and language to learn more like children. She’s since gone back to the summer programs, now as a teaching assistant. “CBMM is a research center,” says Ellen Hildreth, a computer scientist at Wellesley College who coordinates CBMM’s education programs. “But it also fosters a strong commitment to education, and that effort is helping to create a community of researchers around this new field.”

Quest for intelligence

CBMM has far to go in its mission to understand the mind, but there is good reason to believe that what CBMM started will continue well beyond the NSF-funded ten years.

This February, MIT announced a new institute-wide initiative called the MIT Intelligence Quest, or MIT IQ. It’s a massive interdisciplinary push to study human intelligence and create new tools based on that knowledge. It is also, says McGovern Institute Director Robert Desimone, a sign of the institute’s faith in what CBMM itself has so far accomplished. “The fact that MIT has made this big commitment in this area is an endorsement of the kind of view we’ve been promoting through CBMM,” he says.

MIT IQ consists of two linked entities: “The Core” and “The Bridge.” CBMM is part of the Core, which will advance the science and engineering of both human and machine intelligence. “This combination is unique to MIT,” explains Poggio, “and is designed to win not only Turing but also Nobel prizes.”

And more than that, points out BCS Department Head Jim DiCarlo, it’s also a return to CBMM’s original mission. Before CBMM began, Poggio and a few other MIT scientists had tested the waters with a small, Institute-funded collaboration called the Intelligence Initiative (I²), which welcomed all types of intelligence research—even business and organizational intelligence. MIT IQ re-opens that broader door. “In practice, we want to build a bigger tent now around the science of intelligence,” DiCarlo says.

For his part, Poggio finds the name particularly apt. “Because it is going to be a long-term quest,” he says. “Remember, if I’m right, this is the greatest problem in science. Understanding the mind is understanding the very tool we use to try to solve every other problem.”

The quest to understand intelligence

McGovern investigators study intelligence to answer a practical question for both educators and computer scientists: can intelligence be improved?

A nine-year-old girl, a contestant on a game show, is standing on stage. On a screen in front of her, there appears a twelve-digit number followed by a six-digit number. Her challenge is to divide the two numbers as fast as possible.

The timer begins. She is racing against three other contestants, two from China and one, like her, from Japan. Whoever answers first wins, but only if the answer is correct.

The show, called “The Brain,” is wildly popular in China, and attracts players who display their memory and concentration skills much the way American athletes demonstrate their physical skills in shows like “American Ninja Warrior.” After a few seconds, the girl slams the timer and gives the correct answer, faster than most people could have entered the numbers on a calculator.

The camera pans to a team of expert judges, including McGovern Director Robert Desimone, who had arrived in Nanjing just a few hours earlier. Desimone shakes his head in disbelief. The task appears to make extraordinary demands on working memory and rapid processing, but the girl explains that she solves it by visualizing an abacus in her mind—something she has practiced intensively.

The show raises an age-old question: What is intelligence, exactly?

The study of intelligence has a long and sometimes contentious history, but recently, neuroscientists have begun to dissect intelligence to understand the neural roots of the distinct cognitive skills that contribute to it. One key question is whether these skills can be improved individually with training and, if so, whether those improvements translate into overall intelligence gains. This research has practical implications for multiple domains, from brain science to education to artificial intelligence.

“The problem of intelligence is one of the great problems in science,” says Tomaso Poggio, a McGovern investigator and an expert on machine learning. “If we make progress in understanding intelligence, and if that helps us make progress in making ourselves smarter or in making machines that help us think better, we can solve all other problems more easily.”

Brain training 101

Many studies have reported positive results from brain training, and there is now a thriving industry devoted to selling tools and games such as Lumosity and BrainHQ. Yet the science behind brain training to improve intelligence remains controversial.

A case in point is the “n-back” working memory task, in which subjects are presented with a rapid sequence of letters or visual patterns, and must report whether the current item matches the last, last-but-one, last-but-two, and so on. The field of brain training received a boost in 2008 when a widely discussed study claimed that a few weeks of training on a challenging version of this task could boost fluid intelligence, the ability to solve novel problems. The report generated excitement and optimism when it first appeared, but several subsequent attempts to reproduce the findings have been unsuccessful.
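The matching rule at the heart of the task can be sketched in a few lines of Python. This is purely an illustration of the rule, not code from the study itself:

```python
# Illustrative sketch of the n-back matching rule: an item is a "match"
# if it equals the item presented n positions earlier in the sequence.
def n_back_hits(sequence, n):
    """Return the indices where the current item matches the item n back."""
    return [i for i in range(n, len(sequence))
            if sequence[i] == sequence[i - n]]

# In a 2-back task on the sequence A, B, A, B, the items at positions
# 2 and 3 (zero-indexed) each match the item two steps before them.
print(n_back_hits(list("ABAB"), 2))  # [2, 3]
```

Raising n makes the task sharply harder, which is why training studies typically use an adaptive version that increases n as performance improves.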

Among those unable to confirm the result was McGovern Investigator John Gabrieli, who recruited 60 young adults and trained them for forty minutes a day for four weeks on an n-back task similar to the one used in the original study.

Six months later, Gabrieli re-evaluated the participants. “They got amazingly better at the difficult task they practiced. We have great imaging data showing changes in brain activation as they performed the task from before to after,” says Gabrieli. “And yet, that didn’t help them do better on any other cognitive abilities we could measure, and we measured a lot of things.”

The results don’t completely rule out the value of n-back training, says Gabrieli. It may be more effective in children, or in populations with a lower average intelligence than the individuals (mostly college students) who were recruited for Gabrieli’s study. The prospect that training might help disadvantaged individuals holds strong appeal. “If you could raise the cognitive abilities of a child with autism, or a child who is struggling in school, the data tells us that their life would be a step better,” says Gabrieli. “It’s something you would wish for people, especially for those where something is holding them back from the expression of their other abilities.”

Music for the brain

The concept of early intervention is now being tested by Desimone, who has teamed with Chinese colleagues at the recently-established IDG/McGovern Institute at Beijing Normal University to explore the effect of music training on the cognitive abilities of young children.

The researchers recruited 100 children at a neighborhood kindergarten in Beijing, and provided them with a semester-long intervention, randomly assigning children either to music training or (as a control) to additional reading instruction. Unlike the so-called “Mozart Effect,” a scientifically unsubstantiated claim that passive listening to music increases intelligence, the new study requires active learning through daily practice. Several smaller studies have reported cognitive benefits from music training, and Desimone finds the idea plausible given that musical cognition involves several mental functions that are also implicated in intelligence. The study is nearly complete, and results are expected to emerge within a few months. “We’re also collecting data on brain activity, so if we see improvements in the kids who had music training, we’ll also be able to ask about its neural basis,” says Desimone. The results may also have immediate practical implications, since the study design reflects decisions that schools must make in determining how children spend their time. “Many schools are deciding to cut their arts and music programs to make room for more instruction in academic core subjects, so our study is relevant to real questions schools are facing.”

Intelligent classrooms

In another school-based study, Gabrieli’s group recently raised questions about the benefits of “teaching to the test.” In this study, postdoc Amy Finn evaluated over 1,300 eighth-graders in the Boston public schools, some enrolled at traditional schools and others at charter schools that emphasize standardized test score improvements. The researchers wanted to find out whether higher test scores were accompanied by improvements in the cognitive skills linked to intelligence. (Charter school students are selected by lottery, meaning that any results are unlikely to reflect preexisting differences between the two groups of students.) As expected, charter school students showed larger improvements in test scores (relative to their scores from four years earlier). But when Finn and her colleagues measured key aspects of intelligence, such as working memory, processing speed, and reasoning, they found no difference between the students who enrolled in charter schools and those who did not. “You can look at these skills as the building blocks of cognition. They are useful for reasoning in a novel situation, an ability that is really important for learning,” says Finn. “It’s surprising that school practices that increase achievement don’t also increase these building blocks.”

Gabrieli remains optimistic that it will eventually be possible to design scientifically based interventions that can raise children’s abilities. Allyson Mackey, a postdoc in his lab, is studying the use of games to exercise cognitive skills in a classroom setting. As a graduate student at the University of California, Berkeley, Mackey had studied the effects of games such as “Chocolate Fix,” in which players match shapes and flavors, represented by color, to positions in a grid based on hints, such as, “the upper left position is strawberry.”

These games gave children practice at thinking through and solving novel problems, and at the end of Mackey’s study, the students—from second through fourth grades—showed improved measures of skills associated with intelligence. “Our results suggest that these cognitive skills are specifically malleable, although we don’t yet know what the active ingredients were in this program,” says Mackey, who speaks of the interventions as if they were drugs, with dosages, efficacies and potentially synergistic combinations to be explored. Mackey is now working to identify the most promising interventions—those that boost cognitive abilities, work well in the classroom, and are engaging for kids—to try in Boston charter schools. “It’s just the beginning of a three-year process to methodically test interventions to see if they work,” she says.

Brain training…for machines

While Desimone, Gabrieli and their colleagues look for ways to raise human intelligence, Poggio, who directs the MIT-based Center for Brains, Minds and Machines, is trying to endow computers with more human-like intelligence. Computers can already match human performance on some specific tasks such as chess. Programs such as Apple’s “Siri” can mimic human speech interpretation, not perfectly but well enough to be useful. Computer vision programs are approaching human performance at rapid object recognition, and one such system, developed by one of Poggio’s former postdocs, is now being used to assist car drivers. “The last decade has been pretty magical for intelligent computer systems,” says Poggio.

Like children, these intelligent systems learn from past experience. But compared to humans or other animals, machines tend to be very slow learners. For example, the visual system for automobiles was trained by presenting it with millions of images—traffic light, pedestrian, and so on—that had already been labeled by humans. “You would never present so many examples to a child,” says Poggio. “One of our big challenges is to understand how to make algorithms in computers learn with many fewer examples, to make them learn more like children do.”

To accomplish this and other goals of machine intelligence, Poggio suspects that the work being done by Desimone, Gabrieli and others to understand the neural basis of intelligence will be critical. But he is not expecting any single breakthrough that will make everything fall into place. “A century ago,” he says, “scientists pondered the problem of life, as if ‘life’—what we now call biology—were just one problem. The science of intelligence is like biology. It’s a lot of problems, and a lot of breakthroughs will have to come before a machine appears that is as intelligent as we are.”

Listening to neurons

When McGovern Investigator Mark Harnett gets a text from his collaborator at Massachusetts General Hospital, it’s time to stock up on Red Bull and coffee.

Because very soon—sometimes within a few hours—a chunk of living human brain will arrive at the lab, marking the start of an epic session recording the brain’s internal dialogue. And it continues non-stop until the neurons die.

“That first time, we went for 54 hours straight,” Harnett says.

Now two years old, his lab is trying to answer fundamental questions about how the brain’s basic calculations lead to the experience of daily life. Most neuroscientists consider the neuron to be the brain’s basic computational unit, but Harnett is focusing on the internal workings of individual neurons, and in particular, the role of dendrites, the elaborate branching structures that are the most distinctive feature of these cells.

Years ago, scientists viewed dendrites as essentially passive structures, receiving neurochemical information that they translated into electrical signals and sent to the cell body, or soma. The soma was the calculator, summing up the data and deciding whether or not to produce an output signal, known as an action potential. Now though, evidence has accumulated showing dendrites to be capable of processing information themselves, leading to a new and more expansive view in which each individual neuron contains multiple computational elements.
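The shift between these two views can be caricatured in code. The sketch below is purely illustrative, not a biophysical model: the first function treats the whole neuron as a single summing-and-thresholding unit, while the second treats each dendritic branch as its own nonlinear subunit whose outputs are combined at the soma.

```python
# Classical view: the soma sums all synaptic inputs and fires an
# action potential (output 1) only if the total crosses a threshold.
def point_neuron(inputs, threshold=1.0):
    return 1 if sum(inputs) >= threshold else 0

# Newer view: each dendritic branch first applies its own threshold
# nonlinearity, and the soma then combines the branch outputs,
# effectively forming a small two-layer network inside one neuron.
def two_layer_neuron(branches, branch_threshold=0.5, soma_threshold=2.0):
    branch_outputs = [point_neuron(b, branch_threshold) for b in branches]
    return point_neuron(branch_outputs, soma_threshold)
```

In the two-layer version, where inputs land on the dendritic tree matters: two weak inputs on the same branch can together trigger that branch, while the same inputs scattered across branches might trigger nothing.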

Due to the enormous technical challenge such work demands, however, scientists still don’t fully understand the biophysical mechanisms behind dendritic computations.

They understand even less how these mechanisms operate in, and contribute to, an awake, thinking brain—or how well the mouse models that have defined the field translate to the vastly more powerful computational abilities of the human brain.

Harnett is in an ideal position to untangle some of these questions, owing to a rare combination of the technology and skills needed to record from dendrites—a feat in itself—as well as access to animals and human tissue, and a lab eager for a challenge.

Human interest

Most previous research on dendrites has been done in rats or mice, and Harnett’s collaboration with MGH addresses a deceptively simple question: are the brain cells of rodents really equivalent to those of humans?

Researchers have generally assumed that they are similar, but no one has studied the question in depth. It is known, however, that human dendrites are longer and more structurally complex, and Harnett suspects that these shape differences may reflect the existence of additional computational mechanisms.

To investigate this question, Harnett reached out to Sydney Cash, a neurologist at MGH and Harvard Medical School. Cash was intrigued. He’d been studying epilepsy patients with electrodes implanted in their brains to locate seizures before brain surgery, and he was seeing odd quirks in his data. The neurons seemed to be more connected than animal data would suggest, but he had no way to investigate. “And so I thought this collaboration would be fantastic,” he says. “The amazing electrophysiology that Mark’s group can do would be able to give us that insight into the behavior of these individual human neurons.”

So Cash arranged for Harnett to receive tissue from the brains of patients undergoing lobe resections—removal of chunks of tissue associated with seizures, which often works for patients for whom other treatments have failed.

Logistics were challenging—how to get a living piece of brain from one side of the Charles River to the other before it dies? Harnett initially wanted to use a drone; the legal department shot down that idea. Then he wanted to preserve the delicate tissue in bubbling oxygenated solution. But carting cylinders of hazardous compressed gas around the city was also a non-starter. “So, on the first one, we said to heck with it, we’ll just see if it works at all,” Harnett says. “We threw the brain into a bottle of ice-cold solution, screwed the top on, and told an Uber driver to go fast.”

When the cargo reaches the lab, the team starts the experiments immediately to collect as much data as possible before the neurons fail. This process involves the kind of arduous work that Harnett’s first graduate student, Lou Beaulieu-Laroche, relishes. Indeed, it’s why the young Quebecois wanted to join Harnett’s lab in the first place. “Every time I get to do this recording, I get so excited I don’t even need to sleep,” he says.

First, Beaulieu-Laroche places the precious tissue into a nutrient solution, carefully slicing it at the correct angle to reveal the neurons of interest. Then he begins patch clamp recordings, pressing a tiny glass pipette against the surface of a single neuron to record its electrical activity. Most labs patch the larger soma; few can successfully patch the far finer dendrites. Beaulieu-Laroche can record from two locations on a single dendrite simultaneously.

“It’s tricky experiment on top of tricky experiment,” Harnett says. “If you don’t succeed at every step, you get nothing out of it.” But do it right, and it’s a human neuron laid bare, whirring calculations visible in real-time.

The lab has collected samples from just seven surgeries so far, but a fascinating picture is emerging. For instance, spikes of activity in some human dendrites don’t seem to show up in the main part of the cell, a peculiar decoupling mice don’t show. What it means is still unclear, but it may be a sign of Harnett’s theorized intermediary computations between the distant dendrites and the cell body.

“It could be that the dendrite network of a human neuron is a little more complicated—maybe a little bit smarter,” Beaulieu-Laroche speculates. “And maybe that contributes to our intelligence.”

Active questioning

The human work is inherently limited to studying cells in a dish, and that gets to Harnett’s real focus. “A huge amount of time and effort has been spent identifying what dendrites are capable of doing in brain slices,” he says. Far less effort has gone into studying what they do in the behaving brain. It’s like exhaustively examining a set of tires on a car without ever testing its performance on the road.

To get at this problem, Harnett studies spatial navigation in mice, a task that requires the mouse brain to combine information about vision, motion, and self-orientation into a holistic experience. Scientists don’t know how this integration happens, but Harnett thinks it is an ideal test bed for exploring how dendritic processes contribute to complex behavioral computations. “We know the different types of information must eventually converge, but we think each type could be processed separately in the dendrites before being combined in the cell body,” he says.

The difficult part is catching neurons in the act of computing. This requires a two-pronged approach, combining fine-grained dendritic biophysics—like what Beaulieu-Laroche does in human cells—with behavioral studies and imaging in awake mice.

Marie-Sophie van der Goes, Harnett’s second graduate student, took up the challenge when she joined the lab in early 2016. From previous work, she knew spatial integration happened in a structure called the retrosplenial cortex (RSC), but the region was not well studied.

“We didn’t know where the information entering the RSC came from, or how it was organized,” she explains.

She and laboratory technician Derrick Barnagian used retrograde tracing methods to identify inputs to the RSC, and teamed up with postdoc Mathieu Lafourcade to figure out how that information was organized and processed. Vision, motor and orientation systems are all connected to the region, as expected, but the inputs are segregated, with visual and motor information, for example, arriving at different locations within the dendritic tree. According to the patch clamp data, this is likely to be very important, since different dendrites appear to process information in different ways.

The next step for Van der Goes will be to record from neurons as mice perform a navigation task in a virtual maze. Two other postdocs, Jakob Voigts and Lukas Fischer, have already begun looking at similar questions. Working with mice genetically engineered so that their neurons light up when activated, the researchers implant a small glass window in the skull, directly over the RSC. Peering in with a two-photon microscope, they can watch, in real time, the activity of individual neurons and dendrites, as the animal processes different stimuli, including visual cues, sugar-water reward, and the sensation of its feet running along the ground.

It’s not a perfect system; the mouse’s head has to be held absolutely still for the scope to work. For now, they use a virtual reality maze and treadmill, although thanks to an ingenious rig Voigts invented, the set-up is poised to undergo a key improvement to make it feel more life-like for the mouse, and thus more accurate for the researchers.

Human questions

As much as the lab has accomplished so far, Harnett considers the people his greatest achievement. “Lab culture’s critical, in my opinion,” Harnett says. “How it manifests can really affect who wants to join your particular pirate crew.”

And his lab, he says, “is a wonderful environment and my team is incredibly successful in getting hard things to work.”

Everyone works on each other’s projects, coming in on Friday nights and weekend mornings, while ongoing jokes, lab memes, and shared meals bind the team together. Even Harnett prefers to bring his laptop to the crowded student and postdoc office rather than work in his own spacious quarters. With only three Americans in the lab—including Harnett—the space is rich in languages and friendly jabs. Canadian Beaulieu-Laroche says France-born Lafourcade speaks French like his grandmother; Lafourcade insists he speaks the best French—and the best Spanish. “But the Germans never speak German,” he wonders.

And there’s another uniting factor as well—a passion for asking big questions in life. Perhaps it is because many of the lab members are internationally educated and have studied more philosophy and literature than a typical science student. “Marie randomly dropped a Marcus Aurelius quote on me the other day,” Harnett says. He’d been flabbergasted. “But then I wondered, what is it about the fact that they’ve ended up here and we work together so incredibly well? I think it’s that we all think about this stuff—it gives us a shared humanism in the laboratory.”

A sense of timing

The ability to measure time and to control the timing of actions is critical for almost every aspect of behavior. Yet the mechanisms by which our brains process time are still largely mysterious.

We experience time on many different scales—from milliseconds to years—but of particular interest is the middle range, the scale of seconds over which we perceive time directly, and over which many of our actions and thoughts unfold.

“We speak of a sense of time, yet unlike our other senses there is no sensory organ for time,” says McGovern Investigator Mehrdad Jazayeri. “It seems to come entirely from within. So if we understand time, we should be getting close to understanding mental processes.”

Singing in the brain

Emily Mackevicius comes to work in the early morning because that’s when her birds are most likely to sing. A graduate student in the lab of McGovern Investigator Michale Fee, she is studying zebra finches, songbirds that learn to sing by copying their fathers. Bird song involves a complex and precisely timed set of movements, and Mackevicius, who plays the cello in her spare time, likens it to musical performance. “With every phrase, you have to learn a sequence of finger movements and bowing movements, and put it all together with exact timing. The birds are doing something very similar with their vocal muscles.”

A typical zebra finch song lasts about one second, and consists of several syllables, produced at a rate similar to the syllables in human speech. Each song syllable involves a precisely timed sequence of muscle commands, and understanding how the bird’s brain generates this sequence is a central goal for Fee’s lab. Birds learn it naturally without any need for training, making it an ideal model for understanding the complex action sequences that represent the fundamental “building blocks” of behavior.

Some years ago Fee and colleagues made a surprising discovery that has shaped their thinking ever since. Within a part of the bird brain called HVC, they found neurons that fire a single short burst of pulses at exactly the same point on every repetition of the song. Each burst lasts about a hundredth of a second, and different neurons fire at different times within the song. With about 20,000 neurons in HVC, it was easy to imagine that there would be specific neurons active at every point in the song, meaning that each time point could be represented by the activity of a handful of individual neurons.

Proving this was not easy—“we had to wait about ten years for the technology to catch up,” says Fee—but they finally succeeded last year, when students Tatsuo Okubo and Galen Lynch analyzed recordings from hundreds of individual HVC neurons, and found that they do indeed fire in a fixed sequence, covering the entire song period.

“We think it’s like a row of falling dominoes,” says Fee. “The neurons are connected to each other so that when one fires it triggers the next one in the chain.” It’s an appealing model, because it’s easy to see how a chain of activity could control complex action sequences, simply by connecting individual time-stamp neurons to downstream motor neurons. With the correct connections, each movement is triggered at the right time in the sequence. Fee believes these motor connections are learned through trial and error—like babies babbling as they learn to speak—and a separate project in his lab aims to understand how this learning occurs.
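The domino picture translates almost directly into code. Below is a minimal sketch of a chain of time-stamp neurons driving motor commands; the neuron counts, the wiring, and the command positions are invented for illustration, not taken from Fee's actual model:

```python
import numpy as np

# A minimal "falling dominoes" chain: neuron i excites neuron i+1,
# and a few motor neurons listen in at fixed points along the chain.
# All numbers here are invented for illustration.
n = 10
W = np.zeros((n, n))
for i in range(n - 1):
    W[i + 1, i] = 1.0            # domino i knocks over domino i+1

# Motor wiring: motor neurons 0, 1, 2 listen to chain positions 0, 4, 9.
M = np.zeros((3, n))
M[0, 0] = M[1, 4] = M[2, 9] = 1.0

activity = np.zeros(n)
activity[0] = 1.0                # knock over the first domino
events = []
for t in range(n):
    fired = np.flatnonzero(M @ activity)     # motor neurons driven now
    events.extend((t, int(m)) for m in fired)
    activity = W @ activity                  # burst moves down the chain

print(events)  # [(0, 0), (4, 1), (9, 2)]: each command at its fixed time
```

With the connections in place, timing comes for free: each movement is triggered simply because its time-stamp neuron fires at a fixed point in the chain.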

But the domino metaphor also raises another question: who sets up the dominoes in the first place? Mackevicius and Okubo, along with summer student Hannah Payne, set out to answer this question, asking how HVC becomes wired to produce these precisely timed chain reactions.

Mackevicius, who studied math as an undergraduate before turning to neuroscience, developed computer simulations of the HVC neuronal network, and Okubo ran experiments to test the predictions, recording from young birds at different stages in the learning process. “We found that setting up a chain is surprisingly easy,” says Mackevicius. “If we start with a randomly connected network and some realistic assumptions about the ‘plasticity rules’ by which synapses change with repeated use, these chains emerge spontaneously. All you need is to give them a push—like knocking over the first domino.”

Their results also suggested how a young bird learns to produce different syllables, as it progresses from repetitive babbling to a more adult-like song. “At first, there’s just one big burst of neural activity, but as the song becomes more complex, the activity gradually spreads out in time and splits into different sequences, each controlling a different syllable. It’s as if you started with lots of dominos all clumped together, and then gradually they become sorted into different rows.”

Does something similar happen in the human brain? “It seems very likely,” says Fee. “Many of our movements are precisely timed—think about speaking a sentence, playing a musical instrument, or delivering a tennis serve. Even our thoughts often happen in sequences. Things happen faster in birds than in mammals, but we suspect the underlying mechanisms will be very similar.”

Speed control

One floor above the Fee lab, Mehrdad Jazayeri is also studying how time controls actions, using humans and monkeys rather than birds. Like Fee, Jazayeri comes from an engineering background, and his goal is to understand, with an engineer’s level of detail, how we perceive time and use it flexibly to control our actions.

To begin to answer this question, Jazayeri trained monkeys to remember time intervals of a few seconds or less, and to reproduce them by pressing a button or making an eye movement at the correct time after a visual cue appeared on a screen. He then recorded brain activity as the monkeys performed this task, to find out how the brain measures elapsed time. “There were two prominent ideas in the field,” he explains. “One idea was that there is an internal clock, and that the brain can somehow count the accumulating ticks. Another class of models had proposed that there are multiple oscillators that come in and out of phase at different times.”

When they examined the recordings, however, the results did not fit either model. Despite searching across multiple brain areas, Jazayeri and his colleagues found no sign of ticking or oscillations. Instead, their recordings revealed complex patterns of activity, distributed across populations of neurons; moreover, as the monkey produced longer or shorter intervals, these activity patterns were stretched or compressed in time, to fit the overall duration of each interval. In other words, says Jazayeri, the brain circuits were able to adjust the speed with which neural signals evolve over time. He compares it to a group of musicians performing a complex piece of music. “Each player has their own part, which they can play faster or slower depending on the overall tempo of the music.”
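The stretching idea can be illustrated with a toy population model. In the sketch below (invented Gaussian responses, not the lab's recordings), each simulated neuron peaks at a fixed fraction of the interval, so one canonical pattern simply dilates or compresses with the interval's duration:

```python
import numpy as np

# Toy "temporal scaling": each simulated neuron's activity peaks at a
# fixed *fraction* of the interval, so one canonical pattern is simply
# played out faster or slower. The Gaussian bumps are invented, not data.
def population_activity(duration, n_neurons=5, dt=0.01):
    t = np.arange(0.0, duration, dt)
    phase = t / duration                      # normalized time in [0, 1)
    peaks = np.linspace(0.1, 0.9, n_neurons)  # preferred phases
    return np.exp(-((phase[:, None] - peaks) ** 2) / 0.005)

short = population_activity(0.5)   # a 0.5 s interval
long = population_activity(1.0)    # a 1.0 s interval: same pattern, stretched

# Peak times double with the interval, but peak phases coincide.
print(short.argmax(axis=0) * 0.01)   # peak times within the 0.5 s interval
print(long.argmax(axis=0) * 0.01)    # the same fractions of the 1.0 s interval
```

Every "player" keeps its part; only the tempo changes.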

Jazayeri is also using time as a window onto a broader question—how our perceptions and decisions are shaped by past experience. “It’s one of the great questions in neuroscience, but it’s not easy to study. One of the great advantages of studying timing is that it’s easy to measure precisely, so we can frame our questions in precise mathematical ways.”

The starting point for this work was a deceptively simple task, which Jazayeri calls “Ready-Set-Go.” In this task, the subject is given the first two beats of a regular rhythm (“Ready, Set”) and must then generate the third beat (“Go”) at the correct time. To perform this task, the brain must measure the duration between Ready and Set and then immediately reproduce it.

Humans can do this fairly accurately, but not perfectly—their response times are imprecise, presumably because there is some “noise” in the neural signals that convey timing information within the brain. In the face of this uncertainty, the optimal strategy (known mathematically as Bayesian inference) is to bias the time estimates based on prior expectations, and this is exactly what happened in Jazayeri’s experiments. If the intervals in previous trials were shorter, people tended to underestimate the next interval, whereas if the previous intervals were longer, they overestimated. In other words, people use their memory to improve their time estimates.
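For a Gaussian prior and Gaussian measurement noise, this biasing strategy has a simple closed form: the optimal estimate is a precision-weighted average of the noisy measurement and the prior mean. The widths and interval values below are invented for illustration, not fitted to the experiments:

```python
# Bayesian-observer sketch: with a Gaussian prior over intervals and
# Gaussian measurement noise, the optimal estimate is a precision-
# weighted average of the measurement and the prior mean.
# All the numbers below are invented for illustration.

def bayes_estimate(measured, prior_mean, prior_sd=0.1, noise_sd=0.1):
    w = noise_sd**2 / (noise_sd**2 + prior_sd**2)   # weight on the prior
    return (1 - w) * measured + w * prior_mean

# The same 0.8 s measurement, after different recent trial histories:
after_short_trials = bayes_estimate(0.8, prior_mean=0.6)  # biased below 0.8 s
after_long_trials = bayes_estimate(0.8, prior_mean=1.0)   # biased above 0.8 s

print(after_short_trials, after_long_trials)
```

The noisier the measurement, the more weight shifts onto the prior, which is why imprecise timers show the strongest pull toward their recent history.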

Monkeys can also learn this task and show similar biases, providing an opportunity to study how the brain establishes and stores these prior expectations, and how these expectations influence subsequent behavior. Again, Jazayeri and colleagues recorded from large numbers of neurons during the task. The resulting activity patterns are complex and not easily described in words, but in mathematical terms, the activity forms a geometric structure known as a manifold. “Think of it as a curved surface, analogous to a cylinder,” he says. “In the past, people could not see it because they could only record from one or a few neurons at a time. We have to measure activity across large numbers of neurons simultaneously if we want to understand the workings of the system.”

Computing time

To interpret their data, Jazayeri and his team often turn to computer models based on artificial neural networks. “These models are a powerful tool in our work because we can fully reverse-engineer them and gain insight into the underlying mechanisms,” he explains. His lab has now succeeded in training a recurrent neural network that can perform the Ready-Set-Go task, and they have found that the model develops a manifold similar to the real brain data. This has led to the intriguing conjecture that memory of past experiences can be embedded in the structure of the manifold.

Jazayeri concludes: “We haven’t connected all the dots, but I suspect that many questions about brain and behavior will find their answers in the geometry and dynamics of neural activity.” Jazayeri’s long-term ambition is to develop predictive models of brain function. As an analogy, he says, think of a pendulum. “If we know its current state—its position and speed—we can predict with complete confidence what it will do next, and how it will respond to a perturbation. We don’t have anything like that for the brain—nobody has been able to do that, not even for the simplest brain functions. But that’s where we’d eventually like to be.”

A clock within the brain?

It is not yet clear how the mechanisms studied by Fee and Jazayeri are related. “We talk together often, but we are still guessing how the pieces fit together,” says Fee. But one thing they both agree on is the lack of evidence for any central clock within the brain. “Most people have this intuitive feeling that time is a unitary thing, and that there must be some central clock inside our head, coordinating everything like the conductor of an orchestra or the clock inside your computer,” says Jazayeri. “Even many experts in the field believe this, but we don’t think it’s right.” Rather, his work and Fee’s both point to the existence of separate circuits for different time-related behaviors, such as singing. If there is no clock, how do the different systems work together to create our apparently seamless perception of time? “It’s still a big mystery,” says Jazayeri. “Questions like that are what make neuroscience so interesting.”

A Google map of the brain

At the start of the twentieth century, Santiago Ramón y Cajal’s drawings of brain cells under the microscope revealed a remarkable diversity of cell types within the brain. Through sketch after sketch, Cajal showed that the brain was not, as many believed, a web of self-similar material, but rather is composed of billions of cells of many different sizes, shapes, and interconnections.

Yet more than a hundred years later, we still do not know how many cell types make up the human brain. Despite decades of study, the challenge remains daunting, as the brain’s complexity has overwhelmed attempts to describe it systematically or to catalog its parts.

Now, however, this appears about to change, thanks to an explosion of new technical advances in areas ranging from DNA sequencing to microfluidics to computing and microscopy. For the first time, a parts list for the human brain appears to be within reach.

Why is this important? “Until we know all the cell types, we won’t fully understand how they are connected together,” explains McGovern Investigator Guoping Feng. “We know that the brain’s wiring is incredibly complicated, and that the connections are key to understanding how it works, but we don’t yet have the full picture. That’s what we are aiming for. It’s like making a Google map of the brain.”

Identifying the cell types is also important for understanding disease. As genetic risk factors for different disorders are identified, researchers need to know where they act within the brain, and which cell types and connections are disrupted as a result. “Once we know that, we can start to think about new therapeutic approaches,” says Feng, who is also an institute member of the Broad Institute, where he leads the neurobiology program at the Stanley Center for Psychiatric Disorders Research.

Drop by drop

In 2012, computational biologist Naomi Habib arrived from the Hebrew University of Jerusalem to join the labs of McGovern Investigator Feng Zhang and his collaborator Aviv Regev at the Broad Institute. Habib’s plan was to learn new RNA methods as they were emerging. “I wanted to use these powerful tools to understand this fascinating system that is our brain,” she says.

Her rationale was simple, at least in theory. All cells of an organism carry the same DNA instructions, but the instructions are read out differently in each cell type. Stretches of DNA corresponding to individual genes are copied, sometimes thousands of times, into RNA molecules that in turn direct the synthesis of proteins. Differences in which sequences get copied are what give cells their identities: brain cells express RNAs that encode brain proteins, while blood cells express different RNAs, and so on. A given cell can express thousands of genes, providing a molecular “fingerprint” for each cell type.

Analyzing these RNAs can provide a great deal of information about the brain, including potentially the identities of its constituent cell types. But doing this is not easy, because the different cell types are mixed together like salt and pepper within the brain. For many years, studying brain RNA meant grinding up the tissue—an approach that has been compared to studying smoothies to learn about fruit salad.

As methods improved, it became possible to study the tiny quantities of RNA contained within single cells. This opened the door to studying the differences between individual cells, but it required painstaking manipulation of many samples, a slow and laborious process.

A breakthrough came in 2015, with the development of automated methods based on microfluidics. One of these, known as Drop-seq (droplet-based sequencing), was pioneered by Steve McCarroll at Harvard, in collaboration with Regev’s lab at Broad. In this method, individual cells are captured in tiny water droplets suspended in oil. Vast numbers of droplets are automatically pumped through tiny channels, where each undergoes its own separate sequencing reactions. By running multiple samples in parallel, the machines can process tens of thousands of cells and billions of sequences, within hours rather than weeks or months. The power of the method became clear when, in an experiment on mouse retina, the researchers were able to identify almost every cell type that had ever been described in the retina, effectively recapitulating decades of work in a single experiment.

Drop-seq works well for many tissues, but Habib wanted to apply it to the adult brain, which posed a unique challenge. Mature neurons often bear elaborate branches that become intertwined like tree roots in a forest, making it impossible to separate individual cells without damage.

Nuclear option

So Habib turned to another idea. RNA is made in the nucleus before moving to the cytoplasm, and because nuclei are compact and robust it is easy to recover them intact in large numbers, even from difficult tissues such as brain. The amount of RNA contained in a single nucleus is tiny, and Habib didn’t know if it would be enough to be informative, but Zhang and Regev encouraged her to keep going. “You have to be optimistic,” she says. “You have to try.”

Fortunately, the experiment worked. In a paper with Zhang and Regev, she was able to isolate nuclei from newly formed neurons in the adult mouse hippocampus (a brain structure involved in memory), and by analyzing their RNA profiles individually she could order them in a series according to their age, revealing their developmental history from birth to maturity.

Now, after much further experimentation, Habib and her colleagues have managed to apply the droplet method to nuclei, making it possible for the first time to analyze huge numbers of cells from adult brain—at least ten times more than with previous methods.

This opens up many new avenues, including the study of human postmortem tissue, given that RNA in nuclei can survive for years in frozen samples. Habib is already starting to examine tissue taken at autopsy from patients with Alzheimer’s and other neurodegenerative diseases. “The neurons are degenerating, but the other cells around them could also be contributing to the degenerative process,” she says. “Now we have these tools, we can look at what happens during the progression of the disease.”

Computing cells

Once the sequencing is completed, the results are analyzed using sophisticated computational methods: data from individual cells are visualized as colored dots, clustered on a graph according to their statistical similarities. But because the cells were dissociated at the start of the experiment, information about their appearance and origin within the brain is lost.

To find out how these abstract displays correspond to the visible cells of the brain, Habib teamed up with Yinqing Li, a former graduate student with Zhang who is now a postdoc in the lab of Guoping Feng. Li began with existing maps from the Allen Institute, a public repository with thousands of images showing expression patterns for individual genes within mouse brain. By comparing these maps with the molecular fingerprints from Habib’s nuclear RNA sequencing experiments, Li was able to make a map of where in the brain each cell was likely to have come from.
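The matching step can be caricatured as a nearest-fingerprint lookup. In the toy sketch below, each cluster's expression profile is assigned to the atlas region it most resembles by cosine similarity; the genes, regions, and expression values are all invented, and the real analysis is far more sophisticated:

```python
import numpy as np

# Toy fingerprint matching: assign a sequenced cell cluster to the
# atlas region whose gene-expression profile it most resembles.
# The gene values below are invented for illustration only.
atlas = {                       # mean expression of 4 marker genes
    "hippocampus": np.array([9.0, 1.0, 0.5, 2.0]),
    "cortex":      np.array([1.0, 8.0, 2.0, 0.5]),
    "striatum":    np.array([0.5, 1.0, 9.0, 1.0]),
}

def best_region(fingerprint):
    """Pick the atlas region with the highest cosine similarity."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(atlas, key=lambda r: cosine(fingerprint, atlas[r]))

cluster = np.array([8.5, 1.2, 0.4, 1.8])   # a cluster's RNA fingerprint
print(best_region(cluster))                 # prints "hippocampus"
```

Scaled up to thousands of genes and images, the same logic lets a molecular fingerprint point back to a location in the brain.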

It was a good first step, but still not perfect. “What we really need,” he says, “is a method that allows us to see every RNA in individual cells. If we are studying a brain disease, we want to know which neurons are involved in the disease process, where they are, what they are connected to, and which special genes might be involved so that we can start thinking about how to design a drug that could alter the disease.”

Expanding horizons

So Li partnered with Asmamaw (Oz) Wassie, a graduate student in the lab of McGovern Investigator Ed Boyden, to tackle the problem. Wassie had previously studied bioengineering as an MIT undergraduate, where he had helped build an electronic “artificial nose” for detecting trace chemicals in air. With support from a prestigious Hertz Fellowship, he joined Boyden’s lab, where he is now working on the development of a method known as expansion microscopy.

In this method, a sample of tissue is embedded with a polymer that swells when water is added. The entire sample expands in all directions, allowing scientists to see fine details such as connections between neurons, using an ordinary microscope. Wassie recently helped develop a way to anchor RNA molecules to the polymer matrix, allowing them to be physically secured during the expansion process. Now, within the expanded samples he can see the individual molecules using a method called fluorescent in situ hybridization (FISH), in which each RNA appears as a glowing dot under the microscope. Currently, he can label only a handful of RNA types at once, but by using special sets of probes, applied sequentially, he thinks it will soon be possible to distinguish thousands of different RNA sequences.

“That will help us to see what each cell looks like, how they are connected to each other, and what RNAs they contain,” says Wassie. By combining this information with the RNA expression data generated by Li and Habib, it will be possible to reveal the organization and fine structure of complex brain areas and perhaps to identify new cell types that have not yet been recognized.

Looking ahead

Li plans to apply these methods to a brain structure known as the thalamic reticular nucleus (TRN) – a sheet of tissue, about ten neurons thick in mice, that sits on top of the thalamus and close to the cortex. The TRN is not well understood, but it is important for controlling sleep, attention and sensory processing, and it has caught the interest of Feng and other neuroscientists because it expresses a disproportionate number of genes implicated in disorders such as autism, attention deficit hyperactivity disorder, and intelligence deficits. Together with Joshua Levin’s group at Broad, Li has already used nuclear RNA sequencing to identify the cell types in the TRN, and he has begun to examine them within intact brain using the expansion techniques. “When you map these precise cell types back to the tissue, you can integrate the gene expression information with everything else, like electrophysiology, connectivity, morphology,” says Li. “Then we can start to ask what’s going wrong in disease.”

Meanwhile, Feng is already looking beyond the TRN, and planning how to scale the approach to other structures and eventually to the entire brain. He returns to the metaphor of a Google map. “Microscopic images are like satellite photos,” he says. “Now with expansion microscopy we can add another layer of information, like property boundaries and individual buildings. And knowing which RNAs are in each cell will be like seeing who lives in those buildings. I think this will completely change how we view the brain.”

Finding a way in

Our perception of the world arises within the brain, based on sensory information that is sometimes ambiguous, allowing more than one interpretation. Familiar demonstrations of this point include the famous Necker cube and the “duck-rabbit” drawing, in which two different interpretations flip back and forth over time.

Another example is binocular rivalry, in which the two eyes are presented with different images that are perceived in alternation. Several years ago, this phenomenon caught the eye of Caroline Robertson, who is now a Harvard Fellow working in the lab of McGovern Investigator Nancy Kanwisher. Back when she was a graduate student at Cambridge University, Robertson realized that binocular rivalry might be used to probe the basis of autism, among the most mysterious of all brain disorders.

Robertson’s idea was based on the hypothesis that autism involves an imbalance between excitation and inhibition within the brain. Although widely supported by indirect evidence, this has been very difficult to test directly in human patients. Robertson realized that binocular rivalry might provide a way to perform such a test. The perceptual switches that occur during rivalry are thought to involve competition between different groups of neurons in the visual cortex, each group reinforcing its own interpretation via excitatory connections while suppressing the alternative interpretation through inhibitory connections. Thus, if the balance is altered in the brains of people with autism, the frequency of switching might also be different, providing a simple and easily measurable marker of the disease state.

To test this idea, Robertson recruited adults with and without autism, and presented them with two distinct and differently colored images in each eye. As expected, their perceptions switched back and forth between the two images, with short periods of mixed perception in between. This was true for both groups, but when she measured the timing of these switches, Robertson found that individuals with autism do indeed see the world in a measurably different way than people without the disorder. Individuals with autism cycle between the left and right images more slowly, with the intervening periods of mixed perception lasting longer than in people without autism. The more severe their autistic symptoms, as determined by a standard clinical behavioral evaluation, the greater the difference.

Robertson had found a marker for autism that is more objective than current methods that involve one person assessing the behavior of another. The measure is immediate and relies on brain activity that happens automatically, without people thinking about it. “Sensation is a very simple place to probe,” she says.

A top-down approach

When she arrived in Kanwisher’s lab, Robertson wanted to use brain imaging to probe the basis for the perceptual phenomenon that she had discovered. With Kanwisher’s encouragement, she began by repeating the behavioral experiment with a new group of subjects, to check that her previous results were not a fluke. Having confirmed that the finding was real, she then scanned the subjects using an imaging method called Magnetic Resonance Spectroscopy (MRS), in which an MRI scanner is reprogrammed to measure concentrations of neurotransmitters and other chemicals in the brain. Kanwisher had never used MRS before, but when Robertson proposed the experiment, she was happy to try it. “Nancy’s the kind of mentor who could support the idea of using a new technique and guide me to approach it rigorously,” says Robertson.

For each of her subjects, Robertson scanned their brains to measure the amounts of two key neurotransmitters: glutamate, which is the main excitatory transmitter in the brain, and GABA, which is the main source of inhibition. When she compared the brain chemistry to the behavioral results in the binocular rivalry task, she saw something intriguing and unexpected. In people without autism, the amount of GABA in the visual cortex was correlated with the strength of the suppression, consistent with the idea that GABA enables signals from one eye to inhibit those from the other eye. But surprisingly, there was no such correlation in the autistic individuals—suggesting that GABA was somehow unable to exert its normal suppressive effect. It isn’t yet clear exactly what is going wrong in the brains of these subjects, but it’s an early flag, says Robertson. “The next step is figuring out which part of the pathway is disrupted.”

A bottom-up approach

Robertson’s approach starts from the top down, working backward from a measurable behavior to look for brain differences, but it isn’t the only way in. Another approach is to start with genes that are linked to autism in humans, and to understand how they affect neurons and brain circuits. This is the bottom-up approach of McGovern Investigator Guoping Feng, who studies a gene called Shank3 that codes for a protein that helps build synapses, the connections through which neurons send signals to each other. Several years ago Feng knocked out Shank3 in mice, and found that the mice exhibited behaviors reminiscent of human autism, including repetitive grooming, anxiety, and impaired social interaction and motor control.

These earlier studies involved a variety of different mutations that disabled the Shank3 gene. But when postdoc Yang Zhou joined Feng’s lab, he brought a new perspective. Zhou had come from a medical background and wanted to do an experiment more directly connected to human disease. So he suggested making a mouse version of a Shank3 mutation seen in human patients, and testing its effects.

Zhou’s experiment would require precise editing of the mouse Shank3 gene, previously a difficult and time-consuming task. But help was at hand, in the form of a collaboration with McGovern Investigator Feng Zhang, a pioneer in the development of genome-editing methods.

Using Zhang’s techniques, Zhou was able to generate mice with two different mutations: one that had been linked to human autism, and another that had been discovered in a few patients with schizophrenia.

The researchers found that mice with the autism-related mutation exhibited behavioral changes at a young age that paralleled behaviors seen in children with autism. They also found early changes in synapses within a brain region called the striatum. In contrast, mice with the schizophrenia-related gene appeared normal until adolescence, and then began to exhibit changes in behavior and also changes in the prefrontal cortex, a brain region that is implicated in human schizophrenia. “The consequences of the two different Shank3 mutations were quite different in certain aspects, which was very surprising to us,” says Zhou.

The fact that different mutations in just one gene can produce such different results illustrates exactly how complex these neuropsychiatric disorders can be. “Not only do we need to study different genes, but we also have to understand different mutations and which brain regions have what defects,” says Feng, who received funding from the Poitras Center for Affective Disorders Research and the Simons Center for the Social Brain. Robertson and Kanwisher were also supported by the Simons Center.

Surprising plasticity

The brain alterations that lead to autism are thought to arise early in development, long before the condition is diagnosed, raising concerns that it may be difficult to reverse the effects once the damage is done. With the Shank3 knockout mice, Feng and his team were able to approach this question in a new way, asking what would happen if the missing gene were to be restored in adulthood.

To find the answer, lab members Yuan Mei and Patricia Monteiro, along with Zhou, studied another strain of mice, in which the Shank3 gene was switched off but could be reactivated at any time by adding a drug to their diet. When adult mice were tested six weeks after the gene was switched back on, they no longer showed repetitive grooming behaviors, and they also showed normal levels of social interaction with other mice, despite having grown up without a functioning Shank3 gene. Examination of their brains confirmed that many of the synaptic alterations were also rescued when the gene was restored.

Not every symptom was reversed by this treatment; even after six weeks or more of restored Shank3 expression, the mice continued to show heightened anxiety and impaired motor control. But even these deficits could be prevented if the Shank3 gene was restored earlier in life, soon after birth.

The results are encouraging because they indicate a surprising degree of brain plasticity, persisting into adulthood. If the results can be extrapolated to human patients, they suggest that even in adulthood, autism may be at least partially reversible if the right treatment can be found. “This shows us the possibility,” says Zhou. “If we could somehow put back the gene in patients who are missing it, it could help improve their life quality.”

Converging paths

Robertson and Feng are approaching the challenge of autism from different starting points, but already there are signs of convergence. Feng is finding early signs that his Shank3 mutant mice may have an altered balance of inhibitory and excitatory circuits, consistent with what Robertson and Kanwisher have found in humans.

Feng is continuing to study these mice, and he also hopes to study the effects of a similar mutation in non-human primates, whose brains and behaviors are more similar to those of humans than rodents. Robertson, meanwhile, is planning to establish a version of the binocular rivalry test in animal models, where it is possible to alter the balance between inhibition and excitation experimentally (for example, via a genetic mutation or a drug treatment). If this leads to changes in binocular rivalry, it would strongly support the link to the perceptual changes seen in humans.

One challenge, says Robertson, will be to develop new methods to measure the perceptions of mice and other animals. “The mice can’t tell us what they are seeing,” she says. “But it would also be useful in humans, because it would allow us to study young children and patients who are non-verbal.”

A multi-pronged approach

The imbalance hypothesis is a promising lead, but no single explanation is likely to encompass all of autism, according to McGovern director Bob Desimone. “Autism is a notoriously heterogeneous condition,” he explains. “We need to try multiple approaches in order to maximize the chance of success.”

McGovern researchers are doing exactly that, with projects underway that range from scanning children to developing new molecular and microscopic methods for examining brain changes in animal disease models. Although genetic studies provide some of the strongest clues, Desimone notes that there is also evidence for environmental contributions to autism and other brain disorders. “One that’s especially interesting to us is maternal infection and inflammation, which in mice at least can affect brain development in ways we’re only beginning to understand.”

The ultimate goal, says Desimone, is to connect the dots and to understand how these diverse human risk factors affect brain function. “Ultimately, we want to know what these different pathways have in common,” he says. “Then we can come up with rational strategies for the development of new treatments.”

Bold new microscopies for the brain

McGovern researchers create unexpected new approaches to microscopy that are changing the way scientists look at the brain.

Ask McGovern Investigator Ed Boyden about his ten-year plan and you’ll get an immediate and straight-faced answer: “We would like to understand the brain.”

He means it. Boyden intends to map all of the cells in a brain, all of their connections, and even all of the molecules that form those connections and determine their strengths. He also plans to study how information flows through the brain and to use this to generate a working model. “I’d love to be able to load a map of an entire brain into a computer and see if we can simulate the brain,” he says.

Boyden likens the process to reverse-engineering a computer by opening it up and looking inside. The analogy, though not perfect, provides a sense of the enormity of the task ahead. As complicated as computers are, brains are far more complex, and they are also much harder to visualize, given the need to see features at multiple scales. For example, signals travel from cell to cell through synaptic connections that are measured in nanometers, but the signals are then propagated along nerve fibers that may span several centimeters—a difference of more than a million-fold. Modern microscopes make it possible to study features at one scale or the other, but not both together. Similarly, there are methods for visualizing electrical activity in single neurons or in whole brains, but there is no way to see both at once. So Boyden is building his own tools, and in the process is pushing the limits of imagination. “Our group is often trying to do the opposite of what other people do,” Boyden says.

Boyden’s new methods are part of a broader push to understand the brain’s connectivity, an objective that gained impetus two years ago with the President’s BRAIN Initiative, and with allied efforts such as the NIH-funded Human Connectome Project. Hundreds of researchers have already downloaded Boyden’s recently published protocols, including colleagues at the McGovern Institute who are using them to advance their studies of brain function and disease.

Just add water

Under the microscope, the brain section prepared by Jill Crittenden looks like a tight bundle of threads. The nerve fibers are from a mouse brain, from a region known to degenerate in humans with Parkinson’s disease. The loss of the tiny synaptic connections between these fibers may be the earliest signs of degeneration, so Crittenden, a research scientist who has been studying this disease for several years in the lab of McGovern Investigator Ann Graybiel, wants to be able to see them.

But she can’t. They are far too small—smaller than a wavelength of light, meaning they are beyond the limit for optical microscopy. To bring these structures into view, one of Boyden’s technologies, called expansion microscopy (ExM), simply makes the specimen bigger, allowing it to be viewed on a conventional laboratory microscope.

The idea is at once obvious and fantastical. “Expansion microscopy is the kind of thing scientists daydream about,” says Paul Tillberg, a graduate student in Boyden’s lab. “You either shrink the scientist or expand the specimen.”

Leaving Crittenden’s sample in place, Tillberg adds water. Minutes later, the tissue has expanded and become transparent, a ghostly and larger version of its former self.

Crittenden takes another look through the scope. “It’s like someone has loosened up all the fibers. I can see each one independently, and see them interconnecting,” she says. “ExM will add a lot of power to the tools we’ve developed for visualizing the connections we think are degenerating.”

It took Tillberg and his fellow graduate student Fei Chen several months of brainstorming to find a plausible way to make ExM a reality. They had found inspiration in the work of MIT physicist Toyoichi Tanaka, who in the 1970s had studied smart gels, polymers that rapidly expand in response to a change in environment. One familiar example is the absorbent material in baby diapers, and Boyden’s team turned to this substance for the expansion technique.

The process they devised involves several steps. The tissue is first labeled using fluorescent antibodies that bind to molecules of interest, and then it is impregnated with the gel-forming material. Once the gel has set, the fluorescent markers are anchored to the gel, and the original tissue sample is digested, allowing the gel to stretch evenly in all directions.

When water is added, the gel expands and the fluorescent markers spread out like a picture on a balloon. Remarkably, the 3D shapes of even the finest structures are faithfully preserved during the expansion, making it possible to see them using a conventional microscope. By labeling molecules with different colors, the researchers can even distinguish pre-synaptic from post-synaptic structures. Boyden plans eventually to use hundreds, possibly thousands, of colors, and to increase the expansion factor to 10 times original size, equivalent to a 1000-fold increase in volume.
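The arithmetic of expansion is simple but worth making explicit. Here is a back-of-the-envelope sketch (my own illustration; the ~300 nm diffraction limit and the ~4.5x figure for current ExM are assumptions, not numbers from this article) of how a linear expansion factor translates into volume increase and effective resolution:

```python
# Toy illustration of expansion microscopy arithmetic (assumed values,
# not figures from the article).

def volume_increase(linear_expansion):
    """Isotropic expansion scales volume by the cube of the linear factor."""
    return linear_expansion ** 3

def effective_resolution_nm(diffraction_limit_nm, linear_expansion):
    """Effective resolution after physically enlarging the specimen."""
    return diffraction_limit_nm / linear_expansion

# ~300 nm is a typical diffraction limit for conventional light microscopy.
print(volume_increase(4.5))               # ~91-fold volume at ~4.5x expansion
print(effective_resolution_nm(300, 4.5))  # ~67 nm effective resolution
print(volume_increase(10))                # 1000-fold volume at the hoped-for 10x
```

This is why a 10-fold linear expansion corresponds to the 1000-fold volume increase mentioned above: volume grows as the cube of the linear factor.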

ExM is not the only way to see fine structures such as synapses; they can also be visualized by electron microscopy, or by recently developed ‘super-resolution’ optical methods that garnered a 2014 Nobel Prize. These techniques, however, require expensive equipment, and the images are very time-consuming to produce.

“With ExM, because the sample is physically bigger, you can scan it very quickly using just a regular microscope,” says Boyden.

Boyden is already talking to other leading researchers in the field, including Kwanghun Chung at MIT and George Church at Harvard, about ways to further enhance the ExM method. Within the McGovern Institute, among those who expect to benefit from these advances is Guoping Feng, who is developing mouse models of autism, schizophrenia and other disorders by introducing some of the same genetic changes seen in humans with these disorders. Many of the genes associated with autism and schizophrenia play a role in the formation of synapses, but even with the mouse models at his disposal, Feng isn’t sure what goes wrong with them because they are so hard to see. “If we can make parts of the brain bigger, we might be able to see how the assembly of this synaptic machinery changes in different disorders,” he says.

3D Movies Without Special Glasses

Another challenge facing Feng and many other researchers is that many brain functions, and many brain diseases, are not confined to one area, but are widely distributed across the brain. Trying to understand these processes by looking through a small microscopic window has been compared to watching a soccer game by observing just a single square foot of the playing field.

No current technology can capture millisecond-by-millisecond electrical events across the entire living brain, so Boyden and collaborators in Vienna, Austria, decided to develop one. They turned to a method called light field microscopy (LFM) as a way to capture 3D movies of an animal’s thoughts as they flash through the entire nervous system.

The idea is mind-boggling to imagine, but the hardware is quite simple. The instrument records images in depth the same way humans do, using multiple ‘eyes’ to send slightly offset 2D images to a computer that can reconstruct a 3D image of the world. (The idea had been developed in the 1990s by Boyden’s MIT colleague Ted Adelson, and a similar method was used to create Google Street View.) Boyden and his collaborators started with a microscope of standard design, attached a video camera, and inserted between them a six-by-six array of miniature lenses, designed in Austria, that projects a grid of offset images into the camera and the computer.

The rest is math. “We take the multiple, superimposed flat images projected through the lens array and combine them into a volume,” says Young-Gyu Yoon, a graduate student in the Boyden lab who designed and wrote the software.
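The geometry underlying that math can be sketched with a toy two-view model (my simplification for illustration, not the lab’s reconstruction code): each lenslet sees the scene from a slightly shifted position, and the shift (disparity) of a feature between views encodes its depth.

```python
# Toy sketch of depth-from-disparity, the core geometric idea behind
# light field reconstruction (simplified pinhole model; illustrative only).

def disparity(focal_len, baseline, depth):
    """Image-plane shift of a point between two views separated by `baseline`."""
    return focal_len * baseline / depth

def depth_from_disparity(focal_len, baseline, d):
    """Invert the projection to recover depth from the measured shift."""
    return focal_len * baseline / d

f, b = 2.0, 0.5     # arbitrary units: focal length and lenslet spacing
z_true = 40.0       # true depth of a point source
d = disparity(f, b, z_true)
z_est = depth_from_disparity(f, b, d)
print(round(z_est, 6))  # recovers 40.0
```

With a six-by-six array of lenslets, the software combines many such offset views, which is what allows a full 3D volume to be recovered from a single camera frame.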

Another graduate student, Nikita Pak, used the new method to measure neural activity in C. elegans, a tiny worm whose entire nervous system consists of just 302 neurons. By using a worm that had been genetically engineered so that its neurons light up when they become electrically active, Pak was able to make 3D movies of the activity in the entire nervous system. “The setup is just so simple,” he says. “Every time I use it, I think it’s cool.”

The team then tested their method on a larger brain, that of the larval zebrafish. They presented the larvae with a noxious odor and found that it triggered activity in around 5,000 neurons over a period of about three minutes. Even with this relatively simple example, activity was distributed widely throughout the brain and would have been difficult to detect with previous techniques. Boyden is now working towards recording activity over much longer timespans, and he also envisions scaling the method up to image the much more complex brains of mammals.

He hopes to start with the smallest known mammal, the Etruscan shrew. This animal resembles a mouse, but it is ten times smaller, no bigger than a thimble. Its brain is also much smaller, with only a few million neurons, compared to 100 million in a mouse.

Whole brain imaging in this tiny creature could provide an unprecedented view of mammalian brain activity, including its disruption in disease states. Feng cites sensory overload in autism as an example. “If we can see how sensory activity spreads through the brain, we can start to understand how overload starts and how it spills over to other brain areas,” he says.

Visions of Convergence

While Boyden’s microscopy technologies are providing his colleagues with new ways to study brain disorders, Boyden himself hopes to use them to understand the brain as a whole. He plans to use ExM to map connections and identify which molecules are where; 3D whole-brain imaging to trace brain activity as it unfolds in real time; and optogenetic techniques to stimulate the brain and directly record the resulting activity. By combining all three tools, he hopes to pin stimuli and activity to the molecules and connections on the map, and then use that map to build a computational model that simulates brain activity.

The plan is grandiose, and the tools aren’t all ready yet, but to make the scheme plausible in the proposed timeframe, Boyden is adhering to a few principles. His methods are fast, capturing information-dense images rapidly rather than scanning over days, and inclusive, imaging whole brains rather than chunks that need to be assembled. They are also accessible, so researchers don’t need to spend large sums to acquire specialized equipment or expertise in-house.

The challenges ahead might appear insurmountable at times, but Boyden is undeterred. He moves forward, his mind open to even the most far-fetched ideas, because they just might work.

From genes to brains

Many brain disorders are strongly influenced by genetics, and researchers have long hoped that the identification of genetic risk factors will provide clues to the causes and possible treatments of these mysterious conditions. In the early years, progress was slow. Many claims failed to replicate, and it became clear that in order to identify the important risk genes with confidence, researchers would need to examine the genomes of very large numbers of patients.

Until recently that would have been prohibitively expensive, but genome research has been accelerating fast. Just how fast was underlined by an announcement in January from a California-based company, Illumina, that it had achieved a long-awaited milestone: sequencing an entire human genome for under $1,000. Seven years ago, this task would have cost $10 million and taken weeks of work. The new system does the job in a few hours and can sequence tens of thousands of genomes per year.

In parallel with these spectacular advances, another technological revolution has been unfolding over the past several years, with the development of a new method for editing the genome of living cells. This method, known as CRISPR, allows researchers to make precise changes to a DNA sequence—an advance that is expected to transform many areas of biomedical research and may ultimately form the basis of new treatments for human genetic disease.

The CRISPR technology, which is based on a natural bacterial defense system against viruses, uses a short strand of RNA as a “search string” to locate a corresponding DNA target sequence. This RNA string can be synthesized in the lab and can be designed to recognize any desired sequence of DNA. The RNA carries with it a protein called Cas9, which cuts the target DNA at the chosen location, allowing a new sequence to be inserted—providing researchers with a fast and flexible “search-and-replace” tool for editing the genome.
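The “search string” idea can be made concrete with a toy sketch (the sequences below are invented for illustration, and real guide design involves much more, such as off-target scoring): find where a 20-letter guide matches the DNA and is followed by the “NGG” motif, the protospacer-adjacent motif (PAM) that Cas9 requires next to its target.

```python
# Toy model of CRISPR target search (hypothetical sequences, illustrative only):
# Cas9 cuts where the guide sequence matches DNA immediately upstream of an
# "NGG" PAM, where N stands for any base.

def find_target(genome, guide):
    """Return the index where `guide` matches and is followed by an NGG PAM."""
    for i in range(len(genome) - len(guide) - 2):
        if genome[i:i + len(guide)] == guide:
            pam = genome[i + len(guide): i + len(guide) + 3]
            if pam[1:] == "GG":   # first PAM base (N) can be anything
                return i
    return -1

genome = "TTACGGATCCGATTACAGGCTTGACCAGGTTCA"  # made-up sequence
guide  = "ATCCGATTACAGGCTTGACC"              # made-up 20-nt guide
print(find_target(genome, guide))  # → 6
```

In the cell, of course, the search is done by the Cas9–RNA complex itself, scanning the genome for PAM sites and testing the adjacent DNA against the guide.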

One of the pioneers in this field is McGovern Investigator Feng Zhang, who along with George Church of Harvard, was the first to show that CRISPR could be used to edit the human genome in living cells. Zhang is using the technology to study human brain disorders, building on the flood of new genetic discoveries that are emerging from advances in DNA sequencing. The Broad Institute, where Zhang holds a joint appointment, is a world leader in human psychiatric genetics, and will be among the first to acquire the new Illumina sequencing machines when they reach the market later this year.

By sequencing many thousands of individuals, geneticists are identifying the rare genetic variants that contribute to risk of diseases such as autism, schizophrenia and bipolar disorder. CRISPR will allow neuroscientists to study those gene variants in cells and in animal models. The goal, says McGovern Institute director Bob Desimone, is to understand the biological roots of brain disorders. “The biggest obstacle to new treatments has been our ignorance of fundamental mechanisms. But with these new technologies, we have a real opportunity to understand what’s wrong at the level of cells and circuits, and to identify the pressure points at which therapeutic intervention may be possible.”

Culture Club

In other fields, the influence of genetic variations on disease has turned out to be surprisingly difficult to unravel, and for neuropsychiatric disease, the challenge may be even greater. The brain is the most complex organ of the body, and the underlying pathologies that lead to disease are not yet well understood. Moreover, any given disorder may show a wide variation in symptoms from patient to patient, and it may also have many different genetic causes. “There are hundreds of genes that can contribute to autism or schizophrenia,” says McGovern Investigator Guoping Feng, who is also Poitras Professor of Neuroscience.

To study these genes, Feng and collaborators at the Broad Institute’s Stanley Center for Psychiatric Research are planning to screen thousands of cultures of neurons, grown in the tiny wells of cell culture plates. The neurons, which are grown from stem cells, can be engineered using CRISPR to contain the genetic variants that are linked to neuropsychiatric disease. Each culture will contain neurons with a different variant, and these will be examined for abnormalities that might be associated with disease.

Feng and colleagues hope this high-throughput platform will allow them to identify cellular traits, or phenotypes, that may be related to disease and which can then be studied in animal models to see if they cause defects in brain function or in behavior. In the longer term, this high-throughput platform can also be used to screen for new drugs that can reverse these defects.

Animal Kingdom

Cell cultures are necessary for large-scale screens, but ultimately the results must be translated into the context of brain circuits and behavior. “That means we must study animal models too,” says Feng.

Feng has created several mouse models of human brain disease by mutating genes that are linked to these disorders and examining the behavioral and cellular defects in the mutant animals. “We have models of obsessive-compulsive disorder and autism,” he explains. “By studying these mice we want to learn what’s wrong with their brains.”

So far, Feng has focused on single-gene models, but the majority of human psychiatric disorders are influenced by multiple genes acting in combination. One advantage of the new CRISPR method is that it allows researchers to introduce several mutations in parallel, and Zhang’s lab is now working to create mouse models of autism carrying more than one genetic alteration.

Perhaps the most important advantage of CRISPR is that it can be applied to any species. Currently, almost all genetic modeling of human disease is restricted to mice. But while mouse models are convenient, they are limited, especially for diseases that affect higher brain functions and for which there are no clear parallels in rodents. “We also need to study species that are closer to humans,” says Feng.

Accordingly, he and Zhang are collaborating with colleagues in Oregon and China to use CRISPR to create primate models of neuropsychiatric disorders. Earlier this year, a team in China announced that they had used CRISPR to create transgenic monkeys that will be used to study defects in metabolism and immunity.

Feng and Zhang are planning to use a similar approach to study brain disorders, but in addition to macaques, they will also work with a smaller primate species, the marmoset. These animals, with their fast breeding cycles and complex behavioral repertoires, are ideal for genetic studies of behavior and brain function. And because they are very social, with highly structured communication patterns, they represent a promising new model for understanding the neural basis of social cognition and its disruption in conditions such as autism.

Given their close evolutionary relationship to humans, marmoset models could also help accelerate the development of new therapies. Many experimental drugs for brain disorders have been tested successfully in mice, only to prove ineffective in subsequent human trials. These failures, which can be enormously expensive, have led many drug companies to cut back on their neuroscience R&D programs. Better animal models could reverse this trend by allowing companies to predict more accurately which drug candidates are most promising, before investing heavily in human clinical trials.

Feng’s mouse research provides an example of how this approach can work. He previously developed a mouse model of obsessive-compulsive disorder, in which the animals engage in obsessive self-grooming, and he has now shown that this effect can be reversed when the missing gene is reintroduced, even in adulthood. Other researchers have seen similar results with other brain disorders such as Rett syndrome, a condition that is often accompanied by autism. “The brain is amazingly plastic,” says Feng. “At least in a mouse, we have shown that the damage can often be repaired. If we can also show this in marmosets or other primate models, that would really give us hope that something similar is possible in humans.”

Human Race

Ultimately, to understand the genetic roots of human behavior, researchers must sequence the genomes of individual subjects in parallel with measurements of those same individuals’ behavior and brain function.

Such studies typically require very large sample sizes, but the plummeting cost of sequencing is now making this feasible. In China, for instance, a project is already underway to sequence the genomes of many thousands of individuals to uncover genetic influences on cognition and intelligence.

The next step will be to link the genetics to brain activity, says McGovern Investigator John Gabrieli, who also directs the Martinos Imaging Center at MIT. “It’s a big step to go from DNA to behavioral variation or clinical diagnosis. But we know those genes must affect brain function, so neuroimaging may help us to bridge that gap.”

But brain scans can be time-consuming, given that volunteers must perform behavioral tasks in the scanner. Studies are typically limited to a few dozen subjects, not enough to detect the often subtle effects of genomic variation.

One way to enlarge these studies, says Gabrieli, is to image the brain during rest rather than in a state of prompted activity. This procedure is fast and easy to replicate from lab to lab, and patterns of resting state activity have turned out to be surprisingly reproducible. Gabrieli is finding that differences in resting activity are associated with brain disorders such as autism, and he hopes that in the future it will be possible to relate these differences to the genetic factors that are emerging from genome studies at the Broad Institute and elsewhere.

“I’m optimistic that we’re going to see dramatic advances in our understanding of neuropsychiatric disease over the next few years.” — Bob Desimone

Confirming these associations will require a “big data” approach, in which results from multiple labs are consolidated into large repositories and analyzed for significant associations. Resting state imaging lends itself to this approach, says Gabrieli. “To find the links between brain function and genetics, big data is the direction we need to go to be successful.”

How soon might this happen? “It won’t happen overnight,” cautions Desimone. “There are a lot of dots that need to be connected. But we’ve seen in the case of genome research how fast things can move once the right technologies are in place. I’m optimistic that we’re going to see equally dramatic advances in our understanding of neuropsychiatric disease over the next few years.”

MEG matters

Somewhere nearby, most likely, sits a coffee mug. Give it a glance. An image of that mug travels from desktop to retina and into the brain, where it is processed, categorized and recognized, within a fraction of a second.

All this feels effortless to us, but programming a computer to do the same reveals just how complex that process is. Computers can handle simple objects in expected positions, such as an upright mug. But tilt that cup on its side? “That messes up a lot of standard computer vision algorithms,” says Leyla Isik, a graduate student in Tomaso Poggio’s lab at the McGovern Institute.

For her thesis research, Isik is working to build better computer vision models, inspired by how human brains recognize objects. But to track this process, she needed an imaging tool that could keep up with the brain’s astonishing speed. In 2011, soon after Isik arrived at MIT, the McGovern Institute opened its magnetoencephalography (MEG) lab, one of only a few dozen in the entire country. MEG operates on the same timescale as the human brain. Now, with easy access to a MEG facility dedicated to brain research, neuroscientists at McGovern and across MIT—even those like Isik who had never scanned human subjects—are delving into human neural processing in ways never possible before.

The making of…

MEG was developed at MIT in the early 1970s by physicist David Cohen. He was searching for the tiny magnetic fields that were predicted to arise within electrically active tissues such as the brain. Magnetic fields can travel unimpeded through the skull, so Cohen hoped it might be possible to detect them noninvasively. Because the signals are so small—a billion times weaker than the magnetic field of the Earth—Cohen experimented with a newly invented device called a SQUID (short for superconducting quantum interference device), a highly sensitive magnetometer. In 1972, he succeeded in recording alpha waves, brain rhythms that occur when the eyes close. The recording, scratched out on yellow graph paper with notes scrawled in the margins, led to a seminal paper that launched a new field. Cohen’s prototype has since evolved into a sophisticated machine with an array of 306 SQUID detectors contained within a helmet that sits over the subject’s head like a giant hairdryer.

As MEG technology advanced, neuroscientists watched with growing interest. Animal studies were revealing the importance of high-frequency electrical oscillations such as gamma waves, which appear to have a key role in the communication between different brain regions. But apart from occasional neurosurgery patients, it was very difficult to study these signals in the human brain or to understand how they might contribute to human cognition. The most widely used imaging method, functional magnetic resonance imaging (fMRI), could provide precise spatial localization, but it could not detect events on the necessary millisecond timescale. “We needed to bridge that gap,” says Robert Desimone, director of the McGovern Institute.

Desimone decided to make MEG a priority, and with support from donors including Thomas F. Peterson, Jr., Edward and Kay Poitras, and the Simons Foundation, the institute was able to purchase a Triux scanner from Elekta, the newest model on the market and the first to be installed in North America.

One challenge was the high level of magnetic background noise from the surrounding environment, so the new scanner was installed in a 13-ton shielded room that deflects interference away from the scanner. “We have a challenging location, but we were able to work with it and to get clear signals,” says Desimone. “An engineer might have picked a different site, but we cannot overstate the importance of having MEG right here, next to the MRI scanners and easily accessible for our researchers.”

To run the new lab, Desimone recruited Dimitrios Pantazis, an expert in MEG signal processing from the University of Southern California. Pantazis knew a lot about MEG data analysis, but he had never actually scanned human subjects himself. In March 2011, he watched in anticipation as Elekta engineers uncrated the new system. Within a few months, he had the lab up and running.

Computer vision quest

When the MEG lab opened, Isik attended a training session. Like Pantazis, she had no previous experience scanning human subjects, but MEG seemed an ideal tool for teasing out the complexities of human object recognition.

She recorded the brain activity of volunteers as they viewed images of objects in various orientations. She also asked them to track the color of a cross on each image, partly to keep their eyes on the screen and partly to keep them alert. “It’s a dark and quiet room and a comfy chair,” she says. “You have to give them something to do to keep them awake.”

To process the data, Isik used a computational tool called a machine learning classifier, which learns to recognize patterns of brain activity evoked by different stimuli. By comparing responses to different types of objects, or similar objects from different viewpoints (such as a cup lying on its side), she was able to show that the human visual system processes objects in stages, starting with the specific view and then generalizing to features that are independent of the size and position of the object.
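A pattern classifier of this kind can be sketched minimally (toy data and a simple nearest-centroid rule, standing in for whatever classifier the study actually used): learn the average sensor pattern evoked by each stimulus class, then label a new trial by the class whose average pattern it most resembles.

```python
# Minimal nearest-centroid sketch of MEG pattern classification
# (toy sensor data, invented for illustration).

def centroid(trials):
    """Mean pattern across trials (each trial is a list of sensor values)."""
    n = len(trials)
    return [sum(t[i] for t in trials) / n for i in range(len(trials[0]))]

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train(labeled_trials):
    """labeled_trials: dict mapping class label -> list of trial patterns."""
    return {label: centroid(trials) for label, trials in labeled_trials.items()}

def classify(model, trial):
    return min(model, key=lambda label: sq_dist(model[label], trial))

# Two simulated stimulus classes (e.g., a cup seen upright vs. on its side),
# each with a distinct noisy signature across three "sensors".
training = {
    "upright":  [[1.0, 0.1, 0.0], [0.9, 0.2, 0.1], [1.1, 0.0, 0.1]],
    "sideways": [[0.1, 1.0, 0.2], [0.0, 0.9, 0.1], [0.2, 1.1, 0.0]],
}
model = train(training)
print(classify(model, [0.95, 0.15, 0.05]))  # → upright
```

Repeating this classification at successive time points is what lets researchers watch a representation evolve, from view-specific responses to view-invariant ones.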

Isik is now working to develop a computer model that simulates this step-wise processing. “Having this data to work with helps ground my models,” she says. Meanwhile, Pantazis was impressed by the power of machine learning classifiers to make sense of the huge quantities of data produced by MEG studies. With support from the National Science Foundation, he is working to incorporate them into a software analysis package that is widely used by the MEG community.


Because fMRI and MEG provide complementary information, it was natural that researchers would want to combine them. This is a computationally challenging task, but MIT research scientist Aude Oliva and postdoc Radoslaw Cichy, in collaboration with Pantazis, have developed a new way to do so. They presented 92 images to volunteer subjects, once in the MEG scanner and then again in the MRI scanner across the hall. For each data set, they looked for patterns of similarity between responses to different stimuli. Then, by aligning the two ‘similarity maps,’ they could determine which MEG signals correspond to which fMRI signals, providing information about the location and timing of brain activity that could not be revealed by either method in isolation. “We could see how visual information flows from the rear of the brain to the more anterior regions where objects are recognized and categorized,” says Pantazis. “It all happens within a few hundred milliseconds. You could not see this level of detail without the combination of fMRI and MEG.”
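The alignment step rests on comparing similarity patterns, an approach known in the field as representational similarity analysis. A schematic sketch of the idea (toy numbers, not the study’s data): build a similarity map for each method, then correlate the maps to ask whether a given MEG time point reflects the same stimulus representation as a given fMRI region.

```python
# Schematic representational-similarity sketch (toy dissimilarity values,
# invented for illustration).

def pearson(x, y):
    """Pearson correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Pairwise dissimilarities between responses to four stimuli (flattened
# upper triangle), as measured by MEG at one time point and by fMRI in
# one brain region.
meg_rdm  = [0.9, 0.2, 0.8, 0.7, 0.3, 0.85]
fmri_rdm = [0.8, 0.3, 0.9, 0.6, 0.2, 0.80]

# A high correlation suggests the two measurements capture the same
# underlying representation.
print(round(pearson(meg_rdm, fmri_rdm), 3))
```

Scanning this correlation across all MEG time points and all fMRI regions yields the space-time correspondence described above.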

Another study combining fMRI and MEG data focused on attention, a longstanding research interest for Desimone. Daniel Baldauf, a postdoc in Desimone’s lab, shares that fascination. “Our visual experience is amazingly rich,” says Baldauf. “Most mysteries about how we deal with all this information boil down to attention.”

Baldauf set out to study how the brain switches attention between two well-studied object categories, faces and houses. These stimuli are known to be processed by different brain areas, and Baldauf wanted to understand how signals might be routed to one area or the other during shifts of attention. By scanning subjects with MEG and fMRI, Baldauf identified a brain region, the inferior frontal junction (IFJ), that synchronizes its gamma oscillations with either the face or house areas depending on which stimulus the subject was attending to—akin to tuning a radio to a particular station.

Having found a way to trace attention within the brain, Desimone and his colleagues are now testing whether MEG can be used to improve attention. Together with Baldauf and two visiting students, Yasaman Bagherzadeh and Ben Lu, he has rigged the scanner so that subjects can be given feedback on their own activity on a screen in real time as it is being recorded. “By concentrating on a task, participants can learn to steer their own brain activity,” says Baldauf, who hopes to determine whether these exercises can help people perform better on everyday tasks that require attention.

Comfort zone

In addition to exploring basic questions about brain function, MEG is also a valuable tool for studying brain disorders such as autism. Margaret Kjelgaard, a clinical researcher at Massachusetts General Hospital, is collaborating with MIT faculty member Pawan Sinha to understand why people with autism often have trouble tolerating sounds, smells, and lights. This is difficult to study using fMRI, because subjects are often unable to tolerate the noise of the scanner, whereas they find MEG much more comfortable.

“Big things are probably going to happen here.”
— David Cohen, inventor of MEG technology

In the scanner, subjects listened to brief repetitive sounds as their brain responses were recorded. In healthy controls, the responses became weaker with repetition as the subjects adapted to the sounds. Those with autism, however, did not adapt. The results are still preliminary and as yet unpublished, but Kjelgaard hopes that the work will lead to a biomarker for autism, and perhaps eventually for other disorders.

In 2012, the McGovern Institute organized a symposium to mark the opening of the new lab. Cohen, who had invented MEG forty years earlier, spoke at the event and made a prediction: “Big things are probably going to happen here.” Two years on, researchers have pioneered new MEG data analysis techniques, invented novel ways to combine MEG and fMRI, and begun to explore the neural underpinnings of autism. Odds are, there are more big things to come.