Seven from MIT elected to American Academy of Arts and Sciences for 2022

Seven MIT faculty members are among more than 250 leaders from academia, the arts, industry, public policy, and research elected to the American Academy of Arts and Sciences, the academy announced Thursday.

One of the nation’s most prestigious honorary societies, the academy is also a leading center for independent policy research. Members contribute to academy publications, as well as studies of science and technology policy, energy and global security, social policy and American institutions, the humanities and culture, and education.

Those elected from MIT this year are:

  • Alberto Abadie, professor of economics and associate director of the Institute for Data, Systems, and Society
  • Regina Barzilay, the School of Engineering Distinguished Professor for AI and Health
  • Roman Bezrukavnikov, professor of mathematics
  • Michale S. Fee, the Glen V. and Phyllis F. Dorflinger Professor and head of the Department of Brain and Cognitive Sciences
  • Dina Katabi, the Thuan and Nicole Pham Professor
  • Ronald T. Raines, the Roger and Georges Firmenich Professor of Natural Products Chemistry
  • Rebecca R. Saxe, the John W. Jarve Professor of Brain and Cognitive Sciences

“We are celebrating a depth of achievements in a breadth of areas,” says David Oxtoby, president of the American Academy. “These individuals excel in ways that excite us and inspire us at a time when recognizing excellence, commending expertise, and working toward the common good is absolutely essential to realizing a better future.”

Since its founding in 1780, the academy has elected leading thinkers from each generation, including George Washington and Benjamin Franklin in the 18th century, Maria Mitchell and Daniel Webster in the 19th century, and Toni Morrison and Albert Einstein in the 20th century. The current membership includes more than 250 Nobel and Pulitzer Prize winners.

Michale Fee appointed head of MIT’s Brain and Cognitive Sciences Department

McGovern Investigator Michale Fee at work in the lab with postdoc Galen Lynch. Photo: Justin Knight

Michale Fee, the Glen V. and Phyllis F. Dorflinger Professor of Brain and Cognitive Sciences, has been named as the new head of the Department of Brain and Cognitive Sciences (BCS) effective May 1, 2021.

Fee, who is an investigator in the McGovern Institute for Brain Research, succeeds James DiCarlo, the Peter de Florez Professor of Neuroscience, who announced in December that he was stepping down to become director of the MIT Quest for Intelligence.

“I want to thank Jim for his impressive work over the last nine years as head,” says Fee. “I know firsthand from my time as associate department head that BCS is in good shape and on a steady course. Jim has set a standard of transparent and collaborative leadership, which is a solid foundation for making our community stronger on all fronts.” Fee notes that his first mission is to continue the initiatives begun under DiCarlo’s leadership—in academics (especially Course 6-9), mentoring, and diversity, equity, inclusion, and justice—while maintaining the highest standards of excellence in research and education.

“Jim has overseen significant growth in the faculty and its impact, as well as important academic initiatives to strengthen the department’s graduate and undergraduate programs,” says Nergis Mavalvala, dean of the School of Science. “His emphasis on building ties among BCS, the McGovern Institute for Brain Research, and the Picower Institute for Learning and Memory has brought innumerable new collaborations among researchers and helped solidify Building 46 and MIT as world leaders in brain science.”

Fee earned his BE in engineering physics in 1985 at the University of Michigan, and his PhD in applied physics at Stanford University in 1992, under the mentorship of Nobel laureate Steven Chu. His doctoral work was followed by research in the Biological Computation Department at Bell Laboratories. He joined MIT and BCS as an associate professor in 2003 and was promoted to full professor in 2008.

He has served since 2012 as associate department head for education in BCS, overseeing significant evolution in the department’s academic programs, including a complete reworking of the Course 9 curriculum and the establishment in 2019 of Course 6-9, Computation and Cognition, in partnership with EECS.

In his research, Fee explores the neural mechanisms by which the brain learns complex sequential behaviors, using the learning of song by juvenile zebra finches as a model. He has brought new experimental and computational methods to bear on these questions, identifying a number of circuits used to learn, modify, time, and coordinate the development and utterance of song syllables.

“His work is emblematic of the department in that it crosses technical and disciplinary boundaries in search of the most significant discoveries,” says DiCarlo. “His research background gives Michale a deep appreciation of the importance of every sub-discipline in our community and a broad understanding of the importance of their connections with each other.”

Fee has received numerous honors and awards for his research and teaching, including the MIT Fundamental Science Investigator Award in 2017, the MIT School of Science Teaching Prize for Undergraduate Education in 2016, the BCS Award for Excellence in Undergraduate Teaching in 2015, and the Lawrence Katz Prize for Innovative Research in Neuroscience from Duke University in 2012.

Fee will be the sixth head of the department, after founding chair Hans-Lukas Teuber (1964–77), Richard Held (1977–86), Emilio Bizzi (1986–97), Mriganka Sur (1997–2012), and James DiCarlo (2012–21).

Joining the dots in large neural datasets

You might have played ‘join the dots’, a puzzle in which numbers guide you to draw lines until a complete picture emerges. But imagine a complex underlying image with no numbers to guide the sequence of joining. This is the problem that challenges scientists working with large amounts of neural data. Sometimes they can align data to a stereotyped behavior, and thus define a sequence of neuronal activity underlying navigation of a maze or the singing of a song learned and repeated across generations of birds. But most natural behavior is not stereotyped, and when it comes to sleeping, imagining, and other higher-order activities, there is no physical behavioral readout to align to at all. Michale Fee and colleagues have now developed an algorithm, seqNMF, that can recognize relevant sequences of neural activity even when there is no guide to align to, such as an overt sequence of behaviors or notes.

“This method allows you to extract structure from the internal life of the brain without being forced to make reference to inputs or output,” says Michale Fee, a neuroscientist at the McGovern Institute at MIT, Associate Department Head and Glen V. and Phyllis F. Dorflinger Professor of Neuroscience in the Department of Brain and Cognitive Sciences, and investigator with the Simons Collaboration on the Global Brain. Fee conducted the study in collaboration with Mark S. Goldman of the University of California, Davis.

In order to achieve this, the authors of the study, co-led by Emily L. Mackevicius and Andrew H. Bahle of the McGovern Institute, took a process called convolutional non-negative matrix factorization (NMF), a tool that allows extraction of sparse but important features from complex and noisy data, and extended it so that it can extract sequences over time that are related to a learned behavior or song. The new algorithm also relies on repetition, but on tell-tale repetitions of neural activity rather than simplistic repetitions in the animal’s behavior. seqNMF can follow repeated sequences of firing over time that are not tied to a specific external reference timeframe, and can extract relevant sequences of neural firing in an unsupervised fashion, without the researcher supplying prior information.
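To make the idea concrete, here is a minimal NumPy sketch of plain convolutional NMF fit by multiplicative updates. This is an illustration of the underlying technique, not the seqNMF algorithm itself (seqNMF adds a penalty that discourages redundant factors), and the synthetic "neural" data and all parameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def shift(A, l):
    """Shift the columns (time bins) of A by l: right if l > 0, left if l < 0."""
    out = np.zeros_like(A)
    if l == 0:
        out[:] = A
    elif l > 0:
        out[:, l:] = A[:, :-l]
    else:
        out[:, :l] = A[:, -l:]
    return out

def reconstruct(W, H):
    """W: (N, K, L) sequence templates, H: (K, T) timecourses -> (N, T) data."""
    N, K, L = W.shape
    X_hat = np.zeros((N, H.shape[1]))
    for l in range(L):
        X_hat += W[:, :, l] @ shift(H, l)
    return X_hat

def conv_nmf(X, K=1, L=8, n_iter=300, eps=1e-9):
    """Convolutional NMF via standard multiplicative updates."""
    N, T = X.shape
    W = rng.random((N, K, L))
    H = rng.random((K, T))
    for _ in range(n_iter):
        X_hat = reconstruct(W, H)
        num = np.zeros_like(H)
        den = np.zeros_like(H)
        for l in range(L):
            num += W[:, :, l].T @ shift(X, -l)      # correlate templates with data
            den += W[:, :, l].T @ shift(X_hat, -l)
        H *= num / (den + eps)
        X_hat = reconstruct(W, H)
        for l in range(L):
            W[:, :, l] *= (X @ shift(H, l).T) / (X_hat @ shift(H, l).T + eps)
    return W, H

# Synthetic data: 5 neurons firing in a fixed sequence, repeated three times.
N, T, Lseq = 5, 100, 5
W_true = np.zeros((N, 1, Lseq))
for i in range(N):
    W_true[i, 0, i] = 1.0            # neuron i fires at lag i within the motif
H_true = np.zeros((1, T))
H_true[0, [10, 40, 70]] = 1.0        # the motif occurs at these three times
X = reconstruct(W_true, H_true)

W, H = conv_nmf(X, K=1, L=8)
err = np.linalg.norm(X - reconstruct(W, H)) / np.linalg.norm(X)
```

Because the factorization is unsupervised, nothing in the fit refers to external events: the algorithm recovers both the motif template and its occurrence times from the repetitions alone, which is the property seqNMF exploits for unaligned neural data.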

In the current study, the authors initially applied and honed the system on synthetic datasets. These tests showed that the algorithm could “join the dots” without additional informational input. Once seqNMF performed well on these tests, they applied it to available open-source data from rats, finding that they could extract sequences of neural firing in the hippocampus that are relevant to finding a water reward in a maze.

Having passed these initial tests, the authors upped the ante and challenged seqNMF to find relevant neural activity sequences in a non-stereotyped behavior: improvised singing by zebra finches that have not learned the signature songs of their species (untutored birds). The authors analyzed neural data from the HVC, a region of the bird brain previously linked to song learning. Since normal adult bird songs are stereotyped, the researchers could align neural activity with features of the song itself for well-tutored birds. Fee and colleagues then turned to untutored birds and found that they still had repeated neural sequences related to the “improvised” song, reminiscent of those in tutored birds but messier. Indeed, the brain of an untutored bird will even initiate two distinct neural signatures at the same time, but seqNMF can see past the resulting neural cacophony and decipher that multiple overlapping patterns are present. Finding this level of order in such neural datasets is nearly impossible using previous methods of analysis.

seqNMF can be applied, potentially, to any neural activity, and the researchers are now testing whether the algorithm can indeed be generalized to extract information from other types of neural data. In other words, now that it’s clear that seqNMF can find a relevant sequence of neural activity for a non-stereotypical behavior, scientists can examine whether the neural basis of behaviors in other organisms and even for activities such as sleep and imagination can be extracted. Indeed, seqNMF is available on GitHub for researchers to apply to their own questions of interest.

Michale Fee

Song Circuits

Michale Fee studies how the brain learns and generates complex sequential behaviors, focusing on the songbird as a model system. Birdsong is a complex behavior that young birds learn from their fathers, and it provides an ideal system to study the neural basis of learned behavior. Because the parts of the bird’s brain that control song learning are closely related to human circuits that are disrupted in brain disorders such as Parkinson’s and Huntington’s disease, Fee hopes the lessons learned from birdsong will provide new clues to the causes and possible treatments of these conditions.

Michale Fee receives McKnight Technological Innovations in Neuroscience Award

McGovern Institute investigator Michale Fee has been selected to receive a 2018 McKnight Technological Innovations in Neuroscience Award for his research on “new technologies for imaging and analyzing neural state-space trajectories in freely-behaving small animals.”

“I am delighted to get support from the McKnight Foundation,” says Fee, who is also the Glen V. and Phyllis F. Dorflinger Professor in the Department of Brain and Cognitive Sciences at MIT. “We’re very excited about this project which aims to develop technology that will be a great help to the broader neuroscience community.”

Fee studies the neural mechanisms by which the brain, specifically that of juvenile songbirds, learns complex sequential behaviors. The way that songbirds learn a song through trial and error is analogous to humans learning complex behaviors such as riding a bicycle. While it would be insightful to link such learning to neural activity, current methods can monitor only a limited number of neurons at once, a significant limitation since such learning and behavior involve complex interactions across larger circuits. While a wider field of view for recordings would help decipher the neural changes linked to this learning paradigm, current microscopy equipment is large relative to a juvenile songbird, and microscopes that can record neural activity generally constrain the behavior of small animals. Ideally, the technology needs to be lightweight (about 1 gram) and compact (the size of a dime), a far cry from current microscopes, which weigh in at around 3 grams. Fee hopes to break these technical boundaries and miniaturize the recording equipment, allowing more neurons to be recorded in naturally behaving small animals.

“We are thrilled that the McKnight Foundation has chosen to support this project. The technology that Michale’s developing will help to better visualize and understand the circuits underlying learning,” says Robert Desimone, director of MIT’s McGovern Institute for Brain Research.

In addition to development and miniaturization of the microscopy hardware itself, the award will support the development of technology that helps analyze the resulting images, so that the neuroscience community at large can more easily deploy and use the technology.

A sense of timing

The ability to measure time and to control the timing of actions is critical for almost every aspect of behavior. Yet the mechanisms by which our brains process time are still largely mysterious.

We experience time on many different scales—from milliseconds to years— but of particular interest is the middle range, the scale of seconds over which we perceive time directly, and over which many of our actions and thoughts unfold.

“We speak of a sense of time, yet unlike our other senses there is no sensory organ for time,” says McGovern Investigator Mehrdad Jazayeri. “It seems to come entirely from within. So if we understand time, we should be getting close to understanding mental processes.”

Singing in the brain

Emily Mackevicius comes to work in the early morning because that’s when her birds are most likely to sing. A graduate student in the lab of McGovern Investigator Michale Fee, she is studying zebra finches, songbirds that learn to sing by copying their fathers. Bird song involves a complex and precisely timed set of movements, and Mackevicius, who plays the cello in her spare time, likens it to musical performance. “With every phrase, you have to learn a sequence of finger movements and bowing movements, and put it all together with exact timing. The birds are doing something very similar with their vocal muscles.”

A typical zebra finch song lasts about one second, and consists of several syllables, produced at a rate similar to the syllables in human speech. Each song syllable involves a precisely timed sequence of muscle commands, and understanding how the bird’s brain generates this sequence is a central goal for Fee’s lab. Birds learn it naturally without any need for training, making it an ideal model for understanding the complex action sequences that represent the fundamental “building blocks” of behavior.

Some years ago Fee and colleagues made a surprising discovery that has shaped their thinking ever since. Within a part of the bird brain called HVC, they found neurons that fire a single short burst of pulses at exactly the same point on every repetition of the song. Each burst lasts about a hundredth of a second, and different neurons fire at different times within the song. With about 20,000 neurons in HVC, it was easy to imagine that there would be specific neurons active at every point in the song, meaning that each time point could be represented by the activity of a handful of individual neurons.

Proving this was not easy—“we had to wait about ten years for the technology to catch up,” says Fee—but they finally succeeded last year, when students Tatsuo Okubo and Galen Lynch analyzed recordings from hundreds of individual HVC neurons, and found that they do indeed fire in a fixed sequence, covering the entire song period.

“We think it’s like a row of falling dominoes,” says Fee. “The neurons are connected to each other so that when one fires it triggers the next one in the chain.” It’s an appealing model, because it’s easy to see how a chain of activity could control complex action sequences, simply by connecting individual time-stamp neurons to downstream motor neurons. With the correct connections, each movement is triggered at the right time in the sequence. Fee believes these motor connections are learned through trial and error—like babies babbling as they learn to speak—and a separate project in his lab aims to understand how this learning occurs.
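The “falling dominoes” picture can be captured in a toy simulation. The sketch below is purely illustrative (the network size, threshold, and weights are invented): a chain of model neurons in which each unit’s firing excites the next unit one time step later, so that a single push at the start produces a fixed firing sequence of the kind recorded in HVC.

```python
import numpy as np

# Hypothetical chain network: N model neurons, T time steps.
N, T = 10, 15
W = np.zeros((N, N))
for i in range(N - 1):
    W[i + 1, i] = 1.0            # neuron i excites neuron i + 1

activity = np.zeros((N, T))
activity[0, 0] = 1.0             # "knock over the first domino"
for t in range(1, T):
    # a neuron fires when its summed input crosses a simple threshold
    activity[:, t] = (W @ activity[:, t - 1] > 0.5).astype(float)

# each neuron fires exactly once, in order: neuron i fires at time step i
fire_times = activity.argmax(axis=1)
```

Connecting each of these time-stamp neurons to the appropriate downstream motor neurons would then trigger each movement at its fixed point in the sequence, which is the essence of the model described above.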

But the domino metaphor also raises another question: who sets up the dominoes in the first place? Mackevicius and Okubo, along with summer student Hannah Payne, set out to answer this question, asking how HVC becomes wired to produce these precisely timed chain reactions.

Mackevicius, who studied math as an undergraduate before turning to neuroscience, developed computer simulations of the HVC neuronal network, and Okubo ran experiments to test the predictions, recording from young birds at different stages in the learning process. “We found that setting up a chain is surprisingly easy,” says Mackevicius. “If we start with a randomly connected network, and some realistic assumptions about the ‘plasticity rules’ by which synapses change with repeated use, we found that these chains emerge spontaneously. All you need is to give them a push—like knocking over the first domino.”

Their results also suggested how a young bird learns to produce different syllables, as it progresses from repetitive babbling to a more adult-like song. “At first, there’s just one big burst of neural activity, but as the song becomes more complex, the activity gradually spreads out in time and splits into different sequences, each controlling a different syllable. It’s as if you started with lots of dominos all clumped together, and then gradually they become sorted into different rows.”

Does something similar happen in the human brain? “It seems very likely,” says Fee. “Many of our movements are precisely timed—think about speaking a sentence, playing a musical instrument, or delivering a tennis serve. Even our thoughts often happen in sequences. Things happen faster in birds than mammals, but we suspect the underlying mechanisms will be very similar.”

Speed control

One floor above the Fee lab, Mehrdad Jazayeri is also studying how time controls actions, using humans and monkeys rather than birds. Like Fee, Jazayeri comes from an engineering background, and his goal is to understand, with an engineer’s level of detail, how we perceive time and use it flexibly to control our actions.

To begin to answer this question, Jazayeri trained monkeys to remember time intervals of a few seconds or less, and to reproduce them by pressing a button or making an eye movement at the correct time after a visual cue appeared on a screen. He then recorded brain activity as the monkeys performed this task, to find out how the brain measures elapsed time. “There were two prominent ideas in the field,” he explains. “One idea was that there is an internal clock, and that the brain can somehow count the accumulating ticks. Another class of models had proposed that there are multiple oscillators that come in and out of phase at different times.”

When they examined the recordings, however, the results did not fit either model. Despite searching across multiple brain areas, Jazayeri and his colleagues found no sign of ticking or oscillations. Instead, their recordings revealed complex patterns of activity, distributed across populations of neurons; moreover, as the monkey produced longer or shorter intervals, these activity patterns were stretched or compressed in time, to fit the overall duration of each interval. In other words, says Jazayeri, the brain circuits were able to adjust the speed with which neural signals evolve over time. He compares it to a group of musicians performing a complex piece of music. “Each player has their own part, which they can play faster or slower depending on the overall tempo of the music.”


Jazayeri is also using time as a window onto a broader question—how our perceptions and decisions are shaped by past experience. “It’s one of the great questions in neuroscience, but it’s not easy to study. One of the great advantages of studying timing is that it’s easy to measure precisely, so we can frame our questions in precise mathematical ways.”

The starting point for this work was a deceptively simple task, which Jazayeri calls “Ready-Set-Go.” In this task, the subject is given the first two beats of a regular rhythm (“Ready, Set”) and must then generate the third beat (“Go”) at the correct time. To perform this task, the brain must measure the duration between Ready and Set and then immediately reproduce it.

Humans can do this fairly accurately, but not perfectly—their response times are imprecise, presumably because there is some “noise” in the neural signals that convey timing information within the brain. In the face of this uncertainty, the optimal strategy (known mathematically as Bayesian inference) is to bias the time estimates based on prior expectations, and this is exactly what happened in Jazayeri’s experiments. If the intervals in previous trials were shorter, people tend to underestimate the next interval, whereas if the previous intervals were longer, they tend to overestimate. In other words, people use their memory to improve their time estimates.
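The biasing effect can be illustrated with a toy Bayesian estimator. This is a sketch of the general principle rather than Jazayeri’s actual model: the prior range, noise level, and measured intervals below are all invented for the example.

```python
import numpy as np

def bayes_estimate(measured_ms, prior_lo=600.0, prior_hi=1000.0, noise_sd=80.0):
    """Posterior-mean estimate of a time interval from one noisy measurement,
    under a uniform prior over the intervals seen in previous trials."""
    ts = np.linspace(prior_lo, prior_hi, 1000)        # candidate true intervals
    # Gaussian likelihood of the noisy measurement for each candidate
    like = np.exp(-0.5 * ((measured_ms - ts) / noise_sd) ** 2)
    post = like / like.sum()                          # normalize to a posterior
    return float((ts * post).sum())                   # posterior mean

# a short measured interval is pulled up toward the middle of the prior,
# and a long one is pulled down -- the bias observed in the experiments
short_est = bayes_estimate(620.0)
long_est = bayes_estimate(980.0)
```

Because the posterior combines the noisy measurement with the prior over recently experienced intervals, the estimates are systematically drawn toward the center of the prior range, reproducing the under- and over-estimation pattern described above.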

Monkeys can also learn this task and show similar biases, providing an opportunity to study how the brain establishes and stores these prior expectations, and how these expectations influence subsequent behavior. Again, Jazayeri and colleagues recorded from large numbers of neurons during the task. The resulting activity patterns are complex and not easily described in words, but in mathematical terms the activity forms a geometric structure known as a manifold. “Think of it as a curved surface, analogous to a cylinder,” he says. “In the past, people could not see it because they could only record from one or a few neurons at a time. We have to measure activity across large numbers of neurons simultaneously if we want to understand the workings of the system.”

Computing time

To interpret their data, Jazayeri and his team often turn to computer models based on artificial neural networks. “These models are a powerful tool in our work because we can fully reverse-engineer them and gain insight into the underlying mechanisms,” he explains. His lab has now succeeded in training a recurrent neural network that can perform the Ready-Set-Go task, and they have found that the model develops a manifold similar to the real brain data. This has led to the intriguing conjecture that memory of past experiences can be embedded in the structure of the manifold.

Jazayeri concludes: “We haven’t connected all the dots, but I suspect that many questions about brain and behavior will find their answers in the geometry and dynamics of neural activity.” Jazayeri’s long-term ambition is to develop predictive models of brain function. As an analogy, he says, think of a pendulum. “If we know its current state—its position and speed—we can predict with complete confidence what it will do next, and how it will respond to a perturbation. We don’t have anything like that for the brain—nobody has been able to do that, even for the simplest brain functions. But that’s where we’d eventually like to be.”

A clock within the brain?

It is not yet clear how the mechanisms studied by Fee and Jazayeri are related. “We talk together often, but we are still guessing how the pieces fit together,” says Fee. But one thing they both agree on is the lack of evidence for any central clock within the brain. “Most people have this intuitive feeling that time is a unitary thing, and that there must be some central clock inside our head, coordinating everything like the conductor of the orchestra or the clock inside your computer,” says Jazayeri. “Even many experts in the field believe this, but we don’t think it’s right.” Rather, his work and Fee’s both point to the existence of separate circuits for different time-related behaviors, such as singing. If there is no clock, how do the different systems work together to create our apparently seamless perception of time? “It’s still a big mystery,” says Jazayeri. “Questions like that are what make neuroscience so interesting.”


Singing in the brain

Male zebra finches, small songbirds native to central Australia, learn their songs by copying what they hear from their fathers. These songs, often used as mating calls, develop early in life as juvenile birds experiment with mimicking the sounds they hear.

MIT neuroscientists have now uncovered the brain activity that supports this learning process. Sequences of neural activity that encode the birds’ first song syllable are duplicated and altered slightly, allowing the birds to produce several variations on the original syllable. Eventually these syllables are strung together into the bird’s signature song, which remains constant for life.

“The advantage here is that in order to learn new syllables, you don’t have to learn them from scratch. You can reuse what you’ve learned and modify it slightly. We think it’s an efficient way to learn various types of syllables,” says Tatsuo Okubo, a former MIT graduate student and lead author of the study, which appears in the Nov. 30 online edition of Nature.

Okubo and his colleagues believe that this type of neural sequence duplication may also underlie other types of motor learning. For example, the sequence used to swing a tennis racket might be repurposed for a similar motion such as playing Ping-Pong. “This seems like a way that sequences might be learned and reused for anything that involves timing,” says Emily Mackevicius, an MIT graduate student who is also an author of the paper.

The paper’s senior author is Michale Fee, a professor of brain and cognitive sciences at MIT and a member of the McGovern Institute for Brain Research.

Bursting into song

Previous studies from Fee’s lab have found that a part of the brain’s cortex known as the HVC is critical for song production.

Typically, each song lasts for about one second and consists of multiple syllables. Fee’s lab has found that in adult birds, individual HVC neurons show a very brief burst of activity — about 10 milliseconds or less — at one moment during the song. Different sets of neurons are active at different times, and collectively the song is represented by this sequence of bursts.

In the new Nature study, the researchers wanted to figure out how those neural patterns develop in newly hatched zebra finches. To do that, they recorded electrical activity in HVC neurons for up to three months after the birds hatched.

When zebra finches begin to sing, about 30 days after hatching, they produce only nonsense syllables known as subsong, similar to the babble of human babies. At first, the duration of these syllables is highly variable, but after a week or so they turn into more consistent sounds called protosyllables, which last about 100 milliseconds. Each bird learns one protosyllable that forms a scaffold for subsequent syllables.

The researchers found that within the HVC, neurons fire in a sequence of short bursts corresponding to the first protosyllable that each bird learns. Most of the neurons in the HVC participate in this original sequence, but as time goes by, some of these neurons are extracted from the original sequence and produce a new, very similar sequence. This chain of neural sequences can be repurposed to produce different syllables.

“From that short sequence it splits into new sequences for the next new syllables,” Mackevicius says. “It starts with that short chain that has a lot of redundancy in it, and splits off some neurons for syllable A and some neurons for syllable B.”

This splitting of neural sequences happens repeatedly until the birds can produce between three and seven different syllables, the researchers found. This entire process takes about two months, at which point each bird has settled on its final song.

Evolution by duplication

The researchers note that this process is similar to what is believed to drive the production of new genes and traits during evolution.

“If you duplicate a gene, then you could have separate mutations in both copies of the gene and they could eventually do different functions,” Okubo says. “It’s similar with motor programs. You can duplicate the sequence and then independently modify the two daughter motor programs so that they can now each do slightly different things.”

Mackevicius is now studying how input from sound-processing parts of the brain to the HVC contributes to the formation of these neural sequences.