Data transformed

With the tools of modern neuroscience, data accumulates quickly. Recording devices listen in on the electrical conversations between neurons, picking up the voices of hundreds of cells at a time. Microscopes zoom in to illuminate the brain’s circuitry, capturing thousands of images of cells’ elaborately branched paths. Functional MRIs detect changes in blood flow to map activity within a person’s brain, generating a complete picture by compiling hundreds of scans.

“When I entered neuroscience about 20 years ago, data were extremely precious, and ideas, as the expression went, were cheap. That’s no longer true,” says McGovern Associate Investigator Ila Fiete. “We have an embarrassment of wealth in the data but lack sufficient conceptual and mathematical scaffolds to understand it.”

Fiete will lead the McGovern Institute’s new K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center, whose scientists will create mathematical models and other computational tools to confront the current deluge of data and advance our understanding of the brain and mental health. The center, funded by a $24 million donation from philanthropist Lisa Yang, will take a uniquely collaborative approach to computational neuroscience, integrating data from MIT labs to explain brain function at every level, from the molecular to the behavioral.

“Driven by technologies that generate massive amounts of data, we are entering a new era of translational neuroscience research,” says Yang, whose philanthropic investment in MIT research now exceeds $130 million. “I am confident that the multidisciplinary expertise convened by this center will revolutionize how we synthesize this data and ultimately understand the brain in health and disease.”

Data integration

Fiete says computation is particularly crucial to neuroscience because the brain is so staggeringly complex. Its billions of neurons, which are themselves complicated and diverse, interact with one another through trillions of connections.

“Conceptually, it’s clear that all these interactions are going to lead to pretty complex things. And these are not going to be things that we can explain in stories that we tell,” Fiete says. “We really will need mathematical models. They will allow us to ask about what changes when we perturb one or several components — greatly accelerating the rate of discovery relative to doing those experiments in real brains.”

By representing the interactions between the components of a neural circuit, a model gives researchers the power to explore those interactions, manipulate them, and predict the circuit’s behavior under different conditions.

“You can observe these neurons in the same way that you would observe real neurons. But you can do even more, because you have access to all the neurons and you have access to all the connections and everything in the network,” explains computational neuroscientist and McGovern Associate Investigator Guangyu Robert Yang (no relation to Lisa Yang), who joined MIT as a junior faculty member in July 2021.

Many neuroscience models represent specific functions or parts of the brain. But with advances in computation and machine learning, along with the widespread availability of experimental data with which to test and refine models, “there’s no reason that we should be limited to that,” he says.

Robert Yang’s team at the McGovern Institute is working to develop models that integrate multiple brain areas and functions. “The brain is not just about vision, just about cognition, just about motor control,” he says. “It’s about all of these things. And all these areas, they talk to one another.” Likewise, he notes, it’s impossible to separate the molecules in the brain from their effects on behavior – although those aspects of neuroscience have traditionally been studied independently, by researchers with vastly different expertise.

The ICoN Center will eliminate the divides, bringing together neuroscientists and software engineers to deal with all types of data about the brain. To foster interdisciplinary collaboration, every postdoctoral fellow and engineer at the center will work with multiple faculty mentors. Working in three closely interacting scientific cores, fellows will develop computational technologies for analyzing molecular data, neural circuits, and behavior, such as tools to identify patterns in neural recordings or automate the analysis of human behavior to aid psychiatric diagnoses. These technologies will also help researchers model neural circuits, ultimately transforming data into knowledge and understanding.

“Lisa is focused on helping the scientific community realize its goals in translational research,” says Nergis Mavalvala, dean of the School of Science and the Curtis and Kathleen Marble Professor of Astrophysics. “With her generous support, we can accelerate the pace of research by connecting the data to the delivery of tangible results.”

Computational modeling

In its first five years, the ICoN Center will prioritize four areas of investigation: episodic memory and exploration, including functions like navigation and spatial memory; complex or stereotypical behavior, such as the perseverative behaviors associated with autism and obsessive-compulsive disorder; cognition and attention; and sleep. The goal, Fiete says, is to model the neuronal interactions that underlie these functions so that researchers can predict what will happen when something changes — when certain neurons become more active or when a genetic mutation is introduced, for example. When paired with experimental data from MIT labs, the center’s models will help explain not just how these circuits work, but also how they are altered by genes, the environment, aging, and disease.

These focus areas encompass circuits and behaviors often affected by psychiatric disorders and neurodegeneration, and models will give researchers new opportunities to explore their origins and potential treatment strategies. “I really think that the future of treating disorders of the mind is going to run through computational modeling,” says McGovern Associate Investigator Josh McDermott.

In McDermott’s lab, researchers are modeling the brain’s auditory circuits. “If we had a perfect model of the auditory system, we would be able to understand why when somebody loses their hearing, auditory abilities degrade in the very particular ways in which they degrade,” he says. Then, he says, that model could be used to optimize hearing aids by predicting how the brain would interpret sound altered in various ways by the device.

Similar opportunities will arise as researchers model other brain systems, McDermott says, noting that computational models help researchers grapple with a dauntingly vast realm of possibilities. “There’s lots of different ways the brain can be set up, and lots of different potential treatments, but there is a limit to the number of neuroscience or behavioral experiments you can run,” he says. “Doing experiments on a computational system is cheap, so you can explore the dynamics of the system in a very thorough way.”

The ICoN Center will speed the development of the computational tools that neuroscientists need, both for basic understanding of the brain and clinical advances. But Fiete hopes for a culture shift within neuroscience, as well. “There are a lot of brilliant students and postdocs who have skills that are mathematics and computational and modeling based,” she says. “I think once they know that there are these possibilities to collaborate to solve problems related to psychiatric disorders and how we think, they will see that this is an exciting place to apply their skills, and we can bring them in.”

New integrative computational neuroscience center established at MIT’s McGovern Institute

With the tools of modern neuroscience, researchers can peer into the brain with unprecedented accuracy. Recording devices listen in on the electrical conversations between neurons, picking up the voices of hundreds of cells at a time. Genetic tools allow us to focus on specific types of neurons based on their molecular signatures. Microscopes zoom in to illuminate the brain’s circuitry, capturing thousands of images of elaborately branched dendrites. Functional MRIs detect changes in blood flow to map activity within a person’s brain, generating a complete picture by compiling hundreds of scans.

This deluge of data provides insights into brain function and dynamics at different levels — molecules, cells, circuits, and behavior — but the insights often remain compartmentalized in separate research silos. An innovative new center at MIT’s McGovern Institute aims to synthesize them into powerful revelations of the brain’s inner workings.

The K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center will create advanced mathematical models and computational tools to synthesize the deluge of data across scales and advance our understanding of the brain and mental health.

The center, funded by a $24 million donation from philanthropist Lisa Yang and led by McGovern Institute Associate Investigator Ila Fiete, will take a collaborative approach to computational neuroscience, integrating cutting-edge modeling techniques and data from MIT labs to explain brain function at every level, from the molecular to the behavioral.

“Our goal is that sophisticated, truly integrated computational models of the brain will make it possible to identify how ‘control knobs’ such as genes, proteins, chemicals, and environment drive thoughts and behavior, and to make inroads toward urgent unmet needs in understanding and treating brain disorders,” says Fiete, who is also a brain and cognitive sciences professor at MIT.

“Driven by technologies that generate massive amounts of data, we are entering a new era of translational neuroscience research,” says Yang, whose philanthropic investment in MIT research now exceeds $130 million. “I am confident that the multidisciplinary expertise convened by the ICoN center will revolutionize how we synthesize this data and ultimately understand the brain in health and disease.”

Connecting the data

It is impossible to separate the molecules in the brain from their effects on behavior – although those aspects of neuroscience have traditionally been studied independently, by researchers with vastly different expertise. The ICoN Center will eliminate the divides, bringing together neuroscientists and software engineers to deal with all types of data about the brain.

“The center’s highly collaborative structure, which is essential for unifying multiple levels of understanding, will enable us to recruit talented young scientists eager to revolutionize the field of computational neuroscience,” says Robert Desimone, director of the McGovern Institute. “It is our hope that the ICoN Center’s unique research environment will truly demonstrate a new academic research structure that catalyzes bold, creative research.”

To foster interdisciplinary collaboration, every postdoctoral fellow and engineer at the center will work with multiple faculty mentors. In order to attract young scientists and engineers to the field of computational neuroscience, the center will also provide four graduate fellowships to MIT students each year in perpetuity. Interacting closely with three scientific cores, engineers and fellows will develop computational models and technologies for analyzing molecular data, neural circuits, and behavior, such as tools to identify patterns in neural recordings or automate the analysis of human behavior to aid psychiatric diagnoses. These technologies and models will be instrumental in synthesizing data into knowledge and understanding.

Center priorities

In its first five years, the ICoN Center will prioritize four areas of investigation: episodic memory and exploration, including functions like navigation and spatial memory; complex or stereotypical behavior, such as the perseverative behaviors associated with autism and obsessive-compulsive disorder; cognition and attention; and sleep. Models of complex behavior will be created in collaboration with clinicians and researchers at Children’s Hospital of Philadelphia.

The goal, Fiete says, is to model the neuronal interactions that underlie these functions so that researchers can predict what will happen when something changes — when certain neurons become more active or when a genetic mutation is introduced, for example. When paired with experimental data from MIT labs, the center’s models will help explain not just how these circuits work, but also how they are altered by genes, the environment, aging, and disease. These focus areas encompass circuits and behaviors often affected by psychiatric disorders and neurodegeneration, and models will give researchers new opportunities to explore their origins and potential treatment strategies.

“Lisa Yang is focused on helping the scientific community realize its goals in translational research,” says Nergis Mavalvala, dean of the School of Science and the Curtis and Kathleen Marble Professor of Astrophysics. “With her generous support, we can accelerate the pace of research by connecting the data to the delivery of tangible results.”


Artificial networks learn to smell like the brain

Using machine learning, a computer model can teach itself to smell in just a few minutes. When it does, researchers have found, it builds a neural network that closely mimics the olfactory circuits that animal brains use to process odors.

Animals from fruit flies to humans all use essentially the same strategy to process olfactory information in the brain. But neuroscientists who trained an artificial neural network to take on a simple odor classification task were surprised to see it replicate biology’s strategy so faithfully.

“The algorithm we use has no resemblance to the actual process of evolution,” says Guangyu Robert Yang, an associate investigator at MIT’s McGovern Institute, who led the work as a postdoctoral fellow at Columbia University. The similarities between the artificial and biological systems suggest that the brain’s olfactory network is optimally suited to its task.

Yang and his collaborators, who reported their findings October 6, 2021, in the journal Neuron, say their artificial network will help researchers learn more about the brain’s olfactory circuits. The work also helps demonstrate artificial neural networks’ relevance to neuroscience. “By showing that we can match the architecture [of the biological system] very precisely, I think that gives more confidence that these neural networks can continue to be useful tools for modeling the brain,” says Yang, who is also an assistant professor in MIT’s Departments of Brain and Cognitive Sciences and Electrical Engineering and Computer Science and a member of the Center for Brains, Minds and Machines.

Mapping natural olfactory circuits

For fruit flies, the organism in which the brain’s olfactory circuitry has been best mapped, smell begins in the antennae. Sensory neurons there, each equipped with odor receptors specialized to detect specific scents, transform the binding of odor molecules into electrical activity. When an odor is detected, these neurons, which make up the first layer of the olfactory network, signal to the second layer: a set of neurons that reside in a part of the brain called the antennal lobe. In the antennal lobe, sensory neurons that share the same receptor converge onto the same second-layer neuron. “They’re very choosy,” Yang says. “They don’t receive any input from neurons expressing other receptors.” Because it has fewer neurons than the first layer, this part of the network is considered a compression layer. These second-layer neurons, in turn, signal to a larger set of neurons in the third layer. Puzzlingly, those connections appear to be random.

For Yang, a computational neuroscientist, and Columbia University graduate student Peter Yiliu Wang, this knowledge of the fly’s olfactory system represented a unique opportunity. Few parts of the brain have been mapped as comprehensively, and that has made it difficult to evaluate how well certain computational models represent the true architecture of neural circuits, they say.

Building an artificial smell network

Neural networks, in which artificial neurons rewire themselves to perform specific tasks, are computational tools inspired by the brain. They can be trained to pick out patterns within complex datasets, making them valuable for speech and image recognition and other forms of artificial intelligence. There are hints that the neural networks that do this best replicate the activity of the nervous system. But, says Wang, who is now a postdoctoral researcher at Stanford University, differently structured networks could generate similar results, and neuroscientists still need to know whether artificial neural networks reflect the actual structure of biological circuits. With comprehensive anatomical data about fruit fly olfactory circuits, he says: “We’re able to ask this question: Can artificial neural networks truly be used to study the brain?”

Collaborating closely with Columbia neuroscientists Richard Axel and Larry Abbott, Yang and Wang constructed a network of artificial neurons comprising an input layer, a compression layer, and an expansion layer—just like the fruit fly olfactory system. They gave it the same number of neurons as the fruit fly system, but no inherent structure: connections between neurons would be rewired as the model learned to classify odors.
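That layered setup — an input layer feeding a smaller compression layer feeding a larger expansion layer, with no built-in structure — can be sketched in a few lines. This is a minimal illustration under assumed parameters, not the authors’ implementation: the layer sizes, random initial weights, and ReLU nonlinearity are choices made here for clarity, and in the actual study the connections were rewired by training on an odor classification task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed layer sizes, loosely patterned on the fly olfactory system;
# the study's exact neuron counts may differ.
N_INPUT = 500        # sensory neurons (input layer)
N_COMPRESSION = 50   # antennal-lobe-like compression layer
N_EXPANSION = 2500   # third-layer expansion

# Dense random initial weights: the network starts with no inherent
# structure, and training would reshape these connections.
W1 = rng.normal(0.0, 0.1, size=(N_COMPRESSION, N_INPUT))
W2 = rng.normal(0.0, 0.1, size=(N_EXPANSION, N_COMPRESSION))

def forward(odor):
    """One forward pass from odor input to expansion-layer activity."""
    compressed = np.maximum(W1 @ odor, 0.0)     # ReLU nonlinearity (assumed)
    expanded = np.maximum(W2 @ compressed, 0.0)
    return expanded

activity = forward(rng.random(N_INPUT))
print(activity.shape)  # (2500,)
```

After training such a network to categorize odors, the question is whether the learned weight matrices become sparse and structured the way the fly’s antennal lobe and third-layer connections are — which is what the researchers found.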

The scientists asked the network to assign data representing different odors to categories, and to correctly categorize not just single odors, but also mixtures of odors. This is something that the brain’s olfactory system is uniquely good at, Yang says. If you combine the scents of two different apples, he explains, the brain still smells apple. In contrast, if two photographs of cats are blended pixel by pixel, the brain no longer sees a cat. This ability is just one feature of the brain’s odor-processing circuits, but it captures the essence of the system, Yang says.

It took the artificial network only minutes to organize itself. The structure that emerged was stunningly similar to that found in the fruit fly brain. Each neuron in the compression layer received inputs from a particular type of input neuron and connected, seemingly randomly, to multiple neurons in the expansion layer. What’s more, each neuron in the expansion layer received connections, on average, from six compression-layer neurons—exactly as occurs in the fruit fly brain.

“It could have been one, it could have been 50. It could have been anywhere in between,” Yang says. “Biology finds six, and our network finds about six as well.” Evolution found this organization through random mutation and natural selection; the artificial network found it through standard machine learning algorithms.

The surprising convergence provides strong support that the brain circuits that interpret olfactory information are optimally organized for their task, he says. Now, researchers can use the model to further probe that structure, examining how the network evolves under different conditions and manipulating the circuitry in ways that cannot be done experimentally.

Tracking time in the brain

By studying how primates mentally measure time, scientists at MIT’s McGovern Institute have discovered that the brain runs an internal clock whose speed is set by prior experience. In new experiences, the brain closely tracks how elapsed time intervals differ from its preset expectation—indicating that for the brain, time is relative.

The findings, reported September 15, 2021, in the journal Neuron, help explain how the brain uses past experience to make predictions—a powerful strategy for navigating a complex and ever-changing world. The research was led by McGovern Investigator Mehrdad Jazayeri, who is working to understand how the brain forms internal models of the world.

Internal clock

Sensory information tells us a lot about our environment, but the brain needs more than data, Jazayeri says. Internal models are vital for understanding the relationships between things, making generalizations, and interpreting and acting on our perceptions. They help us focus on what’s most important and make predictions about our surroundings, as well as the consequences of our actions. “To be efficient in learning about the world and interacting with the world, we need those predictions,” Jazayeri says. When we enter a new grocery store, for example, we don’t have to check every aisle for the peanut butter, because we know it is likely to be near the jam. Likewise, an experienced racquetball player knows how the ball will move when her racquet hits it a certain way.

Jazayeri’s team was interested in how the brain might make predictions about time. Previously, his team showed how neurons in the frontal cortex—a part of the brain involved in planning—can tick off the passage of time like a metronome. By training monkeys to use an eye movement to indicate the duration of time that separated two flashes of light, they found that cells that track time during this task cooperate to form an adjustable internal clock. Those cells generate a pattern of activity that can be drawn out to measure long time intervals or compressed to track shorter ones. The changes in these signal dynamics reflect elapsed time so precisely that by monitoring the right neurons, Jazayeri’s team can determine exactly how fast a monkey’s internal clock is running.
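The idea of an adjustable clock — the same stereotyped activity pattern traversed quickly for short intervals and slowly for long ones — can be illustrated with a toy sketch. The sine-shaped activity bump below is an arbitrary stand-in for the recorded population dynamics, not the actual data; the point is only that the profile is identical across intervals, with just the speed of traversal changing.

```python
import numpy as np

def clock_activity(interval, n_steps=100):
    """Toy 'temporal scaling' clock: one fixed activity profile,
    stretched or compressed to fill the interval being timed."""
    phase = np.linspace(0.0, 1.0, n_steps)  # fraction of interval elapsed
    activity = np.sin(np.pi * phase)        # stereotyped activity bump (assumed shape)
    times = phase * interval                # real time at each step
    return times, activity

t_short, a_short = clock_activity(0.5)  # compressed for a short interval
t_long, a_long = clock_activity(1.5)    # stretched for a long interval

# The activity profiles are identical; only the time axis differs,
# so reading out the traversal speed reveals the timed interval.
print(np.allclose(a_short, a_long))  # True
```

Under this picture, monitoring the right neurons and measuring how fast the pattern unfolds is enough to infer how fast the animal’s internal clock is running.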

Predictive processing

Nicolas Meirhaeghe, a graduate student in Mehrdad Jazayeri’s lab, studies how we plan and perform movements in the face of uncertainty. He is pictured here as part of the McGovern Institute 20th anniversary “Rising Stars” photo series. Photo: Michael Spencer

For their most recent experiments, graduate student Nicolas Meirhaeghe designed a series of experiments in which the delay between the two flashes of light changed as the monkeys repeated the task. Sometimes the flashes were separated by just a fraction of a second, sometimes the delay was a bit longer. He found that the time-keeping activity pattern in the frontal cortex occurred over different time scales as the monkeys came to expect delays of different durations. As the duration of the delay fluctuated, the brain appeared to take all prior experience into account, setting the clock to measure the average of those times in anticipation of the next interval.

The behavior of the neurons told the researchers that as a monkey waited for a new set of light cues, it already had an expectation about how long the delay would be. To make such a prediction, Meirhaeghe says, “the brain has no choice but to use all the different values that you perceive from your experience, average those out, and use this as the expectation.”

By analyzing neuronal behavior during their experiments, Jazayeri and Meirhaeghe determined that the brain’s signals were not encoding the full time elapsed between light cues, but instead how that time differed from the predicted time. Calculating this prediction error enabled the monkeys to report back how much time had elapsed.

Neuroscientists have suspected that this strategy, known as predictive processing, is widely used by the brain—although until now there has been little evidence of it outside early sensory areas. “You have a lot of stimuli that are coming from the environment, but lots of stimuli are actually predictable,” Meirhaeghe says. “The idea is that your brain is learning through experience patterns in the environment, and is subtracting your expectation from the incoming signal. What the brain actually processes in the end is the result of this subtraction.”

Finally, the researchers investigated the brain’s ability to update its expectations about time. After presenting monkeys with delays within a particular time range, they switched without warning to times that fluctuated within a new range. The brain responded quickly, updating its internal clock. “If you look inside the brain, after about 100 trials the monkeys have already figured out that these statistics have changed,” says Jazayeri.
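The logic of averaging past intervals into an expectation, encoding only the prediction error, and rapidly re-averaging when the statistics change can be captured in a toy sketch. This is not the paper’s model: the running-average update rule and the learning rate are assumptions chosen here to illustrate the idea of subtracting an expectation from the incoming signal.

```python
def track_intervals(intervals, lr=0.1):
    """Toy predictive-processing loop: maintain a running-average
    expectation of the interval, and on each trial encode the
    prediction error (observed minus expected) before updating."""
    expectation = intervals[0]
    errors = []
    for t in intervals:
        errors.append(t - expectation)         # signal encoded on this trial
        expectation += lr * (t - expectation)  # update the internal model
    return expectation, errors

# Intervals drawn from one value, then an abrupt switch to a new one:
trials = [0.5] * 200 + [1.0] * 200
expectation, errors = track_intervals(trials)

print(round(errors[200], 3))     # 0.5 -> large error right after the switch
print(round(expectation, 3))     # ~1.0 -> expectation converges to the new statistics
```

In this sketch the prediction error spikes at the switch and then decays within tens of trials as the expectation re-converges — a simplified analogue of the fast internal-model updating the researchers observed.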

It took longer, however—as many as 1,000 trials—for the monkeys to change their behavior in response to the change. “It seems like this prediction, and updating the internal model about the statistics of the world, is way faster than our muscles are able to implement,” Jazayeri says. “Our motor system is kind of lagging behind what our cognitive abilities tell us.” This makes sense, he says, because not every change in the environment merits a change in behavior. “You don’t want to be distracted by every small thing that deviates from your prediction. You want to pay attention to things that have a certain level of consistency.”

School of Science welcomes new faculty

This fall, MIT welcomes new faculty members — six assistant professors and two tenured professors — to the departments of Biology; Brain and Cognitive Sciences; Chemistry; Earth, Atmospheric and Planetary Sciences; and Physics.

A physicist, Soonwon Choi is interested in dynamical phenomena that occur in strongly interacting quantum many-body systems far from equilibrium and designing their applications for quantum information science. He takes a variety of interdisciplinary approaches from analytic theory and numerical computations to collaborations on experiments with controlled quantum degrees of freedom. Recently, Choi’s research has encompassed studying the phenomenon of a phase transition in the dynamics of quantum entanglement and information, drawing on machine learning to introduce a quantum convolutional neural network that can recognize quantum states associated with a one-dimensional symmetry-protected topological phase, and exploring a range of quantum applications of the nitrogen-vacancy color center of diamond.

After completing his undergraduate study in physics at Caltech in 2012, Choi received his PhD degree in physics from Harvard University in 2018. He then worked as a Miller Postdoctoral Fellow at the University of California at Berkeley before joining the Department of Physics and the Center for Theoretical Physics as an assistant professor in July 2021.

Olivia Corradin investigates how genetic variants contribute to disease. She focuses on non-coding DNA variants — changes in DNA sequence that can alter the regulation of gene expression — to gain insight into pathogenesis. With her novel outside-variant approach, Corradin’s lab singled out a type of brain cell involved in multiple sclerosis, increasing total heritability identified by three- to five-fold. A recipient of the Avenir Award through the NIH Director’s Pioneer Award Program, Corradin also scrutinizes how genetic and epigenetic variation influence susceptibility to substance abuse disorders. These critical insights into multiple sclerosis, opioid use disorder, and other diseases have the potential to improve risk assessment, diagnosis, treatment, and preventative care for patients.

Corradin completed a bachelor’s degree in biochemistry from Marquette University in 2010 and a PhD in genetics from Case Western Reserve University in 2016. A Whitehead Institute Fellow since 2016, she also became an institute member in July 2021. The Department of Biology welcomes Corradin as an assistant professor.

Arlene Fiore seeks to understand processes that control two-way interactions between air pollutants and the climate system, as well as the sensitivity of atmospheric chemistry to different chemical, physical, and biological sources and sinks at scales ranging from urban to global and daily to decadal. Combining chemistry-climate models and observations from ground, airborne, and satellite platforms, Fiore has identified global dimensions to ground-level ozone smog and particulate haze that arise from linkages with the climate system, global atmospheric composition, and the terrestrial biosphere. She also investigates regional meteorology and climate feedbacks due to aerosols versus greenhouse gases, future air pollution responses to climate change, and drivers of atmospheric oxidizing capacity. A new research direction involves using chemistry-climate model ensemble simulations to identify imprints of climate variability on observational records of trace gases in the troposphere.

After earning a bachelor’s degree and PhD from Harvard University, Fiore held a research scientist position at the Geophysical Fluid Dynamics Laboratory and was appointed as an associate professor with tenure at Columbia University in 2011. Over the last decade, she has worked with air and health management partners to develop applications of satellite and other Earth science datasets to address their emerging needs. Fiore’s honors include the American Geophysical Union (AGU) James R. Holton Junior Scientist Award, Presidential Early Career Award for Scientists and Engineers (the highest honor bestowed by the United States government on outstanding scientists and engineers in the early stages of their independent research careers), and AGU’s James B. Macelwane Medal. The Department of Earth, Atmospheric and Planetary Sciences welcomes Fiore as the first Peter H. Stone and Paola Malanotte Stone Professor.

With a background in magnetism, Danna Freedman leverages inorganic chemistry to solve problems in physics. Within this paradigm, she is creating the next generation of materials for quantum information by designing spin-based quantum bits, or qubits, based in molecules. These molecular qubits can be precisely controlled, opening the door for advances in quantum computation, sensing, and more. She also harnesses high pressure to synthesize new emergent materials, exploring the possibilities of intermetallic compounds and solid-state bonding. Among other innovations, Freedman has realized millisecond coherence times in molecular qubits, created a molecular analogue of an NV center featuring optical read-out of spin, and discovered the first iron-bismuth binary compound.

Freedman received her bachelor’s degree from Harvard University and her PhD from the University of California at Berkeley, then conducted postdoctoral research at MIT before joining the faculty at Northwestern University as an assistant professor in 2012, earning an NSF CAREER Award, the Presidential Early Career Award for Scientists and Engineers, the ACS Award in Pure Chemistry, and more. She was promoted to associate professor in 2018 and full professor with tenure in 2020. Freedman returns to MIT as the Frederick George Keyes Professor of Chemistry.

Kristin Knouse PhD ’17 aims to understand how tissues sense and respond to damage, with the goal of developing new approaches for regenerative medicine. She focuses on the mammalian liver — which has the unique ability to completely regenerate itself — to ask how organisms react to organ injury, how certain cells retain the ability to grow and divide while others do not, and what genes regulate this process. Knouse creates innovative tools, such as a genome-wide CRISPR screening within a living mouse, to examine liver regeneration from the level of a single-cell to the whole organism.

Knouse received a bachelor’s degree in biology from Duke University in 2010 and then enrolled in the Harvard and MIT MD-PhD Program, where she earned a PhD through the MIT Department of Biology in 2016 and an MD through the Harvard-MIT Program in Health Sciences and Technology in 2018. In 2018, she established her independent laboratory at the Whitehead Institute for Biomedical Research and was honored with the NIH Director’s Early Independence Award. Knouse joins the Department of Biology and the Koch Institute for Integrative Cancer Research as an assistant professor.

Lina Necib PhD ’17 is an astroparticle physicist exploring the origin of dark matter through a combination of simulations and observational data that correlate the dynamics of dark matter with that of the stars in the Milky Way. She has investigated the local dynamic structures in the solar neighborhood using the Gaia satellite, contributed to building a catalog of local accreted stars using machine learning techniques, and discovered a new stream called Nyx, after the Greek goddess of the night. Necib is interested in employing Gaia in conjunction with other spectroscopic surveys to understand the dark matter profile in the local solar neighborhood, the center of the galaxy, and in dwarf galaxies.

After obtaining a bachelor’s degree in mathematics and physics from Boston University in 2012 and a PhD in theoretical physics from MIT in 2017, Necib was a Sherman Fairchild Fellow at Caltech, a Presidential Fellow at the University of California at Irvine, and a fellow in theoretical astrophysics at Carnegie Observatories. She returns to MIT as an assistant professor in the Department of Physics and a member of the MIT Kavli Institute for Astrophysics and Space Research.

Andrew Vanderburg studies exoplanets, or planets that orbit stars other than the sun. Conducting astronomical observations from Earth as well as space, he develops cutting-edge methods to learn about planets outside of our solar system. Recently, he has leveraged machine learning to optimize searches and identify planets that were missed by previous techniques. With collaborators, he discovered the eighth planet in the Kepler-90 solar system, a Jupiter-like planet with unexpectedly close orbiting planets, and rocky bodies disintegrating near a white dwarf, providing confirmation of a theory that such stars may accumulate debris from their planetary systems.

Vanderburg received a bachelor’s degree in physics and astrophysics from the University of California at Berkeley in 2013 and a PhD in Astronomy from Harvard University in 2017. Afterward, Vanderburg moved to the University of Texas at Austin as a NASA Sagan Postdoctoral Fellow, then to the University of Wisconsin at Madison as a faculty member. He joins MIT as an assistant professor in the Department of Physics and a member of the Kavli Institute for Astrophysics and Space Research.

A computational neuroscientist, Guangyu Robert Yang is interested in connecting artificial neural networks to the actual functions of cognition. His research incorporates computational and biological systems and uses computational modeling to understand how neural systems are optimized to accomplish multiple tasks. As a postdoc, Yang applied principles of machine learning to study the evolution and organization of the olfactory system. The neural networks his models generated show important similarities to the biological circuitry, suggesting that the structure of the olfactory system evolved to optimally enable the specific tasks needed for odor recognition.

Yang received a bachelor’s degree in physics from Peking University before obtaining a PhD in computational neuroscience at New York University, followed by an internship in software engineering at Google Brain. Before coming to MIT, he conducted postdoctoral research at the Center for Theoretical Neuroscience of Columbia University, where he was a junior fellow at the Simons Society of Fellows. Yang is an assistant professor in the Department of Brain and Cognitive Sciences with a shared appointment in the Department of Electrical Engineering and Computer Science in the School of Engineering and the MIT Schwarzman College of Computing as well as an associate investigator with the McGovern Institute.

Mehrdad Jazayeri wants to know how our brains model the external world

Much of our daily life requires us to make inferences about the world around us. As you think about which direction your tennis opponent will hit the ball, or try to figure out why your child is crying, your brain is searching for answers about possibilities that are not directly accessible through sensory experiences.

MIT Associate Professor Mehrdad Jazayeri has devoted most of his career to exploring how the brain creates internal representations, or models, of the external world in order to make intelligent inferences about its hidden states.

“The one question I am most interested in is how does the brain form internal models of the external world? Studying inference is really a powerful way of gaining insight into these internal models,” says Jazayeri, who recently earned tenure in the Department of Brain and Cognitive Sciences and is also a member of MIT’s McGovern Institute for Brain Research.

Using a variety of approaches, including detailed analysis of behavior, direct recording of activity of neurons in the brain, and mathematical modeling, he has discovered how the brain builds models of statistical regularities in the environment. He has also found circuits and mechanisms that enable the brain to capture the causal relationships between observations and outcomes.

An unusual path

Jazayeri, who has been on the faculty at MIT since 2013, took an unusual path to a career in neuroscience. Growing up in Tehran, Iran, he was an indifferent student until his second year of high school, when he got interested in solving challenging geometry puzzles. He also started programming on a ZX Spectrum, an early 8-bit personal computer that his father had given him.

During high school, he was chosen to train for Iran’s first-ever National Physics Olympiad team, but when he failed to make it to the international team, he became discouraged and temporarily gave up on the idea of going to college. Eventually, he took the University National Entrance Exam and was admitted to the electrical engineering department at Sharif University of Technology.

Jazayeri didn’t enjoy his four years of college education. The experience mostly helped him realize that he was not meant to become an engineer. “I realized that I’m not an inventor. What inspires me is the process of discovery,” he says. “I really like to figure things out, not build things, so those four years were not very inspiring.”

After graduating from college, Jazayeri spent a few years working on a banana farm near the Caspian Sea, along with two friends. He describes those years as among the best and most formative of his life. He would wake by 4 a.m., work on the farm until late afternoon, and spend the rest of the day thinking and reading. One topic he read about with great interest was neuroscience, which led him a few years later to apply to graduate school.

He immigrated to Canada and was admitted to the University of Toronto, where he earned a master’s degree in physiology and neuroscience. While there, he worked on building small circuit models that would mimic the activity of neurons in the hippocampus.

From there, Jazayeri went on to New York University to earn a PhD in neuroscience, where he studied how signals in the visual cortex support perception and decision-making. “I was less interested in how the visual cortex encodes the external world,” he says. “I wanted to understand how the rest of the brain decodes the signals in visual cortex, which is, in effect, an inference problem.”

He continued pursuing his interest in the neurobiology of inference as a postdoc at the University of Washington, where he investigated how the brain uses temporal regularities in the environment to estimate time intervals, and uses knowledge about those intervals to plan for future actions.

Building internal models to make inferences

Inference is the process of drawing conclusions based on information that is not readily available. Making rich inferences from scarce data is one of humans’ core mental capacities, one that is central to what makes us the most intelligent species on Earth. To do so, our nervous system builds internal models of the external world, and those models help us think through possibilities without directly experiencing them.

The problem of inference presents itself in many behavioral settings.

“Our nervous system makes all sorts of internal models for different behavioral goals, some that capture the statistical regularities in the environment, some that link potential causes to effects, some that reflect relationships between entities, and some that enable us to think about others,” Jazayeri says.

Jazayeri’s lab at MIT is made up of a group of cognitive scientists, electrophysiologists, engineers, and physicists with a shared interest in understanding the nature of internal models in the brain and how those models enable us to make inferences in different behavioral tasks.

Early work in the lab focused on a simple timing task to examine the problem of statistical inference — that is, how we use statistical regularities in the environment to make accurate inferences. First, they found that the brain coordinates movements in time using a dynamic process, akin to an analog timer. They also found that the neural representation of time in the frontal cortex is continuously calibrated based on prior experience, so that we can make more accurate time estimates in the presence of uncertainty.

Later, the lab developed a complex decision-making task to examine the neural basis of causal inference, or the process of deducing a hidden cause based on its effects. In a paper that appeared in 2019, Jazayeri and his colleagues identified a hierarchical and distributed brain circuit in the frontal cortex that helps the brain to determine the most probable cause of failure within a hierarchy of decisions.

More recently, the lab has extended its investigation to other behavioral domains, including relational inference and social inference. Relational inference is about situating an ambiguous observation using relational memory. For example, coming out of a subway in a new neighborhood, we may use our knowledge of the relationship between visible landmarks to infer which way is north. Social inference, which is extremely difficult to study, involves deducing other people’s beliefs and goals based on their actions.

Along with studies in human volunteers and animal models, Jazayeri’s lab develops computational models based on neural networks, which help them test different possible hypotheses of how the brain performs specific tasks. By comparing the activity of those models with neural activity data from animals, the researchers can gain insight into how the brain actually performs a particular type of inference task.

“My main interest is in how the brain makes inferences about the world based on the neural signals,” Jazayeri says. “All of my work is about looking inside the brain, measuring signals, and using mathematical tools to try to understand how those signals are manifestations of an internal model within the brain.”

Josh McDermott seeks to replicate the human auditory system

The human auditory system is a marvel of biology. It can follow a conversation in a noisy restaurant, learn to recognize words from languages we’ve never heard before, and identify a familiar colleague by their footsteps as they walk by our office.

So far, even the most sophisticated computational models cannot perform such tasks as well as the human auditory system, but MIT neuroscientist Josh McDermott hopes to change that. Achieving this goal would be a major step toward developing new ways to help people with hearing loss, says McDermott, who recently earned tenure in MIT’s Department of Brain and Cognitive Sciences.

“Our long-term goal is to build good predictive models of the auditory system,” McDermott says.

“If we were successful in that goal, then it would really transform our ability to make people hear better, because we could design a computer program to figure out what to do to incoming sound to make it easier to recognize what somebody said or where a sound is coming from.”

McDermott’s lab also explores how exposure to different types of music affects people’s music preferences and even how they perceive music. Such studies can help to reveal elements of sound perception that are “hardwired” into our brains, and other elements that are influenced by exposure to different kinds of sounds.

“We have found that there is cross-cultural variation in things that people had widely supposed were universal and possibly even innate,” McDermott says.

Sound perception

As an undergraduate at Harvard University, McDermott originally planned to study math and physics, but “I was very quickly seduced by the brain,” he says. At the time, Harvard did not offer a major in neuroscience, so McDermott created his own, with a focus on vision.

After earning a master’s degree from University College London, he came to MIT to do a PhD in brain and cognitive sciences. His focus was still on vision, which he studied with Ted Adelson, the John and Dorothy Wilson Professor of Vision Science, but he found himself increasingly interested in audition. He had always loved music, and around this time, he started working as a radio and club DJ. “I was spending a lot of time thinking about sound and why things sound the way they do,” he recalls.

To pursue his new interest, he took a postdoctoral position at the University of Minnesota, working in a lab devoted to psychoacoustics — the study of how humans perceive sound. There, he studied auditory phenomena such as the “cocktail party effect,” or the ability to focus on a particular person’s voice while tuning out background noise. During another postdoc at New York University, he started working on computational models of the auditory system. That interest in computation is part of what drew him back to MIT as a faculty member in 2013.

“The culture here surrounding brain and cognitive science really prioritizes and values computation, and that was a perspective that was important to me,” says McDermott, who is also a member of MIT’s McGovern Institute for Brain Research and the Center for Brains, Minds and Machines. “I knew that was the kind of work I really wanted to do in my lab, so it just felt like a natural environment for doing that work.”

One aspect of audition that McDermott’s lab focuses on is “auditory scene analysis,” which includes tasks such as inferring what events in the environment caused a particular sound, and determining where a particular sound came from. This requires the ability to disentangle sounds produced by different events or objects, and the ability to tease out the effects of the environment. For instance, a basketball bouncing on a hardwood floor in a gym makes a different sound than a basketball bouncing on an outdoor paved court.

“Sounds in the world have very particular properties, due to physics and the way that the world works,” McDermott says. “We believe that the brain internalizes those regularities, and you have models in your head of the way that sound is generated. When you hear something, you are performing an inference in that model to figure out what is likely to have happened that caused the sound.”

A better understanding of how the brain does this may eventually lead to new strategies to enhance human hearing, McDermott says.

“Hearing impairment is the most common sensory disorder. It affects almost everybody as they get older, and the treatments are OK, but they’re not great,” he says. “We’re eventually going to all have personalized hearing aids that we walk around with, and we just need to develop the right algorithms in order to tell them what to do. That’s something we’re actively working on.”

Music in the brain

About 10 years ago, when McDermott was a postdoc, he started working on cross-cultural studies of how the human brain perceives music. Ricardo Godoy, an anthropologist at Brandeis University, asked McDermott to join him for some studies of the Tsimane’ people, who live in the Amazon rainforest. Since then, McDermott and some of his students have gone to Bolivia most summers to study sound perception among the Tsimane’. The Tsimane’ have had very little exposure to Western music, making them ideal subjects for studying how listening to certain kinds of music influences human sound perception.

These studies have revealed both differences and similarities between Westerners and the Tsimane’ people. McDermott, who counts soul, disco, and jazz-funk among his favorite types of music, has found that Westerners and the Tsimane’ differ in their perceptions of dissonance. To Western ears, for example, the chord of C and F# sounds very unpleasant, but not to the Tsimane’.

He has also shown that people in Western society perceive sounds that are separated by an octave to be similar, but the Tsimane’ do not. However, there are also some similarities between the two groups. For example, the upper limit of frequencies that can be perceived appears to be the same regardless of music exposure.

“We’re finding both striking variation in some perceptual traits that many people presumed were common across cultures and listeners, and striking similarities in others,” McDermott says. “The similarities and differences across cultures dissociate aspects of perception that are tightly coupled in Westerners, helping us to parcellate perceptual systems into their underlying components.”

Nine MIT students awarded 2021 Paul and Daisy Soros Fellowships for New Americans

An MIT senior and eight MIT graduate students are among the 30 recipients of this year’s P.D. Soros Fellowships for New Americans. In addition to senior Fiona Chen, MIT’s newest Soros winners include graduate students Aziza Almanakly, Alaleh Azhir, Brian Y. Chang PhD ’18, James Diao, Charlie ChangWon Lee, Archana Podury, Ashwin Sah ’20, and Enrique Toloza. Six of the recipients are enrolled in the Harvard-MIT Program in Health Sciences and Technology.

P.D. Soros Fellows receive up to $90,000 to fund their graduate studies and join a lifelong community of new Americans from different backgrounds and fields. The 2021 class was selected from a pool of 2,445 applicants, marking the most competitive year in the fellowship’s history.

The Paul & Daisy Soros Fellowships for New Americans program honors the contributions of immigrants and children of immigrants to the United States. As Fiona Chen says, “Being a new American has required consistent confrontation with the struggles that immigrants and racial minorities face in the U.S. today. It has meant frequent difficulties with finding security and comfort in new contexts. But it has also meant continual growth in learning to love the parts of myself — the way I look; the things that my family and I value — that have marked me as different, or as an outsider.”

Students interested in applying to the P.D. Soros fellowship should contact Kim Benard, assistant dean of distinguished fellowships in Career Advising and Professional Development.

Aziza Almanakly

Aziza Almanakly, a PhD student in electrical engineering and computer science, researches microwave quantum optics with superconducting qubits for quantum communication under Professor William Oliver in the Department of Physics. Almanakly’s career goal is to engineer multi-qubit systems that push boundaries in quantum technology.

Born and raised in northern New Jersey, Almanakly is the daughter of Syrian immigrants who came to the United States in the early 1990s in pursuit of academic opportunities. As the civil war in Syria grew dire, more of her relatives sought asylum in the U.S. Almanakly grew up around extended family who built a new version of their Syrian home in New Jersey.

Following in the footsteps of her mathematically minded father, Almanakly studied electrical engineering at The Cooper Union for the Advancement of Science and Art. She also pursued research opportunities in experimental quantum computing at Princeton University, the City University of New York, New York University, and Caltech.

Almanakly recognizes the importance of strong mentorship in diversifying engineering. She uses her unique experience as a New American and female engineer to encourage students from underrepresented backgrounds to enter STEM fields.

Alaleh Azhir

Alaleh Azhir grew up in Iran, where she pursued her passion for mathematics. She immigrated with her mother to the United States at age 14. Determined to overcome the strict gender roles she had seen imposed on women, Azhir is dedicated to improving women’s health care.

Azhir graduated from Johns Hopkins University in 2019 with a perfect GPA as a triple major in biomedical engineering, computer science, and applied mathematics and statistics. A Rhodes and Barry Goldwater Scholar, she has developed many novel tools for visualization and analysis of genomics data at Johns Hopkins University, Harvard University, MIT, the National Institutes of Health, and laboratories in Switzerland.

After completing a master’s in statistical science at Oxford University, Azhir began her MD studies in the Harvard-MIT Program in Health Sciences and Technology. Her thesis focuses on the role of the X and Y sex chromosomes in disease manifestations. Through medical training, she aims to build further computational tools specifically for preventive care for women. She has also founded and directs Frappa, a nonprofit organization that mentors women living in Iran and helps them immigrate abroad through the graduate school application process.

Brian Y. Chang PhD ’18

Born in Johnson City, New York, Brian Y. Chang PhD ’18 is the son of immigrants from the Shanghai municipality and Shandong Province in China. He pursued undergraduate and master’s degrees in mechanical engineering at Carnegie Mellon University, graduating in a combined four years with honors.

In 2018, Chang completed a PhD in medical engineering at MIT. Under the mentorship of Professor Elazer Edelman, Chang developed methods that make advanced cardiac technologies more accessible. The resulting approaches are used in hospitals around the world. Chang has published extensively and holds five patents.

With the goal of harnessing the power of engineering to improve patient care, Chang co-founded X-COR Therapeutics, a seed-funded medical device startup developing a more accessible treatment for lung failure with the potential to support patients with severe Covid-19 and chronic obstructive pulmonary disease.

After spending time in the hospital connecting with patients and teaching cardiovascular pathophysiology to medical students, Chang decided to attend medical school. He is currently a medical student in the Harvard-MIT Program in Health Sciences and Technology. Chang hopes to advance health care through medical device innovation and education as a future physician-scientist, entrepreneur, and educator.

Fiona Chen

MIT senior Fiona Chen was born in Cedar Park, Texas, the daughter of immigrants from China. Witnessing how her own and many other immigrant families faced significant difficulties finding work and financial stability sparked her interest in learning about poverty and economic inequality.

At MIT, Chen has pursued degrees in economics and mathematics. Her economics research projects have examined important policy issues — social isolation among students, global development and poverty, universal health-care systems, and the role of technology in shaping the labor market.

An active member of the MIT community, Chen has served as the officer on governance and officer on policy of the Undergraduate Association, MIT’s student government; the opinion editor of The Tech student newspaper; the undergraduate representative of several Institute-wide committees, including MIT’s Corporation Joint Advisory Committee; and one of the founding members of MIT Students Against War. In each of these roles, she has worked to advocate for policies to support underrepresented groups at MIT.

As a Soros fellow, Chen will pursue a PhD in economics to deepen her understanding of economic policy. Her ultimate goal is to become a professor who researches poverty and economic inequality, and applies her findings to craft policy solutions.

James Diao

James Diao graduated from Yale University with degrees in statistics and biochemistry and is currently a medical student in the Harvard-MIT Program in Health Sciences and Technology. He aspires to give voice to patient perspectives in the development and evaluation of health-care technology.

Diao grew up in Houston’s Chinatown, and spent summers with his extended family in Jiangxian. Diao’s family later moved to Fort Bend, Texas, where he found a pediatric oncologist mentor who introduced him to the wonders of modern molecular biology.

Diao’s interests include the responsible development of technology. At Apple, he led projects to validate wearable health features in diverse populations; at PathAI, he built deep learning models to broaden access to pathologist services; at Yale, he worked on standardizing analyses of exRNA biomarkers; and at Harvard, he studied the impacts of clinical guidelines on marginalized groups.

Diao’s lead-author research in the New England Journal of Medicine and JAMA systematically compared race-based and race-free equations for kidney function, and demonstrated that up to 1 million Black Americans may receive unequal kidney care due to their race. He has also published articles on machine learning and precision medicine.

Charlie ChangWon Lee

Born in Seoul, South Korea, Charlie ChangWon Lee was 10 when his family immigrated to the United States and settled in Palisades Park, New Jersey. The stress of his parents’ lack of health coverage ignited Lee’s determination to study the reasons for the high cost of health care in the U.S. and learn how to care for uninsured families like his own.

Lee graduated summa cum laude in integrative biology from Harvard College, winning the Hoopes Prize for his thesis on the therapeutic potential of human gut microbes. Lee’s research on novel therapies led him to question how newly approved, and expensive, medications could reach more patients.

At the Program on Regulation, Therapeutics, and Law (PORTAL) at Brigham and Women’s Hospital, Lee studied policy issues involving pharmaceutical drug pricing, drug development, and medication use and safety. His articles have appeared in JAMA, Health Affairs, and Mayo Clinic Proceedings.

As a first-year medical student in the Harvard-MIT Health Sciences and Technology program, Lee is investigating policies to incentivize vaccine and biosimilar drug development. He hopes to find avenues to bridge science and policy and translate medical innovations into accessible, affordable therapies.

Archana Podury

The daughter of Indian immigrants, Archana Podury was born in Mountain View, California. As an undergraduate at Cornell University, she studied the neural circuits underlying motor learning. Her growing interest in whole-brain dynamics led her to the Princeton Neuroscience Institute and Neuralink, where she discovered how brain-machine interfaces could be used to understand diffuse networks in the brain.

While studying neural circuits, Podury worked at a syringe exchange in Ithaca, New York, where she witnessed firsthand the mechanics of court-based drug rehabilitation. Now, as an MD student in the Harvard-MIT Health Sciences and Technology program, Podury is interested in combining computational and social approaches to neuropsychiatric disease.

In the Boyden Lab at the MIT McGovern Institute for Brain Research, Podury is developing human brain organoid models to better characterize circuit dysfunction in neurodevelopmental disorders. Concurrently, her work in the Dhand Lab at Brigham and Women’s Hospital applies network science tools to understand how patients’ social environments influence their health outcomes following acute neurological injury.

Podury hopes that focusing on both neural and social networks can lead toward a more comprehensive, and compassionate, approach to health and disease.

Ashwin Sah ’20

Ashwin Sah ’20 was born and raised in Portland, Oregon, the son of Indian immigrants. He developed a passion for mathematics research as an undergraduate at MIT, where he conducted research under Professor Yufei Zhao, as well as at the Duluth and Emory REU (Research Experience for Undergraduates) programs.

Sah has given talks on his work at multiple professional venues. His undergraduate research in varied areas of combinatorics and discrete mathematics earned him the Barry Goldwater Scholarship and the Frank and Brennie Morgan Prize for Outstanding Research in Mathematics by an Undergraduate Student. Additionally, his work on diagonal Ramsey numbers was recently featured in Quanta Magazine.

Beyond research, Sah has pursued opportunities to give back to the math community, helping to organize or grade competitions such as the Harvard-MIT Mathematics Tournament and the USA Mathematical Olympiad. He has also been a grader at the Mathematical Olympiad Program, a camp for talented high-school students in the United States, and an instructor for the Monsoon Math Camp, a virtual program aimed at teaching higher mathematics to high school students in India.

Sah is currently a PhD student in mathematics at MIT, where he continues to work with Zhao.

Enrique Toloza

Enrique Toloza was born in Los Angeles, California, the child of two immigrants: one from Colombia who came to the United States for a PhD and the other from the Philippines who grew up in California and went on to medical school. Their literal marriage of science and medicine inspired Toloza to become a physician-scientist.

Toloza majored in physics and Spanish literature at the University of North Carolina at Chapel Hill. He eventually settled on an interest in theoretical neuroscience after a summer research internship at MIT and an honors thesis on noninvasive brain stimulation.

After college, Toloza joined Professor Mark Harnett’s laboratory at MIT for a year. He went on to enroll in the Harvard-MIT MD/PhD program, studying within the Health Sciences and Technology MD curriculum at Harvard and the PhD program at MIT. For his PhD, Toloza rejoined Harnett to conduct research on the biophysics of dendritic integration and the contribution of dendrites to cortical computations in the brain.

Toloza is passionate about expanding health care access to immigrant populations. In college, he led the interpreting team at the University of North Carolina at Chapel Hill’s student-run health clinic; at Harvard Medical School, he has worked with Spanish-speaking patients as a student clinician.

James DiCarlo named director of the MIT Quest for Intelligence

James DiCarlo, the Peter de Florez Professor of Neuroscience, has been appointed to the role of director of the MIT Quest for Intelligence. MIT Quest was launched in 2018 to discover the basis of natural intelligence, create new foundations for machine intelligence, and deliver new tools and technologies for humanity.

As director, DiCarlo will forge new collaborations with researchers within MIT and beyond to accelerate progress in understanding intelligence and developing the next generation of intelligence tools.

“We have discovered and developed surprising new connections between natural and artificial intelligence,” says DiCarlo, currently head of the Department of Brain and Cognitive Sciences (BCS). “The scientific understanding of natural intelligence, and advances in building artificial intelligence with positive real-world impact, are interlocked aspects of a unified, collaborative grand challenge, and MIT must continue to lead the way.”

Aude Oliva, senior research scientist at the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the MIT director of the MIT-IBM Watson AI Lab, will lead industry engagements as director of MIT Quest Corporate. Nicholas Roy, professor of aeronautics and astronautics and a member of CSAIL, will lead the development of systems to deliver on the mission as director of MIT Quest Systems Engineering. Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing, will serve as chair of MIT Quest.

“The MIT Quest’s leadership team has positioned this initiative to spearhead our understanding of natural and artificial intelligence, and I am delighted that Jim is taking on this role,” says Huttenlocher, the Henry Ellis Warren (1894) Professor of Electrical Engineering and Computer Science.

DiCarlo will step down from his current role as head of BCS, a position he has held for nearly nine years, and will continue as faculty in BCS and as an investigator in the McGovern Institute for Brain Research.

“Jim has been a highly productive leader for his department, the School of Science, and the Institute at large. I’m excited to see the impact he will make in this new role,” says Nergis Mavalvala, dean of the School of Science and the Curtis and Kathleen Marble Professor of Astrophysics.

As department head, DiCarlo oversaw significant progress in the department’s scientific and educational endeavors. Roughly a quarter of current BCS faculty were hired on his watch, strengthening the department’s foundations in cognitive, systems, and cellular and molecular brain science. In addition, DiCarlo developed a new departmental emphasis in computation, deepening BCS’s ties with the MIT Schwarzman College of Computing and other MIT units such as the Center for Brains, Minds and Machines. He also developed and leads an NIH-funded graduate training program in computationally enabled integrative neuroscience. As a result, BCS is one of the few departments in the world that is attempting to decipher, in engineering terms, how the human mind emerges from the biological components of the brain.

To prepare students for this future, DiCarlo collaborated with BCS Associate Department Head Michale Fee to design and execute a total overhaul of the Course 9 curriculum. In addition, partnering with the Department of Electrical Engineering and Computer Science, BCS developed a new major, Course 6-9 (Computation and Cognition), to fill the rapidly growing interest in this interdisciplinary topic. In only its second year, Course 6-9 already has more than 100 undergraduate majors.

DiCarlo has also worked tirelessly to build a more open, connected, and supportive culture across the entire BCS community in Building 46. In this work, as in everything, DiCarlo sought to bring people together to address challenges collaboratively. He attributes progress to strong partnerships with Li-Huei Tsai, the Picower Professor of Neuroscience in BCS and director of the Picower Institute for Learning and Memory; Robert Desimone, the Doris and Don Berkey Professor in BCS and director of the McGovern Institute for Brain Research; and to the work of dozens of faculty and staff. For example, in collaboration with associate department head Professor Rebecca Saxe, the department has focused on faculty mentorship of graduate students, and, in collaboration with postdoc officer Professor Mark Bear, the department developed postdoc salary and benefit standards. Both initiatives have become models for the Institute. In recent months, DiCarlo partnered with new associate department head Professor Laura Schulz to constructively focus renewed energy and resources on initiatives to address systemic racism and promote diversity, equity, inclusion, and social justice.

“Looking ahead, I share Jim’s vision for the research and educational programs of the department, and for enhancing its cohesiveness as a community, especially with regard to issues of diversity, equity, inclusion, and justice,” says Mavalvala. “I am deeply committed to supporting his successor in furthering these goals while maintaining the great intellectual strength of BCS.”

In his own research, DiCarlo uses a combination of large-scale neurophysiology, brain imaging, optogenetic methods, and high-throughput computational simulations to understand the neuronal mechanisms and cortical computations that underlie human visual intelligence. Working in animal models, he and his research collaborators have established precise connections between the internal workings of the visual system and the internal workings of particular computer vision systems. And they have demonstrated that these science-to-engineering connections lead to new ways to modulate neurons deep in the brain as well as to improved machine vision systems. His lab’s goals are to help develop more human-like machine vision, new neural prosthetics to restore or augment lost senses, new learning strategies, and an understanding of how visual cognition is impaired in agnosia, autism, and dyslexia.

DiCarlo earned both a PhD in biomedical engineering and an MD from The Johns Hopkins University in 1998, and completed his postdoc training in primate visual neurophysiology at Baylor College of Medicine. He joined the MIT faculty in 2002.

A search committee will convene early this year to recommend candidates for the next department head of BCS. DiCarlo will continue to lead the department until that new head is selected.

To the brain, reading computer code is not the same as reading language

In some ways, learning to program a computer is similar to learning a new language. It requires learning new symbols and terms, which must be organized correctly to instruct the computer what to do. The computer code must also be clear enough that other programmers can read and understand it.

In spite of those similarities, MIT neuroscientists have found that reading computer code does not activate the regions of the brain that are involved in language processing. Instead, it activates a distributed network called the multiple demand network, which is also recruited for complex cognitive tasks such as solving math problems or crossword puzzles.

However, although reading computer code activates the multiple demand network, it appears to rely more on different parts of the network than math or logic problems do, suggesting that coding does not precisely replicate the cognitive demands of mathematics either.

“Understanding computer code seems to be its own thing. It’s not the same as language, and it’s not the same as math and logic,” says Anna Ivanova, an MIT graduate student and the lead author of the study.

Evelina Fedorenko, the Frederick A. and Carole J. Middleton Career Development Associate Professor of Neuroscience and a member of the McGovern Institute for Brain Research, is the senior author of the paper, which appears today in eLife. Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory and Tufts University were also involved in the study.

Language and cognition

McGovern Investigator Ev Fedorenko in the Martinos Imaging Center at MIT. Photo: Caitlin Cunningham

A major focus of Fedorenko’s research is the relationship between language and other cognitive functions. In particular, she has been studying the question of whether other functions rely on the brain’s language network, which includes Broca’s area and other regions in the left hemisphere of the brain. In previous work, her lab has shown that music and math do not appear to activate this language network.

“Here, we were interested in exploring the relationship between language and computer programming, partially because computer programming is such a new invention that we know that there couldn’t be any hardwired mechanisms that make us good programmers,” Ivanova says.

There are two schools of thought regarding how the brain learns to code, she says. One holds that in order to be good at programming, you must be good at math. The other suggests that because of the parallels between coding and language, language skills might be more relevant. To shed light on this issue, the researchers set out to study whether brain activity patterns while reading computer code would overlap with language-related brain activity.

The two programming languages that the researchers focused on in this study are known for their readability — Python and ScratchJr, a visual programming language designed for children ages 5 and older. The subjects in the study were all young adults proficient in the language they were being tested on. While the programmers lay in a functional magnetic resonance imaging (fMRI) scanner, the researchers showed them snippets of code and asked them to predict what action the code would produce.
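To give a sense of the task, here is a hypothetical Python snippet of the kind a participant might be asked to evaluate — an illustrative example, not taken from the study’s actual materials. Reading it requires holding the function’s rule in mind and mentally stepping through the list, the sort of demand the multiple demand network handles:

```python
# Code-comprehension item: predict the output before reading on.
def transform(words):
    # Keep only words longer than three letters, uppercased.
    return [w.upper() for w in words if len(w) > 3]

result = transform(["the", "brain", "reads", "code"])
print(result)  # → ['BRAIN', 'READS', 'CODE']
```

Answering correctly means tracking each word against the length condition and applying the uppercase rule — simulating the program’s behavior rather than parsing it like a sentence.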

The researchers saw little to no response to code in the language regions of the brain. Instead, they found that the coding task mainly activated the so-called multiple demand network. This network, whose activity is spread throughout the frontal and parietal lobes of the brain, is typically recruited for tasks that require holding many pieces of information in mind at once, and is responsible for our ability to perform a wide variety of mental tasks.

“It does pretty much anything that’s cognitively challenging, that makes you think hard,” says Ivanova, who was also named one of the McGovern Institute’s rising stars in neuroscience.

Previous studies have shown that math and logic problems seem to rely mainly on the multiple demand regions in the left hemisphere, while tasks that involve spatial navigation activate the right hemisphere more than the left. The MIT team found that reading computer code appears to activate both the left and right sides of the multiple demand network, and ScratchJr activated the right side slightly more than the left. This finding goes against the hypothesis that math and coding rely on the same brain mechanisms.

Effects of experience

The researchers say that while they didn’t identify any regions that appear to be exclusively devoted to programming, such specialized brain activity might develop in people who have much more coding experience.

“It’s possible that if you take people who are professional programmers, who have spent 30 or 40 years coding in a particular language, you may start seeing some specialization, or some crystallization of parts of the multiple demand system,” Fedorenko says. “In people who are familiar with coding and can efficiently do these tasks, but have had relatively limited experience, it just doesn’t seem like you see any specialization yet.”

In a companion paper appearing in the same issue of eLife, a team of researchers from Johns Hopkins University also reported that solving code problems activates the multiple demand network rather than the language regions.

The findings suggest there isn’t a definitive answer to whether coding should be taught as a math-based skill or a language-based skill. In part, that’s because learning to program may draw on both language and multiple demand systems, even if — once learned — programming doesn’t rely on the language regions, the researchers say.

“There have been claims from both camps — it has to be together with math, it has to be together with language,” Ivanova says. “But it looks like computer science educators will have to develop their own approaches for teaching code most effectively.”

The research was funded by the National Science Foundation, the Department of the Brain and Cognitive Sciences at MIT, and the McGovern Institute for Brain Research.