Study finds a striking difference between neurons of humans and other mammals

Neurons communicate with each other via electrical impulses, which are produced by ion channels that control the flow of ions such as potassium and sodium. In a surprising new finding, MIT neuroscientists have shown that human neurons have a much smaller number of these channels than expected, compared to the neurons of other mammals.

The researchers hypothesize that this reduction in channel density may have helped the human brain evolve to operate more efficiently, allowing it to divert resources to other energy-intensive processes that are required to perform complex cognitive tasks.

“If the brain can save energy by reducing the density of ion channels, it can spend that energy on other neuronal or circuit processes,” says Mark Harnett, an associate professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

In the most extensive electrophysiological study of its kind, Harnett and his colleagues analyzed neurons from 10 different mammals and identified a “building plan” that holds true for every species they looked at — except for humans. They found that as the size of neurons increases, the density of channels found in the neurons also increases.

However, human neurons proved to be a striking exception to this rule.

“Previous comparative studies established that the human brain is built like other mammalian brains, so we were surprised to find strong evidence that human neurons are special,” says former MIT graduate student Lou Beaulieu-Laroche.

Beaulieu-Laroche is the lead author of the study, which appears today in Nature.

A building plan

Neurons in the mammalian brain can receive electrical signals from thousands of other cells, and that input determines whether or not they will fire an electrical impulse called an action potential. In 2018, Harnett and Beaulieu-Laroche discovered that human and rat neurons differ in some of their electrical properties, primarily in parts of the neuron called dendrites — tree-like antennas that receive and process input from other cells.

One of the findings from that study was that human neurons had a lower density of ion channels than neurons in the rat brain. The researchers were surprised by this observation, as ion channel density was generally assumed to be constant across species. In their new study, Harnett and Beaulieu-Laroche decided to compare neurons from several different mammalian species to see if they could find any patterns that governed the expression of ion channels. They studied two types of voltage-gated potassium channels and the HCN channel, which conducts both potassium and sodium, in layer 5 pyramidal neurons, a type of excitatory neuron found in the brain’s cortex.

They were able to obtain brain tissue from 10 mammalian species: Etruscan shrews (one of the smallest known mammals), gerbils, mice, rats, guinea pigs, ferrets, rabbits, marmosets, and macaques, as well as human tissue removed from patients with epilepsy during brain surgery. This variety allowed the researchers to cover a range of cortical thicknesses and neuron sizes across the mammalian kingdom.

The researchers found that in nearly every mammalian species they looked at, the density of ion channels increased as the size of the neurons went up. The one exception to this pattern was in human neurons, which had a much lower density of ion channels than expected.

The increase in channel density across species was surprising, Harnett says, because the more channels there are, the more energy is required to pump ions in and out of the cell. However, it started to make sense once the researchers began thinking about the number of channels in the overall volume of the cortex, he says.

In the tiny brain of the Etruscan shrew, which is packed with very small neurons, there are more neurons in a given volume of tissue than in the same volume of tissue from the rabbit brain, which has much larger neurons. But because the rabbit neurons have a higher density of ion channels, the density of channels in a given volume of tissue works out to be the same in both species, as it does in every other nonhuman species the researchers analyzed.

“This building plan is consistent across nine different mammalian species,” Harnett says. “What it looks like the cortex is trying to do is keep the numbers of ion channels per unit volume the same across all the species. This means that for a given volume of cortex, the energetic cost is the same, at least for ion channels.”
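
As a back-of-the-envelope illustration of that trade-off (the numbers below are invented; only the proportionality matters), the channel count per unit volume is the product of how many neurons fit in the volume and how many channels each neuron carries:

```python
# Hypothetical numbers, purely illustrative of the "building plan":
# fewer, larger neurons with more channels each can yield the same
# channel count per unit volume as many small neurons with few channels.

species = {
    # name: (neurons per unit volume, ion channels per neuron)
    "shrew-like":  (1000, 10),   # many small neurons, few channels each
    "rabbit-like": (100, 100),   # fewer large neurons, more channels each
}

for name, (neurons_per_volume, channels_per_neuron) in species.items():
    channels_per_volume = neurons_per_volume * channels_per_neuron
    print(f"{name}: {channels_per_volume} channels per unit volume")

# Both lines print 10000, so the energetic cost of running ion channels
# in a given volume of cortex comes out the same for both.
```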

Energy efficiency

The human brain represents a striking deviation from this building plan, however. Instead of the expected increase, human neurons showed a dramatically lower density of ion channels for a given volume of brain tissue.

The researchers believe this lower density may have evolved as a way to expend less energy on pumping ions, which allows the brain to use that energy for something else, like creating more complicated synaptic connections between neurons or firing action potentials at a higher rate.

“We think that humans have evolved out of this building plan that was previously restricting the size of cortex, and they figured out a way to become more energetically efficient, so you spend less ATP per volume compared to other species,” Harnett says.

He now hopes to study where that extra energy might be going, and whether there are specific gene mutations that help neurons of the human cortex achieve this high efficiency. The researchers are also interested in exploring whether primate species that are more closely related to humans show similar decreases in ion channel density.

The research was funded by the Natural Sciences and Engineering Research Council of Canada, a Friends of the McGovern Institute Fellowship, the National Institute of General Medical Sciences, the Paul and Daisy Soros Fellows Program, the Dana Foundation David Mahoney Neuroimaging Grant Program, the National Institutes of Health, the Harvard-MIT Joint Research Grants Program in Basic Neuroscience, and Susan Haar.

Other authors of the paper include Norma Brown, an MIT technical associate; Marissa Hansen, a former post-baccalaureate scholar; Enrique Toloza, a graduate student at MIT and Harvard Medical School; Jitendra Sharma, an MIT research scientist; Ziv Williams, an associate professor of neurosurgery at Harvard Medical School; Matthew Frosch, an associate professor of pathology and health sciences and technology at Harvard Medical School; Garth Rees Cosgrove, director of epilepsy and functional neurosurgery at Brigham and Women’s Hospital; and Sydney Cash, an assistant professor of neurology at Harvard Medical School and Massachusetts General Hospital.

Giving robots social skills

Robots can deliver food on a college campus and hit a hole-in-one on the golf course, but even the most sophisticated robot can’t perform basic social interactions that are critical to everyday human life.

MIT researchers have now incorporated certain social interactions into a framework for robotics, enabling machines to understand what it means to help or hinder one another, and to learn to perform these social behaviors on their own. In a simulated environment, a robot watches its companion, guesses what task it wants to accomplish, and then helps or hinders this other robot based on its own goals.

The researchers also showed that their model creates realistic and predictable social interactions. When they showed videos of these simulated robots interacting with one another to humans, the human viewers mostly agreed with the model about what type of social behavior was occurring.

Enabling robots to exhibit social skills could lead to smoother and more positive human-robot interactions. For instance, a robot in an assisted living facility could use these capabilities to help create a more caring environment for elderly individuals. The new model may also enable scientists to measure social interactions quantitatively, which could help psychologists study autism or analyze the effects of antidepressants.

“Robots will live in our world soon enough, and they really need to learn how to communicate with us on human terms. They need to understand when it is time for them to help and when it is time for them to see what they can do to prevent something from happening. This is very early work and we are barely scratching the surface, but I feel like this is the first very serious attempt for understanding what it means for humans and machines to interact socially,” says Boris Katz, principal research scientist and head of the InfoLab Group in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and a member of the Center for Brains, Minds, and Machines (CBMM).

Joining Katz on the paper are co-lead author Ravi Tejwani, a research assistant at CSAIL; co-lead author Yen-Ling Kuo, a CSAIL PhD student; Tianmin Shu, a postdoc in the Department of Brain and Cognitive Sciences; and senior author Andrei Barbu, a research scientist at CSAIL and CBMM. The research will be presented at the Conference on Robot Learning in November.

A social simulation

To study social interactions, the researchers created a simulated environment where robots pursue physical and social goals as they move around a two-dimensional grid.

A physical goal relates to the environment. For example, a robot’s physical goal might be to navigate to a tree at a certain point on the grid. A social goal involves guessing what another robot is trying to do and then acting based on that estimation, like helping another robot water the tree.

The researchers use their model to specify what a robot’s physical goals are, what its social goals are, and how much emphasis it should place on one over the other. The robot is rewarded for actions it takes that get it closer to accomplishing its goals. If a robot is trying to help its companion, it adjusts its reward to match that of the other robot; if it is trying to hinder, it adjusts its reward to be the opposite. The planner, an algorithm that decides which actions the robot should take, uses this continually updating reward to guide the robot to carry out a blend of physical and social goals.
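
A minimal sketch of that reward adjustment, with invented names and weights (the paper’s actual formulation is more elaborate), might look like this:

```python
# Hedged sketch: blend a robot's own physical reward with a social term
# derived from its estimate of another robot's reward. All names and
# numbers are hypothetical, not the authors' implementation.

def blended_reward(own_reward: float,
                   estimated_other_reward: float,
                   social_weight: float,
                   helping: bool) -> float:
    # Helping: adopt the other's estimated reward; hindering: oppose it.
    social_term = estimated_other_reward if helping else -estimated_other_reward
    return (1 - social_weight) * own_reward + social_weight * social_term

# A robot that strongly prioritizes helping its companion:
print(blended_reward(own_reward=1.0, estimated_other_reward=2.0,
                     social_weight=0.8, helping=True))  # 0.2 + 1.6 = 1.8
```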

“We have opened a new mathematical framework for how you model social interaction between two agents. If you are a robot, and you want to go to location X, and I am another robot and I see that you are trying to go to location X, I can cooperate by helping you get to location X faster. That might mean moving X closer to you, finding another better X, or taking whatever action you had to take at X. Our formulation allows the planner to discover the ‘how’; we specify the ‘what’ in terms of what social interactions mean mathematically,” says Tejwani.

Blending a robot’s physical and social goals is important to create realistic interactions, since humans who help one another have limits to how far they will go. For instance, a rational person likely wouldn’t just hand a stranger their wallet, Barbu says.

The researchers used this mathematical framework to define three types of robots. A level 0 robot has only physical goals and cannot reason socially. A level 1 robot has physical and social goals but assumes all other robots only have physical goals. Level 1 robots can take actions based on the physical goals of other robots, like helping and hindering. A level 2 robot assumes other robots have social and physical goals; these robots can take more sophisticated actions like joining in to help together.
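
In code, the hierarchy can be caricatured as each level attributing one less level of sophistication to everyone else (an illustrative sketch with assumed names, not the paper’s implementation):

```python
# Illustrative only: what each level of robot assumes about other robots.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Robot:
    level: int                    # 0, 1, or 2
    physical_goal: str            # e.g., "water the tree"
    social_stance: Optional[str]  # "help", "hinder", or None at level 0

    def assumption_about_others(self) -> str:
        if self.level == 0:
            return "ignores other agents entirely"
        if self.level == 1:
            return "treats others as level 0 (physical goals only)"
        return "treats others as social agents with goals of their own"

for robot in (Robot(0, "reach the tree", None),
              Robot(1, "reach the tree", "help"),
              Robot(2, "water the tree", "help")):
    print(f"level {robot.level}: {robot.assumption_about_others()}")
```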

Evaluating the model

To see how their model compared with human judgments of social interactions, the researchers created 98 different scenarios with robots at levels 0, 1, and 2. Twelve humans watched 196 video clips of the robots interacting, and were then asked to estimate the physical and social goals of those robots.

In most instances, their model agreed with what the humans thought about the social interactions that were occurring in each frame.

“We have this long-term interest, both to build computational models for robots, but also to dig deeper into the human aspects of this. We want to find out what features from these videos humans are using to understand social interactions. Can we make an objective test for your ability to recognize social interactions? Maybe there is a way to teach people to recognize these social interactions and improve their abilities. We are a long way from this, but even just being able to measure social interactions effectively is a big step forward,” Barbu says.

Toward greater sophistication

The researchers are working on developing a system with 3D agents in an environment that allows many more types of interactions, such as the manipulation of household objects. They are also planning to modify their model to include environments where actions can fail.

The researchers also want to incorporate a neural network-based robot planner into the model, which learns from experience and performs faster. Finally, they hope to run an experiment to collect data about the features humans use to determine if two robots are engaging in a social interaction.

“Hopefully, we will have a benchmark that allows all researchers to work on these social interactions and inspire the kinds of science and engineering advances we’ve seen in other areas such as object and action recognition,” Barbu says.

“I think this is a lovely application of structured reasoning to a complex yet urgent challenge,” says Tomer Ullman, assistant professor in the Department of Psychology at Harvard University and head of the Computation, Cognition, and Development Lab, who was not involved with this research. “Even young infants seem to understand social interactions like helping and hindering, but we don’t yet have machines that can perform this reasoning at anything like human-level flexibility. I believe models like the ones proposed in this work, that have agents thinking about the rewards of others and socially planning how best to thwart or support them, are a good step in the right direction.”

This research was supported by the Center for Brains, Minds, and Machines; the National Science Foundation; the MIT CSAIL Systems that Learn Initiative; the MIT-IBM Watson AI Lab; the DARPA Artificial Social Intelligence for Successful Teams program; the U.S. Air Force Research Laboratory; the U.S. Air Force Artificial Intelligence Accelerator; and the Office of Naval Research.

Artificial intelligence sheds light on how the brain processes language

In the past few years, artificial intelligence models of language have become very good at certain tasks. Most notably, they excel at predicting the next word in a string of text; this technology helps search engines and texting apps predict the next word you are going to type.

The most recent generation of predictive language models also appears to learn something about the underlying meaning of language. These models can not only predict the word that comes next, but also perform tasks that seem to require some degree of genuine understanding, such as question answering, document summarization, and story completion.
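
For intuition about the core task, next-word prediction can be sketched with the simplest possible statistical model, a bigram counter; models like GPT-3 replace the counting with deep networks trained on vast corpora, but the objective is the same:

```python
# Toy next-word predictor: count which word follows which in a corpus,
# then predict the most frequent continuation. Purely illustrative.

from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" (seen twice after "the")
```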

Such models were designed to optimize performance for the specific function of predicting text, without attempting to mimic anything about how the human brain performs this task or understands language. But a new study from MIT neuroscientists suggests the underlying function of these models resembles the function of language-processing centers in the human brain.

Computer models that perform well on other types of language tasks do not show this similarity to the human brain, offering evidence that the human brain may use next-word prediction to drive language processing.

“The better the model is at predicting the next word, the more closely it fits the human brain,” says Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience, a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines (CBMM), and an author of the new study. “It’s amazing that the models fit so well, and it very indirectly suggests that maybe what the human language system is doing is predicting what’s going to happen next.”

Joshua Tenenbaum, a professor of computational cognitive science at MIT and a member of CBMM and MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL); and Evelina Fedorenko, the Frederick A. and Carole J. Middleton Career Development Associate Professor of Neuroscience and a member of the McGovern Institute, are the senior authors of the study, which appears this week in the Proceedings of the National Academy of Sciences.

Martin Schrimpf, an MIT graduate student who works in CBMM, is the first author of the paper.

Making predictions

The new, high-performing next-word prediction models belong to a class of models called deep neural networks. These networks contain computational “nodes” that form connections of varying strength, arranged in layers that pass information to one another in prescribed ways.
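
Stripped to its essentials, that description amounts to a few lines of code (an illustrative sketch, not any particular model): connection strengths are a weight matrix, and each layer transforms its input before passing it on:

```python
# Minimal two-layer network: "nodes" sum their weighted inputs and apply
# a nonlinearity; layers pass their outputs forward. Illustrative only.

import numpy as np

rng = np.random.default_rng(0)

def layer(x: np.ndarray, n_out: int) -> np.ndarray:
    w = rng.normal(size=(x.shape[0], n_out))  # connections of varying strength
    return np.maximum(0.0, x @ w)             # weighted sum + ReLU nonlinearity

x = rng.normal(size=5)   # input activity
h = layer(x, 8)          # hidden layer of 8 nodes
y = layer(h, 2)          # output layer of 2 nodes
print(y)
```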

Over the past decade, scientists have used deep neural networks to create models of vision that can recognize objects as well as the primate brain does. Research at MIT has also shown that the underlying function of visual object recognition models matches the organization of the primate visual cortex, even though those computer models were not specifically designed to mimic the brain.

In the new study, the MIT team used a similar approach to compare language-processing centers in the human brain with language-processing models. The researchers analyzed 43 different language models, including several that are optimized for next-word prediction. These include a model called GPT-3 (Generative Pre-trained Transformer 3), which, given a prompt, can generate text similar to what a human would produce. Other models were designed to perform different language tasks, such as filling in a blank in a sentence.

As each model was presented with a string of words, the researchers measured the activity of the nodes that make up the network. They then compared these patterns to activity in the human brain, measured in subjects performing three language tasks: listening to stories, reading sentences one at a time, and reading sentences in which one word is revealed at a time. These human datasets included functional magnetic resonance imaging (fMRI) data and intracranial electrocorticographic measurements taken in people undergoing brain surgery for epilepsy.
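
One common way to quantify such a comparison, sketched here with synthetic data (the study’s actual pipeline is more involved), is to fit a linear mapping from model activations to brain responses and score how well it predicts held-out data:

```python
# Hedged sketch of a model-to-brain comparison: regress synthetic "brain"
# responses on model activations, then correlate predictions with held-out
# data. Names and data are invented for illustration.

import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
model_acts = rng.normal(size=(200, 50))   # 200 sentences x 50 model units
brain_resp = model_acts @ rng.normal(size=50) + 0.1 * rng.normal(size=200)

X_tr, X_te, y_tr, y_te = train_test_split(model_acts, brain_resp,
                                          random_state=0)
fit = RidgeCV().fit(X_tr, y_tr)
score = np.corrcoef(fit.predict(X_te), y_te)[0, 1]
print(f"held-out correlation: {score:.2f}")  # closer to 1 = more "brain-like"
```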

They found that the best-performing next-word prediction models had activity patterns that very closely resembled those seen in the human brain. Activity in those same models was also highly correlated with measures of human behavior, such as how fast people were able to read the text.

“We found that the models that predict the neural responses well also tend to best predict human behavior responses, in the form of reading times. And then both of these are explained by the model performance on next-word prediction. This triangle really connects everything together,” Schrimpf says.

“A key takeaway from this work is that language processing is a highly constrained problem: The best solutions to it that AI engineers have created end up being similar, as this paper shows, to the solutions found by the evolutionary process that created the human brain. Since the AI network didn’t seek to mimic the brain directly — but does end up looking brain-like — this suggests that, in a sense, a kind of convergent evolution has occurred between AI and nature,” says Daniel Yamins, an assistant professor of psychology and computer science at Stanford University, who was not involved in the study.

Game changer

One of the key computational features of predictive models such as GPT-3 is an element known as a forward one-way predictive transformer. This kind of transformer is able to make predictions of what is going to come next, based on previous sequences. A significant feature of this transformer is that it can make predictions based on a very long prior context (hundreds of words), not just the last few words.
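
The “one-way” constraint is typically enforced with a causal mask: each position may attend to itself and to earlier positions, never to later ones. A minimal sketch (illustrative; GPT-3’s actual implementation is far larger):

```python
# Causal masking demo: future positions get -inf before the softmax,
# so each token's prediction can use only prior context. Illustrative only.

import numpy as np

def causal_attention(seq_len: int) -> np.ndarray:
    scores = np.random.rand(seq_len, seq_len)           # raw attention logits
    future = np.triu(np.ones((seq_len, seq_len)), k=1)  # strictly upper triangle
    scores[future == 1] = -np.inf                       # forbid looking ahead
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)             # row-wise softmax

print(np.round(causal_attention(4), 2))  # upper triangle is all zeros
```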

Scientists have not found any brain circuits or learning mechanisms that correspond to this type of processing, Tenenbaum says. However, the new findings are consistent with hypotheses that have been previously proposed that prediction is one of the key functions in language processing, he says.

“One of the challenges of language processing is the real-time aspect of it,” he says. “Language comes in, and you have to keep up with it and be able to make sense of it in real time.”

The researchers now plan to build variants of these language processing models to see how small changes in their architecture affect their performance and their ability to fit human neural data.

“For me, this result has been a game changer,” Fedorenko says. “It’s totally transforming my research program, because I would not have predicted that in my lifetime we would get to these computationally explicit models that capture enough about the brain so that we can actually leverage them in understanding how the brain works.”

The researchers also plan to try to combine these high-performing language models with some computer models Tenenbaum’s lab has previously developed that can perform other kinds of tasks such as constructing perceptual representations of the physical world.

“If we’re able to understand what these language models do and how they can connect to models which do things that are more like perceiving and thinking, then that can give us more integrative models of how things work in the brain,” Tenenbaum says. “This could take us toward better artificial intelligence models, as well as giving us better models of how more of the brain works and how general intelligence emerges, than we’ve had in the past.”

The research was funded by a Takeda Fellowship; the MIT Shoemaker Fellowship; the Semiconductor Research Corporation; the MIT Media Lab Consortia; the MIT Singleton Fellowship; the MIT Presidential Graduate Fellowship; the Friends of the McGovern Institute Fellowship; the MIT Center for Brains, Minds, and Machines, through the National Science Foundation; the National Institutes of Health; MIT’s Department of Brain and Cognitive Sciences; and the McGovern Institute.

Other authors of the paper are Idan Blank PhD ’16 and graduate students Greta Tuckute, Carina Kauf, and Eghbal Hosseini.

Five with MIT ties elected to the National Academy of Medicine for 2021

The National Academy of Medicine (NAM) has announced the election of 100 new members for 2021, including two MIT faculty members and three additional Institute affiliates.

Faculty honorees include Linda G. Griffith, a professor in the MIT departments of Biological Engineering and Mechanical Engineering; and Feng Zhang, a professor in the MIT departments of Brain and Cognitive Sciences and Biological Engineering. Guillermo Antonio Ameer SCD ’99, a professor of biomedical engineering and surgery at Northwestern University; Darrell Gaskin SM ’87, a professor of health policy and management at Johns Hopkins University; and Vamsi Mootha, an institute member of the Broad Institute of MIT and Harvard and former student in the Harvard-MIT Program in Health Sciences and Technology, were also honored.

The new inductees were elected through a process that recognizes individuals who have made major contributions to the advancement of the medical sciences, health care, and public health. Election to the academy is considered one of the highest honors in the fields of health and medicine and recognizes individuals who have demonstrated outstanding professional achievement and commitment to service.

Griffith, the School of Engineering Professor of Teaching Innovation and director of the Center for Gynepathology Research at MIT, is credited for her longstanding leadership in research, education, and medical translation. Specifically, the NAM recognizes her pioneering work in tissue engineering, biomaterials, and systems biology, including the development of the first “liver chip” technology. Griffith is also recognized for inventing 3D biomaterials printing and organotypic models for systems gynepathology, and for the establishment of the biological engineering department at MIT.

The academy recognizes Zhang, the Patricia and James Poitras ’63 Professor in Neuroscience at MIT, for revolutionizing molecular biology and powering transformative leaps forward in our ability to study and treat human diseases. Zhang, who also is an investigator at the Howard Hughes Medical Institute and the McGovern Institute for Brain Research, and a core member of the Broad Institute of MIT and Harvard, is specifically credited for the discovery of novel microbial enzymes and their development as molecular technologies, including optogenetics and CRISPR-mediated genome editing. The academy also commends Zhang for his outstanding mentoring and professional services.

Ameer, the Daniel Hale Williams Professor of Biomedical Engineering and Surgery at the Northwestern University Feinberg School of Medicine, earned his Doctor of Science degree from the MIT Department of Chemical Engineering in 1999. A professor of biomedical engineering and of surgery who is also the director of the Center for Advanced Regenerative Engineering, he is cited by the NAM “For pioneering contributions to regenerative engineering and medicine through the development, dissemination, and translation of citrate-based biomaterials, a new class of biodegradable polymers that enabled the commercialization of innovative medical devices approved by the U.S. Food and Drug Administration for use in a variety of surgical procedures.”

Gaskin, the William C. and Nancy F. Richardson Professor in Health Policy and Management, Bloomberg School of Public Health at Johns Hopkins University, earned his Master of Science degree from the MIT Department of Economics in 1987. A health economist who advances community, neighborhood, and market-level policies and programs that reduce health disparities, he is cited by the NAM “For his work as a leading health economist and health services researcher who has advanced fundamental understanding of the role of place as a driver in racial and ethnic health disparities.”

Mootha, the founding co-director of the Broad Institute’s Metabolism Program, is a professor of systems biology and medicine at Harvard Medical School and a professor in the Department of Molecular Biology at Massachusetts General Hospital. An alumnus of the Harvard-MIT Program in Health Sciences and Technology and former postdoc with the Whitehead Institute for Biomedical Research, Mootha is an expert in the mitochondrion, the “powerhouse of the cell,” and its role in human disease. The NAM cites Mootha “For transforming the field of mitochondrial biology by creatively combining modern genomics with classical bioenergetics.”

Established in 1970 by the National Academy of Sciences, the NAM addresses critical issues in health, science, medicine, and related policy and inspires positive actions across sectors. NAM works alongside the National Academy of Sciences and National Academy of Engineering to provide independent, objective analysis and advice to the nation and conduct other activities to solve complex problems and inform public policy decisions. The National Academies of Sciences, Engineering, and Medicine also encourage education and research, recognize outstanding contributions to knowledge, and increase public understanding of STEMM. With their election, NAM members make a commitment to volunteer their service in National Academies activities.

Seven from MIT receive National Institutes of Health awards

On Oct. 5, the National Institutes of Health announced the names of 106 scientists who have been awarded grants through the High-Risk, High-Reward Research program to advance highly innovative biomedical and behavioral research. Seven of the recipients are MIT faculty members.

The High-Risk, High-Reward Research program catalyzes scientific discovery by supporting research proposals that, due to their inherent risk, may struggle in the traditional peer-review process despite their transformative potential. Program applicants are encouraged to pursue trailblazing ideas in any area of research relevant to the NIH’s mission to advance knowledge and enhance health.

“The science put forward by this cohort is exceptionally novel and creative and is sure to push at the boundaries of what is known,” says NIH Director Francis S. Collins. “These visionary investigators come from a wide breadth of career stages and show that groundbreaking science can happen at any career level given the right opportunity.”

New innovators

Four MIT researchers received New Innovator Awards, which recognize “unusually innovative research from early career investigators.” They are:

  • Pulin Li is a member of the Whitehead Institute for Biomedical Research and an assistant professor in the Department of Biology. Li combines approaches from synthetic biology, developmental biology, biophysics, and systems biology to quantitatively understand the genetic circuits underlying cell-cell communication that creates multicellular behaviors.
  • Seychelle Vos, the Robert A. Swanson (1969) Career Development Professor of Life Sciences in the Department of Biology, studies the interplay of gene expression and genome organization. Her work focuses on understanding how large molecular machineries involved in genome organization and gene transcription regulate each other’s function to ultimately determine cell fate and identity.
  • Xiao Wang, the Thomas D. and Virginia Cabot Assistant Professor of Chemistry and a member of the Broad Institute of MIT and Harvard, aims to develop high-resolution and highly multiplexed molecular imaging methods across multiple scales toward understanding the physical and chemical basis of brain wiring and function.
  • Alison Wendlandt is a Cecil and Ida Green Career Development Assistant Professor of Chemistry. Wendlandt focuses on the development of selective, catalytic reactions using the tools of organic and organometallic synthesis and physical organic chemistry. Mechanistic study plays a central role in the development of these new transformations.

Transformative researchers

Two MIT researchers have received Transformative Research Awards, which “promote cross-cutting, interdisciplinary approaches that could potentially create or challenge existing paradigms.” The recipients are:

  • Manolis Kellis is a professor of computer science at MIT in the area of computational biology, an associate member of the Broad Institute, and a principal investigator with MIT’s Computer Science and Artificial Intelligence Laboratory. He aims to further our understanding of the human genome by computational integration of large-scale functional and comparative genomics datasets.
  • Myriam Heiman is the Latham Family Career Development Associate Professor of Neuroscience in the Department of Brain and Cognitive Sciences and an investigator in the Picower Institute for Learning and Memory. Heiman studies the selective vulnerability and pathophysiology seen in two neurodegenerative diseases of the basal ganglia, Huntington’s disease and Parkinson’s disease.

Together, Heiman, Kellis, and colleagues will launch a five-year investigation to pinpoint what may be going wrong in specific brain cells and to help identify new treatment approaches for amyotrophic lateral sclerosis (ALS) and frontotemporal lobar degeneration with motor neuron disease (FTLD/MND). The project will bring together four labs, including Heiman and Kellis’ labs at MIT, to apply innovative techniques ranging from computational, genomic, and epigenomic analyses of cells from a rich sample of central nervous system tissue, to precision genetic engineering of stem cells and animal models.

Pioneering researchers

  • Polina Anikeeva received a Pioneer Award, which “challenges investigators at all career levels to pursue new research directions and develop groundbreaking, high-impact approaches to a broad area of biomedical, behavioral, or social science.” Anikeeva is an MIT professor of materials science and engineering, a professor of brain and cognitive sciences, and a McGovern Institute for Brain Research associate investigator. She has established a research program that uniquely combines materials synthesis, device fabrication, neurophysiology, and animal models of behavior. Her group carries out projects to understand, invent, and design materials, from the level of atoms to functional devices, with applications in fundamental neuroscience.

The program is supported by the NIH Common Fund, which oversees programs that pursue major opportunities and gaps throughout the research enterprise that are of great importance to NIH and require collaboration across the agency to succeed. It issues four awards each year: the Pioneer Award, the New Innovator Award, the Transformative Research Award, and the Early Independence Award.

This year, NIH issued 10 Pioneer awards, 64 New Innovator awards, 19 Transformative Research awards (10 general, four ALS-related, and five Covid-19-related), and 13 Early Independence awards for 2021. Funding for the awards comes from the NIH Common Fund, the National Institute of General Medical Sciences, the National Institute of Mental Health, and the National Institute of Neurological Disorders and Stroke.

School of Science welcomes new faculty

This fall, MIT welcomes new faculty members — six assistant professors and two tenured professors — to the departments of Biology; Brain and Cognitive Sciences; Chemistry; Earth, Atmospheric and Planetary Sciences; and Physics.

A physicist, Soonwon Choi is interested in dynamical phenomena that occur in strongly interacting quantum many-body systems far from equilibrium and designing their applications for quantum information science. He takes a variety of interdisciplinary approaches from analytic theory and numerical computations to collaborations on experiments with controlled quantum degrees of freedom. Recently, Choi’s research has encompassed studying the phenomenon of a phase transition in the dynamics of quantum entanglement and information, drawing on machine learning to introduce a quantum convolutional neural network that can recognize quantum states associated with a one-dimensional symmetry-protected topological phase, and exploring a range of quantum applications of the nitrogen-vacancy color center of diamond.

After completing his undergraduate study in physics at Caltech in 2012, Choi received his PhD degree in physics from Harvard University in 2018. He then worked as a Miller Postdoctoral Fellow at the University of California at Berkeley before joining the Department of Physics and the Center for Theoretical Physics as an assistant professor in July 2021.

Olivia Corradin investigates how genetic variants contribute to disease. She focuses on non-coding DNA variants — changes in DNA sequence that can alter the regulation of gene expression — to gain insight into pathogenesis. With her novel outside-variant approach, Corradin’s lab singled out a type of brain cell involved in multiple sclerosis, increasing the total heritability identified three- to fivefold. A recipient of the Avenir Award through the NIH Director’s Pioneer Award Program, Corradin also scrutinizes how genetic and epigenetic variation influence susceptibility to substance abuse disorders. These critical insights into multiple sclerosis, opioid use disorder, and other diseases have the potential to improve risk assessment, diagnosis, treatment, and preventative care for patients.

Corradin completed a bachelor’s degree in biochemistry from Marquette University in 2010 and a PhD in genetics from Case Western Reserve University in 2016. A Whitehead Institute Fellow since 2016, she also became an institute member in July 2021. The Department of Biology welcomes Corradin as an assistant professor.

Arlene Fiore seeks to understand processes that control two-way interactions between air pollutants and the climate system, as well as the sensitivity of atmospheric chemistry to different chemical, physical, and biological sources and sinks at scales ranging from urban to global and daily to decadal. Combining chemistry-climate models and observations from ground, airborne, and satellite platforms, Fiore has identified global dimensions to ground-level ozone smog and particulate haze that arise from linkages with the climate system, global atmospheric composition, and the terrestrial biosphere. She also investigates regional meteorology and climate feedbacks due to aerosols versus greenhouse gases, future air pollution responses to climate change, and drivers of atmospheric oxidizing capacity. A new research direction involves using chemistry-climate model ensemble simulations to identify imprints of climate variability on observational records of trace gases in the troposphere.

After earning a bachelor’s degree and PhD from Harvard University, Fiore held a research scientist position at the Geophysical Fluid Dynamics Laboratory and was appointed as an associate professor with tenure at Columbia University in 2011. Over the last decade, she has worked with air and health management partners to develop applications of satellite and other Earth science datasets to address their emerging needs. Fiore’s honors include the American Geophysical Union (AGU) James R. Holton Junior Scientist Award, Presidential Early Career Award for Scientists and Engineers (the highest honor bestowed by the United States government on outstanding scientists and engineers in the early stages of their independent research careers), and AGU’s James B. Macelwane Medal. The Department of Earth, Atmospheric and Planetary Sciences welcomes Fiore as the first Peter H. Stone and Paola Malanotte Stone Professor.

With a background in magnetism, Danna Freedman leverages inorganic chemistry to solve problems in physics. Within this paradigm, she is creating the next generation of materials for quantum information by designing spin-based quantum bits, or qubits, based in molecules. These molecular qubits can be precisely controlled, opening the door for advances in quantum computation, sensing, and more. She also harnesses high pressure to synthesize new emergent materials, exploring the possibilities of intermetallic compounds and solid-state bonding. Among other innovations, Freedman has realized millisecond coherence times in molecular qubits, created a molecular analogue of an NV center featuring optical read-out of spin, and discovered the first iron-bismuth binary compound.

Freedman received her bachelor’s degree from Harvard University and her PhD from the University of California at Berkeley, then conducted postdoctoral research at MIT before joining the faculty at Northwestern University as an assistant professor in 2012, earning an NSF CAREER Award, the Presidential Early Career Award for Scientists and Engineers, the ACS Award in Pure Chemistry, and more. She was promoted to associate professor in 2018 and full professor with tenure in 2020. Freedman returns to MIT as the Frederick George Keyes Professor of Chemistry.

Kristin Knouse PhD ’17 aims to understand how tissues sense and respond to damage, with the goal of developing new approaches for regenerative medicine. She focuses on the mammalian liver — which has the unique ability to completely regenerate itself — to ask how organisms react to organ injury, how certain cells retain the ability to grow and divide while others do not, and what genes regulate this process. Knouse creates innovative tools, such as genome-wide CRISPR screening within a living mouse, to examine liver regeneration from the level of a single cell to the whole organism.

Knouse received a bachelor’s degree in biology from Duke University in 2010 and then enrolled in the Harvard and MIT MD-PhD Program, where she earned a PhD through the MIT Department of Biology in 2016 and an MD through the Harvard-MIT Program in Health Sciences and Technology in 2018. In 2018, she established her independent laboratory at the Whitehead Institute for Biomedical Research and was honored with the NIH Director’s Early Independence Award. Knouse joins the Department of Biology and the Koch Institute for Integrative Cancer Research as an assistant professor.

Lina Necib PhD ’17 is an astroparticle physicist exploring the origin of dark matter through a combination of simulations and observational data that correlate the dynamics of dark matter with that of the stars in the Milky Way. She has investigated the local dynamic structures in the solar neighborhood using the Gaia satellite, contributed to building a catalog of local accreted stars using machine learning techniques, and discovered a new stream called Nyx, after the Greek goddess of the night. Necib is interested in employing Gaia in conjunction with other spectroscopic surveys to understand the dark matter profile in the local solar neighborhood, the center of the galaxy, and in dwarf galaxies.

After obtaining a bachelor’s degree in mathematics and physics from Boston University in 2012 and a PhD in theoretical physics from MIT in 2017, Necib was a Sherman Fairchild Fellow at Caltech, a Presidential Fellow at the University of California at Irvine, and a fellow in theoretical astrophysics at Carnegie Observatories. She returns to MIT as an assistant professor in the Department of Physics and a member of the MIT Kavli Institute for Astrophysics and Space Research.

Andrew Vanderburg studies exoplanets, or planets that orbit stars other than the sun. Conducting astronomical observations from Earth as well as space, he develops cutting-edge methods to learn about planets outside of our solar system. Recently, he has leveraged machine learning to optimize searches and identify planets that were missed by previous techniques. With collaborators, he discovered the eighth planet in the Kepler-90 system, a Jupiter-like planet with unexpectedly close orbiting planets, and rocky bodies disintegrating near a white dwarf, providing confirmation of a theory that such stars may accumulate debris from their planetary systems.

Vanderburg received a bachelor’s degree in physics and astrophysics from the University of California at Berkeley in 2013 and a PhD in astronomy from Harvard University in 2017. Afterward, Vanderburg moved to the University of Texas at Austin as a NASA Sagan Postdoctoral Fellow, then to the University of Wisconsin at Madison as a faculty member. He joins MIT as an assistant professor in the Department of Physics and a member of the Kavli Institute for Astrophysics and Space Research.

A computational neuroscientist, Guangyu Robert Yang is interested in connecting artificial neural networks to the actual functions of cognition. His research incorporates computational and biological systems and uses computational modeling to understand how neural systems are optimized to accomplish multiple tasks. As a postdoc, Yang applied principles of machine learning to study the evolution and organization of the olfactory system. The neural networks his models generated show important similarities to the biological circuitry, suggesting that the structure of the olfactory system evolved in order to optimally enable the specific tasks needed for odor recognition.

Yang received a bachelor’s degree in physics from Peking University before obtaining a PhD in computational neuroscience at New York University, followed by an internship in software engineering at Google Brain. Before coming to MIT, he conducted postdoctoral research at the Center for Theoretical Neuroscience of Columbia University, where he was a junior fellow at the Simons Society of Fellows. Yang is an assistant professor in the Department of Brain and Cognitive Sciences with a shared appointment in the Department of Electrical Engineering and Computer Science in the School of Engineering and the MIT Schwarzman College of Computing as well as an associate investigator with the McGovern Institute.

Jacqueline Lees and Rebecca Saxe named associate deans of science

Jacqueline Lees and Rebecca Saxe have been named associate deans serving in the MIT School of Science. Lees is the Virginia and D.K. Ludwig Professor for Cancer Research and is currently the associate director of the Koch Institute for Integrative Cancer Research, as well as an associate department head and professor in the Department of Biology at MIT. Saxe is the John W. Jarve (1978) Professor in Brain and Cognitive Sciences and the associate head of the Department of Brain and Cognitive Sciences (BCS); she is also an associate investigator in the McGovern Institute for Brain Research.

Lees and Saxe will both contribute to the school’s diversity, equity, inclusion, and justice (DEIJ) activities, as well as develop and implement mentoring and other career-development programs to support the community. From their home departments, Saxe and Lees bring years of DEIJ and mentorship experience to bear on the expansion of school-level initiatives.

Lees currently serves on the dean’s science council in her capacity as associate director of the Koch Institute. In this new role as associate dean for the School of Science, she will bring her broad administrative and programmatic experiences to bear on the next phase for DEIJ and mentoring activities.

Lees joined MIT in 1994 as a faculty member in MIT’s Koch Institute (then the Center for Cancer Research) and Department of Biology. Her research focuses on regulators that control cellular proliferation, terminal differentiation, and stemness — functions that are frequently deregulated in tumor cells. She dissects the role of these proteins in normal cell biology and development, and establishes how their deregulation contributes to tumor development and metastasis.

Since 2000, she has served on the Department of Biology’s graduate program committee, and played a major role in expanding the diversity of the graduate student population. Lees also serves on DEIJ committees in her home department, as well as at the Koch Institute.

As co-chair, with Boleslaw Wyslouch, director of the Laboratory for Nuclear Science, Lees led the ReseArch Scientist CAreer LadderS (RASCALS) committee, tasked with evaluating career trajectories for research staff in the School of Science and making recommendations to recruit and retain talented staff, rewarding them for their contributions to the school’s research enterprise.

“Jackie is a powerhouse in translational research, demonstrating how fundamental work at the lab bench is critical for making progress at the patient bedside,” says Nergis Mavalvala, dean of the School of Science. “With Jackie’s dedicated and thoughtful partnership, we can continue to lead in basic research and develop the recruitment, retention, and mentoring necessary to support our community.”

Saxe will join Lees in supporting and developing programming across the school that could also provide direction more broadly at the Institute.

“Rebecca is an outstanding researcher in social cognition and a dedicated educator — someone who wants our students not only to learn, but to thrive,” says Mavalvala. “I am grateful that Rebecca will join the dean’s leadership team and bring her mentorship and leadership skills to enhance the school.”

For example, in collaboration with former department head James DiCarlo, the BCS department has focused on faculty mentorship of graduate students; and, in collaboration with Professor Mark Bear, the department developed postdoc salary and benefit standards. Both initiatives have become models at MIT.

With colleague Laura Schulz, Saxe also served as co-chair of the Committee on Medical Leave and Hospitalizations (CMLH), which outlined ways to enhance MIT’s current leave and hospitalization procedures and policies for undergraduate and graduate students. Saxe was also awarded MIT’s Committed to Caring award for excellence in graduate student mentorship, as well as the School of Science’s award for excellence in undergraduate teaching.

In her research, Saxe studies human social cognition, using a combination of behavioral testing and brain imaging technologies. She is best known for her work on brain regions specialized for abstract concepts, such as “theory of mind” tasks that involve understanding the mental states of other people. Her TED Talk, “How we read each other’s minds,” has been viewed more than 3 million times. She also studies the development of the human brain during early infancy.

She obtained her PhD from MIT and was a Harvard University junior fellow before joining the MIT faculty in 2006. In 2014, the National Academy of Sciences named her one of two recipients of the Troland Award for investigators age 40 or younger “to recognize unusual achievement and further empirical research in psychology regarding the relationships of consciousness and the physical world.” In 2020, Saxe was named a John Simon Guggenheim Foundation Fellow.

Saxe and Lees will also work closely with Kuheli Dutt, newly hired assistant dean for diversity, equity, and inclusion, and other members of the dean’s science council on school-level initiatives and strategy.

“I’m so grateful that Rebecca and Jackie have agreed to take on these new roles,” Mavalvala says. “And I’m super excited to work with these outstanding thought partners as we tackle the many puzzles that I come across as dean.”

Mehrdad Jazayeri wants to know how our brains model the external world

Much of our daily life requires us to make inferences about the world around us. As you think about which direction your tennis opponent will hit the ball, or try to figure out why your child is crying, your brain is searching for answers about possibilities that are not directly accessible through sensory experiences.

MIT Associate Professor Mehrdad Jazayeri has devoted most of his career to exploring how the brain creates internal representations, or models, of the external world to make intelligent inferences about hidden states of the world.

“The one question I am most interested in is how does the brain form internal models of the external world? Studying inference is really a powerful way of gaining insight into these internal models,” says Jazayeri, who recently earned tenure in the Department of Brain and Cognitive Sciences and is also a member of MIT’s McGovern Institute for Brain Research.

Using a variety of approaches, including detailed analysis of behavior, direct recording of activity of neurons in the brain, and mathematical modeling, he has discovered how the brain builds models of statistical regularities in the environment. He has also found circuits and mechanisms that enable the brain to capture the causal relationships between observations and outcomes.

An unusual path

Jazayeri, who has been on the faculty at MIT since 2013, took an unusual path to a career in neuroscience. Growing up in Tehran, Iran, he was an indifferent student until his second year of high school, when he got interested in solving challenging geometry puzzles. He also started programming with the ZX Spectrum, an early 8-bit personal computer that his father had given him.

During high school, he was chosen to train for Iran’s first-ever National Physics Olympiad team, but when he failed to make the international team, he became discouraged and temporarily gave up on the idea of going to college. Eventually, he took the University National Entrance Exam and was admitted to the electrical engineering department at Sharif University of Technology.

Jazayeri didn’t enjoy his four years of college education. The experience mostly helped him realize that he was not meant to become an engineer. “I realized that I’m not an inventor. What inspires me is the process of discovery,” he says. “I really like to figure things out, not build things, so those four years were not very inspiring.”

After graduating from college, Jazayeri spent a few years working on a banana farm near the Caspian Sea, along with two friends. He describes those years as among the best and most formative of his life. He would wake by 4 a.m., work on the farm until late afternoon, and spend the rest of the day thinking and reading. One topic he read about with great interest was neuroscience, which led him a few years later to apply to graduate school.

He immigrated to Canada and was admitted to the University of Toronto, where he earned a master’s degree in physiology and neuroscience. While there, he worked on building small circuit models that would mimic the activity of neurons in the hippocampus.

From there, Jazayeri went on to New York University to earn a PhD in neuroscience, where he studied how signals in the visual cortex support perception and decision-making. “I was less interested in how the visual cortex encodes the external world,” he says. “I wanted to understand how the rest of the brain decodes the signals in visual cortex, which is, in effect, an inference problem.”

He continued pursuing his interest in the neurobiology of inference as a postdoc at the University of Washington, where he investigated how the brain uses temporal regularities in the environment to estimate time intervals, and uses knowledge about those intervals to plan for future actions.

Building internal models to make inferences

Inference is the process of drawing conclusions based on information that is not readily available. Making rich inferences from scarce data is one of humans’ core mental capacities, one that is central to what makes us the most intelligent species on Earth. To do so, our nervous system builds internal models of the external world, and those models help us think through possibilities without directly experiencing them.

The problem of inference presents itself in many behavioral settings.

“Our nervous system makes all sorts of internal models for different behavioral goals, some that capture the statistical regularities in the environment, some that link potential causes to effects, some that reflect relationships between entities, and some that enable us to think about others,” Jazayeri says.

Jazayeri’s lab at MIT is made up of a group of cognitive scientists, electrophysiologists, engineers, and physicists with a shared interest in understanding the nature of internal models in the brain and how those models enable us to make inferences in different behavioral tasks.

Early work in the lab focused on a simple timing task to examine the problem of statistical inference, that is, how we use statistical regularities in the environment to make accurate inferences. First, they found that the brain coordinates movements in time using a dynamic process, akin to an analog timer. They also found that the neural representation of time in the frontal cortex is continuously calibrated based on prior experience, so that we can make more accurate time estimates in the presence of uncertainty.
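
A standard way to formalize that kind of calibration, hedged here as a generic Bayesian observer rather than the lab’s exact model, is to combine a noisy measurement with a prior over the range of intervals experienced so far:

```python
# Generic Bayesian observer for interval timing (parameters invented):
# estimates get pulled toward the middle of the experienced range, which
# improves accuracy when individual measurements are noisy.

import numpy as np

intervals = np.linspace(0.6, 1.0, 400)            # experienced range (seconds)
prior = np.ones_like(intervals) / intervals.size  # uniform prior over range

def estimate(measured: float, noise_sd: float = 0.08) -> float:
    likelihood = np.exp(-0.5 * ((measured - intervals) / noise_sd) ** 2)
    posterior = likelihood * prior
    posterior /= posterior.sum()
    return float(posterior @ intervals)           # posterior-mean estimate

print(f"{estimate(0.65):.3f}")  # above 0.65: pulled up toward the middle
print(f"{estimate(0.95):.3f}")  # below 0.95: pulled down toward the middle
```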

Later, the lab developed a complex decision-making task to examine the neural basis of causal inference, or the process of deducing a hidden cause based on its effects. In a paper that appeared in 2019, Jazayeri and his colleagues identified a hierarchical and distributed brain circuit in the frontal cortex that helps the brain to determine the most probable cause of failure within a hierarchy of decisions.

More recently, the lab has extended its investigation to other behavioral domains, including relational inference and social inference. Relational inference is about situating an ambiguous observation using relational memory. For example, coming out of a subway in a new neighborhood, we may use our knowledge of the relationship between visible landmarks to infer which way is north. Social inference, which is extremely difficult to study, involves deducing other people’s beliefs and goals based on their actions.

Along with studies in human volunteers and animal models, Jazayeri’s lab develops computational models based on neural networks, which help them test different possible hypotheses of how the brain performs specific tasks. By comparing the activity of those models with neural activity data from animals, the researchers can gain insight into how the brain actually performs a particular type of inference task.

“My main interest is in how the brain makes inferences about the world based on the neural signals,” Jazayeri says. “All of my work is about looking inside the brain, measuring signals, and using mathematical tools to try to understand how those signals are manifestations of an internal model within the brain.”

Some brain disorders exhibit similar circuit malfunctions

Many neurodevelopmental disorders share similar symptoms, such as learning disabilities or attention deficits. A new study from MIT has uncovered a common neural mechanism for a type of cognitive impairment seen in some people with autism and schizophrenia, even though the genetic variations that produce the impairments are different for each condition.

In a study of mice, the researchers found that certain genes that are mutated or missing in some people with those disorders cause similar dysfunctions in a neural circuit in the thalamus. If scientists could develop drugs that target this circuit, they could be used to treat people who have different disorders with common behavioral symptoms, the researchers say.

“This study reveals a new circuit mechanism for cognitive impairment and points to a future direction for developing new therapeutics, by dividing patients into specific groups not by their behavioral profile, but by the underlying neurobiological mechanisms,” says Guoping Feng, the James W. and Patricia T. Poitras Professor in Brain and Cognitive Sciences at MIT, a member of the Broad Institute of MIT and Harvard, the associate director of the McGovern Institute for Brain Research at MIT, and the senior author of the new study.

Dheeraj Roy, a Warren Alpert Distinguished Scholar and a McGovern Fellow at the Broad Institute, and Ying Zhang, a postdoc at the McGovern Institute, are the lead authors of the paper, which appears today in Neuron.

Thalamic connections

The thalamus plays a key role in cognitive tasks such as memory formation and learning. Previous studies have shown that many of the gene variants linked to brain disorders such as autism and schizophrenia are highly expressed in the thalamus, suggesting that it may play a role in those disorders.

One such gene is called Ptchd1, which Feng has studied extensively. In boys, loss of this gene, which is carried on the X chromosome, can lead to attention deficits, hyperactivity, aggression, intellectual disability, and autism spectrum disorders.

In a study published in 2016, Feng and his colleagues showed that Ptchd1 exerts many of its effects in a part of the thalamus called the thalamic reticular nucleus (TRN). When the gene is knocked out in the TRN of mice, the mice show attention deficits and hyperactivity. However, that study did not find any role for the TRN in the learning disabilities also seen in people with mutations in Ptchd1.

In the new study, the researchers decided to look elsewhere in the thalamus to try to figure out how Ptchd1 loss might affect learning and memory. Another area they identified that highly expresses Ptchd1 is called the anterodorsal (AD) thalamus, a tiny region that is involved in spatial learning and communicates closely with the hippocampus.

Using novel techniques that allowed them to trace the connections between the AD thalamus and another brain region called the retrosplenial cortex (RSC), the researchers determined a key function of this circuit. They found that in mice, the AD-to-RSC circuit is essential for encoding fearful memories of a chamber in which they received a mild foot shock. It is also necessary for working-memory tasks, such as creating mental maps of physical spaces to aid decision-making.

The researchers found that a nearby part of the thalamus called the anteroventral (AV) thalamus also plays a role in this memory formation process: AV-to-RSC communication regulates the specificity of the encoded memory, helping to distinguish it from other, similar memories.

“These experiments showed that two neighboring subdivisions in the thalamus contribute differentially to memory formation, which is not what we expected,” Roy says.

Circuit malfunction

Once the researchers discovered the roles of the AV and AD thalamic regions in memory formation, they began to investigate how this circuit is affected by loss of Ptchd1. When they knocked down expression of Ptchd1 in neurons of the AD thalamus, they found a striking deficit in memory encoding, for both fearful memories and working memory.

The researchers then did the same experiments with a series of four other genes — one that is linked with autism and three linked with schizophrenia. In all of these mice, they found that knocking down gene expression produced the same memory impairments. They also found that each of these knockdowns produced hyperexcitability in neurons of the AD thalamus.

These results are consistent with existing theories that learning depends on the strengthening of synapses as a memory is formed, the researchers say.

“The dominant theory in the field is that when an animal is learning, these neurons have to fire more, and that increase correlates with how well you learn,” Zhang says. “Our simple idea was if a neuron fires too high at baseline, you may lack a learning-induced increase.”
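A toy calculation makes this ceiling effect concrete. In the sketch below, which is illustrative and not drawn from the study’s data, firing rates saturate, so a neuron that is hyperexcitable at baseline has little headroom left for a learning-induced increase.

    R_MAX = 100.0   # saturation firing rate in Hz (illustrative)

    def firing_rate(drive):
        # Simple saturating input-output curve
        return R_MAX * drive / (drive + 1.0)

    LEARNING_DRIVE = 0.5   # extra drive arriving during learning (illustrative)

    for label, baseline in [("normal", 0.5), ("hyperexcitable", 4.0)]:
        before = firing_rate(baseline)
        after = firing_rate(baseline + LEARNING_DRIVE)
        print(f"{label}: {before:.0f} Hz -> {after:.0f} Hz "
              f"(learning-related gain {after - before:.0f} Hz)")

Under these made-up numbers, the normal neuron gains about 17 Hz of firing with learning, while the hyperexcitable neuron gains only about 2 Hz.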

The researchers demonstrated that each of the genes they studied affects different ion channels that influence neurons’ firing rates. The overall effect of each mutation is an increase in neuron excitability, which leads to the same circuit-level dysfunction and behavioral symptoms.

The researchers also showed that they could restore normal cognitive function in mice with these genetic mutations by artificially turning down hyperactivity in neurons of the AD thalamus. The approach they used, chemogenetics, is not yet approved for use in humans. However, it may be possible to target this circuit in other ways, the researchers say.

The findings lend support to the idea that grouping diseases by the circuit malfunctions that underlie them may help to identify potential drug targets that could help many patients, Feng says.

“There are so many genetic factors and environmental factors that can contribute to a particular disease, but in the end, it has to cause some type of neuronal change that affects a circuit or a few circuits involved in this behavior,” he says. “From a therapeutic point of view, in such cases you may not want to go after individual molecules because they may be unique to a very small percentage of patients, but at a higher level, at the cellular or circuit level, patients may have more commonalities.”

The research was funded by the Stanley Center at the Broad Institute, the Hock E. Tan and K. Lisa Yang Center for Autism Research at MIT, the James and Patricia Poitras Center for Psychiatric Disorders Research at MIT, and the National Institutes of Health BRAIN Initiative.

Queen of hearts

Amphibians and humans differ in many ways, but Laurie Boyer, a professor of biology and biological engineering at MIT, is particularly interested in one of those differences. Certain types of amphibians and fish can regenerate and heal their hearts after an injury. In contrast, human adults who have experienced trauma to the heart, such as a heart attack or exposure to certain medications, are unable to repair the damage. Often, the injured heart ends up with scar tissue that can lead to heart failure.

Recent research indicates that mice, and even humans, have some capacity for cardiac repair for a short period after birth. But within just a few days of birth, that ability starts to shut off. “The heart has very limited ability to repair itself in response to injury, disease, or aging,” Boyer says.

Alexander Auld, a postdoc in the Boyer Lab, studies the key cellular mechanisms that lead heart cells to mature and lose regenerative potential. Specifically, he’s interested in understanding how cardiomyocytes, the heart cells responsible for pumping blood, develop an ability to contract and relax repeatedly. Auld tests the function of proteins that serve as signals to assemble the cardiac muscle structure after birth. The assembly of these structures coincides with the loss of regenerative ability.

“I’m trying to piece together: What are the different mechanisms that push cardiomyocytes to assemble their contractile apparatus and to stop dividing?” Auld says. “Solving this puzzle may have potential to stimulate regeneration in the adult heart muscle.”

“The holy grail of regenerative biology would be to stimulate your own heart cells to replenish themselves,” says Boyer, who joined the MIT faculty in 2007. “Before this approach is possible, we need to achieve a deep understanding of the fundamental processes that drive heart development.”

Boyer’s lab studies how many different signals and genes interact to affect heart development. The work will enable a better understanding of how faulty regulation can lead to disease, and may also point the way to new therapies for people with a variety of heart conditions.

Critical connections

Recently, Boyer’s lab has been studying heart development in people with Trisomy 21, or Down syndrome. Every year, about 6,000 babies are born with Down syndrome in the United States, and around half of them have heart defects. The most common heart defect in babies with Down syndrome is a hole in the heart’s center, called an atrioventricular septal defect. It is often repaired with surgery, but the repair can cause scar tissue and cardiovascular complications.

Somatic cells are those that compose an organism’s body; they differ from sex cells, which are used for reproduction. Most people have 46 chromosomes, arranged in 23 pairs, in their body’s somatic cells. In 95 percent of cases, Down syndrome results when a person has three copies of chromosome 21 instead of two, for a total of 47 chromosomes per cell. It’s an example of aneuploidy, when a cell has an abnormal number of chromosomes. Cellular attempts to adapt to the extra chromosome can cause stress on the body’s cells, including those of the heart.

MIT’s Alana Down Syndrome Center (ADSC) brings together biologists, neuroscientists, engineers, and other experts to increase knowledge about Down syndrome. ADSC launched in early 2019, led by Angelika Amon, professor of biology and a member of the Koch Institute for Integrative Cancer Research, along with co-director Li-Huei Tsai, Picower Professor and director of the Picower Institute for Learning and Memory. Amon died at age 53 in 2020 after a battle with ovarian cancer. At MIT, Amon had studied the effects of aneuploidy on cells.

“In my many wonderful scientific and personal discussions with Angelika, who was a beacon of inspiration to me, it became clear that studying Trisomy 21 in the context of heart development could ultimately improve the lives of these individuals,” Boyer says.

Change of heart

To conduct their research, Boyer’s group uses human induced pluripotent stem cells (hiPSCs), obtained through somatic cell reprogramming. The revolutionary technique was developed by Sir John B. Gurdon and Shinya Yamanaka, who in 2012 won the Nobel Prize in Physiology or Medicine for their work. Reprogramming converts specialized, mature somatic cells back into a pluripotent state, from which they can give rise to specialized, mature cells with a different function.

Boyer’s lab reprograms cells from human adults with Down syndrome into hiPSCs and then converts them into cardiomyocytes. The researchers then compare those cardiomyocytes with cardiomyocytes derived in the same way from individuals who do not have Down syndrome. This work helps them deduce why the extra chromosome in people with Down syndrome may cause congenital heart defects.

“We can now begin to pinpoint the faulty signals and genes in Trisomy 21 cardiac cells that affect heart development,” Boyer says. “And with that same idea, we can also discover how we might actually be able to ameliorate or fix these defects.”

With this technique, the team can track how aspects of a specific patient’s cell development correlate with their clinical presentation. The ability to analyze patient-specific cells also has implications for personalized medicine, Boyer says. For instance, a patient’s skin or blood cells, which are more easily obtained, could be converted into a highly specialized mature cell, like a cardiac muscle cell, and tested for its response to drugs that might damage the heart, before those drugs reach the clinic. The same process can also be used to screen for new therapies that could improve outcomes for heart failure patients.

Boyer presented the group’s research on Down syndrome at the New England Down Syndrome Symposium, co-organized in November 2020 by MIT, ADSC, Massachusetts Down Syndrome Congress, and LuMind IDSC Foundation.

Heart of the operation

Boyer’s lab employs researchers at the undergraduate, graduate, and postdoctoral levels, drawn from engineering, the life sciences, and computer science. Each, Boyer says, brings unique expertise and value to the team.

“It’s important for me to have a lab where everyone feels welcome, and that they feel that they can contribute to these fundamental discoveries,” Boyer says.

The Boyer Lab often works with scholars across disciplines at MIT. “It’s really great,” Auld says. “You can investigate a problem using multiple tools and perspectives.”

One project, in partnership with George Barbastathis, a professor of mechanical engineering, uses image-based machine learning to understand structural differences within cardiomyocytes when the proteins that signal cells to develop have been manipulated. Auld generates high-resolution images that the machine learning algorithms can analyze.
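As a rough sketch of what image-based classification of cell structure can look like, a small convolutional network might label image patches as having organized or disorganized contractile structure. This is a generic illustration, not the actual Barbastathis-Boyer pipeline; both the architecture and the two classes are hypothetical placeholders.

    import torch
    from torch import nn

    # A small CNN that classifies grayscale image patches of cardiomyocytes
    # into two hypothetical classes: organized vs. disorganized structure.
    model = nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.LazyLinear(2),   # two hypothetical classes
    )

    patches = torch.randn(8, 1, 64, 64)   # a batch of 64x64 grayscale patches
    logits = model(patches)
    print(logits.shape)                   # torch.Size([8, 2])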

Another project, in collaboration with Ed Boyden, a professor in the Department of Biological Engineering as well as the McGovern Institute for Brain Research, involves the development of new technologies that allow high-throughput imaging of cardiac cells. The cross-pollination across departments and areas of expertise at MIT, Boyer says, often has her feeling like “a kid in a candy shop.”

“That our work could ultimately impact human health is very fulfilling for me, and the ability to use our scientific discoveries to improve medical outcomes is an important direction of my lab,” Boyer says. “Given the enormous talent at MIT and the excitement and willingness of everyone here to work together, we have an unprecedented opportunity to solve important problems that can make a difference in people’s lives.”