Finding some stability in adaptable brains

One of the brain’s most celebrated qualities is its adaptability. Changes to neural circuits, whose connections are continually adjusted as we experience and interact with the world, are key to how we learn. But to keep knowledge and memories intact, some parts of the circuitry must be resistant to this constant change.

“Brains have figured out how to navigate this landscape of balancing between stability and flexibility, so that you can have new learning and you can have lifelong memory,” says neuroscientist Mark Harnett, an investigator at MIT’s McGovern Institute.

In the August 27, 2024 issue of the journal Cell Reports, Harnett and his team show how individual neurons can contribute to both parts of this vital duality. By studying the synapses through which pyramidal neurons in the brain’s sensory cortex communicate, they have learned how the cells preserve their understanding of some of the world’s most fundamental features, while also maintaining the flexibility they need to adapt to a changing world.

McGovern Institute Investigator Mark Harnett. Photo: Adam Glanzman

Visual connections

Pyramidal neurons receive input from other neurons via thousands of connection points. Early in life, these synapses are extremely malleable; their strength can shift as a young animal takes in visual information and learns to interpret it. Most remain adaptable into adulthood, but Harnett’s team discovered that some of the cells’ synapses lose their flexibility when the animals are less than a month old. Having both stable and flexible synapses means these neurons can combine input from different sources to use visual information in flexible ways.

Microscopic image of a mouse brain.
A confocal image of a mouse brain showing dLGN neurons in pink. Image: Courtney Yaeger, Mark Harnett.

Postdoctoral fellow Courtney Yaeger took a close look at these unusually stable synapses, which cluster together along a narrow region of the elaborately branched pyramidal cells. She was interested in the connections through which the cells receive primary visual information, so she traced their connections with neurons in a vision-processing center of the brain’s thalamus called the dorsal lateral geniculate nucleus (dLGN).

The long extensions through which a neuron receives signals from other cells are called dendrites, and they branch off from the main body of the cell into a tree-like structure. Spiny protrusions along the dendrites form the synapses that connect pyramidal neurons to other cells. Yaeger’s experiments showed that connections from the dLGN all led to a defined region of the pyramidal cells—a tight band within what she describes as the trunk of the dendritic tree.

Yaeger found several ways in which synapses in this region—formally known as the apical oblique dendrite domain—differ from other synapses on the same cells. “They’re not actually that far away from each other, but they have completely different properties,” she says.

Stable synapses

In one set of experiments, Yaeger activated synapses on the pyramidal neurons and measured the effect on the cells’ electrical potential. Changes to a neuron’s electrical potential generate the impulses the cells use to communicate with one another. It is common for a synapse’s electrical effects to amplify when synapses nearby are also activated. But when signals were delivered to the apical oblique dendrite domain, each one had the same effect, no matter how many synapses were stimulated. Synapses there don’t interact with one another at all, Harnett says. “They just do what they do. No matter what their neighbors are doing, they all just do kind of the same thing.”

Two rows of seven confocal microscope images of dendrites.
Representative oblique (top) and basal (bottom) dendrites from the same Layer 5 pyramidal neuron imaged across 7 days. Transient spines are labeled with yellow arrowheads the day before disappearance. Image: Courtney Yaeger, Mark Harnett.

The team was also able to visualize the molecular contents of individual synapses. This revealed a surprising scarcity of a certain kind of neurotransmitter receptor, the NMDA receptor, in the apical oblique dendrites. That was notable because of NMDA receptors’ role in mediating changes in the brain. “Generally when we think about any kind of learning and memory and plasticity, it’s NMDA receptors that do it,” Harnett says. “That is by far the most common substrate of learning and memory in all brains.”

When Yaeger stimulated the apical oblique synapses with electricity, generating patterns of activity that would strengthen most synapses, the team discovered a consequence of the limited presence of NMDA receptors. The synapses’ strength did not change. “There’s no activity-dependent plasticity going on there, as far as we have tested,” Yaeger says.

That makes sense, the researchers say, because the cells’ connections from the thalamus relay primary visual information detected by the eyes. It is through these connections that the brain learns to recognize basic visual features like shapes and lines.

“These synapses are basically a robust, high fidelity readout of this visual information,” Harnett explains. “That’s what they’re conveying, and it’s not context sensitive. So it doesn’t matter how many other synapses are active, they just do exactly what they’re going to do, and you can’t modify them up and down based on activity. So they’re very, very stable.”

“You actually don’t want those to be plastic,” adds Yaeger.

“Can you imagine going to sleep and then forgetting what a vertical line looks like? That would be disastrous.” – Courtney Yaeger

By conducting the same experiments in mice of different ages, the researchers determined that the synapses that connect pyramidal neurons to the thalamus become stable a few weeks after young mice first open their eyes. By that point, Harnett says, they have learned everything they need to learn. On the other hand, if mice spend the first weeks of their lives in the dark, the synapses never stabilize—further evidence that the transition depends on visual experience.

The team’s findings not only help explain how the brain balances flexibility and stability, they could help researchers teach artificial intelligence how to do the same thing. Harnett says artificial neural networks are notoriously bad at this: When an artificial neural network that does something well is trained to do something new, it almost always experiences “catastrophic forgetting” and can no longer perform its original task. Harnett’s team is exploring how they can use what they’ve learned about real brains to overcome this problem in artificial networks.
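As a loose illustration of what catastrophic forgetting looks like (a toy sketch of the general phenomenon, not the Harnett lab’s work), consider a single linear unit trained by gradient descent on one task and then on another. Because every one of its “synapses” stays plastic, training on the second task overwrites what was learned on the first:

```python
import numpy as np

def train(w, X, y, lr=0.1, steps=200):
    """Full-batch gradient descent on mean-squared error."""
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_A = np.array([1.0, -2.0, 0.5])   # task A's target mapping
w_B = np.array([-1.5, 0.5, 2.0])   # task B's conflicting target mapping
y_A, y_B = X @ w_A, X @ w_B

w = train(np.zeros(3), X, y_A)
err_A_before = mse(w, X, y_A)      # near zero: task A is learned

w = train(w, X, y_B)               # same weights, retrained on task B
err_A_after = mse(w, X, y_A)       # task A performance collapses

print(err_A_before < 1e-3, err_A_after > 1.0)  # prints: True True
```

Every stable synapse in a real brain is, in effect, a parameter this little network lacks: one that refuses to be overwritten.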

Finding the way

This story also appears in the Fall 2024 issue of BrainScan.

___

When you arrive in a new city, every outing can be an exploration. You may know your way to a few places, but only if you follow a specific route. As you wander around a bit, get lost a few times, and familiarize yourself with some landmarks and where they are relative to each other, your brain develops a cognitive map of the space. You learn how things are laid out, and navigating gets easier.

It takes a lot to generate a useful mental map. “You have to understand the structure of relationships in the world,” says McGovern Investigator Mehrdad Jazayeri. “You need learning and experience to construct clever representations. The advantage is that when you have them, the world is an easier place to deal with.”

Indeed, Jazayeri says, internal models like these are the core of intelligent behavior.

Mehrdad Jazayeri (right) and graduate student Jack Gabel sit inside a rig designed to probe the brain’s ability to solve real-world problems with internal models. Photo: Steph Stevens

Many McGovern scientists see these cognitive maps as windows into their biggest questions about the brain: how it represents the external world, how it lets us learn and adapt, and how it forms and reconstructs memories. Researchers are learning that the cells and strategies the brain uses to understand the layout of a space also help track other kinds of structure in the world — from variations in sound to sequences of events. By studying how neurons behave as animals navigate their environments, McGovern researchers expect to deepen their understanding of other important cognitive functions as well.

Decoding spatial maps

McGovern Investigator Ila Fiete builds theoretical models that help explain how spatial maps are formed in the brain. Previous research has shown that “place cells” and “grid cells” are place-sensitive neurons in the brain’s hippocampus and entorhinal cortex whose firing patterns help an animal map out a space. As an animal becomes familiar with its environment, subsets of these cells become tied to specific locations, firing only when the animal is in them.
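The notion of a cell that fires only in one location can be caricatured in a few lines of Python (an illustration of the general concept, not Fiete’s actual models): a unit whose firing rate peaks at a preferred spot and falls off with distance from it. All names and numbers here are invented for illustration.

```python
import math

def place_cell_rate(position, preferred, peak_rate=20.0, width=0.1):
    """Firing rate (spikes/s) of a toy place cell with a Gaussian field."""
    d2 = (position[0] - preferred[0]) ** 2 + (position[1] - preferred[1]) ** 2
    return peak_rate * math.exp(-d2 / (2 * width ** 2))

# Fires strongly at the center of its place field...
print(place_cell_rate((0.5, 0.5), (0.5, 0.5)))  # -> 20.0

# ...and is nearly silent a short distance away.
print(place_cell_rate((0.9, 0.5), (0.5, 0.5)) < 1.0)  # -> True
```

A population of such units, each with a different preferred location, tiles the environment and collectively encodes the animal’s position.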

Microscopic image of the mouse hippocampus
The brain’s ability to navigate the world is made possible by a brain circuit that includes the hippocampus (above), entorhinal cortex, and retrosplenial cortex. The firing patterns of “grid cells” and “place cells” in this circuit help form mental representations, or cognitive maps, of the external world. These brain regions are also among the first areas to be affected in people with Alzheimer’s, who often have trouble navigating. Image: Qian Chen, Guoping Feng

Fiete’s models have shown how these circuits can integrate information about movement, like signals from the muscles and vestibular system that change as an animal moves around, to calculate and update its estimate of an animal’s position in space. Fiete suspects the cells that do this can use the same strategy to keep track of other kinds of movement or change.
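This integration strategy is often called path integration, or dead reckoning, and its core logic fits in a few lines (a toy sketch, not Fiete’s circuit models): position is never sensed directly, only accumulated from self-motion signals like speed and heading.

```python
import math

def integrate_path(start, motion_signals, dt=1.0):
    """Update a position estimate from a stream of (speed, heading) samples."""
    x, y = start
    for speed, heading in motion_signals:
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
    return x, y

# Walk one step east, one step north, one step west:
# the running estimate ends one step north of the start.
signals = [(1.0, 0.0), (1.0, math.pi / 2), (1.0, math.pi)]
print(integrate_path((0.0, 0.0), signals))  # -> approximately (0.0, 1.0)
```

The same accumulator works for any quantity that changes incrementally, which hints at how the circuit might track variables other than physical position.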

Mapping a space is about understanding where things are in relationship to one another, says Jazayeri, and tracking relationships is useful for modeling many kinds of structure in the world. For example, the hippocampus and entorhinal cortex are also closely linked to episodic memory, which keeps track of the connections between events and experiences.

“These brain areas are thought to be critical for learning relationships,” Jazayeri says.

Navigating virtual worlds

A key feature of cognitive maps is that they enable us to make predictions and respond to new situations without relying on immediate sensory cues. In a study published in Nature this June, Jazayeri and Fiete saw evidence of the brain’s ability to call up an internal model of an abstract domain: they watched neurons in the brain’s entorhinal cortex register a sequence of images, even when they were hidden from view.

Two scientists write equations on a glass wall with a marker.
Ila Fiete and postdoc Sarthak Chandra (right) develop theoretical models to study the brain. Photo: Steph Stevens

We can remember the layout of our home from far away or plan a walk through the neighborhood without stepping outside — so it may come as no surprise that the brain can call up its internal model in the absence of movement or sensory inputs. Indeed, previous research has shown that the circuits that encode physical space also encode abstract spaces like sound sequences. But these experiments were performed in the presence of the stimuli, and Jazayeri and his team wanted to know whether simply imagining movement through an abstract domain might also evoke the same cognitive maps.

To test the entorhinal cortex’s ability to do this, Jazayeri and his team designed an experiment in which animals had to “mentally” navigate through a previously explored, but now invisible, sequence of images. Working with Fiete, they found that the neurons that had become responsive to particular images in the visible sequence also fired when the animal mentally navigated the same sequence with the images hidden from view — suggesting the animal was conjuring a representation of each image in its mind.

Colored dots in the shape of a ring.
Ila Fiete has shown that the brain generates a one-dimensional ring of neural activity that acts as a compass. Here, head direction is indicated by color. Image: Ila Fiete

“You see these neurons in the entorhinal cortex undergo very clear dynamic patterns that are in correspondence with what we think the animal might be thinking at the time,” Jazayeri says. “They are updating themselves without any change out there in the world.”

The team then incorporated their data into a computational model to explore how neural circuits might form a mental model of abstract sequences. Their artificial circuit showed that external inputs (e.g., image sequences) become associated with internal models through a simple associative learning rule in which neurons that fire together, wire together. This model suggests that imagined movement could update the internal representations, and the learned association of these internal representations with external inputs might enable recall of the corresponding inputs even when they are absent.
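That fire-together-wire-together rule is classic Hebbian learning, and its associative power can be shown in a toy sketch (illustrative only — the vector sizes and names here are invented, and this is not the paper’s model): an internal state and an external input that co-occur get bound by an outer-product weight update, after which the internal state alone recalls the input.

```python
import numpy as np

rng = np.random.default_rng(1)
internal = rng.choice([-1.0, 1.0], size=8)   # internal representation
external = rng.choice([-1.0, 1.0], size=8)   # e.g. the code for one image

# Hebbian learning: each weight grows in proportion to the product of
# the activities it connects ("fire together, wire together").
W = np.outer(external, internal)

# Later, "imagined" activation of the internal state alone drives the
# network to reconstruct the associated external input.
recalled = np.sign(W @ internal)
print(np.array_equal(recalled, external))  # -> True
```

This is the same principle that lets a remembered internal state call up a stimulus that is no longer present.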

More broadly, Fiete’s research on cognitive mapping in the hippocampus is leading to some interesting predictions: “One of the conclusions we’re coming to in my group is that when you reconstruct a memory, the area that’s driving that reconstruction is the entorhinal cortex and hippocampus but the reconstruction may happen in the sensory periphery, using the representations that played a role in experiencing that stimulus in the first place,” Fiete explains. “So when I reconstruct an image, I’m likely using my visual cortex to do that reconstruction, driven by the hippocampal complex.” Signals from the entorhinal cortex to the visual cortex during navigation could help an animal visualize landmarks and find its way, even when those landmarks are not visible in the external world.

Landmark coding

Near the entorhinal cortex is the retrosplenial cortex, another brain area that seems to be important for navigation. It is positioned to integrate visual signals with information about the body’s position and movement through space. Both the retrosplenial cortex and the entorhinal cortex are among the first areas impacted by Alzheimer’s disease; spatial disorientation and navigation difficulties may be consequences of their degeneration.

Researchers suspect the retrosplenial cortex may be key to letting an animal know not just where something is, but also how to get there. McGovern Investigator Mark Harnett explains that to generate a cognitive map that can be used to navigate, an animal must understand not just where objects or other cues are in relationship to itself, but also where they are in relationship to each other.

In a study reported in eLife in 2020, Harnett and colleagues may have glimpsed both of these kinds of representations of space inside the brain. They recorded neurons in the retrosplenial cortex lighting up as mice ran on a treadmill while a virtual environment scrolled past. As the mice became familiar with the landscape and learned where they were likely to find a reward, activity in the retrosplenial cortex changed.

A scientist looks at a computer monitor and adjusts a small wheel.
Lukas Fischer, a Harnett lab postdoc, operates a rig designed to study how mice navigate a virtual environment. Photo: Justin Knight

“What we found was this representation started off sort of crude and mostly about what the animal was doing. And then eventually it became more about the task, the landscape, and the reward,” Harnett says.

Harnett’s team has since begun investigating how the retrosplenial cortex enables more complex spatial reasoning. They designed an experiment in which mice must understand many spatial relationships to access a treat. The experimental setup requires mice to consider the location of reward ports, the center of their environment, and their own viewing angle. Most of the time, they succeed. “They have to really do some triangulation, and the retrosplenial cortex seems to be critical for that,” Harnett says.

When the team monitored neural activity during the task, they found evidence that when an animal wasn’t quite sure where to go, its brain held on to multiple spatial hypotheses at the same time, until new information ruled one out.
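One simple way to formalize holding several spatial hypotheses at once until evidence rules one out (a generic sketch, not the lab’s actual analysis) is Bayesian belief updating over a handful of candidate locations:

```python
def update_beliefs(beliefs, likelihoods):
    """Bayesian update: weight each hypothesis by the evidence, renormalize."""
    posterior = {h: beliefs[h] * likelihoods.get(h, 0.0) for h in beliefs}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

# Start uncertain between three candidate reward ports.
beliefs = {"left": 1 / 3, "center": 1 / 3, "right": 1 / 3}

# A glimpse of the arena makes "left" impossible and favors "center",
# but "right" survives as a live hypothesis until more evidence arrives.
beliefs = update_beliefs(beliefs, {"left": 0.0, "center": 0.8, "right": 0.2})
print(round(beliefs["center"], 3))  # -> 0.8
```

The brain need not literally compute these equations, but a population of neurons representing several candidate locations at graded strengths behaves much like this posterior.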

Fiete, who has worked with Harnett to explore how neural circuits can execute this kind of spatial reasoning, points out that Jazayeri’s team has observed similar reasoning in animals that must make decisions based on temporarily ambiguous auditory cues. “In both cases, animals are able to hold multiple hypotheses in mind and do the inference,” she says. “Mark’s found that the retrosplenial cortex contains all the signals necessary to do that reasoning.”

Beyond spatial reasoning

As his team learns more about how the brain creates and uses cognitive maps, Harnett hopes activity in the retrosplenial cortex will shed light on a fundamental aspect of the brain’s organization. The retrosplenial cortex doesn’t just receive information from the brain’s vision-processing center; it also sends signals back. He suspects these may direct the visual cortex to relay information that is particularly pertinent to forming or using a meaningful cognitive map.

“The brain’s navigation system is a beautiful playground.” – Ila Fiete

This kind of connectivity, where parts of the brain that carry out complex cognitive processing send signals back to regions that handle simpler functions, is common in the brain. Figuring out why is a key pursuit in Harnett’s lab. “I want to use that as a model for thinking about the larger cortical computations, because you see this kind of motif repeated in a lot of ways, and it’s likely key for understanding how learning works,” he says.

Fiete is particularly interested in unpacking the common set of principles that allow cell circuits to generate maps of both our physical environment and our abstract experiences. What is it about this set of brain areas and circuits that, on the one hand, permits specific map-building computations, and, on the other hand, generalizes across physical space and abstract experience?

“The brain’s navigation system is a beautiful playground,” she says, “and an amazing system in which to investigate all of these questions.”

Do we only use 10 percent of our brain?

Movies like “Limitless” and “Lucy” play on the notion that humans use only 10 percent of their brains—and those who unlock a higher percentage wield powers like infinite memory or telekinesis. It’s enticing to think that so much of the brain remains untapped and is ripe for boosting human potential.

But the idea that we use 10 percent of our brain is 100 percent a myth.

In fact, scientists believe that we use our entire brain every day. Mila Halgren is a graduate student in the lab of Mark Harnett, an associate professor of brain and cognitive sciences and an investigator at the McGovern Institute. The Harnett lab studies the computational power of neurons, that is, how neural networks rapidly process massive amounts of information.

“All of our brain is constantly in use and consumes a tremendous amount of energy,” Halgren says. “Despite making up only two percent of our body weight, it devours 20 percent of our calories.” This doesn’t appear to change significantly with different tasks, from typing on a computer to doing yoga. “Even while we sleep, our entire brain remains intensely active.”

When did this myth take root?

Portrait of scientist Mila Halgren
Mila Halgren is a PhD student in MIT’s Department of Brain and Cognitive Sciences. Photo: Mila Halgren

The myth is thought to have gained traction when scientists first began exploring the brain’s abilities but lacked the tools to capture its exact workings. In 1907, William James, a founder of American psychology, suggested in his book “The Energies of Men” that “we are making use of only a small part of our possible mental and physical resources.” This influential work likely sparked the idea that humans access a mere fraction of the brain—setting this common misconception ablaze.

Brainpower lore even suggests that Albert Einstein credited his genius to being able to access more than 10 percent of his brain. However, no such quote has been documented, and this too is perhaps a myth of cosmic proportions.

Halgren believes that there may be some fact backing this fiction. “People may think our brain is underutilized in the sense that some neurons fire very infrequently—once every few minutes or less. But this isn’t true of most neurons, some of which fire hundreds of times per second,” she says.

In the nascent years of neuroscience, scientists also argued that a large portion of the brain must be inactive because some people experience brain injuries and can still function at a high level, like the famous case of Phineas Gage. Halgren points to the brain’s remarkable plasticity—the reshaping of neural connections. “Entire brain hemispheres can be removed during early childhood and the rest of the brain will rewire and compensate for the loss. In other words, the brain will use 100 percent of what it has, but can make do with less depending on which structures are damaged.”

Is there a limit to the brain?

If we indeed use our entire brain, can humans tease out any problem? Or, are there enigmas in the world that we will never unravel?

“This is still in contention,” Halgren says. “There may be certain problems that the human brain is fundamentally unable to solve, like how a mouse will never understand chemistry and a chimpanzee can’t do calculus.”

Can we increase our brainpower?

The brain may have its limits, but there are ways to boost our cognitive prowess to ace that midterm or crank up productivity in the workplace. According to Halgren, “You can increase your brainpower, but there’s no ‘trick’ that will allow you to do so. Like any organ in your body, the brain works best with proper sleep, exercise, low stress, and a well-balanced diet.”

The truth is, we may never rearrange furniture with our minds or foresee which team will win the Super Bowl. The idea of a largely latent brain is draped in fantasy, but debunking this myth speaks to the immense growth of neuroscience over the years—and the allure of other misconceptions that scientists have yet to demystify.

Fourteen MIT School of Science professors receive tenure for 2022 and 2023

In 2022, nine MIT faculty were granted tenure in the School of Science:

Gloria Choi examines the interaction of the immune system with the brain and the effects of that interaction on neurodevelopment, behavior, and mood. She also studies how social behaviors are regulated according to sensory stimuli, context, internal state, and physiological status, and how these factors modulate neural circuit function via a combinatorial code of classic neuromodulators and immune-derived cytokines. Choi joined the Department of Brain and Cognitive Sciences after a postdoc at Columbia University. She received her bachelor’s degree from the University of California at Berkeley, and her PhD from Caltech. Choi is also an investigator in The Picower Institute for Learning and Memory.

Nikta Fakhri develops experimental tools and conceptual frameworks to uncover laws governing fluctuations, order, and self-organization in active systems. Such frameworks provide powerful insight into dynamics of nonequilibrium living systems across scales, from the emergence of thermodynamic arrow of time to spatiotemporal organization of signaling protein patterns and discovery of odd elasticity. Fakhri joined the Department of Physics in 2015 following a postdoc at University of Göttingen. She completed her undergraduate degree at Sharif University of Technology and her PhD at Rice University.

Geobiologist Greg Fournier uses a combination of molecular phylogeny insights and geologic records to study major events in planetary history, with the hope of furthering our understanding of the co-evolution of life and environment. Recently, his team developed a new technique to analyze multiple gene evolutionary histories and estimated that photosynthesis evolved between 3.4 and 2.9 billion years ago. Fournier joined the Department of Earth, Atmospheric and Planetary Sciences in 2014 after working as a postdoc at the University of Connecticut and as a NASA Postdoctoral Program Fellow in MIT’s Department of Civil and Environmental Engineering. He earned his BA from Dartmouth College in 2001 and his PhD in genetics and genomics from the University of Connecticut in 2009.

Daniel Harlow researches black holes and cosmology, viewed through the lens of quantum gravity and quantum field theory. His work generates new insights into quantum information, quantum field theory, and gravity. Harlow joined the Department of Physics in 2017 following postdocs at Princeton University and Harvard University. He obtained a BA in physics and mathematics from Columbia University in 2006 and a PhD in physics from Stanford University in 2012. He is also a researcher in the Center for Theoretical Physics.

A biophysicist, Gene-Wei Li studies how bacteria optimize the levels of proteins they produce at both mechanistic and systems levels. His lab focuses on design principles of transcription, translation, and RNA maturation. Li joined the Department of Biology in 2015 after completing a postdoc at the University of California at San Francisco. He earned a BS in physics from National Tsinghua University in 2004 and a PhD in physics from Harvard University in 2010.

Michael McDonald focuses on the evolution of galaxies and clusters of galaxies, and the role that environment plays in dictating this evolution. This research involves the discovery and study of the most distant assemblies of galaxies alongside analyses of the complex interplay between gas, galaxies, and black holes in the closest, most massive systems. McDonald joined the Department of Physics and the Kavli Institute for Astrophysics and Space Research in 2015 after three years as a Hubble Fellow, also at MIT. He obtained his BS and MS degrees in physics at Queen’s University, and his PhD in astronomy at the University of Maryland in College Park.

Gabriela Schlau-Cohen combines tools from chemistry, optics, biology, and microscopy to develop new approaches to probe dynamics. Her group focuses on dynamics in membrane proteins, particularly photosynthetic light-harvesting systems that are of interest for sustainable energy applications. Following a postdoc at Stanford University, Schlau-Cohen joined the Department of Chemistry faculty in 2015. She earned a bachelor’s degree in chemical physics from Brown University in 2003 followed by a PhD in chemistry at the University of California at Berkeley.

Phiala Shanahan’s research interests are focused around theoretical nuclear and particle physics. In particular, she works to understand the structure and interactions of hadrons and nuclei from the fundamental degrees of freedom encoded in the Standard Model of particle physics. After a postdoc at MIT and a joint position as an assistant professor at the College of William and Mary and senior staff scientist at the Thomas Jefferson National Accelerator Facility, Shanahan returned to the Department of Physics as faculty in 2018. She obtained her BS from the University of Adelaide in 2012 and her PhD, also from the University of Adelaide, in 2015.

Omer Yilmaz explores the impact of dietary interventions on stem cells, the immune system, and cancer within the intestine. By better understanding how intestinal stem cells adapt to diverse diets, his group hopes to identify and develop new strategies that prevent and reduce the growth of cancers involving the intestinal tract. Yilmaz joined the Department of Biology in 2014 and is now also a member of the Koch Institute for Integrative Cancer Research. After receiving his BS from the University of Michigan in 1999 and his PhD and MD from the University of Michigan Medical School in 2008, he was a resident in anatomic pathology at Massachusetts General Hospital and Harvard Medical School until 2013.

In 2023, five MIT faculty were granted tenure in the School of Science:

Physicist Riccardo Comin explores the novel phases of matter that can be found in electronic solids with strong interactions, also known as quantum materials. His group employs a combination of synthesis, scattering, and spectroscopy to obtain a comprehensive picture of these emergent phenomena, including superconductivity, (anti)ferromagnetism, spin-density-waves, charge order, ferroelectricity, and orbital order. Comin joined the Department of Physics in 2016 after postdoctoral work at the University of Toronto. He completed his undergraduate studies at the Università degli Studi di Trieste in Italy, where he also obtained an MS in physics in 2009. Later, he pursued doctoral studies at the University of British Columbia, Canada, earning a PhD in 2013.

Netta Engelhardt researches the dynamics of black holes in quantum gravity and uses holography to study the interplay between gravity and quantum information. Her primary focus is the black hole information paradox: black holes seem to destroy information that, according to quantum physics, cannot be destroyed. Engelhardt was a postdoc at Princeton University and a member of the Princeton Gravity Initiative prior to joining the Department of Physics in 2019. She received her BS in physics and mathematics from Brandeis University and her PhD in physics from the University of California at Santa Barbara. Engelhardt is a researcher in the Center for Theoretical Physics and the Black Hole Initiative at Harvard University.

Mark Harnett studies how the biophysical features of individual neurons endow neural circuits with the ability to process information and perform the complex computations that underlie behavior. As part of this work, his lab was the first to describe the physiological properties of human dendrites. He joined the Department of Brain and Cognitive Sciences and the McGovern Institute for Brain Research in 2015. Prior, he was a postdoc at the Howard Hughes Medical Institute’s Janelia Research Campus. He received his BA in biology from Reed College in Portland, Oregon and his PhD in neuroscience from the University of Texas at Austin.

Or Hen investigates quantum chromodynamic effects in the nuclear medium and the interplay between partonic and nucleonic degrees of freedom in nuclei. Specifically, Hen utilizes high-energy scattering of electron, neutrino, photon, proton and ion off atomic nuclei to study short-range correlations: temporal fluctuations of high-density, high-momentum, nucleon clusters in nuclei with important implications for nuclear, particle, atomic, and astrophysics. Hen was an MIT Pappalardo Fellow in the Department of Physics from 2015 to 2017 before joining the faculty in 2017. He received his undergraduate degree in physics and computer engineering from the Hebrew University and earned his PhD in experimental physics at Tel Aviv University.

Sebastian Lourido is interested in learning about the vulnerabilities of parasites in order to develop treatments for infectious diseases and expand our understanding of eukaryotic diversity. His lab studies many important human pathogens, including Toxoplasma gondii, to model features conserved throughout the phylum. Lourido was a Whitehead Fellow at the Whitehead Institute for Biomedical Research until 2017, when he joined the Department of Biology and became a Whitehead Member. He earned his BS from Tulane University in 2004 and his PhD from Washington University in St. Louis in 2012.

Silent synapses are abundant in the adult brain

MIT neuroscientists have discovered that the adult brain contains millions of “silent synapses” — immature connections between neurons that remain inactive until they’re recruited to help form new memories.

Until now, it was believed that silent synapses were present only during early development, when they help the brain learn the new information that it’s exposed to early in life. However, the new MIT study revealed that in adult mice, about 30 percent of all synapses in the brain’s cortex are silent.

The existence of these silent synapses may help to explain how the adult brain is able to continually form new memories and learn new things without having to modify existing conventional synapses, the researchers say.

“These silent synapses are looking for new connections, and when important new information is presented, connections between the relevant neurons are strengthened. This lets the brain create new memories without overwriting the important memories stored in mature synapses, which are harder to change,” says Dimitra Vardalaki, an MIT graduate student and the lead author of the new study.

Mark Harnett, an associate professor of brain and cognitive sciences and an investigator at the McGovern Institute for Brain Research, is the senior author of the paper, which appears today in Nature. Kwanghun Chung, an associate professor of chemical engineering at MIT, is also an author.

A surprising discovery

When scientists first discovered silent synapses decades ago, they were seen primarily in the brains of young mice and other animals. During early development, these synapses are believed to help the brain acquire the massive amounts of information that babies need to learn about their environment and how to interact with it. In mice, these synapses were believed to disappear by about 12 days of age (equivalent to the first months of human life).

However, some neuroscientists have proposed that silent synapses may persist into adulthood and help with the formation of new memories. Evidence for this has been seen in animal models of addiction, which is thought to be largely a disorder of aberrant learning.

Theoretical work in the field from Stefano Fusi and Larry Abbott of Columbia University has also proposed that neurons must display a wide range of different plasticity mechanisms to explain how brains can both efficiently learn new things and retain them in long-term memory. In this scenario, some synapses must be established or modified easily, to form the new memories, while others must remain much more stable, to preserve long-term memories.

In the new study, the MIT team did not set out specifically to look for silent synapses. Instead, they were following up on an intriguing finding from a previous study in Harnett’s lab. In that paper, the researchers showed that within a single neuron, dendrites — antenna-like extensions that protrude from neurons — can process synaptic input in different ways, depending on their location.

As part of that study, the researchers tried to measure neurotransmitter receptors in different dendritic branches, to see if that would help to account for the differences in their behavior. To do that, they used a technique called eMAP (epitope-preserving Magnified Analysis of the Proteome), developed by Chung. Using this technique, researchers can physically expand a tissue sample and then label specific proteins in the sample, making it possible to obtain super-high-resolution images.

While they were doing that imaging, they made a surprising discovery. “The first thing we saw, which was super bizarre and we didn’t expect, was that there were filopodia everywhere,” Harnett says.

Filopodia, thin membrane protrusions that extend from dendrites, have been seen before, but neuroscientists didn’t know exactly what they do. That’s partly because filopodia are so tiny that they are difficult to see using traditional imaging techniques.

After making this observation, the MIT team set out to try to find filopodia in other parts of the adult brain, using the eMAP technique. To their surprise, they found filopodia in the mouse visual cortex and other parts of the brain, at a level 10 times higher than previously seen. They also found that filopodia had neurotransmitter receptors called NMDA receptors, but no AMPA receptors.

A typical active synapse has both of these types of receptors, which bind the neurotransmitter glutamate. NMDA receptors normally require cooperation with AMPA receptors to pass signals because NMDA receptors are blocked by magnesium ions at the normal resting potential of neurons. Thus, when AMPA receptors are not present, synapses that have only NMDA receptors cannot pass along an electric current and are referred to as “silent.”
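The voltage logic of this magnesium block can be illustrated with a small numerical sketch. The snippet below uses a commonly cited sigmoid form for the Mg²⁺ block of NMDA receptors; the constants and conductance values are illustrative assumptions, not measurements from the study.

```python
import math

def nmda_open_fraction(v_mv, mg_mm=1.0):
    """Fraction of NMDA-receptor conductance not blocked by Mg2+
    (Jahr-Stevens-style sigmoid; constants are illustrative)."""
    return 1.0 / (1.0 + (mg_mm / 3.57) * math.exp(-v_mv / 16.13))

def synaptic_current(v_mv, g_ampa_ns, g_nmda_ns, e_rev_mv=0.0):
    """Total glutamatergic current (arbitrary units) at potential v_mv."""
    g_nmda_eff = g_nmda_ns * nmda_open_fraction(v_mv)
    return (g_ampa_ns + g_nmda_eff) * (v_mv - e_rev_mv)

# A "silent" synapse: NMDA receptors only, near the resting potential.
silent_at_rest = synaptic_current(-70.0, g_ampa_ns=0.0, g_nmda_ns=1.0)

# A mature synapse with AMPA receptors conducts robustly at rest.
mature_at_rest = synaptic_current(-70.0, g_ampa_ns=1.0, g_nmda_ns=1.0)

# Depolarization relieves the Mg2+ block, "unsilencing" the NMDA component.
silent_depolarized = synaptic_current(-20.0, g_ampa_ns=0.0, g_nmda_ns=1.0)
```

With these toy parameters, the NMDA-only synapse passes only a few percent of the current of the AMPA-containing synapse at rest, while depolarization relieves most of the block.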

Unsilencing synapses

To investigate whether these filopodia might be silent synapses, the researchers used a modified version of an experimental technique known as patch clamping. This allowed them to monitor the electrical activity generated at individual filopodia as they tried to stimulate them by mimicking the release of the neurotransmitter glutamate from a neighboring neuron.

Using this technique, the researchers found that glutamate would not generate any electrical signal in the filopodium receiving the input unless the NMDA receptors were experimentally unblocked. This offers strong support for the theory that these filopodia represent silent synapses in the brain, the researchers say.

The researchers also showed that they could “unsilence” these synapses by combining glutamate release with an electrical current coming from the body of the neuron. This combined stimulation leads to accumulation of AMPA receptors in the silent synapse, allowing it to form a strong connection with the nearby axon that is releasing glutamate.

The researchers found that converting silent synapses into active synapses was much easier than altering mature synapses.

“If you start with an already functional synapse, that plasticity protocol doesn’t work,” Harnett says. “The synapses in the adult brain have a much higher threshold, presumably because you want those memories to be pretty resilient. You don’t want them constantly being overwritten. Filopodia, on the other hand, can be captured to form new memories.”

“Flexible and robust”

The findings offer support for the theory proposed by Abbott and Fusi that the adult brain includes highly plastic synapses that can be recruited to form new memories, the researchers say.

“This paper is, as far as I know, the first real evidence that this is how it actually works in a mammalian brain,” Harnett says. “Filopodia allow a memory system to be both flexible and robust. You need flexibility to acquire new information, but you also need stability to retain the important information.”

The researchers are now looking for evidence of these silent synapses in human brain tissue. They also hope to study whether the number or function of these synapses is affected by factors such as aging or neurodegenerative disease.

“It’s entirely possible that by changing the amount of flexibility you’ve got in a memory system, it could become much harder to change your behaviors and habits or incorporate new information,” Harnett says. “You could also imagine finding some of the molecular players that are involved in filopodia and trying to manipulate some of those things to try to restore flexible memory as we age.”

The research was funded by the Boehringer Ingelheim Fonds, the National Institutes of Health, the James W. and Patricia T. Poitras Fund at MIT, a Klingenstein-Simons Fellowship, a Vallee Foundation Scholarship, and a McKnight Scholarship.

Approaching human cognition from many angles

In January, as the Charles River was starting to freeze over, Keith Murray and the other members of MIT’s men’s heavyweight crew team took to erging on the indoor rowing machine. For 80 minutes at a time, Murray endured one of the most grueling workouts of his college experience. To distract himself from the pain, he would talk with his teammates, covering everything from great philosophical ideas to personal coffee preferences.

For Murray, virtually any conversation is an opportunity to explore how people think and why they think in certain ways. Currently a senior double majoring in computation and cognition, and linguistics and philosophy, Murray tries to understand the human experience based on knowledge from all of these fields.

“I’m trying to blend different approaches together to understand the complexities of human cognition,” he says. “For example, from a physiological perspective, the brain is just billions of neurons firing all at once, but this hardly scratches the surface of cognition.”

Murray grew up in Corydon, Indiana, where he attended the Indiana Academy for Science, Mathematics, and Humanities during his junior year of high school. He was exposed to philosophy there, learning the ideas of Plato, Socrates, and Thomas Aquinas, to name a few. When looking at colleges, Murray became interested in MIT because he wanted to learn about human thought processes from different perspectives. “Coming to MIT, I knew I wanted to do something philosophical. But I wanted to also be on the more technical side of things,” he says.

Once on campus, Murray immediately pursued an opportunity through the Undergraduate Research Opportunity Program (UROP) in the Digital Humanities Lab. There he worked with language-processing technology to analyze gendered language in various novels, with the end goal of displaying the data for an online audience. He learned the basic mathematical models used to analyze and present data online, and studied the social implications of linguistic phrases and expressions.

Murray also joined the Concourse learning community, which brought together different perspectives from the humanities, sciences, and math in a weekly seminar. “I was exposed to some excellent examples of how to do interdisciplinary work,” he recalls.

In the summer before his sophomore year, Murray took a position as a researcher in the Harnett Lab, where instead of working with novels, he was working with mice. Alongside postdoc Lucas Fisher, Murray trained mice to do navigational tasks using virtual reality equipment. His goal was to explore neural encoding in navigation, understanding why the mice behaved in certain ways after being shown certain stimuli on the screens. Spending time in the lab, Murray became increasingly interested in neuroscience and the biological components behind human thought processes.

He sought out other neuroscience-related research experiences, which led him to explore a SuperUROP project in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). Working under Professor Nancy Lynch, he designed theoretical models of the retina using machine learning, applying the techniques he had learned in 9.40 (Introduction to Neural Computation) to complex neurological problems. He considers this one of his most challenging research experiences, because the work took place entirely online.

“It was during the pandemic, so I had to learn a lot on my own; I couldn’t exactly do research in a lab. It was a big challenge, but at the end, I learned a lot and ended up getting a publication out of it,” he reflects.

This past semester, Murray has worked in the lab of Professor Ila Fiete in the McGovern Institute for Brain Research, constructing deep-learning models of animals performing navigational tasks. Through this UROP, which builds on his final project from Fiete’s class 9.49 (Neural Circuits for Cognition), Murray has been working to incorporate existing theoretical models of the hippocampus to investigate the intersection between artificial intelligence and neuroscience.

Reflecting on his varied research experiences, Murray says they have shown him new ways to explore the human brain from multiple perspectives, something he finds helpful as he tries to understand the complexity of human behavior.

Outside of his academic pursuits, Murray has continued to row with the crew team, where he walked on his first year. He sees rowing as a way to build up his strength, both physically and mentally. “When I’m doing my class work or I’m thinking about projects, I am using the same mental toughness that I developed during rowing,” he says. “That’s something I learned at MIT, to cultivate the dedication you put toward something. It’s all the same mental toughness whether you apply it to physical activities like rowing, or research projects.”

Looking ahead, Murray hopes to pursue a PhD in neuroscience, looking to find ways to incorporate his love of philosophy and human thought into his cognitive research. “I think there’s a lot more to do with neuroscience, especially with artificial intelligence. There are so many new technological developments happening right now,” he says.

Dendrites may help neurons perform complicated calculations

Within the human brain, neurons perform complex calculations on information they receive. Researchers at MIT have now demonstrated how dendrites — branch-like extensions that protrude from neurons — help to perform those computations.

The researchers found that within a single neuron, different types of dendrites receive input from distinct parts of the brain, and process it in different ways. These differences may help neurons to integrate a variety of inputs and generate an appropriate response, the researchers say.

In the neurons that the researchers examined in this study, it appears that this dendritic processing helps cells to take in visual information and combine it with motor feedback, in a circuit that is involved in navigation and planning movement.

“Our hypothesis is that these neurons have the ability to pick out specific features and landmarks in the visual environment, and combine them with information about running speed, where I’m going, and when I’m going to start, to move toward a goal position,” says Mark Harnett, an associate professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Mathieu Lafourcade, a former MIT postdoc, is the lead author of the paper, which appears today in Neuron.

Complex calculations

Any given neuron can have dozens of dendrites, which receive synaptic input from other neurons. Neuroscientists have hypothesized that these dendrites can act as compartments that perform their own computations on incoming information before sending the results to the body of the neuron, which integrates all these signals to generate an output.

Previous research has shown that dendrites can amplify incoming signals using specialized proteins called NMDA receptors. These voltage-sensitive neurotransmitter receptors depend on the activity of other receptors called AMPA receptors: when a dendrite receives many incoming signals through AMPA receptors at the same time, the resulting depolarization reaches the threshold that activates nearby NMDA receptors, creating an extra burst of current.

This phenomenon, known as supralinearity, is believed to help neurons distinguish between inputs that arrive close together or farther apart in time or space, Harnett says.
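As a rough illustration of the distinction, here is a minimal Python sketch (with invented parameters) contrasting a linear branch with one that adds an NMDA-like regenerative boost once coincident input crosses a threshold:

```python
import math

def linear_branch(total_input):
    """A passive branch: output is simply proportional to summed input."""
    return total_input

def supralinear_branch(total_input, threshold=4.0, gain=6.0, slope=1.0):
    """A branch with an NMDA-receptor-like nonlinearity: once enough
    coincident input crosses the threshold, an extra regenerative
    current boosts the response (all parameters are illustrative)."""
    nmda_boost = gain / (1.0 + math.exp(-(total_input - threshold) / slope))
    return total_input + nmda_boost

one_input = 3.0

# Responses to two inputs arriving far apart in time, summed afterward:
separate = 2 * supralinear_branch(one_input)

# Response to the same two inputs arriving together on the branch:
together = supralinear_branch(2 * one_input)
```

With these toy numbers, two inputs arriving together produce a larger response than the sum of the two responses measured separately, which is the signature of supralinearity; the linear branch, by construction, cannot tell the two cases apart.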

In the new study, the MIT researchers wanted to determine whether different types of inputs are targeted specifically to different types of dendrites, and if so, how that would affect the computations performed by those neurons. They focused on a population of neurons called pyramidal cells, the principal output neurons of the cortex, which have several different types of dendrites. Basal dendrites extend below the body of the neuron, apical oblique dendrites extend from a trunk that travels up from the body, and tuft dendrites are located at the top of the trunk.

Harnett and his colleagues chose a part of the brain called the retrosplenial cortex (RSC) for their studies because it is a good model for association cortex — the type of brain cortex used for complex functions such as planning, communication, and social cognition. The RSC integrates information from many parts of the brain to guide navigation, and pyramidal neurons play a key role in that function.

In a study of mice, the researchers first showed that three different types of input come into pyramidal neurons of the RSC: from the visual cortex into basal dendrites, from the motor cortex into apical oblique dendrites, and from the lateral nuclei of the thalamus, a visual processing area, into tuft dendrites.

“Until now, there hasn’t been much mapping of what inputs are going to those dendrites,” Harnett says. “We found that there are some sophisticated wiring rules here, with different inputs going to different dendrites.”

A range of responses

The researchers then measured electrical activity in each of those compartments. They expected that NMDA receptors would show supralinear activity, because this behavior has been demonstrated before in dendrites of pyramidal neurons in both the primary sensory cortex and the hippocampus.

In the basal dendrites, the researchers saw just what they expected: Input coming from the visual cortex provoked supralinear electrical spikes, generated by NMDA receptors. However, just 50 microns away, in the apical oblique dendrites of the same cells, the researchers found no signs of supralinear activity. Instead, input to those dendrites drove a steady, linear response. Those dendrites also have a much lower density of NMDA receptors.

“That was shocking, because no one’s ever reported that before,” Harnett says. “What that means is the apical obliques don’t care about the pattern of input. Inputs can be separated in time, or together in time, and it doesn’t matter. It’s just a linear integrator that’s telling the cell how much input it’s getting, without doing any computation on it.”

Those linear inputs likely represent information such as running speed or destination, Harnett says, while the visual information coming into the basal dendrites represents landmarks or other features of the environment. The supralinearity of the basal dendrites allows them to perform more sophisticated types of computation on that visual input, which the researchers hypothesize allows the RSC to flexibly adapt to changes in the visual environment.

In the tuft dendrites, which receive input from the thalamus, it appears that NMDA spikes can be generated, but not very easily. Like the apical oblique dendrites, the tuft dendrites have a low density of NMDA receptors. Harnett’s lab is now studying what happens in all of these different types of dendrites as mice perform navigation tasks.

The research was funded by a Boehringer Ingelheim Fonds PhD Fellowship, the National Institutes of Health, the James W. and Patricia T. Poitras Fund, the Klingenstein-Simons Fellowship Program, a Vallee Scholar Award, and a McKnight Scholar Award.

School of Science announces 2022 Infinite Expansion Awards

The MIT School of Science has announced eight postdocs and research scientists as recipients of the 2022 Infinite Expansion Award.

The award, formerly known as the Infinite Kilometer Award, was created in 2012 to highlight extraordinary members of the MIT science community. The awardees are nominated not only for their research, but for going above and beyond in mentoring junior colleagues, participating in educational programs, and contributing to their departments, labs, and research centers, the school, and the Institute.

The 2022 School of Science Infinite Expansion winners are:

  • Héctor de Jesús-Cortés, a postdoc in the Picower Institute for Learning and Memory, nominated by professor and Department of Brain and Cognitive Sciences (BCS) head Michale Fee, professor and McGovern Institute for Brain Research Director Robert Desimone, professor and Picower Institute Director Li-Huei Tsai, professor and associate BCS head Laura Schulz, associate professor and associate BCS head Joshua McDermott, and professor and BCS Postdoc Officer Mark Bear for his “awe-inspiring commitment of time and energy to research, outreach, education, mentorship, and community;”
  • Harold Erbin, a postdoc in the Laboratory for Nuclear Science’s Institute for Artificial Intelligence and Fundamental Interactions (IAIFI), nominated by professor and IAIFI Director Jesse Thaler, associate professor and IAIFI Deputy Director Mike Williams, and associate professor and IAIFI Early Career and Equity Committee Chair Tracy Slatyer for “provid[ing] exemplary service on the IAIFI Early Career and Equity Committee” and being “actively involved in many other IAIFI community building efforts;”
  • Megan Hill, a postdoc in the Department of Chemistry, nominated by Professor Jeremiah Johnson for being an “outstanding scientist” who has “also made exceptional contributions to our community through her mentorship activities and participation in Women in Chemistry;”
  • Kevin Kuns, a postdoc in the Kavli Institute for Astrophysics and Space Research, nominated by Associate Professor Matthew Evans for “consistently go[ing] beyond expectations;”
  • Xingcheng Lin, a postdoc in the Department of Chemistry, nominated by Associate Professor Bin Zhang for being “very talented, extremely hardworking, and genuinely enthusiastic about science;”
  • Alexandra Pike, a postdoc in the Department of Biology, nominated by Professor Stephen Bell for “not only excel[ing] in the laboratory” but also being “an exemplary citizen in the biology department, contributing to teaching, community, and to improving diversity, equity, and inclusion in the department;”
  • Nora Shipp, a postdoc with the Kavli Institute for Astrophysics and Space Research, nominated by Assistant Professor Lina Necib for being “independent, efficient, with great leadership qualities” with “impeccable” research; and
  • Jakob Voigts, a research scientist in the McGovern Institute for Brain Research, nominated by Associate Professor Mark Harnett and his laboratory for “contribut[ing] to the growth and development of the lab and its members in numerous and irreplaceable ways.”

Winners are honored with a monetary award and will be celebrated with family, friends, and nominators at a later date, along with recipients of the Infinite Mile Award.

Study finds a striking difference between neurons of humans and other mammals

McGovern Institute Investigator Mark Harnett. Photo: Justin Knight

Neurons communicate with each other via electrical impulses, which are produced by ion channels that control the flow of ions such as potassium and sodium. In a surprising new finding, MIT neuroscientists have shown that human neurons have a much smaller number of these channels than expected, compared to the neurons of other mammals.

The researchers hypothesize that this reduction in channel density may have helped the human brain evolve to operate more efficiently, allowing it to divert resources to other energy-intensive processes that are required to perform complex cognitive tasks.

“If the brain can save energy by reducing the density of ion channels, it can spend that energy on other neuronal or circuit processes,” says Mark Harnett, an associate professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Harnett and his colleagues analyzed neurons from 10 different mammals in the most extensive electrophysiological study of its kind, and identified a “building plan” that holds true for every species they looked at — except for humans. They found that as the size of neurons increases, the density of channels found in the neurons also increases.

However, human neurons proved to be a striking exception to this rule.

“Previous comparative studies established that the human brain is built like other mammalian brains, so we were surprised to find strong evidence that human neurons are special,” says former MIT graduate student Lou Beaulieu-Laroche.

Beaulieu-Laroche is the lead author of the study, which appears today in Nature.

A building plan

Neurons in the mammalian brain can receive electrical signals from thousands of other cells, and that input determines whether or not they will fire an electrical impulse called an action potential. In 2018, Harnett and Beaulieu-Laroche discovered that human and rat neurons differ in some of their electrical properties, primarily in parts of the neuron called dendrites — tree-like antennas that receive and process input from other cells.

One of the findings from that study was that human neurons had a lower density of ion channels than neurons in the rat brain. The researchers were surprised by this observation, as ion channel density was generally assumed to be constant across species. In their new study, Harnett and Beaulieu-Laroche decided to compare neurons from several different mammalian species to see if they could find any patterns that governed the expression of ion channels. They studied two types of voltage-gated potassium channels and the HCN channel, which conducts both potassium and sodium, in layer 5 pyramidal neurons, a type of excitatory neuron found in the brain’s cortex.

Former McGovern Institute graduate student Lou Beaulieu-Laroche is the lead author of the 2021 Nature paper.

They were able to obtain brain tissue from 10 mammalian species: Etruscan shrews (one of the smallest known mammals), gerbils, mice, rats, guinea pigs, ferrets, rabbits, marmosets, and macaques, as well as human tissue removed from patients with epilepsy during brain surgery. This variety allowed the researchers to cover a range of cortical thicknesses and neuron sizes across the mammalian kingdom.

The researchers found that in nearly every mammalian species they looked at, the density of ion channels increased as the size of the neurons went up. The one exception to this pattern was in human neurons, which had a much lower density of ion channels than expected.

The increase in channel density across species was surprising, Harnett says, because the more channels there are, the more energy is required to pump ions in and out of the cell. However, it started to make sense once the researchers began thinking about the number of channels in the overall volume of the cortex, he says.

In the tiny brain of the Etruscan shrew, which is packed with very small neurons, there are more neurons in a given volume of tissue than in the same volume of tissue from the rabbit brain, which has much larger neurons. But because the rabbit neurons have a higher density of ion channels, the density of channels in a given volume of tissue is the same in both species, or any of the nonhuman species the researchers analyzed.

“This building plan is consistent across nine different mammalian species,” Harnett says. “What it looks like the cortex is trying to do is keep the numbers of ion channels per unit volume the same across all the species. This means that for a given volume of cortex, the energetic cost is the same, at least for ion channels.”
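The bookkeeping behind this rule can be made concrete with a short sketch. The numbers below are invented for illustration, not measurements from the study:

```python
# Illustrative numbers only: a fixed channel budget per unit volume of
# cortex can be met either by many small neurons with few channels each,
# or by fewer large neurons with more channels each.
target_channels_per_mm3 = 1_000_000

shrew_neurons_per_mm3 = 100_000   # small, densely packed neurons
rabbit_neurons_per_mm3 = 20_000   # larger, more sparsely packed neurons

# Per-neuron channel counts implied by the shared per-volume budget:
shrew_channels_per_neuron = target_channels_per_mm3 / shrew_neurons_per_mm3
rabbit_channels_per_neuron = target_channels_per_mm3 / rabbit_neurons_per_mm3
```

In this toy accounting, the per-volume channel counts match even though the per-neuron density differs fivefold; human cortex, by contrast, falls well below the total this rule would predict.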

Energy efficiency

The human brain represents a striking deviation from this building plan, however. Instead of an increased density of ion channels, the researchers found a density dramatically lower than expected for a given volume of brain tissue.

The researchers believe this lower density may have evolved as a way to expend less energy on pumping ions, which allows the brain to use that energy for something else, like creating more complicated synaptic connections between neurons or firing action potentials at a higher rate.

“We think that humans have evolved out of this building plan that was previously restricting the size of cortex, and they figured out a way to become more energetically efficient, so you spend less ATP per volume compared to other species,” Harnett says.

He now hopes to study where that extra energy might be going, and whether there are specific gene mutations that help neurons of the human cortex achieve this high efficiency. The researchers are also interested in exploring whether primate species that are more closely related to humans show similar decreases in ion channel density.

The research was funded by the Natural Sciences and Engineering Research Council of Canada, a Friends of the McGovern Institute Fellowship, the National Institute of General Medical Sciences, the Paul and Daisy Soros Fellows Program, the Dana Foundation David Mahoney Neuroimaging Grant Program, the National Institutes of Health, the Harvard-MIT Joint Research Grants Program in Basic Neuroscience, and Susan Haar.

Other authors of the paper include Norma Brown, an MIT technical associate; Marissa Hansen, a former post-baccalaureate scholar; Enrique Toloza, a graduate student at MIT and Harvard Medical School; Jitendra Sharma, an MIT research scientist; Ziv Williams, an associate professor of neurosurgery at Harvard Medical School; Matthew Frosch, an associate professor of pathology and health sciences and technology at Harvard Medical School; Garth Rees Cosgrove, director of epilepsy and functional neurosurgery at Brigham and Women’s Hospital; and Sydney Cash, an assistant professor of neurology at Harvard Medical School and Massachusetts General Hospital.

Nine MIT students awarded 2021 Paul and Daisy Soros Fellowships for New Americans

An MIT senior and eight MIT graduate students are among the 30 recipients of this year’s P.D. Soros Fellowships for New Americans. In addition to senior Fiona Chen, MIT’s newest Soros winners include graduate students Aziza Almanakly, Alaleh Azhir, Brian Y. Chang PhD ’18, James Diao, Charlie ChangWon Lee, Archana Podury, Ashwin Sah ’20, and Enrique Toloza. Six of the recipients are enrolled in the Harvard-MIT Program in Health Sciences and Technology.

P.D. Soros Fellows receive up to $90,000 to fund their graduate studies and join a lifelong community of new Americans from different backgrounds and fields. The 2021 class was selected from a pool of 2,445 applicants, marking the most competitive year in the fellowship’s history.

The Paul & Daisy Soros Fellowships for New Americans program honors the contributions of immigrants and children of immigrants to the United States. As Fiona Chen says, “Being a new American has required consistent confrontation with the struggles that immigrants and racial minorities face in the U.S. today. It has meant frequent difficulties with finding security and comfort in new contexts. But it has also meant continual growth in learning to love the parts of myself — the way I look; the things that my family and I value — that have marked me as different, or as an outsider.”

Students interested in applying to the P.D. Soros fellowship should contact Kim Benard, assistant dean of distinguished fellowships in Career Advising and Professional Development.

Aziza Almanakly

Aziza Almanakly, a PhD student in electrical engineering and computer science, researches microwave quantum optics with superconducting qubits for quantum communication under Professor William Oliver in the Department of Physics. Almanakly’s career goal is to engineer multi-qubit systems that push boundaries in quantum technology.

Born and raised in northern New Jersey, Almanakly is the daughter of Syrian immigrants who came to the United States in the early 1990s in pursuit of academic opportunities. As the civil war in Syria grew dire, more of her relatives sought asylum in the U.S. Almanakly grew up around extended family who built a new version of their Syrian home in New Jersey.

Following in the footsteps of her mathematically minded father, Almanakly studied electrical engineering at The Cooper Union for the Advancement of Science and Art. She also pursued research opportunities in experimental quantum computing at Princeton University, the City University of New York, New York University, and Caltech.

Almanakly recognizes the importance of strong mentorship in diversifying engineering. She uses her unique experience as a New American and female engineer to encourage students from underrepresented backgrounds to enter STEM fields.

Alaleh Azhir

Alaleh Azhir grew up in Iran, where she pursued her passion for mathematics, and immigrated with her mother to the United States at age 14. Determined to overcome the strict gender roles she had witnessed imposed on women, Azhir is dedicated to improving women’s health care.

Azhir graduated from Johns Hopkins University in 2019 with a perfect GPA as a triple major in biomedical engineering, computer science, and applied mathematics and statistics. A Rhodes and Barry Goldwater Scholar, she has developed many novel tools for visualization and analysis of genomics data at Johns Hopkins University, Harvard University, MIT, the National Institutes of Health, and laboratories in Switzerland.

After completing a master’s in statistical science at Oxford University, Azhir began her MD studies in the Harvard-MIT Program in Health Sciences and Technology. Her thesis focuses on the role of the X and Y sex chromosomes in disease manifestations. Through medical training, she aims to build further computational tools specifically for preventive care for women. She has also founded and directs the nonprofit organization Frappa, which mentors women living in Iran and helps them immigrate abroad through the graduate school application process.

Brian Y. Chang PhD ’18

Born in Johnson City, New York, Brian Y. Chang PhD ’18 is the son of immigrants from the Shanghai municipality and Shandong Province in China. He pursued undergraduate and master’s degrees in mechanical engineering at Carnegie Mellon University, graduating in a combined four years with honors.

In 2018, Chang completed a PhD in medical engineering at MIT. Under the mentorship of Professor Elazer Edelman, Chang developed methods that make advanced cardiac technologies more accessible. The resulting approaches are used in hospitals around the world. Chang has published extensively and holds five patents.

With the goal of harnessing the power of engineering to improve patient care, Chang co-founded X-COR Therapeutics, a seed-funded medical device startup developing a more accessible treatment for lung failure with the potential to support patients with severe Covid-19 and chronic obstructive pulmonary disease.

After spending time in the hospital connecting with patients and teaching cardiovascular pathophysiology to medical students, Chang decided to attend medical school. He is currently a medical student in the Harvard-MIT Program in Health Sciences and Technology. Chang hopes to advance health care through medical device innovation and education as a future physician-scientist, entrepreneur, and educator.

Fiona Chen

MIT senior Fiona Chen was born in Cedar Park, Texas, the daughter of immigrants from China. Witnessing how her own and many other immigrant families faced significant difficulties finding work and financial stability sparked her interest in learning about poverty and economic inequality.

At MIT, Chen has pursued degrees in economics and mathematics. Her economics research projects have examined important policy issues — social isolation among students, global development and poverty, universal health-care systems, and the role of technology in shaping the labor market.

An active member of the MIT community, Chen has served as the officer on governance and officer on policy of the Undergraduate Association, MIT’s student government; the opinion editor of The Tech student newspaper; an undergraduate representative on several Institute-wide committees, including MIT’s Corporation Joint Advisory Committee; and one of the founding members of MIT Students Against War. In each of these roles, she has worked to advocate for policies that support underrepresented groups at MIT.

As a Soros fellow, Chen will pursue a PhD in economics to deepen her understanding of economic policy. Her ultimate goal is to become a professor who researches poverty and economic inequality, and applies her findings to craft policy solutions.

James Diao

James Diao graduated from Yale University with degrees in statistics and biochemistry and is currently a medical student in the Harvard-MIT Program in Health Sciences and Technology. He aspires to give voice to patient perspectives in the development and evaluation of health-care technology.

Diao grew up in Houston’s Chinatown, and spent summers with his extended family in Jiangxian. Diao’s family later moved to Fort Bend, Texas, where he found a pediatric oncologist mentor who introduced him to the wonders of modern molecular biology.

Diao’s interests include the responsible development of technology. At Apple, he led projects to validate wearable health features in diverse populations; at PathAI, he built deep learning models to broaden access to pathologist services; at Yale, he worked on standardizing analyses of exRNA biomarkers; and at Harvard, he studied the impacts of clinical guidelines on marginalized groups.

Diao’s lead-author research in the New England Journal of Medicine and JAMA systematically compared race-based and race-free equations for kidney function, and demonstrated that up to 1 million Black Americans may receive unequal kidney care due to their race. He has also published articles on machine learning and precision medicine.

Charlie ChangWon Lee

Born in Seoul, South Korea, Charlie ChangWon Lee was 10 when his family immigrated to the United States and settled in Palisades Park, New Jersey. The stress of his parents’ lack of health coverage ignited Lee’s determination to study the reasons for the high cost of health care in the U.S. and learn how to care for uninsured families like his own.

Lee graduated summa cum laude in integrative biology from Harvard College, winning the Hoopes Prize for his thesis on the therapeutic potential of human gut microbes. Lee’s research on novel therapies led him to question how newly approved, and expensive, medications could reach more patients.

At the Program on Regulation, Therapeutics, and Law (PORTAL) at Brigham and Women’s Hospital, Lee studied policy issues involving pharmaceutical drug pricing, drug development, and medication use and safety. His articles have appeared in JAMA, Health Affairs, and Mayo Clinic Proceedings.

As a first-year medical student at the Harvard-MIT Health Sciences and Technology program, Lee is investigating policies to incentivize vaccine and biosimilar drug development. He hopes to find avenues to bridge science and policy and translate medical innovations into accessible, affordable therapies.

Archana Podury

The daughter of Indian immigrants, Archana Podury was born in Mountain View, California. As an undergraduate at Cornell University, she studied the neural circuits underlying motor learning. Her growing interest in whole-brain dynamics led her to the Princeton Neuroscience Institute and Neuralink, where she discovered how brain-machine interfaces could be used to understand diffuse networks in the brain.

While studying neural circuits, Podury worked at a syringe exchange in Ithaca, New York, where she witnessed firsthand the mechanics of court-based drug rehabilitation. Now, as an MD student in the Harvard-MIT Health Sciences and Technology program, Podury is interested in combining computational and social approaches to neuropsychiatric disease.

In the Boyden Lab at the MIT McGovern Institute for Brain Research, Podury is developing human brain organoid models to better characterize circuit dysfunction in neurodevelopmental disorders. Concurrently, her work in the Dhand Lab at Brigham and Women’s Hospital applies network science tools to understand how patients’ social environments influence their health outcomes following acute neurological injury.

Podury hopes that focusing on both neural and social networks can lead toward a more comprehensive, and compassionate, approach to health and disease.

Ashwin Sah ’20

Ashwin Sah ’20 was born and raised in Portland, Oregon, the son of Indian immigrants. He developed a passion for mathematics research as an undergraduate at MIT, where he conducted research under Professor Yufei Zhao, as well as at the Duluth and Emory REU (Research Experience for Undergraduates) programs.

Sah has given talks on his work at multiple professional venues. His undergraduate research in varied areas of combinatorics and discrete mathematics earned him the Barry Goldwater Scholarship and the Frank and Brennie Morgan Prize for Outstanding Research in Mathematics by an Undergraduate Student. Additionally, his work on diagonal Ramsey numbers was recently featured in Quanta Magazine.

Beyond research, Sah has pursued opportunities to give back to the math community, helping to organize or grade competitions such as the Harvard-MIT Mathematics Tournament and the USA Mathematical Olympiad. He has also been a grader at the Mathematical Olympiad Program, a camp for talented high school students in the United States, and an instructor for the Monsoon Math Camp, a virtual program aimed at teaching higher mathematics to high school students in India.

Sah is currently a PhD student in mathematics at MIT, where he continues to work with Zhao.

Enrique Toloza

Enrique Toloza was born in Los Angeles, California, the child of two immigrants: one from Colombia who came to the United States for a PhD and the other from the Philippines who grew up in California and went on to medical school. Their literal marriage of science and medicine inspired Toloza to become a physician-scientist.

Toloza majored in physics and Spanish literature at the University of North Carolina at Chapel Hill. He eventually settled on an interest in theoretical neuroscience after a summer research internship at MIT and an honors thesis on noninvasive brain stimulation.

After college, Toloza joined Professor Mark Harnett’s laboratory at MIT for a year. He went on to enroll in the Harvard-MIT MD/PhD program, studying within the Health Sciences and Technology MD curriculum at Harvard and the PhD program at MIT. For his PhD, Toloza rejoined Harnett to conduct research on the biophysics of dendritic integration and the contribution of dendrites to cortical computations in the brain.

Toloza is passionate about expanding health care access to immigrant populations. In college, he led the interpreting team at the University of North Carolina at Chapel Hill’s student-run health clinic; at Harvard Medical School, he has worked with Spanish-speaking patients as a student clinician.