Four from MIT named 2025 Rhodes Scholars

Yiming Chen ’24, Wilhem Hector, Anushka Nair, and David Oluigbo have been selected as 2025 Rhodes Scholars and will begin fully funded postgraduate studies at Oxford University in the U.K. next fall. In addition to MIT’s two U.S. Rhodes winners, Oluigbo and Nair, two affiliates were awarded international Rhodes Scholarships: Chen for Rhodes’ China constituency and Hector for the Global Rhodes Scholarship. Hector is the first Haitian citizen to be named a Rhodes Scholar.

The scholars were supported by Associate Dean Kim Benard and the Distinguished Fellowships team in Career Advising and Professional Development. They received additional mentorship and guidance from the Presidential Committee on Distinguished Fellowships.

“It is profoundly inspiring to work with our amazing students, who have accomplished so much at MIT and, at the same time, thought deeply about how they can have an impact in solving the world’s major challenges,” says Professor Nancy Kanwisher, who co-chairs the committee along with Professor Tom Levenson. “These students have worked hard to develop and articulate their vision and to learn to communicate it to others with passion, clarity, and confidence. We are thrilled but not surprised to see so many of them recognized this year as finalists and as winners.”

Yiming Chen ’24

Yiming Chen, from Beijing, China, and the Washington area, was named one of four Rhodes China Scholars on Sept. 28. At Oxford, she will pursue graduate studies in engineering science, working toward her goal of advancing AI safety and reliability in clinical workflows.

Chen graduated from MIT in 2024 with a BS in mathematics and computer science and an MEng in computer science. She worked on several projects involving machine learning for health care, and focused her master’s research on medical imaging in the Medical Vision Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL).

Collaborating with IBM Research, Chen developed a neural framework for clinical-grade lumen segmentation in intravascular ultrasound and presented her findings at the MICCAI Machine Learning in Medical Imaging conference. Additionally, she worked at Cleanlab, an MIT-founded startup, creating an open-source library to ensure the integrity of image datasets used in vision tasks.

Chen was a teaching assistant in the MIT math and electrical engineering and computer science departments, and received a teaching excellence award. She taught high school students at the Hampshire College Summer Studies in Math and was selected to participate in MISTI Global Teaching Labs in Italy.

Having studied the guzheng, a traditional Chinese instrument, since age 4, Chen served as president of the MIT Chinese Music Ensemble, explored Eastern and Western music synergies with the MIT Chamber Music Society, and performed at the United Nations. On campus, she was also active with the Asymptones a cappella group, the MIT Ring Committee, Ribotones, the Figure Skating Club, and the Undergraduate Association Innovation Committee.

Wilhem Hector

Wilhem Hector, a senior from Port-au-Prince, Haiti, majoring in mechanical engineering, was awarded a Global Rhodes Scholarship on Nov. 1. The first Haitian national to be named a Rhodes Scholar, Hector will pursue a master’s in energy systems at Oxford, followed by a master’s in education focusing on digital and social change. His long-term goals are twofold: pioneering Haiti’s renewable energy infrastructure and expanding hands-on opportunities in the country’s national curriculum.

Hector developed his passion for energy through his research in the MIT Howland Lab, where he investigated the uncertainty of wind power production during active yaw control. He also helped launch the MIT Renewable Energy Clinic through his work on the sources of opposition to energy projects in the U.S. Beyond his research, Hector made notable contributions as an intern at Radia Inc. and DTU Wind Energy Systems, where he helped develop computational wind farm modeling and simulation techniques.

Outside of MIT, he leads the Hector Foundation, a nonprofit providing educational opportunities to young people in Haiti. He has raised over $80,000 in the past five years to finance its initiatives, including the construction of Project Manus, Haiti’s first open-use engineering makerspace. Hector’s service endeavors have been supported by the MIT PKG Center, which awarded him the Davis Peace Prize, the PKG Fellowship for Social Impact, and the PKG Award for Public Service.

Hector co-chairs both the Student Events Board and the Class of 2025 Senior Ball Committee and has served as the social chair for Chocolate City and the African Students Association.

Anushka Nair

Anushka Nair, from Portland, Oregon, will graduate next spring with BS and MEng degrees in computer science and engineering with concentrations in economics and AI. She plans to pursue a DPhil in social data science at the Oxford Internet Institute. Nair aims to develop ethical AI technologies that address pressing societal challenges, beginning with combating misinformation.

For her master’s thesis under Professor David Rand, Nair is developing LLM-powered fact-checking tools to detect nuanced misinformation beyond human or automated capabilities. She also researches human-AI co-reasoning at the MIT Center for Collective Intelligence with Professor Thomas Malone. Previously, she conducted research on autonomous vehicle navigation at Stanford’s AI and Robotics Lab, energy microgrid load balancing at MIT’s Institute for Data, Systems, and Society, and worked with Professor Esther Duflo in economics.

Nair interned in the Executive Office of the Secretary General at the United Nations, where she integrated technology solutions and assisted with launching the High-Level Advisory Body on AI. She also interned in Tesla’s energy sector, contributing to Autobidder, an energy trading tool, and led the launch of a platform for monitoring distributed energy resources and renewable power plants. Her work has earned her recognition as a Social and Ethical Responsibilities of Computing Scholar and a U.S. Presidential Scholar.

Nair has served as president of the MIT Society of Women Engineers and of MIT and Harvard Women in AI, spearheading outreach programs to mentor young women in STEM fields. She also served as president of the MIT honor societies Eta Kappa Nu and Tau Beta Pi.

David Oluigbo

David Oluigbo, from Washington, is a senior majoring in artificial intelligence and decision making and minoring in brain and cognitive sciences. At Oxford, he will undertake an MSc in applied digital health followed by an MSc in modeling for global health. Afterward, Oluigbo plans to attend medical school with the goal of becoming a physician-scientist who researches and applies AI to address medical challenges in low-income countries.

Since his first year at MIT, Oluigbo has conducted neural and brain research with Ev Fedorenko at the McGovern Institute for Brain Research and with Susanna Mierau’s Synapse and Network Development Group at Brigham and Women’s Hospital. His work with Mierau led to several publications and a poster presentation at the Federation of European Neuroscience Societies annual meeting.

In a summer internship at the National Institutes of Health Clinical Center, Oluigbo designed and trained machine-learning models on CT scans for automatic detection of neuroendocrine tumors, leading to first authorship on an International Society for Optics and Photonics conference proceeding paper, which he presented at the 2024 annual meeting. Oluigbo also did a summer internship with the Anyscale Learning for All Laboratory at the MIT Computer Science and Artificial Intelligence Laboratory.

Oluigbo is an EMT and systems administrator officer with MIT-EMS. He is a consultant for Code for Good, a representative on the MIT Schwarzman College of Computing Undergraduate Advisory Group, and holds executive roles with the Undergraduate Association, the MIT Brain and Cognitive Society, and the MIT Running Club.

Neuroscientists create a comprehensive map of the cerebral cortex

By analyzing brain scans taken as people watched movie clips, MIT researchers have created the most comprehensive map yet of the functions of the brain’s cerebral cortex.

Using functional magnetic resonance imaging (fMRI) data, the research team identified 24 networks with different functions, which include processing language, social interactions, visual features, and other types of sensory input.

Many of these networks have been seen before but haven’t been precisely characterized under naturalistic conditions. While the new study mapped networks in subjects watching engaging movies, previous studies used a small number of specific tasks or examined correlations across the brain in subjects who were simply resting.

“There’s an emerging approach in neuroscience to look at brain networks under more naturalistic conditions. This is a new approach that reveals something different from conventional approaches in neuroimaging,” says Robert Desimone, director of MIT’s McGovern Institute for Brain Research. “It’s not going to give us all the answers, but it generates a lot of interesting ideas based on what we see going on in the movies that’s related to these network maps that emerge.”

The researchers hope that their new map will serve as a starting point for further study of what each of these networks is doing in the brain.

Desimone and John Duncan, a program leader in the MRC Cognition and Brain Sciences Unit at Cambridge University, are the senior authors of the study, which appears today in Neuron. Reza Rajimehr, a research scientist in the McGovern Institute and a former graduate student at Cambridge University, is the lead author of the paper.

Precise mapping

The cerebral cortex of the brain contains regions devoted to processing different types of sensory information, including visual and auditory input. Over the past few decades, scientists have identified many networks that are involved in this kind of processing, often using fMRI to measure brain activity as subjects perform a single task such as looking at faces.

In other studies, researchers have scanned people’s brains as they do nothing, or let their minds wander. From those studies, researchers have identified networks such as the default mode network, a network of areas that is active during internally focused activities such as daydreaming.

“Up to now, most studies of networks were based on doing functional MRI in the resting-state condition. Based on those studies, we know some main networks in the cortex. Each of them is responsible for a specific cognitive function, and they have been highly influential in the neuroimaging field,” Rajimehr says.

However, during the resting state, many parts of the cortex may not be active at all. To gain a more comprehensive picture of what all these regions are doing, the MIT team analyzed data recorded while subjects performed a more natural task: watching a movie.

“By using a rich stimulus like a movie, we can drive many regions of the cortex very efficiently. For example, sensory regions will be active to process different features of the movie, and high-level areas will be active to extract semantic information and contextual information,” Rajimehr says. “By activating the brain in this way, now we can distinguish different areas or different networks based on their activation patterns.”

The data for this study were generated as part of the Human Connectome Project. Brain activity was imaged in 176 people, using a 7-Tesla MRI scanner that offers higher resolution than a typical MRI scanner, as they watched one hour of movie clips showing a variety of scenes.

The MIT team used a machine-learning algorithm to analyze the activity patterns of each brain region, allowing them to identify 24 networks with different activity patterns and functions.
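The article does not detail the algorithm, but the core step — grouping cortical regions by how similar their activity is over time — can be illustrated with a minimal sketch. The parcel count, the synthetic data, and the use of k-means below are assumptions for illustration, not the study’s actual method.

```python
# Illustrative sketch only: cluster brain regions into candidate networks by the
# similarity of their activity time courses during movie watching.
# Shapes, data, and the choice of k-means are assumptions, not the study's method.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_regions, n_timepoints = 360, 900                         # hypothetical cortical parcels x fMRI frames
activity = rng.standard_normal((n_regions, n_timepoints))  # stand-in for movie-watching fMRI data

# Z-score each region's time course so clustering reflects response shape, not amplitude
activity -= activity.mean(axis=1, keepdims=True)
activity /= activity.std(axis=1, keepdims=True)

# Group regions with similar activity patterns into 24 candidate networks
labels = KMeans(n_clusters=24, n_init=10, random_state=0).fit_predict(activity)

for k in range(3):                                         # inspect the first few clusters
    print(f"network {k}: {(labels == k).sum()} regions")
```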

Some of these networks are located in sensory areas such as the visual cortex or auditory cortex, as expected for regions with specific sensory functions. Other areas respond to features such as actions, language, or social interactions. Many of these networks have been seen before, but this technique offers more precise definition of where the networks are located, the researchers say.

“Different regions are competing with each other for processing specific features, so when you map each function in isolation, you may get a slightly larger network because it is not getting constrained by other processes,” Rajimehr says. “But here, because all the areas are considered together, we are able to define more precise boundaries between different networks.”

The researchers also identified networks that hadn’t been seen before, including one in the prefrontal cortex, which appears to be highly responsive to visual scenes. This network was most active in response to pictures of scenes within the movie frames.

Executive control networks

Three of the networks found in this study are involved in “executive control,” and were most active during transitions between different clips. The researchers also observed that these control networks appear to have a “push-pull” relationship with networks that process specific features such as faces or actions. When networks specific to a particular feature were very active, the executive control networks were mostly quiet, and vice versa.

“Whenever the activations in domain-specific areas are high, it looks like there is no need for the engagement of these high-level networks,” Rajimehr says. “But in situations where perhaps there is some ambiguity and complexity in the stimulus, and there is a need for the involvement of the executive control networks, then we see that these networks become highly active.”

Using a movie-watching paradigm, the researchers are now studying some of the networks they identified in more detail, to identify subregions involved in particular tasks. For example, within the social processing network, they have found regions that are specific to processing social information about faces and bodies. In a new network that analyzes visual scenes, they have identified regions involved in processing memory of places.

“This kind of experiment is really about generating hypotheses for how the cerebral cortex is functionally organized. Networks that emerge during movie watching now need to be followed up with more specific experiments to test the hypotheses. It’s giving us a new view into the operation of the entire cortex during a more naturalistic task than just sitting at rest,” Desimone says.

The research was funded by the McGovern Institute, the Cognitive Science and Technology Council of Iran, the MRC Cognition and Brain Sciences Unit at the University of Cambridge, and a Cambridge Trust scholarship.

Brains, fashion, alien life, and more: Highlights from the Cambridge Science Festival

What is it like to give birth on Mars? Can bioengineer TikTok stars win at the video game “Super Smash Brothers” while also answering questions about science? How do sheep, mouse, and human brains compare? These questions and others were asked last month when more than 50,000 visitors from across Cambridge, Massachusetts, and Greater Boston participated in the MIT Museum’s annual Cambridge Science Festival, a week-long celebration dedicated to creativity, ingenuity, and innovation. Running Monday, Sept. 23 through Sunday, Sept. 29, the 2024 edition was the largest in its history, with a dizzyingly diverse program spanning more than 300 events presented in more than 75 different venues, all free and open to the public.

Presented in partnership with the City of Cambridge and more than 250 collaborators across Greater Boston, this year’s festival comprised a wide range of interactive programs for adults, children, and families, including workshops, demos, keynote lectures, walking tours, professional networking opportunities, and expert panels. Aimed at scientists and non-scientists alike, the festival also collaborated with several local schools to offer visits from an astronaut for middle- and high-school students.

With support from dozens of local organizations, the festival was the first to take place under the new leadership of Michael John Gorman, who was appointed director of the MIT Museum in January and began his position in July.

“A science festival like this has an incredible ability to unite a diverse array of people and ideas, while also showcasing Cambridge as an internationally recognized leader in science, technology, engineering, and math,” says Gorman. “I’m thrilled to have joined an institution that values producing events that foster such a strong sense of community, and was so excited to see the enthusiastic response from the tens of thousands of people who showed up and made the festival such a success.”

The 2024 Cambridge Science Festival was broad in scope, with events ranging from hands-on 3D-printing demos to concerts from the MIT Laptop Ensemble to participatory activities at the MIT Museum’s Maker Hub. This year’s programming also highlighted three carefully curated theme tracks that each encompassed more than 25 associated events:

  1. “For the Win: Games, Puzzles, and the Science of Play” (Thursday) consisted of multiple evening events clustered around Kendall Square.
  2. “Frontiers: A New Era of Space Exploration” (Friday and Saturday) featured programs throughout Boston and was co-curated by The Space Consortium, organizers of Massachusetts Space Week.
  3. “Electric Skin: Wearable Tech and the Future of Fashion” (Saturday) offered both day and evening events at the intersection of science, fabric, and fashion, taking place at The Foundry and co-curated by Boston Fashion Week and Advanced Functional Fabrics of America.

One of the discussions tied to the games-themed “For the Win” track involved artist Jeremy Couillard speaking with MIT Lecturer Mikael Jakobsson about the larger importance of games as a construct for encouraging interpersonal interaction and creating meaningful social spaces. Starting this past summer, the List Visual Arts Center has been the home of Couillard’s first-ever institutional solo exhibition, which centers around “Escape from Lavender Island,” a dystopian third-person, open-world exploration game he released in 2023 on the Steam video-game platform.

For the “Frontiers” space theme, one of the headlining events, “Is Anyone Out There?”, tackled the latest cutting-edge research and theories related to the potential existence of extraterrestrial life. The panel of local astronomers and astrophysicists included Sara Seager, the Class of 1941 Professor of Planetary Science, professor of physics, and professor of aeronautics and astronautics at MIT; Kim Arcand, an expert in astronomic visualization at the Harvard-Smithsonian Center for Astrophysics; and Michael Hecht, a research scientist and associate director of research management at MIT’s Haystack Observatory. The researchers spoke about the tools they and their peers use to try to search for extraterrestrial life, and what discovering life beyond our planet might mean for humanity.

For the “Electric Skin” fashion track, events spanned a range of topics revolving around the role that technology will play in the future of the field, including sold-out workshops where participants learned how to laser-cut and engineer “structural garments.” A panel looking at generative technologies explored how designers are using AI to spur innovation in their companies. Onur Yüce Gün, director of computational design at New Balance, also spoke on a panel with Ziyuan “Zoey” Zhu from IDEO, MIT Media Lab research scientist and architect Behnaz Farahi, and Fiorenzo Omenetto, principal investigator and director of The Tufts Silk Lab and the Frank C. Doble Professor of Engineering at Tufts University and a professor in the Biomedical Engineering Department and in the Department of Physics at Tufts.

Beyond the three themed tracks, the festival comprised an eclectic mix of interactive events and panels. Cambridge Public Library hosted a “Science Story Slam” with high-school students from 10 different states competing for $5,000 in prize money. Entrants shared 5-minute-long stories about their adventures in STEM, with topics ranging from probability to “astro-agriculture.” Judges included several MIT faculty and staff, as well as New York Times national correspondent Kate Zernike.

Elsewhere, the MIT Museum’s Gorman moderated a discussion on AI and democracy that included Audrey Tang, the former minister of digital affairs of Taiwan. The panelists explored how AI tools could combat the polarization of political discourse and increase participation in democratic processes, particularly for marginalized voices. Also in the MIT Museum, the McGovern Institute for Brain Research organized a “Decoding the Brain” event with demos involving real animal brains, while the Broad Institute of MIT and Harvard ran a “Discovery After Dark” event to commemorate the institute’s 20th anniversary. Sunday’s Science Carnival featured more than 100 demos, events, and activities, including the ever-popular “Robot Petting Zoo.”

When it first launched in 2007, the Cambridge Science Festival was by many accounts the first large-scale event of its kind across the entire United States. Similar festivals have since popped up all over the country, including the World Science Festival in New York City, the USA Science and Engineering Festival in Washington, the North Carolina Science Festival in Chapel Hill, and the San Diego Festival of Science and Engineering.

More information about the festival is available online, including opportunities to participate in next year’s events.

Brain pathways that control dopamine release may influence motor control

Within the human brain, movement is coordinated by a brain region called the striatum, which sends instructions to motor neurons in the brain. Those instructions are conveyed by two pathways, one that initiates movement (“go”) and one that suppresses it (“no-go”).

In a new study, MIT researchers have discovered an additional two pathways that arise in the striatum and appear to modulate the effects of the go and no-go pathways. These newly discovered pathways connect to dopamine-producing neurons in the brain — one stimulates dopamine release and the other inhibits it.

By controlling the amount of dopamine in the brain via clusters of neurons known as striosomes, these pathways appear to modify the instructions given by the go and no-go pathways. They may be especially involved in influencing decisions that have a strong emotional component, the researchers say.

“Among all the regions of the striatum, the striosomes alone turned out to be able to project to the dopamine-containing neurons, which we think has something to do with motivation, mood, and controlling movement,” says Ann Graybiel, an MIT Institute Professor, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the new study.

Iakovos Lazaridis, a research scientist at the McGovern Institute, is the lead author of the paper, which appears today in the journal Current Biology.

New pathways

Graybiel has spent much of her career studying the striatum, a structure located deep within the brain that is involved in learning and decision-making, as well as control of movement.

Within the striatum, neurons are arranged in a labyrinth-like structure that includes striosomes, which Graybiel discovered in the 1970s. The classical go and no-go pathways arise from neurons that surround the striosomes, which are known collectively as the matrix. The matrix cells that give rise to these pathways receive input from sensory processing regions such as the visual cortex and auditory cortex. Then, they send go or no-go commands to neurons in the motor cortex.

However, the function of the striosomes, which are not part of those pathways, remained unknown. For many years, researchers in Graybiel’s lab have been trying to solve that mystery.

Their previous work revealed that striosomes receive much of their input from parts of the brain that process emotion. Within striosomes, there are two major types of neurons, classified as D1 and D2. In a 2015 study, Graybiel found that one of these cell types, D1, sends input to the substantia nigra, which is the brain’s major dopamine-producing center.

It took much longer to trace the output of the other set, the D2 neurons. In the new Current Biology study, the researchers discovered that those neurons also eventually project to the substantia nigra, but first they connect to a set of neurons in the globus pallidus, which inhibits dopamine output. This pathway, an indirect connection to the substantia nigra, reduces the brain’s dopamine output and inhibits movement.

The researchers also confirmed their earlier finding that the pathway arising from D1 striosomes connects directly to the substantia nigra, stimulating dopamine release and initiating movement.

“In the striosomes, we’ve found what is probably a mimic of the classical go/no-go pathways,” Graybiel says. “They’re like classic motor go/no-go pathways, but they don’t go to the motor output neurons of the basal ganglia. Instead, they go to the dopamine cells, which are so important to movement and motivation.”

Emotional decisions

The findings suggest that the classical model of how the striatum controls movement needs to be modified to include the role of these newly identified pathways. The researchers now hope to test their hypothesis that input related to motivation and emotion, which enters the striosomes from the cortex and the limbic system, influences dopamine levels in a way that can encourage or discourage action.

That dopamine release may be especially relevant for actions that induce anxiety or stress. In their 2015 study, Graybiel’s lab found that striosomes play a key role in making decisions that provoke high levels of anxiety; in particular, those that are high risk but may also have a big payoff.

“Ann Graybiel and colleagues have earlier found that the striosome is concerned with inhibiting dopamine neurons. Now they show unexpectedly that another type of striosomal neuron exerts the opposite effect and can signal reward. The striosomes can thus both up- or down-regulate dopamine activity, a very important discovery. Clearly, the regulation of dopamine activity is critical in our everyday life with regard to both movements and mood, to which the striosomes contribute,” says Sten Grillner, a professor of neuroscience at the Karolinska Institute in Sweden, who was not involved in the research.

Another possibility the researchers plan to explore is whether striosomes and matrix cells are arranged in modules that affect motor control of specific parts of the body.

“The next step is trying to isolate some of these modules, and by simultaneously working with cells that belong to the same module, whether they are in the matrix or striosomes, try to pinpoint how the striosomes modulate the underlying function of each of these modules,” Lazaridis says.

They also hope to explore how the striosomal circuits, which project to the same region of the brain that is ravaged by Parkinson’s disease, may influence that disorder.

The research was funded by the National Institutes of Health, the Saks-Kavanaugh Foundation, the William N. and Bernice E. Bumpus Foundation, Jim and Joan Schattinger, the Hock E. Tan and K. Lisa Yang Center for Autism Research, Robert Buxton, the Simons Foundation, the CHDI Foundation, and an Ellen Schapiro and Gerald Axelbaum Investigator BBRF Young Investigator Grant.

Seven with MIT ties elected to National Academy of Medicine for 2024

The National Academy of Medicine recently announced the election of more than 90 members during its annual meeting, including MIT faculty members Matthew Vander Heiden and Fan Wang, along with five MIT alumni.

Election to the National Academy of Medicine (NAM) is considered one of the highest honors in the fields of health and medicine and recognizes individuals who have demonstrated outstanding professional achievement and commitment to service.

Matthew Vander Heiden is the director of the Koch Institute for Integrative Cancer Research at MIT, a Lester Wolfe Professor of Molecular Biology, and a member of the Broad Institute of MIT and Harvard. His research explores how cancer cells reprogram their metabolism to fuel tumor growth and has provided key insights into metabolic pathways that support cancer progression, with implications for developing new therapeutic strategies. The National Academy of Medicine recognized Vander Heiden for his contributions to “the development of approved therapies for cancer and anemia” and his role as a “thought leader in understanding metabolic phenotypes and their relations to disease pathogenesis.”

Vander Heiden earned his MD and PhD from the University of Chicago and completed his clinical training in internal medicine and medical oncology at the Brigham and Women’s Hospital and the Dana-Farber Cancer Institute. After postdoctoral research at Harvard Medical School, Vander Heiden joined the faculty of the MIT Department of Biology and the Koch Institute in 2010. He is also a practicing oncologist and instructor in medicine at Dana-Farber Cancer Institute and Harvard Medical School.

Fan Wang is a professor of brain and cognitive sciences, an investigator at the McGovern Institute, and director of the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics at MIT. Wang’s research focuses on the neural circuits governing the bidirectional interactions between the brain and body. She is specifically interested in the circuits that control the sensory and emotional aspects of pain and addiction, as well as the sensory and motor circuits that work together to execute behaviors such as eating, drinking, and moving. The National Academy of Medicine has recognized her body of work for “providing the foundational knowledge to develop new therapies to treat chronic pain and movement disorders.”

Before coming to MIT in 2021, Wang obtained her PhD from Columbia University and received her postdoctoral training at the University of California at San Francisco and Stanford University. She became a faculty member at Duke University in 2003 and was later appointed the Morris N. Broad Professor of Neurobiology. Wang is also a member of the American Academy of Arts and Sciences, and she continues to make important contributions to the neural mechanisms underlying general anesthesia, pain perception, and movement control.

MIT alumni who were elected to the NAM for 2024 include:

  • Leemore Dafny PhD ’01 (Economics);
  • David Huang ’85, MS ’89 (Electrical Engineering and Computer Science), PhD ’93 (Medical Engineering and Medical Physics);
  • Nola M. Hylton ’79 (Chemical Engineering);
  • Mark R. Prausnitz PhD ’94 (Chemical Engineering); and
  • Konstantina M. Stankovic ’92 (Biology and Physics), PhD ’98 (Speech and Hearing Bioscience and Technology).

Established originally as the Institute of Medicine in 1970 by the National Academy of Sciences, the National Academy of Medicine addresses critical issues in health, science, medicine, and related policy and inspires positive actions across sectors.

“This class of new members represents the most exceptional researchers and leaders in health and medicine, who have made significant breakthroughs, led the response to major public health challenges, and advanced health equity,” said National Academy of Medicine President Victor J. Dzau. “Their expertise will be necessary to supporting NAM’s work to address the pressing health and scientific challenges we face today.”

Model reveals why debunking election misinformation often doesn’t work

When an election result is disputed, people who are skeptical about the outcome may be swayed by figures of authority who come down on one side or the other. Those figures can be independent monitors, political figures, or news organizations. However, these “debunking” efforts don’t always have the desired effect, and in some cases, they can lead people to cling more tightly to their original position.

Neuroscientists and political scientists at MIT and the University of California at Berkeley have now created a computational model that analyzes the factors that help to determine whether debunking efforts will persuade people to change their beliefs about the legitimacy of an election. Their findings suggest that while debunking fails much of the time, it can be successful under the right conditions.

For instance, the model showed that successful debunking is more likely if people are less certain of their original beliefs and if they believe the authority is unbiased or strongly motivated by a desire for accuracy. It also helps when an authority comes out in support of a result that goes against a bias they are perceived to hold: for example, Fox News declaring that Joseph R. Biden had won in Arizona in the 2020 U.S. presidential election.

“When people see an act of debunking, they treat it as a human action and understand it the way they understand human actions — that is, as something somebody did for their own reasons,” says Rebecca Saxe, the John W. Jarve Professor of Brain and Cognitive Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study. “We’ve used a very simple, general model of how people understand other people’s actions, and found that that’s all you need to describe this complex phenomenon.”

The findings could have implications as the United States prepares for the presidential election taking place on Nov. 5, as they help to reveal the conditions that would be most likely to result in people accepting the election outcome.

MIT graduate student Setayesh Radkani is the lead author of the paper, which appears today in a special election-themed issue of the journal PNAS Nexus. Marika Landau-Wells PhD ’18, a former MIT postdoc who is now an assistant professor of political science at the University of California at Berkeley, is also an author of the study.

Modeling motivation

In their work on election debunking, the MIT team took a novel approach, building on Saxe’s extensive work studying “theory of mind” — how people think about the thoughts and motivations of other people.

As part of her PhD thesis, Radkani has been developing a computational model of the cognitive processes that occur when people see others being punished by an authority. Not everyone interprets punitive actions the same way, depending on their previous beliefs about the action and the authority. Some may see the authority as acting legitimately to punish an act that was wrong, while others may see an authority overreaching to issue an unjust punishment.

Last year, after participating in an MIT workshop on the topic of polarization in societies, Saxe and Radkani had the idea to apply the model to how people react to an authority attempting to sway their political beliefs. They enlisted Landau-Wells, who received her PhD in political science before working as a postdoc in Saxe’s lab, to join their effort, and Landau-Wells suggested applying the model to debunking of beliefs regarding the legitimacy of an election result.

The computational model created by Radkani is based on Bayesian inference, which allows the model to continually update its predictions of people’s beliefs as they receive new information. This approach treats debunking as an action that a person undertakes for his or her own reasons. People who observe the authority’s statement then make their own interpretation of why the person said what they did. Based on that interpretation, people may or may not change their own beliefs about the election result.
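As a rough illustration of that Bayesian step (the prior values, and the simple way perceived accuracy motivation and perceived bias enter the likelihood here, are assumptions made for this sketch, not the paper’s exact formulation):

```python
# Minimal sketch of a Bayesian belief update about election legitimacy.
# How the authority's perceived motives enter the likelihood is an illustrative
# assumption, not the paper's exact model.

def update_belief(prior_legit, accuracy_motive, bias_toward_legit):
    """P(election legitimate | authority declares it legitimate).

    prior_legit: observer's prior probability that the election was legitimate.
    accuracy_motive: 0..1, how strongly the authority is seen as truth-driven.
    bias_toward_legit: 0..1, how likely a non-truth-driven authority is seen to
        declare the election legitimate regardless of the facts.
    """
    # How likely the authority is to say "legitimate" in each state of the world
    p_say_if_legit = accuracy_motive * 0.95 + (1 - accuracy_motive) * bias_toward_legit
    p_say_if_not = accuracy_motive * 0.05 + (1 - accuracy_motive) * bias_toward_legit

    numerator = p_say_if_legit * prior_legit
    return numerator / (numerator + p_say_if_not * (1 - prior_legit))

# An uncertain observer who sees the authority as accuracy-driven moves a lot...
print(update_belief(prior_legit=0.4, accuracy_motive=0.9, bias_toward_legit=0.5))   # ~0.86
# ...while a confident skeptic who sees the authority as biased barely moves.
print(update_belief(prior_legit=0.05, accuracy_motive=0.2, bias_toward_legit=0.9))  # ~0.06
```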

Additionally, the model does not assume that any beliefs are necessarily incorrect or that any group of people is acting irrationally.

“The only assumption that we made is that there are two groups in the society that differ in their perspectives about a topic: One of them thinks that the election was stolen and the other group doesn’t,” Radkani says. “Other than that, these groups are similar. They share their beliefs about the authority — what the different motives of the authority are and how motivated the authority is by each of those motives.”

The researchers modeled more than 200 different scenarios in which an authority attempts to debunk a belief held by one group regarding the validity of an election outcome.

Each time they ran the model, the researchers altered the certainty levels of each group’s original beliefs, and they also varied the groups’ perceptions of the motivations of the authority. In some cases, groups believed the authority was motivated by promoting accuracy, and in others they did not. The researchers also altered the groups’ perceptions of whether the authority was biased toward a particular viewpoint, and how strongly the groups believed in those perceptions.

Building consensus

In each scenario, the researchers used the model to predict how each group would respond to a series of five statements made by an authority trying to convince them that the election had been legitimate. The researchers found that in most of the scenarios they looked at, beliefs remained polarized and in some cases became even further polarized. This polarization could also extend to new topics unrelated to the original context of the election, the researchers found.

However, under some circumstances, the debunking was successful, and beliefs converged on an accepted outcome. This was more likely to happen when people were initially more uncertain about their original beliefs.

“When people are very, very certain, they become hard to move. So, in essence, a lot of this authority debunking doesn’t matter,” Landau-Wells says. “However, there are a lot of people who are in this uncertain band. They have doubts, but they don’t have firm beliefs. One of the lessons from this paper is that we’re in a space where the model says you can affect people’s beliefs and move them towards true things.”

Another factor that can lead to belief convergence is if people believe that the authority is unbiased and highly motivated by accuracy. Even more persuasive is when an authority makes a claim that goes against their perceived bias — for instance, Republican governors stating that elections in their states had been fair even though the Democratic candidate won.

As the 2024 presidential election approaches, grassroots efforts have been made to train nonpartisan election observers who can vouch for whether an election was legitimate. These types of organizations may be well-positioned to help sway people who might have doubts about the election’s legitimacy, the researchers say.

“They’re trying to train people to be independent, unbiased, and committed to the truth of the outcome more than anything else. Those are the types of entities that you want. We want them to succeed in being seen as independent. We want them to succeed as being seen as truthful, because in this space of uncertainty, those are the voices that can move people toward an accurate outcome,” Landau-Wells says.

The research was funded, in part, by the Patrick J. McGovern Foundation and the Guggenheim Foundation.

A new method makes high-resolution imaging more accessible

A classical way to image nanoscale structures in cells is with high-powered, expensive super-resolution microscopes. As an alternative, MIT researchers have developed a way to expand tissue before imaging it — a technique that allows them to achieve nanoscale resolution with a conventional light microscope.

In the newest version of this technique, the researchers have made it possible to expand tissue 20-fold in a single step. This simple, inexpensive method could pave the way for nearly any biology lab to perform nanoscale imaging.

“This democratizes imaging,” says Laura Kiessling, the Novartis Professor of Chemistry at MIT and a member of the Broad Institute of MIT and Harvard and MIT’s Koch Institute for Integrative Cancer Research. “Without this method, if you want to see things with a high resolution, you have to use very expensive microscopes. What this new technique allows you to do is see things that you couldn’t normally see with standard microscopes. It drives down the cost of imaging because you can see nanoscale things without the need for a specialized facility.”

At the resolution achieved by this technique, which is around 20 nanometers, scientists can see organelles inside cells, as well as clusters of proteins.

“Twenty-fold expansion gets you into the realm that biological molecules operate in. The building blocks of life are nanoscale things: biomolecules, genes, and gene products,” says Edward Boyden, the Y. Eva Tan Professor in Neurotechnology at MIT; a professor of biological engineering, media arts and sciences, and brain and cognitive sciences; a Howard Hughes Medical Institute investigator; and a member of MIT’s McGovern Institute for Brain Research and Koch Institute for Integrative Cancer Research.

Boyden and Kiessling are the senior authors of the new study, which appears today in Nature Methods. MIT graduate student Shiwei Wang and Tay Won Shin PhD ’23 are the lead authors of the paper.

A single expansion

Boyden’s lab invented expansion microscopy in 2015. The technique requires embedding tissue into an absorbent polymer and breaking apart the proteins that normally hold tissue together. When water is added, the gel swells and pulls biomolecules apart from each other.

The original version of this technique, which expanded tissue about fourfold, allowed researchers to obtain images with a resolution of around 70 nanometers. In 2017, Boyden’s lab modified the process to include a second expansion step, achieving an overall 20-fold expansion. This enables even higher resolution, but the process is more complicated.

“We’ve developed several 20-fold expansion technologies in the past, but they require multiple expansion steps,” Boyden says. “If you could do that amount of expansion in a single step, that could simplify things quite a bit.”

With 20-fold expansion, researchers can get down to a resolution of about 20 nanometers using a conventional light microscope. This allows them to see cell structures like microtubules and mitochondria, as well as clusters of proteins.
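The resolution arithmetic is simple: the effective resolution is roughly the microscope’s native, diffraction-limited resolution divided by the physical expansion factor. The ~280-nanometer figure below is a typical textbook value for a conventional light microscope, not a number from the paper, and real-world results also depend on labeling and imaging conditions.

```python
# Back-of-the-envelope: effective resolution of expansion microscopy.
# The ~280 nm diffraction limit is a typical value for conventional light
# microscopy, assumed here for illustration; it is not taken from the paper.
diffraction_limit_nm = 280

for expansion_factor in (4, 20):
    effective_nm = diffraction_limit_nm / expansion_factor
    print(f"{expansion_factor}x expansion -> ~{effective_nm:.0f} nm effective resolution")
```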

In the new study, the researchers set out to perform 20-fold expansion with only a single step. This meant that they had to find a gel that was both extremely absorbent and mechanically stable, so that it wouldn’t fall apart when expanded 20-fold.

To achieve that, they used a gel assembled from N,N-dimethylacrylamide (DMAA) and sodium acrylate. Unlike previous expansion gels that rely on adding another molecule to form crosslinks between the polymer strands, this gel forms crosslinks spontaneously and exhibits strong mechanical properties. Such gel components previously had been used in expansion microscopy protocols, but the resulting gels could expand only about tenfold. The MIT team optimized the gel and the polymerization process to make the gel more robust, and to allow for 20-fold expansion.

To further stabilize the gel and enhance its reproducibility, the researchers removed oxygen from the polymer solution prior to gelation, which prevents side reactions that interfere with crosslinking. This step requires running nitrogen gas through the polymer solution, which replaces most of the oxygen in the system.

Once the gel is formed, select bonds in the proteins that hold the tissue together are broken and water is added to make the gel expand. After the expansion is performed, target proteins in tissue can be labeled and imaged.

“This approach may require more sample preparation compared to other super-resolution techniques, but it’s much simpler when it comes to the actual imaging process, especially for 3D imaging,” Shin says. “We document the step-by-step protocol in the manuscript so that readers can go through it easily.”

Imaging tiny structures

Using this technique, the researchers were able to image many tiny structures within brain cells, including structures called synaptic nanocolumns. These are clusters of proteins that are arranged in a specific way at neuronal synapses, allowing neurons to communicate with each other via secretion of neurotransmitters such as dopamine.

In studies of cancer cells, the researchers also imaged microtubules — hollow tubes that help give cells their structure and play important roles in cell division. They were also able to see mitochondria (organelles that generate energy) and even the organization of individual nuclear pore complexes (clusters of proteins that control access to the cell nucleus).

Wang is now using this technique to image carbohydrates known as glycans, which are found on cell surfaces and help control cells’ interactions with their environment. This method could also be used to image tumor cells, allowing scientists to glimpse how proteins are organized within those cells, much more easily than has previously been possible.

The researchers envision that any biology lab should be able to use this technique at low cost, since it relies on standard, off-the-shelf chemicals and common equipment such as confocal microscopes and glove bags, which most labs already have or can easily access.

“Our hope is that with this new technology, any conventional biology lab can use this protocol with their existing microscopes, allowing them to approach resolution that can only be achieved with very specialized and costly state-of-the-art microscopes,” Wang says.

The research was funded, in part, by the U.S. National Institutes of Health, an MIT Presidential Graduate Fellowship, U.S. National Science Foundation Graduate Research Fellowship grants, Open Philanthropy, Good Ventures, the Howard Hughes Medical Institute, Lisa Yang, Ashar Aziz, and the European Research Council.

Tiny magnetic discs offer remote brain stimulation without transgenes

Novel magnetic nanodiscs could provide a much less invasive way of stimulating parts of the brain, paving the way for stimulation therapies without implants or genetic modification, MIT researchers report.

The scientists envision that the tiny discs, which are about 250 nanometers across (about 1/500 the width of a human hair), would be injected directly into the desired location in the brain. From there, they could be activated at any time simply by applying a magnetic field outside the body. The new particles could quickly find applications in biomedical research, and eventually, after sufficient testing, might be applied to clinical uses.

The development of these nanoparticles is described in the journal Nature Nanotechnology, in a paper by Polina Anikeeva, a professor in MIT’s departments of Materials Science and Engineering and Brain and Cognitive Sciences, graduate student Ye Ji Kim, and 17 others at MIT and in Germany.

Deep brain stimulation (DBS) is a common clinical procedure that uses electrodes implanted in the target brain regions to treat symptoms of neurological and psychiatric conditions such as Parkinson’s disease and obsessive-compulsive disorder. Despite its efficacy, the surgical difficulty and clinical complications associated with DBS limit the number of cases where such an invasive procedure is warranted. The new nanodiscs could provide a much more benign way of achieving the same results.

Over the past decade, other implant-free methods of producing brain stimulation have been developed. However, these approaches were often limited by their spatial resolution or their ability to target deep regions. For the past decade, Anikeeva’s Bioelectronics group, as well as others in the field, has used magnetic nanomaterials to transduce remote magnetic signals into brain stimulation. However, those magnetic methods relied on genetic modifications and can’t be used in humans.

Since all nerve cells are sensitive to electrical signals, Kim, a graduate student in Anikeeva’s group, hypothesized that a magnetoelectric nanomaterial that can efficiently convert magnetization into electrical potential could offer a path toward remote magnetic brain stimulation. Creating a nanoscale magnetoelectric material was, however, a formidable challenge.

Kim synthesized novel magnetoelectric nanodiscs and collaborated with Noah Kent, a postdoc in Anikeeva’s lab with a background in physics who is a second author of the study, to understand the properties of these particles.

The structure of the new nanodiscs consists of a two-layer magnetic core and a piezoelectric shell. The magnetic core is magnetostrictive, which means it changes shape when magnetized. This deformation then induces strain in the piezoelectric shell, which produces a varying electrical polarization. Through the combination of the two effects, these composite particles can deliver electrical pulses to neurons when exposed to magnetic fields.
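Schematically, this conversion chain can be written with the standard strain-mediated magnetoelectric relations; the notation below is generic textbook shorthand, not a set of equations from the paper:

$$
\varepsilon = \lambda(H), \qquad P \propto d_{\mathrm{piezo}}\,\varepsilon, \qquad
\alpha_{\mathrm{ME}} \equiv \frac{\partial E}{\partial H} \;\propto\; \frac{\partial \lambda}{\partial H}\, d_{\mathrm{piezo}},
$$

where \( \lambda(H) \) is the field-dependent magnetostrictive strain of the core, \( d_{\mathrm{piezo}} \) is the effective piezoelectric coefficient of the shell, and \( \alpha_{\mathrm{ME}} \) is the overall magnetoelectric coupling that sets how large an electric response the particle produces for a given applied magnetic field. Because the coupling is a product of the two steps, a large gain in magnetostriction alone does not guarantee an equally large gain in the overall magnetoelectric output, a point the researchers return to below.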

One key to the discs’ effectiveness is their shape. Previous attempts to use magnetic nanoparticles had relied on spherical particles, in which the magnetoelectric effect was very weak, says Kim. The flat disc geometry introduces a shape anisotropy that enhances magnetostriction by over 1,000-fold, adds Kent.

The team first added their nanodiscs to cultured neurons, which allowed them to activate these cells on demand with short pulses of magnetic field. This stimulation did not require any genetic modification.

They then injected small droplets of the magnetoelectric nanodisc solution into specific regions of the brains of mice. Simply turning on a relatively weak electromagnet nearby then triggered the particles to release a tiny jolt of electricity in that brain region. The stimulation could be switched on and off remotely by switching the electromagnet. That electrical stimulation “had an impact on neuron activity and on behavior,” Kim says.

The team found that the magnetoelectric nanodiscs could stimulate a deep brain region, the ventral tegmental area, that is associated with feelings of reward.

The team also stimulated another brain area, the subthalamic nucleus, which is associated with motor control. “This is the region where electrodes typically get implanted to manage Parkinson’s disease,” Kim explains. The researchers were able to successfully demonstrate modulation of motor control through the particles. Specifically, by injecting nanodiscs into only one hemisphere, the researchers could induce rotations in healthy mice by applying a magnetic field.

The nanodiscs could trigger neuronal activity comparable to that evoked by conventional implanted electrodes delivering mild electrical stimulation. The authors achieved subsecond temporal precision for neural stimulation with their method, yet observed significantly reduced foreign-body responses compared to the electrodes, potentially allowing for even safer deep brain stimulation.

The multilayered chemical composition of the new nanodiscs, together with their physical shape and size, is what made precise stimulation possible.

While the researchers successfully increased the magnetostrictive effect, the second part of the process, converting the magnetic effect into an electrical output, still needs more work, Anikeeva says. While the magnetic response was a thousand times greater, the conversion to an electric impulse was only four times greater than with conventional spherical particles.

“This massive enhancement of a thousand times didn’t completely translate into the magnetoelectric enhancement,” says Kim. “That’s where a lot of the future work will be focused, on making sure that the thousand times amplification in magnetostriction can be converted into a thousand times amplification in the magnetoelectric coupling.”

What the team found, in terms of the way the particles’ shapes affects their magnetostriction, was quite unexpected. “It’s kind of a new thing that just appeared when we tried to figure out why these particles worked so well,” says Kent.

Anikeeva adds: “Yes, it’s a record-breaking particle, but it’s not as record-breaking as it could be.” That remains a topic for further work, but the team has ideas about how to make further progress.

While these nanodiscs could in principle already be applied to basic research using animal models, to translate them to clinical use in humans would require several more steps, including large-scale safety studies, “which is something academic researchers are not necessarily most well-positioned to do,” Anikeeva says. “When we find that these particles are really useful in a particular clinical context, then we imagine that there will be a pathway for them to undergo more rigorous large animal safety studies.”

The team included researchers affiliated with MIT’s departments of Materials Science and Engineering, Electrical Engineering and Computer Science, Chemistry, and Brain and Cognitive Sciences; the Research Laboratory of Electronics; the McGovern Institute for Brain Research; and the Koch Institute for Integrative Cancer Research; and from the Friedrich-Alexander University of Erlangen, Germany. The work was supported, in part, by the National Institutes of Health, the National Center for Complementary and Integrative Health, the National Institute for Neurological Disorders and Stroke, the McGovern Institute for Brain Research, and the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics in Neuroscience.

Scientists find neurons that process language on different timescales

Using functional magnetic resonance imaging (fMRI), neuroscientists have identified several regions of the brain that are responsible for processing language. However, discovering the specific functions of neurons in those regions has proven difficult because fMRI, which measures changes in blood flow, doesn’t have high enough resolution to reveal what small populations of neurons are doing.

Now, using a more precise technique that involves recording electrical activity directly from the brain, MIT neuroscientists have identified different clusters of neurons that appear to process different amounts of linguistic context. These “temporal windows” range from just one word up to about six words.

The temporal windows may reflect different functions for each population, the researchers say. Populations with shorter windows may analyze the meanings of individual words, while those with longer windows may interpret more complex meanings created when words are strung together.

“This is the first time we see clear heterogeneity within the language network,” says Evelina Fedorenko, an associate professor of neuroscience at MIT. “Across dozens of fMRI experiments, these brain areas all seem to do the same thing, but it’s a large, distributed network, so there’s got to be some structure there. This is the first clear demonstration that there is structure, but the different neural populations are spatially interleaved so we can’t see these distinctions with fMRI.”

Fedorenko, who is also a member of MIT’s McGovern Institute for Brain Research, is the senior author of the study, which appears today in Nature Human Behaviour. MIT postdoc Tamar Regev and Harvard University graduate student Colton Casto are the lead authors of the paper.

Temporal windows

Functional MRI, which has helped scientists learn a great deal about the roles of different parts of the brain, works by measuring changes in blood flow in the brain. These measurements act as a proxy of neural activity during a particular task. However, each “voxel,” or three-dimensional chunk, of an fMRI image represents hundreds of thousands to millions of neurons and sums up activity across about two seconds, so it can’t reveal fine-grained detail about what those neurons are doing.

One way to get more detailed information about neural function is to record electrical activity using electrodes implanted in the brain. These data are hard to come by because this procedure is done only in patients who are already undergoing surgery for a neurological condition such as severe epilepsy.

“It can take a few years to get enough data for a task because these patients are relatively rare, and in a given patient electrodes are implanted in idiosyncratic locations based on clinical needs, so it takes a while to assemble a dataset with sufficient coverage of some target part of the cortex. But these data, of course, are the best kind of data we can get from human brains: You know exactly where you are spatially and you have very fine-grained temporal information,” Fedorenko says.

In a 2016 study, Fedorenko reported using this approach to study the language processing regions of six people. Electrical activity was recorded while the participants read four different types of language stimuli: complete sentences, lists of words, lists of non-words, and “jabberwocky” sentences — sentences that have grammatical structure but are made of nonsense words.

Those data showed that in some neural populations in language processing regions, activity would gradually build up over a period of several words when the participants were reading sentences. However, this did not happen when they read lists of words, lists of non-words, or jabberwocky sentences.

In the new study, Regev and Casto went back to those data and analyzed the temporal response profiles in greater detail. In their original dataset, they had recordings of electrical activity from 177 language-responsive electrodes across the six patients. Conservative estimates suggest that each electrode represents the averaged activity of about 200,000 neurons. They also obtained new data from a second set of 16 patients, which included recordings from another 362 language-responsive electrodes.

When the researchers analyzed these data, they found that in some of the neural populations, activity would fluctuate up and down with each word. In others, activity would build up over multiple words before falling again, and in still others it would build up steadily over even longer spans of words.

By comparing their data with the predictions of a computational model they designed to process stimuli with different temporal windows, the researchers found that neural populations from language processing areas could be divided into three clusters. These clusters represent temporal windows of either one, four, or six words.
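As a rough illustration of that kind of analysis (a hypothetical sketch, not the authors’ code: the moving-average integration model, the correlation features, and the use of k-means clustering are all assumptions made for illustration), one could compare each electrode’s word-by-word response against predictions from candidate windows of one, four, and six words, then cluster the electrodes by how well they match each candidate:

# Hypothetical sketch only: the integration model, correlation features,
# and k-means step are illustrative assumptions, not the study's methods.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_words, n_electrodes = 40, 60

def integrate(drive, k):
    """Predicted response of a population that averages the word-level
    drive over its most recent k words (a simple 'temporal window')."""
    padded = np.concatenate([np.full(k - 1, drive[0]), drive])
    return np.convolve(padded, np.ones(k) / k, mode="valid")

# Word-by-word drive evoked by the stimulus (illustrative random values).
drive = rng.standard_normal(n_words)

# Candidate temporal windows reported in the study: one, four, or six words.
predictions = {k: integrate(drive, k) for k in (1, 4, 6)}

# Simulated electrode recordings: each follows one window size, plus noise.
true_windows = rng.choice([1, 4, 6], size=n_electrodes)
recordings = np.stack([predictions[k] + 0.3 * rng.standard_normal(n_words)
                       for k in true_windows])

# Score each electrode against each candidate prediction by correlation...
features = np.stack([[np.corrcoef(rec, pred)[0, 1]
                      for pred in predictions.values()]
                     for rec in recordings])

# ...then group electrodes into three clusters based on those scores.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
print(labels)

The published analysis is, of course, more careful than this; the sketch only conveys the logic of scoring measured response profiles against window-specific predictions and grouping electrodes by those scores.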

“It really looks like these neural populations integrate information across different timescales along the sentence,” Regev says.

Processing words and meaning

These differences in temporal window size would have been impossible to see using fMRI, the researchers say.

“At the resolution of fMRI, we don’t see much heterogeneity within language-responsive regions. If you localize in individual participants the voxels in their brain that are most responsive to language, you find that their responses to sentences, word lists, jabberwocky sentences and non-word lists are highly similar,” Casto says.

The researchers were also able to determine the anatomical locations where these clusters were found. Neural populations with the shortest temporal window were found predominantly in the posterior temporal lobe, though some were also found in the frontal or anterior temporal lobes. Neural populations from the two other clusters, with longer temporal windows, were spread more evenly throughout the temporal and frontal lobes.

Fedorenko’s lab now plans to study whether these timescales correspond to different functions. One possibility is that populations with the shortest timescale process the meanings of individual words, while those with longer timescales interpret the meanings that arise when multiple words are combined.

“We already know that in the language network, there is sensitivity to how words go together and to the meanings of individual words,” Regev says. “So that could potentially map to what we’re finding, where the longest timescale is sensitive to things like syntax or relationships between words, and maybe the shortest timescale is more sensitive to features of single words or parts of them.”

The research was funded by the Zuckerman-CHE STEM Leadership Program, the Poitras Center for Psychiatric Disorders Research, the Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, the U.S. National Institutes of Health, an American Epilepsy Society Research and Training Fellowship, the McDonnell Center for Systems Neuroscience, Fondazione Neurone, the McGovern Institute, MIT’s Department of Brain and Cognitive Sciences, and the Simons Center for the Social Brain.

Three MIT professors named 2024 Vannevar Bush Fellows

The U.S. Department of Defense (DoD) has announced that three MIT professors are among the members of the 2024 class of the Vannevar Bush Faculty Fellowship (VBFF). The fellowship is the DoD’s flagship single-investigator award for research, inviting the nation’s most talented researchers to pursue ambitious ideas that defy conventional boundaries.

Domitilla Del Vecchio, professor of mechanical engineering and the Grover M. Hermann Professor in Health Sciences & Technology; Mehrdad Jazayeri, professor of brain and cognitive sciences and an investigator at the McGovern Institute for Brain Research; and Themistoklis Sapsis, the William I. Koch Professor of Mechanical Engineering and director of the Center for Ocean Engineering, are among the 11 university scientists and engineers chosen for this year’s fellowship class. They join an elite group of approximately 50 fellows from previous class years.

“The Vannevar Bush Faculty Fellowship is more than a prestigious program,” said Bindu Nair, director of the Basic Research Office in the Office of the Under Secretary of Defense for Research and Engineering, in a press release. “It’s a beacon for tenured faculty embarking on groundbreaking ‘blue sky’ research.”

Research topics

Each fellow receives up to $3 million over a five-year term to pursue cutting-edge projects. Research topics in this year’s class span a range of disciplines, including materials science, cognitive neuroscience, quantum information sciences, and applied mathematics. While pursuing individual research endeavors, fellows also leverage the unique opportunity to collaborate directly with DoD laboratories, fostering a valuable exchange of knowledge and expertise.

Del Vecchio, whose research interests include control and dynamical systems theory and systems and synthetic biology, will investigate the molecular underpinnings of analog epigenetic cell memory, then use those findings to “establish unprecedented engineering capabilities for creating self-organizing and reconfigurable multicellular systems with graded cell fates.”

“With this fellowship, we will be able to explore the limits to which we can leverage analog memory to create multicellular systems that autonomously organize in permanent, but reprogrammable, gradients of cell fates and can be used for creating next-generation tissues and organoids with dramatically increased sophistication,” says Del Vecchio, who adds that she is honored to have been selected.

Jazayeri wants to understand how the brain gives rise to cognitive and emotional intelligence. The engineering systems being built today lack the hallmarks of human intelligence, he explains: they neither learn quickly nor generalize their knowledge flexibly, and they don’t feel emotions or have emotional intelligence.

Jazayeri plans to use the VBFF award to integrate ideas from cognitive science, neuroscience, and machine learning with experimental data in humans, animals, and computer models to develop a computational understanding of cognitive and emotional intelligence.

“I’m honored and humbled to be selected and excited to tackle some of the most challenging questions at the intersection of neuroscience and AI,” he says.

“I am humbled to be included in such a select group,” echoes Sapsis, who will use the grant to research new algorithms and theory designed for the efficient computation of extreme event probabilities and precursors, and for the design of mitigation strategies in complex dynamical systems.

Examples of Sapsis’s work include risk quantification for extreme events in human-made systems; climate events, such as heat waves, and their effect on interconnected systems like food supply chains; and “mission-critical algorithmic problems such as search and path planning operations for extreme anomalies,” he explains.

VBFF impact

Named for Vannevar Bush PhD 1916, an influential inventor, engineer, former professor, and dean of the School of Engineering at MIT, the highly competitive fellowship, formerly known as the National Security Science and Engineering Faculty Fellowship, aims to advance transformative, university-based fundamental research. Bush served as the director of the U.S. Office of Scientific Research and Development, and organized and led American science and technology during World War II.

“The outcomes of VBFF-funded research have transformed entire disciplines, birthed novel fields, and challenged established theories and perspectives,” said Nair. “By contributing their insights to DoD leadership and engaging with the broader national security community, they enrich collective understanding and help the United States leap ahead in global technology competition.”