What does the future hold for generative AI?

Speaking at the “Generative AI: Shaping the Future” symposium on Nov. 28, the kickoff event of MIT’s Generative AI Week, keynote speaker and iRobot co-founder Rodney Brooks warned attendees against uncritically overestimating the capabilities of this emerging technology, which underpins increasingly powerful tools like OpenAI’s ChatGPT and Google’s Bard.

“Hype leads to hubris, and hubris leads to conceit, and conceit leads to failure,” cautioned Brooks, who is also a professor emeritus at MIT, a former director of the Computer Science and Artificial Intelligence Laboratory (CSAIL), and founder of Robust.AI.

“No one technology has ever surpassed everything else,” he added.

The symposium, which drew hundreds of attendees from academia and industry to the Institute’s Kresge Auditorium, was laced with messages of hope about the opportunities generative AI offers for making the world a better place, including through art and creativity, interspersed with cautionary tales about what could go wrong if these AI tools are not developed responsibly.

Generative AI describes machine-learning models that learn to generate new material resembling the data they were trained on. These models have exhibited some remarkable capabilities, such as producing human-like creative writing, translating languages, generating functional computer code, and crafting realistic images from text prompts.

In her opening remarks to launch the symposium, MIT President Sally Kornbluth highlighted several projects faculty and students have undertaken to use generative AI to make a positive impact in the world. For example, the work of the Axim Collaborative, an online education initiative launched by MIT and Harvard, includes exploring the educational aspects of generative AI to help underserved students.

The Institute also recently announced seed grants for 27 interdisciplinary faculty research projects centered on how AI will transform people’s lives across society.

In hosting Generative AI Week, MIT hopes to not only showcase this type of innovation, but also generate “collaborative collisions” among attendees, Kornbluth said.

Collaboration involving academics, policymakers, and industry will be critical if we are to safely integrate a rapidly evolving technology like generative AI in ways that are humane and help humans solve problems, she told the audience.

“I honestly cannot think of a challenge more closely aligned with MIT’s mission. It is a profound responsibility, but I have every confidence that we can face it, if we face it head on and if we face it as a community,” she said.

While generative AI holds the potential to help solve some of the planet’s most pressing problems, the emergence of these powerful machine learning models has blurred the distinction between science fiction and reality, said CSAIL Director Daniela Rus in her opening remarks. It is no longer a question of whether we can make machines that produce new content, she said, but how we can use these tools to enhance businesses and ensure sustainability. 

“Today, we will discuss the possibility of a future where generative AI does not just exist as a technological marvel, but stands as a source of hope and a force for good,” said Rus, who is also the Andrew and Erna Viterbi Professor in the Department of Electrical Engineering and Computer Science.

But before the discussion dove deeply into the capabilities of generative AI, attendees were first asked to ponder their humanity, as MIT Professor Joshua Bennett read an original poem.

Bennett, a professor in the MIT Literature Section and Distinguished Chair of the Humanities, was asked to write a poem about what it means to be human, and drew inspiration from his daughter, who was born three weeks ago.

The poem told of his experiences as a boy watching Star Trek with his father and touched on the importance of passing traditions down to the next generation.

In his keynote remarks, Brooks set out to unpack some of the deep, scientific questions surrounding generative AI, as well as explore what the technology can tell us about ourselves.

To begin, he sought to dispel some of the mystery swirling around generative AI tools like ChatGPT by explaining the basics of how such large language models work. ChatGPT, for instance, generates text one word at a time by determining what the next word should be in the context of what it has already written. While a human might write a story by thinking about entire phrases, ChatGPT only focuses on the next word, Brooks explained.

ChatGPT is built on GPT-3.5, a machine-learning model with 175 billion parameters that was exposed to billions of pages of text from the web during training. (The newest iteration, GPT-4, is even larger.) The model learns correlations between words in this massive corpus of text and uses that knowledge to propose which word might come next when given a prompt.
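
To make the idea concrete, here is a minimal sketch of next-word generation in Python, assuming a toy hand-written probability table. A real model like GPT-3.5 computes this distribution with a neural network conditioned on everything written so far; the table, words, and probabilities below are invented purely for illustration.

```python
import random

# Hypothetical next-word distributions; a real LLM computes these with a
# neural network conditioned on the full text generated so far.
NEXT_WORD = {
    "the":    {"robot": 0.5, "future": 0.3, "lab": 0.2},
    "robot":  {"moves": 0.6, "writes": 0.4},
    "future": {"is": 1.0},
    "is":     {"here": 1.0},
    "writes": {"poetry": 1.0},
    "moves":  {"quickly": 1.0},
}

def generate(prompt: str, max_words: int = 5) -> str:
    words = prompt.lower().split()
    for _ in range(max_words):
        dist = NEXT_WORD.get(words[-1])
        if dist is None:          # no known continuation; stop
            break
        choices, weights = zip(*dist.items())
        # sample one word at a time, weighted by its probability
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the robot"))  # e.g. "the robot writes poetry"
```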

The model has demonstrated some incredible capabilities, such as the ability to write a sonnet about robots in the style of Shakespeare’s famous Sonnet 18. During his talk, Brooks showcased the sonnet he asked ChatGPT to write side-by-side with his own sonnet.

But while researchers still don’t fully understand exactly how these models work, Brooks assured the audience that generative AI’s seemingly incredible capabilities are not magic, and that their impressive performance does not mean these models can do everything.

His biggest fears about generative AI don’t revolve around models that could someday surpass human intelligence. Rather, he is most worried about researchers who may throw away decades of excellent work that was nearing a breakthrough, just to jump on shiny new advancements in generative AI; venture capital firms that blindly swarm toward technologies that can yield the highest margins; or the possibility that a whole generation of engineers will forget about other forms of software and AI.

At the end of the day, those who believe generative AI can solve the world’s problems and those who believe it will only generate new problems have at least one thing in common: Both groups tend to overestimate the technology, he said.

“What is the conceit with generative AI? The conceit is that it is somehow going to lead to artificial general intelligence. By itself, it is not,” Brooks said.

Following Brooks’ presentation, a group of MIT faculty spoke about their work using generative AI and participated in a panel discussion about future advances, important but underexplored research topics, and the challenges of AI regulation and policy.

The panel consisted of Jacob Andreas, an associate professor in the MIT Department of Electrical Engineering and Computer Science (EECS) and a member of CSAIL; Antonio Torralba, the Delta Electronics Professor of EECS and a member of CSAIL; Ev Fedorenko, an associate professor of brain and cognitive sciences and an investigator at the McGovern Institute for Brain Research at MIT; and Armando Solar-Lezama, a Distinguished Professor of Computing and associate director of CSAIL. It was moderated by William T. Freeman, the Thomas and Gerd Perkins Professor of EECS and a member of CSAIL.

The panelists discussed several potential future research directions around generative AI, including the possibility of integrating perceptual systems, drawing on human senses like touch and smell, rather than focusing primarily on language and images. The researchers also spoke about the importance of engaging with policymakers and the public to ensure generative AI tools are produced and deployed responsibly.

“One of the big risks with generative AI today is the risk of digital snake oil. There is a big risk of a lot of products going out that claim to do miraculous things but in the long run could be very harmful,” Solar-Lezama said.

The morning session concluded with an excerpt from the 1925 science fiction novel “Metropolis,” read by senior Joy Ma, a physics and theater arts major, followed by a roundtable discussion on the future of generative AI. The discussion included Joshua Tenenbaum, a professor in the Department of Brain and Cognitive Sciences and a member of CSAIL; Dina Katabi, the Thuan and Nicole Pham Professor in EECS and a principal investigator in CSAIL and the MIT Jameel Clinic; and Max Tegmark, a professor of physics. It was moderated by Daniela Rus.

One focus of the discussion was the possibility of developing generative AI models that can go beyond what we can do as humans, such as tools that can sense someone’s emotions by using electromagnetic signals to understand how a person’s breathing and heart rate are changing.

But one key to integrating AI like this into the real world safely is to ensure that we can trust it, Tegmark said. If we know an AI tool will meet the specifications we insist on, then “we no longer have to be afraid of building really powerful systems that go out and do things for us in the world,” he said.

A new way to see the activity inside a living cell

Living cells are bombarded with many kinds of incoming molecular signals that influence their behavior. Being able to measure those signals, and how cells respond to them through downstream molecular signaling networks, could help scientists learn much more about how cells work, including what happens as they age or become diseased.

Right now, this kind of comprehensive study is not possible because current techniques for imaging cells are limited to just a handful of different molecule types within a cell at one time. However, MIT researchers have developed an alternative method that allows them to observe up to seven different molecules at a time, and potentially even more than that.

“There are many examples in biology where an event triggers a long downstream cascade of events, which then causes a specific cellular function,” says Edward Boyden, the Y. Eva Tan Professor in Neurotechnology. “How does that occur? It’s arguably one of the fundamental problems of biology, and so we wondered, could you simply watch it happen?”


The new approach makes use of green or red fluorescent molecules that flicker on and off at different rates. By imaging a cell over several seconds, minutes, or hours, and then extracting each of the fluorescent signals using a computational algorithm, the amount of each target protein can be tracked as it changes over time.

Boyden, who is also a professor of biological engineering and of brain and cognitive sciences at MIT, a Howard Hughes Medical Institute investigator, and a member of MIT’s McGovern Institute for Brain Research and Koch Institute for Integrative Cancer Research, as well as the co-director of the K. Lisa Yang Center for Bionics, is the senior author of the study, which appears today in Cell. MIT postdoc Yong Qian is the lead author of the paper.

Fluorescent signals

Labeling molecules inside cells with fluorescent proteins has allowed researchers to learn a great deal about the functions of many cellular molecules. This type of study is often done with green fluorescent protein (GFP), which was first deployed for imaging in the 1990s. Since then, several fluorescent proteins that glow in other colors have been developed for experimental use.

However, a typical light microscope can only distinguish two or three of these colors, allowing researchers only a tiny glimpse of the overall activity that is happening inside a cell. If they could track a greater number of labeled molecules, researchers could measure a brain cell’s response to different neurotransmitters during learning, for example, or investigate the signals that prompt a cancer cell to metastasize.

“Ideally, you would be able to watch the signals in a cell as they fluctuate in real time, and then you could understand how they relate to each other. That would tell you how the cell computes,” Boyden says. “The problem is that you can’t watch very many things at the same time.”

In 2020, Boyden’s lab developed a way to simultaneously image up to five different molecules within a cell, by targeting glowing reporters to distinct locations inside the cell. This approach, known as “spatial multiplexing,” allows researchers to distinguish signals for different molecules even though they may all be fluorescing the same color.

In the new study, the researchers took a different approach: Instead of distinguishing signals based on their physical location, they created fluorescent signals that vary over time. The technique relies on “switchable fluorophores” — fluorescent proteins that turn on and off at a specific rate. For this study, Boyden and his group members identified four green switchable fluorophores, and then engineered two more, all of which turn on and off at different rates. They also identified two red fluorescent proteins that switch at different rates, and engineered one additional red fluorophore.

Image caption: Using four switchable fluorophores, MIT researchers were able to label and image four different kinases inside these cells (top four rows); in the bottom row, the cell nuclei are labeled in blue. (Image courtesy of the researchers.)

Each of these switchable fluorophores can be used to label a different type of molecule within a living cell, such as an enzyme, a signaling protein, or part of the cell cytoskeleton. After imaging the cell for several minutes, hours, or even days, the researchers use a computational algorithm to pick out the specific signal from each fluorophore, analogous to how the human ear can pick out different frequencies of sound.

“In a symphony orchestra, you have high-pitched instruments, like the flute, and low-pitched instruments, like a tuba. And in the middle are instruments like the trumpet. They all have different sounds, and our ear sorts them out,” Boyden says.

The mathematical technique that the researchers used to analyze the fluorophore signals is known as linear unmixing. This method extracts the contribution of each fluorophore from the combined signal, much as a Fourier transform separates a piece of music into its constituent pitches.
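
As a rough illustration of the idea, the sketch below performs linear unmixing on synthetic data, assuming each fluorophore’s on/off flicker pattern over time (its temporal signature) is known in advance. The signatures, noise level, and protein amounts are invented stand-ins, not values from the study.

```python
import numpy as np

T = 100
t = np.arange(T)
# Three hypothetical fluorophores switching at different rates
# (square-wave temporal signatures), one per column:
signatures = np.stack(
    [(np.sin(2 * np.pi * f * t / T) > 0).astype(float) for f in (3, 7, 13)],
    axis=1,
)  # shape (T, 3)

true_levels = np.array([2.0, 0.5, 1.2])   # per-pixel protein abundances
rng = np.random.default_rng(0)
# The measured trace is a weighted sum of the signatures plus noise:
observed = signatures @ true_levels + 0.05 * rng.standard_normal(T)

# Linear unmixing: a least-squares fit of the known signatures to the
# measured trace recovers each fluorophore's contribution.
estimated, *_ = np.linalg.lstsq(signatures, observed, rcond=None)
print(np.round(estimated, 2))  # ≈ [2.0, 0.5, 1.2]
```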

Once this analysis is complete, the researchers can see when and where each of the fluorescently labeled molecules were found in the cell during the entire imaging period. The imaging itself can be done with a simple light microscope, with no specialized equipment required.

Biological phenomena

In this study, the researchers demonstrated their approach by labeling six different molecules involved in the cell division cycle, in mammalian cells. This allowed them to identify patterns in how the levels of enzymes called cyclin-dependent kinases change as a cell progresses through the cell cycle.

The researchers also showed that they could label other types of kinases, which are involved in nearly every aspect of cell signaling, as well as cell structures and organelles such as the cytoskeleton and mitochondria. In addition to their experiments using mammalian cells grown in a lab dish, the researchers showed that this technique could work in the brains of zebrafish larvae.

This method could be useful for observing how cells respond to any kind of input, such as nutrients, immune system factors, hormones, or neurotransmitters, according to the researchers. It could also be used to study how cells respond to changes in gene expression or genetic mutations. All of these factors play important roles in biological phenomena such as growth, aging, cancer, neurodegeneration, and memory formation.

“You could consider all of these phenomena to represent a general class of biological problem, where some short-term event — like eating a nutrient, learning something, or getting an infection — generates a long-term change,” Boyden says.

In addition to pursuing those types of studies, Boyden’s lab is also working on expanding the repertoire of switchable fluorophores so that they can study even more signals within a cell. They also hope to adapt the system so that it could be used in mouse models.

The research was funded by an Alana Fellowship, K. Lisa Yang, John Doerr, Jed McCaleb, James Fickel, Ashar Aziz, the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics at MIT, the Howard Hughes Medical Institute, and the National Institutes of Health.

The brain may learn about the world the same way some computational models do

To make our way through the world, our brain must develop an intuitive understanding of the physical world around us, which we then use to interpret sensory information coming into the brain.

How does the brain develop that intuitive understanding? Many scientists believe that it may use a process similar to what’s known as “self-supervised learning.” This type of machine learning, originally developed as a way to create more efficient models for computer vision, allows computational models to learn about visual scenes based solely on the similarities and differences between them, with no labels or other information.

A pair of studies from researchers at the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center at MIT offers new evidence supporting this hypothesis. The researchers found that when they trained models known as neural networks using a particular type of self-supervised learning, the resulting models generated activity patterns very similar to those seen in the brains of animals that were performing the same tasks as the models.

The findings suggest that these models are able to learn representations of the physical world that they can use to make accurate predictions about what will happen in that world, and that the mammalian brain may be using the same strategy, the researchers say.

“The theme of our work is that AI designed to help build better robots ends up also being a framework to better understand the brain more generally,” says Aran Nayebi, a postdoc in the ICoN Center. “We can’t say if it’s the whole brain yet, but across scales and disparate brain areas, our results seem to be suggestive of an organizing principle.”

Nayebi is the lead author of one of the studies, co-authored with Rishi Rajalingham, a former MIT postdoc now at Meta Reality Labs, and senior authors Mehrdad Jazayeri, an associate professor of brain and cognitive sciences and a member of the McGovern Institute for Brain Research; and Robert Yang, an assistant professor of brain and cognitive sciences and an associate member of the McGovern Institute. Ila Fiete, director of the ICoN Center, a professor of brain and cognitive sciences, and an associate member of the McGovern Institute, is the senior author of the other study, which was co-led by Mikail Khona, an MIT graduate student, and Rylan Schaeffer, a former senior research associate at MIT.

Both studies will be presented at the 2023 Conference on Neural Information Processing Systems (NeurIPS) in December.

Modeling the physical world

Early models of computer vision mainly relied on supervised learning. Using this approach, models are trained to classify images that are each labeled with a name — cat, car, etc. The resulting models work well, but this type of training requires a great deal of human-labeled data.

To create a more efficient alternative, in recent years researchers have turned to models built through a technique known as contrastive self-supervised learning. This type of learning allows an algorithm to learn to classify objects based on how similar they are to each other, with no external labels provided.
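
A minimal sketch of one common contrastive objective (an InfoNCE-style loss) appears below. The embeddings are random stand-ins for the outputs of a real network, and this illustrates the general technique rather than the specific objective used in the studies.

```python
import numpy as np

def info_nce(view1: np.ndarray, view2: np.ndarray, temperature: float = 0.1) -> float:
    """Contrastive loss: row i of view1 should match row i of view2
    (two views of the same item) and repel every other row."""
    a = view1 / np.linalg.norm(view1, axis=1, keepdims=True)
    b = view2 / np.linalg.norm(view2, axis=1, keepdims=True)
    logits = (a @ b.T) / temperature               # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))     # penalize mismatched pairs

rng = np.random.default_rng(1)
z = rng.standard_normal((8, 16))                   # embeddings of 8 items
z_aug = z + 0.1 * rng.standard_normal((8, 16))     # augmented views of the same items
print(info_nce(z, z_aug))                          # low loss: matching views align
```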

“This is a very powerful method because you can now leverage very large modern data sets, especially videos, and really unlock their potential,” Nayebi says. “A lot of the modern AI that you see now, especially in the last couple years with ChatGPT and GPT-4, is a result of training a self-supervised objective function on a large-scale dataset to obtain a very flexible representation.”

These types of models, also called neural networks, consist of thousands or millions of processing units, or nodes, connected to each other. Each node has connections of varying strengths to other nodes in the network. As the network analyzes huge amounts of data, the strengths of those connections change as the network learns to perform the desired task.

As the model performs a particular task, the activity patterns of different units within the network can be measured. Each unit’s activity can be represented as a firing pattern, similar to the firing patterns of neurons in the brain. Previous work from Nayebi and others has shown that self-supervised models of vision generate activity similar to that seen in the visual processing system of mammalian brains.

In both of the new NeurIPS studies, the researchers set out to explore whether self-supervised computational models of other cognitive functions might also show similarities to the mammalian brain. In the study led by Nayebi, the researchers trained self-supervised models to predict the future state of their environment across hundreds of thousands of naturalistic videos depicting everyday scenarios.

“For the last decade or so, the dominant method to build neural network models in cognitive neuroscience is to train these networks on individual cognitive tasks. But models trained this way rarely generalize to other tasks,” Yang says. “Here we test whether we can build models for some aspect of cognition by first training on naturalistic data using self-supervised learning, then evaluating in lab settings.”

Once the model was trained, the researchers had it generalize to a task they call “Mental-Pong.” This is similar to the video game Pong, where a player moves a paddle to hit a ball traveling across the screen. In the Mental-Pong version, the ball disappears shortly before hitting the paddle, so the player has to estimate its trajectory in order to hit the ball.

The researchers found that the model was able to track the hidden ball’s trajectory with accuracy similar to that of neurons in the mammalian brain, which a previous study by Rajalingham and Jazayeri had shown simulate the ball’s trajectory, a cognitive phenomenon known as “mental simulation.” Furthermore, the neural activation patterns seen within the model were similar to those seen in the brains of animals as they played the game, specifically in a part of the brain called the dorsomedial frontal cortex. No other class of computational model has been able to match the biological data as closely as this one, the researchers say.

“There are many efforts in the machine learning community to create artificial intelligence,” Jazayeri says. “The relevance of these models to neurobiology hinges on their ability to additionally capture the inner workings of the brain. The fact that Aran’s model predicts neural data is really important as it suggests that we may be getting closer to building artificial systems that emulate natural intelligence.”

Navigating the world

The study led by Khona, Schaeffer, and Fiete focused on a type of specialized neurons known as grid cells. These cells, located in the entorhinal cortex, help animals to navigate, working together with place cells located in the hippocampus.

While place cells fire whenever an animal is in a specific location, grid cells fire only when the animal is at one of the vertices of a triangular lattice. Groups of grid cells create overlapping lattices of different sizes, which allows them to encode a large number of positions using a relatively small number of cells.
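
A one-dimensional simplification can illustrate why a few modules with different periods encode many positions. The periods below are hypothetical (real grid cells tile two-dimensional space with triangular lattices of various scales), but the counting argument is the same.

```python
# With lattice periods of 3, 4, and 5 units, the tuple of phases
# (position mod period) uniquely labels every position up to the
# product of the periods: 3 * 4 * 5 = 60 locations from three modules.
periods = (3, 4, 5)
codes = {tuple(pos % p for p in periods) for pos in range(60)}
print(len(codes))  # 60 distinct codes from just three small modules
```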

In recent studies, researchers have trained supervised neural networks to mimic grid cell function by predicting an animal’s next location based on its starting point and velocity, a task known as path integration. However, these models hinged on access to privileged information about absolute space at all times — information that the animal does not have.
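
For concreteness, here is a minimal sketch of the path-integration computation itself: dead-reckoning position from a start point plus a stream of velocity inputs. The trajectory is synthetic, and a trained network would receive only the velocities.

```python
import numpy as np

rng = np.random.default_rng(2)
start = np.zeros(2)                                  # starting location
velocities = 0.05 * rng.standard_normal((500, 2))    # 2-D velocity per time step
# Path integration: accumulate velocities to estimate current position.
positions = start + np.cumsum(velocities, axis=0)
print(positions[-1])  # estimated location after 500 time steps
```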

Inspired by the striking coding properties of the multiperiodic grid-cell code for space, the MIT team trained a contrastive self-supervised model to perform this same path integration task while representing space efficiently as it did so. For the training data, they used sequences of velocity inputs. The model learned to distinguish positions based on similarity: nearby positions generated similar codes, while positions farther apart generated more distinct codes.

“It’s similar to training models on images, where if two images are both heads of cats, their codes should be similar, but if one is the head of a cat and one is a truck, then you want their codes to repel,” Khona says. “We’re taking that same idea but applying it to spatial trajectories.”

Once the model was trained, the researchers found that the activation patterns of the nodes within the model formed several lattice patterns with different periods, very similar to those formed by grid cells in the brain.

“What excites me about this work is that it makes connections between mathematical work on the striking information-theoretic properties of the grid cell code and the computation of path integration,” Fiete says. “While the mathematical work was analytic — what properties does the grid cell code possess? — the approach of optimizing coding efficiency through self-supervised learning and obtaining grid-like tuning is synthetic: It shows what properties might be necessary and sufficient to explain why the brain has grid cells.”

The research was funded by the K. Lisa Yang ICoN Center, the National Institutes of Health, the Simons Foundation, the McKnight Foundation, the McGovern Institute, and the Helen Hay Whitney Foundation.

Soft optical fibers block pain while moving and stretching with the body

Scientists have a new tool to precisely illuminate the roots of nerve pain.

Engineers at MIT have developed soft and implantable fibers that can deliver light to major nerves through the body. When these nerves are genetically manipulated to respond to light, the fibers can send pulses of light to the nerves to inhibit pain. The optical fibers are flexible and stretch with the body.

The new fibers are meant as an experimental tool that can be used by scientists to explore the causes and potential treatments for peripheral nerve disorders in animal models. Peripheral nerve pain can occur when nerves outside the brain and spinal cord are damaged, resulting in tingling, numbness, and pain in affected limbs. Peripheral neuropathy is estimated to affect more than 20 million people in the United States.

“Current devices used to study nerve disorders are made of stiff materials that constrain movement, so that we can’t really study spinal cord injury and recovery if pain is involved,” says Siyuan Rao, assistant professor of biomedical engineering at the University of Massachusetts at Amherst, who carried out part of the work as a postdoc at MIT. “Our fibers can adapt to natural motion and do their work while not limiting the motion of the subject. That can give us more precise information.”

“Now, people have a tool to study the diseases related to the peripheral nervous system, in very dynamic, natural, and unconstrained conditions,” adds Xinyue Liu PhD ’22, who is now an assistant professor at Michigan State University (MSU).

Details of their team’s new fibers are reported today in a study appearing in Nature Methods. Rao’s and Liu’s MIT co-authors include Atharva Sahasrabudhe, a graduate student in chemistry; Xuanhe Zhao, professor of mechanical engineering and civil and environmental engineering; and Polina Anikeeva, professor of materials science and engineering, along with others at MSU, UMass-Amherst, Harvard Medical School, and the National Institutes of Health.

Beyond the brain

The new study grew out of the team’s desire to expand the use of optogenetics beyond the brain. Optogenetics is a technique by which nerves are genetically engineered to respond to light. Exposure to that light can then either activate or inhibit the nerve, which can give scientists information about how the nerve works and interacts with its surroundings.

Neuroscientists have applied optogenetics in animals to precisely trace the neural pathways underlying a range of brain disorders, including addiction, Parkinson’s disease, and mood and sleep disorders — information that has led to targeted therapies for these conditions.

To date, optogenetics has been primarily employed in the brain, an area that lacks pain receptors, which allows for the relatively painless implantation of rigid devices. However, the rigid devices can still damage neural tissues. The MIT team wondered whether the technique could be expanded to nerves outside the brain. Just as with the brain and spinal cord, nerves in the peripheral system can experience a range of impairment, including sciatica, motor neuron disease, and general numbness and pain.

Optogenetics could help neuroscientists identify specific causes of peripheral nerve conditions as well as test therapies to alleviate them. But the main hurdle to implementing the technique beyond the brain is motion. Peripheral nerves experience constant pushing and pulling from the surrounding muscles and tissues. If rigid silicon devices were used in the periphery, they would constrain an animal’s natural movement and potentially cause tissue damage.

Crystals and light

The researchers looked to develop an alternative that could work and move with the body. Their new design is a soft, stretchable, transparent fiber made from hydrogel — a rubbery, biocompatible mix of polymers and water, the ratio of which they tuned to create tiny, nanoscale crystals of polymers scattered throughout a more Jell-O-like solution.

The fiber comprises two layers, a core and an outer shell or “cladding.” The team mixed the solutions of each layer to generate a specific crystal arrangement. This arrangement gave each layer a different refractive index; the mismatch keeps light traveling through the fiber confined to the core rather than escaping or scattering away.
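
A back-of-the-envelope sketch shows why the index mismatch matters. The refractive-index values below are assumptions chosen for illustration, not measurements from the study.

```python
import math

# Hypothetical indices for a step-index fiber (core must exceed cladding):
n_core, n_clad = 1.41, 1.37

# Light striking the core-cladding boundary beyond the critical angle is
# totally internally reflected, so it stays trapped in the core.
critical_angle = math.degrees(math.asin(n_clad / n_core))
# Numerical aperture: how wide a cone of incoming light the fiber accepts.
numerical_aperture = math.sqrt(n_core**2 - n_clad**2)
print(f"critical angle ≈ {critical_angle:.1f} deg, NA ≈ {numerical_aperture:.2f}")
```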

The team tested the optical fibers in mice whose nerves were genetically modified to respond to blue light that would excite neural activity or yellow light that would inhibit their activity. They found that even with the implanted fiber in place, mice were able to run freely on a wheel. After two months of wheel exercises, amounting to some 30,000 cycles, the researchers found the fiber was still robust and resistant to fatigue, and could also transmit light efficiently to trigger muscle contraction.

The team then turned on a yellow laser and ran it through the implanted fiber. Using standard laboratory procedures for assessing pain inhibition, they observed that the mice were much less sensitive to pain than rodents that were not stimulated with light. The fibers were able to significantly inhibit sciatic pain in those light-stimulated mice.

The researchers see the fibers as a new tool that can help scientists identify the roots of pain and other peripheral nerve disorders.

“We are focusing on the fiber as a new neuroscience technology,” Liu says. “We hope to help dissect mechanisms underlying pain in the peripheral nervous system. With time, our technology may help identify novel mechanistic therapies for chronic pain and other debilitating conditions such as nerve degeneration or injury.”

This research was supported, in part, by the National Institutes of Health, the National Science Foundation, the U.S. Army Research Office, the McGovern Institute for Brain Research, the Hock E. Tan and K. Lisa Yang Center for Autism Research, the K. Lisa Yang Brain-Body Center, and the Brain and Behavior Research Foundation.

Ariel Furst and Fan Wang receive 2023 National Institutes of Health awards

The National Institutes of Health (NIH) has awarded grants to MIT’s Ariel Furst and Fan Wang through its High-Risk, High-Reward Research program, which this year awarded 85 new research grants to support exceptionally creative scientists pursuing highly innovative behavioral and biomedical research projects.

Ariel Furst was selected as the recipient of the NIH Director’s New Innovator Award, which has supported unusually innovative research since 2007. Recipients are early-career investigators who are within 10 years of their final degree or clinical residency and have not yet received a research project grant or equivalent NIH grant.

Furst, the Paul M. Cook Career Development Assistant Professor of Chemical Engineering at MIT, invents technologies to improve human and environmental health by increasing equitable access to resources. Her lab develops transformative technologies to solve problems related to health care and sustainability by harnessing the inherent capabilities of biological molecules and cells. She is passionate about STEM outreach and increasing the participation of underrepresented groups in engineering.

After completing her PhD at Caltech, where she developed noninvasive diagnostics for colorectal cancer, Furst became an A. O. Beckman Postdoctoral Fellow at the University of California at Berkeley, where she developed sensors to monitor environmental pollutants. In 2022, Furst was awarded the MIT UROP Outstanding Faculty Mentor Award for her work with undergraduate researchers. She is now a 2023 Marion Milligan Mason Awardee, a CIFAR Azrieli Global Scholar for Bio-Inspired Solar Energy, and an ARO Early Career Grantee. She is also a co-founder of the regenerative agriculture company Seia Bio.

Fan Wang received the Pioneer Award, which since 2004 has challenged researchers at all career levels to pursue new directions and develop groundbreaking, high-impact approaches to broad areas of the biomedical and behavioral sciences.

Wang, a professor in the Department of Brain and Cognitive Sciences and an investigator in the McGovern Institute for Brain Research, is uncovering the neural circuit mechanisms that govern bodily sensations, like touch, pain, and posture, as well as the mechanisms that control sensorimotor behaviors. Researchers in the Wang lab aim to generate an integrated understanding of the sensation-perception-action process, hoping to find better treatments for diseases like chronic pain, addiction, and movement disorders. Wang’s lab uses genetic, viral, in vivo large-scale electrophysiology and imaging techniques to gain traction in these pursuits.

Wang obtained her PhD at Columbia University, working with Professor Richard Axel. She conducted her postdoctoral work at Stanford University with Mark Tessier-Lavigne and subsequently joined Duke University as faculty in 2003. Wang was later appointed the Morris N. Broad Distinguished Professor of Neurobiology at the Duke University School of Medicine. In January 2023, she joined the faculty of the MIT School of Science and the McGovern Institute.

The High-Risk, High-Reward Research program is funded through the NIH Common Fund, which supports a series of exceptionally high-impact programs that cross NIH Institutes and Centers.

“The HRHR program is a pillar for innovation here at NIH, providing support to transformational research, with advances in biomedical and behavioral science,” says Robert W. Eisinger, acting director of the Division of Program Coordination, Planning, and Strategic Initiatives, which oversees the NIH Common Fund. “These awards align with the Common Fund’s mandate to support science expected to have exceptionally high and broadly applicable impact.”

NIH issued eight Pioneer Awards, 58 New Innovator Awards, six Transformative Research Awards, and 13 Early Independence Awards in 2023. Funding for the awards comes from the NIH Common Fund; the National Institute of General Medical Sciences; the National Institute of Mental Health; the National Library of Medicine; the National Institute on Aging; the National Heart, Lung, and Blood Institute; and the Office of Dietary Supplements.

Study: Deep neural networks don’t see the world the way we do

Human sensory systems are very good at recognizing objects that we see or words that we hear, even if the object is upside down or the word is spoken by a voice we’ve never heard.

Computational models known as deep neural networks can be trained to do the same thing, correctly identifying an image of a dog regardless of what color its fur is, or a word regardless of the pitch of the speaker’s voice. However, a new study from MIT neuroscientists has found that these models often also respond the same way to images or words that have no resemblance to the target.

When these neural networks were used to generate an image or a word that they responded to in the same way as a specific natural input, such as a picture of a bear, most of them generated images or sounds that were unrecognizable to human observers. This suggests that these models build up their own idiosyncratic “invariances” — meaning that they respond the same way to stimuli with very different features.

The findings offer a new way for researchers to evaluate how well these models mimic the organization of human sensory perception, says Josh McDermott, an associate professor of brain and cognitive sciences at MIT and a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines.

“This paper shows that you can use these models to derive unnatural signals that end up being very diagnostic of the representations in the model,” says McDermott, who is the senior author of the study. “This test should become part of a battery of tests that we as a field are using to evaluate models.”

Jenelle Feather PhD ’22, who is now a research fellow at the Flatiron Institute Center for Computational Neuroscience, is the lead author of the open-access paper, which appears today in Nature Neuroscience. Guillaume Leclerc, an MIT graduate student, and Aleksander Mądry, the Cadence Design Systems Professor of Computing at MIT, are also authors of the paper.

Different perceptions

In recent years, researchers have trained deep neural networks that can analyze millions of inputs (sounds or images) and learn common features that allow them to classify a target word or object roughly as accurately as humans do. These models are currently regarded as the leading models of biological sensory systems.

It is believed that when the human sensory system performs this kind of classification, it learns to disregard features that aren’t relevant to an object’s core identity, such as how much light is shining on it or what angle it’s being viewed from. This is known as invariance, meaning that objects are perceived to be the same even if they show differences in those less important features.

“Classically, the way that we have thought about sensory systems is that they build up invariances to all those sources of variation that different examples of the same thing can have,” Feather says. “An organism has to recognize that they’re the same thing even though they show up as very different sensory signals.”

The researchers wondered if deep neural networks that are trained to perform classification tasks might develop similar invariances. To try to answer that question, they used these models to generate stimuli that produce the same kind of response within the model as an example stimulus given to the model by the researchers.

They term these stimuli “model metamers,” reviving an idea from classical perception research whereby stimuli that are indistinguishable to a system can be used to diagnose its invariances. The concept of metamers was originally developed in the study of human perception to describe colors that look identical even though they are made up of different wavelengths of light.
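
The sketch below illustrates the general recipe on a toy one-layer “model”: starting from noise, an input is optimized until the model’s response matches its response to a reference input. The weights, inputs, and step size are synthetic stand-ins for a deep network and natural images.

```python
import numpy as np

rng = np.random.default_rng(3)
W = rng.standard_normal((32, 64)) / np.sqrt(64)   # fixed "model" weights
reference = rng.standard_normal(64)               # the natural stimulus
target = np.tanh(W @ reference)                   # model response to match

x = rng.standard_normal(64)                       # start from random noise
for _ in range(5000):
    resp = np.tanh(W @ x)
    err = resp - target
    # gradient of 0.5 * ||resp - target||^2 with respect to the input x
    grad = W.T @ (err * (1 - resp**2))
    x -= 0.2 * grad

# x now evokes nearly the same model response as the reference, yet it was
# reached from noise and need not look anything like the reference itself.
print(np.abs(np.tanh(W @ x) - target).max())      # ≈ 0
```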

To their surprise, the researchers found that most of the images and sounds produced in this way looked and sounded nothing like the examples that the models were originally given. Most of the images were a jumble of random-looking pixels, and the sounds resembled unintelligible noise. When researchers showed the images to human observers, in most cases the humans did not classify the images synthesized by the models in the same category as the original target example.

“They’re really not recognizable at all by humans. They don’t look or sound natural and they don’t have interpretable features that a person could use to classify an object or word,” Feather says.

The findings suggest that the models have somehow developed their own invariances that are different from those found in human perceptual systems. This causes the models to perceive pairs of stimuli as being the same despite their being wildly different to a human.

Idiosyncratic invariances

The researchers found the same effect across many different vision and auditory models. However, each model appeared to develop its own unique invariances. When metamers from one model were shown to another model, they were just as unrecognizable to the second model as they were to human observers.

“The key inference from that is that these models seem to have what we call idiosyncratic invariances,” McDermott says. “They have learned to be invariant to these particular dimensions in the stimulus space, and it’s model-specific, so other models don’t have those same invariances.”

The researchers also found that they could induce a model’s metamers to be more recognizable to humans by using an approach called adversarial training. This approach was originally developed to combat another limitation of object recognition models, which is that introducing tiny, almost imperceptible changes to an image can cause the model to misrecognize it.

The researchers found that adversarial training, which involves including some of these slightly altered images in the training data, yielded models whose metamers were more recognizable to humans, though they were still not as recognizable as the original stimuli. This improvement appears to be independent of the training’s effect on the models’ ability to resist adversarial attacks, the researchers say.
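
Here is a toy sketch of that training loop for a logistic-regression “model”: each input is nudged in the direction that most increases its loss (an FGSM-style perturbation, one common variant) before the model updates on it. The data and model are invented stand-ins for images and deep networks.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((200, 20))
y = (X[:, 0] > 0).astype(float)        # ground truth depends on feature 0
w = 0.01 * rng.standard_normal(20)
eps, lr = 0.1, 0.5

for _ in range(200):
    p = 1 / (1 + np.exp(-X @ w))
    grad_x = (p - y)[:, None] * w[None, :]      # dLoss/dinput per example
    X_adv = X + eps * np.sign(grad_x)           # worst-case small nudge
    p_adv = 1 / (1 + np.exp(-X_adv @ w))
    w -= lr * X_adv.T @ (p_adv - y) / len(y)    # train on perturbed inputs

print(np.argmax(np.abs(w)))  # 0: the model still keys on the true feature
```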

“This particular form of training has a big effect, but we don’t really know why it has that effect,” Feather says. “That’s an area for future research.”

Analyzing the metamers produced by computational models could be a useful tool to help evaluate how closely a computational model mimics the underlying organization of human sensory perception systems, the researchers say.

“This is a behavioral test that you can run on a given model to see whether the invariances are shared between the model and human observers,” Feather says. “It could also be used to evaluate how idiosyncratic the invariances are within a given model, which could help uncover potential ways to improve our models in the future.”

The research was funded by the National Science Foundation, the National Institutes of Health, a Department of Energy Computational Science Graduate Fellowship, and a Friends of the McGovern Institute Fellowship.

Practicing mindfulness with an app may improve children’s mental health

Many studies have found that practicing mindfulness — defined as cultivating an open-minded attention to the present moment — has benefits for children. Children who receive mindfulness training at school have demonstrated improvements in attention and behavior, as well as greater mental health.

When the Covid-19 pandemic began in 2020, sending millions of students home from school, a group of MIT researchers wondered if remote, app-based mindfulness practices could offer similar benefits. In a study conducted during 2020 and 2021, they report that children who used a mindfulness app at home for 40 days showed improvements in several aspects of mental health, including reductions in stress and negative emotions such as loneliness and fear.

The findings suggest that remote, app-based mindfulness interventions, which could potentially reach a larger number of children than school-based approaches, could offer mental health benefits, the researchers say.

“There is growing and compelling scientific evidence that mindfulness can support mental well-being and promote mental health in diverse children and adults,” says John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology, a professor of brain and cognitive sciences at MIT, and the senior author of the study, which appears this week in the journal Mindfulness.

Researchers in Gabrieli’s lab also recently reported that children who showed higher levels of mindfulness were more emotionally resilient to the negative impacts of the Covid-19 pandemic.

“To some extent, the impact of Covid is out of your control as an individual, but your ability to respond to it and to interpret it may be something that mindfulness can help with,” says MIT graduate student Isaac Treves, who is the lead author of both studies.

Pandemic resilience

After the pandemic began in early 2020, Gabrieli’s lab decided to investigate the effects of mindfulness on children who had to leave school and isolate from friends. In a study that appeared in the journal PLOS One in July, the researchers explored whether mindfulness could boost children’s resilience to negative emotions that the pandemic generated, such as frustration and loneliness.

Working with students between 8 and 10 years old, the researchers measured the children’s mindfulness using a standardized assessment that captures their tendency to blame themselves, ruminate on negative thoughts, and suppress their feelings.

The researchers also asked the children questions about how much the pandemic had affected different aspects of their lives, as well as questions designed to assess their levels of anxiety, depression, stress, and negative emotions such as worry or fear.

Among children who showed the highest levels of mindfulness, there was no correlation between how much the pandemic impacted them and negative feelings. However, in children with lower levels of mindfulness, there was a strong correlation between Covid-19 impact and negative emotions.

The children in this study did not receive any kind of mindfulness training, so their responses reflect their tendency to be mindful at the time they answered the researchers’ questions. The findings suggest that children with higher levels of mindfulness were less likely to get caught up in negative emotions or blame themselves for the negative things they experienced during the pandemic.

“This paper was our best attempt to look at mindfulness specifically in the context of Covid and to think about what are the factors that may help children adapt to the changing circumstances,” Treves says. “The takeaway is not that we shouldn’t worry about pandemics because we can just help the kids with mindfulness. People are able to be resilient when they’re in systems that support them, and in families that support them.”

Remote interventions

The researchers then built on that study by exploring whether a remote, app-based intervention could effectively increase mindfulness and improve mental health. Researchers in Gabrieli’s lab have previously shown that students who received mindfulness training in middle school showed better academic performance, received fewer suspensions, and reported less stress than those who did not receive the training.

For the new study, reported today in Mindfulness, the researchers worked with the same children they had recruited for the PLOS One study and divided them into three groups of about 80 students each.

One group received mindfulness training through an app created by Inner Explorer, a nonprofit that also develops school-based meditation programs. Those children were instructed to engage in mindfulness training five days a week, including relaxation exercises, breathing exercises, and other forms of meditation.

For comparison purposes, the other two groups were asked to use an app for listening to audiobooks (not related to mindfulness). One group was simply given the audiobook app and encouraged to listen at their own pace, while the other group also had weekly one-on-one virtual meetings with a facilitator.

At the beginning and end of the study, the researchers evaluated each participant’s levels of mindfulness, along with measures of mental health such as anxiety, stress, and depression. They found that in all three groups, mental health improved over the course of the eight-week study, and each group also showed increases in mindfulness and prosociality (engaging in helpful behavior).

Additionally, children in the mindfulness group showed some improvements that the other groups didn’t, including a more significant decrease in stress. Parents in the mindfulness group also reported that their children experienced more significant decreases in negative emotions such as anger and sadness. Students who practiced the mindfulness exercises the most days showed the greatest benefits.

The researchers were surprised to see that there were no significant differences in measures of anxiety and depression between the mindfulness group and audiobook groups; they hypothesize that may be because students who interacted with a facilitator in one of the audiobook groups also experienced beneficial effects on their mental health.

Overall, the findings suggest that there is value in remote, app-based mindfulness training, especially if children engage with the exercises consistently and receive encouragement from parents, the researchers say. Apps also offer the ability to reach a larger number of children than school-based programs, which require more training and resources.

“There are a lot of great ways to incorporate mindfulness training into schools, but in general, it’s more resource-intensive than having people download an app. So, in terms of pure scalability and cost-effectiveness, apps are useful,” Treves says. “Another good thing about apps is that the kids can go at their own pace and repeat practices that they like, so there’s more freedom of choice.”

The research was funded by the Chan Zuckerberg Initiative as part of the Reach Every Reader Project, the National Institutes of Health, and the National Science Foundation.

Twelve with MIT ties elected to the National Academy of Medicine for 2023

The National Academy of Medicine announced the election of 100 new members to join its esteemed ranks in 2023, among them five MIT faculty members and seven additional affiliates.

MIT professors Daniel Anderson, Regina Barzilay, Guoping Feng, Darrell Irvine, and Morgan Sheng were among the new members. Justin Hanes PhD ’96, Said Ibrahim MBA ’16, and Jennifer West ’92, along with three former students in the Harvard-MIT Program in Health Sciences and Technology (HST) — Michael Chiang, Siddhartha Mukherjee, and Robert Vonderheide — were also elected, as was Yi Zhang, an associate member of The Broad Institute of MIT and Harvard.

Election to the academy is considered one of the highest honors in the fields of health and medicine and recognizes individuals who have demonstrated outstanding professional achievement and commitment to service, the academy noted in announcing the election of its new members.

MIT faculty

Daniel G. Anderson, professor in the Department of Chemical Engineering and the Institute for Medical Engineering and Science, was elected “for pioneering the area of non-viral gene therapy and cellular delivery. His work has resulted in fundamental scientific advances; over 500 papers, patents, and patent applications; and the creation of companies, products, and technologies that are now in the clinic.” Anderson is an affiliate of the Broad Institute of MIT and Harvard and of the Ragon Institute at MGH, MIT and Harvard.

Regina Barzilay, the School of Engineering Distinguished Professor for AI and Health within the Department of Electrical Engineering and Computer Science at MIT, was elected “for the development of machine learning tools that have been transformational for breast cancer screening and risk assessment, and for the development of molecular design tools broadly utilized for drug discovery.” Barzilay is the AI faculty lead within the MIT Abdul Latif Jameel Clinic for Machine Learning in Health and an affiliate of the Computer Science and Artificial Intelligence Laboratory and Institute for Medical Engineering and Science.

Guoping Feng, the associate director of the McGovern Institute for Brain Research, James W. (1963) and Patricia T. Professor of Neuroscience in MIT’s Department of Brain and Cognitive Sciences, and an affiliate of the Broad Institute of MIT and Harvard, was elected “for his breakthrough discoveries regarding the pathological mechanisms of neurodevelopmental and psychiatric disorders, providing foundational knowledges and molecular targets for developing effective therapeutics for mental illness such as OCD, ASD, and ADHD.”

Darrell J. Irvine ’00, the Underwood-Prescott Professor of Biological Engineering and Materials Science at MIT and a member of the Koch Institute for Integrative Cancer Research, was elected “for the development of novel methods for delivery of immunotherapies and vaccines for cancer and infectious diseases.”

Morgan Sheng, professor of neuroscience in the Department of Brain and Cognitive Sciences, with affiliations in the McGovern Institute and The Picower Institute for Learning and Memory at MIT, as well as the Broad Institute of MIT and Harvard, was elected “for transforming the understanding of excitatory synapses. He revealed the postsynaptic density as a protein network controlling synaptic signaling and morphology; established the paradigm of signaling complexes organized by PDZ scaffolds; and pioneered the concept of localized regulation of mitochondria, apoptosis, and complement for targeted synapse elimination.”

Additional MIT affiliates

Michael F. Chiang, a former student in the Harvard-MIT Program in Health Sciences and Technology (HST) who is now director of the National Eye Institute of the National Institutes of Health, was honored “for pioneering applications of biomedical informatics to ophthalmology in artificial intelligence, telehealth, pediatric retinal disease, electronic health records, and data science, including methodological and diagnostic advances in AI for pediatric retinopathy of prematurity, and for contributions to developing and implementing the largest ambulatory care registry in the United States.”

Justin Hanes PhD ’96, who earned his PhD from the MIT Department of Chemical Engineering and is now a professor at Johns Hopkins University, was honored “for pioneering discoveries and inventions of innovative drug delivery technologies, especially mucosal, ocular, and central nervous system drug delivery systems; and for international leadership in research and education at the interface of engineering, medicine, and entrepreneurship, leading to clinical translation of drug delivery technologies.”

Said Ibrahim MBA ’16, a graduate of the MIT Sloan School of Management who is now senior vice president and chair of the Department of Medicine at the Zucker School of Medicine at Hofstra/Northwell, was honored for influential “health services research on racial disparities in elective joint replacement that has provided a national model for advancing health equity research beyond the identification of inequities and toward their remediation, and for his research that has been leveraged to engage diverse and innovative emerging scholars.”

Siddhartha Mukherjee, a former student in HST who is now an associate professor of medicine at Columbia University School of Medicine, was honored “for contributing important research in the immunotherapy of myeloid malignancies, such as acute myeloid leukemia, for establishing international centers for immunotherapy for childhood cancers, and for the discovery of tissue-resident stem cells.”

Robert H. Vonderheide, a former student in HST who is now a professor and vice dean at the Perelman School of Medicine and vice president of cancer programs at the University of Pennsylvania Health System, was honored “for developing immune combination therapies for patients with pancreatic cancer by driving proof-of-concept from lab to clinic, then leading national, randomized clinical trials for therapy, maintenance, and interception; and for improving access of minority individuals to clinical trials while directing an NCI comprehensive cancer center.”

Jennifer West ’92, a graduate of the MIT Department of Chemical Engineering who is now a professor of biomedical engineering and dean of the School of Engineering and Applied Science at the University of Virginia at Charlottesville, was honored “for the invention, development, and translation of novel biomaterials including bioactive, photopolymerizable hydrogels and theranostic nanoparticles.”

Yi Zhang, associate member of the Broad Institute, was honored “for making fundamental contributions to the epigenetics field through systematic identification and characterization of chromatin modifying enzymes, including EZH2, JmjC, and Tet. His proof-of-principle work on EZH2 inhibitors led to the founding of Epizyme and eventual making of tazemetostat, a drug approved for epithelioid sarcoma and follicular lymphoma.”

“It is my honor to welcome this truly exceptional class of new members to the National Academy of Medicine,” said NAM President Victor J. Dzau. “Their contributions to health and medicine are unparalleled, and their leadership and expertise will be essential to helping the NAM tackle today’s urgent health challenges, inform the future of health care, and ensure health equity for the benefit of all around the globe.”

Re-imagining our theories of language

Over a decade ago, the neuroscientist Ev Fedorenko asked 48 English speakers to complete tasks like reading sentences, recalling information, solving math problems, and listening to music. As they did this, she scanned their brains using functional magnetic resonance imaging to see which circuits were activated. If, as linguists have proposed for decades, language is connected to thought in the human brain, then the language processing regions would be activated even during nonlinguistic tasks.

Fedorenko’s experiment, published in 2011 in the Proceedings of the National Academy of Sciences, showed that when it comes to arithmetic, musical processing, general working memory, and other nonlinguistic tasks, language regions of the human brain showed no response. Contrary to what many linguists have claimed, complex thought and language are separate things. One does not require the other. “We have this highly specialized place in the brain that doesn’t respond to other activities,” says Fedorenko, who is an associate professor in the Department of Brain and Cognitive Sciences (BCS) and the McGovern Institute for Brain Research. “It’s not true that thought critically needs language.”
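The logic of this kind of localizer experiment can be sketched in a few lines of analysis code. Below is a minimal illustration with synthetic numbers; the data, thresholds, and variable names are invented for exposition and are not Fedorenko’s actual pipeline:

```python
import numpy as np

# Hypothetical per-voxel fMRI responses (subjects x voxels); synthetic
# numbers chosen only to illustrate the contrast logic.
rng = np.random.default_rng(0)
resp_sentences = rng.normal(1.0, 0.3, (48, 1000))  # reading sentences
resp_nonwords = rng.normal(0.2, 0.3, (48, 1000))   # control condition
resp_math = rng.normal(0.2, 0.3, (48, 1000))       # nonlinguistic task

# Step 1: localize "language" voxels as those responding more strongly
# to sentences than to the control, averaged across subjects.
contrast = (resp_sentences - resp_nonwords).mean(axis=0)
language_voxels = contrast > 0.5  # illustrative threshold

# Step 2: ask whether those same voxels respond during a nonlinguistic
# task such as arithmetic.
math_response = resp_math[:, language_voxels].mean()
print(f"Mean response of language voxels during math: {math_response:.2f}")
# A math response at the control-condition level, as in this toy setup,
# is the signature that language regions stay silent during
# nonlinguistic thought.
```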

This approach, using neuroscience to understand how language works, how it evolved, and how it relates to other cognitive functions, is at the heart of Fedorenko’s research. She is part of a unique intellectual triad at MIT’s Department of BCS, along with her colleagues Roger Levy and Ted Gibson. (Gibson and Fedorenko have been married since 2007.) Together they have engaged in a years-long collaboration and built a significant body of research focused on some of the biggest questions in linguistics and human cognition. While working in three independent labs — EvLab, TedLab, and the Computational Psycholinguistics Lab — the researchers are motivated by a shared fascination with the human mind and how language works in the brain. “We have a great deal of interaction and collaboration,” says Levy. “It’s a very broadly collaborative, intellectually rich and diverse landscape.”

Using combinations of computational modeling, psycholinguistic experimentation, behavioral data, brain imaging, and large naturalistic language datasets, the researchers also share an answer to a fundamental question: What is the purpose of language? Of all the possible answers to why we have language, perhaps the simplest and most obvious is communication. “Believe it or not,” says Ted Gibson, “that is not the standard answer.”

Gibson first came to MIT in 1993 and joined the faculty of the Linguistics Department in 1997. Recalling the experience today, he describes it as frustrating. The field of linguistics at that time was dominated by the ideas of Noam Chomsky, one of the founders of MIT’s Graduate Program in Linguistics, who has been called the father of modern linguistics. Chomsky’s “nativist” theories of language posited that the purpose of language is the articulation of thought, and that language capacity is innate, in place before any learning. But Gibson, with his training in math and computer science, felt that these ideas had not been satisfactorily tested. He believed that answering many outstanding questions about language required quantitative research, a departure from standard linguistic methodology. “There’s no reason to rely only on you and your friends, which is how linguistics has worked,” Gibson says. “The data you can get can be much broader if you crowdsource lots of people using experimental methods.” Chomsky’s ascendancy in linguistics presented Gibson with what he saw as a challenge and an opportunity. “I felt like I had to figure it out in detail and see if there was truth in these claims,” he says.

Three decades after he first joined MIT, Gibson believes that the collaborative research at BCS is persuasive and provocative, pointing to new ways of thinking about human culture and cognition. “Now we’re at a stage where it is not just arguments against. We have a lot of positive stuff saying what language is,” he explains. Levy adds: “I would say all three of us are of the view that communication plays a very important role in language learning and processing, but also in the structure of language itself.”

Levy points out that the three researchers completed PhDs in different subjects: Fedorenko in neuroscience, Gibson in computer science, Levy in linguistics. Yet for years before their paths finally converged at MIT, their shared interests in quantitative linguistic research led them to follow each other’s work closely and be influenced by it. The first collaboration among the three was in 2005 and focused on language processing in Russian relative clauses. Around that time, Gibson recalls, Levy was presenting what he describes as “lovely work” that was instrumental in helping him to understand the links between language structure and communication. “Communicative pressures drive the structures,” says Gibson. “Roger was crucial for that. He was the one helping me think about those things a long time ago.”

Levy’s lab is focused on the intersection of artificial intelligence, linguistics, and psychology, using natural language processing tools. “I try to use the tools that are afforded by mathematical and computer science approaches to language to formalize scientific hypotheses about language and the human mind and test those hypotheses,” he says.

Levy points to his ongoing research with Gibson on language comprehension as an example of the benefits of collaboration. “One of the big questions is: When language understanding fails, why does it fail?” Together, the researchers have applied the concept of a “noisy channel,” first developed by the information theorist Claude Shannon in the 1940s, which holds that messages can be corrupted by noise in transmission. “Language understanding unfolds over time, involving an ongoing integration of the past with the present,” says Levy. “Memory itself is an imperfect channel conveying the past from our brain a moment ago to our brain now in order to support successful language understanding.” Indeed, the richness of our linguistic environment, the experience of hundreds of millions of words by adulthood, may create a kind of statistical knowledge guiding our expectations, beliefs, predictions, and interpretations of linguistic meaning. “Statistical knowledge of language actually interacts with the constraints of our memory,” says Levy. “Our experience shapes our memory for language itself.”
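To make the noisy-channel idea concrete, here is a toy Bayesian sketch; the mini-lexicon, priors, and similarity-based noise model are invented for illustration and are not the researchers’ actual model. A comprehender weighs how plausible each candidate intended sentence is against how likely noise would be to turn that candidate into what was actually perceived:

```python
from difflib import SequenceMatcher

# Toy prior over intended sentences (a stand-in for a full language model).
prior = {
    "the mother gave the candle to the daughter": 0.70,
    "the mother gave the candle the daughter":    0.25,
    "the mother gave the daughter to the candle": 0.05,  # implausible
}

def likelihood(perceived, intended):
    """Rough probability that noise corrupted `intended` into
    `perceived`: higher for smaller edits."""
    similarity = SequenceMatcher(None, intended, perceived).ratio()
    return similarity ** 4  # larger edits are much less likely

perceived = "the mother gave the daughter to the candle"

# Bayes' rule: P(intended | perceived) is proportional to
# P(perceived | intended) * P(intended).
scores = {s: likelihood(perceived, s) * p for s, p in prior.items()}
total = sum(scores.values())
for sentence, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{score / total:.2f}  {sentence}")
# Although the implausible sentence matches the input exactly, the strong
# prior favors the plausible reading: the comprehender treats the odd
# wording as a likely transmission error and silently corrects it.
```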

All three researchers say they share the belief that by following the evidence, they will eventually discover an even bigger and more complete story about language. “That’s how science goes,” says Fedorenko. “Ted trained me, along with Nancy Kanwisher, and both Ted and Roger are very data-driven. If the data is not giving you the answer you thought, you don’t just keep pushing your story. You think of new hypotheses. Almost everything I have done has been like that.” At times, Fedorenko’s research into parts of the brain’s language system has surprised her and forced her to abandon her hypotheses. “In a certain project I came in with a prior idea that there would be some separation between parts that cared about combinatorics versus word meanings,” she says, “but every little bit of the language system is sensitive to both. At some point, I was like, this is what the data is telling us, and we have to roll with it.”

The researchers’ work pointing to communication as the constitutive purpose of language opens new possibilities for probing and studying non-human language. The standard claim is that human language has a drastically more extensive lexicon than animal communication systems, which are also said to lack grammar. “But many times, we don’t even know what other species are communicating,” says Gibson. “We say they can’t communicate, but we don’t know. We don’t speak their language.” Fedorenko hopes that more opportunities to make cross-species linguistic comparisons will open up. “Understanding where things are similar and where things diverge would be super useful,” she says.

Meanwhile, the potential applications of language research are far-reaching. One of Levy’s current research projects focuses on how people read, using machine learning algorithms informed by the psychology of eye movements to develop proficiency tests. By tracking the eye movements of people who speak English as a second language while they read texts in English, Levy can predict how proficient they are in English, an approach that could one day replace the Test of English as a Foreign Language. “It’s an implicit measure of language rather than a much more game-able test,” he says.
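In outline, such a system might look like the sketch below, which trains a regression model on synthetic eye-movement summaries; the features, numbers, and model choice are invented for illustration and do not describe Levy’s actual test:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins for per-reader eye-movement summaries: mean
# fixation duration (ms), regression (re-reading) rate, and skip rate.
rng = np.random.default_rng(1)
n_readers = 200
fixation_ms = rng.normal(220, 40, n_readers)
regression_rate = rng.uniform(0.05, 0.35, n_readers)
skip_rate = rng.uniform(0.10, 0.50, n_readers)

# Invent proficiency scores loosely tied to the features, plus noise,
# so the regression has a signal to recover. Real labels would come
# from an external proficiency assessment.
proficiency = (100 - 0.1 * fixation_ms - 60 * regression_rate
               + 20 * skip_rate + rng.normal(0, 3, n_readers))

X = np.column_stack([fixation_ms, regression_rate, skip_rate])
scores = cross_val_score(Ridge(alpha=1.0), X, proficiency,
                         cv=5, scoring="r2")
print(f"Cross-validated R^2: {scores.mean():.2f}")
# The intuition: fluent readers fixate more briefly, re-read less, and
# skip predictable words more often, so these statistics carry
# information about proficiency.
```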

The researchers agree that some of the most exciting opportunities in the neuroscience of language lie in large language models, which make it possible to ask new questions and make new discoveries. “In the neuroscience of language, the kind of stories that we’ve been able to tell about how the brain does language were limited to verbal, descriptive hypotheses,” says Fedorenko. Computationally implemented models are now amazingly good at language and show some degree of alignment to the brain, she adds. Researchers can now ask questions such as: What are the actual computations that cells are doing to extract meaning from strings of words? “You can now use these models as tools to get insights into how humans might be processing language,” she says. “And you can take the models apart in ways you can’t take apart the brain.”
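One common bridge between such models and human data is per-word surprisal, the negative log probability a model assigns to each word in context. A minimal sketch, assuming the Hugging Face transformers library and a GPT-2 checkpoint are available:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

sentence = "The key to the cabinets was rusty."
ids = tokenizer(sentence, return_tensors="pt").input_ids  # shape (1, n)

with torch.no_grad():
    logits = model(ids).logits  # shape (1, n, vocab)

# Surprisal of each token given its left context: -log2 P(token | context)
log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
positions = torch.arange(ids.shape[1] - 1)
token_logprob = log_probs[positions, ids[0, 1:]]
surprisal_bits = -token_logprob / torch.log(torch.tensor(2.0))

tokens = tokenizer.convert_ids_to_tokens(ids[0, 1:].tolist())
for tok, s in zip(tokens, surprisal_bits):
    print(f"{tok:>12}  {s.item():5.2f} bits")
# Words a model finds surprising tend to be read more slowly and to evoke
# larger responses in the brain's language regions, so surprisal offers a
# quantitative link between model computations and human processing.
```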

Fourteen MIT School of Science professors receive tenure for 2022 and 2023

In 2022, nine MIT faculty were granted tenure in the School of Science:

Gloria Choi examines the interaction of the immune system with the brain and the effects of that interaction on neurodevelopment, behavior, and mood. She also studies how social behaviors are regulated according to sensory stimuli, context, internal state, and physiological status, and how these factors modulate neural circuit function via a combinatorial code of classic neuromodulators and immune-derived cytokines. Choi joined the Department of Brain and Cognitive Sciences after a postdoc at Columbia University. She received her bachelor’s degree from the University of California at Berkeley, and her PhD from Caltech. Choi is also an investigator in The Picower Institute for Learning and Memory.

Nikta Fakhri develops experimental tools and conceptual frameworks to uncover laws governing fluctuations, order, and self-organization in active systems. Such frameworks provide powerful insight into the dynamics of nonequilibrium living systems across scales, from the emergence of the thermodynamic arrow of time to the spatiotemporal organization of signaling protein patterns and the discovery of odd elasticity. Fakhri joined the Department of Physics in 2015 following a postdoc at the University of Göttingen. She completed her undergraduate degree at Sharif University of Technology and her PhD at Rice University.

Geobiologist Greg Fournier uses a combination of molecular phylogeny insights and geologic records to study major events in planetary history, with the hope of furthering our understanding of the co-evolution of life and environment. Recently, his team developed a new technique to analyze multiple gene evolutionary histories and estimated that photosynthesis evolved between 3.4 and 2.9 billion years ago. Fournier joined the Department of Earth, Atmospheric and Planetary Sciences in 2014 after working as a postdoc at the University of Connecticut and as a NASA Postdoctoral Program Fellow in MIT’s Department of Civil and Environmental Engineering. He earned his BA from Dartmouth College in 2001 and his PhD in genetics and genomics from the University of Connecticut in 2009.

Daniel Harlow researches black holes and cosmology, viewed through the lens of quantum gravity and quantum field theory. His work generates new insights into quantum information, quantum field theory, and gravity. Harlow joined the Department of Physics in 2017 following postdocs at Princeton University and Harvard University. He obtained a BA in physics and mathematics from Columbia University in 2006 and a PhD in physics from Stanford University in 2012. He is also a researcher in the Center for Theoretical Physics.

A biophysicist, Gene-Wei Li studies how bacteria optimize the levels of proteins they produce at both mechanistic and systems levels. His lab focuses on design principles of transcription, translation, and RNA maturation. Li joined the Department of Biology in 2015 after completing a postdoc at the University of California at San Francisco. He earned a BS in physics from National Tsinghua University in 2004 and a PhD in physics from Harvard University in 2010.

Michael McDonald focuses on the evolution of galaxies and clusters of galaxies, and the role that environment plays in dictating this evolution. This research involves the discovery and study of the most distant assemblies of galaxies alongside analyses of the complex interplay between gas, galaxies, and black holes in the closest, most massive systems. McDonald joined the Department of Physics and the Kavli Institute for Astrophysics and Space Research in 2015 after three years as a Hubble Fellow, also at MIT. He obtained his BS and MS degrees in physics at Queen’s University, and his PhD in astronomy at the University of Maryland in College Park.

Gabriela Schlau-Cohen combines tools from chemistry, optics, biology, and microscopy to develop new approaches to probe dynamics. Her group focuses on dynamics in membrane proteins, particularly photosynthetic light-harvesting systems that are of interest for sustainable energy applications. Following a postdoc at Stanford University, Schlau-Cohen joined the Department of Chemistry faculty in 2015. She earned a bachelor’s degree in chemical physics from Brown University in 2003 followed by a PhD in chemistry at the University of California at Berkeley.

Phiala Shanahan’s research interests center on theoretical nuclear and particle physics. In particular, she works to understand the structure and interactions of hadrons and nuclei from the fundamental degrees of freedom encoded in the Standard Model of particle physics. After a postdoc at MIT and a joint position as an assistant professor at the College of William and Mary and senior staff scientist at the Thomas Jefferson National Accelerator Facility, Shanahan returned to the Department of Physics as faculty in 2018. She obtained her BS from the University of Adelaide in 2012 and her PhD, also from the University of Adelaide, in 2015.

Omer Yilmaz explores the impact of dietary interventions on stem cells, the immune system, and cancer within the intestine. By better understanding how intestinal stem cells adapt to diverse diets, his group hopes to identify and develop new strategies that prevent and reduce the growth of cancers involving the intestinal tract. Yilmaz joined the Department of Biology in 2014 and is now also a member of the Koch Institute for Integrative Cancer Research. After receiving his BS from the University of Michigan in 1999 and his PhD and MD from the University of Michigan Medical School in 2008, he was a resident in anatomic pathology at Massachusetts General Hospital and Harvard Medical School until 2013.

In 2023, five MIT faculty were granted tenure in the School of Science:

Physicist Riccardo Comin explores the novel phases of matter that can be found in electronic solids with strong interactions, also known as quantum materials. His group employs a combination of synthesis, scattering, and spectroscopy to obtain a comprehensive picture of these emergent phenomena, including superconductivity, (anti)ferromagnetism, spin-density waves, charge order, ferroelectricity, and orbital order. Comin joined the Department of Physics in 2016 after postdoctoral work at the University of Toronto. He completed his undergraduate studies at the Università degli Studi di Trieste in Italy, where he also obtained an MS in physics in 2009. Later, he pursued doctoral studies at the University of British Columbia, Canada, earning a PhD in 2013.

Netta Engelhardt researches the dynamics of black holes in quantum gravity and uses holography to study the interplay between gravity and quantum information. Her primary focus is the black hole information paradox: black holes seem to destroy information that, according to quantum physics, cannot be destroyed. Engelhardt was a postdoc at Princeton University and a member of the Princeton Gravity Initiative prior to joining the Department of Physics in 2019. She received her BS in physics and mathematics from Brandeis University and her PhD in physics from the University of California at Santa Barbara. Engelhardt is a researcher in the Center for Theoretical Physics and the Black Hole Initiative at Harvard University.

Mark Harnett studies how the biophysical features of individual neurons endow neural circuits with the ability to process information and perform the complex computations that underlie behavior. As part of this work, his lab was the first to describe the physiological properties of human dendrites. He joined the Department of Brain and Cognitive Sciences and the McGovern Institute for Brain Research in 2015. Before that, he was a postdoc at the Howard Hughes Medical Institute’s Janelia Research Campus. He received his BA in biology from Reed College in Portland, Oregon, and his PhD in neuroscience from the University of Texas at Austin.

Or Hen investigates quantum chromodynamic effects in the nuclear medium and the interplay between partonic and nucleonic degrees of freedom in nuclei. Specifically, Hen uses high-energy scattering of electrons, neutrinos, photons, protons, and ions off atomic nuclei to study short-range correlations: temporal fluctuations of high-density, high-momentum nucleon clusters in nuclei, with important implications for nuclear, particle, atomic, and astrophysics. Hen was an MIT Pappalardo Fellow in the Department of Physics from 2015 to 2017 before joining the faculty in 2017. He received his undergraduate degree in physics and computer engineering from the Hebrew University and earned his PhD in experimental physics at Tel Aviv University.

Sebastian Lourido is interested in learning about the vulnerabilities of parasites in order to develop treatments for infectious diseases and expand our understanding of eukaryotic diversity. His lab studies many important human pathogens, including Toxoplasma gondii, to model features conserved throughout the phylum. Lourido was a Whitehead Fellow at the Whitehead Institute for Biomedical Research until 2017, when he joined the Department of Biology and became a Whitehead Member. He earned his BS from Tulane University in 2004 and his PhD from Washington University in St. Louis in 2012.