Twenty-four individuals and one team were awarded MIT Excellence Awards — the highest awards for staff at the Institute — at a well-attended and energetic ceremony the afternoon of June 8 in Kresge Auditorium. In addition to the Excellence Awards, two community members were honored with the Collier Medal and Staff Award for Distinction in Service.
The Excellence Awards, Collier Medal, and Staff Award for Distinction in Service recognize the extraordinary dedication of staff and community members who represent all areas of the Institute, both on campus and at Lincoln Laboratory.
The Collier Medal honors the memory of Officer Sean Collier, who gave his life protecting and serving the MIT community, and celebrates an individual or group whose actions demonstrate the importance of community. The Staff Award for Distinction in Service, now in its second year, is presented to a staff member whose service to the Institute results in a positive lasting impact on the community.
The 2023 MIT Excellence Award recipients and their award categories are:
Sustaining MIT: Erin Genereux, Rachida Kernis, J. Bradley Morrison, and the Tip Box Recycling Team (John R. Collins, Michael A. DeBerio, Normand J. Desrochers III, Mitchell S. Galanek, David M. Pavone, Ryan Samz, Rosario Silvestri, and Lu Zhong)
Innovative Solutions: Abram Barrett, Nicole H. W. Henning
Bringing Out the Best: Patty Eames, Suzy Maholchic Nelson
Serving Our Community: Mahnaz El-Kouedi, Kara Flyg, Timothy J. Meunier, Marie A. Stuppard, Roslyn R. Wesley
Embracing Diversity, Equity, and Inclusion: Farrah A. Belizaire
Outstanding Contributor: Diane Ballestas, Robert J. Bicchieri, Lindsey Megan Charles, Benoit Desbiolles, Dennis C. Hamel, Heather Anne Holland, Gregory L. Long, Linda Mar, Mary Ellen Sinkus, Sarah E. Willis, and Phyl A. Winn
The 2023 Collier Medal recipient was Martin Eric William Nisser, a graduate student fellow in the Department of Electrical Engineering and Computer Science/Computer Science and Artificial Intelligence Laboratory and the School of Engineering/MIT Schwarzman College of Computing.
The 2023 recipient of the Staff Award for Distinction in Service was Kimberly A. Haberlin, chief of staff in the Chancellor’s Office.
Presenters included President Sally Kornbluth; Vice President for Human Resources Ramona Allen; Provost Cynthia Barnhart; School of Engineering Dean Anantha Chandrakasan; MIT Police Chief John DiFava and MIT Police Captain Andrew Turco; Institute Community and Equity Officer John Dozier; Lincoln Laboratory Director Eric Evans; and Chancellor Melissa Nobles. As always, an animated and supportive audience with signs, pompoms, and glow bracelets filled the auditorium with cheers for the honorees.
Visit the MIT Human Resources website for more information about the award categories, selection process, and recipients, and to view the archived video of the event.
The lab of Edward Boyden, the Y. Eva Tan Professor in Neurotechnology, has developed a powerful technology called Expansion Revealing (ExR) that makes visible molecular structures previously hidden from view, even under the most powerful microscopes. It “reveals” the nanoscale alterations in synapses, neural wiring, and other molecular assemblies using ordinary lab microscopes. It works like this: Inside a cell, proteins and other molecules are often tightly packed together. These dense clusters can be difficult to image because the fluorescent labels used to make them visible can’t wedge themselves between the molecules. ExR “de-crowds” the molecules by expanding the cell using a chemical process, making the molecules accessible to fluorescent tags.
“This technology can be used to answer a lot of biological questions about dysfunction in synaptic proteins, which are involved in neurodegenerative diseases,” says Jinyoung Kang, a J. Douglas Tan Postdoctoral Fellow in the labs of Boyden and Guoping Feng, the James W. (1963) and Patricia T. Poitras Professor of Brain and Cognitive Sciences. “Until now, there has been no tool to visualize synapses very well at nanoscale.”
Over the past year, the Boyden team has been using ExR to explore the underlying mechanisms of brain disorders, including autism spectrum disorder (ASD) and Alzheimer’s disease. Since the method can be applied iteratively, Boyden imagines it may one day succeed in creating a 100-fold magnification of molecular structures.
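As a rough illustration of why iterative expansion could reach that scale, consider the arithmetic (a minimal sketch; the per-round expansion factor and microscope resolution below are assumed round numbers for illustration, not figures from the Boyden lab):

```python
# Back-of-the-envelope arithmetic for iterative expansion.
# PER_ROUND_EXPANSION and DIFFRACTION_LIMIT_NM are illustrative assumptions.
DIFFRACTION_LIMIT_NM = 300   # approximate resolution of a conventional light microscope
PER_ROUND_EXPANSION = 4.5    # assumed linear expansion per round of the gel process

for rounds in (1, 2, 3):
    total = PER_ROUND_EXPANSION ** rounds
    print(f"{rounds} round(s): ~{total:.0f}x expansion, "
          f"~{DIFFRACTION_LIMIT_NM / total:.0f} nm effective resolution")
```

Three rounds of a roughly 4.5x process would compound to about 90-fold expansion, which is how repeated application could approach the 100-fold magnification Boyden envisions.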
“Using earlier technology, researchers may be missing entire categories of molecular phenomena, both functional and dysfunctional,” says Boyden. “It’s critical to bring these nanostructures into view so that we can identify potential targets for new therapeutics that can restore functional molecular arrangements.”
The team is applying ExR to brain slices from mutant animal models to expose the complex 3D nanoarchitecture and configuration of synapses. Among their questions: How do synapses differ when mutations that cause autism and other neurological conditions are present?
Using the new technology, Kang and her collaborator Menglong Zeng characterized the molecular architecture of excitatory synapses on parvalbumin interneurons, cells that strongly influence the downstream effects of neuronal signaling and ultimately shape cognitive behavior. They discovered that condensed AMPA receptor (AMPAR) clustering in parvalbumin interneurons is essential for normal brain function. The next step is to explore the role of these clusters in the function of parvalbumin interneurons, which are vulnerable to stressors and have been implicated in brain disorders including autism and Alzheimer’s disease.
The researchers are now investigating whether ExR can reveal abnormal protein nanostructures in SHANK3 knockout mice and marmosets. Mutations in the SHANK3 gene lead to one of the most severe types of ASD, Phelan-McDermid syndrome, which accounts for about 2 percent of ASD cases with intellectual disability.
A team of researchers led by Feng Zhang at the McGovern Institute and the Broad Institute of MIT and Harvard has uncovered the first programmable RNA-guided system in eukaryotes — organisms that include fungi, plants, and animals.
In a study in Nature, the team describes how the system is based on a protein called Fanzor. They showed that Fanzor proteins use RNA as a guide to target DNA precisely, and that Fanzors can be reprogrammed to edit the genome of human cells. The compact Fanzor systems have the potential to be more easily delivered to cells and tissues as therapeutics than CRISPR/Cas systems, and further refinements to improve their targeting efficiency could make them a valuable new technology for human genome editing.
CRISPR/Cas was first discovered in prokaryotes (bacteria and other single-cell organisms that lack nuclei) and scientists including Zhang’s lab have long wondered whether similar systems exist in eukaryotes. The new study demonstrates that RNA-guided DNA-cutting mechanisms are present across all kingdoms of life.
“CRISPR-based systems are widely used and powerful because they can be easily reprogrammed to target different sites in the genome,” said Zhang, senior author on the study and a core institute member at the Broad, an investigator at MIT’s McGovern Institute, the James and Patricia Poitras Professor of Neuroscience at MIT, and a Howard Hughes Medical Institute investigator. “This new system is another way to make precise changes in human cells, complementing the genome editing tools we already have.”
Searching the domains of life
A major aim of the Zhang lab is to develop genetic medicines using systems that can modulate human cells by targeting specific genes and processes. “A number of years ago, we started to ask, ‘What is there beyond CRISPR, and are there other RNA-programmable systems out there in nature?’” said Zhang.
Two years ago, Zhang lab members discovered a class of RNA-programmable systems in prokaryotes called OMEGAs, which are often linked with transposable elements, or “jumping genes,” in bacterial genomes and likely gave rise to CRISPR/Cas systems. That work also highlighted similarities between prokaryotic OMEGA systems and Fanzor proteins in eukaryotes, suggesting that the Fanzor enzymes might also use an RNA-guided mechanism to target and cut DNA.
In the new study, the researchers continued their exploration of RNA-guided systems by isolating Fanzors from fungi, algae, and amoeba species, as well as from a clam known as the northern quahog. Co-first author Makoto Saito of the Zhang lab led the biochemical characterization of the Fanzor proteins, showing that they are DNA-cutting endonuclease enzymes that use nearby non-coding RNAs known as ωRNAs to target particular sites in the genome. This is the first time such a mechanism has been found in eukaryotes.
Unlike CRISPR proteins, Fanzor enzymes are encoded in the eukaryotic genome within transposable elements, and the team’s phylogenetic analysis suggests that the Fanzor genes migrated from bacteria to eukaryotes through so-called horizontal gene transfer.
“These OMEGA systems are more ancestral to CRISPR and they are among the most abundant proteins on the planet, so it makes sense that they have been able to hop back and forth between prokaryotes and eukaryotes,” said Saito.
To explore Fanzor’s potential as a genome editing tool, the researchers demonstrated that it can generate insertions and deletions at targeted genome sites within human cells. The researchers found that the Fanzor system was initially less efficient at snipping DNA than CRISPR/Cas systems, but through systematic engineering they introduced a combination of mutations into the protein that increased its activity 10-fold. Additionally, unlike some CRISPR systems and the OMEGA protein TnpB, the team found that a fungal-derived Fanzor protein did not exhibit “collateral activity,” where an RNA-guided enzyme cleaves its DNA target and also degrades nearby DNA or RNA. The results suggest that Fanzors could potentially be developed as efficient genome editors.
Co-first author Peiyu Xu led an effort to analyze the molecular structure of the Fanzor/ωRNA complex and illustrate how it latches onto DNA to cut it. Fanzor shares structural similarities with its prokaryotic counterpart CRISPR-Cas12 protein, but the interaction between the ωRNA and the catalytic domains of Fanzor is more extensive, suggesting that the ωRNA might play a role in the catalytic reactions. “We are excited about these structural insights for helping us further engineer and optimize Fanzor for improved efficiency and precision as a genome editor,” said Xu.
Like CRISPR-based systems, the Fanzor system can be easily reprogrammed to target specific genome sites, and Zhang said it could one day be developed into a powerful new genome editing technology for research and therapeutic applications. The abundance of RNA-guided endonucleases like Fanzors further expands the number of OMEGA systems known across kingdoms of life and suggests that there are more yet to be found.
“Nature is amazing. There’s so much diversity,” said Zhang. “There are probably more RNA-programmable systems out there, and we’re continuing to explore and will hopefully discover more.”
The paper’s other authors include Guilhem Faure, Samantha Maguire, Soumya Kannan, Han Altae-Tran, Sam Vo, AnAn Desimone, and Rhiannon Macrae.
Support for this work was provided by the Howard Hughes Medical Institute; Poitras Center for Psychiatric Disorders Research at MIT; K. Lisa Yang and Hock E. Tan Molecular Therapeutics Center at MIT; Broad Institute Programmable Therapeutics Gift Donors; The Pershing Square Foundation, William Ackman, and Neri Oxman; James and Patricia Poitras; BT Charitable Foundation; Asness Family Foundation; Kenneth C. Griffin; the Phillips family; David Cheng; Robert Metcalfe; and Hugo Shong.
The brain and the digestive tract are in constant communication, relaying signals that help to control feeding and other behaviors. This extensive communication network also influences our mental state and has been implicated in many neurological disorders.
MIT engineers have designed a new technology for probing those connections. Using fibers embedded with a variety of sensors, as well as light sources for optogenetic stimulation, the researchers have shown that they can control neural circuits connecting the gut and the brain in mice.
In a new study, the researchers demonstrated that they could induce feelings of fullness or reward-seeking behavior in mice by manipulating cells of the intestine. In future work, they hope to explore some of the correlations that have been observed between digestive health and neurological conditions such as autism and Parkinson’s disease.
“The exciting thing here is that we now have technology that can drive gut function and behaviors such as feeding. More importantly, we have the ability to start accessing the crosstalk between the gut and the brain with the millisecond precision of optogenetics, and we can do it in behaving animals,” says Polina Anikeeva, the Matoula S. Salapatas Professor in Materials Science and Engineering, a professor of brain and cognitive sciences, director of the K. Lisa Yang Brain-Body Center, associate director of MIT’s Research Laboratory of Electronics, and a member of MIT’s McGovern Institute for Brain Research.
Anikeeva is the senior author of the new study, which appears today in Nature Biotechnology. The paper’s lead authors are MIT graduate student Atharva Sahasrabudhe, Duke University postdoc Laura Rupprecht, MIT postdoc Sirma Orguc, and former MIT postdoc Tural Khudiyev.
The brain-body connection
Last year, the McGovern Institute launched the K. Lisa Yang Brain-Body Center to study the interplay between the brain and other organs of the body. Research at the center focuses on illuminating how these interactions help to shape behavior and overall health, with a goal of developing future therapies for a variety of diseases.
“There’s continuous, bidirectional crosstalk between the body and the brain,” Anikeeva says. “For a long time, we thought the brain is a tyrant that sends output into the organs and controls everything. But now we know there’s a lot of feedback back into the brain, and this feedback potentially controls some of the functions that we have previously attributed exclusively to the central neural control.”
As part of the center’s work, Anikeeva set out to probe the signals that pass between the brain and the nervous system of the gut, also called the enteric nervous system. Sensory cells in the gut influence hunger and satiety via both neuronal communication and hormone release.
Untangling those hormonal and neural effects has been difficult because there hasn’t been a good way to rapidly measure the neuronal signals, which occur within milliseconds.
“To be able to perform gut optogenetics and then measure the effects on brain function and behavior, which requires millisecond precision, we needed a device that didn’t exist. So, we decided to make it,” says Sahasrabudhe, who led the development of the gut and brain probes.
The electronic interface that the researchers designed consists of flexible fibers that can carry out a variety of functions and can be inserted into the organs of interest. To create the fibers, Sahasrabudhe used a technique called thermal drawing, which allowed him to create polymer filaments, about as thin as a human hair, that can be embedded with electrodes and temperature sensors.
The filaments also carry microscale light-emitting devices that can be used to optogenetically stimulate cells, and microfluidic channels that can be used to deliver drugs.
The mechanical properties of the fibers can be tailored for use in different parts of the body. For the brain, the researchers created stiffer fibers that could be threaded deep into the brain. For digestive organs such as the intestine, they designed more delicate rubbery fibers that do not damage the lining of the organs but are still sturdy enough to withstand the harsh environment of the digestive tract.
“To study the interaction between the brain and the body, it is necessary to develop technologies that can interface with organs of interest as well as the brain at the same time, while recording physiological signals with high signal-to-noise ratio,” Sahasrabudhe says. “We also need to be able to selectively stimulate different cell types in both organs in mice so that we can test their behaviors and perform causal analyses of these circuits.”
The fibers are also designed so that they can be controlled wirelessly, using an external control circuit that can be temporarily affixed to the animal during an experiment. This wireless control circuit was developed by Orguc, a Schmidt Science Fellow, and Harrison Allen ’20, MEng ’22, who were co-advised between the Anikeeva lab and the lab of Anantha Chandrakasan, dean of MIT’s School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science.
Driving behavior
Using this interface, the researchers performed a series of experiments to show that they could influence behavior through manipulation of the gut as well as the brain.
First, they used the fibers to deliver optogenetic stimulation to a part of the brain called the ventral tegmental area (VTA), which releases dopamine. They placed mice in a cage with three chambers, and when the mice entered one particular chamber, the researchers activated the dopamine neurons. The resulting dopamine burst made the mice more likely to return to that chamber in search of the dopamine reward.
Then, the researchers tried to see if they could also induce that reward-seeking behavior by influencing the gut. To do that, they used fibers in the gut to release sucrose, which also activated dopamine release in the brain and prompted the animals to seek out the chamber they were in when sucrose was delivered.
Next, working with colleagues from Duke University, the researchers found they could induce the same reward-seeking behavior by skipping the sucrose and optogenetically stimulating nerve endings in the gut that provide input to the vagus nerve, which controls digestion and other bodily functions.
“Again, we got this place preference behavior that people have previously seen with stimulation in the brain, but now we are not touching the brain. We are just stimulating the gut, and we are observing control of central function from the periphery,” Anikeeva says.
Sahasrabudhe worked closely with Rupprecht, a postdoc in Professor Diego Bohorquez’s group at Duke, to test the fibers’ ability to control feeding behaviors. They found that the devices could optogenetically stimulate cells that produce cholecystokinin, a hormone that promotes satiety. When this hormone release was activated, the animals’ appetites were suppressed, even though they had been fasting for several hours. The researchers also demonstrated a similar effect when they stimulated cells that produce a peptide called PYY, which normally curbs appetite after very rich foods are consumed.
The researchers now plan to use this interface to study neurological conditions that are believed to have a gut-brain connection. For instance, studies have shown that autistic children are far more likely than their peers to be diagnosed with GI dysfunction, while anxiety and irritable bowel syndrome share genetic risks.
“We can now begin asking, are those coincidences, or is there a connection between the gut and the brain? And maybe there is an opportunity for us to tap into those gut-brain circuits to begin managing some of those conditions by manipulating the peripheral circuits in a way that does not directly ‘touch’ the brain and is less invasive,” Anikeeva says.
The research was funded, in part, by the Hock E. Tan and K. Lisa Yang Center for Autism Research and the K. Lisa Yang Brain-Body Center, the National Institute of Neurological Disorders and Stroke, the National Science Foundation (NSF) Center for Materials Science and Engineering, the NSF Center for Neurotechnology, the National Center for Complementary and Integrative Health, a National Institutes of Health Director’s Pioneer Award, the National Institute of Mental Health, and the National Institute of Diabetes and Digestive and Kidney Diseases.
MIT scientists have developed tiny, soft-bodied robots that can be controlled with a weak magnet. The robots, formed from rubbery magnetic spirals, can be programmed to walk, crawl, and swim, all in response to a simple, easy-to-apply magnetic field.
“This is the first time this has been done, to be able to control three-dimensional locomotion of robots with a one-dimensional magnetic field,” says McGovern associate investigator Polina Anikeeva, whose team reported on the magnetic robots June 3, 2023, in the journal Advanced Materials. “And because they are predominantly composed of polymer and polymers are soft, you don’t need a very large magnetic field to activate them. It’s actually a really tiny magnetic field that drives these robots,” says Anikeeva, who is also the Matoula S. Salapatas Professor in Materials Science and Engineering and a professor of brain and cognitive sciences at MIT, as well as the associate director of MIT’s Research Laboratory of Electronics and director of MIT’s K. Lisa Yang Brain-Body Center.
The new robots are well suited to transport cargo through confined spaces and their rubber bodies are gentle on fragile environments, opening the possibility that the technology could be developed for biomedical applications. Anikeeva and her team have made their robots millimeters long, but she says the same approach could be used to produce much smaller robots.
Engineering magnetic robots
Anikeeva says that until now, magnetic robots have moved in response to moving magnetic fields. She explains that for these models, “if you want your robot to walk, your magnet walks with it. If you want it to rotate, you rotate your magnet.” That limits the settings in which such robots might be deployed. “If you are trying to operate in a really constrained environment, a moving magnet may not be the safest solution. You want to be able to have a stationary instrument that just applies magnetic field to the whole sample,” she explains.
Youngbin Lee, a former graduate student in Anikeeva’s lab, engineered a solution to this problem. The robots he developed in Anikeeva’s lab are not uniformly magnetized. Instead, they are strategically magnetized in different zones and directions so a single magnetic field can enable a movement-driving profile of magnetic forces.
Before they are magnetized, however, the flexible, lightweight bodies of the robots must be fabricated. Lee starts this process with two kinds of rubber, each with a different stiffness. These are sandwiched together, then heated and stretched into a long, thin fiber. Because of the two materials’ different properties, one of the rubbers retains its elasticity through this stretching process, but the other deforms and cannot return to its original size. So when the strain is released, one layer of the fiber contracts, tugging on the other side and pulling the whole thing into a tight coil. Anikeeva says the helical fiber is modeled after the twisty tendrils of a cucumber plant, which spiral when one layer of cells loses water and contracts faster than a second layer.
A third material—one whose particles have the potential to become magnetic—is incorporated in a channel that runs through the rubbery fiber. So once the spiral has been made, a magnetization pattern that enables a particular type of movement can be introduced.
“Youngbin thought very carefully about how to magnetize our robots to make them able to move just as he programmed them to move,” Anikeeva says. “He made calculations to determine how to establish such a profile of forces on it when we apply a magnetic field that it will actually start walking or crawling.”
To form a caterpillar-like crawling robot, for example, the helical fiber is shaped into gentle undulations, and then the body, head, and tail are magnetized so that a magnetic field applied perpendicular to the robot’s plane of motion will cause the body to compress. When the field is reduced to zero, the compression is released, and the crawling robot stretches. Together, these movements propel the robot forward. Another robot in which two foot-like helical fibers are connected with a joint is magnetized in a pattern that enables a movement more like walking.
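The underlying physics can be sketched compactly. A uniform field exerts no net force on a magnetized segment, but it does exert a torque, so zones magnetized in different directions rotate by different amounts, bending the soft body. Below is a minimal sketch of that principle; the zone magnetization directions and field strength are illustrative assumptions, not values from the paper:

```python
# Torque on a magnetic dipole in a uniform field: tau = m x B.
# Zones with different magnetization directions feel different torques,
# which is what lets one stationary field reshape the whole robot.
# The directions and field strength here are illustrative assumptions.
import numpy as np

B = np.array([0.0, 0.0, 0.01])  # 10 mT field applied perpendicular to the plane of motion

zones = {
    "head": np.array([1.0, 0.0, 0.0]),   # unit magnetic moments, zone by zone
    "body": np.array([0.0, 1.0, 0.0]),
    "tail": np.array([-1.0, 0.0, 0.0]),
}

for name, m in zones.items():
    print(name, "torque (N*m per unit moment):", np.cross(m, B))
```

Because the head and tail in this sketch carry opposite moments, the same field twists them in opposite directions, compressing the body; removing the field lets the elastic rubber spring back.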
Biomedical potential
This precise magnetization process generates a program for each robot and ensures that once the robots are made, they are simple to control. A weak magnetic field activates each robot’s program and drives its particular type of movement. A single magnetic field can even send multiple robots moving in opposite directions, if they have been programmed to do so. The team found that one minor manipulation of the magnetic field has a useful effect: With the flip of a switch to reverse the field, a cargo-carrying robot can be made to gently shake and release its payload.
Anikeeva says she can imagine these soft-bodied robots—whose straightforward production will be easy to scale up—delivering materials through narrow pipes or even inside the human body. For example, they might carry a drug through narrow blood vessels, releasing it exactly where it is needed. She says the magnetically actuated devices have biomedical potential beyond robots as well, and might one day be incorporated into artificial muscles or materials that support tissue regeneration.
Maedbh King came to MIT to make a difference in mental health. As a postdoctoral fellow in the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center, she is building computer models aimed at helping clinicians improve diagnosis and treatment, especially for young people with neurodevelopmental and psychiatric disorders.
Tapping two large patient-data sources, King is working to analyze critical biological and behavioral information to better categorize patients’ mental health conditions, including autism spectrum disorder, attention-deficit hyperactivity disorder (ADHD), anxiety, and suicidal thoughts — and to provide more predictive approaches to addressing them. Her strategy reflects the center’s commitment to a holistic understanding of human brain function using theoretical and computational neuroscience.
“Today, treatment decisions for psychiatric disorders are derived entirely from symptoms, which leaves clinicians and patients trying one treatment and, if it doesn’t work, trying another,” says King. “I hope to help change that.”
King grew up in Dublin, Ireland, and studied psychology in college; gained neuroimaging and programming skills while earning a master’s degree from Western University in Canada; and received her doctorate from the University of California, Berkeley, where she built maps and models of the human brain. In fall 2022, King joined the lab of Satrajit Ghosh, a McGovern Institute principal research scientist whose team uses neuroimaging, speech communication, and machine learning to improve assessments and treatments for mental health and neurological disorders.
Big-data insights
King is pursuing several projects using the Healthy Brain Network, a landmark mental health study of children and adolescents in New York City. She and lab colleagues are extracting data from cognitive and other assessments — such as language patterns, favorite school subjects, and family mental illness history — from roughly 4,000 participants to provide a more nuanced understanding of their neurodevelopmental disorders, such as autism or ADHD.
With this database, one can develop “very rich clinical profiles of these young people,” including their challenges and adaptive strengths, King explains. “We’re interested in placing these participants within a spectrum of symptoms, rather than just providing a binary label of, ‘has this disorder’ or ‘doesn’t have it.’ It’s an effort to subtype based on these phenotypic assessments.”
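A minimal sketch of what such phenotype-based subtyping can look like in practice, using standard Python tools (the synthetic data, feature count, and number of subtypes below are hypothetical stand-ins, not details of King’s actual models):

```python
# Hypothetical sketch: cluster participants on assessment scores to find
# subtypes, rather than assigning a single binary diagnostic label.
# The synthetic data, feature count, and cluster count are assumptions.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 5))  # stand-in for ~4,000 participants x 5 assessment scores

X_scaled = StandardScaler().fit_transform(X)          # put scores on a common scale
subtypes = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_scaled)

print("participants per candidate subtype:", np.bincount(subtypes))
```

The point of the exercise is the output’s shape: each participant lands in a region of a multi-dimensional profile space rather than under a yes/no label.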
In other research, King is developing tools to detect risk factors for suicide among adolescents. Working with psychiatrists at Children’s Hospital of Philadelphia, she is using detailed questionnaires from some 20,000 youths who visited the hospital’s emergency department over several years; about one-tenth had tried to take their own lives. The questionnaires collect information about demographics, lifestyle, relationships, and other aspects of patients’ lives.
“One of the big questions the physicians want to answer is, Are there any risk predictors we can identify that can ultimately prevent, or at least mitigate, future suicide attempts?” King says. “Computational models are powerful. They can identify patterns that can’t be obtained with the human eye through electronic records.”
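A minimal sketch of the kind of risk-predictor analysis the physicians describe, on synthetic data (the item count, model choice, and data are assumptions for illustration; real work of this kind requires clinical oversight and careful validation):

```python
# Hypothetical sketch: fit a simple classifier to questionnaire responses
# and inspect which items carry predictive weight. Data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 20_000                              # roughly the size of the cohort described
X = rng.normal(size=(n, 10))            # stand-in questionnaire items
y = (rng.random(n) < 0.1).astype(int)   # ~10% positive rate, as in the cohort

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

print("held-out AUC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))
print("per-item weights:", clf.coef_.round(2))
```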
King is passionate about producing findings to help practitioners, whether they’re clinicians, teachers, parents, or policy makers, and the populations they’re studying. “This applied work,” she says, “should be communicated in a way that can be useful.”
From cameras to self-driving cars, many of today’s technologies depend on artificial intelligence (AI) to extract meaning from visual information. Today’s AI technology has artificial neural networks at its core, and most of the time we can trust these AI computer vision systems to see things the way we do — but sometimes they falter. According to MIT and IBM Research scientists, one way to improve computer vision is to instruct the artificial neural networks that they rely on to deliberately mimic the way the brain’s biological neural network processes visual images.
Researchers led by James DiCarlo, the director of MIT’s Quest for Intelligence and member of the MIT-IBM Watson AI Lab, have made a computer vision model more robust by training it to work like a part of the brain that humans and other primates rely on for object recognition. This May, at the International Conference on Learning Representations (ICLR), the team reported that when they trained an artificial neural network using neural activity patterns in the brain’s inferior temporal (IT) cortex, the artificial neural network was more robustly able to identify objects in images than a model that lacked that neural training. And the model’s interpretations of images more closely matched what humans saw, even when images included minor distortions that made the task more difficult.
Comparing neural circuits
Many of the artificial neural networks used for computer vision already resemble the multi-layered brain circuits that process visual information in humans and other primates. Like the brain, they use neuron-like units that work together to process information. As they are trained for a particular task, these layered components collectively and progressively process the visual information to complete the task, determining, for example, that an image depicts a bear or a car or a tree.
DiCarlo and others previously found that when such deep-learning computer vision systems establish efficient ways to solve visual problems, they end up with artificial circuits that work similarly to the neural circuits that process visual information in our own brains. That is, they turn out to be surprisingly good scientific models of the neural mechanisms underlying primate and human vision.
That resemblance is helping neuroscientists deepen their understanding of the brain. By demonstrating ways visual information can be processed to make sense of images, computational models suggest hypotheses about how the brain might accomplish the same task. As developers continue to refine computer vision models, neuroscientists have found new ideas to explore in their own work.
“As vision systems get better at performing in the real world, some of them turn out to be more human-like in their internal processing. That’s useful from an understanding biology point of view,” says DiCarlo, who is also a professor of brain and cognitive sciences and an investigator at the McGovern Institute.
Engineering more brain-like AI
While their potential is promising, computer vision systems are not yet perfect models of human vision. DiCarlo suspected one way to improve computer vision may be to incorporate specific brain-like features into these models.
To test this idea, he and his collaborators built a computer vision model using neural data previously collected from vision-processing neurons in the monkey IT cortex — a key part of the primate ventral visual pathway involved in the recognition of objects — while the animals viewed various images. More specifically, Joel Dapello, a Harvard graduate student and former MIT-IBM Watson AI Lab intern, and Kohitij Kar, an assistant professor and Canada Research Chair in Visual Neuroscience at York University and a visiting scientist at MIT, in collaboration with David Cox, IBM Research’s VP for AI Models and IBM director of the MIT-IBM Watson AI Lab, and other researchers at IBM Research and MIT, asked an artificial neural network to emulate the behavior of these primate vision-processing neurons while the network learned to identify objects in a standard computer vision task.
“In effect, we said to the network, ‘please solve this standard computer vision task, but please also make the function of one of your inside simulated “neural” layers be as similar as possible to the function of the corresponding biological neural layer,’” DiCarlo explains. “We asked it to do both of those things as best it could.” This forced the artificial neural circuits to find a different way to process visual information than the standard, computer vision approach, he says.
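A minimal sketch of that dual objective in PyTorch (the tiny architecture, the linear readout from a hidden layer to recorded neurons, and the equal loss weighting are illustrative assumptions, not the study’s actual configuration):

```python
# Sketch of neural alignment: minimize a task loss plus a penalty for
# mismatch between one internal layer and recorded IT responses.
# Architecture, loss weighting, and data here are illustrative stand-ins.
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self, n_classes=10, n_neurons=100):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.it_layer = nn.Linear(256, 256)          # layer to be neurally aligned
        self.to_neurons = nn.Linear(256, n_neurons)  # readout to recorded sites
        self.classifier = nn.Linear(256, n_classes)

    def forward(self, x):
        h = torch.relu(self.it_layer(self.features(x)))
        return self.classifier(h), self.to_neurons(h)

model = TinyNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
task_loss_fn, neural_loss_fn = nn.CrossEntropyLoss(), nn.MSELoss()

# One illustrative step: random stand-ins for images, labels, and the
# recorded IT responses to those same images.
images, labels = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
it_responses = torch.randn(8, 100)

logits, predicted_neurons = model(images)
loss = task_loss_fn(logits, labels) + neural_loss_fn(predicted_neurons, it_responses)
opt.zero_grad()
loss.backward()
opt.step()
```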
After training the artificial model with biological data, DiCarlo’s team compared its activity to a similarly sized neural network model trained without neural data, using the standard approach for computer vision. They found that the new, biologically informed model’s IT layer was, as instructed, a better match for IT neural data. That is, for every image tested, the population of artificial IT neurons in the model responded more similarly to the corresponding population of biological IT neurons.
The researchers also found that the model IT layer was a better match to IT neural data collected from another monkey, even though the model had never seen data from that animal, and even when that comparison was evaluated on that monkey’s IT responses to new images. This indicated that the team’s new, “neurally-aligned” computer model may be an improved model of the neurobiological function of the primate IT cortex — an interesting finding, given that it was previously unknown whether the amount of neural data that can currently be collected from the primate visual system is capable of directly guiding model development.
With their new computer model in hand, the team asked whether the “IT neural alignment” procedure also leads to any changes in the overall behavioral performance of the model. Indeed, they found that the neurally-aligned model was more human-like in its behavior — it tended to succeed in correctly categorizing objects in images for which humans also succeed, and it tended to fail when humans also fail.
Adversarial attacks
The team also found that the neurally-aligned model was more resistant to “adversarial attacks” that developers use to test computer vision and AI systems. In computer vision, adversarial attacks introduce small distortions into images that are meant to mislead an artificial neural network.
“Say that you have an image that the model identifies as a cat. Because you have the knowledge of the internal workings of the model, you can then design very small changes in the image so that the model suddenly thinks it’s no longer a cat,” DiCarlo explains.
These minor distortions don’t typically fool humans, but computer vision models struggle with these alterations. A person who looks at the subtly distorted cat still reliably and robustly reports that it’s a cat. But standard computer vision models are more likely to mistake the cat for a dog, or even a tree.
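One widely used attack of this kind is the fast gradient sign method, sketched below as a general illustration (it is an example of the technique, not necessarily the specific attacks used in the study): each pixel is nudged a tiny amount in whichever direction most increases the model’s loss.

```python
# Sketch of the fast gradient sign method (FGSM). The perturbation is
# imperceptibly small to a person but can flip a model's prediction.
import torch
import torch.nn as nn

def fgsm_perturb(model, images, labels, epsilon=0.01):
    images = images.clone().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(images), labels)
    loss.backward()
    # Step each pixel slightly in the direction that increases the loss
    return (images + epsilon * images.grad.sign()).detach()

# Usage, given any classifier `model`, inputs `x`, and labels `y`:
#   x_adv = fgsm_perturb(model, x, y)
#   before, after = model(x).argmax(1), model(x_adv).argmax(1)
```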
“There must be some internal differences in the way our brains process images that lead to our vision being more resistant to those kinds of attacks,” DiCarlo says. And indeed, the team found that when they made their model more neurally-aligned, it became more robust, correctly identifying more images in the face of adversarial attacks. The model could still be fooled by stronger “attacks,” but so can people, DiCarlo says. His team is now exploring the limits of adversarial robustness in humans.
A few years ago, DiCarlo’s team found they could also improve a model’s resistance to adversarial attacks by designing the first layer of the artificial network to emulate the early visual processing layer in the brain. One key next step is to combine such approaches — making new models that are simultaneously neurally-aligned at multiple visual processing layers.
The new work is further evidence that an exchange of ideas between neuroscience and computer science can drive progress in both fields. “Everybody gets something out of the exciting virtuous cycle between natural/biological intelligence and artificial intelligence,” DiCarlo says. “In this case, computer vision and AI researchers get new ways to achieve robustness and neuroscientists and cognitive scientists get more accurate mechanistic models of human vision.”
This work was supported by the MIT-IBM Watson AI Lab, Semiconductor Research Corporation, DARPA, the Massachusetts Institute of Technology Shoemaker Fellowship, Office of Naval Research, the Simons Foundation, and Canada Research Chair Program.
When interacting with another person, you likely spend part of your time trying to anticipate how they will feel about what you’re saying or doing. This task requires a cognitive skill called theory of mind, which helps us to infer other people’s beliefs, desires, intentions, and emotions.
MIT neuroscientists have now designed a computational model that can predict other people’s emotions — including joy, gratitude, confusion, regret, and embarrassment — approximating human observers’ social intelligence. The model was designed to predict the emotions of people involved in a situation based on the prisoner’s dilemma, a classic game theory scenario in which two people must decide whether to cooperate with their partner or betray them.
To build the model, the researchers incorporated several factors that have been hypothesized to influence people’s emotional reactions, including that person’s desires, their expectations in a particular situation, and whether anyone was watching their actions.
“These are very common, basic intuitions, and what we said is, we can take that very basic grammar and make a model that will learn to predict emotions from those features,” says Rebecca Saxe, the John W. Jarve Professor of Brain and Cognitive Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.
Sean Dae Houlihan PhD ’22, a postdoc at the Neukom Institute for Computational Science at Dartmouth College, is the lead author of the paper, which appears today in Philosophical Transactions A. Other authors include Max Kleiman-Weiner PhD ’18, a postdoc at MIT and Harvard University; Luke Hewitt PhD ’22, a visiting scholar at Stanford University; and Joshua Tenenbaum, a professor of computational cognitive science at MIT and a member of the Center for Brains, Minds, and Machines and MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).
Predicting emotions
While a great deal of research has gone into training computer models to infer someone’s emotional state based on their facial expression, that is not the most important aspect of human emotional intelligence, Saxe says. Much more important is the ability to predict someone’s emotional response to events before they occur.
“The most important thing about what it is to understand other people’s emotions is to anticipate what other people will feel before the thing has happened,” she says. “If all of our emotional intelligence was reactive, that would be a catastrophe.”
To try to model how human observers make these predictions, the researchers used scenarios taken from a British game show called “Golden Balls.” On the show, contestants are paired up with a pot of $100,000 at stake. After negotiating with their partner, each contestant decides, secretly, whether to split the pot or try to steal it. If both decide to split, they each receive $50,000. If one splits and one steals, the stealer gets the entire pot. If both try to steal, no one gets anything.
Depending on the outcome, contestants may experience a range of emotions — joy and relief if both contestants split, surprise and fury if one’s opponent steals the pot, and perhaps guilt mingled with excitement if one successfully steals.
To create a computational model that can predict these emotions, the researchers designed three separate modules. The first module is trained to infer a person’s preferences and beliefs based on their action, through a process called inverse planning.
“This is an idea that says if you see just a little bit of somebody’s behavior, you can probabilistically infer things about what they wanted and expected in that situation,” Saxe says.
Using this approach, the first module can predict contestants’ motivations based on their actions in the game. For example, if someone decides to split in an attempt to share the pot, it can be inferred that they also expected the other person to split. If someone decides to steal, they may have expected the other person to steal, and didn’t want to be cheated. Or, they may have expected the other person to split and decided to try to take advantage of them.
The model can also integrate knowledge about specific players, such as the contestant’s occupation, to help it infer the players’ most likely motivation.
The second module compares the outcome of the game with what each player wanted and expected to happen. Then, a third module predicts what emotions the contestants may be feeling, based on the outcome and what was known about their expectations. This third module was trained to predict emotions based on predictions from human observers about how contestants would feel after a particular outcome. The authors emphasize that this is a model of human social intelligence, designed to mimic how observers causally reason about each other’s emotions, not a model of how people actually feel.
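A minimal sketch of the inverse-planning idea behind the first module (the utilities, fairness bonus, noise level, and priors below are illustrative assumptions, not the paper’s parameters):

```python
# Sketch of inverse planning for split/steal: assume a player noisily
# maximizes expected utility given a fairness weight and a belief about
# the partner, then apply Bayes' rule to infer those hidden variables
# from the observed action. All parameters are illustrative assumptions.
import itertools
import math

POT = 100_000
payoff = {("split", "split"): POT / 2, ("split", "steal"): 0,
          ("steal", "split"): POT,     ("steal", "steal"): 0}

def utility(action, partner_action, fairness_weight):
    # Money won, plus a bonus for acting fairly if the player values fairness
    bonus = fairness_weight * POT if action == "split" else 0
    return payoff[(action, partner_action)] + bonus

def action_likelihood(action, p_partner_splits, fairness_weight, beta=1e-4):
    # Softmax ("noisily rational") choice over expected utilities
    eu = {a: p_partner_splits * utility(a, "split", fairness_weight)
             + (1 - p_partner_splits) * utility(a, "steal", fairness_weight)
          for a in ("split", "steal")}
    z = sum(math.exp(beta * v) for v in eu.values())
    return math.exp(beta * eu[action]) / z

# Posterior over (belief, fairness weight) after observing one "split"
grid = list(itertools.product([0.2, 0.5, 0.8], [0.0, 0.25, 0.5]))
posterior = {g: action_likelihood("split", *g) / len(grid) for g in grid}
norm = sum(posterior.values())
for (belief, fairness), p in posterior.items():
    print(f"P(partner splits)={belief}, fairness={fairness}: {p / norm:.2f}")
```

Observing a “split” shifts probability mass toward players who value fairness and who expected their partner to split, which is the kind of probabilistic inference Saxe describes.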
“From the data, the model learns that what it means, for example, to feel a lot of joy in this situation, is to get what you wanted, to do it by being fair, and to do it without taking advantage,” Saxe says.
Core intuitions
Once the three modules were up and running, the researchers used them on a new dataset from the game show to determine how the model’s emotion predictions compared with the predictions made by human observers. This model performed much better at that task than any previous model of emotion prediction.
The model’s success stems from its incorporation of key factors that the human brain also uses when predicting how someone else will react to a given situation, Saxe says. Those include computations of how a person will evaluate and emotionally react to a situation, based on their desires and expectations, which relate to not only material gain but also how they are viewed by others.
“Our model has those core intuitions, that the mental states underlying emotion are about what you wanted, what you expected, what happened, and who saw. And what people want is not just stuff. They don’t just want money; they want to be fair, but also not to be the sucker, not to be cheated,” she says.
“The researchers have helped build a deeper understanding of how emotions contribute to determining our actions; and then, by flipping their model around, they explain how we can use people’s actions to infer their underlying emotions. This line of work helps us see emotions not just as ‘feelings’ but as playing a crucial, and subtle, role in human social behavior,” says Nick Chater, a professor of behavioral science at the University of Warwick, who was not involved in the study.
In future work, the researchers hope to adapt the model so that it can perform more general predictions based on situations other than the game-show scenario used in this study. They are also working on creating models that can predict what happened in the game based solely on the expression on the faces of the contestants after the results were announced.
The research was funded by the McGovern Institute; the Paul E. and Lilah Newton Brain Science Award; the Center for Brains, Minds, and Machines; the MIT-IBM Watson AI Lab; and the Multidisciplinary University Research Initiative.
When staff in MIT’s Department of Facilities would visualize energy use and carbon-associated emissions by campus buildings, Building 46 always stood out, owing to its energy intensity: the building accounted for 8 percent of MIT’s total campus energy use. This high energy draw was not surprising, as the building is home to the Brain and Cognitive Sciences Complex and a large amount of lab space, but it also made the building a perfect candidate for an energy performance audit to seek out potential energy-saving opportunities.
This audit revealed that several energy efficiency updates to the building’s mechanical systems infrastructure, including optimization of the room-by-room ventilation rates, could result in an estimated 35 percent reduction in energy use, which would in turn lower MIT’s total greenhouse gas emissions by an estimated 2 percent, driving toward the Institute’s goals of net-zero campus emissions by 2026 and elimination of direct campus emissions by 2050.
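The arithmetic behind those headline numbers is easy to check (a rough sketch; the simple product gives the campus energy savings, while the emissions figure also depends on which fuels the savings come from):

```python
# Rough check: Building 46 uses ~8% of campus energy, and the retrofit
# targets a ~35% cut in the building's use.
building_share = 0.08   # Building 46's share of total campus energy
building_cut = 0.35     # projected reduction in the building's energy use

print(f"~{building_share * building_cut:.1%} of total campus energy saved")  # ~2.8%
# The estimated 2 percent emissions reduction is of the same order; the
# exact figure depends on the fuel mix behind the saved energy.
```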
Building energy efficiency projects are not new for MIT. Since 2010, MIT has been engaged in a partnership agreement with utility company Eversource establishing the Efficiency Forward program, empowering MIT to invest in more than 300 energy conservation projects to date and lowering energy consumption on campus for a total calculated savings of approximately 70 million kilowatt hours and 4.2 million therms. But at 418,000 gross square feet, Building 46 is the first energy efficiency project of its size on the campus.
“We’ve never tackled a whole building like this — it’s the first capital project that is technically an energy project,” explains Siobhan Carr, energy efficiency program manager, who was part of the team overseeing the energy audit and lab ventilation performance assessment in the building. “That gives you an idea of the magnitude and complexity of this.”
The project started with the full building energy assessment and lab ventilation risk audit. “We had a team go through every corner of the building and look at every possible opportunity to save energy,” explains Jessica Parks, senior project manager for systems performance and turnover in campus construction. “One of the biggest issues we saw was that there’s a lot of dry lab spaces which are basically offices, but they’re all getting the same ventilation as if they were a high-intensity lab.” Higher ventilation and more frequent air exchange rates draw more energy. By optimizing for the required ventilation rates, there was an opportunity to save energy in nearly every space in the building.
In addition to the optimized ventilation, the project team will convert fume hoods from constant volume to variable volume and install equipment to help the building systems run more efficiently. The team also identified opportunities to work with labs to implement programs such as fume hood hibernation and unoccupied setbacks for temperature and ventilation. As different spaces in the building have varying needs, the energy retrofit will touch all 1,254 spaces in the building — one by one — to implement the different energy measures to reach that estimated 35 percent reduction in energy use.
Although time-consuming and complex, this room-by-room approach has a big benefit in that it has allowed research to continue in the space largely uninterrupted. With a few exceptions, the occupants of Building 46, which include the Department of Brain and Cognitive Sciences, the McGovern Institute for Brain Research, and the Picower Institute for Learning and Memory, have remained in place for the duration of the project. Partners in the MIT Environment, Health and Safety Office are instrumental to this balance of renovations and keeping the building operational during the optimization efforts and are one of several teams across MIT contributing to building efficiency efforts.
The completion date of the building efficiency project is set for 2024, but Carr says that some of the impact of this ongoing work may soon be seen. “We should start to see savings as we move through the building, and we expect to fully realize all of our projected savings a year after completion,” she says, noting that a full year of data is needed for a year-over-year comparison to show the full reduction in energy use.
The impact of the project goes far beyond the footprint of Building 46 as it offers insights and spurred actions for future projects — including buildings 76 and 68, the second- and third-highest energy users on campus. Both buildings recently underwent their own energy audits and lab ventilation performance assessments. The energy efficiency team is now crafting a plan for full-building approaches, much like Building 46. “To date, 46 has presented many learning opportunities, such as how to touch every space in a building while research continues, as well as how to overcome challenges encountered when working on existing systems,” explains Parks. “The good news is that we have developed solutions for those challenges and the teams have been proactively implementing those lessons in our other projects.”
Communication has proven to be another key for these large projects where occupants see the work happening and often play a role in answering questions about their unique space. “People are really engaged, they ask questions about the work, and we ask them about the space they’re in every day,” says Parks. “The Building 46 occupants have been wonderful partners as we worked in all of their spaces, which is paving the way for a successful project.”
The release of Fast Forward in 2021 has also made communications easier, notes Carr, who says the plan helps to frame these projects as part of the big picture — not just a construction interruption. “Fast Forward has brought a visibility into what we’re doing within [MIT] Facilities on these buildings,” she says. “It brings more eyes and ears, and people understand that these projects are happening throughout campus and not just in their own space — we’re all working to reduce energy and to reduce greenhouse gas across campus.”
The Energy Efficiency team will continue to apply that big-picture approach as ongoing building efficiency projects on campus are assessed to reach toward a 10 to 15 percent reduction in energy use and corresponding emissions over the next several years.
Mutations of a gene called Foxp2 have been linked to a type of speech disorder called apraxia that makes it difficult to produce sequences of sound. A new study from MIT and National Yang Ming Chiao Tung University sheds light on how this gene controls the ability to produce speech.
In a study of mice, the researchers found that mutations in Foxp2 disrupt the formation of dendrites and neuronal synapses in the brain’s striatum, which plays important roles in the control of movement. Mice with these mutations also showed impairments in their ability to produce the high-frequency sounds that they use to communicate with other mice.
Those malfunctions arise because Foxp2 mutations prevent the proper assembly of motor proteins, which move molecules within cells, the researchers found.
“These mice have abnormal vocalizations, and in the striatum there are many cellular abnormalities,” says Ann Graybiel, an MIT Institute Professor, a member of MIT’s McGovern Institute for Brain Research, and an author of the paper. “This was an exciting finding. Who would have thought that a speech problem might come from little motors inside cells?”
Fu-Chin Liu PhD ’91, a professor at National Yang Ming Chiao Tung University in Taiwan, is the senior author of the study, which appears today in the journal Brain. Liu and Graybiel also worked together on a 2016 study of the potential link between Foxp2 and autism spectrum disorder. The lead authors of the new Brain paper are Hsiao-Ying Kuo and Shih-Yun Chen of National Yang Ming Chiao Tung University.
Speech control
Children with Foxp2-associated apraxia tend to begin speaking later than other children, and their speech is often difficult to understand. The disorder is believed to arise from impairments in brain regions, such as the striatum, that control the movements of the lips, mouth, and tongue. Foxp2 is also expressed in the brains of songbirds such as zebra finches and is critical to those birds’ ability to learn songs.
Foxp2 encodes a transcription factor, meaning that it can control the expression of many other target genes. Many species express Foxp2, but humans have a special form of Foxp2. In a 2014 study, Graybiel and colleagues found evidence that the human form of Foxp2, when expressed in mice, allowed the mice to accelerate the switch from declarative to procedural types of learning.
In that study, the researchers showed that mice engineered to express the human version of Foxp2, which differs from the mouse version by only two DNA base pairs, were much better at learning mazes and performing other tasks that require turning repeated actions into behavioral routines. Mice with human-like Foxp2 also had longer dendrites — the slender extensions that help neurons form synapses — in the striatum, which is involved in habit formation as well as motor control.
In the new study, the researchers wanted to explore how the Foxp2 mutation that has been linked with apraxia affects speech production, using ultrasonic vocalizations in mice as a proxy for speech. Many rodents and other animals such as bats produce these vocalizations to communicate with each other.
While previous studies, including the work by Liu and Graybiel in 2016, had suggested that Foxp2 affects dendrite growth and synapse formation, the mechanism for how that occurs was not known. In the new study, led by Liu, the researchers investigated one proposed mechanism, which is that Foxp2 affects motor proteins.
One of these molecular motors is the dynein protein complex, a large cluster of proteins that is responsible for shuttling molecules along microtubule scaffolds within cells.
“All kinds of molecules get shunted around to different places in our cells, and that’s certainly true of neurons,” Graybiel says. “There’s an army of tiny molecules that move molecules around in the cytoplasm or put them into the membrane. In a neuron, they may send molecules from the cell body all the way down the axons.”
A delicate balance
The dynein complex is made up of several other proteins. The most important of these is a protein called dynactin1, which interacts with microtubules, enabling the dynein motor to move along microtubules. In the new study, the researchers found that dynactin1 is one of the major targets of the Foxp2 transcription factor.
The researchers focused on the striatum, one of the regions where Foxp2 is most often found, and showed that the mutated version of Foxp2 is unable to suppress dynactin1 production. Without that brake in place, cells generate too much dynactin1. This upsets the delicate balance of dynein-dynactin1, which prevents the dynein motor from moving along microtubules.
Those motors are needed to shuttle molecules that are necessary for dendrite growth and synapse formation on dendrites. With those molecules stranded in the cell body, neurons are unable to form synapses to generate the proper electrophysiological signals they need to make speech production possible.
Mice with the mutated version of Foxp2 had abnormal ultrasonic vocalizations, which typically have a frequency of around 22 to 50 kilohertz. The researchers showed that they could reverse these vocalization impairments and the deficits in the molecular motor activity, dendritic growth, and electrophysiological activity by turning down the gene that encodes dynactin1.
Mutations of Foxp2 can also contribute to autism spectrum disorders and Huntington’s disease, through mechanisms that Liu and Graybiel previously studied in their 2016 paper and that many other research groups are now exploring. Liu’s lab is also investigating the potential role of abnormal Foxp2 expression in the subthalamic nucleus of the brain as a possible factor in Parkinson’s disease.
The research was funded by the Ministry of Science and Technology of Taiwan, the Ministry of Education of Taiwan, the U.S. National Institute of Mental Health, the Saks Kavanaugh Foundation, the Kristin R. Pressman and Jessica J. Pourian ’13 Fund, and Stephen and Anne Kott.