Symposium highlights scale of mental health crisis and novel methods of diagnosis and treatment

Digital technologies, such as smartphones and machine learning, have revolutionized education. At the McGovern Institute for Brain Research’s 2024 Spring Symposium, “Transformational Strategies in Mental Health,” experts from across the sciences — psychiatry, psychology, neuroscience, computer science, and other fields — agreed that these technologies could also play a significant role in advancing the diagnosis and treatment of mental health disorders and neurological conditions.

Co-hosted by the McGovern Institute, MIT Open Learning, McLean Hospital, the Poitras Center for Psychiatric Disorders Research at MIT, and the Wellcome Trust, the symposium raised the alarm about the rise in mental health challenges and showcased the potential for novel diagnostic and treatment methods.

“We have to do something together as a community of scientists and partners of all kinds to make a difference.” – John Gabrieli

John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology at MIT, kicked off the symposium with a call for an effort on par with the Manhattan Project, which in the 1940s saw leading scientists collaborate to do what seemed impossible. While the challenge of mental health is quite different, Gabrieli stressed, the complexity and urgency of the issue are similar. In his later talk, “How can science serve psychiatry to enhance mental health?,” he noted a 35 percent rise in teen suicide deaths between 1999 and 2020 and, between 2007 and 2015, a 100 percent increase in emergency room visits for youths ages 5 to 18 who experienced a suicide attempt or suicidal ideation.

“We have no moral ambiguity, but all of us speaking today are having this meeting in part because we feel this urgency,” said Gabrieli, who is also a professor of brain and cognitive sciences, the director of the Integrated Learning Initiative (MITili) at MIT Open Learning, and a member of the McGovern Institute. “We have to do something together as a community of scientists and partners of all kinds to make a difference.”

An urgent problem

In 2021, U.S. Surgeon General Vivek Murthy issued an advisory on the increase in mental health challenges in youth; in 2023, he issued another, warning of the effects of social media on youth mental health. At the symposium, Susan Whitfield-Gabrieli, a research affiliate at the McGovern Institute and a professor of psychology and director of the Biomedical Imaging Center at Northeastern University, cited these recent advisories, saying they underscore the need to “innovate new methods of intervention.”

Other symposium speakers also highlighted evidence of growing mental health challenges for youth and adolescents. Christian Webb, associate professor of psychology at Harvard Medical School, stated that by the end of adolescence, 15-20 percent of teens will have experienced at least one episode of clinical depression, with girls facing the highest risk. Most teens who experience depression receive no treatment, he added.

Adults who experience mental health challenges need new interventions, too. John Krystal, the Robert L. McNeil Jr. Professor of Translational Research and chair of the Department of Psychiatry at Yale University School of Medicine, pointed to the limited efficacy of antidepressants, which typically take about two months to have an effect on the patient. Patients with treatment-resistant depression face a 75 percent likelihood of relapse within a year of starting antidepressants. Treatments for other mental health disorders, including bipolar and psychotic disorders, have serious side effects that can deter patients from adherence, said Virginie-Anne Chouinard, director of research at McLean OnTrack™, a program for first episode psychosis at McLean Hospital.

New treatments, new technologies

Emerging technologies, including smartphone technology and artificial intelligence, are key to the interventions that symposium speakers shared.

In a talk on AI and the brain, Dina Katabi, the Thuan and Nicole Pham Professor of Electrical Engineering and Computer Science at MIT, discussed novel ways to detect Parkinson’s and Alzheimer’s, among other diseases. Early-stage research involved developing devices that can analyze how movement within a space impacts the surrounding electromagnetic field, as well as how wireless signals can detect breathing and sleep stages.

“I realize this may sound like la-la land,” Katabi said. “But it’s not! This device is used today by real patients, enabled by a revolution in neural networks and AI.”

Parkinson’s disease often cannot be diagnosed until significant impairment has already occurred. In a set of studies, Katabi’s team collected data on nocturnal breathing and trained a custom neural network to detect occurrences of Parkinson’s. They found the network was over 90 percent accurate in its detection. Next, the team used AI to analyze two sets of breathing data collected from patients at a six-year interval. Could their custom neural network identify patients who did not have a Parkinson’s diagnosis on the first visit, but subsequently received one? The answer was largely yes: Machine learning identified 75 percent of patients who would go on to receive a diagnosis.
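The two measures reported above — overall detection accuracy and the fraction of later-diagnosed patients flagged at the first visit — can be sketched in a few lines. The data and function names below are invented for illustration, not from Katabi's study:

```python
# Toy illustration of the two evaluation measures described above;
# all data here are hypothetical, not from the actual study.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def early_detection_rate(visit1_flags, later_diagnosed):
    """Among patients diagnosed only at the second visit (six years later),
    the fraction the model had already flagged at the first visit."""
    flagged = [f for f, d in zip(visit1_flags, later_diagnosed) if d]
    return sum(flagged) / len(flagged)

print(accuracy([1, 0, 1, 1], [1, 0, 0, 1]))                          # 0.75
print(early_detection_rate([1, 1, 1, 0], [True, True, True, True]))  # 0.75
```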

Detecting high-risk patients at an early stage could make a substantial difference for intervention and treatment. Similarly, research by Jordan Smoller, professor of psychiatry at Harvard Medical School and director of the Center for Precision Psychiatry at Massachusetts General Hospital, demonstrated that an AI-aided suicide risk prediction model could detect 45 percent of suicide attempts or deaths with 90 percent specificity, about two to three years in advance.
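The two figures quoted for the risk model — detecting 45 percent of events at 90 percent specificity — are standard confusion-matrix quantities. A hedged sketch with a made-up toy cohort, not the study's data:

```python
# Sensitivity: share of true events the model flags.
# Specificity: share of non-events the model correctly leaves unflagged.
# The cohort below is invented for illustration.

def sensitivity_specificity(y_true, y_pred):
    """y_true, y_pred: 0/1 lists (1 = event occurred / flagged as high risk)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# 20 patients, 4 with an event; the model flags 3 of those plus 2 others.
y_true = [1] * 4 + [0] * 16
y_pred = [1, 1, 1, 0] + [1, 1] + [0] * 14
sens, spec = sensitivity_specificity(y_true, y_pred)
print(sens, spec)  # 0.75 0.875
```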

Other presentations, including a series of lightning talks, shared new and emerging treatments, such as the use of ketamine to treat depression; the use of smartphones, including daily text surveys and mindfulness apps, in treating depression in adolescents; metabolic interventions for psychotic disorders; the use of machine learning to detect impairment from THC intoxication; and family-focused treatment, rather than individual therapy, for youth depression.

Advancing understanding

The frequency and severity of adverse mental health events for children, adolescents, and adults demonstrate the necessity of funding for mental health research — and the open sharing of these findings.

Niall Boyce, head of mental health field building at the Wellcome Trust — a global charitable foundation dedicated to using science to solve urgent health challenges — outlined the foundation’s funding philosophy of supporting research that is “collaborative, coherent, and focused” and centers on “What is most important to those most affected?” Wellcome research managers Anum Farid and Tayla McCloud stressed the importance of projects that involve people with lived experience of mental health challenges and “blue sky thinking” that takes risks and can advance understanding in innovative ways. Wellcome requires that all published research resulting from its funding be open and accessible in order to maximize its benefits.

Whether through therapeutic models, pharmaceutical treatments, or machine learning, symposium speakers agreed that transformative approaches to mental health call for collaboration and innovation.

“Understanding mental health requires us to understand the unbelievable diversity of humans,” Gabrieli said. “We have to use all the tools we have now to develop new treatments that will work for people for whom our conventional treatments don’t.”

Just thinking about a location activates mental maps in the brain

As you travel your usual route to work or the grocery store, your brain engages cognitive maps stored in your hippocampus and entorhinal cortex. These maps store information about paths you have taken and locations you have visited before, so you can navigate easily whenever you return.

New research from MIT has found that such mental maps also are created and activated when you merely think about sequences of experiences, in the absence of any physical movement or sensory input. In an animal study, the researchers found that the entorhinal cortex harbors a cognitive map of what animals experience while they use a joystick to browse through a sequence of images. These cognitive maps are then activated when thinking about these sequences, even when the images are not visible.

This is the first study to show the cellular basis of mental simulation and imagination in a nonspatial domain through activation of a cognitive map in the entorhinal cortex.

“These cognitive maps are being recruited to perform mental navigation, without any sensory input or motor output. We are able to see a signature of this map presenting itself as the animal is going through these experiences mentally,” says Mehrdad Jazayeri, an associate professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

McGovern Institute Research Scientist Sujaya Neupane is the lead author of the paper, which appears today in Nature. Ila Fiete, a professor of brain and cognitive sciences at MIT, a member of MIT’s McGovern Institute for Brain Research, and director of the K. Lisa Yang Integrative Computational Neuroscience Center, is also an author of the paper.

Mental maps

A great deal of work in animal models and humans has shown that representations of physical locations are stored in the hippocampus, a small seahorse-shaped structure, and the nearby entorhinal cortex. These representations are activated whenever an animal moves through a space that it has been in before, just before it traverses the space, or when it is asleep.

“Most prior studies have focused on how these areas reflect the structures and the details of the environment as an animal moves physically through space,” Jazayeri says. “When an animal moves in a room, its sensory experiences are nicely encoded by the activity of neurons in the hippocampus and entorhinal cortex.”

In the new study, Jazayeri and his colleagues wanted to explore whether these cognitive maps are also built and then used during purely mental run-throughs or imagining of movement through nonspatial domains.

To explore that possibility, the researchers trained animals to use a joystick to trace a path through a sequence of images (“landmarks”) spaced at regular temporal intervals. During the training, the animals were shown only a subset of pairs of images but not all the pairs. Once the animals had learned to navigate through the training pairs, the researchers tested if animals could handle the new pairs they had never seen before.

One possibility is that the animals do not learn a cognitive map of the sequence and instead solve the task using a memorization strategy. If so, they would be expected to struggle with the new pairs. If, on the other hand, the animals relied on a cognitive map, they should be able to generalize their knowledge to the new pairs.
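The contrast between the two strategies can be made concrete with a toy model (the landmark names and trained pairs here are illustrative, not the actual stimuli): a cognitive map stores each landmark's place in the sequence and therefore handles any pair, while a pure memorizer knows only the pairs it was trained on.

```python
# Toy contrast between a cognitive-map strategy and rote memorization.
landmarks = ["A", "B", "C", "D", "E"]
position = {lm: i for i, lm in enumerate(landmarks)}
trained_pairs = {("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")}

def map_strategy(start, goal):
    """Knows the sequence structure: works for ANY pair of landmarks."""
    return position[goal] - position[start]

def memorizer(start, goal):
    """Recalls only trained pairs; fails on novel ones."""
    return 1 if (start, goal) in trained_pairs else None

print(map_strategy("A", "D"))  # 3 steps: a novel pair, solved via the map
print(memorizer("A", "D"))     # None: this pair was never trained
```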

“The results were unequivocal,” Jazayeri says. “Animals were able to mentally navigate between the new pairs of images from the very first time they were tested. This finding provided strong behavioral evidence for the presence of a cognitive map. But how does the brain establish such a map?”

To address this question, the researchers recorded from single neurons in the entorhinal cortex as the animals performed this task. Neural responses had a striking feature: As the animals used the joystick to navigate between two landmarks, neurons featured distinctive bumps of activity associated with the mental representation of the intervening landmarks.

“The brain goes through these bumps of activity at the expected time when the intervening images would have passed by the animal’s eyes, which they never did,” Jazayeri says. “And the timing between these bumps, critically, was exactly the timing that the animal would have expected to reach each of those, which in this case was 0.65 seconds.”

The researchers also showed that the speed of the mental simulation was related to the animals’ performance on the task: When they were a little late or early in completing the task, their brain activity showed a corresponding change in timing. The researchers also found evidence that the mental representations in the entorhinal cortex don’t encode specific visual features of the images, but rather the ordinal arrangement of the landmarks.
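Assuming only the regular spacing described above, the expected bump times follow directly from the 0.65-second inter-landmark interval; a speed factor stands in for the observed link between simulation speed and task timing (this framing is a simplification, not the paper's analysis):

```python
INTERVAL_S = 0.65  # inter-landmark interval reported for the task

def expected_bump_times(n_intervening, speed=1.0):
    """Expected times (s) of activity bumps for each intervening landmark.
    speed > 1 models an animal mentally navigating faster than trained."""
    return [round((i + 1) * INTERVAL_S / speed, 3) for i in range(n_intervening)]

print(expected_bump_times(3))             # [0.65, 1.3, 1.95]
print(expected_bump_times(3, speed=1.3))  # earlier bumps: [0.5, 1.0, 1.5]
```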

A model of learning

To further explore how these cognitive maps may work, the researchers built a computational model to mimic the brain activity that they found and demonstrate how it could be generated. They used a type of model known as a continuous attractor model, which was originally developed to model how the entorhinal cortex tracks an animal’s position as it moves, based on sensory input.

The researchers customized the model by adding a component that learned the activity patterns generated by sensory input. The model could then use those learned patterns to reconstruct the same experiences later, when there was no sensory input.

“The key element that we needed to add is that this system has the capacity to learn bidirectionally by communicating with sensory inputs. Through the associational learning that the model goes through, it will actually recreate those sensory experiences,” Jazayeri says.
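That associative component can be caricatured in a few lines (this is a generic Hebbian sketch of the idea, not the authors' model): one-hot position states stand in for the attractor's activity bump, and outer-product weights learned during "exposure" let a position signal alone recreate its paired sensory pattern.

```python
import numpy as np

n_pos, n_sense = 5, 16
rng = np.random.default_rng(0)
positions = np.eye(n_pos)                           # one "bump" per landmark
sensory = rng.choice([0.0, 1.0], (n_pos, n_sense))  # pattern seen at each landmark

# Hebbian associative learning: strengthen weights between co-active units.
W = sum(np.outer(s, p) for p, s in zip(positions, sensory))

# Recall with no sensory input: the bump for landmark 3 recreates its pattern.
recalled = W @ positions[3]
print(np.array_equal(recalled, sensory[3]))  # True
```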

The researchers now plan to investigate what happens in the brain if the landmarks are not evenly spaced, or if they’re arranged in a ring. They also hope to record brain activity in the hippocampus and entorhinal cortex as the animals first learn to perform the navigation task.

“Seeing the memory of the structure become crystallized in the mind, and how that leads to the neural activity that emerges, is a really valuable way of asking how learning happens,” Jazayeri says.

The research was funded by the Natural Sciences and Engineering Research Council of Canada, the Québec Research Funds, the National Institutes of Health, and the Paul and Lilah Newton Brain Science Award.

Nancy Kanwisher, Robert Langer, and Sara Seager named Kavli Prize Laureates

MIT faculty members Nancy Kanwisher, Robert Langer, and Sara Seager are among eight researchers worldwide to receive this year’s Kavli Prizes.

A partnership among the Norwegian Academy of Science and Letters, the Norwegian Ministry of Education and Research, and the Kavli Foundation, the Kavli Prizes are awarded every two years to “honor scientists for breakthroughs in astrophysics, nanoscience and neuroscience that transform our understanding of the big, the small and the complex.” The laureates in each field will share $1 million.

Understanding recognition of faces

Nancy Kanwisher, the Walter A. Rosenblith Professor of Brain and Cognitive Sciences and an investigator at the McGovern Institute for Brain Research, has been awarded the 2024 Kavli Prize in Neuroscience with Doris Tsao, professor in the Department of Molecular and Cell Biology at the University of California at Berkeley, and Winrich Freiwald, the Denise A. and Eugene W. Chinery Professor at the Rockefeller University.

Kanwisher, Tsao, and Freiwald discovered a specialized system within the brain to recognize faces. Their discoveries have established basic principles of neural organization and provided the starting point for further research on how the processing of visual information is integrated with other cognitive functions.

Kanwisher was the first to prove that a specific area in the human neocortex is dedicated to recognizing faces, now called the fusiform face area. Using functional magnetic resonance imaging, she found individual differences in the location of this area and devised an analysis technique to effectively localize specialized functional regions in the brain. This technique is now widely used and applied to domains beyond the face recognition system.

Integrating nanomaterials for biomedical advances

Robert Langer, the David H. Koch Institute Professor, has been awarded the 2024 Kavli Prize in Nanoscience with Paul Alivisatos, president of the University of Chicago and John D. MacArthur Distinguished Service Professor in the Department of Chemistry, and Chad Mirkin, professor of chemistry at Northwestern University.

Langer, Alivisatos, and Mirkin each revolutionized the field of nanomedicine by demonstrating how engineering at the nano scale can advance biomedical research and application. Their discoveries contributed foundationally to the development of therapeutics, vaccines, bioimaging, and diagnostics.

Langer was the first to develop nanoengineered materials that enabled the controlled release, or regular flow, of drug molecules. This capability has had an immense impact on the treatment of a range of diseases, such as aggressive brain cancer, prostate cancer, and schizophrenia. His work also showed that tiny particles containing protein antigens can be used in vaccination, and was instrumental in the development of delivery systems for messenger RNA vaccines.

Searching for life beyond Earth

Sara Seager, the Class of 1941 Professor of Planetary Sciences in the Department of Earth, Atmospheric and Planetary Sciences and a professor in the departments of Physics and of Aeronautics and Astronautics, has been awarded the 2024 Kavli Prize in Astrophysics along with David Charbonneau, the Fred Kavli Professor of Astrophysics at Harvard University.

Seager and Charbonneau are recognized for discoveries of exoplanets and the characterization of their atmospheres. They pioneered methods for the detection of atomic species in planetary atmospheres and the measurement of their thermal infrared emission, setting the stage for finding the molecular fingerprints of atmospheres around both giant and rocky planets. Their contributions have been key to the enormous progress seen in the last 20 years in the exploration of myriad exoplanets.

Kanwisher, Langer, and Seager bring the number of all-time MIT faculty recipients of the Kavli Prize to eight. Prior winners include Rainer Weiss in astrophysics (2016), Alan Guth in astrophysics (2014), Mildred Dresselhaus in nanoscience (2012), Ann Graybiel in neuroscience (2012), and Jane Luu in astrophysics (2012).

MIT scientists learn how to control muscles with light

For people with paralysis or amputation, neuroprosthetic systems that artificially stimulate muscle contraction with electrical current can help them regain limb function. However, despite many years of research, this type of prosthesis is not widely used because it leads to rapid muscle fatigue and poor control.

McGovern Institute Associate Investigator Hugh Herr. Photo: Jimmy Day / MIT Media Lab

MIT researchers have developed a new approach that they hope could someday offer better muscle control with less fatigue. Instead of using electricity to stimulate muscles, they used light. In a study in mice, the researchers showed that this optogenetic technique offers more precise muscle control, along with a dramatic decrease in fatigue.

“It turns out that by using light, through optogenetics, one can control muscle more naturally. In terms of clinical application, this type of interface could have very broad utility,” says Hugh Herr, a professor of media arts and sciences, co-director of the K. Lisa Yang Center for Bionics at MIT, and an associate member of MIT’s McGovern Institute for Brain Research.

Optogenetics is a method based on genetically engineering cells to express light-sensitive proteins, which allows researchers to control activity of those cells by exposing them to light. This approach is currently not feasible in humans, but Herr, MIT graduate student Guillermo Herrera-Arcos, and their colleagues at the K. Lisa Yang Center for Bionics are now working on ways to deliver light-sensitive proteins safely and effectively into human tissue.

Herr is the senior author of the study, which appears today in Science Robotics. Herrera-Arcos is the lead author of the paper.

Optogenetic control

For decades, researchers have been exploring the use of functional electrical stimulation (FES) to control muscles in the body. This method involves implanting electrodes that stimulate nerve fibers, causing a muscle to contract. However, this stimulation tends to activate the entire muscle at once, which is not the way that the human body naturally controls muscle contraction.

“Humans have this incredible control fidelity that is achieved by a natural recruitment of the muscle, where small motor units, then moderate-sized, then large motor units are recruited, in that order, as signal strength is increased,” Herr says. “With FES, when you artificially blast the muscle with electricity, the largest units are recruited first. So, as you increase signal, you get no force at the beginning, and then suddenly you get too much force.”
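Herr's point about recruitment order can be illustrated with toy motor-unit forces (the numbers are invented): summing units smallest-first gives a smooth force ramp, while largest-first produces the sudden jump he describes.

```python
# Toy motor units, smallest to largest (arbitrary force units).
unit_forces = [1, 2, 4, 8, 16]

def total_force(recruit_order, n_recruited):
    """Force produced when the first n_recruited units in the order are active."""
    return sum(recruit_order[:n_recruited])

natural = sorted(unit_forces)            # natural control: small units first
fes = sorted(unit_forces, reverse=True)  # electrical stimulation: large first

print([total_force(natural, n) for n in range(6)])  # [0, 1, 3, 7, 15, 31]
print([total_force(fes, n) for n in range(6)])      # [0, 16, 24, 28, 30, 31]
```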

This large force not only makes it harder to achieve fine muscle control, it also wears out the muscle quickly, within five or 10 minutes.

The MIT team wanted to see if they could replace that entire interface with something different. Instead of electrodes, they decided to try controlling muscle contraction using optical molecular machines via optogenetics.

“This could lead to a minimally invasive strategy that would change the game in terms of clinical care for persons suffering from limb pathology,” Hugh Herr says, pictured on left next to Herrera-Arcos.

Using mice as an animal model, the researchers compared the amount of muscle force they could generate using the traditional FES approach with forces generated by their optogenetic method. For the optogenetic studies, they used mice that had already been genetically engineered to express a light-sensitive protein called channelrhodopsin-2. They implanted a small light source near the tibial nerve, which controls muscles of the lower leg.

The researchers measured muscle force as they gradually increased the amount of light stimulation, and found that, unlike FES stimulation, optogenetic control produced a steady, gradual increase in contraction of the muscle.

“As we change the optical stimulation that we deliver to the nerve, we can proportionally, in an almost linear way, control the force of the muscle. This is similar to how the signals from our brain control our muscles. Because of this, it becomes easier to control the muscle compared with electrical stimulation,” Herrera-Arcos says.

Fatigue resistance

Using data from those experiments, the researchers created a mathematical model of optogenetic muscle control. This model relates the amount of light going into the system to the output of the muscle (how much force is generated).

This mathematical model allowed the researchers to design a closed-loop controller. In this type of system, the controller delivers a stimulatory signal, and after the muscle contracts, a sensor can detect how much force the muscle is exerting. This information is sent back to the controller, which calculates if, and how much, the light stimulation needs to be adjusted to reach the desired force.
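A schematic version of such a loop, using a toy, roughly linear muscle model (per the near-linear light-to-force relationship described earlier; the gains and plant here are illustrative, not from the paper):

```python
def muscle_force(light):
    """Toy plant: force grows roughly linearly with light intensity."""
    return 2.0 * light

def closed_loop(target_force, steps=50, gain=0.2):
    """Proportional feedback: adjust light until measured force hits the target."""
    light = 0.0
    for _ in range(steps):
        measured = muscle_force(light)          # force-sensor reading
        error = target_force - measured         # distance from desired force
        light = max(light + gain * error, 0.0)  # intensity can't be negative
    return muscle_force(light)

print(round(closed_loop(10.0), 3))  # 10.0 — settles at the target force
```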

Using this type of control, the researchers found that muscles could be stimulated for more than an hour before fatiguing, while muscles became fatigued after only 15 minutes using FES stimulation.

One hurdle the researchers are now working to overcome is how to safely deliver light-sensitive proteins into human tissue. Several years ago, Herr’s lab reported that in rats, these proteins can trigger an immune response that inactivates the proteins and could also lead to muscle atrophy and cell death.

“A key objective of the K. Lisa Yang Center for Bionics is to solve that problem,” Herr says. “A multipronged effort is underway to design new light-sensitive proteins, and strategies to deliver them, without triggering an immune response.”

As additional steps toward reaching human patients, Herr’s lab is also working on new sensors that can be used to measure muscle force and length, as well as new ways to implant the light source. If successful, the researchers hope their strategy could benefit people who have experienced strokes, limb amputation, and spinal cord injuries, as well as others who have impaired ability to control their limbs.

“This could lead to a minimally invasive strategy that would change the game in terms of clinical care for persons suffering from limb pathology,” Herr says.

The research was funded by the K. Lisa Yang Center for Bionics at MIT.

Five MIT faculty elected to the National Academy of Sciences for 2024

The National Academy of Sciences has elected 120 members and 24 international members, including five faculty members from MIT. Guoping Feng, Piotr Indyk, Daniel J. Kleitman, Daniela Rus, and Senthil Todadri were elected in recognition of their “distinguished and continuing achievements in original research.” Membership in the National Academy of Sciences is one of the highest honors a scientist can receive in their career.

Among the new members added this year are nine MIT alumni: Zvi Bern ’82; Harold Hwang ’93, SM ’93; Leonard Kleinrock SM ’59, PhD ’63; Jeffrey C. Lagarias ’71, SM ’72, PhD ’74; Ann Pearson PhD ’00; Robin Pemantle PhD ’88; Jonas C. Peters PhD ’98; Lynn Talley PhD ’82; and Peter T. Wolczanski ’76. Those elected this year bring the total number of active members to 2,617, with 537 international members.

The National Academy of Sciences is a private, nonprofit institution that was established under a congressional charter signed by President Abraham Lincoln in 1863. It recognizes achievement in science by election to membership, and — with the National Academy of Engineering and the National Academy of Medicine — provides science, engineering, and health policy advice to the federal government and other organizations.

Guoping Feng

Guoping Feng is the James W. (1963) and Patricia T. Poitras Professor in the Department of Brain and Cognitive Sciences. He is also associate director and investigator in the McGovern Institute for Brain Research, a member of the Broad Institute of MIT and Harvard, and director of the Hock E. Tan and K. Lisa Yang Center for Autism Research.

His research focuses on understanding the molecular mechanisms that regulate the development and function of synapses, the places in the brain where neurons connect and communicate. He’s interested in how defects in the synapses can contribute to psychiatric and neurodevelopmental disorders. By understanding the fundamental mechanisms behind these disorders, he’s producing foundational knowledge that may guide the development of new treatments for conditions like obsessive-compulsive disorder and schizophrenia.

Feng received his medical training at Zhejiang University Medical School in Hangzhou, China, and his PhD in molecular genetics from the State University of New York at Buffalo. He did his postdoctoral training at Washington University at St. Louis and was on the faculty at Duke University School of Medicine before coming to MIT in 2010. He is a member of the American Academy of Arts and Sciences, a fellow of the American Association for the Advancement of Science, and was elected to the National Academy of Medicine in 2023.

Piotr Indyk

Piotr Indyk is the Thomas D. and Virginia W. Cabot Professor of Electrical Engineering and Computer Science. He received his magister degree from the University of Warsaw and his PhD from Stanford University before coming to MIT in 2000.

Indyk’s research focuses on building efficient, sublinear, and streaming algorithms. He’s developed, for example, algorithms that can use limited time and space to navigate massive data streams, that can separate signals into individual frequencies faster than other methods, and that can address the “nearest neighbor” problem by finding highly similar data points without needing to scan an entire database. His work has applications in everything from machine learning to data mining.

He has been named a Simons Investigator and a fellow of the Association for Computing Machinery. In 2023, he was elected to the American Academy of Arts and Sciences.

Daniel J. Kleitman

Daniel Kleitman, a professor emeritus of applied mathematics, has been at MIT since 1966. He received his undergraduate degree from Cornell University and his master’s and PhD in physics from Harvard University before doing postdoctoral work at Harvard and the Niels Bohr Institute in Copenhagen, Denmark.

Kleitman’s research interests include operations research, genomics, graph theory, and combinatorics, the area of math concerned with counting. He was actually a professor of physics at Brandeis University before changing his field to math, encouraged by the prolific mathematician Paul Erdős. In fact, Kleitman has the rare distinction of having an Erdős number of just one. The number is a measure of the “collaborative distance” between a mathematician and Erdős in terms of authorship of papers, and studies have shown that leading mathematicians have particularly low numbers.

He’s a member of the American Academy of Arts and Sciences and has made important contributions to the MIT community throughout his career. He was head of the Department of Mathematics and served on a number of committees, including the Applied Mathematics Committee. He also helped create web-based technology and an online textbook for several of the department’s core undergraduate courses. He was even a math advisor for the MIT-based film “Good Will Hunting.”

Daniela Rus

Daniela Rus, the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science, is the director of the Computer Science and Artificial Intelligence Laboratory (CSAIL). She also serves as director of the Toyota-CSAIL Joint Research Center.

Her research on robotics, artificial intelligence, and data science is geared toward understanding the science and engineering of autonomy. Her ultimate goal is to create a future where machines are seamlessly integrated into daily life to support people with cognitive and physical tasks, and deployed in a way that ensures they benefit humanity. She’s working to increase the ability of machines to reason, learn, and adapt to complex tasks in human-centered environments with applications for agriculture, manufacturing, medicine, construction, and other industries. She’s also interested in creating new tools for designing and fabricating robots and in improving the interfaces between robots and people, and she’s done collaborative projects at the intersection of technology and artistic performance.

Rus received her undergraduate degree from the University of Iowa and her PhD in computer science from Cornell University. She was a professor of computer science at Dartmouth College before coming to MIT in 2004. She is part of the Class of 2002 MacArthur Fellows; was elected to the National Academy of Engineering and the American Academy of Arts and Sciences; and is a fellow of the Association for Computing Machinery, the Institute of Electrical and Electronics Engineers, and the Association for the Advancement of Artificial Intelligence.

Senthil Todadri

Senthil Todadri, a professor of physics, came to MIT in 2001. He received his undergraduate degree from the Indian Institute of Technology in Kanpur and his PhD from Yale University before working as a postdoc at the Kavli Institute for Theoretical Physics in Santa Barbara, California.

Todadri’s research focuses on condensed matter theory. He’s interested in novel phases and phase transitions of quantum matter that expand beyond existing paradigms. Combining the modeling of experiments with abstract methods, he’s working to develop a theoretical framework for describing the physics of these systems. Much of that work involves understanding the phenomena that arise because of impurities or strong interactions between electrons in solids that don’t conform with conventional physical theories. He also pioneered the theory of deconfined quantum criticality, which describes a class of phase transitions, and he discovered dualities of quantum field theories in two-dimensional superconducting states, which have important applications to many problems in the field.

Todadri has been named a Simons Investigator, a Sloan Research Fellow, and a fellow of the American Physical Society. In 2023, he was elected to the American Academy of Arts and Sciences.

Using MRI, engineers have found a way to detect light deep in the brain

Scientists often label cells with proteins that glow, allowing them to track the growth of a tumor, or measure changes in gene expression that occur as cells differentiate.

A man stands with his arms crossed in front of a board with mathematical equations written on it.
Alan Jasanoff, associate member of the McGovern Institute, and a professor of brain and cognitive sciences, biological engineering, and nuclear science and engineering at MIT. Photo: Justin Knight

While this technique works well in cells and some tissues of the body, it has been difficult to apply to structures deep within the brain, because the light scatters too much before it can be detected.

MIT engineers have now come up with a novel way to detect this type of light, known as bioluminescence, in the brain: They engineered blood vessels of the brain to express a protein that causes them to dilate in the presence of light. That dilation can then be observed with magnetic resonance imaging (MRI), allowing researchers to pinpoint the source of light.

“A well-known problem that we face in neuroscience, as well as other fields, is that it’s very difficult to use optical tools in deep tissue. One of the core objectives of our study was to come up with a way to image bioluminescent molecules in deep tissue with reasonably high resolution,” says Alan Jasanoff, an MIT professor of biological engineering, brain and cognitive sciences, and nuclear science and engineering.

The new technique developed by Jasanoff and his colleagues could enable researchers to explore the inner workings of the brain in more detail than has previously been possible.

Jasanoff, who is also an associate investigator at MIT’s McGovern Institute for Brain Research, is the senior author of the study, which appears today in Nature Biomedical Engineering. Former MIT postdocs Robert Ohlendorf and Nan Li are the lead authors of the paper.

Detecting light

Bioluminescent proteins are found in many organisms, including jellyfish and fireflies. Scientists use these proteins to label specific proteins or cells, whose glow can be detected by a luminometer. One of the proteins often used for this purpose is luciferase, which comes in a variety of forms that glow in different colors.

Jasanoff’s lab, which specializes in developing new ways to image the brain using MRI, wanted to find a way to detect luciferase deep within the brain. To achieve that, they came up with a method for transforming the blood vessels of the brain into light detectors. A popular form of MRI works by imaging changes in blood flow in the brain, so the researchers engineered the blood vessels themselves to respond to light by dilating.

“Blood vessels are a dominant source of imaging contrast in functional MRI and other non-invasive imaging techniques, so we thought we could convert the intrinsic ability of these techniques to image blood vessels into a means for imaging light, by photosensitizing the blood vessels themselves,” Jasanoff says.

“We essentially turn the vasculature of the brain into a three-dimensional camera.” – Alan Jasanoff

To make the blood vessels sensitive to light, the researchers engineered them to express a bacterial protein called Beggiatoa photoactivated adenylate cyclase (bPAC). When exposed to light, this enzyme produces a molecule called cAMP, which causes blood vessels to dilate. When blood vessels dilate, it alters the balance of oxygenated and deoxygenated hemoglobin, which have different magnetic properties. This shift in magnetic properties can be detected by MRI.

bPAC responds specifically to blue light, which has a short wavelength, so it detects light generated within close range. The researchers used a viral vector to deliver the gene for bPAC specifically to the smooth muscle cells that make up blood vessels. When this vector was injected into rats, blood vessels throughout a large area of the brain became light-sensitive.

“Blood vessels form a network in the brain that is extremely dense. Every cell in the brain is within a couple dozen microns of a blood vessel,” Jasanoff says. “The way I like to describe our approach is that we essentially turn the vasculature of the brain into a three-dimensional camera.”

Once the blood vessels were sensitized to light, the researchers implanted cells that had been engineered to express luciferase, which emits light in the presence of a substrate called CZT. In the rats, the researchers were able to detect this light by imaging the brain with MRI, which revealed dilated blood vessels.

Tracking changes in the brain

The researchers then tested whether their technique could detect light produced by the brain’s own cells, if they were engineered to express luciferase. They delivered the gene for a type of luciferase called GLuc to cells in a deep brain region known as the striatum. When the CZT substrate was injected into the animals, MRI imaging revealed the sites where light had been emitted.

This technique, which the researchers dubbed bioluminescence imaging using hemodynamics, or BLUsH, could be used in a variety of ways to help scientists learn more about the brain, Jasanoff says.

For one, it could be used to map changes in gene expression, by linking the expression of luciferase to a specific gene. This could help researchers observe how gene expression changes during embryonic development and cell differentiation, or when new memories form. Luciferase could also be used to map anatomical connections between cells or to reveal how cells communicate with each other.

The researchers now plan to explore some of those applications, as well as adapting the technique for use in mice and other animal models.

The research was funded by the U.S. National Institutes of Health, the G. Harold and Leila Y. Mathers Foundation, Lore Harp McGovern, Gardner Hendrie, a fellowship from the German Research Foundation, a Marie Sklodowska-Curie Fellowship from the European Union, and a Y. Eva Tan Fellowship and a J. Douglas Tan Fellowship, both from the McGovern Institute for Brain Research.

Women in STEM — A celebration of excellence and curiosity

What better way to commemorate Women’s History Month and International Women’s Day than to give three of the world’s most accomplished scientists an opportunity to talk about their careers? On March 7, MindHandHeart invited professors Paula Hammond, Ann Graybiel, and Sangeeta Bhatia to share their career journeys, from the progress they have witnessed to the challenges they have faced as women in STEM. Their conversation was moderated by Mary Fuller, chair of the faculty and professor of literature.

Hammond, an Institute professor with appointments in the Department of Chemical Engineering and the Koch Institute for Integrative Cancer Research, reflected on the strides made by women faculty at MIT, while acknowledging ongoing challenges. “I think that we have advanced a great deal in the last few decades in terms of the numbers of women who are present, although we still have a long way to go,” Hammond noted in her opening. “We’ve seen a remarkable increase over the past couple of decades in our undergraduate population here at MIT, and now we’re beginning to see it in the graduate population, which is really exciting.” Hammond was recently appointed to the role of vice provost for faculty.

Ann Graybiel, also an Institute professor, who has appointments in the Department of Brain and Cognitive Sciences and the McGovern Institute for Brain Research, described growing up in the Deep South. “Girls can’t do science,” she remembers being told in school, and they “can’t do research.” Yet her father, a physician scientist, often took her with him to work and had her assist from a young age, eventually encouraging her directly to pursue a career in science. Graybiel, who first came to MIT in 1973, noted that she continued to face barriers and rejection throughout her career long after leaving the South, but that individual gestures of inspiration, generosity, or simple statements of “You can do it” from her peers helped her power through and continue in her scientific pursuits.

Sangeeta Bhatia, the John and Dorothy Wilson Professor of Health Sciences and Technology and Electrical Engineering and Computer Science, director of the Marble Center for Cancer Nanomedicine at the Koch Institute for Integrative Cancer Research, and a member of the Institute for Medical Engineering and Science, is also the mother of two teenage girls. She shared her perspective on balancing career and family life: “I wanted to pick up my kids from school and I wanted to know their friends. … I had a vision for the life that I wanted.” Setting boundaries at work, she noted, empowered her to achieve both personal and professional goals. Bhatia also described her collaboration with President Emerita Susan Hockfield and MIT Amgen Professor of Biology Emerita Nancy Hopkins to spearhead the Future Founders Initiative, which aims to boost the representation of female faculty members pursuing biotechnology ventures.

A video of the full panel discussion is available on the MindHandHeart YouTube channel.

From neurons to learning and memory

Mark Harnett, an associate professor at MIT, still remembers the first time he saw electrical activity spiking from a living neuron.

He was a senior at Reed College and had spent weeks building a patch clamp rig — an experimental setup with an electrode that can be used to gently probe a neuron and measure its electrical activity.

“The first time I stuck one of these electrodes onto one of these cells and could see the electrical activity happening in real time on the oscilloscope, I thought, ‘Oh my God, this is what I’m going to do for the rest of my life. This is the coolest thing I’ve ever seen!’” Harnett says.

Harnett, who recently earned tenure in MIT’s Department of Brain and Cognitive Sciences, now studies the electrical properties of neurons and how these properties enable neural circuits to perform the computations that give rise to brain functions such as learning, memory, and sensory perception.

“My lab’s ultimate goal is to understand how the cortex works,” Harnett says. “What are the computations? How do the cells and the circuits and the synapses support those computations? What are the molecular and structural substrates of learning and memory? How do those things interact with circuit dynamics to produce flexible, context-dependent computation?”

“We go after that by looking at molecules, like synaptic receptors and ion channels, all the way up to animal behavior, and building theoretical models of neural circuits,” he adds.

Influence on the mind

Harnett’s interest in science was sparked in middle school, when he had a teacher who made the subject come to life. “It was middle school science, which was a lot of just mixing random things together. It wasn’t anything particularly advanced, but it was really fun,” he says. “Our teacher was just super encouraging and inspirational, and she really sparked what became my lifelong interest in science.”

When Harnett was 11, his father got a new job at a technology company in Minneapolis and the family moved from New Jersey to Minnesota, which proved to be a difficult adjustment. When choosing a college, Harnett decided to go far away, and ended up choosing Reed College, a school in Portland, Oregon, that encourages a great deal of independence in both academics and personal development.

“Reed was really free,” he recalls. “It let you grow into who you wanted to be, and try things, both for what you wanted to do academically or artistically, but also the kind of person you wanted to be.”

While in college, Harnett enjoyed both biology and English, especially Shakespeare. His English professors encouraged him to go into science, believing that the field needed scientists who could write and think creatively. He was interested in neuroscience, but Reed didn’t have a neuroscience department, so he took the closest subject he could find — a course in neuropharmacology.

“That class totally blew my mind. It was just fascinating to think about all these pharmacological agents, be they from plants or synthetic or whatever, influencing how your mind worked,” Harnett says. “That class really changed my whole way of thinking about what I wanted to do, and that’s when I decided I wanted to become a neuroscientist.”

For his senior research thesis, Harnett joined an electrophysiology lab at Oregon Health Sciences University (OHSU), working with Professor Larry Trussell, who studies synaptic transmission in the auditory system. That lab was where he first built and used a patch clamp rig to measure neuron activity.

After graduating from college, he spent a year as a research technician in a lab at the University of Minnesota, then returned to OHSU to work in a different research lab studying ion channels and synaptic physiology. Eventually he decided to go to graduate school, ending up at the University of Texas at Austin, where his future wife was studying public policy.

For his PhD research, he studied the neurons that release the neuromodulator dopamine and how they are affected by drugs of abuse and addiction. However, once he finished his degree, he decided to return to studying the biophysics of computation, which he pursued during a postdoc at the Howard Hughes Medical Institute Janelia Research Campus with Jeff Magee.

A broad approach

When he started his lab at MIT’s McGovern Institute in 2015, Harnett set out to expand his focus. While the physiology of ion channels and synapses forms the basis of much of his lab’s work, they connect these processes to neuronal computation, cortical circuit operation, and higher-level cognitive functions.

Electrical impulses that flow between neurons, allowing them to communicate with each other, are produced by ion channels that control the flow of ions such as potassium and sodium. In a 2021 study, Harnett and his students discovered that human neurons have a much smaller number of these channels than expected, compared to the neurons of other mammals.

This reduction in density may have evolved to help the brain operate more efficiently, allowing it to divert resources to other energy-intensive processes that are required to perform complex cognitive tasks. Harnett’s lab has also found that in human neurons, electrical signals weaken as they flow along dendrites, meaning that small sections of dendrites can form units that perform individual computations within a neuron.

Harnett’s lab also recently discovered, to their surprise, that the adult brain contains millions of “silent synapses” — immature connections that remain inactive until they’re recruited to help form new memories. The existence of these synapses offers a clue to how the adult brain is able to continually form new memories and learn new things without having to modify mature synapses.

Many of these projects fall into areas that Harnett didn’t necessarily envision himself working on when he began his faculty career, but they naturally grew out of the broad approach he wanted to take to studying the cortex. To that end, he sought to bring people to the lab who wanted to work at different levels — from molecular physiology up to behavior and computational modeling.

As a postdoc studying electrophysiology, Harnett spent most of his time working alone with his patch clamp device and two-photon microscope. While that type of work still goes on in his lab, the overall atmosphere is much more collaborative and convivial, and as a mentor, he likes to give his students broad leeway to come up with their own projects that fit in with the lab’s overall mission.

“I have this incredible, dynamic group that has been really great to work with. We take a broad approach to studying the cortex, and I think that’s what makes it fun,” he says. “Working with the folks that I’ve been able to recruit — grad students, techs, undergrads, and postdocs — is probably the thing that really matters the most to me.”

A new computational technique could make it easier to engineer useful proteins

To engineer proteins with useful functions, researchers usually begin with a natural protein that has a desirable function, such as emitting fluorescent light, and put it through many rounds of random mutation that eventually generate an optimized version of the protein.

This process has yielded optimized versions of many important proteins, including green fluorescent protein (GFP). However, for other proteins, it has proven difficult to generate an optimized version. MIT researchers have now developed a computational approach that makes it easier to predict mutations that will lead to better proteins, based on a relatively small amount of data.

Using this model, the researchers generated proteins with mutations that were predicted to lead to improved versions of GFP and a protein from adeno-associated virus (AAV), which is used to deliver DNA for gene therapy. They hope it could also be used to develop additional tools for neuroscience research and medical applications.

Woman gestures with her hand in front of a glass wall with equations written on it.
MIT Professor of Brain and Cognitive Sciences Ila Fiete in her lab at the McGovern Institute. Photo: Steph Stevens

“Protein design is a hard problem because the mapping from DNA sequence to protein structure and function is really complex. There might be a great protein 10 changes away in the sequence, but each intermediate change might correspond to a totally nonfunctional protein. It’s like trying to find your way to the river basin in a mountain range, when there are craggy peaks along the way that block your view. The current work tries to make the riverbed easier to find,” says Ila Fiete, a professor of brain and cognitive sciences at MIT, a member of MIT’s McGovern Institute for Brain Research, director of the K. Lisa Yang Integrative Computational Neuroscience Center, and one of the senior authors of the study.

Regina Barzilay, the School of Engineering Distinguished Professor for AI and Health at MIT, and Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering and Computer Science at MIT, are also senior authors of an open-access paper on the work, which will be presented at the International Conference on Learning Representations in May. MIT graduate students Andrew Kirjner and Jason Yim are the lead authors of the study. Other authors include Shahar Bracha, an MIT postdoc, and Raman Samusevich, a graduate student at Czech Technical University.

Optimizing proteins

Many naturally occurring proteins have functions that could make them useful for research or medical applications, but they need a little extra engineering to optimize them. In this study, the researchers were originally interested in developing proteins that could be used in living cells as voltage indicators. These proteins, produced by some bacteria and algae, emit fluorescent light when they detect changes in electric potential. If engineered for use in mammalian cells, such proteins could allow researchers to measure neuron activity without using electrodes.

While decades of research have gone into engineering these proteins to produce a stronger fluorescent signal, on a faster timescale, they haven’t become effective enough for widespread use. Bracha, who works in Edward Boyden’s lab at the McGovern Institute, reached out to Fiete’s lab to see if they could work together on a computational approach that might help speed up the process of optimizing the proteins.

“This work exemplifies the human serendipity that characterizes so much science discovery,” Fiete says.

“This work grew out of the Yang Tan Collective retreat, a scientific meeting of researchers from multiple centers at MIT with distinct missions unified by the shared support of K. Lisa Yang. We learned that some of our interests and tools in modeling how brains learn and optimize could be applied in the totally different domain of protein design, as being practiced in the Boyden lab.”

For any given protein that researchers might want to optimize, there is a nearly infinite number of possible sequences that could be generated by swapping in different amino acids at each point within the sequence. With so many possible variants, it is impossible to test all of them experimentally, so researchers have turned to computational modeling to try to predict which ones will work best.
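
To get a sense of that scale, here is a back-of-the-envelope count. The numbers are illustrative only (a GFP-like length of 238 residues is assumed; the figures are not from the study), but they show why exhaustive testing is out of the question:

```python
# Rough size of the sequence space near a GFP-length protein.
# A protein of length L built from 20 amino acids has 20**L possible
# sequences. Even restricting to exactly k substitutions, the count
# explodes: choose k of the L positions, then one of 19 alternative
# residues at each chosen position.
from math import comb

L = 238  # approximate length of GFP (illustrative assumption)
k = 7    # number of simultaneous substitutions

variants = comb(L, k) * 19 ** k
print(f"Sequences exactly {k} substitutions away: {variants:.2e}")

# The full sequence space is far too large even to write as a float,
# so report its number of digits instead.
total = 20 ** L
print(f"All length-{L} sequences: a {len(str(total))}-digit number")
```

Even the "nearby" space of seven-substitution variants runs to sextillions of sequences, which is why a model that can extrapolate from about a thousand measured variants is so useful.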

In this study, the researchers set out to overcome those challenges, using data from GFP to develop and test a computational model that could predict better versions of the protein.

They began by training a type of model known as a convolutional neural network (CNN) on experimental data consisting of GFP sequences and their brightness — the feature that they wanted to optimize.

The model was able to create a “fitness landscape” — a three-dimensional map that depicts the fitness of a given protein and how much it differs from the original sequence — based on a relatively small amount of experimental data (from about 1,000 variants of GFP).

These landscapes contain peaks that represent fitter proteins and valleys that represent less fit proteins. Predicting the path that a protein needs to follow to reach the peaks of fitness can be difficult, because often a protein will need to undergo a mutation that makes it less fit before it reaches a nearby peak of higher fitness. To overcome this problem, the researchers used an existing computational technique to “smooth” the fitness landscape.

Once these small bumps in the landscape were smoothed, the researchers retrained the CNN model and found that it was able to reach greater fitness peaks more easily. The model was able to predict optimized GFP sequences that differed from the starting protein sequence by as many as seven amino acids, and the best of these proteins were estimated to be about 2.5 times fitter than the original.

“Once we have this landscape that represents what the model thinks is nearby, we smooth it out and then we retrain the model on the smoother version of the landscape,” Kirjner says. “Now there is a smooth path from your starting point to the top, which the model is now able to reach by iteratively making small improvements. The same is often impossible for unsmoothed landscapes.”
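
The intuition behind smoothing can be seen in a toy sketch. This example is entirely illustrative (the study smooths a learned landscape over protein sequences, not a one-dimensional curve): greedy uphill steps stall on a small local bump in a rugged landscape, but reach the global peak once the landscape is averaged out:

```python
# Toy demonstration of "smooth the landscape, then climb."
# A rugged 1-D fitness curve has small bumps that trap greedy search;
# a moving-average smoothing removes them so greedy steps succeed.

def rugged_fitness(x):
    # Global peak at x = 50, plus a +3 bump at every 7th point,
    # which creates local optima on the way up.
    return -abs(x - 50) + (3 if x % 7 == 0 else 0)

def smooth(values, window=5):
    # Simple moving average (the toy stand-in for landscape smoothing).
    half = window // 2
    return [
        sum(values[max(0, i - half): i + half + 1])
        / len(values[max(0, i - half): i + half + 1])
        for i in range(len(values))
    ]

def greedy_climb(values, start):
    # Take single uphill steps until no neighbor is better.
    x = start
    while True:
        neighbors = [n for n in (x - 1, x + 1) if 0 <= n < len(values)]
        best = max(neighbors, key=lambda n: values[n])
        if values[best] <= values[x]:
            return x
        x = best

landscape = [rugged_fitness(x) for x in range(100)]
stuck = greedy_climb(landscape, start=10)          # traps on a bump at x = 14
found = greedy_climb(smooth(landscape), start=10)  # reaches the peak at x = 50
print(stuck, found)
```

In the paper's setting, the "climber" is the retrained CNN proposing small sequence edits, but the principle is the same: removing local bumps opens a monotone path to higher fitness.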


The researchers also showed that this approach worked well in identifying new sequences for the viral capsid of adeno-associated virus (AAV), a viral vector that is commonly used to deliver DNA. In that case, they optimized the capsid for its ability to package a DNA payload.

“We used GFP and AAV as a proof-of-concept to show that this is a method that works on data sets that are very well-characterized, and because of that, it should be applicable to other protein engineering problems,” Bracha says.

The researchers now plan to use this computational technique on data that Bracha has been generating on voltage indicator proteins.

“Dozens of labs have been working on that for two decades, and still there isn’t anything better,” she says. “The hope is that now with generation of a smaller data set, we could train a model in silico and make predictions that could be better than the past two decades of manual testing.”

The research was funded, in part, by the U.S. National Science Foundation, the Machine Learning for Pharmaceutical Discovery and Synthesis consortium, the Abdul Latif Jameel Clinic for Machine Learning in Health, the DTRA Discovery of Medical Countermeasures Against New and Emerging threats program, the DARPA Accelerated Molecular Discovery program, the Sanofi Computational Antibody Design grant, the U.S. Office of Naval Research, the Howard Hughes Medical Institute, the National Institutes of Health, the K. Lisa Yang ICoN Center, and the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics at MIT.

For people who speak many languages, there’s something special about their native tongue

A new study of people who speak many languages has found that there is something special about how the brain processes their native language.

In the brains of these polyglots — people who speak five or more languages — the same language regions light up when they listen to any of the languages that they speak. In general, this network responds more strongly to languages in which the speaker is more proficient, with one notable exception: the speaker’s native language. When listening to one’s native language, language network activity drops off significantly.

The findings suggest there is something unique about the first language one acquires, which allows the brain to process it with minimal effort, the researchers say.

“Something makes it a little bit easier to process — maybe it’s that you’ve spent more time using that language — and you get a dip in activity for the native language compared to other languages that you speak proficiently,” says Evelina Fedorenko, an associate professor of neuroscience at MIT, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Saima Malik-Moraleda, a graduate student in the Speech and Hearing Bioscience and Technology Program at Harvard University, and Olessia Jouravlev, a former MIT postdoc who is now an associate professor at Carleton University, are the lead authors of the paper, which appears today in the journal Cerebral Cortex.

Many languages, one network

McGovern Investigator Ev Fedorenko in the Martinos Imaging Center at MIT. Photo: Caitlin Cunningham

The brain’s language processing network, located primarily in the left hemisphere, includes regions in the frontal and temporal lobes. In a 2021 study, Fedorenko’s lab found that the language network in the brains of polyglots was less active when they listened to their native language than the language networks of people who speak only one language.

In the new study, the researchers wanted to expand on that finding and explore what happens in the brains of polyglots as they listen to languages in which they have varying levels of proficiency. Studying polyglots can help researchers learn more about the functions of the language network, and how languages learned later in life might be represented differently than a native language or languages.

“With polyglots, you can do all of the comparisons within one person. You have languages that vary along a continuum, and you can try to see how the brain modulates responses as a function of proficiency,” Fedorenko says.

For the study, the researchers recruited 34 polyglots, each of whom had at least some degree of proficiency in five or more languages but were not bilingual or multilingual from infancy. Sixteen of the participants spoke 10 or more languages, including one who spoke 54 languages with at least some proficiency.

Each participant was scanned with functional magnetic resonance imaging (fMRI) as they listened to passages read in eight different languages. These included their native language, a language they were highly proficient in, a language they were moderately proficient in, and a language in which they described themselves as having low proficiency.

They were also scanned while listening to four languages they didn’t speak at all. Two of these were languages from the same family (such as Romance languages) as a language they could speak, and two were languages completely unrelated to any languages they spoke.

The passages used for the study came from two different sources, which the researchers had previously developed for other language studies. One was a set of Bible stories recorded in many different languages, and the other consisted of passages from “Alice in Wonderland” translated into many languages.

Brain scans revealed that the language network lit up the most when participants listened to languages in which they were the most proficient. However, that did not hold true for the participants’ native languages, which activated the language network much less than non-native languages in which they had similar proficiency. This suggests that people are so proficient in their native language that the language network doesn’t need to work very hard to interpret it.

“As you increase proficiency, you can engage linguistic computations to a greater extent, so you get these progressively stronger responses. But then if you compare a really high-proficiency language and a native language, it may be that the native language is just a little bit easier, possibly because you’ve had more experience with it,” Fedorenko says.

Brain engagement

The researchers saw a similar phenomenon when polyglots listened to languages that they don’t speak: Their language network was more engaged when listening to languages related to a language that they could understand than when listening to completely unfamiliar languages.

“Here we’re getting a hint that the response in the language network scales up with how much you understand from the input,” Malik-Moraleda says. “We didn’t quantify the level of understanding here, but in the future we’re planning to evaluate how much people are truly understanding the passages that they’re listening to, and then see how that relates to the activation.”

The researchers also found that the multiple demand network, a brain network that turns on whenever the brain performs a cognitively demanding task, becomes activated when listening to languages other than one’s native language.

“What we’re seeing here is that the language regions are engaged when we process all these languages, and then there’s this other network that comes in for non-native languages to help you out because it’s a harder task,” Malik-Moraleda says.

In this study, most of the polyglots began studying their non-native languages as teenagers or adults, but in future work, the researchers hope to study people who learned multiple languages from a very young age. They also plan to study people who learned one language from infancy but moved to the United States at a very young age and began speaking English as their dominant language, while becoming less proficient in their native language, to help disentangle the effects of proficiency versus age of acquisition on brain responses.

The research was funded by the McGovern Institute for Brain Research, MIT’s Department of Brain and Cognitive Sciences, and the Simons Center for the Social Brain.