Three MIT professors named 2024 Vannevar Bush Fellows

The U.S. Department of Defense (DoD) has announced three MIT professors among the members of the 2024 class of the Vannevar Bush Faculty Fellowship (VBFF). The fellowship is the DoD’s flagship single-investigator award for research, inviting the nation’s most talented researchers to pursue ambitious ideas that defy conventional boundaries.

Domitilla Del Vecchio, professor of mechanical engineering and the Grover M. Hermann Professor in Health Sciences & Technology; Mehrdad Jazayeri, professor of brain and cognitive sciences and an investigator at the McGovern Institute for Brain Research; and Themistoklis Sapsis, the William I. Koch Professor of Mechanical Engineering and director of the Center for Ocean Engineering, are among the 11 university scientists and engineers chosen for this year’s fellowship class. They join an elite group of approximately 50 fellows from previous classes.

“The Vannevar Bush Faculty Fellowship is more than a prestigious program,” said Bindu Nair, director of the Basic Research Office in the Office of the Under Secretary of Defense for Research and Engineering, in a press release. “It’s a beacon for tenured faculty embarking on groundbreaking ‘blue sky’ research.”

Research topics

Each fellow receives up to $3 million over a five-year term to pursue cutting-edge projects. Research topics in this year’s class span a range of disciplines, including materials science, cognitive neuroscience, quantum information sciences, and applied mathematics. While pursuing individual research endeavors, fellows also have the unique opportunity to collaborate directly with DoD laboratories, fostering a valuable exchange of knowledge and expertise.

Del Vecchio, whose research interests include control and dynamical systems theory and systems and synthetic biology, will investigate the molecular underpinnings of analog epigenetic cell memory, then use what her team learns to “establish unprecedented engineering capabilities for creating self-organizing and reconfigurable multicellular systems with graded cell fates.”

“With this fellowship, we will be able to explore the limits to which we can leverage analog memory to create multicellular systems that autonomously organize in permanent, but reprogrammable, gradients of cell fates and can be used for creating next-generation tissues and organoids with dramatically increased sophistication,” says Del Vecchio, who adds that she is honored to have been selected.

Jazayeri wants to understand how the brain gives rise to cognitive and emotional intelligence. The engineering systems being built today lack the hallmarks of human intelligence, explains Jazayeri. They neither learn quickly nor generalize their knowledge flexibly. They don’t feel emotions or have emotional intelligence.

Jazayeri plans to use the VBFF award to integrate ideas from cognitive science, neuroscience, and machine learning with experimental data in humans, animals, and computer models to develop a computational understanding of cognitive and emotional intelligence.

“I’m honored and humbled to be selected and excited to tackle some of the most challenging questions at the intersection of neuroscience and AI,” he says.

“I am humbled to be included in such a select group,” echoes Sapsis, who will use the grant to research new algorithms and theory designed for the efficient computation of extreme event probabilities and precursors, and for the design of mitigation strategies in complex dynamical systems.

Examples of Sapsis’s work include risk quantification for extreme events in human-made systems; climate events, such as heat waves, and their effect on interconnected systems like food supply chains; and “mission-critical algorithmic problems such as search and path planning operations for extreme anomalies,” he explains.

VBFF impact

Named for Vannevar Bush PhD 1916, an influential inventor, engineer, former professor, and dean of the School of Engineering at MIT, the highly competitive fellowship, formerly known as the National Security Science and Engineering Faculty Fellowship, aims to advance transformative, university-based fundamental research. Bush served as the director of the U.S. Office of Scientific Research and Development, and organized and led American science and technology during World War II.

“The outcomes of VBFF-funded research have transformed entire disciplines, birthed novel fields, and challenged established theories and perspectives,” said Nair. “By contributing their insights to DoD leadership and engaging with the broader national security community, they enrich collective understanding and help the United States leap ahead in global technology competition.”

Four MIT faculty named 2024 HHMI Investigators

The Howard Hughes Medical Institute (HHMI) today announced its 2024 investigators, four of whom hail from the School of Science at MIT: Steven Flavell, Mary Gehring, Mehrdad Jazayeri, and Gene-Wei Li.

Four others with MIT ties were also honored: Jonathan Abraham, a graduate of the Harvard/MIT MD-PhD Program; Dmitriy Aronov PhD ’10; Vijay Sankaran, a graduate of the Harvard/MIT MD-PhD Program; and Steven McCarroll, an institute member of the Broad Institute of MIT and Harvard.

Every three years, HHMI selects roughly two dozen new investigators who have significantly impacted their disciplines to receive a substantial, completely discretionary grant that can be renewed indefinitely following review. The award, which totals roughly $11 million per investigator over the next seven years, allows scientists to continue working at their current institution, covers their full salary, and provides the financial flexibility to go wherever their scientific inquiries take them.

Of the almost 1,000 applicants this year, 26 investigators were selected for their ability to push the boundaries of science and for their efforts to create highly inclusive and collaborative research environments.

“When scientists create environments in which others can thrive, we all benefit,” says HHMI president Erin O’Shea. “These newest HHMI Investigators are extraordinary, not only because of their outstanding research endeavors but also because they mentor and empower the next generation of scientists to work alongside them at the cutting edge.”

Steven Flavell

Steven Flavell, associate professor of brain and cognitive sciences and investigator in the Picower Institute for Learning and Memory, seeks to uncover the neural mechanisms that generate the internal states of the brain, such as different motivational and arousal states. Working in the model organism C. elegans, the lab has used genetic, systems, and computational approaches to relate neural activity across the brain to precise features of the animal’s behavior. In addition, they have mapped the anatomical and functional organization of the serotonin system, showing how it modulates the internal state of C. elegans. As a newly named HHMI Investigator, Flavell will pursue research that he hopes will build a foundational understanding of how internal states arise and influence behavior in nervous systems in general. The work will employ brain-wide neural recordings, computational modeling, expansive research on neuromodulatory system organization, and studies of how the synaptic wiring of the nervous system constrains an animal’s ability to generate different internal states.

“I think that it should be possible to define the basis of internal states in C. elegans in concrete terms,” Flavell says. “If we can build a thread of understanding from the molecular architecture of neuromodulatory systems, to changes in brain-wide activity, to state-dependent changes in behavior, then I think we’ll be in a much better place as a field to think about the basis of brain states in more complex animals.”

Mary Gehring

Mary Gehring, professor of biology and core member and David Baltimore Chair in Biomedical Research at the Whitehead Institute for Biomedical Research, studies how plant epigenetics modulates plant growth and development, with a long-term goal of uncovering the essential genetic and epigenetic elements of plant seed biology. Ultimately, the Gehring Lab’s work provides the scientific foundations for engineering alternative modes of seed development and improving plant resiliency at a time when worldwide agriculture is in a uniquely precarious position due to climate change.

The Gehring Lab uses genetic, genomic, computational, synthetic, and evolutionary approaches to explore heritable traits by investigating repetitive sequences, DNA methylation, and chromatin structure. The lab primarily uses the model plant A. thaliana, a member of the mustard family and the first plant to have its genome sequenced.

“I’m pleased that HHMI has been expanding its support for plant biology, and gratified that our lab will benefit from its generous support,” Gehring says. “The appointment gives us the freedom to step back, take a fresh look at the scientific opportunities before us, and pursue the ones that most interest us. And that’s a very exciting prospect.”

Mehrdad Jazayeri

Mehrdad Jazayeri, a professor of brain and cognitive sciences and an investigator at the McGovern Institute for Brain Research, studies how physiological processes in the brain give rise to the abilities of the mind. Work in the Jazayeri Lab brings together ideas from cognitive science, neuroscience, and machine learning with experimental data in humans, animals, and computer models to develop a computational understanding of how the brain creates internal representations, or models, of the external world.

Before coming to MIT in 2013, Jazayeri received his BS in electrical engineering, majoring in telecommunications, from Sharif University of Technology in Tehran, Iran. He completed his MS in physiology at the University of Toronto and his PhD in neuroscience at New York University.

With his appointment to HHMI, Jazayeri plans to explore how the brain enables rapid learning and flexible behavior — central aspects of intelligence that have been difficult to study using traditional neuroscience approaches.

“This is a recognition of my lab’s past accomplishments and the promise of the exciting research we want to embark on,” he says. “I am looking forward to engaging with this wonderful community and making new friends and colleagues while we elevate our science to the next level.”

Gene-Wei Li

Gene-Wei Li, associate professor of biology, has worked on quantifying the amounts of proteins cells produce, and on how protein synthesis is orchestrated within the cell, since opening his lab at MIT in 2015.

Li, whose background is in physics, credits the lab’s findings to the skills and communication among his research team, allowing them to explore the unexpected questions that arise in the lab.

For example, two of his graduate student researchers found that the coordination between transcription and translation fundamentally differs between the model organisms E. coli and B. subtilis. In B. subtilis, the ribosome lags far behind RNA polymerase, a process the lab termed “runaway transcription.” The discovery revealed that this kind of uncoupling between transcription and translation is widespread across many species of bacteria, contradicting the long-standing dogma of molecular biology that the machinery of protein synthesis and RNA polymerase work side by side in all bacteria.

The support from HHMI gives Li and his team the flexibility to pursue, at their discretion, the basic research that leads to discoveries.

“Having this award allows us to be bold and to do things at a scale that wasn’t possible before,” Li says. “The discovery of runaway transcription is a great example. We didn’t have a traditional grant for that.”

Mehrdad Jazayeri selected as an HHMI investigator

The Howard Hughes Medical Institute (HHMI) has named McGovern Institute neuroscientist Mehrdad Jazayeri as one of 26 new HHMI investigators—a group of visionary scientists who HHMI will support with more than $300 million over the next seven years.

Support from HHMI is intended to give its investigators, who work at institutions across the United States, the time and resources they need to push the boundaries of the biological sciences. Jazayeri, whose work integrates neurobiology with cognitive science and machine learning, plans to use that support to explore how the brain enables rapid learning and flexible behavior—central aspects of intelligence that have been difficult to study using traditional neuroscience approaches.

Jazayeri says he is delighted and honored by the news. “This is a recognition of my lab’s past accomplishments and the promise of the exciting research we want to embark on,” he says. “I am looking forward to engaging with this wonderful community and making new friends and colleagues while we elevate our science to the next level.”

An unexpected path

Jazayeri, who has been an investigator at the McGovern Institute since 2013, has already made a series of groundbreaking discoveries about how physiological processes in the brain give rise to the abilities of the mind. “That’s what we do really well,” he says. “We expose the computational link between abstract mental concepts, like belief, and electrical signals in the brain.”

Jazayeri’s expertise and enthusiasm for this work grew out of a curiosity that was sparked unexpectedly several years after he’d abandoned university education. He’d pursued his undergraduate studies in electrical engineering, a path with good job prospects in Iran where he lived. But an undergraduate program at Sharif University of Technology in Tehran left him disenchanted. “It was an uninspiring experience,” he says. “It’s a top university and I went there excited, but I lost interest as I couldn’t think of a personally meaningful application for my engineering skills. So, after my undergrad, I started a string of random jobs, perhaps to search for my passion.”

A few years later, Jazayeri was trying something new, happily living and working at a banana farm near the Caspian Sea. The farm schedule allowed for leisure in the evenings, which he took advantage of by delving into boxes full of books that an uncle regularly sent him from London. The books were an unpredictable, eclectic mix. Jazayeri read them all—and it was those that talked about the brain that most captured his imagination.

Until then, he had never had much interest in biology. But when he read about neurological disorders and how scientists were studying the brain, he was captivated. The subject seemed to merge his inherent interest in philosophy with an analytical approach that he also loved. “These books made me think that you actually can understand this system at a more concrete level…you can put electrodes in the brain and listen to what neurons say,” he says. “It had never even occurred to me to think about those things.”

He wanted to know more. It took time to find a graduate program in neuroscience that would accept a student with his unconventional background, but eventually the University of Toronto accepted him into a master’s program after he crammed for and passed an undergraduate exam testing his knowledge of physiology. From there, he went on to earn a PhD in neuroscience from New York University studying visual perception, followed by a postdoctoral fellowship at the University of Washington where he studied time perception.

In 2013, Jazayeri joined MIT’s Department of Brain and Cognitive Sciences. At MIT, conversations with new colleagues quickly enriched the way he thought about the brain. “It is fascinating to listen to cognitive scientists’ ideas about the mind,” he says. “They have a rich and deep understanding of the mind but the language they use to describe the mind is not the language of the brain. Bridging this gap in language between neuroscience and cognitive science is at the core of research in my lab.”

His lab’s general approach has been to collect data on neural activity from humans and animals as they perform tasks that call on specific aspects of the mind. “We design tasks that are as simple as possible but get at the crux of the problems in cognitive science,” he explains. “Then we build models that help us connect abstract concepts and theories in cognitive science to signals and dynamics of neural activity in the brain.”

It’s an interdisciplinary approach that even calls on many of the engineering approaches that had failed to inspire him as a student. Students and postdocs in the lab bring a diverse set of knowledge and skills, and together the team has made significant contributions to neuroscience, cognitive science, and computational science.

With animals trained to reproduce a rhythm, they’ve shown how neurons adjust the speed of their signals to predict when something will occur, and what happens when the actual timing of a stimulus deviates from the brain’s expectations.

Studies of time interval predictions have also helped the team learn how the brain weighs different pieces of information as it assesses situations and makes decisions. This process, called Bayesian integration, shapes our beliefs and our confidence in those beliefs. “These are really fundamental concepts in cognitive sciences, and we can now say how neurons exactly do that,” he says.
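The Bayesian integration the lab studies can be illustrated with a minimal, generic sketch (the Gaussian assumptions and all numbers here are textbook illustrations, not values from the lab's experiments): a prior belief about an interval and a noisy measurement of it combine as a precision-weighted average, so estimates are pulled toward the prior.

```python
def bayesian_estimate(prior_mean, prior_sd, measurement, measurement_sd):
    """Combine a Gaussian prior with a noisy Gaussian measurement.

    The posterior mean is a precision-weighted average of the two,
    so estimates are biased toward the prior -- the hallmark of
    Bayesian integration in interval-timing experiments.
    """
    w_prior = 1 / prior_sd**2        # precision (inverse variance) of the prior
    w_meas = 1 / measurement_sd**2   # precision of the measurement
    post_mean = (w_prior * prior_mean + w_meas * measurement) / (w_prior + w_meas)
    post_sd = (w_prior + w_meas) ** -0.5
    return post_mean, post_sd

# A 1.0 s prior with a noisy 0.8 s measurement: the estimate lands between
# the two, closer to whichever source is more reliable.
mean, sd = bayesian_estimate(prior_mean=1.0, prior_sd=0.1,
                             measurement=0.8, measurement_sd=0.2)
```

Because the prior here is more precise than the measurement, the estimate (0.96 s) sits much closer to the prior, and the posterior uncertainty is smaller than either source alone.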

More recently, by teaching animals to navigate a virtual environment, Jazayeri’s team has found activity in the brain that appears to call up a cognitive map of a space even when its features are not visible. The discovery helps reveal how the brain builds internal models and uses them to interact with the world.

A new paradigm

Jazayeri is proud of these achievements. But he knows that when it comes to understanding the power and complexity of cognition, something is missing.

“Two really important hallmarks of cognition are the ability to learn rapidly and generalize flexibly. If somebody can do that, we say they’re intelligent,” he says. It’s an ability we have from an early age. “If you bring a kid a bunch of toys, they don’t need several years of training, they just can play with the toys right away in very creative ways,” he says. In the wild, many animals are similarly adept at problem solving and finding uses for new tools. But when animals are trained for many months on a single task, as typically happens in a lab, they don’t behave as intelligently. “They become like an expert that does one thing well, but they’re no longer very flexible,” he says.

Figuring out how the brain adapts and acts flexibly in real-world situations is going to require a new approach. “What we have done is that we come up with a task, and then change the animal’s brain through learning to match our task,” he says. “What we now want to do is to add a new paradigm to our work, one in which we will devise the task such that it would match the animal’s brain.”

As an HHMI investigator, Jazayeri plans to take advantage of a host of new technologies to study the brain’s involvement in ecologically relevant behaviors. That means moving beyond the virtual scenarios and digital platforms that have been so widespread in neuroscience labs, including his own, and instead letting animals interact with real objects and environments. “The animal will use its eyes and hands to engage with physical objects in the real world,” he says.

To analyze and learn about animals’ behavior, the team plans detailed tracking of hand and eye movements, and even measurements of sensations that are felt through the hands as animals explore objects and work through problems. These activities are expected to engage the entire brain, so the team will broadly record and analyze neural activity.

Designing meaningful experiments and making sense of the data will be a deeply interdisciplinary endeavor, and Jazayeri knows working with a collaborative community of scientists will be essential. He’s looking forward to sharing the enormous amount of relevant data his lab expects to collect with the research community and getting others involved. Likewise, as a dedicated mentor, he is committed to training scientists who will continue and expand the work in the future.

He is enthusiastic about the opportunity to move into these bigger questions about cognition and intelligence, and support from HHMI comes at an opportune moment. “I think we have now built the infrastructure and conceptual frameworks to think about these problems, and technology for recording and tracking animals has developed a great deal, so we can now do more naturalistic experiments,” he says.

His passion for his work is one of many passions in his life. His love for family, friends, and art are just as deep, and making space to experience everything is a lifelong struggle. But he knows his zeal is infectious. “I think my love for science is probably one of the best motivators of people around me,” he says.

License plates of MIT

What does your license plate say about you?

In the United States, more than 9 million vehicles carry personalized “vanity” license plates, in which preferred words, digits, or phrases replace an otherwise random assignment of letters and numbers to identify a vehicle. While each state and the District of Columbia maintains its own rules about appropriate selections, creativity reigns when choosing a unique vanity plate. What’s more, the stories behind them can be just as fascinating as the people who use them.

It might not come as a surprise to learn that quite a few MIT community members have participated in such vehicular whimsy. Read on to meet some of them and learn about the nerdy, artsy, techy, and MIT-related plates that color their rides.

A little piece of tech heaven

One of the most recognized vehicles around campus is Samuel Klein’s 1998 Honda Civic. More than just the holder of a vanity plate, it’s an art car — a vehicle that’s been custom-designed as a way to express an artistic idea or theme. Klein’s Civic is covered with hundreds of 5.25-inch floppy disks in various colors, and it sports disks, computer keys, and other techy paraphernalia on the interior. With its double-entendre vanity plate, “DSKDRV” (“disk drive”), the art car initially came into being on the West Coast.

Klein, a longtime affiliate of the MIT Media Lab, MIT Press, and MIT Libraries, first heard about the car from fellow Wikimedian and current MIT librarian Phoebe Ayers. An artistic friend of Ayers’, Lara Wiegand, had designed and decorated the car in Seattle but wanted to find a new owner. Klein was intrigued and decided to fly west to check the Civic out.

“I went out there, spent a whole afternoon seeing how she maintained the car and talking about engineering and mechanisms and the logistics of what’s good and bad,” Klein says. “It had already gone through many iterations.”

Klein quickly decided he was up to the task of becoming the new owner. As he drove the car home across the country, it “got a wide range of really cool responses across different parts of the U.S.”

Back in Massachusetts, Klein made a few adjustments: “We painted the hubcaps, we added racing stripes, we added a new generation of laser-etched glass circuits and, you know, I had my own collection of antiquated technology disks that seemed to fit.”

The vanity plate also required a makeover. In Washington state it was “DISKDRV,” but, Klein says, “we had to shave the license plate a bit because there are fewer letters in Massachusetts.”

Today, the car has about 250,000 miles and an Instagram account. “The biggest challenge is just the disks have to be resurfaced, like a lizard, every few years,” says Klein, whose partner, an MIT research scientist, often parks it around campus. “There’s a small collection of love letters for the car. People leave the car notes. It’s very sweet.”

Marking his place in STEM history

Omar Abudayyeh ’12, PhD ’18, a recent McGovern Fellow at the McGovern Institute for Brain Research at MIT who is now an assistant professor at Harvard Medical School, shares an equally riveting story about his vanity plate, “CRISPR,” which adorns his sport utility vehicle.

The plate refers to the genome-editing technique that has revolutionized biological and medical research by enabling rapid changes to genetic material. As an MIT graduate student in the lab of Professor Feng Zhang, a pioneering contributor to CRISPR technologies, Abudayyeh was highly involved in early CRISPR development for DNA and RNA editing. In fact, he and Jonathan Gootenberg ’13, another recent McGovern Fellow and assistant professor at Harvard Medical School who works closely with Abudayyeh, discovered many novel CRISPR enzymes, such as Cas12 and Cas13, and applied these technologies for both gene therapy and CRISPR diagnostics.

So how did Abudayyeh score his vanity plate? It was all due to his attendance at a genome-editing conference in 2022, where another early-stage CRISPR researcher, Samuel Sternberg, showed up in a car with New York “CRISPR” plates. “It became quite a source of discussion at the conference, and at one of the breaks, Sam and his labmates egged us on to get the Massachusetts license plate,” Abudayyeh explains. “I insisted that it must be taken, but I applied anyway, paying the 70 dollars and then receiving a message that I would get a letter eight to 12 weeks later about whether the plate was available or not. I then returned to Boston and forgot about it until a couple months later when, to my surprise, the plate arrived in the mail.”

While Abudayyeh continues his affiliation with the McGovern Institute, he and Gootenberg recently set up a lab at Harvard Medical School as new faculty members. “We have continued to discover new enzymes, such as Cas7-11, that enable new frontiers, such as programmable proteases for RNA sensing and novel therapeutics, and we’ve applied CRISPR technologies for new efforts in gene editing and aging research,” Abudayyeh notes.

As for his license plate, he says, “I’ve seen instances of people posting about it on Twitter or asking about it in Slack channels. A number of times, people have stopped me to say they read the Walter Isaacson book on CRISPR, asking how I was related to it. I would then explain my story — and describe how I’m actually in the book, in the chapters on CRISPR diagnostics.”

Displaying MIT roots, nerd pride

For some, a connection to MIT is all the reason they need to register a vanity plate — or three. Jeffrey Chambers SM ’06, PhD ’14, a graduate of the Department of Aeronautics and Astronautics, shares that he drives with a Virginia license plate touting his “PHD MIT.” Professor of biology Anthony Sinskey ScD ’67 owns several vehicles sporting vanity plates that honor Course 20, which is today the Department of Biological Engineering but was previously known as Food Technology, Nutrition and Food Science, and Applied Biological Sciences. Sinskey says he has both “MIT 20” and “MIT XX” plates in Massachusetts and New Hampshire.

At least two MIT couples have had dual vanity plates. Says Laura Kiessling ’83, professor of chemistry: “My plate is ‘SLEX.’ This is the abbreviation for a carbohydrate called sialyl Lewis X. It has many roles, including a role in fertilization (sperm-egg binding). It tends to elicit many different reactions from people asking me what it means. Unless they are scientists, I say that my husband [Ron Raines ’80, professor of biology] gave it to me as an inside joke. My husband’s license plate is ‘PROTEIN.’”

Professor of the practice emerita Marcia Bartusiak of MIT Comparative Media Studies/Writing and her husband, Stephen Lowe PhD ’88, previously shared a pair of related license plates. When the couple lived in Virginia, with Lowe working as a mathematician on the structure of spiral galaxies and Bartusiak a young science writer focused on astronomy, they had “SPIRAL” and “GALAXY” plates. Now retired in Massachusetts, they no longer have registered vanity plates, but they’ve named their current vehicles “Redshift” and “Blueshift.”

Still other community members have plates that make a nod to their hobbies — such as Department of Earth, Atmospheric and Planetary Sciences and AeroAstro Professor Sara Seager’s “ICANOE” — or else playfully connect with fellow drivers. Julianna Mullen, communications director in the Plasma Science and Fusion Center, says of her “OMGWHY” plate: “It’s just an existential reminder of the importance of scientific inquiry, especially in traffic when someone cuts you off so they can get exactly two car lengths ahead. Oh my God, why did they do it?”

Are you an MIT affiliate with a unique vanity plate? We’d love to see it!

Polina Anikeeva named head of the Department of Materials Science and Engineering

Polina Anikeeva PhD ’09, the Matoula S. Salapatas Professor at MIT, has been named the new head of MIT’s Department of Materials Science and Engineering (DMSE), effective July 1.

“Professor Anikeeva’s passion and dedication as both a researcher and educator, as well as her impressive network of connections across the wider Institute, make her incredibly well suited to lead DMSE,” says Anantha Chandrakasan, chief innovation and strategy officer, dean of engineering, and Vannevar Bush Professor of Electrical Engineering and Computer Science.

In addition to serving as a professor in DMSE, Anikeeva is a professor of brain and cognitive sciences, director of the K. Lisa Yang Brain-Body Center, a member of the McGovern Institute for Brain Research, and associate director of MIT’s Research Laboratory of Electronics.

Anikeeva leads the MIT Bioelectronics Group, which focuses on developing magnetic and optoelectronic tools to study neural communication in health and disease. Her team applies magnetic nanomaterials and fiber-based devices to reveal physiological processes underlying brain-organ communication, with particular focus on gut-brain circuits. Their goal is to develop minimally invasive treatments for a range of neurological, psychiatric, and metabolic conditions.

Anikeeva’s research sits at the intersection of materials chemistry, electronics, and neurobiology. By bridging these disciplines, Anikeeva and her team are deepening our understanding and treatment of complex neurological disorders. Her approach has led to the creation of optoelectronic and magnetic devices that can record neural activity and stimulate neurons during behavioral studies.

Throughout her career, Anikeeva has been recognized with numerous awards for her groundbreaking research. Her honors include an NSF CAREER Award, a DARPA Young Faculty Award, and the Pioneer Award from the NIH’s High-Risk, High-Reward Research Program. MIT Technology Review named her one of its 35 Innovators Under 35, and the Vilcek Foundation awarded her the Prize for Creative Promise in Biomedical Science.

Her impact extends beyond the laboratory and into the classroom, where her dedication to education has earned her the Junior Bose Teaching Award, the MacVicar Faculty Fellowship, and an MITx Prize for Teaching and Learning in MOOCs. Her entrepreneurial spirit was acknowledged with a $100,000 prize in the inaugural MIT Faculty Founders Initiative Prize Competition, recognizing her pioneering work in neuroprosthetics.

In 2023, Anikeeva co-founded Neurobionics Inc., which develops flexible fibers that can interface with the brain — opening new opportunities for sensing and therapeutics. The team has presented their technologies at MIT delta v Demo Day and won $50,000 worth of lab space at the LabCentral Ignite Golden Ticket pitch competition. Anikeeva serves as the company’s scientific advisor.

Anikeeva earned her bachelor’s degree in physics at St. Petersburg State Polytechnic University in Russia. She continued her education at MIT, where she received her PhD in materials science and engineering. Vladimir Bulović, director of MIT.nano and the Fariborz Maseeh Chair in Emerging Technology, served as Anikeeva’s doctoral advisor. After completing a postdoctoral fellowship at Stanford University, working on devices for optical stimulation and recording of neural activity, Anikeeva returned to MIT as a faculty member in 2011.

Anikeeva succeeds Caroline Ross, the Ford Professor of Engineering, who has served as interim department head since August 2023.

“Thanks to Professor Ross’s steadfast leadership, DMSE has continued to thrive during this period of transition. I’m incredibly grateful for her many contributions and long-standing commitment to strengthening the DMSE community,” adds Chandrakasan.

Study reveals how an anesthesia drug induces unconsciousness

There are many drugs that anesthesiologists can use to induce unconsciousness in patients. Exactly how these drugs cause the brain to lose consciousness has been a longstanding question, but MIT neuroscientists have now answered that question for one commonly used anesthesia drug.

Using a novel technique for analyzing neuron activity, the researchers discovered that the drug propofol induces unconsciousness by disrupting the brain’s normal balance between stability and excitability. The drug causes brain activity to become increasingly unstable, until the brain loses consciousness.

“The brain has to operate on this knife’s edge between excitability and chaos.” – Earl K. Miller

“It’s got to be excitable enough for its neurons to influence one another, but if it gets too excitable, it spins off into chaos. Propofol seems to disrupt the mechanisms that keep the brain in that narrow operating range,” says Earl K. Miller, the Picower Professor of Neuroscience and a member of MIT’s Picower Institute for Learning and Memory.

The new findings, reported today in Neuron, could help researchers develop better tools for monitoring patients as they undergo general anesthesia.

Miller and Ila Fiete, a professor of brain and cognitive sciences, the director of the K. Lisa Yang Integrative Computational Neuroscience Center (ICoN), and a member of MIT’s McGovern Institute for Brain Research, are the senior authors of the new study. MIT graduate student Adam Eisen and MIT postdoc Leo Kozachkov are the lead authors of the paper.

Losing consciousness

Propofol is a drug that binds to GABA receptors in the brain, inhibiting neurons that have those receptors. Other anesthesia drugs act on different types of receptors, and the mechanism for how all of these drugs produce unconsciousness is not fully understood.

Miller, Fiete, and their students hypothesized that propofol, and possibly other anesthesia drugs, interfere with a brain state known as “dynamic stability.” In this state, neurons have enough excitability to respond to new input, but the brain is able to quickly regain control and prevent them from becoming overly excited.

Ila Fiete in her lab at the McGovern Institute. Photo: Steph Stevens

Previous studies of how anesthesia drugs affect this balance have found conflicting results: Some suggested that during anesthesia, the brain shifts toward becoming too stable and unresponsive, which leads to loss of consciousness. Others found that the brain becomes too excitable, leading to a chaotic state that results in unconsciousness.

Part of the reason for these conflicting results is that it has been difficult to accurately measure dynamic stability in the brain. Measuring dynamic stability as consciousness is lost would help researchers determine whether unconsciousness results from too much or too little stability.

In this study, the researchers analyzed electrical recordings made in the brains of animals that received propofol over an hour-long period, during which they gradually lost consciousness. The recordings were made in four areas of the brain that are involved in vision, sound processing, spatial awareness, and executive function.

These recordings covered only a tiny fraction of the brain’s overall activity, so to overcome that, the researchers used a technique called delay embedding. This technique allows researchers to characterize dynamical systems from limited measurements by augmenting each measurement with measurements that were recorded previously.
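The study’s full analysis pipeline is not described here, but the core idea of delay embedding is simple to sketch. In this illustrative Python snippet (the function name and parameter choices are hypothetical, not the authors’ code), each sample of a one-dimensional recording is augmented with lagged copies of itself, turning the series into points in a higher-dimensional state space:

```python
import numpy as np

def delay_embed(x, dim=3, lag=1):
    """Stack each sample with `dim - 1` lagged copies of itself,
    turning a 1-D time series into points in a `dim`-dimensional
    reconstructed state space."""
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[i * lag : i * lag + n] for i in range(dim)])

signal = np.sin(np.linspace(0, 20, 200))  # stand-in for a neural recording
states = delay_embed(signal, dim=3, lag=5)
print(states.shape)  # (190, 3): each row pairs a sample with two lagged copies
```

From these reconstructed states, one can then estimate how quickly trajectories return to baseline after a perturbation, which is the kind of stability measure the researchers needed.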

Using this method, the researchers were able to quantify how the brain responds to sensory inputs, such as sounds, or to spontaneous perturbations of neural activity.

In the normal, awake state, neural activity spikes after any input, then returns to its baseline activity level. However, once propofol dosing began, the brain started taking longer to return to its baseline after these inputs, remaining in an overly excited state. This effect became more and more pronounced until the animals lost consciousness.

This suggests that propofol’s inhibition of neuron activity leads to escalating instability, which causes the brain to lose consciousness, the researchers say.

Better anesthesia control

To see if they could replicate this effect in a computational model, the researchers created a simple neural network. When they increased the inhibition of certain nodes in the network, as propofol does in the brain, network activity became destabilized, similar to the unstable activity the researchers saw in the brains of animals that received propofol.

“We looked at a simple circuit model of interconnected neurons, and when we turned up inhibition in that, we saw a destabilization. So, one of the things we’re suggesting is that an increase in inhibition can generate instability, and that is subsequently tied to loss of consciousness,” Eisen says.

As Fiete explains, “This paradoxical effect, in which boosting inhibition destabilizes the network rather than silencing or stabilizing it, occurs because of disinhibition. When propofol boosts the inhibitory drive, this drive inhibits other inhibitory neurons, and the result is an overall increase in brain activity.”
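The authors’ actual network model is not reproduced here, but the paradoxical destabilization can be illustrated with a toy linear rate circuit (every weight below is invented for illustration): an excitatory population E drives two inhibitory populations, I1 inhibits I2, and I2 inhibits E. Scaling up all inhibitory weights by a gain g strengthens the disinhibitory E → I1 → I2 → E loop, and beyond a critical gain the leading eigenvalue crosses zero, meaning the circuit becomes unstable:

```python
import numpy as np

def max_real_eig(g):
    # Linearized rate dynamics dx/dt = A x for populations [E, I2, I1].
    # g scales every inhibitory weight (the "propofol" knob).
    A = np.array([
        [-0.1, -g,   0.0],  # E: leak plus weak self-excitation, inhibited by I2
        [ 1.0, -1.0, -g  ],  # I2: driven by E, inhibited by I1
        [ 1.0,  0.0, -1.0],  # I1: driven by E
    ])
    return np.linalg.eigvals(A).real.max()

for g in [0.5, 1.0, 1.5, 2.0]:
    label = "stable" if max_real_eig(g) < 0 else "UNSTABLE"
    print(f"inhibitory gain {g}: {label}")
```

At low gain the direct negative loop (E → I2 → E) dominates and the circuit is stable; at high gain the two-step disinhibitory loop, whose strength grows as g squared, wins out and activity runs away, mirroring the destabilization described above.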

The researchers suspect that other anesthetic drugs, which act on different types of neurons and receptors, may converge on the same effect through different mechanisms — a possibility that they are now exploring.

If this turns out to be true, it could be helpful to the researchers’ ongoing efforts to develop ways to more precisely control the level of anesthesia that a patient is experiencing. These systems, which Miller is working on with Emery Brown, the Edward Hood Taplin Professor of Medical Engineering at MIT, work by measuring the brain’s dynamics and adjusting drug dosages accordingly in real time.

“If you find common mechanisms at work across different anesthetics, you can make them all safer by tweaking a few knobs, instead of having to develop safety protocols for all the different anesthetics one at a time,” Miller says. “You don’t want a different system for every anesthetic they’re going to use in the operating room. You want one that’ll do it all.”

The researchers also plan to apply their technique for measuring dynamic stability to other brain states, including neuropsychiatric disorders.

“This method is pretty powerful, and I think it’s going to be very exciting to apply it to different brain states, different types of anesthetics, and also other neuropsychiatric conditions like depression and schizophrenia,” Fiete says.

The research was funded by the Office of Naval Research, the National Institute of Mental Health, the National Institute of Neurological Disorders and Stroke, the National Science Foundation Directorate for Computer and Information Science and Engineering, the Simons Center for the Social Brain, the Simons Collaboration on the Global Brain, the JPB Foundation, the McGovern Institute, and the Picower Institute.

A prosthesis driven by the nervous system helps people with amputation walk naturally

State-of-the-art prosthetic limbs can help people with amputations achieve a natural walking gait, but they don’t give the user full neural control over the limb. Instead, they rely on robotic sensors and controllers that move the limb using predefined gait algorithms.

Using a new type of surgical intervention and neuroprosthetic interface, MIT researchers, in collaboration with colleagues from Brigham and Women’s Hospital, have shown that a natural walking gait is achievable using a prosthetic leg fully driven by the body’s own nervous system. The surgical amputation procedure reconnects muscles in the residual limb, which allows patients to receive “proprioceptive” feedback about where their prosthetic limb is in space.

In a study of seven patients who had this surgery, the MIT team found that the participants were able to walk faster, avoid obstacles, and climb stairs much more naturally than people with a traditional amputation.

“This is the first prosthetic study in history that shows a leg prosthesis under full neural modulation, where a biomimetic gait emerges. No one has been able to show this level of brain control that produces a natural gait, where the human’s nervous system is controlling the movement, not a robotic control algorithm,” says Hugh Herr, a professor of media arts and sciences, co-director of the K. Lisa Yang Center for Bionics at MIT, an associate member of MIT’s McGovern Institute for Brain Research, and the senior author of the new study.

Patients also experienced less pain and less muscle atrophy following this surgery, which is known as the agonist-antagonist myoneural interface (AMI). So far, about 60 patients around the world have received this type of surgery, which can also be done for people with arm amputations.

Hyungeun Song, a postdoc in MIT’s Media Lab, is the lead author of the paper, which appears today in Nature Medicine.

Sensory feedback

Most limb movement is controlled by pairs of muscles that take turns stretching and contracting. During a traditional below-the-knee amputation, the interactions of these paired muscles are disrupted. This makes it very difficult for the nervous system to sense the position of a muscle and how fast it’s contracting — sensory information that is critical for the brain to decide how to move the limb.

People with this kind of amputation may have trouble controlling their prosthetic limb because they can’t accurately sense where the limb is in space. Instead, they rely on robotic controllers built into the prosthetic limb. These limbs also include sensors that can detect and adjust to slopes and obstacles.

To try to help people achieve a natural gait under full nervous system control, Herr and his colleagues began developing the AMI surgery several years ago. Instead of severing natural agonist-antagonist muscle interactions, they connect the two ends of the muscles so that they still dynamically communicate with each other within the residual limb. This surgery can be done during a primary amputation, or the muscles can be reconnected after the initial amputation as part of a revision procedure.

“With the AMI amputation procedure, to the greatest extent possible, we attempt to connect native agonists to native antagonists in a physiological way so that after amputation, a person can move their full phantom limb with physiologic levels of proprioception and range of movement,” Herr says.

In a 2021 study, Herr’s lab found that patients who had this surgery were able to more precisely control the muscles of their amputated limb, and that those muscles produced electrical signals similar to those from their intact limb.

After those encouraging results, the researchers set out to explore whether those electrical signals could generate commands for a prosthetic limb and at the same time give the user feedback about the limb’s position in space. The person wearing the prosthetic limb could then use that proprioceptive feedback to volitionally adjust their gait as needed.

In the new Nature Medicine study, the MIT team found this sensory feedback did indeed translate into a smooth, near-natural ability to walk and navigate obstacles.

“Because of the AMI neuroprosthetic interface, we were able to boost that neural signaling, preserving as much as we could. This was able to restore a person’s neural capability to continuously and directly control the full gait, across different walking speeds, stairs, slopes, even going over obstacles,” Song says.

A natural gait

For this study, the researchers compared seven people who had the AMI surgery with seven who had traditional below-the-knee amputations. All of the subjects used the same type of bionic limb: a prosthesis with a powered ankle, as well as electrodes that can sense electromyography (EMG) signals from the tibialis anterior and gastrocnemius muscles. These signals are fed into a robotic controller that helps the prosthesis calculate how much to bend the ankle, how much torque to apply, and how much power to deliver.
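The study’s control algorithm is not detailed here, but the basic idea of mapping an agonist-antagonist EMG pair to an ankle command can be sketched as a toy proportional controller (the function, gain, and scaling below are hypothetical illustrations, not the study’s implementation):

```python
import numpy as np

def ankle_torque(emg_ta, emg_gas, gain=30.0):
    """Toy mapping from rectified, normalized EMG envelopes to a net
    ankle torque command (positive = plantarflexion). Illustrative only."""
    ta = np.clip(emg_ta, 0.0, 1.0)    # tibialis anterior -> dorsiflexion
    gas = np.clip(emg_gas, 0.0, 1.0)  # gastrocnemius -> plantarflexion
    return gain * (gas - ta)

# Strong gastrocnemius activity, as at push-off, yields plantarflexion torque.
cmd = ankle_torque(0.2, 0.8)
print(cmd)
```

The key design point is that the user’s own muscle signals, not a predefined gait algorithm, set the sign and magnitude of the command, which is what allows volitional, continuously modulated control.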

The researchers tested the subjects in several different situations: level-ground walking across a 10-meter pathway, walking up a slope, walking down a ramp, walking up and down stairs, and walking on a level surface while avoiding obstacles.

In all of these tasks, the people with the AMI neuroprosthetic interface were able to walk faster — at about the same rate as people without amputations — and navigate around obstacles more easily. They also showed more natural movements, such as pointing the toes of the prosthesis upward while going up stairs or stepping over an obstacle, and they were better able to coordinate the movements of their prosthetic limb and their intact limb. They were also able to push off the ground with the same amount of force as someone without an amputation.

“With the AMI cohort, we saw natural biomimetic behaviors emerge,” Herr says. “The cohort that didn’t have the AMI, they were able to walk, but the prosthetic movements weren’t natural, and their movements were generally slower.”

These natural behaviors emerged even though the amount of sensory feedback provided by the AMI was less than 20 percent of what would normally be received in people without an amputation.

“One of the main findings here is that a small increase in neural feedback from your amputated limb can restore significant bionic neural controllability, to a point where you allow people to directly neurally control the speed of walking, adapt to different terrain, and avoid obstacles,” Song says.

“This work represents yet another step in us demonstrating what is possible in terms of restoring function in patients who suffer from severe limb injury. It is through collaborative efforts such as this that we are able to make transformational progress in patient care,” says Matthew Carty, a surgeon at Brigham and Women’s Hospital and associate professor at Harvard Medical School, who is also an author of the paper.

Enabling neural control by the person using the limb is a step toward Herr’s lab’s goal of “rebuilding human bodies,” rather than having people rely on ever more sophisticated robotic controllers and sensors — tools that are powerful but do not feel like part of the user’s body.

“The problem with that long-term approach is that the user would never feel embodied with their prosthesis. They would never view the prosthesis as part of their body, part of self,” Herr says. “The approach we’re taking is trying to comprehensively connect the brain of the human to the electromechanics.”

The research was funded by the MIT K. Lisa Yang Center for Bionics and the Eunice Kennedy Shriver National Institute of Child Health and Human Development.

A new strategy to cope with emotional stress

Some people, especially those in public service, perform admirable feats—healthcare workers fighting to keep patients alive, or first responders arriving at the scene of a car crash. But the emotional weight of these experiences can become a mental burden. Research has shown that emergency personnel are at elevated risk for mental health challenges like post-traumatic stress disorder. How can people undergo such stressful experiences and still maintain their well-being?

A new study from the McGovern Institute reveals that a cognitive strategy focused on social good may be effective in helping people cope with distressing events. The research team found that the approach was comparable to another well-established emotion regulation strategy, unlocking a new tool for dealing with highly adverse situations.

“How you think can improve how you feel.”
– John Gabrieli

“This research suggests that the social good approach might be particularly useful in improving well-being for those constantly exposed to emotionally taxing events,” says John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology and a professor of brain and cognitive sciences at MIT, who is a senior author of the paper.

The study, published today in PLOS ONE, is the first to examine the efficacy of this cognitive strategy. Nancy Tsai, a postdoctoral research scientist in Gabrieli’s lab at the McGovern Institute, is the lead author of the paper.

Emotion regulation tools

Emotion regulation is the ability to mentally reframe how we experience emotions—a skill critical to maintaining good mental health. Doing so can make one feel better when dealing with adverse events, and emotion regulation has been shown to boost emotional, social, cognitive, and physiological outcomes across the lifespan.

MIT postdoctoral researcher Nancy Tsai. Photo: Steph Stevens

One emotion regulation strategy is “distancing,” where a person copes with a negative event by imagining it as happening far away, a long time ago, or from a third-person perspective. Distancing has been well-documented as a useful cognitive tool, but it may be less effective in certain situations, especially ones that are socially charged—like a firefighter rescuing a family from a burning home. Rather than distancing themselves, a person may instead be forced to engage directly with the situation.

“In these cases, the ‘social good’ approach may be a powerful alternative,” says Tsai. “When a person uses the social good method, they view a negative situation as an opportunity to help others or prevent further harm.” For example, a firefighter experiencing emotional distress might focus on the fact that their work enables them to save lives. The idea had yet to be backed by scientific investigation, so Tsai and her team, alongside Gabrieli, saw an opportunity to rigorously probe this strategy.

A novel study

The MIT researchers recruited a cohort of adults and had them complete a questionnaire to gather information including demographics, personality traits, and current well-being, as well as how they regulated their emotions and dealt with stress. The cohort was randomly split into two groups: a distancing group and a social good group. In the online study, each group was shown a series of images that were either neutral (such as fruit) or contained highly aversive content (such as bodily injury). Participants were fully informed of the types of images they might see and could opt out of the study at any time.

Each group was asked to use their assigned cognitive strategy to respond to half of the negative images. For example, while looking at a distressing image, a person in the distancing group could have imagined that it was a screenshot from a movie. Conversely, a subject in the social good group might have responded to the image by envisioning that they were a first responder saving people from harm. For the other half of the negative images, participants were asked to only look at them and pay close attention to their emotions. The researchers asked the participants how they felt after each image was shown.

Social good as a potent strategy

The MIT team found that distancing and social good approaches helped diminish negative emotions. Participants reported feeling better when they used these strategies after viewing adverse content compared to when they did not and stated that both strategies were easy to implement.

The results also revealed that, overall, distancing yielded a stronger effect. Importantly, however, Tsai and Gabrieli believe this study offers compelling evidence for social good as a powerful method better suited to situations in which people cannot distance themselves, such as rescuing someone from a car crash, “which is more probable for people in the real world,” notes Tsai. Moreover, the team discovered that people who most successfully used the social good approach were more likely to view stress as enhancing rather than debilitating. Tsai says this link may point to psychological mechanisms that underlie both emotion regulation and how people respond to stress.

“The social good approach may be a potent strategy to combat the immense emotional demands of certain professions.”
– John Gabrieli

Additionally, the results showed that older adults used the cognitive strategies more effectively than younger adults. The team suspects that this is because, as prior research has shown, older adults are more adept at regulating their emotions, likely owing to greater life experience. The authors note that successful emotion regulation also requires cognitive flexibility, or having a malleable mindset to adapt well to different situations.

“This is not to say that people, such as physicians, should reframe their emotions to the point where they fully detach themselves from negative situations,” says Gabrieli. “But our study shows that the social good approach may be a potent strategy to combat the immense emotional demands of certain professions.”

The MIT team says that future studies are needed to further validate this work. Such research is promising because it can uncover new cognitive tools that equip individuals to take care of themselves as they bravely assume the challenge of taking care of others.

What is language for?

Language is a defining feature of humanity, and for centuries, philosophers and scientists have contemplated its true purpose. We use language to share information and exchange ideas—but is it more than that? Do we use language not just to communicate, but to think?

In the June 19, 2024, issue of the journal Nature, McGovern Institute neuroscientist Evelina Fedorenko and colleagues argue that we do not. Language, they say, is primarily a tool for communication.

Fedorenko acknowledges that there is an intuitive link between language and thought. Many people experience an inner voice that seems to narrate their own thoughts. And it’s not unreasonable to expect that well-spoken, articulate individuals are also clear thinkers. But as compelling as these associations can be, they are not evidence that we actually use language to think.

“I think there are a few strands of intuition and confusions that have led people to believe very strongly that language is the medium of thought,” she says. “But when they are pulled apart thread by thread, they don’t really hold up to empirical scrutiny.”

Separating language and thought

For centuries, language’s potential role in facilitating thinking was nearly impossible to evaluate scientifically.

McGovern Investigator Ev Fedorenko in the Martinos Imaging Center at MIT. Photo: Caitlin Cunningham

But neuroscientists and cognitive scientists now have tools that enable a more rigorous consideration of the idea. Evidence from both fields, which Fedorenko, MIT cognitive scientist and linguist Edward Gibson, and University of California Berkeley cognitive scientist Steven Piantadosi review in their Nature Perspective, supports the idea that language is a tool for communication, not for thought.

“What we’ve learned by using methods that actually tell us about the engagement of the linguistic processing mechanisms is that those mechanisms are not really engaged when we think,” Fedorenko says. Also, she adds, “you can take those mechanisms away, and it seems that thinking can go on just fine.”

Over the past 20 years, Fedorenko and other neuroscientists have advanced our understanding of what happens in the brain as it generates and understands language. Now, using functional MRI to find parts of the brain that are specifically engaged when someone reads or listens to sentences or passages, they can reliably identify an individual’s language-processing network. Then they can monitor those brain regions while the person performs other tasks, from solving a sudoku puzzle to reasoning about other people’s beliefs.

“Your language system is basically silent when you do all sorts of thinking.” – Ev Fedorenko

“Pretty much everything we’ve tested so far, we don’t see any evidence of the engagement of the language mechanisms,” Fedorenko says. “Your language system is basically silent when you do all sorts of thinking.”

That’s consistent with observations from people who have lost the ability to process language due to an injury or stroke. Severely affected patients can be completely unable to process words, yet this does not interfere with their ability to solve math problems, play chess, or plan for future events. “They can do all the things that they could do before their injury. They just can’t take those mental representations and convert them into a format which would allow them to talk about them with others,” Fedorenko says. “If language gives us the core representations that we use for reasoning, then…destroying the language system should lead to problems in thinking as well, and it really doesn’t.”

Conversely, intellectual impairments do not always associate with language impairment; people with intellectual disability disorders or neuropsychiatric disorders that limit their ability to think and reason do not necessarily have problems with basic linguistic functions. Just as language does not appear to be necessary for thought, Fedorenko and colleagues conclude that it is also not sufficient to produce clear thinking.

Language optimization

In addition to arguing that language is unlikely to be used for thinking, the scientists considered its suitability as a communication tool, drawing on findings from linguistic analyses. Analyses across dozens of diverse languages, both spoken and signed, have found recurring features that make them easy to produce and understand. “It turns out that pretty much any property you look at, you can find evidence that languages are optimized in a way that makes information transfer as efficient as possible,” Fedorenko says.

That’s not a new idea, but it has held up as linguists analyze larger corpora across more diverse sets of languages, which has become possible in recent years as the field has assembled corpora that are annotated for various linguistic features. Such studies find that across languages, sounds and words tend to be pieced together in ways that minimize effort for the language producer without muddling the message. For example, commonly used words tend to be short, while words whose meanings depend on one another tend to cluster close together in sentences. Likewise, linguists have noted features that help languages convey meaning despite potential “signal distortions,” whether due to attention lapses or ambient noise.

“All of these features seem to suggest that the forms of languages are optimized to make communication easier,” Fedorenko says, pointing out that such features would be irrelevant if language was primarily a tool for internal thought.

“Given that languages have all these properties, it’s likely that we use language for communication,” she says. She and her coauthors conclude that as a powerful tool for transmitting knowledge, language reflects the sophistication of human cognition—but does not give rise to it.

Symposium highlights scale of mental health crisis and novel methods of diagnosis and treatment

Digital technologies, such as smartphones and machine learning, have revolutionized education. At the McGovern Institute for Brain Research’s 2024 Spring Symposium, “Transformational Strategies in Mental Health,” experts from across the sciences — including psychiatry, psychology, neuroscience, computer science, and others — agreed that these technologies could also play a significant role in advancing the diagnosis and treatment of mental health disorders and neurological conditions.

Co-hosted by the McGovern Institute, MIT Open Learning, McLean Hospital, the Poitras Center for Psychiatric Disorders Research at MIT, and the Wellcome Trust, the symposium raised the alarm about the rise in mental health challenges and showcased the potential for novel diagnostic and treatment methods.

“We have to do something together as a community of scientists and partners of all kinds to make a difference.” – John Gabrieli

John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology at MIT, kicked off the symposium with a call for an effort on par with the Manhattan Project, which in the 1940s saw leading scientists collaborate to do what seemed impossible. While the challenge of mental health is quite different, Gabrieli stressed, the complexity and urgency of the issue are similar. In his later talk, “How can science serve psychiatry to enhance mental health?,” he noted a 35 percent rise in teen suicide deaths between 1999 and 2020 and, between 2007 and 2015, a 100 percent increase in emergency room visits for youths ages 5 to 18 who experienced a suicide attempt or suicidal ideation.

“We have no moral ambiguity, but all of us speaking today are having this meeting in part because we feel this urgency,” said Gabrieli, who is also a professor of brain and cognitive sciences, the director of the Integrated Learning Initiative (MITili) at MIT Open Learning, and a member of the McGovern Institute. “We have to do something together as a community of scientists and partners of all kinds to make a difference.”

An urgent problem

In 2021, U.S. Surgeon General Vivek Murthy issued an advisory on the increase in mental health challenges in youth; in 2023, he issued another, warning of the effects of social media on youth mental health. At the symposium, Susan Whitfield-Gabrieli, a research affiliate at the McGovern Institute and a professor of psychology and director of the Biomedical Imaging Center at Northeastern University, cited these recent advisories, saying they underscore the need to “innovate new methods of intervention.”

Other symposium speakers also highlighted evidence of growing mental health challenges for youth and adolescents. Christian Webb, associate professor of psychology at Harvard Medical School, stated that by the end of adolescence, 15-20 percent of teens will have experienced at least one episode of clinical depression, with girls facing the highest risk. Most teens who experience depression receive no treatment, he added.

Adults who experience mental health challenges need new interventions, too. John Krystal, the Robert L. McNeil Jr. Professor of Translational Research and chair of the Department of Psychiatry at Yale University School of Medicine, pointed to the limited efficacy of antidepressants, which typically take about two months to have an effect on the patient. Patients with treatment-resistant depression face a 75 percent likelihood of relapse within a year of starting antidepressants. Treatments for other mental health disorders, including bipolar and psychotic disorders, have serious side effects that can deter patients from adherence, said Virginie-Anne Chouinard, director of research at McLean OnTrack™, a program for first-episode psychosis at McLean Hospital.

New treatments, new technologies

Emerging technologies, including smartphone technology and artificial intelligence, are key to the interventions that symposium speakers shared.

In a talk on AI and the brain, Dina Katabi, the Thuan and Nicole Pham Professor of Electrical Engineering and Computer Science at MIT, discussed novel ways to detect Parkinson’s and Alzheimer’s, among other diseases. Early-stage research involved developing devices that can analyze how movement within a space impacts the surrounding electromagnetic field, as well as how wireless signals can detect breathing and sleep stages.

“I realize this may sound like la-la land,” Katabi said. “But it’s not! This device is used today by real patients, enabled by a revolution in neural networks and AI.”

Parkinson’s disease often cannot be diagnosed until significant impairment has already occurred. In a set of studies, Katabi’s team collected data on nocturnal breathing and trained a custom neural network to detect occurrences of Parkinson’s. They found the network was over 90 percent accurate in its detection. Next, the team used AI to analyze two sets of breathing data collected from patients at a six-year interval. Could their custom neural network identify patients who did not have a Parkinson’s diagnosis on the first visit, but subsequently received one? The answer was largely yes: Machine learning identified 75 percent of patients who would go on to receive a diagnosis.

Detecting high-risk patients at an early stage could make a substantial difference for intervention and treatment. Similarly, research by Jordan Smoller, professor of psychiatry at Harvard Medical School and director of the Center for Precision Psychiatry at Massachusetts General Hospital, demonstrated that an AI-aided suicide risk prediction model could detect 45 percent of suicide attempts or deaths with 90 percent specificity, about two to three years in advance.
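The two figures Smoller reported describe different rates from a classifier's confusion matrix: sensitivity (the share of true cases detected) and specificity (the share of non-cases correctly cleared). A minimal sketch with made-up counts, purely to illustrate the definitions rather than the actual model:

```python
# Illustrative only: how 45% sensitivity and 90% specificity arise
# from a confusion matrix. The counts below are hypothetical,
# not data from the study described above.

def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Toy numbers: 100 of 220 actual cases flagged (sensitivity ~0.45),
# 900 of 1,000 non-cases correctly cleared (specificity 0.90).
sens, spec = sensitivity_specificity(tp=100, fn=120, tn=900, fp=100)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
# → sensitivity=0.45, specificity=0.90
```

The two rates trade off against each other through the model's decision threshold, which is why both are reported together.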

Other presentations, including a series of lightning talks, shared new and emerging treatments, such as the use of ketamine to treat depression; the use of smartphones, including daily text surveys and mindfulness apps, in treating depression in adolescents; metabolic interventions for psychotic disorders; the use of machine learning to detect impairment from THC intoxication; and family-focused treatment, rather than individual therapy, for youth depression.

Advancing understanding

The frequency and severity of adverse mental health events for children, adolescents, and adults demonstrate the necessity of funding for mental health research — and the open sharing of these findings.

Niall Boyce, head of mental health field building at the Wellcome Trust — a global charitable foundation dedicated to using science to solve urgent health challenges — outlined the foundation’s funding philosophy of supporting research that is “collaborative, coherent, and focused” and centers on “What is most important to those most affected?” Wellcome research managers Anum Farid and Tayla McCloud stressed the importance of projects that involve people with lived experience of mental health challenges and “blue sky thinking” that takes risks and can advance understanding in innovative ways. Wellcome requires that all published research resulting from its funding be open and accessible in order to maximize its benefits.

Whether through therapeutic models, pharmaceutical treatments, or machine learning, symposium speakers agreed that transformative approaches to mental health call for collaboration and innovation.

“Understanding mental health requires us to understand the unbelievable diversity of humans,” Gabrieli said. “We have to use all the tools we have now to develop new treatments that will work for people for whom our conventional treatments don’t.”