MIT scientists learn how to control muscles with light

Neuroprosthetic systems that artificially stimulate muscle contraction with electrical current can help people with paralysis or amputation regain limb function. However, despite many years of research, this type of prosthesis is not widely used because it leads to rapid muscle fatigue and poor control.

McGovern Institute Associate Investigator Hugh Herr. Photo: Jimmy Day / MIT Media Lab

MIT researchers have developed a new approach that they hope could someday offer better muscle control with less fatigue. Instead of using electricity to stimulate muscles, they used light. In a study in mice, the researchers showed that this optogenetic technique offers more precise muscle control, along with a dramatic decrease in fatigue.

“It turns out that by using light, through optogenetics, one can control muscle more naturally. In terms of clinical application, this type of interface could have very broad utility,” says Hugh Herr, a professor of media arts and sciences, co-director of the K. Lisa Yang Center for Bionics at MIT, and an associate member of MIT’s McGovern Institute for Brain Research.

Optogenetics is a method based on genetically engineering cells to express light-sensitive proteins, which allows researchers to control the activity of those cells by exposing them to light. This approach is currently not feasible in humans, but Herr, MIT graduate student Guillermo Herrera-Arcos, and their colleagues at the K. Lisa Yang Center for Bionics are now working on ways to deliver light-sensitive proteins safely and effectively into human tissue.

Herr is the senior author of the study, which appears today in Science Robotics. Herrera-Arcos is the lead author of the paper.

Optogenetic control

For decades, researchers have been exploring the use of functional electrical stimulation (FES) to control muscles in the body. This method involves implanting electrodes that stimulate nerve fibers, causing a muscle to contract. However, this stimulation tends to activate the entire muscle at once, which is not the way that the human body naturally controls muscle contraction.

“Humans have this incredible control fidelity that is achieved by a natural recruitment of the muscle, where small motor units, then moderate-sized, then large motor units are recruited, in that order, as signal strength is increased,” Herr says. “With FES, when you artificially blast the muscle with electricity, the largest units are recruited first. So, as you increase signal, you get no force at the beginning, and then suddenly you get too much force.”

This large force not only makes it harder to achieve fine muscle control, but also wears out the muscle quickly, within five or 10 minutes.
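To make the recruitment-order contrast concrete, here is a toy numerical sketch in Python (an illustration with invented motor-unit sizes, not a model from the paper). Recruiting small units first gives a finely graded force, while recruiting the largest units first, as FES tends to do, means that even a small amount of drive produces a large, coarse jump in force:

```python
import numpy as np

# Toy illustration (invented numbers): why recruitment order matters.
# Natural recruitment activates small motor units first, so force rises
# smoothly; FES tends to recruit the largest units first, so force
# arrives in big, hard-to-control jumps.

rng = np.random.default_rng(0)
unit_forces = np.sort(rng.lognormal(mean=0.0, sigma=1.0, size=100))  # small -> large

def total_force(drive, recruitment_order):
    """Sum the forces of the first `drive` fraction of units recruited."""
    n = int(drive * len(recruitment_order))
    return recruitment_order[:n].sum()

natural = unit_forces          # small units first (the size principle)
fes = unit_forces[::-1]        # largest units first (reverse recruitment)

for drive in (0.1, 0.3, 0.5):
    print(f"drive {drive:.0%}: natural {total_force(drive, natural):7.1f}, "
          f"FES {total_force(drive, fes):7.1f}")
```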

The MIT team wanted to see if they could replace that entire interface with something different. Instead of electrodes, they decided to try controlling muscle contraction with light, using optogenetics.

“This could lead to a minimally invasive strategy that would change the game in terms of clinical care for persons suffering from limb pathology,” Hugh Herr says, pictured on left next to Herrera-Arcos.

Using mice as an animal model, the researchers compared the amount of muscle force they could generate using the traditional FES approach with forces generated by their optogenetic method. For the optogenetic studies, they used mice that had already been genetically engineered to express a light-sensitive protein called channelrhodopsin-2. They implanted a small light source near the tibial nerve, which controls muscles of the lower leg.

The researchers measured muscle force as they gradually increased the amount of light stimulation, and found that, unlike FES, optogenetic control produced a steady, gradual increase in contraction of the muscle.

“As we change the optical stimulation that we deliver to the nerve, we can proportionally, in an almost linear way, control the force of the muscle. This is similar to how the signals from our brain control our muscles. Because of this, it becomes easier to control the muscle compared with electrical stimulation,” Herrera-Arcos says.

Fatigue resistance

Using data from those experiments, the researchers created a mathematical model of optogenetic muscle control. This model relates the amount of light going into the system to the output of the muscle (how much force is generated).

This mathematical model allowed the researchers to design a closed-loop controller. In this type of system, the controller delivers a stimulatory signal, and after the muscle contracts, a sensor can detect how much force the muscle is exerting. This information is sent back to the controller, which calculates if, and how much, the light stimulation needs to be adjusted to reach the desired force.
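The article does not publish the controller itself, but the measure-compare-adjust cycle it describes can be sketched in a few lines. Below is a minimal sketch under stated assumptions: a generic proportional-integral (PI) feedback rule and an invented, saturating light-to-force curve standing in for the muscle; the paper's actual controller and muscle model are more sophisticated.

```python
import numpy as np

# Minimal closed-loop sketch (invented gains and muscle model): adjust the
# light intensity until the measured force matches the desired force.

def muscle_force(light):
    """Hypothetical light-to-force map: roughly linear, then saturating."""
    return 10.0 * np.tanh(0.2 * light)

target = 6.0                 # desired force (arbitrary units)
light = 0.0                  # light-intensity command
integral = 0.0               # accumulated error for the integral term
kp, ki, dt = 0.5, 2.0, 0.01  # illustrative controller gains and time step

for _ in range(500):
    force = muscle_force(light)            # sensor reads the actual force
    error = target - force                 # compare with the desired force
    integral += error * dt                 # integrate the remaining error
    light = max(0.0, kp * error + ki * integral)  # adjust the stimulation

print(f"final force {muscle_force(light):.2f} vs target {target}")
```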

Using this type of control, the researchers found that muscles could be stimulated for more than an hour before fatiguing, whereas muscles fatigued after only 15 minutes with FES.

One hurdle the researchers are now working to overcome is how to safely deliver light-sensitive proteins into human tissue. Several years ago, Herr’s lab reported that in rats, these proteins can trigger an immune response that inactivates the proteins and could also lead to muscle atrophy and cell death.

“A key objective of the K. Lisa Yang Center for Bionics is to solve that problem,” Herr says. “A multipronged effort is underway to design new light-sensitive proteins, and strategies to deliver them, without triggering an immune response.”

As additional steps toward reaching human patients, Herr’s lab is also working on new sensors that can be used to measure muscle force and length, as well as new ways to implant the light source. If successful, the researchers hope their strategy could benefit people who have experienced strokes, limb amputation, and spinal cord injuries, as well as others who have impaired ability to control their limbs.

“This could lead to a minimally invasive strategy that would change the game in terms of clinical care for persons suffering from limb pathology,” Herr says.

The research was funded by the K. Lisa Yang Center for Bionics at MIT.

Five MIT faculty elected to the National Academy of Sciences for 2024

The National Academy of Sciences has elected 120 members and 24 international members, including five faculty members from MIT. Guoping Feng, Piotr Indyk, Daniel J. Kleitman, Daniela Rus, and Senthil Todadri were elected in recognition of their “distinguished and continuing achievements in original research.” Membership in the National Academy of Sciences is one of the highest honors a scientist can receive.

Among the new members added this year are nine MIT alumni: Zvi Bern ’82; Harold Hwang ’93, SM ’93; Leonard Kleinrock SM ’59, PhD ’63; Jeffrey C. Lagarias ’71, SM ’72, PhD ’74; Ann Pearson PhD ’00; Robin Pemantle PhD ’88; Jonas C. Peters PhD ’98; Lynn Talley PhD ’82; and Peter T. Wolczanski ’76. Those elected this year bring the total number of active members to 2,617, with 537 international members.

The National Academy of Sciences is a private, nonprofit institution that was established under a congressional charter signed by President Abraham Lincoln in 1863. It recognizes achievement in science by election to membership, and — with the National Academy of Engineering and the National Academy of Medicine — provides science, engineering, and health policy advice to the federal government and other organizations.

Guoping Feng

Guoping Feng is the James W. (1963) and Patricia T. Poitras Professor in the Department of Brain and Cognitive Sciences. He is also associate director and investigator in the McGovern Institute for Brain Research, a member of the Broad Institute of MIT and Harvard, and director of the Hock E. Tan and K. Lisa Yang Center for Autism Research.

His research focuses on understanding the molecular mechanisms that regulate the development and function of synapses, the places in the brain where neurons connect and communicate. He’s interested in how defects in the synapses can contribute to psychiatric and neurodevelopmental disorders. By understanding the fundamental mechanisms behind these disorders, he’s producing foundational knowledge that may guide the development of new treatments for conditions like obsessive-compulsive disorder and schizophrenia.

Feng received his medical training at Zhejiang University Medical School in Hangzhou, China, and his PhD in molecular genetics from the State University of New York at Buffalo. He did his postdoctoral training at Washington University in St. Louis and was on the faculty at Duke University School of Medicine before coming to MIT in 2010. He is a member of the American Academy of Arts and Sciences, a fellow of the American Association for the Advancement of Science, and was elected to the National Academy of Medicine in 2023.

Piotr Indyk

Piotr Indyk is the Thomas D. and Virginia W. Cabot Professor of Electrical Engineering and Computer Science. He received his magister degree from the University of Warsaw and his PhD from Stanford University before coming to MIT in 2000.

Indyk’s research focuses on building efficient, sublinear, and streaming algorithms. He’s developed, for example, algorithms that can use limited time and space to navigate massive data streams, that can separate signals into individual frequencies faster than other methods, and that can address the “nearest neighbor” problem by finding highly similar data points without needing to scan an entire database. His work has applications in everything from machine learning to data mining.

He has been named a Simons Investigator and a fellow of the Association for Computing Machinery. In 2023, he was elected to the American Academy of Arts and Sciences.

Daniel J. Kleitman

Daniel Kleitman, a professor emeritus of applied mathematics, has been at MIT since 1966. He received his undergraduate degree from Cornell University and his master’s and PhD in physics from Harvard University before doing postdoctoral work at Harvard and the Niels Bohr Institute in Copenhagen, Denmark.

Kleitman’s research interests include operations research, genomics, graph theory, and combinatorics, the area of math concerned with counting. He was actually a professor of physics at Brandeis University before changing his field to math, encouraged by the prolific mathematician Paul Erdős. In fact, Kleitman has the rare distinction of having an Erdős number of just one. The number is a measure of the “collaborative distance” between a mathematician and Erdős in terms of authorship of papers, and studies have shown that leading mathematicians have particularly low numbers.

He’s a member of the American Academy of Arts and Sciences and has made important contributions to the MIT community throughout his career. He was head of the Department of Mathematics and served on a number of committees, including the Applied Mathematics Committee. He also helped create web-based technology and an online textbook for several of the department’s core undergraduate courses. He was even a math advisor for the MIT-based film “Good Will Hunting.”

Daniela Rus

Daniela Rus, the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science, is the director of the Computer Science and Artificial Intelligence Laboratory (CSAIL). She also serves as director of the Toyota-CSAIL Joint Research Center.

Her research on robotics, artificial intelligence, and data science is geared toward understanding the science and engineering of autonomy. Her ultimate goal is to create a future where machines are seamlessly integrated into daily life to support people with cognitive and physical tasks, and deployed in a way that ensures they benefit humanity. She’s working to increase the ability of machines to reason, learn, and adapt to complex tasks in human-centered environments with applications for agriculture, manufacturing, medicine, construction, and other industries. She’s also interested in creating new tools for designing and fabricating robots and in improving the interfaces between robots and people, and she’s done collaborative projects at the intersection of technology and artistic performance.

Rus received her undergraduate degree from the University of Iowa and her PhD in computer science from Cornell University. She was a professor of computer science at Dartmouth College before coming to MIT in 2004. She is part of the Class of 2002 MacArthur Fellows; was elected to the National Academy of Engineering and the American Academy of Arts and Sciences; and is a fellow of the Association for Computing Machinery, the Institute of Electrical and Electronics Engineers, and the Association for the Advancement of Artificial Intelligence.

Senthil Todadri

Senthil Todadri, a professor of physics, came to MIT in 2001. He received his undergraduate degree from the Indian Institute of Technology in Kanpur and his PhD from Yale University before working as a postdoc at the Kavli Institute for Theoretical Physics in Santa Barbara, California.

Todadri’s research focuses on condensed matter theory. He’s interested in novel phases and phase transitions of quantum matter that expand beyond existing paradigms. Combining modeling, experiments, and abstract methods, he’s working to develop a theoretical framework for describing the physics of these systems. Much of that work involves understanding the phenomena that arise because of impurities or strong interactions between electrons in solids that don’t conform with conventional physical theories. He also pioneered the theory of deconfined quantum criticality, which describes a class of phase transitions, and he discovered dualities of quantum field theories in two-dimensional superconducting states, which have important applications to many problems in the field.

Todadri has been named a Simons Investigator, a Sloan Research Fellow, and a fellow of the American Physical Society. In 2023, he was elected to the American Academy of Arts and Sciences.

Using MRI, engineers have found a way to detect light deep in the brain

Scientists often label cells with proteins that glow, allowing them to track the growth of a tumor, or measure changes in gene expression that occur as cells differentiate.

Alan Jasanoff, associate member of the McGovern Institute, and a professor of brain and cognitive sciences, biological engineering, and nuclear science and engineering at MIT. Photo: Justin Knight

While this technique works well in cells and some tissues of the body, it has been difficult to apply to structures deep within the brain, because the light scatters too much before it can be detected.

MIT engineers have now come up with a novel way to detect this type of light, known as bioluminescence, in the brain: They engineered blood vessels of the brain to express a protein that causes them to dilate in the presence of light. That dilation can then be observed with magnetic resonance imaging (MRI), allowing researchers to pinpoint the source of light.

“A well-known problem that we face in neuroscience, as well as other fields, is that it’s very difficult to use optical tools in deep tissue. One of the core objectives of our study was to come up with a way to image bioluminescent molecules in deep tissue with reasonably high resolution,” says Alan Jasanoff, an MIT professor of biological engineering, brain and cognitive sciences, and nuclear science and engineering.

The new technique developed by Jasanoff and his colleagues could enable researchers to explore the inner workings of the brain in more detail than has previously been possible.

Jasanoff, who is also an associate investigator at MIT’s McGovern Institute for Brain Research, is the senior author of the study, which appears today in Nature Biomedical Engineering. Former MIT postdocs Robert Ohlendorf and Nan Li are the lead authors of the paper.

Detecting light

Bioluminescent proteins are found in many organisms, including jellyfish and fireflies. Scientists use these proteins to label specific proteins or cells, whose glow can be detected by a luminometer. One of the proteins often used for this purpose is luciferase, which comes in a variety of forms that glow in different colors.

Jasanoff’s lab, which specializes in developing new ways to image the brain using MRI, wanted to find a way to detect luciferase deep within the brain. To achieve that, they came up with a method for transforming the blood vessels of the brain into light detectors. A popular form of MRI works by imaging changes in blood flow in the brain, so the researchers engineered the blood vessels themselves to respond to light by dilating.

“Blood vessels are a dominant source of imaging contrast in functional MRI and other non-invasive imaging techniques, so we thought we could convert the intrinsic ability of these techniques to image blood vessels into a means for imaging light, by photosensitizing the blood vessels themselves,” Jasanoff says.

“We essentially turn the vasculature of the brain into a three-dimensional camera.” – Alan Jasanoff

To make the blood vessels sensitive to light, the researchers engineered them to express a bacterial protein called Beggiatoa photoactivated adenylate cyclase (bPAC). When exposed to light, this enzyme produces a molecule called cAMP, which causes blood vessels to dilate. When blood vessels dilate, it alters the balance of oxygenated and deoxygenated hemoglobin, which have different magnetic properties. This shift in magnetic properties can be detected by MRI.

bPAC responds specifically to blue light, which has a short wavelength, so it detects only light generated within close range. The researchers used a viral vector to deliver the gene for bPAC specifically to the smooth muscle cells that make up blood vessels. When this vector was injected in rats, blood vessels throughout a large area of the brain became light-sensitive.

“Blood vessels form a network in the brain that is extremely dense. Every cell in the brain is within a couple dozen microns of a blood vessel,” Jasanoff says. “The way I like to describe our approach is that we essentially turn the vasculature of the brain into a three-dimensional camera.”

Once the blood vessels were sensitized to light, the researchers implanted cells that had been engineered to express luciferase, which emits light when a substrate called CZT is present. In the rats, the researchers were able to detect the luciferase by imaging the brain with MRI, which revealed dilated blood vessels.

Tracking changes in the brain

The researchers then tested whether their technique could detect light produced by the brain’s own cells, if they were engineered to express luciferase. They delivered the gene for a type of luciferase called GLuc to cells in a deep brain region known as the striatum. When the CZT substrate was injected into the animals, MRI revealed the sites where light had been emitted.

This technique, which the researchers dubbed bioluminescence imaging using hemodynamics, or BLUsH, could be used in a variety of ways to help scientists learn more about the brain, Jasanoff says.

For one, it could be used to map changes in gene expression, by linking the expression of luciferase to a specific gene. This could help researchers observe how gene expression changes during embryonic development and cell differentiation, or when new memories form. Luciferase could also be used to map anatomical connections between cells or to reveal how cells communicate with each other.

The researchers now plan to explore some of those applications, as well as to adapt the technique for use in mice and other animal models.

The research was funded by the U.S. National Institutes of Health, the G. Harold and Leila Y. Mathers Foundation, Lore Harp McGovern, Gardner Hendrie, a fellowship from the German Research Foundation, a Marie Sklodowska-Curie Fellowship from the European Union, and a Y. Eva Tan Fellowship and a J. Douglas Tan Fellowship, both from the McGovern Institute for Brain Research.

Women in STEM — A celebration of excellence and curiosity

What better way to commemorate Women’s History Month and International Women’s Day than to give three of the world’s most accomplished scientists an opportunity to talk about their careers? On March 7, MindHandHeart invited professors Paula Hammond, Ann Graybiel, and Sangeeta Bhatia to share their career journeys, from the progress they have witnessed to the challenges they have faced as women in STEM. Their conversation was moderated by Mary Fuller, chair of the faculty and professor of literature.

Hammond, an Institute professor with appointments in the Department of Chemical Engineering and the Koch Institute for Integrative Cancer Research, reflected on the strides made by women faculty at MIT, while acknowledging ongoing challenges. “I think that we have advanced a great deal in the last few decades in terms of the numbers of women who are present, although we still have a long way to go,” Hammond noted in her opening. “We’ve seen a remarkable increase over the past couple of decades in our undergraduate population here at MIT, and now we’re beginning to see it in the graduate population, which is really exciting.” Hammond was recently appointed to the role of vice provost for faculty.

Ann Graybiel, also an Institute professor, who has appointments in the Department of Brain and Cognitive Sciences and the McGovern Institute for Brain Research, described growing up in the Deep South. “Girls can’t do science,” she remembers being told in school, and they “can’t do research.” Yet her father, a physician scientist, often took her with him to work and had her assist from a young age, eventually encouraging her directly to pursue a career in science. Graybiel, who first came to MIT in 1973, noted that she continued to face barriers and rejection throughout her career long after leaving the South, but that individual gestures of inspiration, generosity, or simple statements of “You can do it” from her peers helped her power through and continue in her scientific pursuits.

Sangeeta Bhatia, the John and Dorothy Wilson Professor of Health Sciences and Technology and Electrical Engineering and Computer Science, director of the Marble Center for Cancer Nanomedicine at the Koch Institute for Integrative Cancer Research, and a member of the Institute for Medical Engineering and Science, is also the mother of two teenage girls. She shared her perspective on balancing career and family life: “I wanted to pick up my kids from school and I wanted to know their friends. … I had a vision for the life that I wanted.” Setting boundaries at work, she noted, empowered her to achieve both personal and professional goals. Bhatia also described her collaboration with President Emerita Susan Hockfield and MIT Amgen Professor of Biology Emerita Nancy Hopkins to spearhead the Future Founders Initiative, which aims to boost the representation of female faculty members pursuing biotechnology ventures.

A video of the full panel discussion is available on the MindHandHeart YouTube channel.

From neurons to learning and memory

Mark Harnett, an associate professor at MIT, still remembers the first time he saw electrical activity spiking from a living neuron.

He was a senior at Reed College and had spent weeks building a patch clamp rig — an experimental setup with an electrode that can be used to gently probe a neuron and measure its electrical activity.

“The first time I stuck one of these electrodes onto one of these cells and could see the electrical activity happening in real time on the oscilloscope, I thought, ‘Oh my God, this is what I’m going to do for the rest of my life. This is the coolest thing I’ve ever seen!’” Harnett says.

Harnett, who recently earned tenure in MIT’s Department of Brain and Cognitive Sciences, now studies the electrical properties of neurons and how these properties enable neural circuits to perform the computations that give rise to brain functions such as learning, memory, and sensory perception.

“My lab’s ultimate goal is to understand how the cortex works,” Harnett says. “What are the computations? How do the cells and the circuits and the synapses support those computations? What are the molecular and structural substrates of learning and memory? How do those things interact with circuit dynamics to produce flexible, context-dependent computation?”

“We go after that by looking at molecules, like synaptic receptors and ion channels, all the way up to animal behavior, and building theoretical models of neural circuits,” he adds.

Influence on the mind

Harnett’s interest in science was sparked in middle school, when he had a teacher who made the subject come to life. “It was middle school science, which was a lot of just mixing random things together. It wasn’t anything particularly advanced, but it was really fun,” he says. “Our teacher was just super encouraging and inspirational, and she really sparked what became my lifelong interest in science.”

When Harnett was 11, his father got a new job at a technology company in Minneapolis and the family moved from New Jersey to Minnesota, which proved to be a difficult adjustment. When choosing a college, Harnett decided to go far away, and ended up choosing Reed College, a school in Portland, Oregon, that encourages a great deal of independence in both academics and personal development.

“Reed was really free,” he recalls. “It let you grow into who you wanted to be, and try things, both for what you wanted to do academically or artistically, but also the kind of person you wanted to be.”

While in college, Harnett enjoyed both biology and English, especially Shakespeare. His English professors encouraged him to go into science, believing that the field needed scientists who could write and think creatively. He was interested in neuroscience, but Reed didn’t have a neuroscience department, so he took the closest subject he could find — a course in neuropharmacology.

“That class totally blew my mind. It was just fascinating to think about all these pharmacological agents, be they from plants or synthetic or whatever, influencing how your mind worked,” Harnett says. “That class really changed my whole way of thinking about what I wanted to do, and that’s when I decided I wanted to become a neuroscientist.”

For his senior research thesis, Harnett joined an electrophysiology lab at Oregon Health Sciences University (OHSU), working with Professor Larry Trussell, who studies synaptic transmission in the auditory system. That lab was where he first built and used a patch clamp rig to measure neuron activity.

After graduating from college, he spent a year as a research technician in a lab at the University of Minnesota, then returned to OHSU to work in a different research lab studying ion channels and synaptic physiology. Eventually he decided to go to graduate school, ending up at the University of Texas at Austin, where his future wife was studying public policy.

For his PhD research, he studied the neurons that release the neuromodulator dopamine and how they are affected by drugs of abuse and addiction. However, once he finished his degree, he decided to return to studying the biophysics of computation, which he pursued during a postdoc at the Howard Hughes Medical Institute Janelia Research Campus with Jeff Magee.

A broad approach

When he started his lab at MIT’s McGovern Institute in 2015, Harnett set out to expand his focus. While the physiology of ion channels and synapses forms the basis of much of his lab’s work, they connect these processes to neuronal computation, cortical circuit operation, and higher-level cognitive functions.

Electrical impulses that flow between neurons, allowing them to communicate with each other, are produced by ion channels that control the flow of ions such as potassium and sodium. In a 2021 study, Harnett and his students discovered that human neurons have a much smaller number of these channels than expected, compared to the neurons of other mammals.

This reduction in density may have evolved to help the brain operate more efficiently, allowing it to divert resources to other energy-intensive processes that are required to perform complex cognitive tasks. Harnett’s lab has also found that in human neurons, electrical signals weaken as they flow along dendrites, meaning that small sections of dendrites can form units that perform individual computations within a neuron.

Harnett’s lab also recently discovered, to their surprise, that the adult brain contains millions of “silent synapses” — immature connections that remain inactive until they’re recruited to help form new memories. The existence of these synapses offers a clue to how the adult brain is able to continually form new memories and learn new things without having to modify mature synapses.

Many of these projects fall into areas that Harnett didn’t necessarily envision himself working on when he began his faculty career, but they naturally grew out of the broad approach he wanted to take to studying the cortex. To that end, he sought to bring people to the lab who wanted to work at different levels — from molecular physiology up to behavior and computational modeling.

As a postdoc studying electrophysiology, Harnett spent most of his time working alone with his patch clamp device and two-photon microscope. While that type of work still goes on in his lab, the overall atmosphere is much more collaborative and convivial, and as a mentor, he likes to give his students broad leeway to come up with their own projects that fit in with the lab’s overall mission.

“I have this incredible, dynamic group that has been really great to work with. We take a broad approach to studying the cortex, and I think that’s what makes it fun,” he says. “Working with the folks that I’ve been able to recruit — grad students, techs, undergrads, and postdocs — is probably the thing that really matters the most to me.”

A new computational technique could make it easier to engineer useful proteins

To engineer proteins with useful functions, researchers usually begin with a natural protein that has a desirable function, such as emitting fluorescent light, and put it through many rounds of random mutation that eventually generate an optimized version of the protein.

This process has yielded optimized versions of many important proteins, including green fluorescent protein (GFP). However, for other proteins, it has proven difficult to generate an optimized version. MIT researchers have now developed a computational approach that makes it easier to predict mutations that will lead to better proteins, based on a relatively small amount of data.

Using this model, the researchers generated proteins with mutations that were predicted to lead to improved versions of GFP and a protein from adeno-associated virus (AAV), which is used to deliver DNA for gene therapy. They hope it could also be used to develop additional tools for neuroscience research and medical applications.

MIT Professor of Brain and Cognitive Sciences Ila Fiete in her lab at the McGovern Institute. Photo: Steph Stevens

“Protein design is a hard problem because the mapping from DNA sequence to protein structure and function is really complex. There might be a great protein 10 changes away in the sequence, but each intermediate change might correspond to a totally nonfunctional protein. It’s like trying to find your way to the river basin in a mountain range, when there are craggy peaks along the way that block your view. The current work tries to make the riverbed easier to find,” says Ila Fiete, a professor of brain and cognitive sciences at MIT, a member of MIT’s McGovern Institute for Brain Research, director of the K. Lisa Yang Integrative Computational Neuroscience Center, and one of the senior authors of the study.

Regina Barzilay, the School of Engineering Distinguished Professor for AI and Health at MIT, and Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering and Computer Science at MIT, are also senior authors of an open-access paper on the work, which will be presented at the International Conference on Learning Representations in May. MIT graduate students Andrew Kirjner and Jason Yim are the lead authors of the study. Other authors include Shahar Bracha, an MIT postdoc, and Raman Samusevich, a graduate student at Czech Technical University.

Optimizing proteins

Many naturally occurring proteins have functions that could make them useful for research or medical applications, but they need a little extra engineering to optimize them. In this study, the researchers were originally interested in developing proteins that could be used in living cells as voltage indicators. These proteins, produced by some bacteria and algae, emit fluorescent light when an electric potential is detected. If engineered for use in mammalian cells, such proteins could allow researchers to measure neuron activity without using electrodes.

While decades of research have gone into engineering these proteins to produce a stronger fluorescent signal, on a faster timescale, they haven’t become effective enough for widespread use. Bracha, who works in Edward Boyden’s lab at the McGovern Institute, reached out to Fiete’s lab to see if they could work together on a computational approach that might help speed up the process of optimizing the proteins.

“This work exemplifies the human serendipity that characterizes so much science discovery,” Fiete says.

“This work grew out of the Yang Tan Collective retreat, a scientific meeting of researchers from multiple centers at MIT with distinct missions unified by the shared support of K. Lisa Yang. We learned that some of our interests and tools in modeling how brains learn and optimize could be applied in the totally different domain of protein design, as being practiced in the Boyden lab.”

For any given protein that researchers might want to optimize, there is a nearly infinite number of possible sequences that could be generated by swapping in different amino acids at each point within the sequence. With so many possible variants, it is impossible to test all of them experimentally, so researchers have turned to computational modeling to try to predict which ones will work best.

In this study, the researchers set out to overcome those challenges, using data from GFP to develop and test a computational model that could predict better versions of the protein.

They began by training a type of model known as a convolutional neural network (CNN) on experimental data consisting of GFP sequences and their brightness — the feature that they wanted to optimize.

The model was able to create a “fitness landscape” — a three-dimensional map that depicts the fitness of a given protein and how much it differs from the original sequence — based on a relatively small amount of experimental data (from about 1,000 variants of GFP).

These landscapes contain peaks that represent fitter proteins and valleys that represent less fit proteins. Predicting the path that a protein needs to follow to reach the peaks of fitness can be difficult, because often a protein will need to undergo a mutation that makes it less fit before it reaches a nearby peak of higher fitness. To overcome this problem, the researchers used an existing computational technique to “smooth” the fitness landscape.

Once these small bumps in the landscape were smoothed, the researchers retrained the CNN model and found that it was able to reach greater fitness peaks more easily. The model was able to predict optimized GFP sequences that differed from the original sequence by as many as seven amino acids, and the best of these proteins were estimated to be about 2.5 times fitter than the original.

“Once we have this landscape that represents what the model thinks is nearby, we smooth it out and then we retrain the model on the smoother version of the landscape,” Kirjner says. “Now there is a smooth path from your starting point to the top, which the model is now able to reach by iteratively making small improvements. The same is often impossible for unsmoothed landscapes.”
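For a rough intuition, the effect can be reproduced in a self-contained one-dimensional toy (a made-up landscape, not real protein data; the actual method works in sequence space and retrains a CNN). Greedy uphill steps stall on the small bumps of the rugged landscape, but reach the main peak once those bumps are averaged away:

```python
import numpy as np

# Toy illustration of landscape smoothing (invented 1-D fitness function).

x = np.linspace(0.0, 1.0, 500)
base = np.exp(-(x - 0.8) ** 2 / 0.5)           # one broad fitness peak
rugged = base + 0.1 * np.sin(120 * np.pi * x)  # plus many small local bumps

def smooth(f, width=25):
    """Moving-average smoothing, a simple stand-in for the paper's method."""
    return np.convolve(f, np.ones(width) / width, mode="same")

def hill_climb(f, i):
    """Greedily step to the higher neighbor until no improvement remains."""
    while True:
        nbrs = [j for j in (i - 1, i + 1) if 0 <= j < len(f)]
        j = max(nbrs, key=lambda k: f[k])
        if f[j] <= f[i]:
            return i
        i = j

start = 50  # begin far from the peak at x = 0.8
print("rugged:   climb stalls at x =", round(x[hill_climb(rugged, start)], 3))
print("smoothed: climb reaches x =", round(x[hill_climb(smooth(rugged), start)], 3))
```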

Proof-of-concept

The researchers also showed that this approach worked well in identifying new sequences for the viral capsid of adeno-associated virus (AAV), a viral vector that is commonly used to deliver DNA. In that case, they optimized the capsid for its ability to package a DNA payload.

“We used GFP and AAV as a proof-of-concept to show that this is a method that works on data sets that are very well-characterized, and because of that, it should be applicable to other protein engineering problems,” Bracha says.

The researchers now plan to use this computational technique on data that Bracha has been generating on voltage indicator proteins.

“Dozens of labs have been working on that for two decades, and still there isn’t anything better,” she says. “The hope is that now, with the generation of a smaller data set, we could train a model in silico and make predictions that could be better than the past two decades of manual testing.”

The research was funded, in part, by the U.S. National Science Foundation, the Machine Learning for Pharmaceutical Discovery and Synthesis consortium, the Abdul Latif Jameel Clinic for Machine Learning in Health, the DTRA Discovery of Medical Countermeasures Against New and Emerging Threats program, the DARPA Accelerated Molecular Discovery program, the Sanofi Computational Antibody Design grant, the U.S. Office of Naval Research, the Howard Hughes Medical Institute, the National Institutes of Health, the K. Lisa Yang ICoN Center, and the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics at MIT.

For people who speak many languages, there’s something special about their native tongue

A new study of people who speak many languages has found that there is something special about how the brain processes their native language.

In the brains of these polyglots — people who speak five or more languages — the same language regions light up when they listen to any of the languages that they speak. In general, this network responds more strongly to languages in which the speaker is more proficient, with one notable exception: the speaker’s native language. When listening to one’s native language, language network activity drops off significantly.

The findings suggest there is something unique about the first language one acquires, which allows the brain to process it with minimal effort, the researchers say.

“Something makes it a little bit easier to process — maybe it’s that you’ve spent more time using that language — and you get a dip in activity for the native language compared to other languages that you speak proficiently,” says Evelina Fedorenko, an associate professor of neuroscience at MIT, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Saima Malik-Moraleda, a graduate student in the Speech and Hearing Bioscience and Technology Program at Harvard University, and Olessia Jouravlev, a former MIT postdoc who is now an associate professor at Carleton University, are the lead authors of the paper, which appears today in the journal Cerebral Cortex.

Many languages, one network

McGovern Investigator Ev Fedorenko in the Martinos Imaging Center at MIT. Photo: Caitlin Cunningham

The brain’s language processing network, located primarily in the left hemisphere, includes regions in the frontal and temporal lobes. In a 2021 study, Fedorenko’s lab found that the language network in the brains of polyglots was less active when they listened to their native language than the language network of people who speak only one language.

In the new study, the researchers wanted to expand on that finding and explore what happens in the brains of polyglots as they listen to languages in which they have varying levels of proficiency. Studying polyglots can help researchers learn more about the functions of the language network, and how languages learned later in life might be represented differently than a native language or languages.

“With polyglots, you can do all of the comparisons within one person. You have languages that vary along a continuum, and you can try to see how the brain modulates responses as a function of proficiency,” Fedorenko says.

For the study, the researchers recruited 34 polyglots, each of whom had at least some degree of proficiency in five or more languages but were not bilingual or multilingual from infancy. Sixteen of the participants spoke 10 or more languages, including one who spoke 54 languages with at least some proficiency.

Each participant was scanned with functional magnetic resonance imaging (fMRI) as they listened to passages read in eight different languages. These included their native language, a language they were highly proficient in, a language they were moderately proficient in, and a language in which they described themselves as having low proficiency.

They were also scanned while listening to four languages they didn’t speak at all. Two of these were languages from the same family (such as Romance languages) as a language they could speak, and two were languages completely unrelated to any languages they spoke.

The passages used for the study came from two different sources, which the researchers had previously developed for other language studies. One was a set of Bible stories recorded in many different languages, and the other consisted of passages from “Alice in Wonderland” translated into many languages.

Brain scans revealed that the language network lit up the most when participants listened to languages in which they were the most proficient. However, that did not hold true for the participants’ native languages, which activated the language network much less than non-native languages in which they had similar proficiency. This suggests that people are so proficient in their native language that the language network doesn’t need to work very hard to interpret it.

“As you increase proficiency, you can engage linguistic computations to a greater extent, so you get these progressively stronger responses. But then if you compare a really high-proficiency language and a native language, it may be that the native language is just a little bit easier, possibly because you’ve had more experience with it,” Fedorenko says.

Brain engagement

The researchers saw a similar phenomenon when polyglots listened to languages that they don’t speak: Their language network was more engaged when they listened to languages related to one they could understand than when they listened to completely unfamiliar languages.

“Here we’re getting a hint that the response in the language network scales up with how much you understand from the input,” Malik-Moraleda says. “We didn’t quantify the level of understanding here, but in the future we’re planning to evaluate how much people are truly understanding the passages that they’re listening to, and then see how that relates to the activation.”

The researchers also found that a brain network known as the multiple demand network, which turns on whenever the brain is performing a cognitively demanding task, also becomes activated when listening to languages other than one’s native language.

“What we’re seeing here is that the language regions are engaged when we process all these languages, and then there’s this other network that comes in for non-native languages to help you out because it’s a harder task,” Malik-Moraleda says.

In this study, most of the polyglots began studying their non-native languages as teenagers or adults, but in future work, the researchers hope to study people who learned multiple languages from a very young age. They also plan to study people who learned one language from infancy but moved to the United States at a very young age and began speaking English as their dominant language, while becoming less proficient in their native language, to help disentangle the effects of proficiency versus age of acquisition on brain responses.

The research was funded by the McGovern Institute for Brain Research, MIT’s Department of Brain and Cognitive Sciences, and the Simons Center for the Social Brain.

How the brain coordinates speaking and breathing

MIT researchers have discovered a brain circuit that drives vocalization and ensures that you talk only when you breathe out, and stop talking when you breathe in.

McGovern Investigator Fan Wang. Photo: Caitlin Cunningham

The newly discovered circuit controls two actions that are required for vocalization: narrowing of the larynx and exhaling air from the lungs. The researchers also found that this vocalization circuit is under the command of a brainstem region that regulates the breathing rhythm, which ensures that breathing remains dominant over speech.

“When you need to breathe in, you have to stop vocalization. We found that the neurons that control vocalization receive direct inhibitory input from the breathing rhythm generator,” says Fan Wang, an MIT professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Jaehong Park, a Duke University graduate student who is currently a visiting student at MIT, is the lead author of the study, which appears today in Science. Other authors of the paper include MIT technical associates Seonmi Choi and Andrew Harrahill, former MIT research scientist Jun Takatoh, and Duke University researchers Shengli Zhao and Bao-Xia Han.

Vocalization control

Located in the larynx, the vocal cords are two muscular bands that can open and close. When they are mostly closed, or adducted, air exhaled from the lungs generates sound as it passes through the cords.

The MIT team set out to study how the brain controls this vocalization process, using a mouse model. Mice communicate with each other using sounds known as ultrasonic vocalizations (USVs), which they produce using the unique whistling mechanism of exhaling air through a small hole between nearly closed vocal cords.

“We wanted to understand what are the neurons that control the vocal cord adduction, and then how do those neurons interact with the breathing circuit?” Wang says.

To figure that out, the researchers used a technique that allows them to map the synaptic connections between neurons. They knew that vocal cord adduction is controlled by laryngeal motor neurons, so they began by tracing backward to find the neurons that innervate those motor neurons.

This revealed that one major source of input is a group of premotor neurons found in the hindbrain region called the retroambiguus nucleus (RAm). Previous studies have shown that this area is involved in vocalization, but it wasn’t known exactly which part of the RAm was required or how it enabled sound production.

Laryngeal premotor neurons (green) and Fos (magenta) labeling in the RAm. Image: Fan Wang

The researchers found that these synaptically traced RAm neurons were strongly activated during USVs. This observation prompted the team to use an activity-dependent method to target these vocalization-specific RAm neurons, which they termed RAmVOC. They used chemogenetics and optogenetics to explore what would happen if they silenced or stimulated the RAmVOC neurons. When the researchers blocked the RAmVOC neurons, the mice were no longer able to produce USVs or any other kind of vocalization. Their vocal cords did not close, and their abdominal muscles did not contract, as they normally do during exhalation for vocalization.

Conversely, when the RAmVOC neurons were activated, the vocal cords closed, the mice exhaled, and USVs were produced. However, if the stimulation lasted two seconds or longer, these USVs would be interrupted by inhalations, suggesting that the process is under control of the same part of the brain that regulates breathing.

“Breathing is a survival need,” Wang says. “Even though these neurons are sufficient to elicit vocalization, they are under the control of breathing, which can override our optogenetic stimulation.”

Rhythm generation

Additional synaptic mapping revealed that neurons in a part of the brainstem called the pre-Bötzinger complex, which acts as a rhythm generator for inhalation, provide direct inhibitory input to the RAmVOC neurons.

“The pre-Bötzinger complex generates inhalation rhythms automatically and continuously, and the inhibitory neurons in that region project to these vocalization premotor neurons and essentially can shut them down,” Wang says.

This ensures that breathing remains dominant over speech production, and that we have to pause to breathe while speaking.
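A schematic simulation makes the gating concrete. In the Python sketch below (a toy with an invented breathing frequency, phase convention, and time step, not the paper's model), a constant command to vocalize, like the sustained optogenetic stimulation in the experiments, is vetoed whenever the rhythm generator is in its inhalation phase, so vocal output is broken into exhalation-aligned bouts:

```python
import numpy as np

# Toy model of breathing-gated vocalization (invented parameters).

dt = 0.001                                  # time step in seconds
t = np.arange(0.0, 4.0, dt)                 # four seconds of simulated time
breath = np.sin(2 * np.pi * 1.5 * t)        # ~1.5 Hz breathing rhythm
inhaling = breath >= 0                      # positive phase = inhalation

vocal_command = np.ones_like(t)             # constant drive to vocalize,
                                            # like sustained stimulation
vocal_output = vocal_command * (~inhaling)  # inhibited while inhaling

onsets = np.flatnonzero(np.diff(vocal_output.astype(int)) == 1)
print(f"constant vocal drive is broken into {len(onsets)} bouts by inhalation")
```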

The researchers believe that although human speech production is more complex than mouse vocalization, the circuit they identified in mice plays a conserved role in speech production and breathing in humans.

“Even though the exact mechanism and complexity of vocalization in mice and humans is really different, the fundamental vocalization process, called phonation, which requires vocal cord closure and the exhalation of air, is shared in both the human and the mouse,” Park says.

The researchers now hope to study how other functions such as coughing and swallowing food may be affected by the brain circuits that control breathing and vocalization.

The research was funded by the National Institutes of Health.

School of Science announces 2024 Infinite Expansion Awards

The MIT School of Science has announced nine postdocs and research scientists as recipients of the 2024 Infinite Expansion Award, which highlights extraordinary members of the MIT community.

The following are the 2024 School of Science Infinite Expansion winners:

  • Sarthak Chandra, a research scientist in the Department of Brain and Cognitive Sciences, was nominated by Professor Ila Fiete, who wrote, “He has expanded the research abilities of my group by being a versatile and brilliant scientist, by drawing connections with a different area that he was an expert in from his PhD training, and by being a highly involved and caring mentor.”
  • Michal Fux, a research scientist in the Department of Brain and Cognitive Sciences, was nominated by Professor Pawan Sinha, who wrote, “She is one of those figurative beams of light that not only brilliantly illuminate scientific questions, but also enliven a research team.”
  • Andrew Savinov, a postdoc in the Department of Biology, was nominated by Associate Professor Gene-Wei Li, who wrote, “Andrew is an extraordinarily creative and accomplished biophysicist, as well as an outstanding contributor to the broader MIT community.”
  • Ho Fung Cheng, a postdoc in the Department of Chemistry, was nominated by Professor Jeremiah Johnson, who wrote, “His impact on research and our departmental community during his time at MIT has been outstanding, and I believe that he will be a world-class teacher and research group leader in his independent career next year.”
  • Gabi Wenzel, a postdoc in the Department of Chemistry, was nominated by Assistant Professor Brett McGuire, who wrote, “In the one year since Gabi joined our team, she has become an indispensable leader, demonstrating exceptional skill, innovation, and dedication in our challenging research environment.”
  • Yu-An Zhang, a postdoc in the Department of Chemistry, was nominated by Professor Alison Wendlandt, who wrote, “He is a creative, deep-thinking scientist and a superb organic chemist. But above all, he is an off-scale mentor and a cherished coworker.”
  • Wouter Van de Pontseele, a senior postdoc in the Laboratory for Nuclear Science, was nominated by Professor Joseph Formaggio, who wrote, “He is a talented scientist with an intense creativity, scholarship, and student mentorship record. In the time he has been with my group, he has led multiple facets of my experimental program and has been a wonderful citizen of the MIT community.”
  • Alexander Shvonski, a lecturer in the Department of Physics, was nominated by Assistant Professor Andrew Vanderburg, who wrote, “… I have been blown away by Alex’s knowledge of education research and best practices, his skills as a teacher and course content designer, and I have been extremely grateful for his assistance.”
  • David Stoppel, a research scientist in The Picower Institute for Learning and Memory, was nominated by Professor Mark Bear and his research group, who wrote, “As impressive as his research achievements might be, David’s most genuine qualification for this award is his incredible commitment to mentorship and the dissemination of knowledge.”

Winners are honored with a monetary award and will be celebrated with family, friends, and nominators at a later date, along with recipients of the Infinite Mile Award.

Exposure to different kinds of music influences how the brain interprets rhythm

When listening to music, the human brain appears to be biased toward hearing and producing rhythms composed of simple integer ratios — for example, a series of four beats separated by equal time intervals (forming a 1:1:1 ratio).

However, the favored ratios can vary greatly between different societies, according to a large-scale study led by researchers at MIT and the Max Planck Institute for Empirical Aesthetics and carried out in 15 countries. The study included 39 groups of participants, many of whom came from societies whose traditional music contains distinctive patterns of rhythm not found in Western music.

“Our study provides the clearest evidence yet for some degree of universality in music perception and cognition, in the sense that every single group of participants that was tested exhibits biases for integer ratios. It also provides a glimpse of the variation that can occur across cultures, which can be quite substantial,” says Nori Jacoby, the study’s lead author and a former MIT postdoc, who is now a research group leader at the Max Planck Institute for Empirical Aesthetics in Frankfurt, Germany.

The brain’s bias toward simple integer ratios may have evolved as a natural error-correction system that makes it easier to maintain a consistent body of music, which human societies often use to transmit information.

“When people produce music, they often make small mistakes. Our results are consistent with the idea that our mental representation is somewhat robust to those mistakes, but it is robust in a way that pushes us toward our preexisting ideas of the structures that should be found in music,” says Josh McDermott, an associate professor of brain and cognitive sciences at MIT and a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines.

McDermott is the senior author of the study, which appears today in Nature Human Behaviour. The research team also included scientists from more than two dozen institutions around the world.

A global approach

The new study grew out of a smaller analysis that Jacoby and McDermott published in 2017. In that paper, the researchers compared rhythm perception in groups of listeners from the United States and the Tsimane’, an Indigenous society located in the Bolivian Amazon rainforest.

Nori Jacoby, a former MIT postdoc now at the Max Planck Institute for Empirical Aesthetics, runs an experiment with a member of the Tsimane’ tribe, who have had little exposure to Western music. Photo: Josh McDermott

To measure how people perceive rhythm, the researchers devised a task in which they play a randomly generated series of four beats and then ask the listener to tap back what they heard. The rhythm produced by the listener is then played back to the listener, and they tap it back again. Over several iterations, the tapped sequences become dominated by the listener’s internal biases, also known as priors.

“The initial stimulus pattern is random, but at each iteration the pattern is pushed by the listener’s biases, such that it tends to converge to a particular point in the space of possible rhythms,” McDermott says. “That can give you a picture of what we call the prior, which is the set of internal implicit expectations for rhythms that people have in their heads.”
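That convergence is easy to mimic in a small simulation (a sketch under invented assumptions: a prior with three integer-ratio modes, a fixed pull strength, and Gaussian motor noise; the study's actual modeling is more involved). Starting from a random rhythm, repeated reproduction drifts toward the nearest simple-ratio mode:

```python
import numpy as np

# Toy iterated-reproduction model (invented prior and noise parameters).

rng = np.random.default_rng(0)
prior_modes = np.array([[1, 1, 1], [1, 1, 2], [2, 3, 3]], dtype=float)
prior_modes /= prior_modes.sum(axis=1, keepdims=True)  # normalized intervals

def reproduce(rhythm, weight=0.4, noise=0.02):
    """One tap-back: pull toward the nearest prior mode, add motor noise."""
    d = ((prior_modes - rhythm) ** 2).sum(axis=1)
    nearest = prior_modes[np.argmin(d)]
    out = np.abs((1 - weight) * rhythm + weight * nearest
                 + rng.normal(0.0, noise, 3))
    return out / out.sum()  # keep a normalized three-interval rhythm

rhythm = rng.dirichlet([1.0, 1.0, 1.0])  # random seed: 4 beats, 3 intervals
for _ in range(8):
    rhythm = reproduce(rhythm)

d = ((prior_modes - rhythm) ** 2).sum(axis=1)
print("final intervals:       ", np.round(rhythm, 3))
print("nearest integer ratios:", np.round(prior_modes[np.argmin(d)], 3))
```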

When the researchers first did this experiment, with American college students as the test subjects, they found that people tended to produce time intervals that are related by simple integer ratios. Furthermore, most of the rhythms they produced, such as those with ratios of 1:1:2 and 2:3:3, are commonly found in Western music.

The researchers then went to Bolivia and asked members of the Tsimane’ society to perform the same task. They found that Tsimane’ also produced rhythms with simple integer ratios, but their preferred ratios were different and appeared to be consistent with those that have been documented in the few existing records of Tsimane’ music.

“At that point, it provided some evidence that there might be very widespread tendencies to favor these small integer ratios, and that there might be some degree of cross-cultural variation. But because we had just looked at this one other culture, it really wasn’t clear how this was going to look at a broader scale,” Jacoby says.

To try to get that broader picture, the MIT team began seeking collaborators around the world who could help them gather data on a more diverse set of populations. They ended up studying listeners from 39 groups, representing 15 countries on five continents — North America, South America, Europe, Africa, and Asia.

“This is really the first study of its kind in the sense that we did the same experiment in all these different places, with people who are on the ground in those locations,” McDermott says. “That hasn’t really been done before at anything close to this scale, and it gave us an opportunity to see the degree of variation that might exist around the world.”

Example testing sites. a, Yaranda, Bolivia. b, Montevideo, Uruguay. c, Sagele, Mali. d, Spitzkoppe, Namibia. e, Pleven, Bulgaria. f, Bamako, Mali. g, D’Kar, Botswana. h, Stockholm, Sweden. i, Guizhou, China. j, Mumbai, India. Verbal informed consent was obtained from the individuals in each photo.

Cultural comparisons

Just as they had in their original 2017 study, the researchers found that in every group they tested, people tended to be biased toward simple integer ratios of rhythm. However, not every group showed the same biases. People from North America and Western Europe, who have likely been exposed to the same kinds of music, were more likely to generate rhythms with the same ratios. Many other groups, for example those in Turkey, Mali, Bulgaria, and Botswana, showed a bias for other rhythms.

“There are certain cultures where there are particular rhythms that are prominent in their music, and those end up showing up in the mental representation of rhythm,” Jacoby says.

The researchers believe their findings reveal a mechanism that the brain uses to aid in the perception and production of music.

“When you hear somebody playing something and they have errors in their performance, you’re going to mentally correct for those by mapping them onto where you implicitly think they ought to be,” McDermott says. “If you didn’t have something like this, and you just faithfully represented what you heard, these errors might propagate and make it much harder to maintain a musical system.”

Among the groups that they studied, the researchers took care to include not only college students, who are easy to study in large numbers, but also people living in traditional societies, who are more difficult to reach. Participants from those more traditional groups showed significant differences from college students living in the same countries, and from people who live in those countries but performed the test online.

“What’s very clear from the paper is that if you just look at the results from undergraduate students around the world, you vastly underestimate the diversity that you see otherwise,” Jacoby says. “And the same was true of experiments where we tested groups of people online in Brazil and India, because you’re dealing with people who have internet access and presumably have more exposure to Western music.”

The researchers now hope to run additional studies of different aspects of music perception, taking this global approach.

“If you’re just testing college students around the world or people online, things look a lot more homogenous. I think it’s very important for the field to realize that you actually need to go out into communities and run experiments there, as opposed to taking the low-hanging fruit of running studies with people in a university or on the internet,” McDermott says.

The research was funded by the James S. McDonnell Foundation, the Canadian National Science and Engineering Research Council, the South African National Research Foundation, the United States National Science Foundation, the Chilean National Research and Development Agency, the Austrian Academy of Sciences, the Japan Society for the Promotion of Science, the Keio Global Research Institute, the United Kingdom Arts and Humanities Research Council, the Swedish Research Council, and the John Fell Fund.