Brain pathways that control dopamine release may influence motor control

Within the human brain, movement is coordinated by a region called the striatum, which sends instructions to motor neurons. Those instructions are conveyed by two pathways: one that initiates movement (“go”) and one that suppresses it (“no-go”).

In a new study, MIT researchers have discovered an additional two pathways that arise in the striatum and appear to modulate the effects of the go and no-go pathways. These newly discovered pathways connect to dopamine-producing neurons in the brain — one stimulates dopamine release and the other inhibits it.

By controlling the amount of dopamine in the brain via clusters of neurons known as striosomes, these pathways appear to modify the instructions given by the go and no-go pathways. They may be especially involved in influencing decisions that have a strong emotional component, the researchers say.

“Among all the regions of the striatum, the striosomes alone turned out to be able to project to the dopamine-containing neurons, which we think has something to do with motivation, mood, and controlling movement,” says Ann Graybiel, an MIT Institute Professor, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the new study.

Iakovos Lazaridis, a research scientist at the McGovern Institute, is the lead author of the paper, which appears today in the journal Current Biology.

New pathways

Graybiel has spent much of her career studying the striatum, a structure located deep within the brain that is involved in learning and decision-making, as well as control of movement.

Within the striatum, neurons are arranged in a labyrinth-like structure that includes striosomes, which Graybiel discovered in the 1970s. The classical go and no-go pathways arise from neurons that surround the striosomes, which are known collectively as the matrix. The matrix cells that give rise to these pathways receive input from sensory processing regions such as the visual cortex and auditory cortex. Then, they send go or no-go commands to neurons in the motor cortex.

However, the function of the striosomes, which are not part of those pathways, remained unknown. For many years, researchers in Graybiel’s lab have been trying to solve that mystery.

Their previous work revealed that striosomes receive much of their input from parts of the brain that process emotion. Within striosomes, there are two major types of neurons, classified as D1 and D2. In a 2015 study, Graybiel found that one of these cell types, D1, sends input to the substantia nigra, which is the brain’s major dopamine-producing center.

It took much longer to trace the output of the other set, the D2 neurons. In the new Current Biology study, the researchers discovered that those neurons also eventually project to the substantia nigra, but first they connect to a set of neurons in the globus pallidus, which inhibits dopamine output. This pathway, an indirect connection to the substantia nigra, reduces the brain’s dopamine output and inhibits movement.

The researchers also confirmed their earlier finding that the pathway arising from D1 striosomes connects directly to the substantia nigra, stimulating dopamine release and initiating movement.

“In the striosomes, we’ve found what is probably a mimic of the classical go/no-go pathways,” Graybiel says. “They’re like classic motor go/no-go pathways, but they don’t go to the motor output neurons of the basal ganglia. Instead, they go to the dopamine cells, which are so important to movement and motivation.”

Emotional decisions

The findings suggest that the classical model of how the striatum controls movement needs to be modified to include the role of these newly identified pathways. The researchers now hope to test their hypothesis that input related to motivation and emotion, which enters the striosomes from the cortex and the limbic system, influences dopamine levels in a way that can encourage or discourage action.

That dopamine release may be especially relevant for actions that induce anxiety or stress. In their 2015 study, Graybiel’s lab found that striosomes play a key role in making decisions that provoke high levels of anxiety; in particular, those that are high risk but may also have a big payoff.

“Ann Graybiel and colleagues have earlier found that the striosome is concerned with inhibiting dopamine neurons. Now they show unexpectedly that another type of striosomal neuron exerts the opposite effect and can signal reward. The striosomes can thus both up- or down-regulate dopamine activity, a very important discovery. Clearly, the regulation of dopamine activity is critical in our everyday life with regard to both movements and mood, to which the striosomes contribute,” says Sten Grillner, a professor of neuroscience at the Karolinska Institute in Sweden, who was not involved in the research.

Another possibility the researchers plan to explore is whether striosomes and matrix cells are arranged in modules that affect motor control of specific parts of the body.

“The next step is trying to isolate some of these modules, and by simultaneously working with cells that belong to the same module, whether they are in the matrix or striosomes, try to pinpoint how the striosomes modulate the underlying function of each of these modules,” Lazaridis says.

They also hope to explore how the striosomal circuits, which project to the same region of the brain that is ravaged by Parkinson’s disease, may influence that disorder.

The research was funded by the National Institutes of Health, the Saks-Kavanaugh Foundation, the William N. and Bernice E. Bumpus Foundation, Jim and Joan Schattinger, the Hock E. Tan and K. Lisa Yang Center for Autism Research, Robert Buxton, the Simons Foundation, the CHDI Foundation, and an Ellen Schapiro and Gerald Axelbaum Investigator BBRF Young Investigator Grant.

Seven with MIT ties elected to National Academy of Medicine for 2024

The National Academy of Medicine recently announced the election of more than 90 members during its annual meeting, including MIT faculty members Matthew Vander Heiden and Fan Wang, along with five MIT alumni.

Election to the National Academy of Medicine (NAM) is considered one of the highest honors in the fields of health and medicine and recognizes individuals who have demonstrated outstanding professional achievement and commitment to service.

Matthew Vander Heiden is the director of the Koch Institute for Integrative Cancer Research at MIT, a Lester Wolfe Professor of Molecular Biology, and a member of the Broad Institute of MIT and Harvard. His research explores how cancer cells reprogram their metabolism to fuel tumor growth and has provided key insights into metabolic pathways that support cancer progression, with implications for developing new therapeutic strategies. The National Academy of Medicine recognized Vander Heiden for his contributions to “the development of approved therapies for cancer and anemia” and his role as a “thought leader in understanding metabolic phenotypes and their relations to disease pathogenesis.”

Vander Heiden earned his MD and PhD from the University of Chicago and completed his clinical training in internal medicine and medical oncology at the Brigham and Women’s Hospital and the Dana-Farber Cancer Institute. After postdoctoral research at Harvard Medical School, Vander Heiden joined the faculty of the MIT Department of Biology and the Koch Institute in 2010. He is also a practicing oncologist and instructor in medicine at Dana-Farber Cancer Institute and Harvard Medical School.

Fan Wang is a professor of brain and cognitive sciences, an investigator at the McGovern Institute, and director of the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics at MIT. Wang’s research focuses on the neural circuits governing the bidirectional interactions between the brain and body. She is specifically interested in the circuits that control the sensory and emotional aspects of pain and addiction, as well as the sensory and motor circuits that work together to execute behaviors such as eating, drinking, and moving. The National Academy of Medicine has recognized her body of work for “providing the foundational knowledge to develop new therapies to treat chronic pain and movement disorders.”

Before coming to MIT in 2021, Wang obtained her PhD from Columbia University and received her postdoctoral training at the University of California at San Francisco and Stanford University. She became a faculty member at Duke University in 2003 and was later appointed the Morris N. Broad Professor of Neurobiology. Wang is also a member of the American Academy of Arts and Sciences, and she continues to make important contributions to understanding the neural mechanisms underlying general anesthesia, pain perception, and movement control.

MIT alumni who were elected to the NAM for 2024 include:

  • Leemore Dafny PhD ’01 (Economics);
  • David Huang ’85, MS ’89 (Electrical Engineering and Computer Science), PhD ’93 (Medical Engineering and Medical Physics);
  • Nola M. Hylton ’79 (Chemical Engineering);
  • Mark R. Prausnitz PhD ’94 (Chemical Engineering); and
  • Konstantina M. Stankovic ’92 (Biology and Physics), PhD ’98 (Speech and Hearing Bioscience and Technology).

Established originally as the Institute of Medicine in 1970 by the National Academy of Sciences, the National Academy of Medicine addresses critical issues in health, science, medicine, and related policy and inspires positive actions across sectors.

“This class of new members represents the most exceptional researchers and leaders in health and medicine, who have made significant breakthroughs, led the response to major public health challenges, and advanced health equity,” said National Academy of Medicine President Victor J. Dzau. “Their expertise will be necessary to supporting NAM’s work to address the pressing health and scientific challenges we face today.”

A new method makes high-resolution imaging more accessible

A classical way to image nanoscale structures in cells is with high-powered, expensive super-resolution microscopes. As an alternative, MIT researchers have developed a way to expand tissue before imaging it — a technique that allows them to achieve nanoscale resolution with a conventional light microscope.

In the newest version of this technique, the researchers have made it possible to expand tissue 20-fold in a single step. This simple, inexpensive method could pave the way for nearly any biology lab to perform nanoscale imaging.

“This democratizes imaging,” says Laura Kiessling, the Novartis Professor of Chemistry at MIT and a member of the Broad Institute of MIT and Harvard and MIT’s Koch Institute for Integrative Cancer Research. “Without this method, if you want to see things with a high resolution, you have to use very expensive microscopes. What this new technique allows you to do is see things that you couldn’t normally see with standard microscopes. It drives down the cost of imaging because you can see nanoscale things without the need for a specialized facility.”

At the resolution achieved by this technique, which is around 20 nanometers, scientists can see organelles inside cells, as well as clusters of proteins.

“Twenty-fold expansion gets you into the realm that biological molecules operate in. The building blocks of life are nanoscale things: biomolecules, genes, and gene products,” says Edward Boyden, the Y. Eva Tan Professor in Neurotechnology at MIT; a professor of biological engineering, media arts and sciences, and brain and cognitive sciences; a Howard Hughes Medical Institute investigator; and a member of MIT’s McGovern Institute for Brain Research and Koch Institute for Integrative Cancer Research.

Boyden and Kiessling are the senior authors of the new study, which appears today in Nature Methods. MIT graduate student Shiwei Wang and Tay Won Shin PhD ’23 are the lead authors of the paper.

A single expansion

Boyden’s lab invented expansion microscopy in 2015. The technique requires embedding tissue into an absorbent polymer and breaking apart the proteins that normally hold tissue together. When water is added, the gel swells and pulls biomolecules apart from each other.

The original version of this technique, which expanded tissue about fourfold, allowed researchers to obtain images with a resolution of around 70 nanometers. In 2017, Boyden’s lab modified the process to include a second expansion step, achieving an overall 20-fold expansion. This enables even higher resolution, but the process is more complicated.

“We’ve developed several 20-fold expansion technologies in the past, but they require multiple expansion steps,” Boyden says. “If you could do that amount of expansion in a single step, that could simplify things quite a bit.”

With 20-fold expansion, researchers can get down to a resolution of about 20 nanometers using a conventional light microscope. This allows them to see cell structures like microtubules and mitochondria, as well as clusters of proteins.
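The arithmetic behind those numbers is straightforward: physically expanding a sample divides the smallest feature size a diffraction-limited microscope can resolve. A minimal sketch, assuming an illustrative diffraction limit of about 300 nanometers (the exact value depends on the wavelength of light and the optics, and practical resolution is somewhat coarser than this ideal):

```python
# Back-of-the-envelope resolution of expansion microscopy.
# DIFFRACTION_LIMIT_NM is an assumed, illustrative value; real limits
# depend on the wavelength of light and the microscope objective.
DIFFRACTION_LIMIT_NM = 300

def effective_resolution(expansion_factor, limit_nm=DIFFRACTION_LIMIT_NM):
    """Expanding the tissue N-fold divides the resolvable feature size by N."""
    return limit_nm / expansion_factor

print(effective_resolution(4))   # ~75 nm, in line with the ~70 nm reported for 4x
print(effective_resolution(20))  # ~15 nm ideal; ~20 nm is achieved in practice
```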

In the new study, the researchers set out to perform 20-fold expansion with only a single step. This meant that they had to find a gel that was both extremely absorbent and mechanically stable, so that it wouldn’t fall apart when expanded 20-fold.

To achieve that, they used a gel assembled from N,N-dimethylacrylamide (DMAA) and sodium acrylate. Unlike previous expansion gels that rely on adding another molecule to form crosslinks between the polymer strands, this gel forms crosslinks spontaneously and exhibits strong mechanical properties. Such gel components previously had been used in expansion microscopy protocols, but the resulting gels could expand only about tenfold. The MIT team optimized the gel and the polymerization process to make the gel more robust, and to allow for 20-fold expansion.

To further stabilize the gel and enhance its reproducibility, the researchers removed oxygen from the polymer solution prior to gelation, which prevents side reactions that interfere with crosslinking. This step requires running nitrogen gas through the polymer solution, which replaces most of the oxygen in the system.

Once the gel is formed, select bonds in the proteins that hold the tissue together are broken and water is added to make the gel expand. After the expansion is performed, target proteins in tissue can be labeled and imaged.

“This approach may require more sample preparation compared to other super-resolution techniques, but it’s much simpler when it comes to the actual imaging process, especially for 3D imaging,” Shin says. “We document the step-by-step protocol in the manuscript so that readers can go through it easily.”

Imaging tiny structures

Using this technique, the researchers were able to image many tiny structures within brain cells, including structures called synaptic nanocolumns. These are clusters of proteins that are arranged in a specific way at neuronal synapses, allowing neurons to communicate with each other via secretion of neurotransmitters such as dopamine.

In studies of cancer cells, the researchers also imaged microtubules — hollow tubes that help give cells their structure and play important roles in cell division. They were also able to see mitochondria (organelles that generate energy) and even the organization of individual nuclear pore complexes (clusters of proteins that control access to the cell nucleus).

Wang is now using this technique to image carbohydrates known as glycans, which are found on cell surfaces and help control cells’ interactions with their environment. This method could also be used to image tumor cells, allowing scientists to glimpse how proteins are organized within those cells, much more easily than has previously been possible.

The researchers envision that any biology lab should be able to use this technique at low cost, since it relies on standard, off-the-shelf chemicals and common equipment such as confocal microscopes and glove bags, which most labs already have or can easily access.

“Our hope is that with this new technology, any conventional biology lab can use this protocol with their existing microscopes, allowing them to approach resolution that can only be achieved with very specialized and costly state-of-the-art microscopes,” Wang says.

The research was funded, in part, by the U.S. National Institutes of Health, an MIT Presidential Graduate Fellowship, U.S. National Science Foundation Graduate Research Fellowship grants, Open Philanthropy, Good Ventures, the Howard Hughes Medical Institute, Lisa Yang, Ashar Aziz, and the European Research Council.

Finding some stability in adaptable brains

One of the brain’s most celebrated qualities is its adaptability. Changes to neural circuits, whose connections are continually adjusted as we experience and interact with the world, are key to how we learn. But to keep knowledge and memories intact, some parts of the circuitry must be resistant to this constant change.

“Brains have figured out how to navigate this landscape of balancing between stability and flexibility, so that you can have new learning and you can have lifelong memory,” says neuroscientist Mark Harnett, an investigator at MIT’s McGovern Institute.

In the August 27, 2024, issue of the journal Cell Reports, Harnett and his team show how individual neurons can contribute to both parts of this vital duality. By studying the synapses through which pyramidal neurons in the brain’s sensory cortex communicate, they have learned how the cells preserve their understanding of some of the world’s most fundamental features, while also maintaining the flexibility they need to adapt to a changing world.

McGovern Institute Investigator Mark Harnett. Photo: Adam Glanzman

Visual connections

Pyramidal neurons receive input from other neurons via thousands of connection points. Early in life, these synapses are extremely malleable; their strength can shift as a young animal takes in visual information and learns to interpret it. Most remain adaptable into adulthood, but Harnett’s team discovered that some of the cells’ synapses lose their flexibility when the animals are less than a month old. Having both stable and flexible synapses means these neurons can combine input from different sources to use visual information in flexible ways.

A confocal image of a mouse brain showing dLGN neurons in pink. Image: Courtney Yaeger, Mark Harnett.

Postdoctoral fellow Courtney Yaeger took a close look at these unusually stable synapses, which cluster together along a narrow region of the elaborately branched pyramidal cells. She was interested in the connections through which the cells receive primary visual information, so she traced their connections with neurons in a vision-processing center of the brain’s thalamus called the dorsal lateral geniculate nucleus (dLGN).

The long extensions through which a neuron receives signals from other cells are called dendrites, and they branch off from the main body of the cell into a tree-like structure. Spiny protrusions along the dendrites form the synapses that connect pyramidal neurons to other cells. Yaeger’s experiments showed that connections from the dLGN all led to a defined region of the pyramidal cells—a tight band within what she describes as the trunk of the dendritic tree.

Yaeger found several ways in which synapses in this region—formally known as the apical oblique dendrite domain—differ from other synapses on the same cells. “They’re not actually that far away from each other, but they have completely different properties,” she says.

Stable synapses

In one set of experiments, Yaeger activated synapses on the pyramidal neurons and measured the effect on the cells’ electrical potential. Changes to a neuron’s electrical potential generate the impulses the cells use to communicate with one another. It is common for a synapse’s electrical effects to amplify when synapses nearby are also activated. But when signals were delivered to the apical oblique dendrite domain, each one had the same effect, no matter how many synapses were stimulated. Synapses there don’t interact with one another at all, Harnett says. “They just do what they do. No matter what their neighbors are doing, they all just do kind of the same thing.”

Representative oblique (top) and basal (bottom) dendrites from the same Layer 5 pyramidal neuron imaged across 7 days. Transient spines are labeled with yellow arrowheads the day before disappearance. Image: Courtney Yaeger, Mark Harnett.

The team was also able to visualize the molecular contents of individual synapses. This revealed a surprising lack of a certain kind of neurotransmitter receptor, called NMDA receptors, in the apical oblique dendrites. That was notable because of NMDA receptors’ role in mediating changes in the brain. “Generally when we think about any kind of learning and memory and plasticity, it’s NMDA receptors that do it,” Harnett says. “That is by far the most common substrate of learning and memory in all brains.”

When Yaeger stimulated the apical oblique synapses with electricity, generating patterns of activity that would strengthen most synapses, the team discovered a consequence of the limited presence of NMDA receptors. The synapses’ strength did not change. “There’s no activity-dependent plasticity going on there, as far as we have tested,” Yaeger says.

That makes sense, the researchers say, because the cells’ connections from the thalamus relay primary visual information detected by the eyes. It is through these connections that the brain learns to recognize basic visual features like shapes and lines.

“These synapses are basically a robust, high fidelity readout of this visual information,” Harnett explains. “That’s what they’re conveying, and it’s not context sensitive. So it doesn’t matter how many other synapses are active, they just do exactly what they’re going to do, and you can’t modify them up and down based on activity. So they’re very, very stable.”

“You actually don’t want those to be plastic,” adds Yaeger. “Can you imagine going to sleep and then forgetting what a vertical line looks like? That would be disastrous.”

By conducting the same experiments in mice of different ages, the researchers determined that the synapses that connect pyramidal neurons to the thalamus become stable a few weeks after young mice first open their eyes. By that point, Harnett says, they have learned everything they need to learn. On the other hand, if mice spend the first weeks of their lives in the dark, the synapses never stabilize—further evidence that the transition depends on visual experience.

The team’s findings not only help explain how the brain balances flexibility and stability, they could help researchers teach artificial intelligence how to do the same thing. Harnett says artificial neural networks are notoriously bad at this: When an artificial neural network that does something well is trained to do something new, it almost always experiences “catastrophic forgetting” and can no longer perform its original task. Harnett’s team is exploring how they can use what they’ve learned about real brains to overcome this problem in artificial networks.
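Catastrophic forgetting can be reproduced even in a one-parameter model. The sketch below is hypothetical and not from the team’s work: it trains a single-weight linear model on one task and then on a second, and gradient descent overwrites the shared weight, so error on the first task balloons again.

```python
# Toy demonstration of catastrophic forgetting: a single weight trained
# by gradient descent on task A, then on task B. Retraining on B
# overwrites the only parameter, destroying performance on A.
def train(w, data, lr=0.1, epochs=100):
    for _ in range(epochs):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x  # gradient of (w*x - y)^2
    return w

def loss(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

task_a = [(x, 2 * x) for x in (1, 2, 3)]   # consistent with w = 2
task_b = [(x, -x) for x in (1, 2, 3)]      # consistent with w = -1

w = train(0.0, task_a)
print(loss(w, task_a))  # essentially zero: task A is learned
w = train(w, task_b)
print(loss(w, task_a))  # large again: task A has been "forgotten"
```

A network with stable, non-plastic synapses for core features, as in the pyramidal neurons above, would be protected against exactly this failure mode.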

Harnessing the power of placebo for pain relief

Placebos are inert treatments, generally not expected to impact biological pathways or improve a person’s physical health. But time and again, some patients report that they feel better after taking a placebo. Increasingly, doctors and scientists are recognizing that rather than dismissing placebos as mere trickery, they may be able to help patients by harnessing their power.

To maximize the impact of the placebo effect and design reliable therapeutic strategies, researchers need a better understanding of how it works. Now, with a new animal model developed by scientists at the McGovern Institute, they will be able to investigate the neural circuits that underlie placebos’ ability to elicit pain relief.

“The brain and body interaction has a lot of potential, in a way that we don’t fully understand,” says McGovern investigator Fan Wang. “I really think there needs to be more of a push to understand placebo effect, in pain and probably in many other conditions. Now we have a strong model to probe the circuit mechanism.”

Context-dependent placebo effect

McGovern Investigator Fan Wang. Photo: Caitlin Cunningham

In the September 5, 2024, issue of the journal Current Biology, Wang and her team report that they have elicited strong placebo pain relief in mice by activating pain-suppressing neurons in the brain while the mice are in a specific environment—thereby teaching the animals that they feel better when they are in that context. Following their training, placing the mice in that environment alone is enough to suppress pain. The team’s experiments, which were funded by the National Institutes of Health, the K. Lisa Yang Brain-Body Center, and the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics within MIT’s Yang Tan Collective, show that this context-dependent placebo effect relieves both acute and chronic pain.

Context is critical for the placebo effect. While a pill can help a patient feel better when they expect it to, even if it is made only of sugar or starch, it seems to be not just the pill that sets up those expectations, but the entire scenario in which the pill is taken. For example, being in a hospital and interacting with doctors can contribute to a patient’s perception of care, and these social and environmental factors can make a placebo effect more probable.

Postdoctoral fellows Bin Chen and Nitsan Goldstein used visual and textural cues to define a specific place. Then they activated pain-suppressing neurons in the brain while the animals were in this “pain-relief box.” Those pain-suppressing neurons, which Wang’s lab discovered a few years ago, are located in an emotion-processing center of the brain called the central amygdala. By expressing light-sensitive channels in these neurons, the researchers were able to suppress pain with light in the pain-relief box and leave the neurons inactive when mice were in a control box.

Animals learned to prefer the pain-relief box to other environments. And when the researchers tested their response to potentially painful stimuli after they had made that association, they found the mice were less sensitive while they were there. “Just by being in the context that they had associated with pain suppression, we saw that reduced pain—even though we weren’t actually activating those [pain-suppressing] neurons,” Goldstein explains.

Acute and chronic pain relief

Some scientists have been able to elicit placebo pain relief in rodents by treating the animals with morphine, linking environmental cues to the pain suppression caused by the drugs, similar to the way Wang’s team did by directly activating pain-suppressing neurons. This drug-based approach works best for setting up expectations of relief for acute pain; its placebo effect is short-lived and mostly ineffective against chronic pain. So Wang, Chen, and Goldstein were particularly pleased to find that their engineered placebo effect was effective for relieving both acute and chronic pain.

In their experiments, animals experiencing chemotherapy-induced hypersensitivity to touch exhibited as strong a preference for the pain-relief box as animals exposed to a chemical that induces acute pain, even days after their initial conditioning. Once there, their chemotherapy-induced pain sensitivity was eliminated; they exhibited no more sensitivity to painful stimuli than they had prior to receiving chemotherapy.

One of the biggest surprises came when the researchers turned their attention back to the pain-suppressing neurons in the central amygdala that they had used to trigger pain relief. They suspected that those neurons might be reactivated when mice returned to the pain-relief box. Instead, they found that after the initial conditioning period, those neurons remained quiet. “These neurons are not reactivated, yet the mice appear to be no longer in pain,” Wang says. “So it suggests this memory of feeling well is transferred somewhere else.”

Goldstein adds that there must be a pain-suppressing neural circuit somewhere that is activated by pain-relief-associated contexts—and the team’s new placebo model sets researchers up to investigate those pathways. A deeper understanding of that circuitry could enable clinicians to deploy the placebo effect—alone or in combination with active treatments—to better manage patients’ pain in the future.

Scientists find neurons that process language on different timescales

Using functional magnetic resonance imaging (fMRI), neuroscientists have identified several regions of the brain that are responsible for processing language. However, discovering the specific functions of neurons in those regions has proven difficult because fMRI, which measures changes in blood flow, doesn’t have high enough resolution to reveal what small populations of neurons are doing.

Now, using a more precise technique that involves recording electrical activity directly from the brain, MIT neuroscientists have identified different clusters of neurons that appear to process different amounts of linguistic context. These “temporal windows” range from just one word up to about six words.

The temporal windows may reflect different functions for each population, the researchers say. Populations with shorter windows may analyze the meanings of individual words, while those with longer windows may interpret more complex meanings created when words are strung together.

“This is the first time we see clear heterogeneity within the language network,” says Evelina Fedorenko, an associate professor of neuroscience at MIT. “Across dozens of fMRI experiments, these brain areas all seem to do the same thing, but it’s a large, distributed network, so there’s got to be some structure there. This is the first clear demonstration that there is structure, but the different neural populations are spatially interleaved so we can’t see these distinctions with fMRI.”

Fedorenko, who is also a member of MIT’s McGovern Institute for Brain Research, is the senior author of the study, which appears today in Nature Human Behaviour. MIT postdoc Tamar Regev and Harvard University graduate student Colton Casto are the lead authors of the paper.

Temporal windows

Functional MRI, which has helped scientists learn a great deal about the roles of different parts of the brain, works by measuring changes in blood flow in the brain. These measurements act as a proxy for neural activity during a particular task. However, each “voxel,” or three-dimensional chunk, of an fMRI image represents hundreds of thousands to millions of neurons and sums up activity across about two seconds, so it can’t reveal fine-grained detail about what those neurons are doing.
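The consequence of that summing is easy to see in a toy example (the response profiles below are invented for illustration): if two spatially interleaved populations have complementary responses, a voxel that adds them together reports a perfectly flat signal, and the underlying structure disappears.

```python
# Two interleaved neural populations with complementary (made-up) responses.
# A voxel cannot separate them; it reports only their sum.
words = range(8)
pop_a = [1 if t % 2 == 0 else 0 for t in words]  # active on even-numbered words
pop_b = [0 if t % 2 == 0 else 1 for t in words]  # active on odd-numbered words

voxel = [a + b for a, b in zip(pop_a, pop_b)]    # what fMRI would measure
print(voxel)  # flat signal: the two distinct response patterns are invisible
```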

One way to get more detailed information about neural function is to record electrical activity using electrodes implanted in the brain. These data are hard to come by because this procedure is done only in patients who are already undergoing surgery for a neurological condition such as severe epilepsy.

“It can take a few years to get enough data for a task because these patients are relatively rare, and in a given patient electrodes are implanted in idiosyncratic locations based on clinical needs, so it takes a while to assemble a dataset with sufficient coverage of some target part of the cortex. But these data, of course, are the best kind of data we can get from human brains: You know exactly where you are spatially and you have very fine-grained temporal information,” Fedorenko says.

In a 2016 study, Fedorenko reported using this approach to study the language processing regions of six people. Electrical activity was recorded while the participants read four different types of language stimuli: complete sentences, lists of words, lists of non-words, and “jabberwocky” sentences — sentences that have grammatical structure but are made of nonsense words.

Those data showed that in some neural populations in language processing regions, activity would gradually build up over a period of several words when the participants were reading sentences. However, this did not happen when they read lists of words, lists of nonwords, or jabberwocky sentences.

In the new study, Regev and Casto went back to those data and analyzed the temporal response profiles in greater detail. In their original dataset, they had recordings of electrical activity from 177 language-responsive electrodes across the six patients. By conservative estimates, each electrode recorded the averaged activity of about 200,000 neurons. They also obtained new data from a second set of 16 patients, which included recordings from another 362 language-responsive electrodes.

When the researchers analyzed these data, they found that in some of the neural populations, activity would fluctuate up and down with each word. In others, however, activity would build up over multiple words before falling again, and yet others would show a steady buildup of neural activity over longer spans of words.

By comparing their data with predictions made by a computational model that the researchers designed to process stimuli with different temporal windows, the researchers found that neural populations from language processing areas could be divided into three clusters. These clusters represent temporal windows of either one, four, or six words.
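The window-assignment idea can be illustrated with a deliberately simple sketch. This is our own toy, not the authors’ computational model: each simulated population is assumed to integrate a word-level signal over a fixed window of one, four, or six words (ramping up, then resetting), and a recorded response profile is assigned to whichever window best predicts it. All function names and the ramp-shaped response are illustrative assumptions.

```python
def predict_response(n_words, window):
    """Toy prediction: activity ramps up over `window` words, then resets.

    This is a crude stand-in for the build-up-and-fall pattern described
    in the study, not the actual model used by the researchers.
    """
    return [(i % window) + 1 for i in range(n_words)]

def correlation(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    if vx == 0 or vy == 0:  # constant sequence: correlation undefined
        return 0.0
    return cov / (vx * vy) ** 0.5

def assign_window(response, candidate_windows=(1, 4, 6)):
    """Label a response profile with the best-fitting temporal window."""
    return max(candidate_windows,
               key=lambda w: correlation(response,
                                         predict_response(len(response), w)))
```

A profile that builds over four words and falls, repeated across a sentence, gets assigned the four-word window; a six-word ramp gets the six-word window.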

“It really looks like these neural populations integrate information across different timescales along the sentence,” Regev says.

Processing words and meaning

These differences in temporal window size would have been impossible to see using fMRI, the researchers say.

“At the resolution of fMRI, we don’t see much heterogeneity within language-responsive regions. If you localize in individual participants the voxels in their brain that are most responsive to language, you find that their responses to sentences, word lists, jabberwocky sentences and non-word lists are highly similar,” Casto says.

The researchers were also able to determine the anatomical locations where these clusters were found. Neural populations with the shortest temporal window were found predominantly in the posterior temporal lobe, though some were also found in the frontal or anterior temporal lobes. Neural populations from the two other clusters, with longer temporal windows, were spread more evenly throughout the temporal and frontal lobes.

Fedorenko’s lab now plans to study whether these timescales correspond to different functions. One possibility is that the populations with the shortest timescale process the meanings of individual words, while those with longer timescales interpret the meanings represented by multiple words.

“We already know that in the language network, there is sensitivity to how words go together and to the meanings of individual words,” Regev says. “So that could potentially map to what we’re finding, where the longest timescale is sensitive to things like syntax or relationships between words, and maybe the shortest timescale is more sensitive to features of single words or parts of them.”

The research was funded by the Zuckerman-CHE STEM Leadership Program, the Poitras Center for Psychiatric Disorders Research, the Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, the U.S. National Institutes of Health, an American Epilepsy Society Research and Training Fellowship, the McDonnell Center for Systems Neuroscience, Fondazione Neurone, the McGovern Institute, MIT’s Department of Brain and Cognitive Sciences, and the Simons Center for the Social Brain.

Five MIT faculty elected to the National Academy of Sciences for 2024

The National Academy of Sciences has elected 120 members and 24 international members, including five faculty members from MIT. Guoping Feng, Piotr Indyk, Daniel J. Kleitman, Daniela Rus, and Senthil Todadri were elected in recognition of their “distinguished and continuing achievements in original research.” Membership in the National Academy of Sciences is one of the highest honors a scientist can receive.

Among the new members added this year are also nine MIT alumni, including Zvi Bern ’82; Harold Hwang ’93, SM ’93; Leonard Kleinrock SM ’59, PhD ’63; Jeffrey C. Lagarias ’71, SM ’72, PhD ’74; Ann Pearson PhD ’00; Robin Pemantle PhD ’88; Jonas C. Peters PhD ’98; Lynn Talley PhD ’82; and Peter T. Wolczanski ’76. Those elected this year bring the total number of active members to 2,617, with 537 international members.

The National Academy of Sciences is a private, nonprofit institution that was established under a congressional charter signed by President Abraham Lincoln in 1863. It recognizes achievement in science by election to membership, and — with the National Academy of Engineering and the National Academy of Medicine — provides science, engineering, and health policy advice to the federal government and other organizations.

Guoping Feng

Guoping Feng is the James W. (1963) and Patricia T. Poitras Professor in the Department of Brain and Cognitive Sciences. He is also associate director and investigator in the McGovern Institute for Brain Research, a member of the Broad Institute of MIT and Harvard, and director of the Hock E. Tan and K. Lisa Yang Center for Autism Research.

His research focuses on understanding the molecular mechanisms that regulate the development and function of synapses, the places in the brain where neurons connect and communicate. He’s interested in how defects in the synapses can contribute to psychiatric and neurodevelopmental disorders. By understanding the fundamental mechanisms behind these disorders, he’s producing foundational knowledge that may guide the development of new treatments for conditions like obsessive-compulsive disorder and schizophrenia.

Feng received his medical training at Zhejiang University Medical School in Hangzhou, China, and his PhD in molecular genetics from the State University of New York at Buffalo. He did his postdoctoral training at Washington University at St. Louis and was on the faculty at Duke University School of Medicine before coming to MIT in 2010. He is a member of the American Academy of Arts and Sciences, a fellow of the American Association for the Advancement of Science, and was elected to the National Academy of Medicine in 2023.

Piotr Indyk

Piotr Indyk is the Thomas D. and Virginia W. Cabot Professor of Electrical Engineering and Computer Science. He received his magister degree from the University of Warsaw and his PhD from Stanford University before coming to MIT in 2000.

Indyk’s research focuses on building efficient, sublinear, and streaming algorithms. He’s developed, for example, algorithms that can use limited time and space to navigate massive data streams, that can separate signals into individual frequencies faster than other methods, and that can address the “nearest neighbor” problem by finding highly similar data points without needing to scan an entire database. His work has applications in everything from machine learning to data mining.

He has been named a Simons Investigator and a fellow of the Association for Computing Machinery. In 2023, he was elected to the American Academy of Arts and Sciences.

Daniel J. Kleitman

Daniel Kleitman, a professor emeritus of applied mathematics, has been at MIT since 1966. He received his undergraduate degree from Cornell University and his master’s and PhD in physics from Harvard University before doing postdoctoral work at Harvard and the Niels Bohr Institute in Copenhagen, Denmark.

Kleitman’s research interests include operations research, genomics, graph theory, and combinatorics, the area of math concerned with counting. He was actually a professor of physics at Brandeis University before changing his field to math, encouraged by the prolific mathematician Paul Erdős. In fact, Kleitman has the rare distinction of having an Erdős number of just one. The number is a measure of the “collaborative distance” between a mathematician and Erdős in terms of authorship of papers, and studies have shown that leading mathematicians have particularly low numbers.

He’s a member of the American Academy of Arts and Sciences and has made important contributions to the MIT community throughout his career. He was head of the Department of Mathematics and served on a number of committees, including the Applied Mathematics Committee. He also helped create web-based technology and an online textbook for several of the department’s core undergraduate courses. He was even a math advisor for the MIT-based film “Good Will Hunting.”

Daniela Rus

Daniela Rus, the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science, is the director of the Computer Science and Artificial Intelligence Laboratory (CSAIL). She also serves as director of the Toyota-CSAIL Joint Research Center.

Her research on robotics, artificial intelligence, and data science is geared toward understanding the science and engineering of autonomy. Her ultimate goal is to create a future where machines are seamlessly integrated into daily life to support people with cognitive and physical tasks, and deployed in a way that ensures they benefit humanity. She’s working to increase the ability of machines to reason, learn, and adapt to complex tasks in human-centered environments with applications for agriculture, manufacturing, medicine, construction, and other industries. She’s also interested in creating new tools for designing and fabricating robots and in improving the interfaces between robots and people, and she’s done collaborative projects at the intersection of technology and artistic performance.

Rus received her undergraduate degree from the University of Iowa and her PhD in computer science from Cornell University. She was a professor of computer science at Dartmouth College before coming to MIT in 2004. She is part of the Class of 2002 MacArthur Fellows; was elected to the National Academy of Engineering and the American Academy of Arts and Sciences; and is a fellow of the Association for Computing Machinery, the Institute of Electrical and Electronics Engineers, and the Association for the Advancement of Artificial Intelligence.

Senthil Todadri

Senthil Todadri, a professor of physics, came to MIT in 2001. He received his undergraduate degree from the Indian Institute of Technology in Kanpur and his PhD from Yale University before working as a postdoc at the Kavli Institute for Theoretical Physics in Santa Barbara, California.

Todadri’s research focuses on condensed matter theory. He’s interested in novel phases and phase transitions of quantum matter that expand beyond existing paradigms. Combining modeling, experiments, and abstract methods, he’s working to develop a theoretical framework for describing the physics of these systems. Much of that work involves understanding the phenomena that arise from impurities or strong interactions between electrons in solids that don’t conform to conventional physical theories. He also pioneered the theory of deconfined quantum criticality, which describes a class of phase transitions, and he discovered dualities of quantum field theories in two-dimensional superconducting states, which have important applications to many problems in the field.

Todadri has been named a Simons Investigator, a Sloan Research Fellow, and a fellow of the American Physical Society. In 2023, he was elected to the American Academy of Arts and Sciences.

Using MRI, engineers have found a way to detect light deep in the brain

Scientists often label cells with proteins that glow, allowing them to track the growth of a tumor, or measure changes in gene expression that occur as cells differentiate.

Alan Jasanoff, associate member of the McGovern Institute, and a professor of brain and cognitive sciences, biological engineering, and nuclear science and engineering at MIT. Photo: Justin Knight

While this technique works well in cells and some tissues of the body, it has been difficult to apply this technique to image structures deep within the brain, because the light scatters too much before it can be detected.

MIT engineers have now come up with a novel way to detect this type of light, known as bioluminescence, in the brain: They engineered blood vessels of the brain to express a protein that causes them to dilate in the presence of light. That dilation can then be observed with magnetic resonance imaging (MRI), allowing researchers to pinpoint the source of light.

“A well-known problem that we face in neuroscience, as well as other fields, is that it’s very difficult to use optical tools in deep tissue. One of the core objectives of our study was to come up with a way to image bioluminescent molecules in deep tissue with reasonably high resolution,” says Alan Jasanoff, an MIT professor of biological engineering, brain and cognitive sciences, and nuclear science and engineering.

The new technique developed by Jasanoff and his colleagues could enable researchers to explore the inner workings of the brain in more detail than has previously been possible.

Jasanoff, who is also an associate investigator at MIT’s McGovern Institute for Brain Research, is the senior author of the study, which appears today in Nature Biomedical Engineering. Former MIT postdocs Robert Ohlendorf and Nan Li are the lead authors of the paper.

Detecting light

Bioluminescent proteins are found in many organisms, including jellyfish and fireflies. Scientists use these proteins to label specific proteins or cells, whose glow can be detected by a luminometer. One of the proteins often used for this purpose is luciferase, which comes in a variety of forms that glow in different colors.

Jasanoff’s lab, which specializes in developing new ways to image the brain using MRI, wanted to find a way to detect luciferase deep within the brain. To achieve that, they came up with a method for transforming the blood vessels of the brain into light detectors. A popular form of MRI works by imaging changes in blood flow in the brain, so the researchers engineered the blood vessels themselves to respond to light by dilating.

“Blood vessels are a dominant source of imaging contrast in functional MRI and other non-invasive imaging techniques, so we thought we could convert the intrinsic ability of these techniques to image blood vessels into a means for imaging light, by photosensitizing the blood vessels themselves,” Jasanoff says.

To make the blood vessels sensitive to light, the researchers engineered them to express a bacterial protein called Beggiatoa photoactivated adenylate cyclase (bPAC). When exposed to light, this enzyme produces a molecule called cAMP, which causes blood vessels to dilate. When blood vessels dilate, the balance of oxygenated and deoxygenated hemoglobin shifts; because the two forms have different magnetic properties, this shift can be detected by MRI.

The bPAC protein responds specifically to blue light, which has a short wavelength, so it detects light generated within close range. The researchers used a viral vector to deliver the gene for bPAC specifically to the smooth muscle cells that make up blood vessels. When this vector was injected in rats, blood vessels throughout a large area of the brain became light-sensitive.

“Blood vessels form a network in the brain that is extremely dense. Every cell in the brain is within a couple dozen microns of a blood vessel,” Jasanoff says. “The way I like to describe our approach is that we essentially turn the vasculature of the brain into a three-dimensional camera.”

Once the blood vessels were sensitized to light, the researchers implanted cells that had been engineered to express luciferase if a substrate called CZT is present. In the rats, the researchers were able to detect luciferase by imaging the brain with MRI, which revealed dilated blood vessels.

Tracking changes in the brain

The researchers then tested whether their technique could detect light produced by the brain’s own cells, if they were engineered to express luciferase. They delivered the gene for a type of luciferase called GLuc to cells in a deep brain region known as the striatum. When the CZT substrate was injected into the animals, MRI imaging revealed the sites where light had been emitted.

This technique, which the researchers dubbed bioluminescence imaging using hemodynamics, or BLUsH, could be used in a variety of ways to help scientists learn more about the brain, Jasanoff says.

For one, it could be used to map changes in gene expression, by linking the expression of luciferase to a specific gene. This could help researchers observe how gene expression changes during embryonic development and cell differentiation, or when new memories form. Luciferase could also be used to map anatomical connections between cells or to reveal how cells communicate with each other.

The researchers now plan to explore some of those applications, as well as adapting the technique for use in mice and other animal models.

The research was funded by the U.S. National Institutes of Health, the G. Harold and Leila Y. Mathers Foundation, Lore Harp McGovern, Gardner Hendrie, a fellowship from the German Research Foundation, a Marie Sklodowska-Curie Fellowship from the European Union, and a Y. Eva Tan Fellowship and a J. Douglas Tan Fellowship, both from the McGovern Institute for Brain Research.

Women in STEM — A celebration of excellence and curiosity

What better way to commemorate Women’s History Month and International Women’s Day than to give three of the world’s most accomplished scientists an opportunity to talk about their careers? On March 7, MindHandHeart invited professors Paula Hammond, Ann Graybiel, and Sangeeta Bhatia to share their career journeys, from the progress they have witnessed to the challenges they have faced as women in STEM. Their conversation was moderated by Mary Fuller, chair of the faculty and professor of literature.

Hammond, an Institute professor with appointments in the Department of Chemical Engineering and the Koch Institute for Integrative Cancer Research, reflected on the strides made by women faculty at MIT, while acknowledging ongoing challenges. “I think that we have advanced a great deal in the last few decades in terms of the numbers of women who are present, although we still have a long way to go,” Hammond noted in her opening. “We’ve seen a remarkable increase over the past couple of decades in our undergraduate population here at MIT, and now we’re beginning to see it in the graduate population, which is really exciting.” Hammond was recently appointed to the role of vice provost for faculty.

Ann Graybiel, also an Institute professor, who has appointments in the Department of Brain and Cognitive Sciences and the McGovern Institute for Brain Research, described growing up in the Deep South. “Girls can’t do science,” she remembers being told in school, and they “can’t do research.” Yet her father, a physician scientist, often took her with him to work and had her assist from a young age, eventually encouraging her directly to pursue a career in science. Graybiel, who first came to MIT in 1973, noted that she continued to face barriers and rejection throughout her career long after leaving the South, but that individual gestures of inspiration, generosity, or simple statements of “You can do it” from her peers helped her power through and continue in her scientific pursuits.

Sangeeta Bhatia, the John and Dorothy Wilson Professor of Health Sciences and Technology and Electrical Engineering and Computer Science, director of the Marble Center for Cancer Nanomedicine at the Koch Institute for Integrative Cancer Research, and a member of the Institute for Medical Engineering and Science, is also the mother of two teenage girls. She shared her perspective on balancing career and family life: “I wanted to pick up my kids from school and I wanted to know their friends. … I had a vision for the life that I wanted.” Setting boundaries at work, she noted, empowered her to achieve both personal and professional goals. Bhatia also described her collaboration with President Emerita Susan Hockfield and MIT Amgen Professor of Biology Emerita Nancy Hopkins to spearhead the Future Founders Initiative, which aims to boost the representation of female faculty members pursuing biotechnology ventures.

A video of the full panel discussion is available on the MindHandHeart YouTube channel.

A new computational technique could make it easier to engineer useful proteins

To engineer proteins with useful functions, researchers usually begin with a natural protein that has a desirable function, such as emitting fluorescent light, and put it through many rounds of random mutation that eventually generate an optimized version of the protein.

This process has yielded optimized versions of many important proteins, including green fluorescent protein (GFP). However, for other proteins, it has proven difficult to generate an optimized version. MIT researchers have now developed a computational approach that makes it easier to predict mutations that will lead to better proteins, based on a relatively small amount of data.

Using this model, the researchers generated proteins with mutations that were predicted to lead to improved versions of GFP and a protein from adeno-associated virus (AAV), which is used to deliver DNA for gene therapy. They hope it could also be used to develop additional tools for neuroscience research and medical applications.

MIT Professor of Brain and Cognitive Sciences Ila Fiete in her lab at the McGovern Institute. Photo: Steph Stevens

“Protein design is a hard problem because the mapping from DNA sequence to protein structure and function is really complex. There might be a great protein 10 changes away in the sequence, but each intermediate change might correspond to a totally nonfunctional protein. It’s like trying to find your way to the river basin in a mountain range, when there are craggy peaks along the way that block your view. The current work tries to make the riverbed easier to find,” says Ila Fiete, a professor of brain and cognitive sciences at MIT, a member of MIT’s McGovern Institute for Brain Research, director of the K. Lisa Yang Integrative Computational Neuroscience Center, and one of the senior authors of the study.

Regina Barzilay, the School of Engineering Distinguished Professor for AI and Health at MIT, and Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering and Computer Science at MIT, are also senior authors of an open-access paper on the work, which will be presented at the International Conference on Learning Representations in May. MIT graduate students Andrew Kirjner and Jason Yim are the lead authors of the study. Other authors include Shahar Bracha, an MIT postdoc, and Raman Samusevich, a graduate student at Czech Technical University.

Optimizing proteins

Many naturally occurring proteins have functions that could make them useful for research or medical applications, but they need a little extra engineering to optimize them. In this study, the researchers were originally interested in developing proteins that could be used in living cells as voltage indicators. These proteins, produced by some bacteria and algae, emit fluorescent light when an electric potential is detected. If engineered for use in mammalian cells, such proteins could allow researchers to measure neuron activity without using electrodes.

While decades of research have gone into engineering these proteins to produce a stronger fluorescent signal, on a faster timescale, they haven’t become effective enough for widespread use. Bracha, who works in Edward Boyden’s lab at the McGovern Institute, reached out to Fiete’s lab to see if they could work together on a computational approach that might help speed up the process of optimizing the proteins.

“This work exemplifies the human serendipity that characterizes so much science discovery,” Fiete says.

“This work grew out of the Yang Tan Collective retreat, a scientific meeting of researchers from multiple centers at MIT with distinct missions unified by the shared support of K. Lisa Yang. We learned that some of our interests and tools in modeling how brains learn and optimize could be applied in the totally different domain of protein design, as being practiced in the Boyden lab.”

For any given protein that researchers might want to optimize, there is a nearly infinite number of possible sequences that could be generated by swapping in different amino acids at each point within the sequence. With so many possible variants, it is impossible to test all of them experimentally, so researchers have turned to computational modeling to try to predict which ones will work best.
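To make that combinatorial explosion concrete, a back-of-the-envelope sketch: with 19 alternative amino acids at each position, the count of variants at a given mutational distance grows astronomically. The only fact assumed here is GFP’s approximate length of 238 residues; the function name is ours.

```python
from math import comb

def variants_at_distance(length, k):
    """Number of protein variants with mutations at exactly k positions.

    There are C(length, k) ways to choose the positions and 19 alternative
    amino acids at each one.
    """
    return comb(length, k) * 19 ** k

L = 238  # approximate length of GFP
print(variants_at_distance(L, 1))  # single mutants: 238 * 19 = 4,522
print(variants_at_distance(L, 7))  # seven mutations: roughly 7 x 10^21
```

Even at a mutational distance of seven (the distance reached by the model described below the fold of this article), exhaustive experimental testing is plainly out of reach.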

In this study, the researchers set out to overcome those challenges, using data from GFP to develop and test a computational model that could predict better versions of the protein.

They began by training a type of model known as a convolutional neural network (CNN) on experimental data consisting of GFP sequences and their brightness — the feature that they wanted to optimize.

The model was able to create a “fitness landscape” — a three-dimensional map that depicts the fitness of a given protein and how much it differs from the original sequence — based on a relatively small amount of experimental data (from about 1,000 variants of GFP).

These landscapes contain peaks that represent fitter proteins and valleys that represent less fit proteins. Predicting the path that a protein needs to follow to reach the peaks of fitness can be difficult, because often a protein will need to undergo a mutation that makes it less fit before it reaches a nearby peak of higher fitness. To overcome this problem, the researchers used an existing computational technique to “smooth” the fitness landscape.

Once these small bumps in the landscape were smoothed, the researchers retrained the CNN model and found that it was able to reach greater fitness peaks more easily. The model was able to predict optimized GFP sequences that differed by as many as seven amino acids from the sequence they started with, and the best of these proteins were estimated to be about 2.5 times fitter than the original.

“Once we have this landscape that represents what the model thinks is nearby, we smooth it out and then we retrain the model on the smoother version of the landscape,” Kirjner says. “Now there is a smooth path from your starting point to the top, which the model is now able to reach by iteratively making small improvements. The same is often impossible for unsmoothed landscapes.”
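The intuition can be seen in a deliberately tiny one-dimensional sketch. This is our own toy example, not the paper’s landscape, model, or smoothing technique: a greedy hill-climber stalls at a small dip in a rugged landscape, but after a simple moving-average smoothing it reaches the global peak.

```python
def smooth(landscape, radius=1):
    """Moving-average smoothing over a 1-D fitness landscape."""
    n = len(landscape)
    out = []
    for i in range(n):
        window = landscape[max(0, i - radius):min(n, i + radius + 1)]
        out.append(sum(window) / len(window))
    return out

def hill_climb(landscape, start=0):
    """Greedy ascent: move to a neighbor only if its fitness is higher."""
    i = start
    while True:
        neighbors = [j for j in (i - 1, i + 1) if 0 <= j < len(landscape)]
        best = max(neighbors, key=lambda j: landscape[j])
        if landscape[best] <= landscape[i]:
            return i  # local peak: no neighbor is fitter
        i = best

# A dip at index 3 traps the greedy climber short of the peak at index 6;
# on the smoothed landscape the same climber reaches index 6.
rugged = [0.0, 1.0, 2.0, 1.5, 3.0, 4.0, 5.0]
```

Here `hill_climb(rugged)` stops at index 2, while `hill_climb(smooth(rugged))` continues to the global peak at index 6; the real method applies the same idea in the vastly higher-dimensional space of protein sequences.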

Proof-of-concept

The researchers also showed that this approach worked well in identifying new sequences for the viral capsid of adeno-associated virus (AAV), a viral vector that is commonly used to deliver DNA. In that case, they optimized the capsid for its ability to package a DNA payload.

“We used GFP and AAV as a proof-of-concept to show that this is a method that works on data sets that are very well-characterized, and because of that, it should be applicable to other protein engineering problems,” Bracha says.

The researchers now plan to use this computational technique on data that Bracha has been generating on voltage indicator proteins.

“Dozens of labs have been working on that for two decades, and still there isn’t anything better,” she says. “The hope is that now, with the generation of a smaller data set, we could train a model in silico and make predictions that could be better than the past two decades of manual testing.”

The research was funded, in part, by the U.S. National Science Foundation, the Machine Learning for Pharmaceutical Discovery and Synthesis consortium, the Abdul Latif Jameel Clinic for Machine Learning in Health, the DTRA Discovery of Medical Countermeasures Against New and Emerging threats program, the DARPA Accelerated Molecular Discovery program, the Sanofi Computational Antibody Design grant, the U.S. Office of Naval Research, the Howard Hughes Medical Institute, the National Institutes of Health, the K. Lisa Yang ICoN Center, and the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics at MIT.