Imaging method reveals new cells and structures in human brain tissue

Using a novel microscopy technique, MIT and Brigham and Women’s Hospital/Harvard Medical School researchers have imaged human brain tissue in greater detail than ever before, revealing cells and structures that were not previously visible.

McGovern Institute Investigator Edward Boyden. Photo: Justin Knight

Among their findings, the researchers discovered that some “low-grade” brain tumors contain more putatively aggressive tumor cells than expected, suggesting that some of these tumors may be more aggressive than previously thought.

The researchers hope that this technique could eventually be deployed to diagnose tumors, generate more accurate prognoses, and help doctors choose treatments.

“We’re starting to see how important the interactions of neurons and synapses with the surrounding brain are to the growth and progression of tumors. A lot of those things we really couldn’t see with conventional tools, but now we have a tool to look at those tissues at the nanoscale and try to understand these interactions,” says Pablo Valdes, a former MIT postdoc who is now an assistant professor of neuroscience at the University of Texas Medical Branch and the lead author of the study.

Edward Boyden, the Y. Eva Tan Professor in Neurotechnology at MIT; a professor of biological engineering, media arts and sciences, and brain and cognitive sciences; a Howard Hughes Medical Institute investigator; and a member of MIT’s McGovern Institute for Brain Research and Koch Institute for Integrative Cancer Research; and E. Antonio Chiocca, a professor of neurosurgery at Harvard Medical School and chair of neurosurgery at Brigham and Women’s Hospital, are the senior authors of the study, which appears today in Science Translational Medicine.

Making molecules visible

The new imaging method is based on expansion microscopy, a technique developed in Boyden’s lab in 2015 based on a simple premise: Instead of using powerful, expensive microscopes to obtain high-resolution images, the researchers devised a way to expand the tissue itself, allowing it to be imaged at very high resolution with a regular light microscope.

The technique works by embedding the tissue into a polymer that swells when water is added, and then softening up and breaking apart the proteins that normally hold tissue together. Then, adding water swells the polymer, pulling all the proteins apart from each other. This tissue enlargement allows researchers to obtain images with a resolution of around 70 nanometers, which was previously possible only with very specialized and expensive microscopes such as scanning electron microscopes.
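As a rough back-of-the-envelope illustration of where a figure like 70 nanometers comes from: a standard light microscope's resolution is fixed by the diffraction limit, so physically enlarging the tissue shrinks the distance in the original specimen that each resolvable spot corresponds to. The ~300 nm diffraction limit and ~4.5x linear expansion factor below are typical values from the expansion-microscopy literature, not figures reported in this study.

```python
# Illustrative arithmetic only: both numbers are assumed typical values,
# not measurements from this study.
diffraction_limit_nm = 300  # approximate resolution of a standard light microscope
expansion_factor = 4.5      # approximate linear expansion of the swollen hydrogel

# Expanding the tissue ~4.5x means the microscope's fixed optical resolution
# maps to a ~4.5x smaller distance in the original, unexpanded tissue.
effective_resolution_nm = diffraction_limit_nm / expansion_factor
print(round(effective_resolution_nm))  # on the order of the ~70 nm scale cited above
```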

In 2017, the Boyden lab developed a way to expand preserved human tissue specimens, but the chemical reagents they used also destroyed the proteins the researchers were interested in labeling. Labeling the proteins with fluorescent antibodies before expansion allowed their location and identity to be visualized after the expansion process was complete. However, the antibodies typically used for this kind of labeling can’t easily squeeze through densely packed tissue before it’s expanded.

So, for this study, the authors devised a different tissue-softening protocol that breaks up the tissue but preserves proteins in the sample. After the tissue is expanded, proteins can be labeled with commercially available fluorescent antibodies. The researchers can then perform several rounds of imaging, with three or four different proteins labeled in each round. This labeling enables many more structures to be imaged: once the tissue is expanded, antibodies can squeeze through and label proteins they couldn’t previously reach.

“We open up the space between the proteins so that we can get antibodies into crowded spaces that we couldn’t otherwise,” Valdes says. “We saw that we could expand the tissue, we could decrowd the proteins, and we could image many, many proteins in the same tissue by doing multiple rounds of staining.”

Working with MIT Assistant Professor Deblina Sarkar, the researchers demonstrated a form of this “decrowding” in 2022 using mouse tissue.

In the new study, the researchers adapted the decrowding technique for human brain tissue samples that are used in clinical settings for pathological diagnosis and to guide treatment decisions. These samples can be more difficult to work with because they are usually embedded in paraffin and treated with other chemicals that must be broken down before the tissue can be expanded.

In this study, the researchers labeled up to 16 different molecules per tissue sample. The molecules they targeted include markers for a variety of structures, including axons and synapses, as well as markers that identify cell types such as astrocytes and cells that form blood vessels. They also labeled molecules linked to tumor aggressiveness and neurodegeneration.

Using this approach, the researchers analyzed healthy brain tissue, along with samples from patients with two types of glioma — high-grade glioblastoma, which is the most aggressive primary brain tumor, with a poor prognosis, and low-grade gliomas, which are considered less aggressive.

“We wanted to look at brain tumors so that we can understand them better at the nanoscale level, and by doing that, to be able to develop better treatments and diagnoses in the future. At this point, it was more developing a tool to be able to understand them better, because currently in neuro-oncology, people haven’t done much in terms of super-resolution imaging,” Valdes says.

A diagnostic tool

To identify aggressive tumor cells in the gliomas they studied, the researchers labeled vimentin, a protein that is found in highly aggressive glioblastomas. To their surprise, they found many more vimentin-expressing tumor cells in low-grade gliomas than had been seen using any other method.

“This tells us something about the biology of these tumors, specifically, how some of them probably have a more aggressive nature than you would suspect by doing standard staining techniques,” Valdes says.

When glioma patients undergo surgery, tumor samples are preserved and analyzed using immunohistochemistry staining, which can reveal certain markers of aggressiveness, including some of the markers analyzed in this study.

“These are incurable brain cancers, and this type of discovery will allow us to figure out which cancer molecules to target so we can design better treatments. It also proves the profound impact of having clinicians like us at the Brigham and Women’s interacting with basic scientists such as Ed Boyden at MIT to discover new technologies that can improve patient lives,” Chiocca says.

The researchers hope their expansion microscopy technique could allow doctors to learn much more about patients’ tumors, helping them to determine how aggressive the tumor is and guiding treatment choices. Valdes now plans to do a larger study of tumor types to try to establish diagnostic guidelines based on the tumor traits that can be revealed using this technique.

“Our hope is that this is going to be a diagnostic tool to pick up marker cells, interactions, and so on, that we couldn’t before,” he says. “It’s a practical tool that will help the clinical world of neuro-oncology and neuropathology look at neurological diseases at the nanoscale like never before, because fundamentally it’s a very simple tool to use.”

Boyden’s lab also plans to use this technique to study other aspects of brain function, in healthy and diseased tissue.

“Being able to do nanoimaging is important because biology is about nanoscale things — genes, gene products, biomolecules — and they interact over nanoscale distances,” Boyden says. “We can study all sorts of nanoscale interactions, including synaptic changes, immune interactions, and changes that occur during cancer and aging.”

The research was funded by K. Lisa Yang, the Howard Hughes Medical Institute, John Doerr, Open Philanthropy, the Bill and Melinda Gates Foundation, the Koch Institute Frontier Research Program, the National Institutes of Health, and the Neurosurgery Research and Education Foundation.

Study reveals a universal pattern of brain wave frequencies

Throughout the brain’s cortex, neurons are arranged in six distinctive layers, which can be readily seen with a microscope. A team of MIT and Vanderbilt University neuroscientists has now found that these layers also show distinct patterns of electrical activity, which are consistent over many brain regions and across several animal species, including humans.

The researchers found that in the topmost layers, neuron activity is dominated by rapid oscillations known as gamma waves. In the deeper layers, slower oscillations called alpha and beta waves predominate. The universality of these patterns suggests that these oscillations are likely playing an important role across the brain, the researchers say.

“When you see something that consistent and ubiquitous across cortex, it’s playing a very fundamental role in what the cortex does,” says Earl Miller, the Picower Professor of Neuroscience, a member of MIT’s Picower Institute for Learning and Memory, and one of the senior authors of the new study.

Imbalances in how these oscillations interact with each other may be involved in brain disorders such as attention deficit hyperactivity disorder, the researchers say.

“Overly synchronous neural activity is known to play a role in epilepsy, and now we suspect that different pathologies of synchrony may contribute to many brain disorders, including disorders of perception, attention, memory, and motor control. In an orchestra, one instrument played out of synchrony with the rest can disrupt the coherence of the entire piece of music,” says Robert Desimone, director of MIT’s McGovern Institute for Brain Research and one of the senior authors of the study.

André Bastos, an assistant professor of psychology at Vanderbilt University, is also a senior author of the open-access paper, which appears today in Nature Neuroscience. The lead authors of the paper are MIT research scientist Diego Mendoza-Halliday and MIT postdoc Alex Major.

Layers of activity

The human brain contains billions of neurons, each of which has its own electrical firing patterns. Together, groups of neurons with similar patterns generate oscillations of electrical activity, or brain waves, which can have different frequencies. Miller’s lab has previously shown that high-frequency gamma rhythms are associated with encoding and retrieving sensory information, while low-frequency beta rhythms act as a control mechanism that determines which information is read out from working memory.

His lab has also found that in certain parts of the prefrontal cortex, different brain layers show distinctive patterns of oscillation: faster oscillation at the surface and slower oscillation in the deep layers. One study, led by Bastos when he was a postdoc in Miller’s lab, showed that as animals performed working memory tasks, lower-frequency rhythms generated in deeper layers regulated the higher-frequency gamma rhythms generated in the superficial layers.

In addition to working memory, the brain’s cortex also is the seat of thought, planning, and high-level processing of emotion and sensory information. Throughout the regions involved in these functions, neurons are arranged in six layers, and each layer has its own distinctive combination of cell types and connections with other brain areas.

“The cortex is organized anatomically into six layers, no matter whether you look at mice or humans or any mammalian species, and this pattern is present in all cortical areas within each species,” Mendoza-Halliday says. “Unfortunately, a lot of studies of brain activity have been ignoring those layers because when you record the activity of neurons, it’s been difficult to understand where they are in the context of those layers.”

In the new paper, the researchers wanted to explore whether the layered oscillation pattern they had seen in the prefrontal cortex is more widespread, occurring across different parts of the cortex and across species.

Using a combination of data acquired in Miller’s lab, Desimone’s lab, and labs from collaborators at Vanderbilt, the Netherlands Institute for Neuroscience, and the University of Western Ontario, the researchers were able to analyze 14 different areas of the cortex, from four mammalian species. This data included recordings of electrical activity from three human patients who had electrodes inserted in the brain as part of a surgical procedure they were undergoing.

Recording from individual cortical layers has been difficult in the past because each layer is less than a millimeter thick, making it hard to know which layer an electrode is recording from. For this study, the researchers recorded electrical activity using special electrodes that capture all of the layers at once, then fed the data into a new computational algorithm they designed, termed FLIP (frequency-based layer identification procedure), which determines which layer each signal came from.
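The published FLIP implementation is not described in this article; purely as a hedged sketch of the idea it names — labeling channels along the probe by comparing gamma-band power to alpha/beta-band power — one might write something like the following. The band boundaries, the ratio threshold, and the function itself are illustrative assumptions, not the authors' code.

```python
import numpy as np

def flip_sketch(power, freqs):
    """Toy layer identification: for each recording channel (ordered from
    surface to depth), compare mean gamma-band power to mean alpha/beta-band
    power and call the channel 'superficial' where gamma dominates.
    `power` is (n_channels, n_freqs); `freqs` is in Hz.
    This is an illustrative guess at the FLIP procedure, not its published code."""
    gamma = (freqs >= 50) & (freqs <= 150)      # assumed gamma band
    alpha_beta = (freqs >= 10) & (freqs <= 30)  # assumed alpha/beta band
    ratio = power[:, gamma].mean(axis=1) / power[:, alpha_beta].mean(axis=1)
    return np.where(ratio > 1.0, "superficial", "deep")

# Synthetic example: top channels gamma-dominated, bottom channels alpha/beta-dominated
freqs = np.linspace(1, 160, 160)
power = np.ones((4, freqs.size))
power[:2, (freqs >= 50) & (freqs <= 150)] = 5.0   # surface: strong gamma
power[2:, (freqs >= 10) & (freqs <= 30)] = 5.0    # depth: strong alpha/beta
print(flip_sketch(power, freqs))
```

On this synthetic input, the first two channels come back "superficial" and the last two "deep," mirroring the surface-gamma/deep-alpha-beta pattern the study reports.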

“More recent technology allows recording of all layers of cortex simultaneously. This paints a broader perspective of microcircuitry and allowed us to observe this layered pattern,” Major says. “This work is exciting because it is both informative of a fundamental microcircuit pattern and provides a robust new technique for studying the brain. It doesn’t matter if the brain is performing a task or at rest and can be observed in as little as five to 10 seconds.”

Across all species, in each region studied, the researchers found the same layered activity pattern.

“We did a mass analysis of all the data to see if we could find the same pattern in all areas of the cortex, and voilà, it was everywhere. That was a real indication that what had previously been seen in a couple of areas was representing a fundamental mechanism across the cortex,” Mendoza-Halliday says.

Maintaining balance

The findings support a model that Miller’s lab has previously put forth, which proposes that the brain’s spatial organization helps it to incorporate new information, which is carried by high-frequency oscillations, into existing memories and brain processes, which are maintained by low-frequency oscillations. As information passes from layer to layer, input can be incorporated as needed to help the brain perform particular tasks such as baking a new cookie recipe or remembering a phone number.

“The consequence of a laminar separation of these frequencies, as we observed, may be to allow superficial layers to represent external sensory information with faster frequencies, and for deep layers to represent internal cognitive states with slower frequencies,” Bastos says. “The high-level implication is that the cortex has multiple mechanisms involving both anatomy and oscillations to separate ‘external’ from ‘internal’ information.”

Under this theory, imbalances between high- and low-frequency oscillations can lead either to attention deficits such as ADHD, when the higher frequencies dominate and too much sensory information gets in, or to delusional disorders such as schizophrenia, when the low-frequency oscillations are too strong and not enough sensory information gets in.

“The proper balance between the top-down control signals and the bottom-up sensory signals is important for everything the cortex does,” Miller says. “When the balance goes awry, you get a wide variety of neuropsychiatric disorders.”

The researchers are now exploring whether measuring these oscillations could help to diagnose these types of disorders. They are also investigating whether rebalancing the oscillations could alter behavior — an approach that could one day be used to treat attention deficits or other neurological disorders, the researchers say.

The researchers also hope to work with other labs to characterize the layered oscillation patterns in more detail across different brain regions.

“Our hope is that with enough of that standardized reporting, we will start to see common patterns of activity across different areas or functions that might reveal a common mechanism for computation that can be used for motor outputs, for vision, for memory and attention, et cetera,” Mendoza-Halliday says.

The research was funded by the U.S. Office of Naval Research, the U.S. National Institutes of Health, the U.S. National Eye Institute, the U.S. National Institute of Mental Health, the Picower Institute, a Simons Center for the Social Brain Postdoctoral Fellowship, and a Canadian Institutes of Health Postdoctoral Fellowship.

Researchers uncover new CRISPR-like system in animals that can edit the human genome

A team of researchers led by Feng Zhang at the McGovern Institute and the Broad Institute of MIT and Harvard has uncovered the first programmable RNA-guided system in eukaryotes — organisms that include fungi, plants, and animals.

In a study in Nature, the team describes how the system is based on a protein called Fanzor. They showed that Fanzor proteins use RNA as a guide to target DNA precisely, and that Fanzors can be reprogrammed to edit the genome of human cells. The compact Fanzor systems have the potential to be more easily delivered to cells and tissues as therapeutics than CRISPR/Cas systems, and further refinements to improve their targeting efficiency could make them a valuable new technology for human genome editing.

CRISPR/Cas was first discovered in prokaryotes (bacteria and other single-celled organisms that lack nuclei), and scientists, including those in Zhang’s lab, have long wondered whether similar systems exist in eukaryotes. The new study demonstrates that RNA-guided DNA-cutting mechanisms are present across all kingdoms of life.

“CRISPR-based systems are widely used and powerful because they can be easily reprogrammed to target different sites in the genome,” said Zhang, senior author on the study and a core institute member at the Broad, an investigator at MIT’s McGovern Institute, the James and Patricia Poitras Professor of Neuroscience at MIT, and a Howard Hughes Medical Institute investigator. “This new system is another way to make precise changes in human cells, complementing the genome editing tools we already have.”

Searching the domains of life

A major aim of the Zhang lab is to develop genetic medicines using systems that can modulate human cells by targeting specific genes and processes. “A number of years ago, we started to ask, ‘What is there beyond CRISPR, and are there other RNA-programmable systems out there in nature?’” said Zhang.

McGovern Investigator Feng Zhang in his lab.

Two years ago, Zhang lab members discovered a class of RNA-programmable systems in prokaryotes called OMEGAs, which are often linked with transposable elements, or “jumping genes”, in bacterial genomes and likely gave rise to CRISPR/Cas systems. That work also highlighted similarities between prokaryotic OMEGA systems and Fanzor proteins in eukaryotes, suggesting that the Fanzor enzymes might also use an RNA-guided mechanism to target and cut DNA.

In the new study, the researchers continued their study of RNA-guided systems by isolating Fanzors from fungi, algae, and amoeba species, in addition to a clam known as the Northern Quahog. Co-first author Makoto Saito of the Zhang lab led the biochemical characterization of the Fanzor proteins, showing that they are DNA-cutting endonuclease enzymes that use nearby non-coding RNAs known as ωRNAs to target particular sites in the genome. This is the first time such a mechanism has been found in eukaryotes.

Unlike CRISPR proteins, Fanzor enzymes are encoded in the eukaryotic genome within transposable elements, and the team’s phylogenetic analysis suggests that Fanzor genes migrated from bacteria to eukaryotes through so-called horizontal gene transfer.

“These OMEGA systems are more ancestral to CRISPR and they are among the most abundant proteins on the planet, so it makes sense that they have been able to hop back and forth between prokaryotes and eukaryotes,” said Saito.

To explore Fanzor’s potential as a genome-editing tool, the researchers demonstrated that it can generate insertions and deletions at targeted genome sites within human cells. The Fanzor system was initially less efficient at snipping DNA than CRISPR/Cas systems, but through systematic engineering the team introduced a combination of mutations into the protein that increased its activity 10-fold. Additionally, unlike some CRISPR systems and the OMEGA protein TnpB, a fungal-derived Fanzor protein did not exhibit “collateral activity,” in which an RNA-guided enzyme cleaves its DNA target and also degrades nearby DNA or RNA. The results suggest that Fanzors could potentially be developed into efficient genome editors.

Co-first author Peiyu Xu led an effort to analyze the molecular structure of the Fanzor/ωRNA complex and illustrate how it latches onto DNA to cut it. Fanzor shares structural similarities with its prokaryotic counterpart CRISPR-Cas12 protein, but the interaction between the ωRNA and the catalytic domains of Fanzor is more extensive, suggesting that the ωRNA might play a role in the catalytic reactions. “We are excited about these structural insights for helping us further engineer and optimize Fanzor for improved efficiency and precision as a genome editor,” said Xu.

Like CRISPR-based systems, the Fanzor system can be easily reprogrammed to target specific genome sites, and Zhang said it could one day be developed into a powerful new genome editing technology for research and therapeutic applications. The abundance of RNA-guided endonucleases like Fanzors further expands the number of OMEGA systems known across kingdoms of life and suggests that there are more yet to be found.

“Nature is amazing. There’s so much diversity,” said Zhang. “There are probably more RNA-programmable systems out there, and we’re continuing to explore and will hopefully discover more.”

The paper’s other authors include Guilhem Faure, Samantha Maguire, Soumya Kannan, Han Altae-Tran, Sam Vo, AnAn Desimone, and Rhiannon Macrae.

Support for this work was provided by the Howard Hughes Medical Institute; Poitras Center for Psychiatric Disorders Research at MIT; K. Lisa Yang and Hock E. Tan Molecular Therapeutics Center at MIT; Broad Institute Programmable Therapeutics Gift Donors; The Pershing Square Foundation, William Ackman, and Neri Oxman; James and Patricia Poitras; BT Charitable Foundation; Asness Family Foundation; Kenneth C. Griffin; the Phillips family; David Cheng; Robert Metcalfe; and Hugo Shong.

Unraveling connections between the brain and gut

The brain and the digestive tract are in constant communication, relaying signals that help to control feeding and other behaviors. This extensive communication network also influences our mental state and has been implicated in many neurological disorders.

MIT engineers have designed a new technology for probing those connections. Using fibers embedded with a variety of sensors, as well as light sources for optogenetic stimulation, the researchers have shown that they can control neural circuits connecting the gut and the brain in mice.

In a new study, the researchers demonstrated that they could induce feelings of fullness or reward-seeking behavior in mice by manipulating cells of the intestine. In future work, they hope to explore some of the correlations that have been observed between digestive health and neurological conditions such as autism and Parkinson’s disease.

“The exciting thing here is that we now have technology that can drive gut function and behaviors such as feeding. More importantly, we have the ability to start accessing the crosstalk between the gut and the brain with the millisecond precision of optogenetics, and we can do it in behaving animals,” says Polina Anikeeva, the Matoula S. Salapatas Professor in Materials Science and Engineering, a professor of brain and cognitive sciences, director of the K. Lisa Yang Brain-Body Center, associate director of MIT’s Research Laboratory of Electronics, and a member of MIT’s McGovern Institute for Brain Research.

McGovern Institute Associate Investigator Polina Anikeeva in her lab. Photo: Steph Stevens

Anikeeva is the senior author of the new study, which appears today in Nature Biotechnology. The paper’s lead authors are MIT graduate student Atharva Sahasrabudhe, Duke University postdoc Laura Rupprecht, MIT postdoc Sirma Orguc, and former MIT postdoc Tural Khudiyev.

The brain-body connection

Last year, the McGovern Institute launched the K. Lisa Yang Brain-Body Center to study the interplay between the brain and other organs of the body. Research at the center focuses on illuminating how these interactions help to shape behavior and overall health, with a goal of developing future therapies for a variety of diseases.

“There’s continuous, bidirectional crosstalk between the body and the brain,” Anikeeva says. “For a long time, we thought the brain is a tyrant that sends output into the organs and controls everything. But now we know there’s a lot of feedback back into the brain, and this feedback potentially controls some of the functions that we have previously attributed exclusively to the central neural control.”

As part of the center’s work, Anikeeva set out to probe the signals that pass between the brain and the nervous system of the gut, also called the enteric nervous system. Sensory cells in the gut influence hunger and satiety via both neuronal communication and hormone release.

Untangling those hormonal and neural effects has been difficult because there hasn’t been a good way to rapidly measure the neuronal signals, which occur within milliseconds.

“To be able to perform gut optogenetics and then measure the effects on brain function and behavior, which requires millisecond precision, we needed a device that didn’t exist. So, we decided to make it,” says Sahasrabudhe, who led the development of the gut and brain probes.

The electronic interface that the researchers designed consists of flexible fibers that can carry out a variety of functions and can be inserted into the organs of interest. To create the fibers, Sahasrabudhe used a technique called thermal drawing, which allowed him to create polymer filaments, about as thin as a human hair, that can be embedded with electrodes and temperature sensors.

The filaments also carry microscale light-emitting devices that can be used to optogenetically stimulate cells, and microfluidic channels that can be used to deliver drugs.

The mechanical properties of the fibers can be tailored for use in different parts of the body. For the brain, the researchers created stiffer fibers that could be threaded deep into the brain. For digestive organs such as the intestine, they designed more delicate rubbery fibers that do not damage the lining of the organs but are still sturdy enough to withstand the harsh environment of the digestive tract.

“To study the interaction between the brain and the body, it is necessary to develop technologies that can interface with organs of interest as well as the brain at the same time, while recording physiological signals with high signal-to-noise ratio,” Sahasrabudhe says. “We also need to be able to selectively stimulate different cell types in both organs in mice so that we can test their behaviors and perform causal analyses of these circuits.”

The fibers are also designed so that they can be controlled wirelessly, using an external control circuit that can be temporarily affixed to the animal during an experiment. This wireless control circuit was developed by Orguc, a Schmidt Science Fellow, and Harrison Allen ’20, MEng ’22, who were co-advised between the Anikeeva lab and the lab of Anantha Chandrakasan, dean of MIT’s School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science.

Driving behavior

Using this interface, the researchers performed a series of experiments to show that they could influence behavior through manipulation of the gut as well as the brain.

First, they used the fibers to deliver optogenetic stimulation to a part of the brain called the ventral tegmental area (VTA), which releases dopamine. They placed mice in a cage with three chambers, and when the mice entered one particular chamber, the researchers activated the dopamine neurons. The resulting dopamine burst made the mice more likely to return to that chamber in search of the dopamine reward.

Then, the researchers tried to see if they could also induce that reward-seeking behavior by influencing the gut. To do that, they used fibers in the gut to release sucrose, which also activated dopamine release in the brain and prompted the animals to seek out the chamber they were in when sucrose was delivered.

Next, working with colleagues from Duke University, the researchers found they could induce the same reward-seeking behavior by skipping the sucrose and optogenetically stimulating nerve endings in the gut that provide input to the vagus nerve, which controls digestion and other bodily functions.

Duke University postdoc Laura Rupprecht, MIT graduate student Atharva Sahasrabudhe, and MIT postdoc Sirma Orguc holding their engineered flexible fiber in Polina Anikeeva’s lab at MIT. Photo: Courtesy of the researchers

“Again, we got this place preference behavior that people have previously seen with stimulation in the brain, but now we are not touching the brain. We are just stimulating the gut, and we are observing control of central function from the periphery,” Anikeeva says.

Sahasrabudhe worked closely with Rupprecht, a postdoc in Professor Diego Bohorquez’s group at Duke, to test the fibers’ ability to control feeding behaviors. They found that the devices could optogenetically stimulate cells that produce cholecystokinin, a hormone that promotes satiety. When this hormone release was activated, the animals’ appetites were suppressed, even though they had been fasting for several hours. The researchers also demonstrated a similar effect when they stimulated cells that produce a peptide called PYY, which normally curbs appetite after very rich foods are consumed.

The researchers now plan to use this interface to study neurological conditions that are believed to have a gut-brain connection. For instance, studies have shown that autistic children are far more likely than their peers to be diagnosed with GI dysfunction, while anxiety and irritable bowel syndrome share genetic risks.

“We can now begin asking, are those coincidences, or is there a connection between the gut and the brain? And maybe there is an opportunity for us to tap into those gut-brain circuits to begin managing some of those conditions by manipulating the peripheral circuits in a way that does not directly ‘touch’ the brain and is less invasive,” Anikeeva says.

The research was funded, in part, by the Hock E. Tan and K. Lisa Yang Center for Autism Research and the K. Lisa Yang Brain-Body Center, the National Institute of Neurological Disorders and Stroke, the National Science Foundation (NSF) Center for Materials Science and Engineering, the NSF Center for Neurotechnology, the National Center for Complementary and Integrative Health, a National Institutes of Health Director’s Pioneer Award, the National Institute of Mental Health, and the National Institute of Diabetes and Digestive and Kidney Diseases.

Computational model mimics humans’ ability to predict emotions

When interacting with another person, you likely spend part of your time trying to anticipate how they will feel about what you’re saying or doing. This task requires a cognitive skill called theory of mind, which helps us to infer other people’s beliefs, desires, intentions, and emotions.

MIT neuroscientists have now designed a computational model that can predict other people’s emotions — including joy, gratitude, confusion, regret, and embarrassment — approximating human observers’ social intelligence. The model was designed to predict the emotions of people involved in a situation based on the prisoner’s dilemma, a classic game theory scenario in which two people must decide whether to cooperate with their partner or betray them.

To build the model, the researchers incorporated several factors that have been hypothesized to influence people’s emotional reactions, including a person’s desires, their expectations in a particular situation, and whether anyone was watching their actions.

“These are very common, basic intuitions, and what we said is, we can take that very basic grammar and make a model that will learn to predict emotions from those features,” says Rebecca Saxe, the John W. Jarve Professor of Brain and Cognitive Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Sean Dae Houlihan PhD ’22, a postdoc at the Neukom Institute for Computational Science at Dartmouth College, is the lead author of the paper, which appears today in Philosophical Transactions A. Other authors include Max Kleiman-Weiner PhD ’18, a postdoc at MIT and Harvard University; Luke Hewitt PhD ’22, a visiting scholar at Stanford University; and Joshua Tenenbaum, a professor of computational cognitive science at MIT and a member of the Center for Brains, Minds, and Machines and MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).

Predicting emotions

While a great deal of research has gone into training computer models to infer someone’s emotional state based on their facial expression, that is not the most important aspect of human emotional intelligence, Saxe says. Much more important is the ability to predict someone’s emotional response to events before they occur.

“The most important thing about what it is to understand other people’s emotions is to anticipate what other people will feel before the thing has happened,” she says. “If all of our emotional intelligence was reactive, that would be a catastrophe.”

To try to model how human observers make these predictions, the researchers used scenarios taken from a British game show called “Golden Balls.” On the show, contestants are paired up with a pot of $100,000 at stake. After negotiating with their partner, each contestant decides, secretly, whether to split the pot or try to steal it. If both decide to split, they each receive $50,000. If one splits and one steals, the stealer gets the entire pot. If both try to steal, no one gets anything.
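The split-or-steal rules above can be summarized in a short sketch. This is purely an illustration of the payoff structure as described in the article, not code from the study:

```python
# Payoff rules of the "Golden Balls" split-or-steal endgame, as
# described above. The pot value comes from the article.
POT = 100_000

def payoff(choice_a: str, choice_b: str) -> tuple[int, int]:
    """Return (player A's winnings, player B's winnings)."""
    if choice_a == "split" and choice_b == "split":
        return POT // 2, POT // 2   # both split: $50,000 each
    if choice_a == "steal" and choice_b == "split":
        return POT, 0               # the stealer takes the entire pot
    if choice_a == "split" and choice_b == "steal":
        return 0, POT
    return 0, 0                     # both steal: no one gets anything
```

The structure is a variant of the prisoner’s dilemma mentioned earlier: stealing weakly dominates splitting, yet mutual stealing leaves both players with nothing.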

Depending on the outcome, contestants may experience a range of emotions — joy and relief if both contestants split, surprise and fury if one’s opponent steals the pot, and perhaps guilt mingled with excitement if one successfully steals.

To create a computational model that can predict these emotions, the researchers designed three separate modules. The first module is trained to infer a person’s preferences and beliefs based on their action, through a process called inverse planning.

“This is an idea that says if you see just a little bit of somebody’s behavior, you can probabilistically infer things about what they wanted and expected in that situation,” Saxe says.

Using this approach, the first module can predict contestants’ motivations based on their actions in the game. For example, if someone decides to split in an attempt to share the pot, it can be inferred that they also expected the other person to split. If someone decides to steal, they may have expected the other person to steal, and didn’t want to be cheated. Or, they may have expected the other person to split and decided to try to take advantage of them.

The model can also integrate knowledge about specific players, such as the contestant’s occupation, to help it infer the players’ most likely motivation.

The second module compares the outcome of the game with what each player wanted and expected to happen. Then, a third module predicts what emotions the contestants may be feeling, based on the outcome and what was known about their expectations. This third module was trained to predict emotions based on predictions from human observers about how contestants would feel after a particular outcome. The authors emphasize that this is a model of human social intelligence, designed to mimic how observers causally reason about each other’s emotions, not a model of how people actually feel.
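The three-module architecture described above can be caricatured in a few lines of code. The function names and the simple hand-written rules inside them are illustrative assumptions for exposition only; the actual modules are learned models, not lookup rules:

```python
# A hypothetical, hand-written sketch of the three-module pipeline:
# inverse planning -> appraisal -> emotion prediction. All rules here
# are illustrative assumptions, not the authors' implementation.

def infer_mental_state(action: str) -> dict:
    """Module 1 (inverse planning): from an action alone, infer what
    the player likely wanted and expected."""
    if action == "split":
        return {"wants_fair": True, "expected": "split"}
    return {"wants_fair": False, "expected": "steal"}

def appraise(my_action: str, partner_action: str, state: dict) -> dict:
    """Module 2: compare the outcome with the inferred desires and
    expectations."""
    won = partner_action == "split" and (
        my_action in ("split", "steal"))
    surprised = partner_action != state["expected"]
    exploited = my_action == "split" and partner_action == "steal"
    return {"won": won, "surprised": surprised, "exploited": exploited}

def predict_emotion(appraisal: dict) -> str:
    """Module 3: map the appraisal onto a predicted emotion, standing
    in for the module trained on human observers' judgments."""
    if appraisal["exploited"]:
        return "fury"
    if appraisal["won"] and not appraisal["surprised"]:
        return "joy"
    if appraisal["won"]:
        return "guilty excitement"
    return "regret"
```

For example, a contestant who split while their partner stole is appraised as exploited, so the sketch predicts fury, matching the range of outcomes the article describes.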

“From the data, the model learns that what it means, for example, to feel a lot of joy in this situation, is to get what you wanted, to do it by being fair, and to do it without taking advantage,” Saxe says.

Core intuitions

Once the three modules were up and running, the researchers used them on a new dataset from the game show to determine how the models’ emotion predictions compared with the predictions made by human observers. This model performed much better at that task than any previous model of emotion prediction.

The model’s success stems from its incorporation of key factors that the human brain also uses when predicting how someone else will react to a given situation, Saxe says. Those include computations of how a person will evaluate and emotionally react to a situation, based on their desires and expectations, which relate to not only material gain but also how they are viewed by others.

“Our model has those core intuitions, that the mental states underlying emotion are about what you wanted, what you expected, what happened, and who saw. And what people want is not just stuff. They don’t just want money; they want to be fair, but also not to be the sucker, not to be cheated,” she says.

“The researchers have helped build a deeper understanding of how emotions contribute to determining our actions; and then, by flipping their model around, they explain how we can use people’s actions to infer their underlying emotions. This line of work helps us see emotions not just as ‘feelings’ but as playing a crucial, and subtle, role in human social behavior,” says Nick Chater, a professor of behavioral science at the University of Warwick, who was not involved in the study.

In future work, the researchers hope to adapt the model so that it can perform more general predictions based on situations other than the game-show scenario used in this study. They are also working on creating models that can predict what happened in the game based solely on the expressions on the contestants’ faces after the results were announced.

The research was funded by the McGovern Institute; the Paul E. and Lilah Newton Brain Science Award; the Center for Brains, Minds, and Machines; the MIT-IBM Watson AI Lab; and the Multidisciplinary University Research Initiative.

Bionics researchers develop technologies to ease pain and transcend human limitations

This story originally appeared in the Spring 2023 issue of Spectrum.

___

In early December 2022, a middle-aged woman from California arrived at Boston’s Brigham and Women’s Hospital for the amputation of her right leg below the knee following an accident. This was no ordinary procedure. At the end of her remaining leg, surgeons attached a titanium fixture through which they threaded eight thin, electrically conductive wires. These flexible leads, implanted on her leg muscles, would, in the coming months, connect to a robotic, battery-powered prosthetic ankle and foot.

The goal of this unprecedented surgery, driven by MIT researchers from the K. Lisa Yang Center for Bionics at MIT, was the restoration of near-natural function to the patient, enabling her to sense and control the position and motion of her ankle and foot—even with her eyes closed.

In the K. Lisa Yang Center for Bionics, codirector Hugh Herr SM ’93 and graduate student Christopher Shallal are working to return mobility to people disabled by disease or physical trauma. Photo: Tony Luong

“The brain knows exactly how to control the limb, and it doesn’t matter whether it is flesh and bone or made of titanium, silicon, and carbon composite,” says Hugh Herr SM ’93, professor of media arts and sciences, head of the MIT Media Lab’s Biomechatronics Group, codirector of the Yang Center, and an associate member of MIT’s McGovern Institute for Brain Research.

For Herr, in attendance during that long day, the surgery represented a critical milestone in a decades-long mission to develop technologies returning mobility to people disabled by disease or physical trauma. His research combines a dizzying range of disciplines—electrical, mechanical, tissue, and biomedical engineering, as well as neuroscience and robotics—and has yielded pathbreaking results. Herr’s more than 100 patents include a computer-controlled knee and a powered ankle-foot prosthesis, and his inventions have enabled thousands of people around the world, himself among them, to live more on their own terms.

Surmounting catastrophe

For much of Herr’s life, “go” meant “up.”

“Starting when I was eight, I developed an extraordinary passion, an absolute obsession, for climbing; it’s all I thought about in life,” says Herr. He aspired “to be the best climber in the world,” a goal he nearly achieved in his teenage years, enthralled by the “purity” of ascending mountains ropeless and solo in record times, by “a vertical dance, a balance between physicality and mind control.”

McGovern Institute Associate Investigator Hugh Herr. Photo: Jimmy Day / MIT Media Lab

At 17, Herr became disoriented while climbing New Hampshire’s Mt. Washington during a blizzard. Days in the cold permanently damaged his legs, which had to be amputated below his knees. His rescue cost another man’s life, and Herr was despondent, disappointed in himself, and fearful for his future.

Then, following months of rehabilitation, he felt compelled to test himself. His first weekend home, when he couldn’t walk without canes and crutches, he headed back to the mountains. “I hobbled to the base of this vertical cliff and started ascending,” he recalls. “It brought me joy to realize that I was still me, the same person.”

But he also recognized that as a person with amputated limbs, he faced severe disadvantages. “Society doesn’t look kindly on people with unusual bodies; we are viewed as crippled and weak, and that did not sit well with me.” Unable to tolerate both the new physical and social constraints on his life, Herr determined to view his disability not as a loss but as an opportunity. “I think the rage was the catapult that led me to do something that was without precedent,” he says.

Lifelike limb

On hand in the surgical theater in December was a member of Herr’s Biomechatronics Group for whom the bionic limb procedure also held special resonance. Christopher Shallal, a second-year graduate student in the Harvard-MIT Health Sciences and Technology program who received bilateral lower limb amputations at birth, worked alongside surgeon Matthew Carty testing the electric leads before implantation in the patient. Shallal found this, his first direct involvement with a reconstruction surgery, deeply fulfilling.

“Ever since I was a kid, I’ve wanted to do medicine plus engineering,” says Shallal. “I’m really excited to work on this bionic limb reconstruction, which will probably be one of the most advanced systems yet in terms of neural interfacing and control, with a far greater range of motion possible.”

Herr and Shallal are working on a next-generation, biomimetic limb with implanted sensors that can relay signals between the external prosthesis and muscles in the remaining limb. Photo: Tony Luong

Like other Herr lab designs, the new prosthesis features onboard, battery-powered propulsion, microprocessors, and tunable actuators. But this next-generation, biomimetic limb represents a major leap forward, replacing electrodes sited on a patient’s skin, subject to sweat and other environmental threats, with implanted sensors that can relay signals between the external prosthesis and muscles in the remaining limb.

This system takes advantage of a breakthrough technique, invented several years ago by the Herr lab, called the cutaneous mechanoneural interface (CMI), which constructs muscle-skin-nerve bundles at the amputation site. When a person with an amputation touches an object with their prosthesis, muscle actuators controlled by computers on board the external prosthesis apply forces to skin cells implanted within the residual limb.

With CMI and electric leads connecting the prosthesis to these muscle actuators within the residual limb, the researchers hypothesize that a person with an amputation will be able to “feel” their prosthetic leg step onto the ground. This sensory capability is the holy grail for persons with major limb loss. After recovery from her surgery, the woman from California will be wearing Herr’s latest state-of-the-art prosthetic system in the lab.

‘Tinkering’ with the body

Not all artificial limbs emulate those that humans are born with. “You can make them however you want, swapping them in and out depending on what you want to do, and they can take you anywhere,” Herr says. Committed to extreme climbing even after his accident, Herr came up with special limbs that became a commercial hit early in his career. His designs made it possible for someone with amputated legs to run and dance.

But he also knew the day-to-day discomfort of navigating on flatter earth with most prostheses. He won his first patent during his senior year of college for a fluid-controlled socket attachment designed to reduce the pain of walking. Growing up in a Mennonite family skilled in handcrafting things they needed, and in a larger community that was disdainful of technology, Herr says he had “difficulty trusting machines.” Yet by the time he began his master’s program at MIT, intent on liberating persons with limb amputation to live more fully in the world, he had embraced the tools of science and engineering as the means to this end.

“I want to be in the business of designing not more and more powerful tools but designing new bodies,” says Hugh Herr.

For Shallal, Herr was an early icon, and his inventions and climbing exploits served as inspiration. “I’d known about Hugh since middle school; he was famous among those with amputations,” he says. “As a kid, I liked tinkering with things, and I kind of saw my body as a canvas, a place where I could explore different boundaries and expand possibilities for myself and others with amputations.” In school, Shallal sometimes encountered resistance to his prostheses. “People would say I couldn’t do certain things, like running and playing different sports, and I found these barriers frustrating,” he says. “I did things in my own way and didn’t want people to pity me.”

In fact, Shallal felt he could do some things better than his peers. In high school, he used a 3-D printer to make a mobile phone charger case he could plug into his prosthesis. “As a kid, I would wear long pants to hide my legs, but as the technology got cooler, I started wearing shorts,” he says. “I got comfortable and liked kind of showing off my legs.”

Global impact

December’s surgery was the first phase in the bionic limb project. Shallal will be following up with the patient over many months, ensuring that the connections between her limb and implanted sensors function and provide appropriate sensorimotor data for the built-in processor. Research on this and other patients to determine the impact of these limbs on gait and ease of managing slopes, for instance, will form the basis for Shallal’s dissertation.

“After graduation, I’d be really interested in translating technology out of the lab, maybe doing a startup related to neural interfacing technology,” he says. “I watched Inspector Gadget on television when I was a kid. Making the tool you need at the time you need it to fix problems would be my dream.”

Herr will be overseeing Shallal’s work, as well as a suite of research efforts propelled by other graduate students, postdocs, and research scientists that together promise to strengthen the technology behind this generation of biomimetic prostheses.

One example: devising an innovative method for measuring muscle length and velocity with tiny implanted magnets. In work published in November 2022, researchers including Herr; project lead Cameron Taylor SM ’16, PhD ’20, a research associate in the Biomechatronics Group; and Brown University partners demonstrated that this new tool, magnetomicrometry, yields the kind of high-resolution data necessary for even more precise bionic limb control. The Herr lab awaits FDA approval for human implantation of the magnetic beads.

These intertwined initiatives are central to the ambitious mission of the K. Lisa Yang Center for Bionics, established with a $24 million gift from Yang in 2021 to tackle transformative bionic interventions to address an extensive range of human limitations.

Herr is committed to making the broadest possible impact with his technologies. “Shoes and braces hurt, so my group is developing the science of comfort—designing mechanical parts that attach to the body and transfer loads without causing pain.” These inventions may prove useful not just to people living with amputation but to patients suffering from arthritis or other diseases affecting muscles, joints, and bones, whether in lower limbs or arms and hands.

The Yang Center aims to make prosthetic and orthotic devices more accessible globally, so Herr’s group is ramping up services in Sierra Leone, where civil war left tens of thousands missing limbs after devastating machete attacks. “We’re educating clinicians, helping with supply chain infrastructure, introducing novel assistive technology, and developing mobile delivery platforms,” he says.

In the end, says Herr, “I want to be in the business of designing not more and more powerful tools but designing new bodies.” Herr uses himself as an example: “I walk on two very powerful robots, but they’re not linked to my skeleton, or to my brain, so when I walk it feels like I’m on powerful machines that are not me. What I want is such a marriage between human physiology and electromechanics that a person feels at one with the synthetic, designed content of their body.”

Mehrdad Jazayeri wants to know how our brains model the external world

Much of our daily life requires us to make inferences about the world around us. As you think about which direction your tennis opponent will hit the ball, or try to figure out why your child is crying, your brain is searching for answers about possibilities that are not directly accessible through sensory experiences.

MIT Associate Professor Mehrdad Jazayeri has devoted most of his career to exploring how the brain creates internal representations, or models, of the external world to make intelligent inferences about hidden states of the world.

“The one question I am most interested in is how does the brain form internal models of the external world? Studying inference is really a powerful way of gaining insight into these internal models,” says Jazayeri, who recently earned tenure in the Department of Brain and Cognitive Sciences and is also a member of MIT’s McGovern Institute for Brain Research.

Using a variety of approaches, including detailed analysis of behavior, direct recording of activity of neurons in the brain, and mathematical modeling, he has discovered how the brain builds models of statistical regularities in the environment. He has also found circuits and mechanisms that enable the brain to capture the causal relationships between observations and outcomes.

An unusual path

Jazayeri, who has been on the faculty at MIT since 2013, took an unusual path to a career in neuroscience. Growing up in Tehran, Iran, he was an indifferent student until his second year of high school, when he got interested in solving challenging geometry puzzles. He also started programming with the ZX Spectrum, an early 8-bit personal computer his father had given him.

During high school, he was chosen to train for Iran’s first ever National Physics Olympiad team, but when he failed to make it to the international team, he became discouraged and temporarily gave up on the idea of going to college. Eventually, he participated in the University National Entrance Exam and was admitted to the electrical engineering department at Sharif University of Technology.

Jazayeri didn’t enjoy his four years of college education. The experience mostly helped him realize that he was not meant to become an engineer. “I realized that I’m not an inventor. What inspires me is the process of discovery,” he says. “I really like to figure things out, not build things, so those four years were not very inspiring.”

After graduating from college, Jazayeri spent a few years working on a banana farm near the Caspian Sea, along with two friends. He describes those years as among the best and most formative of his life. He would wake by 4 a.m., work on the farm until late afternoon, and spend the rest of the day thinking and reading. One topic he read about with great interest was neuroscience, which led him a few years later to apply to graduate school.

He immigrated to Canada and was admitted to the University of Toronto, where he earned a master’s degree in physiology and neuroscience. While there, he worked on building small circuit models that would mimic the activity of neurons in the hippocampus.

From there, Jazayeri went on to New York University to earn a PhD in neuroscience, where he studied how signals in the visual cortex support perception and decision-making. “I was less interested in how the visual cortex encodes the external world,” he says. “I wanted to understand how the rest of the brain decodes the signals in visual cortex, which is, in effect, an inference problem.”

He continued pursuing his interest in the neurobiology of inference as a postdoc at the University of Washington, where he investigated how the brain uses temporal regularities in the environment to estimate time intervals, and uses knowledge about those intervals to plan for future actions.

Building internal models to make inferences

Inference is the process of drawing conclusions based on information that is not readily available. Making rich inferences from scarce data is one of humans’ core mental capacities, one that is central to what makes us the most intelligent species on Earth. To do so, our nervous system builds internal models of the external world, and those models help us think through possibilities without directly experiencing them.

The problem of inference presents itself in many behavioral settings.

“Our nervous system makes all sorts of internal models for different behavioral goals, some that capture the statistical regularities in the environment, some that link potential causes to effects, some that reflect relationships between entities, and some that enable us to think about others,” Jazayeri says.

Jazayeri’s lab at MIT is made up of a group of cognitive scientists, electrophysiologists, engineers, and physicists with a shared interest in understanding the nature of internal models in the brain and how those models enable us to make inferences in different behavioral tasks.

Early work in the lab focused on a simple timing task to examine the problem of statistical inference, that is, how we use statistical regularities in the environment to make accurate inferences. First, the researchers found that the brain coordinates movements in time using a dynamic process, akin to an analog timer. They also found that the neural representation of time in the frontal cortex is continuously calibrated based on prior experience, allowing us to make more accurate time estimates in the presence of uncertainty.
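The calibration effect described above, with time estimates pulled toward previously experienced intervals, is the signature of Bayesian estimation. A minimal numerical sketch of the idea follows; the interval range, noise level, and uniform prior are illustrative assumptions, not the lab’s actual model:

```python
import math

# Illustrative sketch of Bayesian interval estimation: a uniform prior
# over recently experienced intervals (600-1000 ms, an assumed range)
# is combined with a noisy sensory measurement. The posterior-mean
# estimate is pulled toward the middle of the prior range, mirroring
# the calibration-by-prior-experience described above.
PRIOR_SUPPORT = [600 + i for i in range(401)]  # hypothetical range, ms
SIGMA = 80.0                                   # assumed sensory noise, ms

def estimate(measured_ms: float) -> float:
    """Bayes least-squares estimate: the posterior mean over the prior."""
    weights = [math.exp(-0.5 * ((measured_ms - t) / SIGMA) ** 2)
               for t in PRIOR_SUPPORT]
    total = sum(weights)
    return sum(w * t for w, t in zip(weights, PRIOR_SUPPORT)) / total

# Short intervals are overestimated and long ones underestimated:
# estimates regress toward the mean of prior experience.
```

Under this scheme, a measurement near the short end of the prior range yields an estimate longer than the measurement, and vice versa at the long end, which is the regression-to-the-prior pattern seen behaviorally.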

Later, the lab developed a complex decision-making task to examine the neural basis of causal inference, or the process of deducing a hidden cause based on its effects. In a paper that appeared in 2019, Jazayeri and his colleagues identified a hierarchical and distributed brain circuit in the frontal cortex that helps the brain to determine the most probable cause of failure within a hierarchy of decisions.

More recently, the lab has extended its investigation to other behavioral domains, including relational inference and social inference. Relational inference is about situating an ambiguous observation using relational memory. For example, coming out of a subway in a new neighborhood, we may use our knowledge of the relationship between visible landmarks to infer which way is north. Social inference, which is extremely difficult to study, involves deducing other people’s beliefs and goals based on their actions.

Along with studies in human volunteers and animal models, Jazayeri’s lab develops computational models based on neural networks, which help them test different possible hypotheses of how the brain performs specific tasks. By comparing the activity of those models with neural activity data from animals, the researchers can gain insight into how the brain actually performs a particular type of inference task.

“My main interest is in how the brain makes inferences about the world based on the neural signals,” Jazayeri says. “All of my work is about looking inside the brain, measuring signals, and using mathematical tools to try to understand how those signals are manifestations of an internal model within the brain.”

A hunger for social contact

Since the coronavirus pandemic began in the spring, many people have only seen their close friends and loved ones during video calls, if at all. A new study from MIT finds that the longings we feel during this kind of social isolation share a neural basis with the food cravings we feel when hungry.

The researchers found that after one day of total isolation, the sight of people having fun together activates the same brain region that lights up when someone who hasn’t eaten all day sees a picture of a plate of cheesy pasta.

“People who are forced to be isolated crave social interactions similarly to the way a hungry person craves food.”

“Our finding fits the intuitive idea that positive social interactions are a basic human need, and acute loneliness is an aversive state that motivates people to repair what is lacking, similar to hunger,” says Rebecca Saxe, the John W. Jarve Professor of Brain and Cognitive Sciences at MIT, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

The research team collected the data for this study in 2018 and 2019, long before the coronavirus pandemic and resulting lockdowns. Their new findings, described today in Nature Neuroscience, are part of a larger research program focusing on how social stress affects people’s behavior and motivation.

Former MIT postdoc Livia Tomova, who is now a research associate at Cambridge University, is the lead author of the paper. Other authors include Kimberly Wang, a McGovern Institute research associate; Todd Thompson, a McGovern Institute scientist; Atsushi Takahashi, assistant director of the Martinos Imaging Center; Gillian Matthews, a research scientist at the Salk Institute for Biological Studies; and Kay Tye, a professor at the Salk Institute.

Social craving

The new study was partly inspired by a recent paper from Tye, a former member of MIT’s Picower Institute for Learning and Memory. In that 2016 study, she and Matthews, then an MIT postdoc, identified a cluster of neurons in the brains of mice that represent feelings of loneliness and generate a drive for social interaction following isolation. Studies in humans have shown that being deprived of social contact can lead to emotional distress, but the neurological basis of these feelings is not well-known.

“We wanted to see if we could experimentally induce a certain kind of social stress, where we would have control over what the social stress was,” Saxe says. “It’s a stronger intervention of social isolation than anyone had tried before.”

To create that isolation environment, the researchers enlisted healthy volunteers, who were mainly college students, and confined them to a windowless room on MIT’s campus for 10 hours. They were not allowed to use their phones, but the room did have a computer that they could use to contact the researchers if necessary.

“There were a whole bunch of interventions we used to make sure that it would really feel strange and different and isolated,” Saxe says. “They had to let us know when they were going to the bathroom so we could make sure it was empty. We delivered food to the door and then texted them when it was there so they could go get it. They really were not allowed to see people.”

After the 10-hour isolation ended, each participant was scanned in an MRI machine. This posed additional challenges, as the researchers wanted to avoid any social contact during the scanning. Before the isolation period began, each subject was trained on how to get into the machine, so that they could do it by themselves, without any help from the researcher.

“Normally, getting somebody into an MRI machine is actually a really social process. We engage in all kinds of social interactions to make sure people understand what we’re asking them, that they feel safe, that they know we’re there,” Saxe says. “In this case, the subjects had to do it all by themselves, while the researcher, who was gowned and masked, just stood silently by and watched.”

Each of the 40 participants also underwent 10 hours of fasting, on a different day. After the 10-hour period of isolation or fasting, the participants were scanned while looking at images of food, images of people interacting, and neutral images such as flowers. The researchers focused on a part of the brain called the substantia nigra, a tiny structure located in the midbrain, which has previously been linked with hunger cravings and drug cravings. The substantia nigra is also believed to share evolutionary origins with a brain region in mice called the dorsal raphe nucleus, which is the area that Tye’s lab showed was active following social isolation in their 2016 study.

The researchers hypothesized that when socially isolated subjects saw photos of people enjoying social interactions, the “craving signal” in their substantia nigra would be similar to the signal produced when they saw pictures of food after fasting. This was indeed the case. Furthermore, the amount of activation in the substantia nigra was correlated with how strongly the participants rated their feelings of craving either food or social interaction.

Degrees of loneliness

The researchers also found that people’s responses to isolation varied depending on their normal levels of loneliness. People who reported feeling chronically isolated months before the study was done showed weaker cravings for social interaction after the 10-hour isolation period than people who reported a richer social life.

“For people who reported that their lives were really full of satisfying social interactions, this intervention had a bigger effect on their brains and on their self-reports,” Saxe says.

The researchers also looked at activation patterns in other parts of the brain, including the striatum and the cortex, and found that hunger and isolation each activated distinct areas of those regions. That suggests that those areas are more specialized to respond to different types of longings, while the substantia nigra produces a more general signal representing a variety of cravings.

Now that the researchers have established that they can observe the effects of social isolation on brain activity, Saxe says, they can try to answer many additional questions: how social isolation affects people's behavior, whether virtual social contacts such as video calls help to alleviate cravings for social interaction, and how isolation affects different age groups.

The researchers also hope to study whether the brain responses that they saw in this study could be used to predict how the same participants responded to being isolated during the lockdowns imposed during the early stages of the coronavirus pandemic.

The research was funded by a SFARI Explorer Grant from the Simons Foundation, a MINT grant from the McGovern Institute, the National Institutes of Health, including an NIH Pioneer Award, a Max Kade Foundation Fellowship, and an Erwin Schroedinger Fellowship from the Austrian Science Fund.

20 Years of Discovery


McGovern Institute Director Robert Desimone.

Pat and Lore McGovern founded the McGovern Institute 20 years ago with a dual mission – to understand the brain, and to apply that knowledge to help the many people affected by brain disorders. Some of the amazing developments of the past 20 years, such as CRISPR, may seem entirely unexpected and “out of the blue.” But they were all built on a foundation of basic research spanning many years. With the incredible foundation we are building right now, I feel we are poised for many more “unexpected” discoveries in the years ahead.

I predict that in 20 years, we will have quantitative models of brain function that will not only explain how the brain gives rise to at least some aspects of our mind, but will also give us a new mechanistic understanding of brain disorders. This, in turn, will lead to new types of therapies, in what I imagine to be a post-pharmaceutical era of the future. I have no doubt that these same brain models will inspire new educational approaches for our children, and will be incorporated into whatever replaces my automobile, and iPhone, in 2040. I encourage you to read some other predictions from our faculty.

Our cutting-edge work depends not only on our stellar lineup of faculty, but also on the more than 400 postdocs, graduate students, undergraduates, summer students, and staff who make up our community.

For this reason, I am particularly delighted to share with you McGovern’s rising stars — 20 young scientists, one from each of our labs — who represent the next generation of neuroscience.

And finally, we remain deeply indebted to our supporters for funding our research, including ongoing support from the Patrick J. McGovern Foundation. In recent years, more than 40% of our annual research funding has come from private individuals and foundations. This support enables critical seed funding for new research projects, the development of new technologies, our new research into autism and psychiatric disorders, and fellowships for young scientists just starting their careers. Our annual fund supporters have made possible more than 42 graduate fellowships, and you can read about some of these fellows on our website.

I hope that as you visit our website and read the pages of our special anniversary issue of Brain Scan, you will feel as optimistic as I do about our future.

Robert Desimone
Director, McGovern Institute
Doris and Don Berkey Professor of Neuroscience

SHERLOCK-based one-step test provides rapid and sensitive COVID-19 detection 

A team of researchers at the McGovern Institute for Brain Research at MIT, the Broad Institute of MIT and Harvard, the Ragon Institute, and the Howard Hughes Medical Institute (HHMI) has developed a new diagnostics platform called STOP (SHERLOCK Testing in One Pot) COVID. The test can be run in an hour as a single-step reaction with minimal handling, advancing the CRISPR-based SHERLOCK diagnostic technology closer to a point-of-care or at-home testing tool. The test has not been reviewed or approved by the FDA and is currently for research purposes only.

The team began developing tests for COVID-19 in January, after learning about the emergence of a new virus that was challenging the healthcare system in China. The first version of the team’s SHERLOCK-based COVID-19 diagnostics system is already being used in hospitals in Thailand to help screen patients for COVID-19 infection.
The new test is named “STOPCovid” and is based on the STOP platform. In research settings, it has been shown to enable rapid, accurate, and highly sensitive detection of the COVID-19 virus SARS-CoV-2 with a simple protocol that requires minimal training and uses simple, readily available equipment, such as test tubes and water baths. STOPCovid has been validated in research settings using nasopharyngeal swabs from patients diagnosed with COVID-19. It has also been tested successfully in saliva samples to which SARS-CoV-2 RNA has been added as a proof-of-principle.

The team is posting the open protocol today on a new website, STOPCovid.science. It is being made openly available in line with the COVID-19 Technology Access Framework organized by Harvard, MIT, and Stanford. The Framework sets a model by which critically important technologies that may help prevent, diagnose, or treat COVID-19 infections may be deployed for the greatest public benefit without delay.

There is an urgent need for widespread, accurate COVID-19 testing to rapidly detect new cases, ideally without the need for specialized lab equipment. Such testing would enable early detection of new infections and drive effective “test-trace-isolate” measures to quickly contain new outbreaks. However, current testing capacity is limited by a combination of requirements for complex procedures and laboratory instrumentation and dependence on limited supplies. STOPCovid can be performed without RNA extraction, and while all patient tests have been performed with samples from nasopharyngeal swabs, preliminary experiments suggest that eventually swabs may not be necessary. Removing these barriers could help enable broad distribution.

“The ability to test for COVID-19 at home, or even in pharmacies or places of employment, could be a game-changer for getting people safely back to work and into their communities,” says Feng Zhang, a co-inventor of the CRISPR genome editing technology, an investigator at the McGovern Institute and HHMI, and a core member at the Broad Institute. “Creating a point-of-care tool is a critically important goal to allow timely decisions for protecting patients and those around them.”

To meet this need, Zhang, McGovern Fellows Omar Abudayyeh and Jonathan Gootenberg, and colleagues initiated a push to develop STOPCovid. They are sharing their findings and packaging reagents so other research teams can rapidly follow up with additional testing or development. The group is also sharing data on the STOPCovid.science website and via a submitted preprint. The website is also a hub where the public can find the latest information on the team’s developments.

McGovern Institute Fellows Jonathan Gootenberg (far left) and Omar Abudayyeh have developed a CRISPR research tool to detect COVID-19 with McGovern Investigator Feng Zhang (far right).
Credit: Justin Knight

How it works

The STOPCovid test combines CRISPR enzymes, programmed to recognize signatures of the SARS-CoV-2 virus, with complementary amplification reagents. This combination allows detection of as few as 100 copies of SARS-CoV-2 virus in a sample. As a result, the STOPCovid test allows for rapid, accurate, and highly sensitive detection of COVID-19 that can be conducted outside clinical laboratory settings.

STOPCovid has been tested on patient nasopharyngeal swabs in parallel with clinically validated tests. In these head-to-head comparisons, STOPCovid detected infection with 97% sensitivity and 100% specificity. Results appear on an easy-to-read strip, akin to a pregnancy test, without any expensive or specialized lab equipment. Moreover, the researchers spiked mock SARS-CoV-2 genomes into healthy saliva samples and showed that STOPCovid is capable of sensitive detection from saliva, which would obviate the need for swabs that are in short supply and potentially make sampling much easier.
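The sensitivity and specificity figures above follow from standard confusion-matrix arithmetic. The sketch below shows that calculation; the counts used are hypothetical, chosen only to reproduce the reported percentages, and are not the study's actual sample sizes.

```python
def sensitivity(true_pos, false_neg):
    """Fraction of infected samples the test correctly flags: TP / (TP + FN)."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Fraction of uninfected samples the test correctly clears: TN / (TN + FP)."""
    return true_neg / (true_neg + false_pos)

# Hypothetical counts: 97 of 100 positive swabs detected, and no false
# positives among 50 negative swabs.
print(f"sensitivity = {sensitivity(97, 3):.0%}")   # 97%
print(f"specificity = {specificity(50, 0):.0%}")   # 100%
```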

“The test aims to ultimately be simple enough that anyone can operate it in low-resource settings, including in clinics, pharmacies, or workplaces, and it could potentially even be put into a turn-key format for use at home,” says Abudayyeh.

Gootenberg adds, “Since STOPCovid can work in less than an hour and does not require any specialized equipment, and if our preliminary results from testing synthetic virus in saliva bear out in patient samples, it could address the need for scalable testing to reopen our society.”

The STOPCovid team during a recent zoom meeting. Image: Omar Abudayyeh

Importantly, the full test — both the viral genome amplification and subsequent detection — can be completed in a single reaction, as outlined on the website, from swabs or saliva. To engineer this, the team tested a number of CRISPR enzymes to find one that works well at the same temperature needed by the enzymes that perform the amplification. Zhang, Abudayyeh, Gootenberg and their teams, including graduate students Julia Joung and Alim Ladha, settled on a protein called AapCas12b, a CRISPR protein from the bacterium Alicyclobacillus acidiphilus, a genus responsible for the “off” taste associated with spoiled orange juice. With AapCas12b, the team was able to develop a test that can be performed at a constant temperature and does not require opening tubes midway through the process, a step that often leads to contamination and unreliable test results.

Information sharing and next steps

The team has prepared reagents for 10,000 tests to share for free with scientists and clinical collaborators around the world who want to evaluate the STOPCovid test for potential diagnostic use, and they have set up a website to share the latest data and updates with the scientific and clinical community. Kits and reagents can also be requested via a form on the website.


Acknowledgments: Patient samples were provided by Keith Jerome, Alex Greninger, Robert Bruneau, Mee-li W. Huang, Nam G. Kim, Xu Yu, Jonathan Li, and Bruce Walker. This work was supported by the Patrick J. McGovern Foundation and the McGovern Institute for Brain Research. F.Z. is also supported by the NIH (1R01-MH110049 and 1DP1-HL141201 grants); the Mathers Foundation; the Howard Hughes Medical Institute; the Open Philanthropy Project; J. and P. Poitras; and R. Metcalfe.

Declaration of conflicts of interest: F.Z., O.O.A., J.S.G., J.J., and A.L. are inventors on patent applications related to this technology filed by the Broad Institute, with the specific aim of ensuring this technology can be made freely, widely, and rapidly available for research and deployment. O.O.A., J.S.G., and F.Z. are co-founders, scientific advisors, and hold equity interests in Sherlock Biosciences, Inc. F.Z. is also a co-founder of Editas Medicine, Beam Therapeutics, Pairwise Plants, and Arbor Biotechnologies.