Study decodes surprising approach mice take in learning

Neuroscience discoveries ranging from the nature of memory to treatments for disease have depended on reading the minds of mice, so researchers need to truly understand what the rodents’ behavior is telling them during experiments. In a new study that examines learning from reward, MIT researchers deciphered some initially mystifying mouse behavior, yielding new ideas about how mice think and a mathematical tool to aid future research.

The task the mice were supposed to master is simple: Turn a wheel left or right to get a reward and then recognize when the reward direction switches. When neurotypical people play such “reversal learning” games they quickly infer the optimal approach: stick with the direction that works until it doesn’t and then switch right away. Notably, people with schizophrenia struggle with the task. In the new study in PLOS Computational Biology, mice surprised scientists by showing that while they were capable of learning the “win-stay, lose-shift” strategy, they nonetheless refused to fully adopt it.
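The "win-stay, lose-shift" strategy the article describes can be sketched in a few lines of code. This is an illustrative simulation, not the study's actual task code; the block length and number of blocks are made-up values chosen to resemble the described design.

```python
import random

def reward_side_schedule(n_blocks=10, block_len=20):
    """Alternate the rewarded side ('L'/'R') every block, as in a reversal task."""
    side = 'L'
    schedule = []
    for _ in range(n_blocks):
        schedule += [side] * block_len
        side = 'R' if side == 'L' else 'L'
    return schedule

def win_stay_lose_shift(schedule):
    """Optimal inference strategy: keep the choice that was just rewarded,
    switch immediately after a loss."""
    choice = random.choice(['L', 'R'])
    rewards = 0
    for rewarded_side in schedule:
        won = (choice == rewarded_side)
        rewards += won
        if not won:
            choice = 'R' if choice == 'L' else 'L'  # lose-shift
        # win-stay: otherwise keep the same choice
    return rewards / len(schedule)

print(win_stay_lose_shift(reward_side_schedule()))
```

An agent playing this strategy makes roughly one error per block (the first trial after each switch), so its accuracy approaches the ceiling set by the unannounced reversals; the mice in the study plateaued below that level.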

“It is not that mice cannot form an inference-based model of this environment—they can,” said corresponding author Mriganka Sur, Newton Professor in The Picower Institute for Learning and Memory and MIT’s Department of Brain and Cognitive Sciences (BCS). “The surprising thing is that they don’t persist with it. Even in a single block of the game where you know the reward is 100 percent on one side, every so often they will try the other side.”

While the mice's habit of departing from the optimal strategy could be due to a failure to hold it in memory, said lead author and Sur Lab graduate student Nhat Le, another possibility is that mice don’t commit to the “win-stay, lose-shift” approach because they don’t trust that their circumstances will remain stable or predictable. Instead, they might deviate from the optimal regime to test whether the rules have changed. Natural settings, after all, are rarely stable or predictable.


“I’d like to think mice are smarter than we give them credit for,” Le said.

But regardless of which reason may cause the mice to mix strategies, added co-senior author Mehrdad Jazayeri, Associate Professor in BCS and the McGovern Institute for Brain Research, it is important for researchers to recognize that they do and to be able to tell when and how they are choosing one strategy or another.

“This study highlights the fact that, unlike the accepted wisdom, mice doing lab tasks do not necessarily adopt a stationary strategy and it offers a computationally rigorous approach to detect and quantify such non-stationarities,” he said. “This ability is important because when researchers record the neural activity, their interpretation of the underlying algorithms and mechanisms may be invalid when they do not take the animals’ shifting strategies into account.”

Tracking thinking

The research team, which also includes co-author Murat Yildirim, a former Sur lab postdoc who is now an assistant professor at the Cleveland Clinic Lerner Research Institute, initially expected that the mice might adopt one strategy or the other. They simulated the results they’d expect to see if the mice either adopted the optimal strategy of inferring a rule about the task, or more randomly surveyed whether left or right turns were being rewarded. Mouse behavior on the task, even after days, varied widely, but it never resembled the results simulated from just one strategy.

To differing, individual extents, mouse performance on the task reflected variance along three parameters: how quickly they switched directions after the rule switched, how long it took them to transition to the new direction, and how loyal they remained to the new direction. Across 21 mice, the raw data represented a surprising diversity of outcomes on a task that neurotypical humans uniformly optimize. But the mice clearly weren’t helpless. Their average performance significantly improved over time, even though it plateaued below the optimal level.

In the task, the rewarded side switched every 15-25 turns. The team realized the mice were using more than one strategy in each such “block” of the game, rather than just inferring the simple rule and optimizing based on that inference. To disentangle when the mice were employing that strategy or another, the team harnessed an analytical framework called a Hidden Markov Model (HMM), which can computationally tease out when one unseen state is producing a result vs. another unseen state. Le likens it to what a judge on a cooking show might do: inferring which chef contestant made which version of a dish based on patterns in each plate of food before them.
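A Hidden Markov Model can be sketched concretely. The toy example below is not the paper's blockHMM; it is a minimal two-state HMM in which the hidden states are hypothetical "inference" and "exploration" strategies, the observations are trial outcomes (rewarded = 1, unrewarded = 0), and all probabilities are made-up illustrative values. Viterbi decoding then recovers the most likely hidden strategy sequence from the outcomes alone.

```python
import numpy as np

# Hypothetical two-strategy HMM. Rows/indices: 0 = "inference", 1 = "exploration".
states = ["inference", "exploration"]
trans = np.array([[0.9, 0.1],   # strategies tend to persist from trial to trial
                  [0.2, 0.8]])
emit = np.array([[0.1, 0.9],    # inference: mostly rewarded (P(obs=1) = 0.9)
                 [0.5, 0.5]])   # exploration: coin-flip outcomes
start = np.array([0.5, 0.5])

def viterbi(obs):
    """Return the most likely hidden strategy sequence for a list of 0/1 outcomes."""
    logp = np.log(start) + np.log(emit[:, obs[0]])
    back = []
    for o in obs[1:]:
        # step[i, j]: log-prob of being in state j now, having come from state i
        step = logp[:, None] + np.log(trans) + np.log(emit[:, o])
        back.append(step.argmax(axis=0))   # best predecessor for each state
        logp = step.max(axis=0)
    path = [int(logp.argmax())]
    for ptr in reversed(back):             # trace the best path backwards
        path.append(int(ptr[path[-1]]))
    return [states[s] for s in reversed(path)]

print(viterbi([1, 1, 1, 0, 1, 0, 0, 1, 0, 1]))
```

Runs of rewarded trials decode as the "inference" state and chance-level stretches as "exploration"; the study's blockHMM applies the same idea to choice transitions over whole blocks rather than individual trials.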

Before the team could use an HMM to decipher their mouse performance results, however, they had to adapt it. A typical HMM might apply to individual mouse choices, but here the team modified it to explain choice transitions over the course of whole blocks. They dubbed their modified model the blockHMM. Computational simulations of task performance using the blockHMM showed that the algorithm is able to infer the true hidden states of an artificial agent. The authors then used this technique to show the mice were persistently blending multiple strategies, achieving varied levels of performance.

“We verified that each animal executes a mixture of behavior from multiple regimes instead of a behavior in a single domain,” Le and his co-authors wrote. “Indeed 17/21 mice used a combination of low, medium and high-performance behavior modes.”

Further analysis revealed that the strategies in play were indeed the “correct” rule-inference strategy and a more exploratory strategy consistent with randomly testing options to get turn-by-turn feedback.

Now that the researchers have decoded the peculiar approach mice take to reversal learning, they are planning to look more deeply into the brain to understand which brain regions and circuits are involved. By watching brain cell activity during the task, they hope to discern what underlies the decisions the mice make to switch strategies.

By examining reversal learning circuits in detail, Sur said, it’s possible the team will gain insights that could help explain why people with schizophrenia show diminished performance on reversal learning tasks. Sur added that some people with autism spectrum disorders also persist with newly unrewarded behaviors longer than neurotypical people, so his lab will also have that phenomenon in mind as they investigate.

Yildirim, too, is interested in examining potential clinical connections.

“This reversal learning paradigm fascinates me since I want to use it in my lab with various preclinical models of neurological disorders,” he said. “The next step for us is to determine the brain mechanisms underlying these differences in behavioral strategies and whether we can manipulate these strategies.”

Funding for the study came from The National Institutes of Health, the Army Research Office, a Paul and Lilah Newton Brain Science Research Award, the Massachusetts Life Sciences Initiative, The Picower Institute for Learning and Memory and The JPB Foundation.

One scientist’s journey from the Middle East to MIT

Smiling man holding paper in a room.
Ubadah Sabbagh, soon after receiving his US citizenship papers, in April 2023. Photo: Ubadah Sabbagh

“I recently exhaled a breath I’ve been holding in for nearly half my life. After applying over a decade ago, I’m finally an American. This means so many things to me. Foremost, it means I can go back to the Middle East, and see my mama and the family, for the first time in 14 years.” — McGovern Institute Postdoctoral Associate Ubadah Sabbagh, X (formerly Twitter) post, April 27, 2023

The words sit atop a photo of Ubadah Sabbagh, who joined the lab of Guoping Feng, James W. (1963) and Patricia T. Poitras Professor at MIT, as a postdoctoral associate in 2021. Sabbagh, a Syrian national, is dressed in a charcoal grey jacket, a keffiyeh loose around his neck, holding the US citizenship papers he began applying for when he was 19 and an undergraduate at the University of Missouri-Kansas City (UMKC) studying biology and bioinformatics.

In the photo he is 29.

A clarity of vision

Sabbagh’s journey from the Middle East to his research position at MIT has been marked by determination and courage, a multifaceted curiosity, and a dual role as scientist-writer and scientist-advocate. He is particularly committed to the importance of humanity in science.

“For me, a scientist is a person who is not only in the lab but also has a unique perspective to contribute to society,” he says. “The scientific method is an idea, and that can be objective. But the process of doing science is a human endeavor, and like all human endeavors, it is inherently both social and political.”

At just 30 years of age, Sabbagh has advanced ideas that disrupt conventional thinking about how science is done in the United States. He believes nations should do science not primarily to compete, for example, but to be aspirational.

“It is our job to make our work accessible to the public, to educate and inform, and to help ground policy,” he says. “In our technologically advanced society, we need to raise the baseline for public scientific intuition so that people are empowered and better equipped to separate truth from myth.”

Two men sitting at a booth wearing headphones.
Ubadah Sabbagh is interviewed for Max Planck Florida’s Neurotransmissions podcast at the 2023 Society for Neuroscience conference in San Diego. Photo: Max Planck Florida

His research and advocacy work have won him accolades, including the 2023 Young Arab Pioneers Award from the Arab Youth Center and the 2020 Young Investigator Award from the American Society of Neurochemistry. He was also named to the 2021 Forbes “30 under 30” list, the first Syrian to be selected in the Science category.

A path to knowledge

Sabbagh’s path to that knowledge began when, living on his own at age 16, he attended Longview Community College, in Kansas City, often juggling multiple jobs. It continued at UMKC, where he fell in love with biology and had his first research experience with bioinformatician Gerald Wyckoff at the same time the civil war in Syria escalated, with his family still in the Middle East. “That was a rough time for me,” he says. “I had a lot of survivor’s guilt: I am here, I have all of this stability and security compared to what they have, and while they had suffocation, I had opportunity. I need to make this mean something positive, not just for me, but in as broad a way as possible for other people.”

Child smiles in front of scientific poster.
Ubadah Sabbagh, age 9, presents his first scientific poster. Photo: Ubadah Sabbagh

The war also sparked Sabbagh’s interest in human behavior—“where it originates, what motivates people to do things, but in a biological, not a psychological way,” he says. “What circuitry is engaged? What is the infrastructure of the brain that leads to X, Y, Z?”

His passion for neuroscience blossomed as a graduate student at Virginia Tech, where he earned his PhD in translational biology, medicine, and health. There, he received a six-year NIH F99/K00 Award, and under the mentorship of a neuroscientist at the Fralin Biomedical Research Institute he researched the connections between the eye and the brain, specifically, mapping the architecture of the principal neurons in a region of the thalamus essential to visual processing.

“The retina, and the entire visual system, struck me as elegant, with beautiful layers of diverse cells found at every node,” says Sabbagh, his own eyes lighting up.

His research earned him a coveted spot on the Forbes “30 under 30” list, generating enormous visibility, including in the Arab world, adding visitors to his already robust X (formerly Twitter) account, which has more than 9,200 followers. “The increased visibility lets me use my voice to advocate for the things I care about,” he says.

“I need to make this mean something positive, not just for me, but in as broad a way as possible for other people.” — Ubadah Sabbagh

Those causes range from promoting equity and inclusion in science to transforming the American system of doing science for the betterment of science and the scientists themselves. He cofounded the nonprofit Black in Neuro to celebrate and empower Black scholars in neuroscience, and he continues to serve on the board. He is the chair of an advisory committee for the Society for Neuroscience (SfN), recommending ways SfN can better address the needs of its young members, and a member of the Advisory Committee to the National Institutes of Health (NIH) Director working group charged with re-envisioning postdoctoral training. He serves on the advisory board of Community for Rigor, a new NIH initiative that aims to teach scientific rigor at national scale and, in his spare time, he writes articles about the relationship of science and policy for publications including Scientific American and the Washington Post.

Still, there have been obstacles. The same year Sabbagh received the NIH F99/K00 Award, he faced major setbacks in his application to become a citizen. He would not try again until 2021, when he had his PhD in hand and had joined the McGovern Institute.

An MIT postdoc and citizenship

Sabbagh dove into his research in Guoping Feng’s lab with the same vigor and outside-the-box thinking that characterized his previous work. He continues to investigate the thalamus, but in a region that is less involved in processing pure sensory signals, such as light and sound, and more focused on cognitive functions of the brain. He aims to understand how thalamic brain areas orchestrate complex functions we carry out every day, including working memory and cognitive flexibility.

“This is important to understand because when this orchestra goes out of tune it can lead to a range of neurological disorders, including autism spectrum disorder and schizophrenia,” he says. He is also developing new tools for studying the brain using genome editing and viral engineering to expand the toolkit available to neuroscientists.

Microscopic image of mouse brain
Neurons in a transgenic mouse brain labeled by Sabbagh using genome editing technology in the Feng lab. Image: Ubadah Sabbagh

The environment at the McGovern Institute is also a source of inspiration for Sabbagh’s research. “The scale and scope of work being done at McGovern is remarkable. It’s an exciting place for me to be as a neuroscientist,” said Sabbagh. “Besides being intellectually enriching, I’ve found great community here – something that’s important to me wherever I work.”

Returning to the Middle East

Profile of scientist Ubadah Sabbagh speaking at a table.
McGovern postdoc Ubadah Sabbagh at the 2023 Young Arab Pioneers Award ceremony in Abu Dhabi. Photo: Arab Youth Center

While at an advisory meeting at the NIH, Sabbagh learned he had been selected as a Young Arab Pioneer by the Arab Youth Center and was flown the next day to Abu Dhabi for a ceremony overseen by Her Excellency Shamma Al Mazrui, Cabinet Member and Minister of Community Development in the United Arab Emirates. The ceremony recognized 20 Arab youth from around the world in sectors ranging from scientific research to entrepreneurship and community development. Sabbagh’s research “presented a unique portrayal of creative Arab youth and an admirable representation of the values of youth beyond the Arab world,” said Sadeq Jarrar, executive director of the center.

“There I was, among other young Arab leaders, learning firsthand about their efforts, aspirations, and their outlook for the future,” says Sabbagh, who was deeply inspired by the experience.

Just a month earlier, his passport finally secured, Sabbagh had reunited with his family in the Middle East after more than a decade in the United States. “I had been away for so long,” he said, describing the experience as a “cultural reawakening.”

Woman hands man an award on stage.
Ubadah Sabbagh receives a Young Arab Pioneer Award by Her Excellency Shamma Al Mazrui, Cabinet Member and Minister of Community Development in the United Arab Emirates. Photo: Arab Youth Center

Sabbagh saw a gaping need he had not been aware of when he left 14 years earlier, as a teen. “The Middle East had such a glorious intellectual past,” he says. “But for years people have been leaving to get their advanced scientific training, and there is no adequate infrastructure to support them if they want to go back.” He wondered: What if there were a scientific renaissance in the region? How would we build infrastructure to cultivate local minds and local talent? What if the next chapter of the Middle East included being a new nexus of global scientific advancements?

“I felt so inspired,” he says. “I have a longing, someday, to meaningfully give back.”

Unpacking auditory hallucinations

Tamar Regev, the 2022–2024 Poitras Center Postdoctoral Fellow, has identified a new neural system that may shed light on the auditory hallucinations experienced by patients diagnosed with schizophrenia.

Scientist portrait
Tamar Regev is the 2022–2024 Poitras Center Postdoctoral Fellow in Ev Fedorenko’s lab at the McGovern Institute. Photo: Steph Stevens

“The system appears integral to prosody processing,” says Regev. “‘Prosody’ can be described as the melody of speech — auditory gestures that we use when we’re speaking to signal linguistic, emotional, and social information.” The prosody processing system Regev has uncovered is distinct from the lower-level auditory speech processing system as well as the higher-level language processing system. Regev aims to understand how the prosody system, along with the speech and language processing systems, may be impaired in neuropsychiatric disorders such as schizophrenia, especially when experienced with auditory hallucinations in the form of speech.

“Knowing which neural systems are affected by schizophrenia can lay the groundwork for future research into interventions that target the mechanisms underlying symptoms such as hallucinations,” says Regev. Passionate about bridging gaps between disciplines, she is collaborating with Ann Shinn, MD, MPH, of McLean Hospital’s Schizophrenia and Bipolar Disorder Research Program.

Regev’s graduate work at the Hebrew University of Jerusalem focused on exploring the auditory system with electroencephalography (EEG), which measures electrical activity in the brain using small electrodes attached to the scalp. She came to MIT to study under Evelina Fedorenko, a world leader in researching the cognitive and neural mechanisms underlying language processing. With Fedorenko she has learned to use functional magnetic resonance imaging (fMRI), which reveals the brain’s functional anatomy by measuring small changes in blood flow that occur with brain activity.

“I hope my research will lead to a better understanding of the neural architectures that underlie these disorders—and eventually help us as a society to better understand and accept special populations.” — Tamar Regev

“EEG has very good temporal resolution but poor spatial resolution, while fMRI provides a map of the brain showing where neural signals are coming from,” says Regev. “With fMRI I can connect my work on the auditory system with that on the language system.”

Regev developed a unique fMRI paradigm to do that. While her human subjects are in the scanner, she is comparing brain responses to speech with expressive prosody versus flat prosody to find the role of the prosody system among the auditory, speech, and language regions. She plans to apply her findings to analyze a rich data set drawn from fMRI studies that Fedorenko and Shinn began a few years ago while investigating the neural basis of auditory hallucinations in patients with schizophrenia and bipolar disorder. Regev is exploring how the neural architecture may differ between control subjects and those with and without auditory hallucinations as well as those with schizophrenia and bipolar disorder.

“This is the first time these questions are being asked using the individual-subject approach developed in the Fedorenko lab,” says Regev. The approach provides superior sensitivity, functional resolution, interpretability, and versatility compared with the group analyses of the past. “I hope my research will lead to a better understanding of the neural architectures that underlie these disorders,” says Regev, “and eventually help us as a society to better understand and accept special populations.”

Using the tools of neuroscience to personalize medicine

Profile picture of Sadie Zacharek
Graduate student Sadie Zacharek. Photo: Steph Stevens

From summer internships as an undergraduate studying neuroscience at the University of Notre Dame, Sadie Zacharek developed interests in areas ranging from neuroimaging to developmental psychopathologies, from basic-science research to clinical translation. When she interviewed with John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology and Cognitive Neuroscience, for a position in his lab as a graduate fellow, everything came together.

“The brain provides a window not only into dysfunction but also into response to treatment,” she says. “John and I both wanted to explore how we might use neuroimaging as a step toward personalized medicine.”

Zacharek joined the Gabrieli lab in 2020 and currently holds the Sheldon and Janet Razin ’59 Fellowship for 2023-2024. In the Gabrieli lab, she has been designing and helping launch studies focusing on the neural mechanisms driving childhood depression and social anxiety disorder with the aim of developing strategies to predict which treatments will be most effective for individual patients.

Helping children and adults

“Depression in children is hugely understudied,” says Zacharek. “Most of the research has focused on adult and adolescent depression.” But the clinical presentation differs in the two groups, she says. “In children, irritability can be the primary presenting symptom rather than melancholy.” To get to the root of childhood depression, she is exploring both the brain basis of the disorder and how the parent-child relationship might influence symptoms. “Parents help children develop their emotion-regulation skills,” she says. “Knowing the underlying mechanisms could, in family-focused therapy, help families turn a ‘downward spiral’ into irritability into an ‘upward spiral’ away from it.”

The studies she is conducting include functional magnetic resonance imaging (fMRI) of children to explore their brain responses to positive and negative stimuli, fMRI of both the child and parent to compare maps of their brains’ functional connectivity, and magnetic resonance spectroscopy to explore the neurochemical environment of both, including quantities of neurometabolites that indicate inflammation (higher levels have been found to correlate with depressive pathology).

“If we could find a normative range for neurochemicals and then see how far someone has deviated in depression, or a neural signature of elevated activity in a brain region, that could serve as a biomarker for future interventions,” she says. “Such a biomarker would be especially relevant for children given that they are less able to articulately convey their symptoms or internal experience.”

“The brain provides a window not only into dysfunction but also into response to treatment.” – Sadie Zacharek

Social anxiety disorder is a chronic and disabling condition that affects about 7.1 percent of U.S. adults. Treatment usually involves cognitive behavior therapy (CBT), and then, if there is limited response, the addition of a selective serotonin reuptake inhibitor (SSRI) as an anxiolytic.

But what if research could reveal the key neurocircuitry of social anxiety disorder as well as changes associated with treatment? That could open the door to predicting treatment outcome.

Zacharek is collecting neuroimaging data, as well as clinical assessments, from participants. The participants diagnosed with social anxiety disorder will then undergo 12 weeks of group CBT, followed by more data collection, and then individual CBT for 12 weeks plus an SSRI for those who do not benefit from the group CBT. The results from those two time points will help determine the best treatment for each person.

“We hope to build a predictive model that could enable clinicians to scan a new patient and select the optimal treatment,” says Zacharek. “John’s many long-standing relationships with clinicians in this area make all of these translational studies possible.”

Nature: An unexpected source of innovative tools to study the brain

This story originally appeared in the Fall 2023 issue of BrainScan.

___

Scientist holds 3D printed phage over a natural background.
Genetic engineer Joseph Kreitz looks to the microscopic world for inspiration in Feng Zhang’s lab at the McGovern Institute. Photo: Steph Stevens

In their quest to deepen their understanding of the brain, McGovern scientists take inspiration wherever it comes — and sometimes it comes from surprising sources. To develop new tools for research and innovative strategies for treating disease, they’ve drawn on proteins that organisms have been making for billions of years as well as sophisticated materials engineered for modern technology.

For McGovern investigator Feng Zhang, the natural world provides a rich source of molecules with remarkable and potentially useful functions.

Zhang is one of the pioneers of CRISPR, a programmable system for gene editing that is built from the components of a bacterial adaptive immune system. Scientists worldwide use CRISPR to modify genetic sequences in their labs, and many CRISPR-based therapies, which aim to treat disease through gene editing, are now in development. Meanwhile, Zhang and his team have continued to explore CRISPR-like systems beyond the bacteria in which they were originally discovered.

Turning to nature

This year, the search for evolutionarily related systems led Zhang’s team to a set of enzymes made by more complex organisms, including single-celled algae and hard-shell clams. Like the enzymes that power CRISPR, these newly discovered enzymes, called Fanzors, can be directed to cut DNA at specific sites by programming an RNA molecule as a guide.

Rhiannon Macrae, a scientific advisor in Zhang’s lab, says the discovery was surprising because Fanzors don’t seem to play the same role in immunity that CRISPR systems do. In fact, she says it’s not clear what Fanzors do at all. But as programmable gene editors, Fanzors might have an important advantage over current CRISPR tools — particularly for clinical applications. “Fanzor proteins are much smaller than the workhorse CRISPR tool, Cas9,” Macrae says. “This really matters when you actually want to be able to use one of these tools in a patient, because the bigger the tool, the harder it is to package and deliver to patients’ cells.”

Cryo-EM map of a Fanzor protein (gray, yellow, light blue, and pink) in complex with ωRNA (purple) and its target DNA (red). Non-target DNA strand in blue. Image: Zhang lab

Zhang’s team has thought a lot about how to get therapies to patients’ cells, and size is only one consideration. They’ve also been looking for ways to direct drugs, gene-editing tools, or other therapies to specific cells and tissues in the body. One of the lab’s leading strategies comes from another unexpected natural source: a microscopic syringe produced by certain insect-infecting bacteria.

In their search for an efficient system for targeted drug delivery, Zhang and graduate student Joseph Kreitz first considered the injection systems of bacteria-infecting viruses: needle-like structures that pierce the outer membrane of their host to deliver their own genetic material. But these viral injection systems can’t easily be freed from the rest of the virus.

Then Zhang learned that some bacteria have injection systems of their own, which they release inside their hosts after packing them with toxins. They reengineered the bacterial syringe, devising a delivery system that works on human cells. Their current system can be programmed to inject proteins — including those used for gene editing — directly into specified cell types. With further development, Zhang hopes it will work with other types of therapies, as well.

Magnetic imaging

In McGovern Associate Investigator Alan Jasanoff’s lab, researchers are designing sensors that can track the activity of specific neurons or molecules in the brain, using magnetic resonance imaging (MRI) or related forms of non-invasive imaging. These tools are essential for understanding how the brain’s cells and circuits work together to process information. “We want to give MRI a suite of metaphorical colors: sensitivities that enable us to dissect the different kinds of mechanistically significant contributors to neural activity,” he explains.

Jasanoff can tick off a list of molecules with notable roles in biology and industry that his lab has repurposed to glean more information from brain imaging. These include manganese — a metal once used to tint ancient glass; nitric oxide synthase — the enzyme that causes blushing; and iron oxide nanoparticles — tiny magnets that enable compact data storage inside computers. But Jasanoff says none of these should be considered out of place in the imaging world. “Most are pretty logical choices,” he says. “They all do different things and we use them in pretty different ways, but they are either magnetic or interact with magnetic molecules to serve our purposes for brain imaging.”

Close-up picture of manganese metal
Manganese, a metal that interacts weakly with magnetic fields, is a key component in new MRI sensors being developed in Alan Jasanoff’s lab at the McGovern Institute.

The enzyme nitric oxide synthase, for example, plays an important role in most functional MRI scans. The enzyme produces nitric oxide, which causes blood vessels to expand. This can bring a blush to the cheeks, but in the brain, it increases blood flow to bring more oxygen to busy neurons. MRI can detect this change because it is sensitive to the magnetic properties of blood.

By using blood flow as a proxy for neural activity, functional MRI scans light up active regions of the brain, but they can’t pinpoint the activity of specific cells. So Jasanoff and his team devised a more informative MRI sensor by reengineering nitric oxide synthase. Their modified enzyme, which they call NOSTIC, can be introduced into a select group of cells, where it will produce nitric oxide in response to neural activity — triggering increased blood flow and strengthening the local MRI signal. Researchers can deliver it to specific kinds of brain cells, or they can deliver it exclusively to neurons that communicate directly with one another. Then they can watch for an elevated MRI signal when those cells fire. This lets them see how information flows through the brain and tie specific cells to particular tasks.

Miranda Dawson, a graduate student in Jasanoff’s lab, is using NOSTIC to study the brain circuits that fuel addiction. She’s interested in the involvement of a brain region called the insula, which may mediate the physical sensations that people with addiction experience during drug cravings or withdrawal. With NOSTIC, Dawson can follow how the insula communicates with other parts of the brain as a rat experiences these stages of addiction. “We give our sensor to the insula, and then it projects to anatomically connected brain regions,” she explains. “So we’re able to delineate what circuits are being activated at different points in the addiction cycle.”

Scientist with folded arms next to a picture of a brain
Miranda Dawson uses her lab’s novel MRI sensor, NOSTIC, to illuminate the brain circuits involved in fentanyl craving and withdrawal. Photo: Steph Stevens; MRI scan: Nan Li, Souparno Ghosh, Jasanoff lab

Mining biodiversity

McGovern investigators know that good ideas and useful tools can come from anywhere. Sometimes, the key to harnessing those tools is simply recognizing their potential. But there are also opportunities for a more deliberate approach to finding them.

McGovern Investigator Ed Boyden is leading a program that aims to accelerate the discovery of valuable natural products. Called the Biodiversity Network (BioNet), the project is collecting biospecimens from around the world and systematically analyzing them, looking for molecular tools that could be applied to major challenges in science and medicine, from brain research to organ preservation. “The idea behind BioNet,” Boyden explains, “is rather than wait for chance to give us these discoveries, can we go look for them on purpose?”

Making invisible therapy targets visible

The lab of Edward Boyden, the Y. Eva Tan Professor in Neurotechnology, has developed a powerful technology called Expansion Revealing (ExR) that makes visible molecular structures previously hidden from even the most powerful microscopes. It “reveals” nanoscale alterations in synapses, neural wiring, and other molecular assemblies using ordinary lab microscopes. It works like this: inside a cell, proteins and other molecules are often tightly packed together. These dense clusters can be difficult to image because the fluorescent labels used to make them visible can’t wedge themselves between the molecules. ExR “de-crowds” the molecules by expanding the cell through a chemical process, making the molecules accessible to fluorescent tags.

Jinyoung Kang is a J. Douglas Tan Postdoctoral Fellow in the Boyden and Feng labs. Photo: Steph Stevens

“This technology can be used to answer a lot of biological questions about dysfunction in synaptic proteins, which are involved in neurodegenerative diseases,” says Jinyoung Kang, a J. Douglas Tan Postdoctoral Fellow in the labs of Boyden and Guoping Feng, the James W. (1963) and Patricia T. Poitras Professor of Brain and Cognitive Sciences. “Until now, there has been no tool to visualize synapses very well at nanoscale.”

Over the past year, the Boyden team has been using ExR to explore the underlying mechanisms of brain disorders, including autism spectrum disorder (ASD) and Alzheimer’s disease. Since the method can be applied iteratively, Boyden imagines it may one day succeed in creating a 100-fold magnification of molecular structures.

“Using earlier technology, researchers may be missing entire categories of molecular phenomena, both functional and dysfunctional,” says Boyden. “It’s critical to bring these nanostructures into view so that we can identify potential targets for new therapeutics that can restore functional molecular arrangements.”

The team is applying ExR to the study of mutant-animal-model brain slices to expose complex synapse 3D nanoarchitecture and configuration. Among their questions: How do synapses differ when mutations that cause autism and other neurological conditions are present?

Using the new technology, Kang and her collaborator Menglong Zeng characterized the molecular architecture of excitatory synapses on parvalbumin interneurons, cells that drastically influence the downstream effects of neuronal signaling and ultimately change cognitive behaviors. They discovered that condensed AMPAR clustering in parvalbumin interneurons is essential for normal brain function. The next step is to explore the role these clusters play in parvalbumin interneurons, which are vulnerable to stressors and have been implicated in brain disorders including autism and Alzheimer’s disease.

The researchers are now investigating whether ExR can reveal abnormal protein nanostructures in SHANK3 knockout mice and marmosets. Mutations in the SHANK3 gene lead to one of the most severe types of ASD, Phelan-McDermid syndrome, which accounts for about 2 percent of all ASD patients with intellectual disability.

Researchers uncover new CRISPR-like system in animals that can edit the human genome

A team of researchers led by Feng Zhang at the McGovern Institute and the Broad Institute of MIT and Harvard has uncovered the first programmable RNA-guided system in eukaryotes — organisms that include fungi, plants, and animals.

In a study in Nature, the team describes how the system is based on a protein called Fanzor. They showed that Fanzor proteins use RNA as a guide to target DNA precisely, and that Fanzors can be reprogrammed to edit the genome of human cells. The compact Fanzor systems have the potential to be more easily delivered to cells and tissues as therapeutics than CRISPR/Cas systems, and further refinements to improve their targeting efficiency could make them a valuable new technology for human genome editing.

CRISPR/Cas was first discovered in prokaryotes (bacteria and other single-cell organisms that lack nuclei), and scientists, including members of Zhang’s lab, have long wondered whether similar systems exist in eukaryotes. The new study demonstrates that RNA-guided DNA-cutting mechanisms are present across all kingdoms of life.


“CRISPR-based systems are widely used and powerful because they can be easily reprogrammed to target different sites in the genome,” said Zhang, senior author on the study and a core institute member at the Broad, an investigator at MIT’s McGovern Institute, the James and Patricia Poitras Professor of Neuroscience at MIT, and a Howard Hughes Medical Institute investigator. “This new system is another way to make precise changes in human cells, complementing the genome editing tools we already have.”

Searching the domains of life

A major aim of the Zhang lab is to develop genetic medicines using systems that can modulate human cells by targeting specific genes and processes. “A number of years ago, we started to ask, ‘What is there beyond CRISPR, and are there other RNA-programmable systems out there in nature?’” said Zhang.

Feng Zhang with folded arms in lab
McGovern Investigator Feng Zhang in his lab.

Two years ago, Zhang lab members discovered a class of RNA-programmable systems in prokaryotes called OMEGAs, which are often linked with transposable elements, or “jumping genes”, in bacterial genomes and likely gave rise to CRISPR/Cas systems. That work also highlighted similarities between prokaryotic OMEGA systems and Fanzor proteins in eukaryotes, suggesting that the Fanzor enzymes might also use an RNA-guided mechanism to target and cut DNA.

In the new study, the researchers continued their study of RNA-guided systems by isolating Fanzors from fungi, algae, and amoeba species, in addition to a clam known as the Northern Quahog. Co-first author Makoto Saito of the Zhang lab led the biochemical characterization of the Fanzor proteins, showing that they are DNA-cutting endonuclease enzymes that use nearby non-coding RNAs known as ωRNAs to target particular sites in the genome. This is the first time such a mechanism has been found in eukaryotes, including animals.

Unlike CRISPR proteins, Fanzor enzymes are encoded in the eukaryotic genome within transposable elements, and the team’s phylogenetic analysis suggests that the Fanzor genes migrated from bacteria to eukaryotes through so-called horizontal gene transfer.

“These OMEGA systems are more ancestral to CRISPR and they are among the most abundant proteins on the planet, so it makes sense that they have been able to hop back and forth between prokaryotes and eukaryotes,” said Saito.

To explore Fanzor’s potential as a genome editing tool, the researchers demonstrated that it can generate insertions and deletions at targeted genome sites within human cells. The researchers found the Fanzor system to initially be less efficient at snipping DNA than CRISPR/Cas systems, but through systematic engineering, they introduced a combination of mutations into the protein that increased its activity 10-fold. Additionally, unlike some CRISPR systems and the OMEGA protein TnpB, the team found that a fungal-derived Fanzor protein did not exhibit “collateral activity,” in which an RNA-guided enzyme cleaves its DNA target but also degrades nearby DNA or RNA. The results suggest that Fanzors could potentially be developed as efficient genome editors.

Co-first author Peiyu Xu led an effort to analyze the molecular structure of the Fanzor/ωRNA complex and illustrate how it latches onto DNA to cut it. Fanzor shares structural similarities with its prokaryotic counterpart CRISPR-Cas12 protein, but the interaction between the ωRNA and the catalytic domains of Fanzor is more extensive, suggesting that the ωRNA might play a role in the catalytic reactions. “We are excited about these structural insights for helping us further engineer and optimize Fanzor for improved efficiency and precision as a genome editor,” said Xu.

Like CRISPR-based systems, the Fanzor system can be easily reprogrammed to target specific genome sites, and Zhang said it could one day be developed into a powerful new genome editing technology for research and therapeutic applications. The abundance of RNA-guided endonucleases like Fanzors further expands the number of OMEGA systems known across kingdoms of life and suggests that there are more yet to be found.

“Nature is amazing. There’s so much diversity,” said Zhang. “There are probably more RNA-programmable systems out there, and we’re continuing to explore and will hopefully discover more.”

The paper’s other authors include Guilhem Faure, Samantha Maguire, Soumya Kannan, Han Altae-Tran, Sam Vo, AnAn Desimone, and Rhiannon Macrae.

Support for this work was provided by the Howard Hughes Medical Institute; Poitras Center for Psychiatric Disorders Research at MIT; K. Lisa Yang and Hock E. Tan Molecular Therapeutics Center at MIT; Broad Institute Programmable Therapeutics Gift Donors; The Pershing Square Foundation, William Ackman, and Neri Oxman; James and Patricia Poitras; BT Charitable Foundation; Asness Family Foundation; Kenneth C. Griffin; the Phillips family; David Cheng; Robert Metcalfe; and Hugo Shong.


Magnetic robots walk, crawl, and swim

MIT scientists have developed tiny, soft-bodied robots that can be controlled with a weak magnet. The robots, formed from rubbery magnetic spirals, can be programmed to walk, crawl, or swim, all in response to a simple, easy-to-apply magnetic field.

“This is the first time this has been done, to be able to control three-dimensional locomotion of robots with a one-dimensional magnetic field,” says McGovern associate investigator Polina Anikeeva, whose team reported on the magnetic robots June 3, 2023, in the journal Advanced Materials. “And because they are predominantly composed of polymer and polymers are soft, you don’t need a very large magnetic field to activate them. It’s actually a really tiny magnetic field that drives these robots,” says Anikeeva, who is also the Matoula S. Salapatas Professor in Materials Science and Engineering and a professor of brain and cognitive sciences at MIT, as well as the associate director of MIT’s Research Laboratory of Electronics and director of MIT’s K. Lisa Yang Brain-Body Center.

Portrait of MIT scientist Polina Anikeeva
McGovern Institute Associate Investigator Polina Anikeeva in her lab. Photo: Steph Stevens

The new robots are well suited to transport cargo through confined spaces and their rubber bodies are gentle on fragile environments, opening the possibility that the technology could be developed for biomedical applications. Anikeeva and her team have made their robots millimeters long, but she says the same approach could be used to produce much smaller robots.

Engineering magnetic robots

Anikeeva says that until now, magnetic robots have moved in response to moving magnetic fields. She explains that for these models, “if you want your robot to walk, your magnet walks with it. If you want it to rotate, you rotate your magnet.” That limits the settings in which such robots might be deployed. “If you are trying to operate in a really constrained environment, a moving magnet may not be the safest solution. You want to be able to have a stationary instrument that just applies magnetic field to the whole sample,” she explains.

Youngbin Lee, a former graduate student in Anikeeva’s lab, engineered a solution to this problem. The robots he developed in Anikeeva’s lab are not uniformly magnetized. Instead, they are strategically magnetized in different zones and directions so a single magnetic field can enable a movement-driving profile of magnetic forces.

Before they are magnetized, however, the flexible, lightweight bodies of the robots must be fabricated. Lee starts this process with two kinds of rubber, each with a different stiffness. These are sandwiched together, then heated and stretched into a long, thin fiber. Because of the two materials’ different properties, one of the rubbers retains its elasticity through this stretching process, but the other deforms and cannot return to its original size. So when the strain is released, one layer of the fiber contracts, tugging on the other side and pulling the whole thing into a tight coil. Anikeeva says the helical fiber is modeled after the twisty tendrils of a cucumber plant, which spiral when one layer of cells loses water and contracts faster than a second layer.

A third material—one whose particles have the potential to become magnetic—is incorporated in a channel that runs through the rubbery fiber. So once the spiral has been made, a magnetization pattern that enables a particular type of movement can be introduced.

“Youngbin thought very carefully about how to magnetize our robots to make them able to move just as he programmed them to move,” Anikeeva says. “He made calculations to determine how to establish such a profile of forces on it when we apply a magnetic field that it will actually start walking or crawling.”

To form a caterpillar-like crawling robot, for example, the helical fiber is shaped into gentle undulations, and then the body, head, and tail are magnetized so that a magnetic field applied perpendicular to the robot’s plane of motion will cause the body to compress. When the field is reduced to zero, the compression is released, and the crawling robot stretches. Together, these movements propel the robot forward. Another robot in which two foot-like helical fibers are connected with a joint is magnetized in a pattern that enables a movement more like walking.

Biomedical potential

This precise magnetization process generates a program for each robot and ensures that once the robots are made, they are simple to control. A weak magnetic field activates each robot’s program and drives its particular type of movement. A single magnetic field can even send multiple robots moving in opposite directions, if they have been programmed to do so. The team found that one minor manipulation of the magnetic field has a useful effect: With the flip of a switch to reverse the field, a cargo-carrying robot can be made to gently shake and release its payload.

Anikeeva says she can imagine these soft-bodied robots—whose straightforward production will be easy to scale up—delivering materials through narrow pipes or even inside the human body. For example, they might carry a drug through narrow blood vessels, releasing it exactly where it is needed. She says the magnetically-actuated devices have biomedical potential beyond robots as well, and might one day be incorporated into artificial muscles or materials that support tissue regeneration.

Refining mental health diagnoses

Maedbh King came to MIT to make a difference in mental health. As a postdoctoral fellow in the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center, she is building computer models aimed at helping clinicians improve diagnosis and treatment, especially for young people with neurodevelopmental and psychiatric disorders.

Tapping two large patient-data sources, King is working to analyze critical biological and behavioral information to better categorize patients’ mental health conditions, including autism spectrum disorder, attention-deficit hyperactivity disorder (ADHD), anxiety, and suicidal thoughts — and to provide more predictive approaches to addressing them. Her strategy reflects the center’s commitment to a holistic understanding of human brain function using theoretical and computational neuroscience.

“Today, treatment decisions for psychiatric disorders are derived entirely from symptoms, which leaves clinicians and patients trying one treatment and, if it doesn’t work, trying another,” says King. “I hope to help change that.”

King grew up in Dublin, Ireland, and studied psychology in college; gained neuroimaging and programming skills while earning a master’s degree from Western University in Canada; and received her doctorate from the University of California, Berkeley, where she built maps and models of the human brain. In fall 2022, King joined the lab of Satrajit Ghosh, a McGovern Institute principal research scientist whose team uses neuroimaging, speech communication, and machine learning to improve assessments and treatments for mental health and neurological disorders.

Big-data insights

King is pursuing several projects using the Healthy Brain Network, a landmark mental health study of children and adolescents in New York City. She and lab colleagues are extracting data from cognitive and other assessments — such as language patterns, favorite school subjects, and family mental illness history — from roughly 4,000 participants to provide a more nuanced understanding of their neurodevelopmental disorders, such as autism or ADHD.


With this database, one can develop “very rich clinical profiles of these young people,” including their challenges and adaptive strengths, King explains. “We’re interested in placing these participants within a spectrum of symptoms, rather than just providing a binary label of, ‘has this disorder’ or ‘doesn’t have it.’ It’s an effort to subtype based on these phenotypic assessments.”
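The subtyping King describes can be loosely pictured as grouping participants by their pattern of assessment scores rather than by a single diagnostic label. The sketch below is a toy k-means clustering on synthetic data, not the Ghosh lab’s actual pipeline; the feature matrix, group structure, and distance measure are all illustrative assumptions.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Toy k-means: group participants by phenotypic profile.

    X is a (participants x assessments) score matrix. Illustrative only;
    real subtyping pipelines are far richer than this sketch.
    """
    rng = np.random.default_rng(seed)
    # Initialize cluster centers from randomly chosen participants
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each participant to the nearest cluster center
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned participants
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Synthetic data: two well-separated "phenotype" groups of 20 participants
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (20, 4)),
               rng.normal(3.0, 0.3, (20, 4))])
labels, centers = kmeans(X, k=2)
```

In practice, placing participants "within a spectrum of symptoms" points toward soft or dimensional models rather than the hard cluster labels this sketch produces.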

In other research, King is developing tools to detect risk factors for suicide among adolescents. Working with psychiatrists at Children’s Hospital of Philadelphia, she is using detailed questionnaires from some 20,000 youths who visited the hospital’s emergency department over several years; about one-tenth had tried to take their own lives. The questionnaires collect information about demographics, lifestyle, relationships, and other aspects of patients’ lives.

“One of the big questions the physicians want to answer is, Are there any risk predictors we can identify that can ultimately prevent, or at least mitigate, future suicide attempts?” King says. “Computational models are powerful. They can identify patterns that can’t be obtained with the human eye through electronic records.”

King is passionate about producing findings to help practitioners, whether they’re clinicians, teachers, parents, or policy makers, and the populations they’re studying. “This applied work,” she says, “should be communicated in a way that can be useful.”

When computer vision works more like a brain, it sees more like people do

From cameras to self-driving cars, many of today’s technologies depend on artificial intelligence (AI) to extract meaning from visual information.  Today’s AI technology has artificial neural networks at its core, and most of the time we can trust these AI computer vision systems to see things the way we do — but sometimes they falter. According to MIT and IBM Research scientists, one way to improve computer vision is to instruct the artificial neural networks that they rely on to deliberately mimic the way the brain’s biological neural network processes visual images.

Researchers led by James DiCarlo, the director of MIT’s Quest for Intelligence and member of the MIT-IBM Watson AI Lab, have made a computer vision model more robust by training it to work like a part of the brain that humans and other primates rely on for object recognition. This May, at the International Conference on Learning Representations (ICLR), the team reported that when they trained an artificial neural network using neural activity patterns in the brain’s inferior temporal (IT) cortex, the artificial neural network was more robustly able to identify objects in images than a model that lacked that neural training. And the model’s interpretations of images more closely matched what humans saw, even when images included minor distortions that made the task more difficult.

Comparing neural circuits

Portrait of Professor DiCarlo
McGovern Investigator and Director of MIT Quest for Intelligence, James DiCarlo. Photo: Justin Knight

Many of the artificial neural networks used for computer vision already resemble the multi-layered brain circuits that process visual information in humans and other primates. Like the brain, they use neuron-like units that work together to process information. As they are trained for a particular task, these layered components collectively and progressively process the visual information to complete the task — determining, for example, that an image depicts a bear or a car or a tree.

DiCarlo and others previously found that when such deep-learning computer vision systems establish efficient ways to solve visual problems, they end up with artificial circuits that work similarly to the neural circuits that process visual information in our own brains. That is, they turn out to be surprisingly good scientific models of the neural mechanisms underlying primate and human vision.

That resemblance is helping neuroscientists deepen their understanding of the brain. By demonstrating ways visual information can be processed to make sense of images, computational models suggest hypotheses about how the brain might accomplish the same task. As developers continue to refine computer vision models, neuroscientists have found new ideas to explore in their own work.

“As vision systems get better at performing in the real world, some of them turn out to be more human-like in their internal processing. That’s useful from an understanding biology point of view,” says DiCarlo, who is also a professor of brain and cognitive sciences and an investigator at the McGovern Institute.

Engineering more brain-like AI

While their potential is promising, computer vision systems are not yet perfect models of human vision. DiCarlo suspected one way to improve computer vision may be to incorporate specific brain-like features into these models.

To test this idea, he and his collaborators built a computer vision model using neural data previously collected from vision-processing neurons in the monkey IT cortex — a key part of the primate ventral visual pathway involved in the recognition of objects — while the animals viewed various images. More specifically, Joel Dapello, a Harvard graduate student and former MIT-IBM Watson AI Lab intern, and Kohitij Kar, assistant professor and Canada Research Chair in Visual Neuroscience at York University and visiting scientist at MIT, in collaboration with David Cox, IBM Research’s VP for AI Models and IBM director of the MIT-IBM Watson AI Lab, and other researchers at IBM Research and MIT, asked an artificial neural network to emulate the behavior of these primate vision-processing neurons while the network learned to identify objects in a standard computer vision task.

“In effect, we said to the network, ‘please solve this standard computer vision task, but please also make the function of one of your inside simulated “neural” layers be as similar as possible to the function of the corresponding biological neural layer,’” DiCarlo explains. “We asked it to do both of those things as best it could.” This forced the artificial neural circuits to find a different way to process visual information than the standard, computer vision approach, he says.
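The dual objective DiCarlo describes can be written as a single combined loss: the standard task loss plus a penalty for mismatch between a chosen model layer and recorded IT activity. This minimal sketch uses cross-entropy for the task and a mean-squared penalty weighted by `alpha`; both the penalty form and the weighting are assumptions for illustration, not the paper’s exact formulation.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def combined_loss(task_logits, labels, model_it, recorded_it, alpha=0.5):
    """Task loss plus a neural-alignment penalty (illustrative sketch).

    model_it / recorded_it: responses of the simulated "IT layer" and the
    biological IT population to the same images.
    """
    probs = softmax(task_logits)
    n = len(labels)
    task_loss = -np.log(probs[np.arange(n), labels]).mean()  # classification
    neural_loss = np.mean((model_it - recorded_it) ** 2)     # IT mismatch
    return task_loss + alpha * neural_loss
```

Minimizing a loss of this shape pulls the simulated layer toward the biological responses while the network still learns to solve the recognition task.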

After training the artificial model with biological data, DiCarlo’s team compared its activity to a similarly sized neural network model trained without neural data, using the standard approach for computer vision. They found that the new, biologically informed model IT layer was, as instructed, a better match for IT neural data. That is, for every image tested, the population of artificial IT neurons in the model responded more similarly to the corresponding population of biological IT neurons.


The researchers also found that the model IT was also a better match to IT neural data collected from another monkey, even though the model had never seen data from that animal, and even when that comparison was evaluated on that monkey’s IT responses to new images. This indicated that the team’s new, “neurally-aligned” computer model may be an improved model of the neurobiological function of the primate IT cortex — an interesting finding, given that it was previously unknown whether the amount of neural data that can be currently collected from the primate visual system is capable of directly guiding model development.

With their new computer model in hand, the team asked whether the “IT neural alignment” procedure also leads to any changes in the overall behavioral performance of the model. Indeed, they found that the neurally-aligned model was more human-like in its behavior — it tended to succeed in correctly categorizing objects in images for which humans also succeed, and it tended to fail when humans also fail.

Adversarial attacks

The team also found that the neurally-aligned model was more resistant to “adversarial attacks” that developers use to test computer vision and AI systems.  In computer vision, adversarial attacks introduce small distortions into images that are meant to mislead an artificial neural network.

“Say that you have an image that the model identifies as a cat. Because you have the knowledge of the internal workings of the model, you can then design very small changes in the image so that the model suddenly thinks it’s no longer a cat,” DiCarlo explains.
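The attack DiCarlo describes — using knowledge of the model’s internals to design tiny image changes — is commonly implemented as the fast gradient sign method (FGSM), which moves every pixel a small step in the direction that increases the model’s loss. Here is a toy sketch with a hypothetical linear "cat" classifier; the weights, image, and step size are invented for illustration.

```python
import numpy as np

def fgsm_perturb(image, grad_wrt_image, epsilon=0.05):
    """Fast Gradient Sign Method: shift each pixel by +/- epsilon in the
    direction that most increases the model's loss, keeping pixels in [0, 1]."""
    return np.clip(image + epsilon * np.sign(grad_wrt_image), 0.0, 1.0)

# Hypothetical linear classifier: score = w @ x, positive means "cat"
w = np.array([1.0, -1.0, 0.5])
x = np.array([0.6, 0.5, 0.5])        # original "image", scored as cat
# For this toy model, the gradient of a "make it not-cat" objective is -w
x_adv = fgsm_perturb(x, -w, epsilon=0.2)
```

Each pixel moves by at most 0.2, yet the classifier’s decision flips while the image is barely changed — exactly the fragility that the neurally aligned models turned out to resist better.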

These minor distortions don’t typically fool humans, but computer vision models struggle with these alterations. A person who looks at the subtly distorted cat still reliably and robustly reports that it’s a cat. But standard computer vision models are more likely to mistake the cat for a dog, or even a tree.

“There must be some internal differences in the way our brains process images that lead to our vision being more resistant to those kinds of attacks,” DiCarlo says. And indeed, the team found that when they made their model more neurally-aligned, it became more robust, correctly identifying more images in the face of adversarial attacks.  The model could still be fooled by stronger “attacks,” but so can people, DiCarlo says. His team is now exploring the limits of adversarial robustness in humans.

A few years ago, DiCarlo’s team found they could also improve a model’s resistance to adversarial attacks by designing the first layer of the artificial network to emulate the early visual processing layer in the brain. One key next step is to combine such approaches — making new models that are simultaneously neurally-aligned at multiple visual processing layers.

The new work is further evidence that an exchange of ideas between neuroscience and computer science can drive progress in both fields. “Everybody gets something out of the exciting virtuous cycle between natural/biological intelligence and artificial intelligence,” DiCarlo says. “In this case, computer vision and AI researchers get new ways to achieve robustness and neuroscientists and cognitive scientists get more accurate mechanistic models of human vision.”

This work was supported by the MIT-IBM Watson AI Lab, Semiconductor Research Corporation, DARPA, the Massachusetts Institute of Technology Shoemaker Fellowship, Office of Naval Research, the Simons Foundation, and Canada Research Chair Program.