What does the future hold for generative AI?

Speaking at the “Generative AI: Shaping the Future” symposium on Nov. 28, the kickoff event of MIT’s Generative AI Week, keynote speaker and iRobot co-founder Rodney Brooks warned attendees against uncritically overestimating the capabilities of this emerging technology, which underpins increasingly powerful tools like OpenAI’s ChatGPT and Google’s Bard.

“Hype leads to hubris, and hubris leads to conceit, and conceit leads to failure,” cautioned Brooks, who is also a professor emeritus at MIT, a former director of the Computer Science and Artificial Intelligence Laboratory (CSAIL), and founder of Robust.AI.

“No one technology has ever surpassed everything else,” he added.

The symposium, which drew hundreds of attendees from academia and industry to the Institute’s Kresge Auditorium, was laced with messages of hope about the opportunities generative AI offers for making the world a better place, including through art and creativity, interspersed with cautionary tales about what could go wrong if these AI tools are not developed responsibly.

Generative AI is a term that describes machine-learning models that learn to generate new material resembling the data they were trained on. These models have exhibited some incredible capabilities, such as the ability to produce human-like creative writing, translate languages, generate functional computer code, or craft realistic images from text prompts.

In her opening remarks to launch the symposium, MIT President Sally Kornbluth highlighted several projects faculty and students have undertaken to use generative AI to make a positive impact in the world. For example, the work of the Axim Collaborative, an online education initiative launched by MIT and Harvard, includes exploring the educational aspects of generative AI to help underserved students.

The Institute also recently announced seed grants for 27 interdisciplinary faculty research projects centered on how AI will transform people’s lives across society.

In hosting Generative AI Week, MIT hopes to not only showcase this type of innovation, but also generate “collaborative collisions” among attendees, Kornbluth said.

Collaboration involving academics, policymakers, and industry will be critical if we are to safely integrate a rapidly evolving technology like generative AI in ways that are humane and help humans solve problems, she told the audience.

“I honestly cannot think of a challenge more closely aligned with MIT’s mission. It is a profound responsibility, but I have every confidence that we can face it, if we face it head on and if we face it as a community,” she said.

While generative AI holds the potential to help solve some of the planet’s most pressing problems, the emergence of these powerful machine learning models has blurred the distinction between science fiction and reality, said CSAIL Director Daniela Rus in her opening remarks. It is no longer a question of whether we can make machines that produce new content, she said, but how we can use these tools to enhance businesses and ensure sustainability. 

“Today, we will discuss the possibility of a future where generative AI does not just exist as a technological marvel, but stands as a source of hope and a force for good,” said Rus, who is also the Andrew and Erna Viterbi Professor in the Department of Electrical Engineering and Computer Science.

But before the discussion dove deeply into the capabilities of generative AI, attendees were first asked to ponder their humanity, as MIT Professor Joshua Bennett read an original poem.

Bennett, a professor in the MIT Literature Section and Distinguished Chair of the Humanities, was asked to write a poem about what it means to be human, and drew inspiration from his daughter, who was born three weeks ago.

The poem told of his experiences as a boy watching Star Trek with his father and touched on the importance of passing traditions down to the next generation.

In his keynote remarks, Brooks set out to unpack some of the deep, scientific questions surrounding generative AI, as well as explore what the technology can tell us about ourselves.

To begin, he sought to dispel some of the mystery swirling around generative AI tools like ChatGPT by explaining the basics of how large language models work. ChatGPT, for instance, generates text one word at a time by determining what the next word should be in the context of what it has already written. While a human might write a story by thinking about entire phrases, ChatGPT only focuses on the next word, Brooks explained.

ChatGPT is built on GPT-3.5, a machine-learning model that has 175 billion parameters and has been exposed to billions of pages of text on the web during training. (The newest iteration, built on GPT-4, is even larger.) It learns correlations between words in this massive corpus of text and uses this knowledge to propose what word might come next when given a prompt.
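To make the next-word mechanism concrete, here is a minimal sketch in Python of autoregressive sampling. Everything in it (the toy vocabulary and the stand-in scoring function) is illustrative; a real model like GPT-3.5 computes its next-word scores with billions of learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary. A real large language model scores tens of thousands of
# tokens; this tiny list only illustrates the sampling loop.
vocab = ["the", "robot", "dreams", "of", "electric", "sheep", "."]

def next_word_logits(context):
    # Stand-in scoring function: mildly favors words not yet used.
    # A trained model would condition on the entire context instead.
    return np.array([0.0 if w in context else 1.0 for w in vocab])

def sample_next(context, temperature=1.0):
    logits = next_word_logits(context) / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()               # softmax over the vocabulary
    return rng.choice(vocab, p=probs)  # pick exactly one word at a time

context = ["the"]
for _ in range(6):
    context.append(sample_next(context))
print(" ".join(context))
```

Each pass through the loop conditions only on the text generated so far, which is the one-word-at-a-time behavior Brooks described.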

The model has demonstrated some incredible capabilities, such as the ability to write a sonnet about robots in the style of Shakespeare’s famous Sonnet 18. During his talk, Brooks showcased the sonnet he asked ChatGPT to write side-by-side with his own sonnet.

While researchers still don’t fully understand exactly how these models work, Brooks assured the audience that generative AI’s seemingly incredible capabilities are not magic, nor do they mean these models can do anything.

His biggest fears about generative AI don’t revolve around models that could someday surpass human intelligence. Rather, he is most worried about researchers who may throw away decades of excellent work that was nearing a breakthrough, just to jump on shiny new advancements in generative AI; venture capital firms that blindly swarm toward technologies that can yield the highest margins; or the possibility that a whole generation of engineers will forget about other forms of software and AI.

At the end of the day, those who believe generative AI can solve the world’s problems and those who believe it will only generate new problems have at least one thing in common: Both groups tend to overestimate the technology, he said.

“What is the conceit with generative AI? The conceit is that it is somehow going to lead to artificial general intelligence. By itself, it is not,” Brooks said.

Following Brooks’ presentation, a group of MIT faculty spoke about their work using generative AI and participated in a panel discussion about future advances, important but underexplored research topics, and the challenges of AI regulation and policy.

The panel consisted of Jacob Andreas, an associate professor in the MIT Department of Electrical Engineering and Computer Science (EECS) and a member of CSAIL; Antonio Torralba, the Delta Electronics Professor of EECS and a member of CSAIL; Ev Fedorenko, an associate professor of brain and cognitive sciences and an investigator at the McGovern Institute for Brain Research at MIT; and Armando Solar-Lezama, a Distinguished Professor of Computing and associate director of CSAIL. It was moderated by William T. Freeman, the Thomas and Gerd Perkins Professor of EECS and a member of CSAIL.

The panelists discussed several potential future research directions around generative AI, including the possibility of integrating perceptual systems, drawing on human senses like touch and smell, rather than focusing primarily on language and images. The researchers also spoke about the importance of engaging with policymakers and the public to ensure generative AI tools are produced and deployed responsibly.

“One of the big risks with generative AI today is the risk of digital snake oil. There is a big risk of a lot of products going out that claim to do miraculous things but in the long run could be very harmful,” Solar-Lezama said.

The morning session concluded with an excerpt from the 1925 science fiction novel “Metropolis,” read by senior Joy Ma, a physics and theater arts major, followed by a roundtable discussion on the future of generative AI. The discussion included Joshua Tenenbaum, a professor in the Department of Brain and Cognitive Sciences and a member of CSAIL; Dina Katabi, the Thuan and Nicole Pham Professor in EECS and a principal investigator in CSAIL and the MIT Jameel Clinic; and Max Tegmark, professor of physics. The roundtable was moderated by Daniela Rus.

One focus of the discussion was the possibility of developing generative AI models that can go beyond what we can do as humans, such as tools that can sense someone’s emotions by using electromagnetic signals to understand how a person’s breathing and heart rate are changing.

But one key to integrating AI like this into the real world safely is to ensure that we can trust it, Tegmark said. If we know an AI tool will meet the specifications we insist on, then “we no longer have to be afraid of building really powerful systems that go out and do things for us in the world,” he said.

Tuning the mind to benefit mental health

This story also appears in the Winter 2024 issue of BrainScan.

Mental health is the defining public health crisis of our time, according to U.S. Surgeon General Vivek Murthy, and the nation’s youth is at the center of this crisis.

Psychiatrists and pediatricians have sounded an alarm. The mental health of youth in the United States is worsening. Youth visits to emergency departments related to depression, anxiety, and behavioral challenges have been on the rise for years. Suicide rates among young people have escalated, too. Researchers have tracked these trends for more than a decade, and the Covid-19 pandemic only exacerbated the situation.

“It’s all over the news, how shockingly common mental health difficulties are,” says John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology at MIT and an investigator at the McGovern Institute. “It’s worsening by every measure.”

Experts worry that our mental health systems are inadequate to meet the growing need. “This has gone from bad to catastrophic, from my perspective,” says Susan Whitfield-Gabrieli, a professor of psychology at Northeastern University and a research affiliate at the McGovern Institute.

“We really need to come up with novel interventions that target the neural mechanisms that we believe potentiate depression and anxiety.”

Training the brain

One approach may be to help young people learn to modulate some of the relevant brain circuitry themselves. Evidence is accumulating that practicing mindfulness — focusing awareness on the present, typically through meditation — can change patterns of brain activity associated with emotions and mental health.

“There’s been a steady flow of moderate-size studies showing that when you help people gain mindfulness through training programs, you get all kinds of benefits in terms of people feeling less stress, less anxiety, fewer negative emotions, and sometimes more positive ones as well,” says Gabrieli, who is also a professor of brain and cognitive sciences at MIT. “Those are the things you wish for people.”

“If there were a medicine with as much evidence of its effectiveness as mindfulness, it would be flying off the shelves of every pharmacy.”
– John Gabrieli

Researchers have even begun testing mindfulness-based interventions head-to-head against standard treatments for psychiatric disorders. The results of recent studies involving hundreds of adults with anxiety disorders or depression are encouraging. “It’s just as good as the best medicines and the best behavioral treatments that we know a ton about,” Gabrieli says.

Much mindfulness research has focused on adults, but promising data about the benefits of mindfulness training for children and adolescents is emerging as well. In studies supported by the McGovern Institute’s Poitras Center for Psychiatric Disorders Research in 2019 and 2020, Gabrieli and Whitfield-Gabrieli found that sixth-graders in a Boston middle school who participated in eight weeks of mindfulness training experienced reductions in feelings of stress and increases in sustained attention. More recently, Gabrieli and Whitfield-Gabrieli’s teams have shown how new tools can support mindfulness training and make it accessible to more children and their families — from a smartphone app that can be used anywhere to real-time neurofeedback inside an MRI scanner.

Isaac Treves (center), a PhD student in the lab of John Gabrieli, is the lead author of two studies which found that mindfulness training may improve children’s mental health. Treves and his co-authors Kimberly Wang (left) and Cindy Li (right) also practice mindfulness in their daily lives. Photo: Steph Stevens

Mindfulness and mental health

Mindfulness is not just a practice, it is a trait — an open, non-judgmental way of attending to experiences that some people exhibit more than others. By assessing individuals’ mindfulness with questionnaires that ask about attention and awareness, researchers have found the trait is associated with many measures of mental health. Gabrieli and his team measured mindfulness in children between the ages of eight and ten and found it was highest in those who were most emotionally resilient to the stress they experienced during the Covid-19 pandemic. As the team reported this year in the journal PLOS One, children who were more mindful rated the impact of the pandemic on their own lives lower than other participants in the study. They also reported lower levels of stress, anxiety, and depression.

Breathe in, breathe out: Children enrolled in John Gabrieli’s mindfulness study learned to trace the outline of their fingers in rhythm with their in-and-out breathing pattern. This multisensory breathing technique has been shown to relieve anxiety and relax the body.

Mindfulness doesn’t come naturally to everyone, but brains are malleable, and both children and adults can cultivate mindfulness with training and practice. In their studies of middle schoolers, Gabrieli and Whitfield-Gabrieli showed that the emotional effects of mindfulness training corresponded to measurable changes in the brain: Functional MRI scans revealed changes in regions involved in stress, negative feelings, and focused attention.

Whitfield-Gabrieli says if mindfulness training makes kids more resilient, it could be a valuable tool for managing symptoms of anxiety and depression before they become severe. “I think it should be part of the standard school day,” she says. “I think we would have a much happier, healthier society if we could be doing this from the ground up.”

Data from Gabrieli’s lab suggests broadly implementing mindfulness training might even pay off in terms of academic achievement. His team found in a 2019 study that middle school students who reported greater levels of mindfulness had, on average, better grades, better scores on standardized tests, fewer absences, and fewer school suspensions than their peers.

Some schools have begun making mindfulness programs available to their students. But those programs don’t reach everyone, and their type and quality vary tremendously. Indeed, not every study of mindfulness training in schools has found the program to significantly benefit participants, which may be because not every approach to mindfulness training is equally effective.

“This is where I think the science matters,” Gabrieli says. “You have to find out what kinds of supports really work and you have to execute them reasonably.”

A recent report from Gabrieli’s lab offers encouraging news: Mindfulness training doesn’t have to be in-person. Gabrieli and his team found that children can benefit from practicing mindfulness at home with the help of an app.

When the pandemic closed schools in 2020, school-based mindfulness programs came to an abrupt halt. Soon thereafter, a group called Inner Explorer developed a smartphone app that could teach children mindfulness at home. Gabrieli and his team were eager to find out if this easy-access tool could effectively support children’s emotional well-being.

In October of this year, they reported in the journal Mindfulness that after 40 days of app use, children between the ages of eight and ten reported less stress than they had before beginning mindfulness training. Parents reported that their children were also experiencing fewer negative emotions, such as loneliness and fear.

The outcomes suggest a path toward making evidence-based mindfulness training for children broadly accessible. “Tons of people could do this,” says Gabrieli. “It’s super scalable. It doesn’t cost money; you don’t have to go somewhere. We’re very excited about that.”

Visualizing healthy minds

Mindfulness training may be even more effective when practitioners can visualize what’s happening in their brains. In Whitfield-Gabrieli’s lab, teenagers have had a chance to slide inside an MRI scanner and watch their brain activity shift in real time as they practiced mindfulness meditation. The visualization they see focuses on the brain’s default mode network (DMN), which is most active when attention is not focused on a particular task. Certain patterns of activity in the DMN have been linked to depression, anxiety, and other psychiatric conditions, and mindfulness training may help break these patterns.

McGovern research affiliate Susan Whitfield-Gabrieli in the Martinos Imaging Center. Photo: Caitlin Cunningham

Whitfield-Gabrieli explains that when the mind is free to wander, two hubs of the DMN become active. “Typically, that means we’re engaged in some kind of mental time travel,” she says. That might mean reminiscing about the past or planning for the future, but can be more distressing when it turns into obsessive rumination or worry. In people with anxiety, depression, and psychosis, these network hubs are often hyperconnected.

“It’s almost as if they’re hijacked,” Whitfield-Gabrieli says. “The more they’re correlated, the more psychopathology one might be experiencing. We wanted to unlock that hyperconnectivity for kids who are suffering from depression and anxiety.” She hoped that by replacing thoughts of the past and the future with focus on the present, mindfulness meditation would rein in overactive DMNs, and she wanted a way to encourage kids to do exactly that.

The neurofeedback tool that she and her colleagues created focuses on the DMN as well as a separate brain region that is called on during attention-demanding tasks. Activity in those regions is monitored with functional MRI and displayed to users in a game-like visualization. Inside the scanner, participants see how that activity changes as they focus on meditation or when their mind wanders. As their mind becomes more focused on the present moment, changes in brain activity move a ball toward a target.
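As a rough illustration of the feedback mapping described above, the sketch below turns two hypothetical per-scan activity estimates (one for the DMN, one for an attention network) into a one-dimensional ball position. The signal names and values are assumptions for illustration; the study’s actual real-time fMRI pipeline is far more involved.

```python
def ball_position(dmn_activity, attention_activity, gain=1.0):
    """Map relative network activity to a 1-D ball position in [-1, 1].

    Negative values move the ball away from the target (mind-wandering,
    DMN-dominated); positive values move it toward the target (focused).
    """
    focus_signal = attention_activity - dmn_activity
    return max(-1.0, min(1.0, gain * focus_signal))

# Example: as focus increases across successive scans, the ball
# approaches the target.
for dmn, att in [(0.9, 0.1), (0.6, 0.4), (0.3, 0.7), (0.1, 0.9)]:
    print(f"DMN={dmn:.1f} attention={att:.1f} -> ball at {ball_position(dmn, att):+.2f}")
```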

Whitfield-Gabrieli says the real-time feedback was motivating for adolescents who participated in a recent study, who all had histories of anxiety or depression. “They’re training their brain to tune their mind, and they love it,” she says.

The default mode network (DMN) is a large-scale brain network that is active when a person is not focused on the outside world and the brain is at wakeful rest. The DMN is often over-engaged in adolescents with depression and anxiety, as well as teens at risk for these affective disorders (left). DMN activation and connectivity can be “tuned” to a healthier state through the practice of mindfulness (right).

In March, she and her team reported in Molecular Psychiatry that the neurofeedback tool helped those study participants reduce connectivity in the DMN and engage a more desirable brain state. It’s not the first success the team has had with the approach. Previously, they found that the decreases in DMN connectivity brought about by mindfulness meditation with neurofeedback were associated with reduced hallucinations for patients with schizophrenia. Testing the clinical benefits of the approach in teens is on the horizon; Whitfield-Gabrieli and her collaborators plan to investigate how mindfulness meditation with real-time neurofeedback affects depression symptoms in an upcoming clinical trial.

Whitfield-Gabrieli emphasizes that the neurofeedback is a training tool, helping users improve mindfulness techniques they can later call on anytime, anywhere. While that training currently requires time inside an MRI scanner, she says it may be possible to create an EEG-based version of the approach, which could be deployed in doctors’ offices and other more accessible settings.

Both Gabrieli and Whitfield-Gabrieli continue to explore how mindfulness training impacts different aspects of mental health, in both children and adults and with a range of psychiatric conditions. Whitfield-Gabrieli expects it will be one powerful tool for combating a youth mental health crisis for which there will be no single solution. “I think it’s going to take a village,” she says. “We are all going to have to work together, and we’ll have to come up with some really innovative ways to help.”

A new way to see the activity inside a living cell

Living cells are bombarded with many kinds of incoming molecular signals that influence their behavior. Being able to measure those signals, and how cells respond to them through downstream molecular signaling networks, could help scientists learn much more about how cells work, including what happens as they age or become diseased.

Right now, this kind of comprehensive study is not possible because current techniques for imaging cells are limited to just a handful of different molecule types within a cell at one time. However, MIT researchers have developed an alternative method that allows them to observe up to seven different molecules at a time, and potentially even more.

“There are many examples in biology where an event triggers a long downstream cascade of events, which then causes a specific cellular function,” says Edward Boyden, the Y. Eva Tan Professor in Neurotechnology. “How does that occur? It’s arguably one of the fundamental problems of biology, and so we wondered, could you simply watch it happen?”


The new approach makes use of green or red fluorescent molecules that flicker on and off at different rates. By imaging a cell over several seconds, minutes, or hours, and then extracting each of the fluorescent signals using a computational algorithm, the amount of each target protein can be tracked as it changes over time.

Boyden, who is also a professor of biological engineering and of brain and cognitive sciences at MIT, a Howard Hughes Medical Institute investigator, and a member of MIT’s McGovern Institute for Brain Research and Koch Institute for Integrative Cancer Research, as well as the co-director of the K. Lisa Yang Center for Bionics, is the senior author of the study, which appears today in Cell. MIT postdoc Yong Qian is the lead author of the paper.

Fluorescent signals

Labeling molecules inside cells with fluorescent proteins has allowed researchers to learn a great deal about the functions of many cellular molecules. This type of study is often done with green fluorescent protein (GFP), which was first deployed for imaging in the 1990s. Since then, several fluorescent proteins that glow in other colors have been developed for experimental use.

However, a typical light microscope can only distinguish two or three of these colors, allowing researchers only a tiny glimpse of the overall activity that is happening inside a cell. If they could track a greater number of labeled molecules, researchers could measure a brain cell’s response to different neurotransmitters during learning, for example, or investigate the signals that prompt a cancer cell to metastasize.

“Ideally, you would be able to watch the signals in a cell as they fluctuate in real time, and then you could understand how they relate to each other. That would tell you how the cell computes,” Boyden says. “The problem is that you can’t watch very many things at the same time.”

In 2020, Boyden’s lab developed a way to simultaneously image up to five different molecules within a cell, by targeting glowing reporters to distinct locations inside the cell. This approach, known as “spatial multiplexing,” allows researchers to distinguish signals for different molecules even though they may all be fluorescing the same color.

In the new study, the researchers took a different approach: Instead of distinguishing signals based on their physical location, they created fluorescent signals that vary over time. The technique relies on “switchable fluorophores” — fluorescent proteins that turn on and off at a specific rate. For this study, Boyden and his group members identified four green switchable fluorophores, and then engineered two more, all of which turn on and off at different rates. They also identified two red fluorescent proteins that switch at different rates, and engineered one additional red fluorophore.

Using four switchable fluorophores, MIT researchers were able to label and image four different kinases inside these cells (top four rows). In the bottom row, the cell nuclei are labeled in blue.
Image: Courtesy of the researchers

Each of these switchable fluorophores can be used to label a different type of molecule within a living cell, such as an enzyme, a signaling protein, or part of the cell cytoskeleton. After imaging the cell for several minutes, hours, or even days, the researchers use a computational algorithm to pick out the specific signal from each fluorophore, analogous to how the human ear can pick out different frequencies of sound.

“In a symphony orchestra, you have high-pitched instruments, like the flute, and low-pitched instruments, like a tuba. And in the middle are instruments like the trumpet. They all have different sounds, and our ear sorts them out,” Boyden says.

The mathematical technique that the researchers used to analyze the fluorophore signals is known as linear unmixing. This method can extract the signal of each fluorophore from the combined recording, much as a Fourier transform separates a piece of music into its component pitches.
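The sketch below shows linear unmixing in miniature, assuming hypothetical blinking signatures rather than the study’s measured ones: each fluorophore’s on-off pattern over time forms one column of a matrix, and a least-squares solve recovers how much of each fluorophore contributed to the observed signal.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical temporal signatures: each switchable fluorophore blinks at
# its own rate. Rows = time points, columns = fluorophores. The rates and
# amplitudes are illustrative, not measured values from the study.
t = np.arange(200)
signatures = np.stack([
    (np.sin(2 * np.pi * f * t / len(t)) > 0).astype(float)  # square-wave blinking
    for f in (3, 7, 13)
], axis=1)

true_abundances = np.array([2.0, 0.5, 1.2])   # amounts of each labeled protein
observed = signatures @ true_abundances + 0.05 * rng.standard_normal(len(t))

# Linear unmixing: solve the least-squares problem observed ~ signatures @ x.
estimated, *_ = np.linalg.lstsq(signatures, observed, rcond=None)
print("true:     ", true_abundances)
print("estimated:", estimated.round(2))
```

Because the blinking rates differ, the columns of the signature matrix are distinguishable, and the solve separates signals that all fluoresce the same color.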

Once this analysis is complete, the researchers can see when and where each of the fluorescently labeled molecules were found in the cell during the entire imaging period. The imaging itself can be done with a simple light microscope, with no specialized equipment required.

Biological phenomena

In this study, the researchers demonstrated their approach by labeling six different molecules involved in the cell division cycle in mammalian cells. This allowed them to identify patterns in how the levels of enzymes called cyclin-dependent kinases change as a cell progresses through the cell cycle.

The researchers also showed that they could label other types of kinases, which are involved in nearly every aspect of cell signaling, as well as cell structures and organelles such as the cytoskeleton and mitochondria. In addition to their experiments using mammalian cells grown in a lab dish, the researchers showed that this technique could work in the brains of zebrafish larvae.

This method could be useful for observing how cells respond to any kind of input, such as nutrients, immune system factors, hormones, or neurotransmitters, according to the researchers. It could also be used to study how cells respond to changes in gene expression or genetic mutations. All of these factors play important roles in biological phenomena such as growth, aging, cancer, neurodegeneration, and memory formation.

“You could consider all of these phenomena to represent a general class of biological problem, where some short-term event — like eating a nutrient, learning something, or getting an infection — generates a long-term change,” Boyden says.

In addition to pursuing those types of studies, Boyden’s lab is also working on expanding the repertoire of switchable fluorophores so that they can study even more signals within a cell. They also hope to adapt the system so that it could be used in mouse models.

The research was funded by an Alana Fellowship, K. Lisa Yang, John Doerr, Jed McCaleb, James Fickel, Ashar Aziz, the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics at MIT, the Howard Hughes Medical Institute, and the National Institutes of Health.

Search algorithm reveals nearly 200 new kinds of CRISPR systems

Microbial sequence databases contain a wealth of information about enzymes and other molecules that could be adapted for biotechnology. But these databases have grown so large in recent years that they’ve become difficult to search efficiently for enzymes of interest.

Now, scientists at the Broad Institute of MIT and Harvard, the McGovern Institute for Brain Research at MIT, and the National Center for Biotechnology Information (NCBI) at the National Institutes of Health have developed a new search algorithm that has identified 188 new kinds of rare CRISPR systems in bacterial genomes, encompassing thousands of individual systems. The work appears today in Science.

The algorithm, which comes from the lab of CRISPR pioneer Feng Zhang, uses big-data clustering approaches to rapidly search massive amounts of genomic data. The team used their algorithm, called Fast Locality-Sensitive Hashing-based clustering (FLSHclust), to mine three major public databases that contain data from a wide range of unusual bacteria, including ones found in coal mines, breweries, Antarctic lakes, and dog saliva. The scientists found a surprising number and diversity of CRISPR systems, including ones that could make edits to DNA in human cells, others that can target RNA, and many with a variety of other functions.

The new systems could potentially be harnessed to edit mammalian cells with fewer off-target effects than current Cas9 systems. They could also one day be used as diagnostics or serve as molecular records of activity inside cells.

The researchers say their search highlights an unprecedented level of diversity and flexibility of CRISPR and that there are likely many more rare systems yet to be discovered as databases continue to grow.

“Biodiversity is such a treasure trove, and as we continue to sequence more genomes and metagenomic samples, there is a growing need for better tools, like FLSHclust, to search that sequence space to find the molecular gems,” said Zhang, a co-senior author on the study and a core institute member at the Broad.

Zhang is also an investigator at the McGovern Institute for Brain Research at MIT, the James and Patricia Poitras Professor of Neuroscience at MIT with joint appointments in the departments of Brain and Cognitive Sciences and Biological Engineering, and an investigator at the Howard Hughes Medical Institute. Eugene Koonin, a distinguished investigator at the NCBI, is co-senior author on the study as well.

Searching for CRISPR

CRISPR, which stands for Clustered Regularly Interspaced Short Palindromic Repeats, is a bacterial defense system that has been engineered into many tools for genome editing and diagnostics.

To mine databases of protein and nucleic acid sequences for novel CRISPR systems, the researchers developed an algorithm based on an approach borrowed from the big data community. This technique, called locality-sensitive hashing, clusters together objects that are similar but not exactly identical. Using this approach allowed the team to probe billions of protein and DNA sequences — from the NCBI, its Whole Genome Shotgun database, and the Joint Genome Institute — in weeks, whereas previous methods that look for identical objects would have taken months. They designed their algorithm to look for genes associated with CRISPR.
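As a rough sketch of the general idea (not the published FLSHclust code), the example below uses MinHash-style locality-sensitive hashing to bucket similar DNA sequences by their shared k-mers, so near-duplicates can be grouped without comparing every pair. All sequence names, sequences, and parameters are made up for illustration.

```python
import hashlib
from collections import defaultdict

def kmers(seq, k=4):
    # Break a sequence into its overlapping k-letter substrings.
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def minhash_signature(kmer_set, num_hashes=16):
    # One value per salted hash function: the minimum hash over all
    # k-mers. Similar sets share many minima with high probability.
    return tuple(
        min(int(hashlib.sha1(f"{salt}:{kmer}".encode()).hexdigest(), 16)
            for kmer in kmer_set)
        for salt in range(num_hashes)
    )

def candidate_groups(seqs, band_size=2):
    # Band the signatures: sequences that agree on any band land in the
    # same bucket, so similar pairs surface without all-pairs comparison.
    buckets = defaultdict(set)
    for name, seq in seqs.items():
        sig = minhash_signature(kmers(seq))
        for b in range(0, len(sig), band_size):
            buckets[(b, sig[b:b + band_size])].add(name)
    return [names for names in buckets.values() if len(names) > 1]

seqs = {
    "geneA":         "ATGGCGTACGTTAGC",
    "geneA_variant": "ATGGCGTACGTTAGG",  # one base changed
    "geneB":         "TTTTCCCCGGGGAAAA",
}
# geneA and its variant will very likely share a bucket; geneB will not.
print(candidate_groups(seqs))
```

The payoff is speed: hashing is linear in the number of sequences, which is why this family of methods can sweep billions of entries in weeks rather than months.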

“This new algorithm allows us to parse through data in a time frame that’s short enough that we can actually recover results and make biological hypotheses,” said Soumya Kannan, who is a co-first author on the study. Kannan was a graduate student in Zhang’s lab when the study began and is currently a postdoctoral researcher and Junior Fellow at Harvard University. Han Altae-Tran, a graduate student in Zhang’s lab during the study and currently a postdoctoral researcher at the University of Washington, was the study’s other co-first author.

“This is a testament to what you can do when you improve on the methods for exploration and use as much data as possible,” said Altae-Tran. “It’s really exciting to be able to improve the scale at which we search.”

New systems

In their analysis, Altae-Tran, Kannan, and their colleagues noticed that the thousands of CRISPR systems they found fell into a few existing and many new categories. They studied several of the new systems in greater detail in the lab.

They found several new variants of known Type I CRISPR systems, which use a guide RNA that is 32 nucleotides long rather than the 20-nucleotide guide of Cas9. Because of their longer guide RNAs, these Type I systems could potentially be used to develop more precise gene-editing technology that is less prone to off-target editing. Zhang’s team showed that two of these systems could make short edits in the DNA of human cells. And because these Type I systems are similar in size to CRISPR-Cas9, they could likely be delivered to cells in animals or humans using the same gene-delivery technologies being used today for CRISPR.

One of the Type I systems also showed “collateral activity” — broad degradation of nucleic acids after the CRISPR protein binds its target. Scientists have used similar systems to make infectious disease diagnostics such as SHERLOCK, a tool capable of rapidly sensing a single molecule of DNA or RNA. Zhang’s team thinks the new systems could be adapted for diagnostic technologies as well.

The researchers also uncovered new mechanisms of action for some Type IV CRISPR systems, and a Type VII system that precisely targets RNA, which could potentially be used in RNA editing. Other systems could potentially be used as recording tools — a molecular document of when a gene was expressed — or as sensors of specific activity in a living cell.

Mining data

The scientists say their algorithm could aid in the search for other biochemical systems. “This search algorithm could be used by anyone who wants to work with these large databases for studying how proteins evolve or discovering new genes,” Altae-Tran said.

The researchers add that their findings illustrate not only how diverse CRISPR systems are, but also that most are rare and only found in unusual bacteria. “Some of these microbial systems were exclusively found in water from coal mines,” Kannan said. “If someone hadn’t been interested in that, we may never have seen those systems. Broadening our sampling diversity is really important to continue expanding the diversity of what we can discover.”

This work was supported by the Howard Hughes Medical Institute; K. Lisa Yang and Hock E. Tan Molecular Therapeutics Center at MIT; Broad Institute Programmable Therapeutics Gift Donors; The Pershing Square Foundation, William Ackman and Neri Oxman; James and Patricia Poitras; BT Charitable Foundation; Asness Family Foundation; Kenneth C. Griffin; the Phillips family; David Cheng; and Robert Metcalfe.

The brain may learn about the world the same way some computational models do

To make our way through the world, our brain must develop an intuitive understanding of the physical world around us, which we then use to interpret sensory information coming into the brain.

How does the brain develop that intuitive understanding? Many scientists believe that it may use a process similar to what’s known as “self-supervised learning.” This type of machine learning, originally developed as a way to create more efficient models for computer vision, allows computational models to learn about visual scenes based solely on the similarities and differences between them, with no labels or other information.

A pair of studies from researchers at the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center at MIT offers new evidence supporting this hypothesis. The researchers found that when they trained models known as neural networks using a particular type of self-supervised learning, the resulting models generated activity patterns very similar to those seen in the brains of animals that were performing the same tasks as the models.

The findings suggest that these models are able to learn representations of the physical world that they can use to make accurate predictions about what will happen in that world, and that the mammalian brain may be using the same strategy, the researchers say.

“The theme of our work is that AI designed to help build better robots ends up also being a framework to better understand the brain more generally,” says Aran Nayebi, a postdoc in the ICoN Center. “We can’t say if it’s the whole brain yet, but across scales and disparate brain areas, our results seem to be suggestive of an organizing principle.”

Nayebi is the lead author of one of the studies, co-authored with Rishi Rajalingham, a former MIT postdoc now at Meta Reality Labs, and senior authors Mehrdad Jazayeri, an associate professor of brain and cognitive sciences and a member of the McGovern Institute for Brain Research; and Robert Yang, an assistant professor of brain and cognitive sciences and an associate member of the McGovern Institute. Ila Fiete, director of the ICoN Center, a professor of brain and cognitive sciences, and an associate member of the McGovern Institute, is the senior author of the other study, which was co-led by Mikail Khona, an MIT graduate student, and Rylan Schaeffer, a former senior research associate at MIT.

Both studies will be presented at the 2023 Conference on Neural Information Processing Systems (NeurIPS) in December.

Modeling the physical world

Early models of computer vision mainly relied on supervised learning. Using this approach, models are trained to classify images that are each labeled with a name — cat, car, etc. The resulting models work well, but this type of training requires a great deal of human-labeled data.

To create a more efficient alternative, in recent years researchers have turned to models built through a technique known as contrastive self-supervised learning. This type of learning allows an algorithm to learn to classify objects based on how similar they are to each other, with no external labels provided.

“This is a very powerful method because you can now leverage very large modern data sets, especially videos, and really unlock their potential,” Nayebi says. “A lot of the modern AI that you see now, especially in the last couple years with ChatGPT and GPT-4, is a result of training a self-supervised objective function on a large-scale dataset to obtain a very flexible representation.”

These types of models, also called neural networks, consist of thousands or millions of processing units connected to each other. Each unit has connections of varying strengths to other units in the network. As the network analyzes huge amounts of data, the strengths of those connections change as the network learns to perform the desired task.

As the model performs a particular task, the activity patterns of different units within the network can be measured. Each unit’s activity can be represented as a firing pattern, similar to the firing patterns of neurons in the brain. Previous work from Nayebi and others has shown that self-supervised models of vision generate activity similar to that seen in the visual processing system of mammalian brains.

In both of the new NeurIPS studies, the researchers set out to explore whether self-supervised computational models of other cognitive functions might also show similarities to the mammalian brain. In the study led by Nayebi, the researchers trained self-supervised models to predict the future state of their environment across hundreds of thousands of naturalistic videos depicting everyday scenarios.

“For the last decade or so, the dominant method to build neural network models in cognitive neuroscience is to train these networks on individual cognitive tasks. But models trained this way rarely generalize to other tasks,” Yang says. “Here we test whether we can build models for some aspect of cognition by first training on naturalistic data using self-supervised learning, then evaluating in lab settings.”

Once the model was trained, the researchers had it generalize to a task they call “Mental-Pong.” This is similar to the video game Pong, where a player moves a paddle to hit a ball traveling across the screen. In the Mental-Pong version, the ball disappears shortly before hitting the paddle, so the player has to estimate its trajectory in order to hit the ball.
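A deliberately naive sketch of the occluded-ball problem: once the ball disappears, its position can be extrapolated by assuming constant velocity. The function and values below are hypothetical stand-ins; the trained models solve a far richer, learned version of this task.

```python
def extrapolate_ball(last_pos, velocity, steps):
    """Predict the hidden ball's path by dead reckoning: assume it keeps
    its last observed velocity after it vanishes (no bounces, no noise)."""
    x, y = last_pos
    vx, vy = velocity
    return [(x + vx * s, y + vy * s) for s in range(1, steps + 1)]

# Last visible state of the ball, then the occluded trajectory estimate.
print(extrapolate_ball(last_pos=(0.6, 0.3), velocity=(0.05, 0.02), steps=4))
```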

The researchers found that the model was able to track the hidden ball’s trajectory with accuracy similar to that of neurons in the mammalian brain, which a previous study by Rajalingham and Jazayeri had shown can simulate the ball’s trajectory, a cognitive phenomenon known as “mental simulation.” Furthermore, the neural activation patterns seen within the model were similar to those seen in the brains of animals as they played the game — specifically, in a part of the brain called the dorsomedial frontal cortex. No other class of computational model has been able to match the biological data as closely as this one, the researchers say.

“There are many efforts in the machine learning community to create artificial intelligence,” Jazayeri says. “The relevance of these models to neurobiology hinges on their ability to additionally capture the inner workings of the brain. The fact that Aran’s model predicts neural data is really important as it suggests that we may be getting closer to building artificial systems that emulate natural intelligence.”

Navigating the world

The study led by Khona, Schaeffer, and Fiete focused on a type of specialized neurons known as grid cells. These cells, located in the entorhinal cortex, help animals to navigate, working together with place cells located in the hippocampus.

While place cells fire whenever an animal is in a specific location, grid cells fire only when the animal is at one of the vertices of a triangular lattice. Groups of grid cells create overlapping lattices of different sizes, which allows them to encode a large number of positions using a relatively small number of cells.

In recent studies, researchers have trained supervised neural networks to mimic grid cell function by predicting an animal’s next location based on its starting point and velocity, a task known as path integration. However, these models hinged on access to privileged information about absolute space at all times — information that the animal does not have.
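For reference, the path integration task itself is simple to state: position is the running integral of velocity. The sketch below solves it exactly by dead reckoning, whereas the networks in these studies must learn it from velocity sequences alone (and, in the self-supervised case, learn an efficient code for it). The start point and velocities are arbitrary examples.

```python
import numpy as np

def path_integrate(start, velocities, dt=0.1):
    """Dead-reckoning solution to path integration: position is the
    cumulative sum of velocity inputs over time (solved exactly here,
    rather than learned by a network)."""
    return start + np.cumsum(np.asarray(velocities) * dt, axis=0)

start = np.array([0.0, 0.0])
velocities = [(1.0, 0.0), (1.0, 0.5), (0.0, 1.0), (-0.5, 1.0)]
print(path_integrate(start, velocities))  # one 2-D position per time step
```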

Inspired by the striking coding properties of the multiperiodic grid-cell code for space, the MIT team trained a contrastive self-supervised model to both perform this same path integration task and represent space efficiently while doing so. For the training data, they used sequences of velocity inputs. The model learned to distinguish positions based on whether they were similar or different: nearby positions generated similar codes, but more distant positions generated dissimilar codes.

“It’s similar to training models on images, where if two images are both heads of cats, their codes should be similar, but if one is the head of a cat and one is a truck, then you want their codes to repel,” Khona says. “We’re taking that same idea but applying it to spatial trajectories.”
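A minimal sketch of the contrastive objective Khona describes, using an InfoNCE-style loss on toy embedding vectors: each anchor is pulled toward its own positive and pushed away from everyone else’s. The vectors and dimensions here are arbitrary stand-ins, not the study’s actual training setup.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """Contrastive (InfoNCE-style) loss on rows of embedding vectors:
    the matching pair sits on the diagonal of the similarity matrix."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature              # pairwise cosine similarities
    logits -= logits.max(axis=1, keepdims=True) # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))         # reward the correct pairing

rng = np.random.default_rng(0)
z = rng.standard_normal((8, 32))
loss_aligned = info_nce_loss(z, z + 0.01 * rng.standard_normal((8, 32)))
loss_random = info_nce_loss(z, rng.standard_normal((8, 32)))
print(f"aligned pairs: {loss_aligned:.3f}  random pairs: {loss_random:.3f}")
```

Applied to trajectories instead of images, "anchor" and "positive" would be codes for nearby positions, which is the substitution the team made.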

Once the model was trained, the researchers found that the activation patterns of the nodes within the model formed several lattice patterns with different periods, very similar to those formed by grid cells in the brain.

“What excites me about this work is that it makes connections between mathematical work on the striking information-theoretic properties of the grid cell code and the computation of path integration,” Fiete says. “While the mathematical work was analytic — what properties does the grid cell code possess? — the approach of optimizing coding efficiency through self-supervised learning and obtaining grid-like tuning is synthetic: It shows what properties might be necessary and sufficient to explain why the brain has grid cells.”

The research was funded by the K. Lisa Yang ICoN Center, the National Institutes of Health, the Simons Foundation, the McKnight Foundation, the McGovern Institute, and the Helen Hay Whitney Foundation.

A multifunctional tool for cognitive neuroscience

A team of researchers at MIT’s McGovern and Picower Institutes has advanced the clinical potential of a thin, flexible fiber designed to simultaneously monitor and manipulate neural activity at targeted sites in the brain. The collaborative team improved upon an earlier model of the multifunctional fiber, developed in the lab of McGovern Institute Associate Investigator Polina Anikeeva, to explore dynamic changes to neural signaling as large animals engage in a working memory task. The results appear Oct. 6 in Science Advances.

The new device, developed by Indie Garwood, who recently received her PhD in the Harvard-MIT Program in Health Sciences and Technology, includes four microelectrodes for detecting neural activity and two microfluidic channels through which drugs can be delivered. This means scientists can deliver a drug that alters neural signaling within a particular part of the brain, then monitor the consequences for local brain activity. The technology was a collaboration between Anikeeva, who is also the Matoula S. Salapatas Professor in Materials Science and Engineering and a professor of brain and cognitive sciences, and Picower Institute investigators Emery Brown and Earl Miller, who jointly supervised Garwood in developing a multifunctional neurotechnology for larger, translational animal models, which are necessary to investigate the neural circuits that underlie high-level cognitive functions. With further development and testing, similar devices might one day be deployed to diagnose or treat brain disorders in human patients.

Brown is the Edward Hood Taplin Professor of Medical Engineering and Computational Neuroscience in the Picower Institute, the Institute for Medical Engineering and Science, and the Department of Brain and Cognitive Sciences, as well as an anesthesiologist at Massachusetts General Hospital and Harvard Medical School. Miller is the Picower Professor of Neuroscience and a professor of brain and cognitive sciences at MIT.

The new multifunctional fiber is not the first produced by Anikeeva and her team. An earlier model engineered in their lab has already reached the neuroscience community, whose members use it to simultaneously monitor and manipulate neural activity in the brains of mice and rats. But for studies in larger animals, the existing tools for delivering drugs to the brain were rigid, bulky devices that were fragile and prone to causing tissue damage. A better tool was needed, both to advance cognitive neuroscience research and to set the stage for developing devices that can deliver drugs directly to the brains of patients and monitor the effects.

Like the devices that Anikeeva’s team designed for rodent studies, the new tool is created by first assembling a larger version of the fiber — a preform cylinder with multiple channels that is then heated and stretched until it is thin and long. As the channels narrow, microelectrodes are incorporated into the fiber. The final step is to link the electrodes in the fiber to a connector that will relay data collected inside the brain to a unit in the lab.

The final device is long enough to access areas deep in the brain of a large animal. It is built to withstand rigorous sterilization procedures and to stay in place even in an active animal. And it integrates directly with experimental systems that cognitive neuroscientists already use in their labs. “We really wanted this to be something that we could easily hand somebody and they’re going to know how to implement it in their system,” says Garwood, who led development of the device as a graduate student in Anikeeva’s lab.

Once the new device was developed, Garwood and colleagues in the Miller and Brown labs put it to work. They used the tool to study changes in neural activity as an animal completed a task requiring working memory. The fluid channels in the fiber were used to deliver small amounts of GABA, a neurotransmitter that dampens neuronal activity, to the animal’s premotor cortex, a part of the brain that helps plan movement. At the same time, the device recorded electrical activity from individual neurons, as well as broader patterns of activity in this part of the brain. By monitoring these signals over time, the team learned how neural circuits adapted to the local inhibition they had applied. In another experiment, the team used the device to record neural activity from the putamen, a region deep in the brain involved in reward processing and motivation.

The data collected by the device was extensive and complex, tracking changes that unfolded in the brain over seconds to hours. Interpreting those data required the team to devise new methods of data analysis, which Garwood worked on closely with the Brown lab. Garwood says these methods will be shared with users of the new devices, providing “a roadmap for extracting all of these rich dynamics that you can get out of them.”

These successes, the researchers say, are an important step toward the development of tools to modulate and manipulate neuronal activity in the human brain to benefit patients. For example, they say, a multifunctional fiber might one day be used to more accurately pinpoint the origin of seizures in people with epilepsy, by testing the effects of activating or inhibiting specific brain cells.


Soft optical fibers block pain while moving and stretching with the body

Scientists have a new tool to precisely illuminate the roots of nerve pain.

Engineers at MIT have developed soft and implantable fibers that can deliver light to major nerves through the body. When these nerves are genetically manipulated to respond to light, the fibers can send pulses of light to the nerves to inhibit pain. The optical fibers are flexible and stretch with the body.

The new fibers are meant as an experimental tool that can be used by scientists to explore the causes and potential treatments for peripheral nerve disorders in animal models. Peripheral nerve pain can occur when nerves outside the brain and spinal cord are damaged, resulting in tingling, numbness, and pain in affected limbs. Peripheral neuropathy is estimated to affect more than 20 million people in the United States.

“Current devices used to study nerve disorders are made of stiff materials that constrain movement, so that we can’t really study spinal cord injury and recovery if pain is involved,” says Siyuan Rao, assistant professor of biomedical engineering at the University of Massachusetts at Amherst, who carried out part of the work as a postdoc at MIT. “Our fibers can adapt to natural motion and do their work while not limiting the motion of the subject. That can give us more precise information.”

“Now, people have a tool to study the diseases related to the peripheral nervous system, in very dynamic, natural, and unconstrained conditions,” adds Xinyue Liu PhD ’22, who is now an assistant professor at Michigan State University (MSU).

Details of their team’s new fibers are reported today in a study appearing in Nature Methods. Rao’s and Liu’s MIT co-authors include Atharva Sahasrabudhe, a graduate student in chemistry; Xuanhe Zhao, professor of mechanical engineering and civil and environmental engineering; and Polina Anikeeva, professor of materials science and engineering, along with others at MSU, UMass-Amherst, Harvard Medical School, and the National Institutes of Health.

Beyond the brain

The new study grew out of the team’s desire to expand the use of optogenetics beyond the brain. Optogenetics is a technique by which nerves are genetically engineered to respond to light. Exposure to that light can then either activate or inhibit the nerve, which can give scientists information about how the nerve works and interacts with its surroundings.

Neuroscientists have applied optogenetics in animals to precisely trace the neural pathways underlying a range of brain disorders, including addiction, Parkinson’s disease, and mood and sleep disorders — information that has led to targeted therapies for these conditions.

To date, optogenetics has been primarily employed in the brain, an area that lacks pain receptors, which allows for the relatively painless implantation of rigid devices. However, the rigid devices can still damage neural tissues. The MIT team wondered whether the technique could be expanded to nerves outside the brain. Just as with the brain and spinal cord, nerves in the peripheral system can experience a range of impairment, including sciatica, motor neuron disease, and general numbness and pain.

Optogenetics could help neuroscientists identify specific causes of peripheral nerve conditions as well as test therapies to alleviate them. But the main hurdle to implementing the technique beyond the brain is motion. Peripheral nerves experience constant pushing and pulling from the surrounding muscles and tissues. If rigid silicon devices were used in the periphery, they would constrain an animal’s natural movement and potentially cause tissue damage.

Crystals and light

The researchers looked to develop an alternative that could work and move with the body. Their new design is a soft, stretchable, transparent fiber made from hydrogel — a rubbery, biocompatible mix of polymers and water, the ratio of which they tuned to create tiny, nanoscale crystals of polymers scattered throughout a more Jell-O-like solution.

The fiber comprises two layers — a core and an outer shell, or “cladding.” The team mixed the solutions of each layer to generate a specific crystal arrangement. This arrangement gave each layer a different refractive index, and together the layers kept any light traveling through the fiber from escaping or scattering away.
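The optics behind that design are standard fiber physics: light stays confined when the core’s refractive index exceeds the cladding’s. The indices in the sketch below are placeholders, since the study’s material values are not quoted here.

```python
import math

# Illustrative refractive indices for a hydrogel fiber; the study's actual
# material values are not given, so these numbers are placeholders.
n_core, n_cladding = 1.41, 1.37

# Total internal reflection confines light that strikes the core/cladding
# boundary beyond the critical angle (Snell's law with sin(theta_t) = 1).
critical_angle = math.degrees(math.asin(n_cladding / n_core))

# Numerical aperture: the acceptance cone for light entering the fiber.
numerical_aperture = math.sqrt(n_core**2 - n_cladding**2)

print(f"critical angle: {critical_angle:.1f} degrees")
print(f"numerical aperture: {numerical_aperture:.2f}")
```

The smaller the index contrast, the shallower the acceptance cone, which is one reason tuning the crystal arrangement of each layer matters.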

The team tested the optical fibers in mice whose nerves were genetically modified to respond to blue light that would excite neural activity or yellow light that would inhibit their activity. They found that even with the implanted fiber in place, mice were able to run freely on a wheel. After two months of wheel exercises, amounting to some 30,000 cycles, the researchers found the fiber was still robust and resistant to fatigue, and could also transmit light efficiently to trigger muscle contraction.

The team then turned on a yellow laser and ran it through the implanted fiber. Using standard laboratory procedures for assessing pain inhibition, they observed that the mice were much less sensitive to pain than rodents that were not stimulated with light. The fibers were able to significantly inhibit sciatic pain in those light-stimulated mice.

The researchers see the fibers as a new tool that can help scientists identify the roots of pain and other peripheral nerve disorders.

“We are focusing on the fiber as a new neuroscience technology,” Liu says. “We hope to help dissect mechanisms underlying pain in the peripheral nervous system. With time, our technology may help identify novel mechanistic therapies for chronic pain and other debilitating conditions such as nerve degeneration or injury.”

This research was supported, in part, by the National Institutes of Health, the National Science Foundation, the U.S. Army Research Office, the McGovern Institute for Brain Research, the Hock E. Tan and K. Lisa Yang Center for Autism Research, the K. Lisa Yang Brain-Body Center, and the Brain and Behavior Research Foundation.

Ariel Furst and Fan Wang receive 2023 National Institutes of Health awards

The National Institutes of Health (NIH) has awarded grants to MIT’s Ariel Furst and Fan Wang through its High-Risk, High-Reward Research program, which this year awarded 85 new research grants to support exceptionally creative scientists pursuing highly innovative behavioral and biomedical research projects.

Ariel Furst was selected as the recipient of the NIH Director’s New Innovator Award, which has supported unusually innovative research since 2007. Recipients are early-career investigators who are within 10 years of their final degree or clinical residency and have not yet received a research project grant or equivalent NIH grant.

Furst, the Paul M. Cook Career Development Assistant Professor of Chemical Engineering at MIT, invents technologies to improve human and environmental health by increasing equitable access to resources. Her lab develops transformative technologies to solve problems related to health care and sustainability by harnessing the inherent capabilities of biological molecules and cells. She is passionate about STEM outreach and increasing the participation of underrepresented groups in engineering.

After completing her PhD at Caltech, where she developed noninvasive diagnostics for colorectal cancer, Furst became an A. O. Beckman Postdoctoral Fellow at the University of California at Berkeley. There she developed sensors to monitor environmental pollutants. In 2022, Furst was awarded the MIT UROP Outstanding Faculty Mentor Award for her work with undergraduate researchers. She is now a 2023 Marion Milligan Mason Awardee, a CIFAR Azrieli Global Scholar for Bio-Inspired Solar Energy, and an ARO Early Career Grantee. She is also a co-founder of the regenerative agriculture company Seia Bio.

Fan Wang received the Pioneer Award, which since 2004 has challenged researchers at all career levels to pursue new directions and develop groundbreaking, high-impact approaches to a broad area of biomedical and behavioral science.

Wang, a professor in the Department of Brain and Cognitive Sciences and an investigator in the McGovern Institute for Brain Research, is uncovering the neural circuit mechanisms that govern bodily sensations, like touch, pain, and posture, as well as the mechanisms that control sensorimotor behaviors. Researchers in the Wang lab aim to generate an integrated understanding of the sensation-perception-action process, hoping to find better treatments for diseases like chronic pain, addiction, and movement disorders. Wang’s lab uses genetic and viral tools, large-scale in vivo electrophysiology, and imaging techniques to gain traction in these pursuits.

Wang obtained her PhD at Columbia University, working with Professor Richard Axel. She conducted her postdoctoral work at Stanford University with Mark Tessier-Lavigne and joined Duke University as a faculty member in 2003. Wang was later appointed the Morris N. Broad Distinguished Professor of Neurobiology at the Duke University School of Medicine. In January 2023, she joined the faculty of the MIT School of Science and the McGovern Institute.

The High-Risk, High-Reward Research program is funded through the NIH Common Fund, which supports a series of exceptionally high-impact programs that cross NIH Institutes and Centers.

“The HRHR program is a pillar for innovation here at NIH, providing support to transformational research, with advances in biomedical and behavioral science,” says Robert W. Eisinger, acting director of the Division of Program Coordination, Planning, and Strategic Initiatives, which oversees the NIH Common Fund. “These awards align with the Common Fund’s mandate to support science expected to have exceptionally high and broadly applicable impact.”

NIH issued eight Pioneer Awards, 58 New Innovator Awards, six Transformative Research Awards, and 13 Early Independence Awards in 2023. Funding for the awards comes from the NIH Common Fund; the National Institute of General Medical Sciences; the National Institute of Mental Health; the National Library of Medicine; the National Institute on Aging; the National Heart, Lung, and Blood Institute; and the Office of Dietary Supplements.

Study: Deep neural networks don’t see the world the way we do

Human sensory systems are very good at recognizing objects that we see or words that we hear, even if the object is upside down or the word is spoken by a voice we’ve never heard.

Computational models known as deep neural networks can be trained to do the same thing, correctly identifying an image of a dog regardless of what color its fur is, or a word regardless of the pitch of the speaker’s voice. However, a new study from MIT neuroscientists has found that these models often also respond the same way to images or words that have no resemblance to the target.

When these neural networks were used to generate an image or a word that they responded to in the same way as a specific natural input, such as a picture of a bear, most of them generated images or sounds that were unrecognizable to human observers. This suggests that these models build up their own idiosyncratic “invariances” — meaning that they respond the same way to stimuli with very different features.

The findings offer a new way for researchers to evaluate how well these models mimic the organization of human sensory perception, says Josh McDermott, an associate professor of brain and cognitive sciences at MIT and a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines.

“This paper shows that you can use these models to derive unnatural signals that end up being very diagnostic of the representations in the model,” says McDermott, who is the senior author of the study. “This test should become part of a battery of tests that we as a field are using to evaluate models.”

Jenelle Feather PhD ’22, who is now a research fellow at the Flatiron Institute Center for Computational Neuroscience, is the lead author of the open-access paper, which appears today in Nature Neuroscience. Guillaume Leclerc, an MIT graduate student, and Aleksander Mądry, the Cadence Design Systems Professor of Computing at MIT, are also authors of the paper.

Different perceptions

In recent years, researchers have trained deep neural networks that can analyze millions of inputs (sounds or images) and learn common features that allow them to classify a target word or object roughly as accurately as humans do. These models are currently regarded as the leading models of biological sensory systems.

It is believed that when the human sensory system performs this kind of classification, it learns to disregard features that aren’t relevant to an object’s core identity, such as how much light is shining on it or what angle it’s being viewed from. This is known as invariance, meaning that objects are perceived to be the same even if they show differences in those less important features.

“Classically, the way that we have thought about sensory systems is that they build up invariances to all those sources of variation that different examples of the same thing can have,” Feather says. “An organism has to recognize that they’re the same thing even though they show up as very different sensory signals.”

The researchers wondered if deep neural networks that are trained to perform classification tasks might develop similar invariances. To try to answer that question, they used these models to generate stimuli that produce the same kind of response within the model as an example stimulus given to the model by the researchers.

They term these stimuli “model metamers,” reviving an idea from classical perception research whereby stimuli that are indistinguishable to a system can be used to diagnose its invariances. The concept of metamers was originally developed in the study of human perception to describe colors that look identical even though they are made up of different wavelengths of light.
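In outline, generating a model metamer is an optimization problem: start from random noise and adjust it until the model’s activations at some internal layer match those evoked by the reference stimulus. The PyTorch sketch below illustrates the idea only; the pretrained network, the choice of layer, and the optimization settings are all stand-ins rather than the study’s actual setup:

```python
import torch
import torchvision.models as models

# A pretrained vision model standing in for the models in the study.
model = models.resnet18(weights="IMAGENET1K_V1").eval()
for p in model.parameters():
    p.requires_grad_(False)  # we optimize the input, not the weights

# Capture activations at an intermediate layer with a forward hook.
activations = {}
def hook(module, inputs, output):
    activations["feat"] = output
model.layer3.register_forward_hook(hook)

def get_features(x):
    model(x)
    return activations["feat"]

# Reference stimulus (random data standing in for a real image).
reference = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    target = get_features(reference)

# Start from noise and optimize it to evoke the same activations.
metamer = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([metamer], lr=0.05)

for step in range(500):
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(get_features(metamer), target)
    loss.backward()
    optimizer.step()

# "metamer" now evokes (approximately) the same layer-3 response as
# "reference", yet typically looks nothing like it to a human observer.
```

The study’s central observation is that the result of this matching procedure, while metameric to the model, usually looks or sounds like noise to a person.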

To their surprise, the researchers found that most of the images and sounds produced in this way looked and sounded nothing like the examples that the models were originally given. Most of the images were a jumble of random-looking pixels, and the sounds resembled unintelligible noise. When the researchers showed the images to human observers, in most cases the humans did not classify the images synthesized by the models in the same category as the original target example.

“They’re really not recognizable at all by humans. They don’t look or sound natural and they don’t have interpretable features that a person could use to classify an object or word,” Feather says.

The findings suggest that the models have somehow developed their own invariances that differ from those found in human perceptual systems. As a result, the models perceive pairs of stimuli as the same even when they are wildly different to a human.

Idiosyncratic invariances

The researchers found the same effect across many different vision and auditory models. However, each of these models appeared to develop its own unique invariances. When metamers from one model were shown to another model, the metamers were just as unrecognizable to the second model as they were to human observers.

“The key inference from that is that these models seem to have what we call idiosyncratic invariances,” McDermott says. “They have learned to be invariant to these particular dimensions in the stimulus space, and it’s model-specific, so other models don’t have those same invariances.”
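The cross-model test follows naturally from the sketch above: take a metamer optimized for the first network and ask an independently trained second network to classify it. Continuing with the same hypothetical variables:

```python
# A second, independently trained model (again a stand-in choice).
model_b = models.resnet50(weights="IMAGENET1K_V1").eval()

with torch.no_grad():
    pred_reference = model_b(reference).argmax(dim=1)
    pred_metamer = model_b(metamer).argmax(dim=1)

# If the two models shared invariances, these predictions would tend
# to agree; the study found that they typically do not.
print(pred_reference.item(), pred_metamer.item())
```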

The researchers also found that they could induce a model’s metamers to be more recognizable to humans by using an approach called adversarial training. This approach was originally developed to combat another limitation of object recognition models, which is that introducing tiny, almost imperceptible changes to an image can cause the model to misrecognize it.

The researchers found that adversarial training, which involves including some of these slightly altered images in the training data, yielded models whose metamers were more recognizable to humans, though they were still not as recognizable as the original stimuli. This improvement appears to be independent of the training’s effect on the models’ ability to resist adversarial attacks, the researchers say.
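In its standard form, adversarial training augments each batch with inputs perturbed in the direction that most increases the loss, then updates the model on those perturbed inputs. The sketch below uses a single-step, FGSM-style perturbation for brevity; it is a generic illustration, and the training recipe used in the study may well differ:

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, x, y, optimizer, epsilon=0.01):
    """One training step on FGSM-perturbed inputs (illustrative only)."""
    # 1. Compute the gradient of the loss with respect to the inputs.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()

    # 2. Nudge each input in the direction that most increases the loss.
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()

    # 3. Train on the perturbed batch as usual.
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv.detach()), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```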

“This particular form of training has a big effect, but we don’t really know why it has that effect,” Feather says. “That’s an area for future research.”

Analyzing the metamers produced by computational models could be a useful tool to help evaluate how closely a computational model mimics the underlying organization of human sensory perception systems, the researchers say.

“This is a behavioral test that you can run on a given model to see whether the invariances are shared between the model and human observers,” Feather says. “It could also be used to evaluate how idiosyncratic the invariances are within a given model, which could help uncover potential ways to improve our models in the future.”

The research was funded by the National Science Foundation, the National Institutes of Health, a Department of Energy Computational Science Graduate Fellowship, and a Friends of the McGovern Institute Fellowship.

New cellular census maps the complexity of a primate brain

A new atlas developed by researchers at MIT’s McGovern Institute and Harvard Medical School catalogs a diverse array of brain cells throughout the marmoset brain. The atlas helps establish marmosets — small monkeys whose brains share many functional and structural features with the human brain — as a valuable model for neuroscience research.

Data from more than two million brain cells are included in the atlas, which spans 18 regions of the marmoset brain. A research team led by Guoping Feng, associate director of the McGovern Institute; Steven McCarroll, a Harvard biologist; and Fenna Krienen, a Princeton neurobiologist, classified each cell according to its particular pattern of genetic activity, providing an important reference for studies of the marmoset brain. Feng and McCarroll are also members of the Broad Institute of MIT and Harvard. The team’s analysis, reported October 13, 2023, in the journal Science Advances, also reveals the profound influence of a cell’s developmental origin on its identity in the primate brain.

Regional variation in neocortical cell types and expression patterns. Image courtesy of the researchers.

Cellular diversity

Brains are made up of a tremendous diversity of cells. Neurons with dramatically different gene expression, shapes, and activities work together to process information and drive behavior, supported by an assortment of immune cells and other cell types. Scientists have only recently begun to catalog this cellular diversity — first in mice, and now in primates.

The marmoset is a quick-breeding monkey whose small brain has many features similar to those that enable higher cognitive processes in humans. Feng says neuroscientists have begun turning to marmosets as a research model in recent years because new gene-editing technology has made it easier to modify the animal’s DNA, so scientists can now study the genetic factors that shape marmosets’ brains and behavior. Feng, McCarroll, Krienen, and others hope these animals will offer insights into how primate brains handle complex decision-making, social interactions, and other higher brain functions that are difficult to study in mice. Likewise, Feng says, the monkeys will help scientists investigate the impact of genetic mutations associated with brain disorders and explore potential therapeutic strategies.

To make marmosets a practical model for neuroscience, scientists need to understand the fundamental composition of their brains. Feng and McCarroll’s team has begun that characterization with its cell census, which was supported by the National Institutes of Health’s Brain Research Through Advancing Innovative Neurotechnologies (BRAIN) Initiative’s Cell Census Network (BICCN), as part of a larger effort to map cellular features in the brains of mice, non-human primates, and humans. It is an essential first step in the creation of a comprehensive atlas charting the molecular, anatomical, and functional features of cells in the marmoset brain.

“Hopefully, when the BRAIN Initiative is complete, we will have a very complete map of these cells: where they are located, their abundance, their functional properties,” says Feng. “This not only gives you knowledge of the normal brain, but you can also look at what aspects change in diseases of the brain. So it’s a really powerful database.”

To catalog the diversity of cells in the marmoset brain, the researchers undertook an expansive analysis of the molecular contents of 2.4 million brain cells from adult marmosets. For each of these cells, they analyzed the complete set of RNA copies of genes that the cell had produced, known as the cell’s transcriptome. Because the transcriptome captures patterns of genetic activity inside a cell, it is an indication of the cell’s function and can be used to assess cellular identity.
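Turning millions of transcriptomes into a catalog of cell types typically follows a standard single-cell RNA-sequencing workflow: normalize the cell-by-gene count matrix, reduce its dimensionality, and cluster cells with similar expression profiles. The sketch below illustrates that generic pipeline with the scanpy library; the file name and parameter choices are hypothetical, and the team’s actual analysis pipeline is not described here:

```python
import scanpy as sc

# Load a cell-by-gene count matrix (placeholder path; AnnData format).
adata = sc.read_h5ad("marmoset_counts.h5ad")

# Standard preprocessing: normalize per-cell totals, log-transform,
# and keep the most variable genes.
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
adata = adata[:, adata.var.highly_variable]

# Reduce dimensionality, build a neighbor graph, and cluster.
sc.pp.pca(adata, n_comps=50)
sc.pp.neighbors(adata)
sc.tl.leiden(adata)  # each cluster is a candidate transcriptomic type

print(adata.obs["leiden"].value_counts())
```

Each resulting cluster is a candidate transcriptomic cell type, which can then be annotated by the marker genes its cells express.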

Gene expression across neural populations. Image courtesy of the researchers.

The team’s analysis is one of the first to compare patterns of gene activity in cells from disparate regions of the marmoset brain. Doing so yielded surprising insights into the factors that shape brain cells’ transcriptomic identities. “What we found is that the cell’s transcriptome contains breadcrumbs that link back to the developmental origin of that cell type,” says Krienen, who led the cellular census as a postdoctoral researcher in McCarroll’s lab. That suggests that comparing cells’ transcriptomes can help scientists figure out how primate brains are assembled, which might lead to insights into neurodevelopmental disorders, she says.

The team also learned that a cell’s location in the brain was critical to shaping its transcriptomic identity. For example, Krienen says, “it turns out that an inhibitory neuron in the cortex doesn’t look anything like an inhibitory neuron in the thalamus, probably because they have distinct embryonic origins.”

Expanding the cell census

This new picture of cellular diversity in the marmoset brain will help researchers understand how genetic perturbations affect different brain cells and interpret the results of future experiments. Importantly, Krienen says, it could help researchers pinpoint exactly which cells are affected in brain disorders, and how the effects of a disease might localize to specific brain regions.

Krienen, McCarroll, and Feng went beyond their initial survey of cellular diversity with analyses of specific subsets of cells, charting the spatial distribution of interneurons in a key region of the prefrontal cortex and visualizing the shapes of several molecularly defined cell types. Now, they have begun expanding their cell census beyond the 18 brain structures represented in the reported work. As part of the BRAIN Initiative’s Brain Cell Atlas Network (BICAN), the team will profile cells throughout the entire adult marmoset brain, including multiple data types in their analysis. Building on the cell census data, the NIH BRAIN Initiative has also launched BRAIN CONNECTS projects to map cellular connectivity in the brain.

This work was supported by the National Institutes of Health, the National Science Foundation, MathWorks, MIT, Harvard Medical School, the Broad Institute’s Stanley Center for Psychiatric Research, the Hock E. Tan and K. Lisa Yang Center for Autism Research at MIT, the Poitras Center for Psychiatric Disorders Research at MIT, and the McGovern Institute for Brain Research at MIT.