Mapping healthy cells’ connections in the brain

McGovern Institute Principal Research Scientist Ian Wickersham. Photo: Caitlin Cunningham

A new tool developed by researchers at MIT’s McGovern Institute gives neuroscientists the power to find connected neurons within the brain’s tangled network of cells, and then follow or manipulate those neurons over a prolonged period. Its development, led by Principal Research Scientist Ian Wickersham, transforms a powerful tool for exploring the anatomy of the brain into a sophisticated system for studying brain function.

Wickersham and colleagues have designed their system to enable long-term analysis and experiments on groups of neurons that reach through the brain to signal to select groups of cells. It is described in the January 11, 2024, issue of the journal Nature Neuroscience. “This second-generation system will allow imaging, recording, and control of identified networks of synaptically-connected neurons in the context of behavioral studies and other experimental designs lasting weeks, months, or years,” Wickersham says.

The system builds on an approach to anatomical tracing that Wickersham developed in 2007, as a graduate student in Edward Callaway’s lab at the Salk Institute for Biological Studies. Its key is a modified version of a rabies virus, whose natural—and deadly—life cycle involves traveling through the brain’s neural network.

Viral tracing

The rabies virus is useful for tracing neuronal connections because once it has infected the nervous system, it spreads through the neural network by co-opting the very junctions that neurons use to communicate with one another. Hopping across those junctions, or synapses, the virus can pass from cell to cell. Traveling in the opposite direction of neuronal signals, it reaches the brain, where it continues to spread.

Simplified illustration of rabies virus. Image: iStockphoto

To use the rabies virus to identify specific connections within the brain, Wickersham modified it to limit its spread. His original tracing system uses a rabies virus that lacks an essential gene. When researchers deliver the modified virus to the neurons whose connections they want to map, they also instruct those neurons to make the protein encoded by the virus’s missing gene. That allows the virus to replicate and travel across the synapses that link an infected cell to others in the network. Once it is inside a new cell, the virus is deprived of the critical protein and can go no farther.

Under a microscope, a fluorescent protein delivered by the modified virus lights up, exposing infected cells: those to which the virus was originally delivered as well as any neurons that send it direct inputs. Because the virus crosses only one synapse after leaving the cell it originally infected, the technique is known as monosynaptic tracing.

Labs around the world now use this method to identify which brain cells send signals to a particular set of neurons. But while the virus used in the original system can’t spread through the brain like a natural rabies virus, it still sickens the cells it does infect. Infected cells usually die in about two weeks, and that has limited scientists’ ability to conduct further studies of the cells whose connections they trace. “If you want to then go on to manipulate those connected populations of cells, you have a very short time window,” Wickersham says.

Reducing toxicity

To keep cells healthy after monosynaptic tracing, Wickersham, postdoctoral researcher Lei Jin, and colleagues devised a new approach. They began by deleting a second gene from the modified virus they use to label cells. That gene encodes an enzyme the rabies virus needs to produce the proteins encoded in its own genome. As with the original system, neurons are instructed to create the virus’s missing proteins, equipping the virus to replicate inside those cells. Here, that instruction comes from mice genetically modified to express the second deleted viral gene in specific sets of neurons.

The initially-infected “starter cells” at the injection site in the substantia nigra, pars compacta. Blue: tyrosine hydroxylase immunostaining, showing dopaminergic cells; green: enhanced green fluorescent protein showing neurons able to be initially infected with the rabies virus; red: the red fluorescent protein tdTomato, reporting the presence of the second-generation rabies virus. Image: Ian Wickersham, Lei Jin

To limit toxicity, Wickersham and his team built in a control that allows researchers to switch off cells’ production of viral proteins once the virus has had time to replicate and begin its spread to connected neurons. With those proteins no longer available to support the viral life cycle, the tracing tool is rendered virtually harmless. After following mice for up to 10 weeks, the researchers detected minimal toxicity in neurons where monosynaptic tracing was initiated. And, Wickersham says, “as far as we can tell, the trans-synaptically labeled cells are completely unscathed.”

Transsynaptically labeled cells in the striatum, which provides input to the dopaminergic cells of the substantia nigra. These cells show no morphological abnormalities or any other indication of toxicity five weeks after the rabies virus injection. Image: Ian Wickersham, Lei Jin

That means neuroscientists can now pair monosynaptic tracing with many of neuroscience’s most powerful tools for functional studies. To facilitate those experiments, Wickersham’s team encoded enzymes called recombinases into their connection-tracing rabies virus, which enables the introduction of genetically encoded research tools to targeted cells. After tracing cells’ connections, researchers will be able to manipulate those neurons, follow their activity, and explore their contributions to animal behavior. Such experiments will deepen scientists’ understanding of the inputs select groups of neurons receive from elsewhere in the brain, as well as the cells that are sending those signals.

Jin, who is now a principal investigator at Lingang Laboratory in Shanghai, says colleagues are already eager to begin working with the new non-toxic tracing system. Meanwhile, Wickersham’s group has already started experimenting with a third-generation system, which they hope will improve efficiency and be even more powerful.

The promise of gene therapy

McGovern Institute Director Robert Desimone. Photo: Steph Stevens

As we start 2024, I hope you can join me in celebrating a historic recent advance: the FDA approval of Casgevy, a bold new treatment for devastating sickle cell disease and the world’s first approved CRISPR gene therapy.

We are proud to share that this pioneering therapy, developed by Vertex Pharmaceuticals and CRISPR Therapeutics, licenses the CRISPR discoveries of McGovern scientist and Poitras Professor of Neuroscience Feng Zhang.

It is amazing to think that Feng’s breakthrough work adapting CRISPR-Cas9 for genome editing in eukaryotic cells was published only 11 years ago today in Science.

Incredibly, CRISPR-Cas9 rapidly transitioned from proof-of-concept experiments to an approved treatment in just over a decade.

McGovern scientists are determined to maintain the momentum!

 


Our labs are creating new gene therapies that are already in clinical trials or preparing to enroll patients in trials. For instance, Feng Zhang’s team has developed therapies currently in clinical trials for lymphoblastic leukemia and beta thalassemia, while another McGovern researcher, Guoping Feng, the Poitras Professor of Brain and Cognitive Sciences at MIT, has made advancements that lay the groundwork for a new gene therapy to treat a severe form of autism spectrum disorder. It is expected to enter clinical trials later this year. Moreover, McGovern fellows Omar Abudayyeh and Jonathan Gootenberg created programmable genomic tools that are now licensed for use in monogenic liver diseases and autoimmune disorders.

These exciting innovations stem from your steadfast support of our high-risk, high-reward research. Your generosity is enabling our scientists to pursue basic research in other areas with potential therapeutic applications in the future, such as mechanisms of pain, addiction, the connections between the brain and gut, the workings of memory and attention, and the bidirectional influence between artificial intelligence and brain research. All of this fundamental research is being fueled by major new advances in technology, many of them developed here.

As we enter a new year filled with anticipation following our inaugural gene therapy, I want to express my heartfelt gratitude for your invaluable support in advancing our research programs. Your role in pushing our research to new heights is valued by all faculty, students, and researchers at the McGovern Institute. We can’t wait to share our continued progress with you.

Thank you again for partnering with us to make great scientific achievements possible.

With appreciation and best wishes,

Robert Desimone, PhD
Director, McGovern Institute
Doris and Don Berkey Professor of Neuroscience, MIT

Complex, unfamiliar sentences make the brain’s language network work harder

With help from an artificial language network, MIT neuroscientists have discovered what kind of sentences are most likely to fire up the brain’s key language processing centers.

The new study reveals that sentences that are more complex, either because of unusual grammar or unexpected meaning, generate stronger responses in these language processing centers. Sentences that are very straightforward barely engage these regions, and nonsensical sequences of words don’t do much for them either.

For example, the researchers found this brain network was most active when participants read unusual sentences such as “Buy sell signals remains a particular,” taken from a publicly available language dataset called C4. However, it went quiet when they read something very straightforward, such as “We were sitting on the couch.”

“The input has to be language-like enough to engage the system,” says Evelina Fedorenko, Associate Professor of Neuroscience at MIT and a member of MIT’s McGovern Institute for Brain Research. “And then within that space, if things are really easy to process, then you don’t have much of a response. But if things get difficult, or surprising, if there’s an unusual construction or an unusual set of words that you’re maybe not very familiar with, then the network has to work harder.”

Fedorenko is the senior author of the study, which appears today in Nature Human Behaviour. MIT graduate student Greta Tuckute is the lead author of the paper.

Processing language

In this study, the researchers focused on language-processing regions in the brain’s left hemisphere, which include Broca’s area as well as other parts of the left frontal and temporal lobes.

“This language network is highly selective to language, but it’s been harder to actually figure out what is going on in these language regions,” Tuckute says. “We wanted to discover what kinds of sentences, what kinds of linguistic input, drive the left hemisphere language network.”

The researchers began by compiling a set of 1,000 sentences taken from a wide variety of sources — fiction, transcriptions of spoken words, web text, and scientific articles, among many others.

Five human participants read each of the sentences while the researchers measured their language network activity using functional magnetic resonance imaging (fMRI). The researchers then fed those same 1,000 sentences into a large language model — a model similar to ChatGPT, which learns to generate and understand language by predicting the next word in huge amounts of text — and measured the activation patterns of the model in response to each sentence.

Once they had all of those data, the researchers trained a mapping model, known as an “encoding model,” which relates the activation patterns seen in the human brain with those observed in the artificial language model. Once trained, the model could predict how the human language network would respond to any new sentence based on how the artificial language network responded to these 1,000 sentences.
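A minimal sketch of what such an encoding model might look like is below, using synthetic data: it assumes one activation vector per sentence from the language model and one averaged language-network response per sentence, and uses ridge regression, a common choice for this kind of mapping. The array names, shapes, and regression method are illustrative, not the authors’ exact pipeline.

```python
# Minimal sketch of an "encoding model": ridge regression from language-model
# activations to brain responses. Shapes and data here are synthetic placeholders.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_sentences, n_features = 1000, 768     # 1,000 sentences; LLM hidden size (illustrative)
X = rng.standard_normal((n_sentences, n_features))   # LLM activation per sentence
y = rng.standard_normal(n_sentences)                 # mean language-network response per sentence

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Ridge regularization guards against overfitting when features are numerous.
encoder = RidgeCV(alphas=np.logspace(-3, 3, 13)).fit(X_train, y_train)

# Predict brain responses to held-out sentences; to pick "drive" and "suppress"
# sentences, one would rank a large candidate pool by these predictions.
preds = encoder.predict(X_test)
print("correlation with held-out responses:", np.corrcoef(preds, y_test)[0, 1])
```

In this framing, selecting the “drive” and “suppress” sentences described below amounts to scoring many candidate sentences with the fitted model and keeping the ones with the highest and lowest predicted responses.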

The researchers then used the encoding model to identify 500 new sentences that would generate maximal activity in the human brain (the “drive” sentences), as well as sentences that would elicit minimal activity in the brain’s language network (the “suppress” sentences).

In a group of three new human participants, the researchers found these new sentences did indeed drive and suppress brain activity as predicted.

“This ‘closed-loop’ modulation of brain activity during language processing is novel,” Tuckute says. “Our study shows that the model we’re using (that maps between language-model activations and brain responses) is accurate enough to do this. This is the first demonstration of this approach in brain areas implicated in higher-level cognition, such as the language network.”

Linguistic complexity

To figure out what made certain sentences drive activity more than others, the researchers analyzed the sentences based on 11 different linguistic properties, including grammaticality, plausibility, emotional valence (positive or negative), and how easy it is to visualize the sentence content.

For each of those properties, the researchers asked participants from crowd-sourcing platforms to rate the sentences. They also used a computational technique to quantify each sentence’s “surprisal,” or how uncommon it is compared to other sentences.
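As a concrete, hypothetical illustration of how surprisal can be computed, the sketch below scores a sentence by its average negative log-probability under GPT-2 via the Hugging Face transformers library; the study’s actual model and scoring procedure may differ.

```python
# Sketch: a sentence's "surprisal" as its average negative log-probability
# under a causal language model (GPT-2 here, purely as an illustrative choice).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def surprisal(sentence: str) -> float:
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids, the model returns the mean cross-entropy per
        # token, i.e., the average negative log-probability (in nats).
        return model(ids, labels=ids).loss.item()

print(surprisal("We were sitting on the couch."))           # straightforward: low surprisal
print(surprisal("Buy sell signals remains a particular."))  # unusual: high surprisal
```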

This analysis revealed that sentences with higher surprisal generate higher responses in the brain. This is consistent with previous studies showing people have more difficulty processing sentences with higher surprisal, the researchers say.

Another linguistic property that correlated with the language network’s responses was linguistic complexity, which is measured by how much a sentence adheres to the rules of English grammar and how plausible it is, meaning how much sense the content makes, apart from the grammar.

Sentences at either end of the spectrum — either extremely simple, or so complex that they make no sense at all — evoked very little activation in the language network. The largest responses came from sentences that make some sense but require work to figure them out, such as “Jiffy Lube of — of therapies, yes,” which came from the Corpus of Contemporary American English dataset.

“We found that the sentences that elicit the highest brain response have a weird grammatical thing and/or a weird meaning,” Fedorenko says. “There’s something slightly unusual about these sentences.”

The researchers now plan to see if they can extend these findings to speakers of languages other than English. They also hope to explore what type of stimuli may activate language processing regions in the brain’s right hemisphere.

The research was funded by an Amazon Fellowship from the Science Hub, an International Doctoral Fellowship from the American Association of University Women, the MIT-IBM Watson AI Lab, the National Institutes of Health, the McGovern Institute, the Simons Center for the Social Brain, and MIT’s Department of Brain and Cognitive Sciences.

Deep neural networks show promise as models of human hearing

Computational models that mimic the structure and function of the human auditory system could help researchers design better hearing aids, cochlear implants, and brain-machine interfaces. A new study from MIT has found that modern computational models derived from machine learning are moving closer to this goal.

In the largest study yet of deep neural networks that have been trained to perform auditory tasks, the MIT team showed that most of these models generate internal representations that share properties of representations seen in the human brain when people are listening to the same sounds.

The study also offers insight into how to best train this type of model: The researchers found that models trained on auditory input including background noise more closely mimic the activation patterns of the human auditory cortex.

“What sets this study apart is it is the most comprehensive comparison of these kinds of models to the auditory system so far. The study suggests that models that are derived from machine learning are a step in the right direction, and it gives us some clues as to what tends to make them better models of the brain,” says Josh McDermott, an associate professor of brain and cognitive sciences at MIT, a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines, and the senior author of the study.

MIT graduate student Greta Tuckute and Jenelle Feather PhD ’22 are the lead authors of the open-access paper, which appears today in PLOS Biology.

Models of hearing

Deep neural networks are computational models that consist of many layers of information-processing units that can be trained on huge volumes of data to perform specific tasks. This type of model has become widely used in many applications, and neuroscientists have begun to explore the possibility that these systems can also be used to describe how the human brain performs certain tasks.

“These models that are built with machine learning are able to mediate behaviors on a scale that really wasn’t possible with previous types of models, and that has led to interest in whether or not the representations in the models might capture things that are happening in the brain,” Tuckute says.

When a neural network is performing a task, its processing units generate activation patterns in response to each audio input it receives, such as a word or other type of sound. Those model representations of the input can be compared to the activation patterns seen in fMRI brain scans of people listening to the same input.
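One standard way to make such a comparison is representational similarity analysis (RSA), sketched below on synthetic data; the stimulus count, layer width, and voxel count are placeholders, and RSA is only one of several comparison metrics used in this literature.

```python
# Sketch: representational similarity analysis (RSA), one standard way to
# compare a model's activations with fMRI responses to the same sounds.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_sounds = 165                                        # size of the stimulus set (placeholder)
model_acts = rng.standard_normal((n_sounds, 512))     # one model stage's activations
brain_resp = rng.standard_normal((n_sounds, 2000))    # voxel responses to the same sounds

# Representational dissimilarity matrices (RDMs): pairwise distances between
# the responses evoked by every pair of sounds, for model and brain separately.
model_rdm = pdist(model_acts, metric="correlation")
brain_rdm = pdist(brain_resp, metric="correlation")

# If the two representations have similar geometry, their RDMs correlate.
rho, _ = spearmanr(model_rdm, brain_rdm)
print(f"model-brain RDM correlation (Spearman rho): {rho:.3f}")
```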

In 2018, McDermott and then-graduate student Alexander Kell reported that when they trained a neural network to perform auditory tasks (such as recognizing words from an audio signal), the internal representations generated by the model showed similarity to those seen in fMRI scans of people listening to the same sounds.

Since then, these types of models have become widely used, so McDermott’s research group set out to evaluate a larger set of models, to see if the ability to approximate the neural representations seen in the human brain is a general trait of these models.

For this study, the researchers analyzed nine publicly available deep neural network models that had been trained to perform auditory tasks, and they also created 14 models of their own, based on two different architectures. Most of these models were trained to perform a single task — recognizing words, identifying the speaker, recognizing environmental sounds, or identifying musical genre — while two of them were trained to perform multiple tasks.

When the researchers presented these models with natural sounds that had been used as stimuli in human fMRI experiments, they found that the internal model representations tended to exhibit similarity with those generated by the human brain. The models whose representations were most similar to those seen in the brain were models that had been trained on more than one task and had been trained on auditory input that included background noise.

“If you train models in noise, they give better brain predictions than if you don’t, which is intuitively reasonable because a lot of real-world hearing involves hearing in noise, and that’s plausibly something the auditory system is adapted to,” Feather says.

Hierarchical processing

The new study also supports the idea that the human auditory cortex has some degree of hierarchical organization, in which processing is divided into stages that support distinct computational functions. As in the 2018 study, the researchers found that representations generated in earlier stages of the model most closely resemble those seen in the primary auditory cortex, while representations generated in later model stages more closely resemble those generated in brain regions beyond the primary cortex.

Additionally, the researchers found that models that had been trained on different tasks were better at replicating different aspects of audition. For example, models trained on a speech-related task more closely resembled speech-selective areas.

“Even though the model has seen the exact same training data and the architecture is the same, when you optimize for one particular task, you can see that it selectively explains specific tuning properties in the brain,” Tuckute says.

McDermott’s lab now plans to make use of their findings to try to develop models that are even more successful at reproducing human brain responses. In addition to helping scientists learn more about how the brain may be organized, such models could also be used to help develop better hearing aids, cochlear implants, and brain-machine interfaces.

“A goal of our field is to end up with a computer model that can predict brain responses and behavior. We think that if we are successful in reaching that goal, it will open a lot of doors,” McDermott says.

The research was funded by the National Institutes of Health, an Amazon Fellowship from the Science Hub, an International Doctoral Fellowship from the American Association of University Women, an MIT Friends of McGovern Institute Fellowship, a fellowship from the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center at MIT, and a Department of Energy Computational Science Graduate Fellowship.

Season’s Greetings from the McGovern Institute

This year’s holiday greeting (video above) was inspired by research conducted in John Gabrieli’s lab, which found that practicing mindfulness reduced children’s stress levels and negative emotions during the pandemic. These findings contribute to a growing body of evidence that practicing mindfulness can change patterns of brain activity associated with emotions and mental health.

Coloring is one form of mindfulness, or focusing awareness on the present. Visit our postcard collection to download and color your own brain-themed postcards, and may the spirit of mindfulness bring you peace in the year ahead!

Video credits:
Joseph Laney (illustration)
JR Narrows, Space Lute (sound design)
Jacob Pryor (animation)

A mindful McGovern community

Mindfulness is the practice of maintaining a state of complete awareness of one’s thoughts, emotions, or experiences on a moment-to-moment basis. McGovern researchers have shown that practicing mindfulness reduces anxiety and supports emotional resilience.

In a survey distributed to the McGovern Institute community, 57% of the 74 researchers, faculty, and staff who responded said that they practice mindfulness as a way to reduce anxiety and stress.

Here are a few of their stories.

Fernanda De La Torre

MIT graduate student Fernanda De La Torre. Photo: Steph Stevens

Fernanda De La Torre is a graduate student in MIT’s Department of Brain and Cognitive Sciences, where she is advised by Josh McDermott.

Originally from Mexico, De La Torre took an unconventional path to her education in the United States, where she completed her undergraduate studies in computer science and math at Kansas State University. In 2019, she came to MIT as a postbaccalaureate student in the lab of Tomaso Poggio, where she began working on deep-learning theory, an area of machine learning focused on how artificial neural networks modeled on the brain can learn to recognize patterns.

A recent recipient of the prestigious Paul and Daisy Soros Fellowship for New Americans, De La Torre now studies multisensory integration during speech perception using deep learning models in Josh McDermott’s lab.

What kind of mindfulness do you practice, how often, and why?

Metta meditation is the type of meditation I come back to the most. I practice two to three times per week, sometimes by joining Nikki Mirghafori’s Zoom calls or listening to her and other teachers’ recordings on AudioDharma. I practice because when I observe the patterns of my thoughts, I remember the importance of compassion, including self-compassion. In my experience, metta meditation is a wonderful way to cultivate the two: observation and compassion.

When and why did you start practicing mindfulness?

My first meditation practice was as a first-year post-baccalaureate student here at BCS. Gal Raz (also pictured above) carried a lot of peace and attributed it to meditation; this sparked my curiosity. I started practicing more frequently last summer, after realizing my mental health was not in a good place.

How does mindfulness benefit your research at MIT?

This is hard to answer because I think the benefits of meditation are hard to measure. I find that meditation helps me stay centered and healthy, which can indirectly help the research I do. More directly, some of my initial grad school pursuits were fueled by thoughts during meditation but I ended up feeling that a lot of these concepts are hard to explore using non-philosophical approaches. So I think meditation is mainly a practice that helps my health, my relationships with others, and my relationship with work (this last one I find most challenging and personally unresolved). 

Adam Eisen

MIT graduate student Adam Eisen.

Adam Eisen is a graduate student in MIT’s Department of Brain and Cognitive Sciences, where he is co-advised by Ila Fiete (McGovern Institute) and Earl Miller (Picower Institute).

Eisen completed his undergraduate degree in Applied Mathematics & Computer Engineering at Queen’s University in Kingston, Ontario. Prior to joining MIT, he built computer vision algorithms at the solar aerial inspection company Heliolytics and worked on developing machine learning tools to predict disease outcomes from genetics at The Hospital for Sick Children in Toronto.

Today, in the Fiete and Miller labs, Eisen develops tools for analyzing the flow of neural activity, and applies them to understand changes in neural states (such as from consciousness to anesthetic-induced unconsciousness).

What kind of mindfulness do you practice, how often, and why?

I mostly practice simple sitting meditation centered on awareness of senses and breathing. On a good week, I meditate about 3-5 times. The reason I practice is the benefit to my general experience of living. Whenever I’m in a prolonged period of consistent meditation, I’m shocked by how much more awareness I have of the thoughts, feelings, and sensations that are arising in my mind throughout the day. I’m also amazed by how much easier it is to watch my mind and body react to the context around me, without slipping into the usual patterns and habits. I also find mindful benefits in doing yoga, running, and playing music, but the core is really centered on meditation practice.

When and why did you start practicing mindfulness?

I’ve been interested in mindfulness and meditation since undergrad as a path to investigating the nature of mind and thought – an interest which also led me into my PhD. I started practicing meditation more seriously at the start of the pandemic to get more firsthand experience with what I had been learning about. I find meditation is one of those things where knowledge and theory can support the practice, but without the experiential component it’s very hard to really start to build an understanding of the core concepts at play.

How does mindfulness benefit your research at MIT?

Mindfulness has definitely informed the kinds of things I’m interested in studying and the questions I’d like to ask – largely in relation to the nature of conscious awareness and the flow of thoughts. Outside of that, I’d like to think that mindfulness benefits my general well-being and spiritual balance, which enables me to do better research.

 

Sugandha Sharma

MIT graduate student Sugandha Sharma. Photo: Steph Stevens

Sugandha (Su) Sharma is a graduate student in MIT’s Department of Brain and Cognitive Sciences (BCS), where she is co-advised by Ila Fiete (McGovern Institute) and Josh Tenenbaum (BCS).

Prior to joining MIT, she studied theoretical neuroscience at the University of Waterloo, where she built neural models of context-dependent decision-making in the prefrontal cortex and spiking neuron models of Bayesian inference, based on online learning of priors from life experience.

Today, in the Fiete and Tenenbaum labs, she studies the computational and theoretical principles underlying cognition and intelligence in the human brain.  She is currently exploring the coding principles in the hippocampal circuits implicated in spatial navigation, and their role in cognitive computations like structure learning and relational reasoning.

When did you start practicing mindfulness?

When I first learned to meditate, I was challenged to practice it every day for at least 3 months in a row. I took up the challenge, and by the end of it, the results were profound. My whole perspective towards life changed. It made me more empathetic — I could step in other people’s shoes and be mindful of their situations and feelings;  my focus shifted from myself to the big picture — it made me realize how insignificant my life was on the grand scale of the universe, and how it was worthless to be caught up in small things that I was usually worrying about. It somehow also brought selflessness to me. This experience hooked me to meditation and mindfulness for life!

What kind of mindfulness do you practice and why?

I practice mindfulness because it brings awareness. It helps me to be aware of myself, my thoughts, my actions, and my surroundings at each moment in my life, thus helping me stay in and enjoy the present moment. Awareness is of utmost importance since an aware mind always does the right thing. Imagine that you are angry; in that moment, you have lost awareness of yourself. The moment you become aware of yourself, anger goes away. This is why sometimes counting helps to combat anger. If you start counting, that gives you time to think and become aware of yourself and your actions.

Meditating — sitting with my eyes closed and just observing (being aware of) my thoughts — is a yogic technique that helps me clear the noise in my mind and calm it down, making it easier for me to be mindful not only while meditating, but also in general after I am done meditating. Over time, the thoughts vanish, and the mind becomes blank (noiseless). For this reason, practicing meditation regularly makes it easier for me to be mindful all the time.

An added advantage of yoga and meditation is that it helps combat stress by relaxing the mind and body. Many people don’t know what to do when they are stressed, but I am grateful to have this toolkit of yoga and meditation to deal with stressful situations in my life. They help me calm my mind in stressful situations and ensure that instead of reacting to a situation, I instead act mindfully and appropriately to make it right.

K. Lisa Yang Postbaccalaureate Program names new scholars

Funded by philanthropist Lisa Yang, the K. Lisa Yang Postbaccalaureate Scholar Program provides two years of paid laboratory experience, mentorship, and education to recent college graduates from backgrounds underrepresented in neuroscience. This year, two young researchers in McGovern Institute labs, Joseph Itiat and Sam Merrow, have been named Yang postbac scholars.

Itiat moved to the United States from Nigeria in 2019 to pursue a degree in psychology and cognitive neuroscience at Temple University. Today, he is a Yang postbac in John Gabrieli’s lab studying the relationship between learning and value processes and their influence on future-oriented decision-making. Ultimately, Itiat hopes to develop models that map the underlying mechanisms driving these processes.

“Being African, with limited research experience and little representation in the domain of neuroscience research,” Itiat says, “I chose to pursue a postbaccalaureate research program to prepare me for a top graduate school and a career in cognitive neuroscience.”

Merrow first fell in love with science while working at the Barrow Neurological Institute in Arizona during high school. After graduating from Simmons University in Boston, Massachusetts, Merrow joined Guoping Feng’s lab as a Yang postbac to pursue research on glial cells and brain disorders. “As a queer, nonbinary, LatinX person, I have not met anyone like me in my field, nor have I had role models that hold a similar identity to myself,” says Merrow.

“My dream is to one day become a professor, where I will be able to show others that science is for anyone.”

Previous Yang postbacs include Alex Negron, Zoe Pearce, Ajani Stewart, and Maya Taliaferro.

What does the future hold for generative AI?

Speaking at the “Generative AI: Shaping the Future” symposium on Nov. 28, the kickoff event of MIT’s Generative AI Week, keynote speaker and iRobot co-founder Rodney Brooks warned attendees against uncritically overestimating the capabilities of this emerging technology, which underpins increasingly powerful tools like OpenAI’s ChatGPT and Google’s Bard.

“Hype leads to hubris, and hubris leads to conceit, and conceit leads to failure,” cautioned Brooks, who is also a professor emeritus at MIT, a former director of the Computer Science and Artificial Intelligence Laboratory (CSAIL), and founder of Robust.AI.

“No one technology has ever surpassed everything else,” he added.

The symposium, which drew hundreds of attendees from academia and industry to the Institute’s Kresge Auditorium, was laced with messages of hope about the opportunities generative AI offers for making the world a better place, including through art and creativity, interspersed with cautionary tales about what could go wrong if these AI tools are not developed responsibly.

Generative AI is a term used to describe machine-learning models that learn to generate new material that looks like the data they were trained on. These models have exhibited some incredible capabilities, such as the ability to produce human-like creative writing, translate languages, generate functional computer code, or craft realistic images from text prompts.

In her opening remarks to launch the symposium, MIT President Sally Kornbluth highlighted several projects faculty and students have undertaken to use generative AI to make a positive impact in the world. For example, the work of the Axim Collaborative, an online education initiative launched by MIT and Harvard, includes exploring the educational aspects of generative AI to help underserved students.

The Institute also recently announced seed grants for 27 interdisciplinary faculty research projects centered on how AI will transform people’s lives across society.

In hosting Generative AI Week, MIT hopes to not only showcase this type of innovation, but also generate “collaborative collisions” among attendees, Kornbluth said.

Collaboration involving academics, policymakers, and industry will be critical if we are to safely integrate a rapidly evolving technology like generative AI in ways that are humane and help humans solve problems, she told the audience.

“I honestly cannot think of a challenge more closely aligned with MIT’s mission. It is a profound responsibility, but I have every confidence that we can face it, if we face it head on and if we face it as a community,” she said.

While generative AI holds the potential to help solve some of the planet’s most pressing problems, the emergence of these powerful machine learning models has blurred the distinction between science fiction and reality, said CSAIL Director Daniela Rus in her opening remarks. It is no longer a question of whether we can make machines that produce new content, she said, but how we can use these tools to enhance businesses and ensure sustainability. 

“Today, we will discuss the possibility of a future where generative AI does not just exist as a technological marvel, but stands as a source of hope and a force for good,” said Rus, who is also the Andrew and Erna Viterbi Professor in the Department of Electrical Engineering and Computer Science.

But before the discussion dove deeply into the capabilities of generative AI, attendees were first asked to ponder their humanity, as MIT Professor Joshua Bennett read an original poem.

Bennett, a professor in the MIT Literature Section and Distinguished Chair of the Humanities, was asked to write a poem about what it means to be human, and drew inspiration from his daughter, who was born three weeks ago.

The poem told of his experiences as a boy watching Star Trek with his father and touched on the importance of passing traditions down to the next generation.

In his keynote remarks, Brooks set out to unpack some of the deep, scientific questions surrounding generative AI, as well as explore what the technology can tell us about ourselves.

To begin, he sought to dispel some of the mystery swirling around generative AI tools like ChatGPT by explaining the basics of how this large language model works. ChatGPT, for instance, generates text one word at a time by determining what the next word should be in the context of what it has already written. While a human might write a story by thinking about entire phrases, ChatGPT only focuses on the next word, Brooks explained.

ChatGPT 3.5 is built on a machine-learning model that has 175 billion parameters and has been exposed to billions of pages of text on the web during training. (The newest iteration, ChatGPT 4, is even larger.) It learns correlations between words in this massive corpus of text and uses this knowledge to propose what word might come next when given a prompt.
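That word-at-a-time loop is simple enough to write down. The toy sketch below uses the small, openly available GPT-2 model as a stand-in for ChatGPT’s far larger one, and greedy selection in place of the sampling strategies production systems actually use.

```python
# Toy version of the next-word loop described above: repeatedly ask a causal
# language model for the most likely next token and append it to the context.
# GPT-2 stands in for ChatGPT's far larger model; greedy decoding stands in
# for the sampling strategies production systems actually use.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

ids = tokenizer("Once upon a time", return_tensors="pt").input_ids
for _ in range(20):
    with torch.no_grad():
        logits = model(ids).logits           # a score for every vocabulary token
    next_id = logits[0, -1].argmax()         # greedy choice of the next token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```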

The model has demonstrated some incredible capabilities, such as the ability to write a sonnet about robots in the style of Shakespeare’s famous Sonnet 18. During his talk, Brooks showcased the sonnet he asked ChatGPT to write side-by-side with his own sonnet.

But while researchers still don’t fully understand exactly how these models work, Brooks assured the audience that generative AI’s seemingly incredible capabilities are not magic, and it doesn’t mean these models can do anything.

His biggest fears about generative AI don’t revolve around models that could someday surpass human intelligence. Rather, he is most worried about researchers who may throw away decades of excellent work that was nearing a breakthrough, just to jump on shiny new advancements in generative AI; venture capital firms that blindly swarm toward technologies that can yield the highest margins; or the possibility that a whole generation of engineers will forget about other forms of software and AI.

At the end of the day, those who believe generative AI can solve the world’s problems and those who believe it will only generate new problems have at least one thing in common: Both groups tend to overestimate the technology, he said.

“What is the conceit with generative AI? The conceit is that it is somehow going to lead to artificial general intelligence. By itself, it is not,” Brooks said.

Following Brooks’ presentation, a group of MIT faculty spoke about their work using generative AI and participated in a panel discussion about future advances, important but underexplored research topics, and the challenges of AI regulation and policy.

The panel consisted of Jacob Andreas, an associate professor in the MIT Department of Electrical Engineering and Computer Science (EECS) and a member of CSAIL; Antonio Torralba, the Delta Electronics Professor of EECS and a member of CSAIL; Ev Fedorenko, an associate professor of brain and cognitive sciences and an investigator at the McGovern Institute for Brain Research at MIT; and Armando Solar-Lezama, a Distinguished Professor of Computing and associate director of CSAIL. It was moderated by William T. Freeman, the Thomas and Gerd Perkins Professor of EECS and a member of CSAIL.

The panelists discussed several potential future research directions around generative AI, including the possibility of integrating perceptual systems, drawing on human senses like touch and smell, rather than focusing primarily on language and images. The researchers also spoke about the importance of engaging with policymakers and the public to ensure generative AI tools are produced and deployed responsibly.

“One of the big risks with generative AI today is the risk of digital snake oil. There is a big risk of a lot of products going out that claim to do miraculous things but in the long run could be very harmful,” Solar-Lezama said.

The morning session concluded with an excerpt from the 1925 science fiction novel “Metropolis,” read by senior Joy Ma, a physics and theater arts major, followed by a roundtable discussion on the future of generative AI. The discussion included Joshua Tenenbaum, a professor in the Department of Brain and Cognitive Sciences and a member of CSAIL; Dina Katabi, the Thuan and Nicole Pham Professor in EECS and a principal investigator in CSAIL and the MIT Jameel Clinic; and Max Tegmark, professor of physics. It was moderated by Daniela Rus.

One focus of the discussion was the possibility of developing generative AI models that can go beyond what we can do as humans, such as tools that can sense someone’s emotions by using electromagnetic signals to understand how a person’s breathing and heart rate are changing.

But one key to integrating AI like this into the real world safely is to ensure that we can trust it, Tegmark said. If we know an AI tool will meet the specifications we insist on, then “we no longer have to be afraid of building really powerful systems that go out and do things for us in the world,” he said.

Tuning the mind to benefit mental health

This story also appears in the Winter 2024 issue of BrainScan.

___

Mental health is the defining public health crisis of our time, according to U.S. Surgeon General Vivek Murthy, and the nation’s youth is at the center of this crisis.

Psychiatrists and pediatricians have sounded an alarm. The mental health of youth in the United States is worsening. Youth visits to emergency departments related to depression, anxiety, and behavioral challenges have been on the rise for years. Suicide rates among young people have escalated, too. Researchers have tracked these trends for more than a decade, and the Covid-19 pandemic only exacerbated the situation.

“It’s all over the news, how shockingly common mental health difficulties are,” says John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology at MIT and an investigator at the McGovern Institute. “It’s worsening by every measure.”

Experts worry that our mental health systems are inadequate to meet the growing need. “This has gone from bad to catastrophic, from my perspective,” says Susan Whitfield-Gabrieli, a professor of psychology at Northeastern University and a research affiliate at the McGovern Institute.

“We really need to come up with novel interventions that target the neural mechanisms that we believe potentiate depression and anxiety.”

Training the brain

One approach may be to help young people learn to modulate some of the relevant brain circuitry themselves. Evidence is accumulating that practicing mindfulness — focusing awareness on the present, typically through meditation — can change patterns of brain activity associated with emotions and mental health.

“There’s been a steady flow of moderate-size studies showing that when you help people gain mindfulness through training programs, you get all kinds of benefits in terms of people feeling less stress, less anxiety, fewer negative emotions, and sometimes more positive ones as well,” says Gabrieli, who is also a professor of brain and cognitive sciences at MIT. “Those are the things you wish for people.”

“If there were a medicine with as much evidence of its effectiveness as mindfulness, it would be flying off the shelves of every pharmacy.”
– John Gabrieli

Researchers have even begun testing mindfulness-based interventions head-to-head against standard treatments for psychiatric disorders. The results of recent studies involving hundreds of adults with anxiety disorders or depression are encouraging. “It’s just as good as the best medicines and the best behavioral treatments that we know a ton about,” Gabrieli says.

Much mindfulness research has focused on adults, but promising data about the benefits of mindfulness training for children and adolescents is emerging as well. In studies supported by the McGovern Institute’s Poitras Center for Psychiatric Disorders Research in 2019 and 2020, Gabrieli and Whitfield-Gabrieli found that sixth-graders in a Boston middle school who participated in eight weeks of mindfulness training experienced reductions in feelings of stress and increases in sustained attention. More recently, Gabrieli and Whitfield-Gabrieli’s teams have shown how new tools can support mindfulness training and make it accessible to more children and their families — from a smartphone app that can be used anywhere to real-time neurofeedback inside an MRI scanner.

Isaac Treves (center), a PhD student in the lab of John Gabrieli, is the lead author of two studies which found that mindfulness training may improve children’s mental health. Treves and his co-authors Kimberly Wang (left) and Cindy Li (right) also practice mindfulness in their daily lives. Photo: Steph Stevens

Mindfulness and mental health

Mindfulness is not just a practice; it is also a trait — an open, non-judgmental way of attending to experiences that some people exhibit more than others. By assessing individuals’ mindfulness with questionnaires that ask about attention and awareness, researchers have found that the trait is associated with many measures of mental health. Gabrieli and his team measured mindfulness in children between the ages of eight and ten and found it was highest in those who were most emotionally resilient to the stress they experienced during the Covid-19 pandemic. As the team reported this year in the journal PLOS One, children who were more mindful rated the impact of the pandemic on their own lives lower than other participants in the study. They also reported lower levels of stress, anxiety, and depression.

Breathe in, breathe out: Children enrolled in John Gabrieli’s mindfulness study learned to trace the outline of their fingers in rhythm with their in-and-out breathing pattern. This multisensory breathing technique has been shown to relieve anxiety and relax the body.

Mindfulness doesn’t come naturally to everyone, but brains are malleable, and both children and adults can cultivate mindfulness with training and practice. In their studies of middle schoolers, Gabrieli and Whitfield-Gabrieli showed that the emotional effects of mindfulness training corresponded to measurable changes in the brain: Functional MRI scans revealed changes in regions involved in stress, negative feelings, and focused attention.

Whitfield-Gabrieli says if mindfulness training makes kids more resilient, it could be a valuable tool for managing symptoms of anxiety and depression before they become severe. “I think it should be part of the standard school day,” she says. “I think we would have a much happier, healthier society if we could be doing this from the ground up.”

Data from Gabrieli’s lab suggests broadly implementing mindfulness training might even pay off in terms of academic achievement. His team found in a 2019 study that middle school students who reported greater levels of mindfulness had, on average, better grades, better scores on standardized tests, fewer absences, and fewer school suspensions than their peers.

Some schools have begun making mindfulness programs available to their students. But those programs don’t reach everyone, and their type and quality vary tremendously. Indeed, not every study of mindfulness training in schools has found the program to significantly benefit participants, which may be because not every approach to mindfulness training is equally effective.

“This is where I think the science matters,” Gabrieli says. “You have to find out what kinds of supports really work and you have to execute them reasonably.”

A recent report from Gabrieli’s lab offers encouraging news: mindfulness training doesn’t have to be in-person. Gabrieli and his team found that children can benefit from practicing mindfulness at home with the help of an app.

When the pandemic closed schools in 2020, school-based mindfulness programs came to an abrupt halt. Soon thereafter, a group called Inner Explorer developed a smartphone app that could teach children mindfulness at home. Gabrieli and his team were eager to find out if this easy-access tool could effectively support children’s emotional well-being.

In October of this year, they reported in the journal Mindfulness that after 40 days of app use, children between the ages of eight and ten reported less stress than they had before beginning mindfulness training. Parents reported that their children were also experiencing fewer negative emotions, such as loneliness and fear.

The outcomes suggest a path toward making evidence-based mindfulness training for children broadly accessible. “Tons of people could do this,” says Gabrieli. “It’s super scalable. It doesn’t cost money; you don’t have to go somewhere. We’re very excited about that.”

Visualizing healthy minds

Mindfulness training may be even more effective when practitioners can visualize what’s happening in their brains. In Whitfield-Gabrieli’s lab, teenagers have had a chance to slide inside an MRI scanner and watch their brain activity shift in real time as they practiced mindfulness meditation. The visualization they see focuses on the brain’s default mode network (DMN), which is most active when attention is not focused on a particular task. Certain patterns of activity in the DMN have been linked to depression, anxiety, and other psychiatric conditions, and mindfulness training may help break these patterns.

McGovern research affiliate Susan Whitfield-Gabrieli in the Martinos Imaging Center. Photo: Caitlin Cunningham

Whitfield-Gabrieli explains that when the mind is free to wander, two hubs of the DMN become active. “Typically, that means we’re engaged in some kind of mental time travel,” she says. That might mean reminiscing about the past or planning for the future, but it can be more distressing when it turns into obsessive rumination or worry. In people with anxiety, depression, and psychosis, these network hubs are often hyperconnected.

“It’s almost as if they’re hijacked,” Whitfield-Gabrieli says. “The more they’re correlated, the more psychopathology one might be experiencing. We wanted to unlock that hyperconnectivity for kids who are suffering from depression and anxiety.” She hoped that by replacing thoughts of the past and the future with focus on the present, mindfulness meditation would rein in overactive DMNs, and she wanted a way to encourage kids to do exactly that.

The neurofeedback tool that she and her colleagues created focuses on the DMN as well as a separate brain region that is called on during attention-demanding tasks. Activity in those regions is monitored with functional MRI and displayed to users in a game-like visualization. Inside the scanner, participants see how that activity changes as they focus on a meditation or when their mind wanders. As their mind becomes more focused on the present moment, changes in brain activity move a ball toward a target.

Whitfield-Gabrieli says the real-time feedback was motivating for adolescents who participated in a recent study, all of whom had histories of anxiety or depression. “They’re training their brain to tune their mind, and they love it,” she says.

The default mode network (DMN) is a large-scale brain network that is active when a person is not focused on the outside world and the brain is at wakeful rest. The DMN is often over-engaged in adolescents with depression and anxiety, as well as teens at risk for these affective disorders (left). DMN activation and connectivity can be “tuned” to a healthier state through the practice of mindfulness (right).

In March, she and her team reported in Molecular Psychiatry that the neurofeedback tool helped those study participants reduce connectivity in the DMN and engage a more desirable brain state. It’s not the first success the team has had with the approach. Previously, they found that the decreases in DMN connectivity brought about by mindfulness meditation with neurofeedback were associated with reduced hallucinations for patients with schizophrenia. Testing the clinical benefits of the approach in teens is on the horizon; Whitfield-Gabrieli and her collaborators plan to investigate how mindfulness meditation with real-time neurofeedback affects depression symptoms in an upcoming clinical trial.

Whitfield-Gabrieli emphasizes that the neurofeedback is a training tool, helping users improve mindfulness techniques they can later call on anytime, anywhere. While that training currently requires time inside an MRI scanner, she says it may be possible to create an EEG-based version of the approach, which could be deployed in doctors’ offices and other more accessible settings.

Both Gabrieli and Whitfield-Gabrieli continue to explore how mindfulness training impacts different aspects of mental health, in both children and adults and with a range of psychiatric conditions. Whitfield-Gabrieli expects it will be one powerful tool for combating a youth mental health crisis for which there will be no single solution. “I think it’s going to take a village,” she says. “We are all going to have to work together, and we’ll have to come up with some really innovative ways to help.”

A new way to see the activity inside a living cell

Living cells are bombarded with many kinds of incoming molecular signals that influence their behavior. Being able to measure those signals and how cells respond to them through downstream molecular signaling networks could help scientists learn much more about how cells work, including what happens as they age or become diseased.

Right now, this kind of comprehensive study is not possible because current techniques for imaging cells are limited to just a handful of different molecule types within a cell at one time. However, MIT researchers have developed an alternative method that allows them to observe up to seven different molecules at a time, and potentially even more than that.

“There are many examples in biology where an event triggers a long downstream cascade of events, which then causes a specific cellular function,” says Edward Boyden, the Y. Eva Tan Professor in Neurotechnology. “How does that occur? It’s arguably one of the fundamental problems of biology, and so we wondered, could you simply watch it happen?”


The new approach makes use of green or red fluorescent molecules that flicker on and off at different rates. By imaging a cell over several seconds, minutes, or hours, and then extracting each of the fluorescent signals using a computational algorithm, the amount of each target protein can be tracked as it changes over time.

Boyden, who is also a professor of biological engineering and of brain and cognitive sciences at MIT, a Howard Hughes Medical Institute investigator, and a member of MIT’s McGovern Institute for Brain Research and Koch Institute for Integrative Cancer Research, as well as the co-director of the K. Lisa Yang Center for Bionics, is the senior author of the study, which appears today in Cell. MIT postdoc Yong Qian is the lead author of the paper.

Fluorescent signals

Labeling molecules inside cells with fluorescent proteins has allowed researchers to learn a great deal about the functions of many cellular molecules. This type of study is often done with green fluorescent protein (GFP), which was first deployed for imaging in the 1990s. Since then, several fluorescent proteins that glow in other colors have been developed for experimental use.

However, a typical light microscope can only distinguish two or three of these colors, allowing researchers only a tiny glimpse of the overall activity that is happening inside a cell. If they could track a greater number of labeled molecules, researchers could measure a brain cell’s response to different neurotransmitters during learning, for example, or investigate the signals that prompt a cancer cell to metastasize.

“Ideally, you would be able to watch the signals in a cell as they fluctuate in real time, and then you could understand how they relate to each other. That would tell you how the cell computes,” Boyden says. “The problem is that you can’t watch very many things at the same time.”

In 2020, Boyden’s lab developed a way to simultaneously image up to five different molecules within a cell, by targeting glowing reporters to distinct locations inside the cell. This approach, known as “spatial multiplexing,” allows researchers to distinguish signals for different molecules even though they may all be fluorescing the same color.

In the new study, the researchers took a different approach: Instead of distinguishing signals based on their physical location, they created fluorescent signals that vary over time. The technique relies on “switchable fluorophores” — fluorescent proteins that turn on and off at a specific rate. For this study, Boyden and his group members identified four green switchable fluorophores, and then engineered two more, all of which turn on and off at different rates. They also identified two red fluorescent proteins that switch at different rates, and engineered one additional red fluorophore.

Using four switchable fluorophores, MIT researchers were able to label and image four different kinases inside these cells (top four rows). In the bottom row, the cell nuclei are labeled in blue.
Image: Courtesy of the researchers

Each of these switchable fluorophores can be used to label a different type of molecule within a living cell, such as an enzyme, signaling protein, or part of the cell cytoskeleton. After imaging the cell for several minutes, hours, or even days, the researchers use a computational algorithm to pick out the specific signal from each fluorophore, analogous to how the human ear can pick out different frequencies of sound.

“In a symphony orchestra, you have high-pitched instruments, like the flute, and low-pitched instruments, like a tuba. And in the middle are instruments like the trumpet. They all have different sounds, and our ear sorts them out,” Boyden says.

The mathematical technique that the researchers used to analyze the fluorophore signals is known as linear unmixing. This method can extract the distinct signal of each fluorophore, much as a Fourier transform separates a piece of music into its component pitches.
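In spirit, the unmixing step can be written as a small least-squares problem at every pixel, as in the synthetic sketch below. The square-wave reference time courses, the noise level, and the use of nonnegative least squares are illustrative assumptions, not the authors’ published algorithm.

```python
# Sketch of temporal linear unmixing: each pixel's fluorescence trace is modeled
# as a weighted sum of known fluorophore switching time courses, and nonnegative
# least squares recovers each fluorophore's abundance at that pixel.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
n_frames, n_fluors, n_pixels = 200, 4, 1000

# Reference time courses: square waves flickering at different rates, standing
# in for the measured on-off kinetics of each switchable fluorophore.
t = np.arange(n_frames)
refs = np.stack([(np.sin(2 * np.pi * t / period) > 0).astype(float)
                 for period in (10, 23, 41, 67)], axis=1)    # (frames, fluorophores)

true_abund = rng.uniform(0, 1, (n_fluors, n_pixels))         # ground truth to recover
movie = refs @ true_abund + 0.05 * rng.standard_normal((n_frames, n_pixels))

# One small nonnegative least-squares problem per pixel.
est = np.stack([nnls(refs, movie[:, p])[0] for p in range(n_pixels)], axis=1)
print("mean absolute abundance error:", np.abs(est - true_abund).mean())
```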

Once this analysis is complete, the researchers can see when and where each of the fluorescently labeled molecules were found in the cell during the entire imaging period. The imaging itself can be done with a simple light microscope, with no specialized equipment required.

Biological phenomena

In this study, the researchers demonstrated their approach by labeling six different molecules involved in the cell division cycle in mammalian cells. This allowed them to identify patterns in how the levels of enzymes called cyclin-dependent kinases change as a cell progresses through the cell cycle.

The researchers also showed that they could label other types of kinases, which are involved in nearly every aspect of cell signaling, as well as cell structures and organelles such as the cytoskeleton and mitochondria. In addition to their experiments using mammalian cells grown in a lab dish, the researchers showed that this technique could work in the brains of zebrafish larvae.

This method could be useful for observing how cells respond to any kind of input, such as nutrients, immune system factors, hormones, or neurotransmitters, according to the researchers. It could also be used to study how cells respond to changes in gene expression or genetic mutations. All of these factors play important roles in biological phenomena such as growth, aging, cancer, neurodegeneration, and memory formation.

“You could consider all of these phenomena to represent a general class of biological problem, where some short-term event — like eating a nutrient, learning something, or getting an infection — generates a long-term change,” Boyden says.

In addition to pursuing those types of studies, Boyden’s lab is also working on expanding the repertoire of switchable fluorophores so that they can study even more signals within a cell. They also hope to adapt the system so that it could be used in mouse models.

The research was funded by an Alana Fellowship, K. Lisa Yang, John Doerr, Jed McCaleb, James Fickel, Ashar Aziz, the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics at MIT, the Howard Hughes Medical Institute, and the National Institutes of Health.