Deep neural networks show promise as models of human hearing

Computational models that mimic the structure and function of the human auditory system could help researchers design better hearing aids, cochlear implants, and brain-machine interfaces. A new study from MIT has found that modern computational models derived from machine learning are moving closer to this goal.

In the largest study yet of deep neural networks that have been trained to perform auditory tasks, the MIT team showed that most of these models generate internal representations that share properties of representations seen in the human brain when people are listening to the same sounds.

The study also offers insight into how best to train this type of model: The researchers found that models trained on auditory input that includes background noise more closely mimic the activation patterns of the human auditory cortex.

“What sets this study apart is it is the most comprehensive comparison of these kinds of models to the auditory system so far. The study suggests that models that are derived from machine learning are a step in the right direction, and it gives us some clues as to what tends to make them better models of the brain,” says Josh McDermott, an associate professor of brain and cognitive sciences at MIT, a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines, and the senior author of the study.

MIT graduate student Greta Tuckute and Jenelle Feather PhD ’22 are the lead authors of the open-access paper, which appears today in PLOS Biology.

Models of hearing

Deep neural networks are computational models that consist of many layers of information-processing units that can be trained on huge volumes of data to perform specific tasks. This type of model has become widely used in many applications, and neuroscientists have begun to explore the possibility that these systems can also be used to describe how the human brain performs certain tasks.

“These models that are built with machine learning are able to mediate behaviors on a scale that really wasn’t possible with previous types of models, and that has led to interest in whether or not the representations in the models might capture things that are happening in the brain,” Tuckute says.

When a neural network is performing a task, its processing units generate activation patterns in response to each audio input it receives, such as a word or other type of sound. Those model representations of the input can be compared to the activation patterns seen in fMRI brain scans of people listening to the same input.
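
To make that comparison concrete, here is a minimal sketch of one common approach: fit a cross-validated ridge regression from a model layer’s activations to each voxel’s measured response and score the held-out predictions. The array sizes and the use of scikit-learn are illustrative assumptions, not the study’s actual pipeline.

```python
# Illustrative sketch (not the study's actual pipeline): predict each fMRI
# voxel's response to a set of sounds from one model layer's activations,
# using cross-validated ridge regression, a common model-to-brain comparison.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_sounds, n_units, n_voxels = 165, 512, 200             # hypothetical sizes
model_acts = rng.standard_normal((n_sounds, n_units))   # one row per sound
voxel_resp = rng.standard_normal((n_sounds, n_voxels))  # measured fMRI responses

scores = np.zeros(n_voxels)
for train, test in KFold(n_splits=5).split(model_acts):
    reg = RidgeCV(alphas=np.logspace(-2, 5, 8))
    reg.fit(model_acts[train], voxel_resp[train])
    pred = reg.predict(model_acts[test])
    for v in range(n_voxels):  # correlation of predicted vs. measured, per voxel
        scores[v] += np.corrcoef(pred[:, v], voxel_resp[test, v])[0, 1] / 5

print(f"median voxel prediction r = {np.median(scores):.3f}")
```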

In 2018, McDermott and then-graduate student Alexander Kell reported that when they trained a neural network to perform auditory tasks (such as recognizing words from an audio signal), the internal representations generated by the model showed similarity to those seen in fMRI scans of people listening to the same sounds.

Since then, these types of models have become widely used, so McDermott’s research group set out to evaluate a larger set of models, to see if the ability to approximate the neural representations seen in the human brain is a general trait of these models.

For this study, the researchers analyzed nine publicly available deep neural network models that had been trained to perform auditory tasks, and they also created 14 models of their own, based on two different architectures. Most of these models were trained to perform a single task — recognizing words, identifying the speaker, recognizing environmental sounds, or identifying musical genre — while two of them were trained to perform multiple tasks.

When the researchers presented these models with natural sounds that had been used as stimuli in human fMRI experiments, they found that the internal model representations tended to exhibit similarity with those generated by the human brain. The models whose representations were most similar to those seen in the brain were models that had been trained on more than one task and had been trained on auditory input that included background noise.

“If you train models in noise, they give better brain predictions than if you don’t, which is intuitively reasonable because a lot of real-world hearing involves hearing in noise, and that’s plausibly something the auditory system is adapted to,” Feather says.
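
One common way to implement that kind of training, sketched below purely as an assumption (this is not the paper’s actual data pipeline), is to rescale a noise recording so it mixes with each clean training clip at a chosen signal-to-noise ratio.

```python
# Hypothetical sketch of noise augmentation: scale a noise track so the
# mixture has a target signal-to-noise ratio (SNR), then add it to the signal.
import numpy as np

def mix_at_snr(signal: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    noise = noise[: len(signal)]
    sig_power = np.mean(signal ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12       # avoid division by zero
    scale = np.sqrt(sig_power / (noise_power * 10 ** (snr_db / 10)))
    return signal + scale * noise

rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)  # stand-in "speech"
babble = rng.standard_normal(16000)                         # stand-in noise
noisy = mix_at_snr(clean, babble, snr_db=3.0)               # 3 dB SNR mixture
```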

Hierarchical processing

The new study also supports the idea that the human auditory cortex has some degree of hierarchical organization, in which processing is divided into stages that support distinct computational functions. As in the 2018 study, the researchers found that representations generated in earlier stages of the model most closely resemble those seen in the primary auditory cortex, while representations generated in later model stages more closely resemble those generated in brain regions beyond the primary cortex.

Additionally, the researchers found that models that had been trained on different tasks were better at replicating different aspects of audition. For example, models trained on a speech-related task more closely resembled speech-selective areas.

“Even though the model has seen the exact same training data and the architecture is the same, when you optimize for one particular task, you can see that it selectively explains specific tuning properties in the brain,” Tuckute says.

McDermott’s lab now plans to make use of their findings to try to develop models that are even more successful at reproducing human brain responses. In addition to helping scientists learn more about how the brain may be organized, such models could also be used to help develop better hearing aids, cochlear implants, and brain-machine interfaces.

“A goal of our field is to end up with a computer model that can predict brain responses and behavior. We think that if we are successful in reaching that goal, it will open a lot of doors,” McDermott says.

The research was funded by the National Institutes of Health, an Amazon Fellowship from the Science Hub, an International Doctoral Fellowship from the American Association of University Women, an MIT Friends of McGovern Institute Fellowship, a fellowship from the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center at MIT, and a Department of Energy Computational Science Graduate Fellowship.

Season’s Greetings from the McGovern Institute

This year’s holiday greeting (video above) was inspired by research conducted in John Gabrieli’s lab, which found that practicing mindfulness reduced children’s stress levels and negative emotions during the pandemic. These findings contribute to a growing body of evidence that practicing mindfulness can change patterns of brain activity associated with emotions and mental health.

Coloring is one form of mindfulness, or focusing awareness on the present. Visit our postcard collection to download and color your own brain-themed postcards, and may the spirit of mindfulness bring you peace in the year ahead!

Video credits:
Joseph Laney (illustration)
JR Narrows, Space Lute (sound design)
Jacob Pryor (animation)

A mindful McGovern community

Mindfulness is the practice of maintaining a state of complete awareness of one’s thoughts, emotions, or experiences on a moment-to-moment basis. McGovern researchers have shown that practicing mindfulness reduces anxiety and supports emotional resilience.

In a survey distributed to the McGovern Institute community, 57% of the 74 researchers, faculty, and staff who responded said that they practice mindfulness as a way to reduce anxiety and stress.

Here are a few of their stories.

Fernanda De La Torre

MIT graduate student Fernanda De La Torre. Photo: Steph Stevens

Fernanda De La Torre is a graduate student in MIT’s Department of Brain and Cognitive Sciences, where she is advised by Josh McDermott.

Originally from Mexico, De La Torre took an unconventional path to her education in the United States, where she completed her undergraduate studies in computer science and math at Kansas State University. In 2019, she came to MIT as a postbaccalaureate student in the lab of Tomaso Poggio, where she began working on deep-learning theory, an area of machine learning focused on how artificial neural networks modeled on the brain learn to recognize patterns.

A recent recipient of the prestigious Paul and Daisy Soros Fellowship for New Americans, De La Torre now studies multisensory integration during speech perception using deep learning models in Josh McDermott’s lab.

What kind of mindfulness do you practice, how often, and why?

Metta meditation is the type of meditation I come back to the most. I practice 2-3 times per week, sometimes by joining Nikki Mirghafori’s Zoom calls or listening to her and other teachers’ recordings on AudioDharma. I practice because when I observe the patterns of my thoughts, I remember the importance of compassion, including self-compassion. In my experience, metta meditation is a wonderful way to cultivate the two: observation and compassion.

When and why did you start practicing mindfulness?

My first meditation practice was as a first-year post-baccalaureate student here at BCS. Gal Raz (also pictured above) carried a lot of peace and attributed it to meditation; this sparked my curiosity. I started practicing more frequently last summer, after realizing my mental health was not in a good place.

How does mindfulness benefit your research at MIT?

This is hard to answer because I think the benefits of meditation are hard to measure. I find that meditation helps me stay centered and healthy, which can indirectly help the research I do. More directly, some of my initial grad school pursuits were fueled by thoughts during meditation, but I ended up feeling that a lot of these concepts are hard to explore using non-philosophical approaches. So I think meditation is mainly a practice that helps my health, my relationships with others, and my relationship with work (this last one I find most challenging and personally unresolved).

Adam Eisen

MIT graduate student Adam Eisen.

Adam Eisen is a graduate student in MIT’s Department of Brain and Cognitive Sciences, where he is co-advised by Ila Fiete (McGovern Institute) and Earl Miller (Picower Institute).

Eisen completed his undergraduate degree in Applied Mathematics & Computer Engineering at Queen’s University in Kingston, Ontario. Prior to joining MIT, Eisen built computer vision algorithms at the solar aerial inspection company Heliolytics and worked on developing machine learning tools to predict disease outcomes from genetics at The Hospital for Sick Children.

Today, in the Fiete and Miller labs, Eisen develops tools for analyzing the flow of neural activity, and applies them to understand changes in neural states (such as from consciousness to anesthetic-induced unconsciousness).

What kind of mindfulness do you practice, how often, and why?

I mostly practice simple sitting meditation centered on awareness of senses and breathing. On a good week, I meditate about 3-5 times. The reason I practice is the benefits to my general experience of living. Whenever I’m in a prolonged period of consistent meditation, I’m shocked by how much more awareness I have about thoughts, feelings and sensations that are arising in my mind throughout the day. I’m also amazed by how much easier it is to watch my mind and body react to the context around me, without slipping into the usual patterns and habits. I also find mindful benefits in doing yoga, running and playing music, but the core is really centered on meditation practice.

When and why did you start practicing mindfulness?

I’ve been interested in mindfulness and meditation since undergrad as a path to investigating the nature of mind and thought – an interest which also led me into my PhD. I started practicing meditation more seriously at the start of the pandemic to get more firsthand experience with what I had been learning about. I find meditation is one of those things where knowledge and theory can support the practice, but without the experiential component it’s very hard to really start to build an understanding of the core concepts at play.

How does mindfulness benefit your research at MIT?

Mindfulness has definitely informed the kinds of things I’m interested in studying and the questions I’d like to ask – largely in relation to the nature of conscious awareness and the flow of thoughts. Outside of that, I’d like to think that mindfulness benefits my general well-being and spiritual balance, which enables me to do better research.

 

Sugandha Sharma

MIT graduate student Sugandha Sharma. Photo: Steph Stevens

Sugandha (Su) Sharma is a graduate student in MIT’s Department of Brain and Cognitive Sciences (BCS), where she is co-advised by Ila Fiete (McGovern Institute) and Josh Tenenbaum (BCS).

Prior to joining MIT, she studied theoretical neuroscience at the University of Waterloo, where she built neural models of context-dependent decision-making in the prefrontal cortex and spiking neuron models of Bayesian inference based on online learning of priors from life experience.

Today, in the Fiete and Tenenbaum labs, she studies the computational and theoretical principles underlying cognition and intelligence in the human brain.  She is currently exploring the coding principles in the hippocampal circuits implicated in spatial navigation, and their role in cognitive computations like structure learning and relational reasoning.

When did you start practicing mindfulness?

When I first learned to meditate, I was challenged to practice it every day for at least 3 months in a row. I took up the challenge, and by the end of it, the results were profound. My whole perspective towards life changed. It made me more empathetic — I could step into other people’s shoes and be mindful of their situations and feelings; my focus shifted from myself to the big picture — it made me realize how insignificant my life was on the grand scale of the universe, and how worthless it was to be caught up in the small things I was usually worrying about. It somehow also brought selflessness to me. This experience hooked me on meditation and mindfulness for life!

What kind of mindfulness do you practice and why?

I practice mindfulness because it brings awareness. It helps me to be aware of myself, my thoughts, my actions, and my surroundings at each moment in my life, thus helping me stay in and enjoy the present moment. Awareness is of utmost importance since an aware mind always does the right thing. Imagine that you are angry; in that moment, you have lost awareness of yourself. The moment you become aware of yourself, anger goes away. This is why counting sometimes helps to combat anger. If you start counting, that gives you time to think and become aware of yourself and your actions.

Meditating — sitting with my eyes closed and just observing (being aware of) my thoughts — is a yogic technique that helps me clear the noise in my mind and calm it down, making it easier for me to be mindful not only while meditating, but also in general after I am done meditating. Over time, the thoughts vanish, and the mind becomes blank (noiseless). For this reason, practicing meditation regularly makes it easier for me to be mindful all the time.

An added advantage of yoga and meditation is that it helps combat stress by relaxing the mind and body. Many people don’t know what to do when they are stressed, but I am grateful to have this toolkit of yoga and meditation to deal with stressful situations in my life. They help me calm my mind in stressful situations and ensure that instead of reacting to a situation, I instead act mindfully and appropriately to make it right.

K. Lisa Yang Postbaccalaureate Program names new scholars

Funded by philanthropist Lisa Yang, the K. Lisa Yang Postbaccalaureate Scholar Program provides two years of paid laboratory experience, mentorship, and education to recent college graduates from backgrounds underrepresented in neuroscience. This year, two young researchers in McGovern Institute labs, Joseph Itiat and Sam Merrow, are the program’s newest scholars.

Itiat moved to the United States from Nigeria in 2019 to pursue a degree in psychology and cognitive neuroscience at Temple University. Today, he is a Yang postbac in John Gabrieli’s lab studying the relationship between learning and value processes and their influence on future-oriented decision-making. Ultimately, Itiat hopes to develop models that map the underlying mechanisms driving these processes.

“Being African, with limited research experience and little representation in the domain of neuroscience research,” Itiat says, “I chose to pursue a postbaccalaureate research program to prepare me for a top graduate school and a career in cognitive neuroscience.”

Merrow first fell in love with science while working at the Barrow Neurological Institute in Arizona during high school. After graduating from Simmons University in Boston, Massachusetts, Merrow joined Guoping Feng’s lab as a Yang postbac to pursue research on glial cells and brain disorders. “As a queer, nonbinary, LatinX person, I have not met anyone like me in my field, nor have I had role models that hold a similar identity to myself,” says Merrow.

“My dream is to one day become a professor, where I will be able to show others that science is for anyone.”

Previous Yang postbacs include Alex Negron, Zoe Pearce, Ajani Stewart, and Maya Taliaferro.

What does the future hold for generative AI?

Speaking at the “Generative AI: Shaping the Future” symposium on Nov. 28, the kickoff event of MIT’s Generative AI Week, keynote speaker and iRobot co-founder Rodney Brooks warned attendees against uncritically overestimating the capabilities of this emerging technology, which underpins increasingly powerful tools like OpenAI’s ChatGPT and Google’s Bard.

“Hype leads to hubris, and hubris leads to conceit, and conceit leads to failure,” cautioned Brooks, who is also a professor emeritus at MIT, a former director of the Computer Science and Artificial Intelligence Laboratory (CSAIL), and founder of Robust.AI.

“No one technology has ever surpassed everything else,” he added.

The symposium, which drew hundreds of attendees from academia and industry to the Institute’s Kresge Auditorium, was laced with messages of hope about the opportunities generative AI offers for making the world a better place, including through art and creativity, interspersed with cautionary tales about what could go wrong if these AI tools are not developed responsibly.

Generative AI is a term for machine-learning models that learn to generate new material that looks like the data they were trained on. These models have exhibited some incredible capabilities, such as the ability to produce human-like creative writing, translate languages, generate functional computer code, or craft realistic images from text prompts.

In her opening remarks to launch the symposium, MIT President Sally Kornbluth highlighted several projects faculty and students have undertaken to use generative AI to make a positive impact in the world. For example, the work of the Axim Collaborative, an online education initiative launched by MIT and Harvard, includes exploring the educational aspects of generative AI to help underserved students.

The Institute also recently announced seed grants for 27 interdisciplinary faculty research projects centered on how AI will transform people’s lives across society.

In hosting Generative AI Week, MIT hopes to not only showcase this type of innovation, but also generate “collaborative collisions” among attendees, Kornbluth said.

Collaboration involving academics, policymakers, and industry will be critical if we are to safely integrate a rapidly evolving technology like generative AI in ways that are humane and help humans solve problems, she told the audience.

“I honestly cannot think of a challenge more closely aligned with MIT’s mission. It is a profound responsibility, but I have every confidence that we can face it, if we face it head on and if we face it as a community,” she said.

While generative AI holds the potential to help solve some of the planet’s most pressing problems, the emergence of these powerful machine learning models has blurred the distinction between science fiction and reality, said CSAIL Director Daniela Rus in her opening remarks. It is no longer a question of whether we can make machines that produce new content, she said, but how we can use these tools to enhance businesses and ensure sustainability. 

“Today, we will discuss the possibility of a future where generative AI does not just exist as a technological marvel, but stands as a source of hope and a force for good,” said Rus, who is also the Andrew and Erna Viterbi Professor in the Department of Electrical Engineering and Computer Science.

But before the discussion dove deeply into the capabilities of generative AI, attendees were first asked to ponder their humanity, as MIT Professor Joshua Bennett read an original poem.

Bennett, a professor in the MIT Literature Section and Distinguished Chair of the Humanities, was asked to write a poem about what it means to be human, and drew inspiration from his daughter, who was born three weeks ago.

The poem told of his experiences as a boy watching Star Trek with his father and touched on the importance of passing traditions down to the next generation.

In his keynote remarks, Brooks set out to unpack some of the deep, scientific questions surrounding generative AI, as well as explore what the technology can tell us about ourselves.

To begin, he sought to dispel some of the mystery swirling around generative AI tools like ChatGPT by explaining the basics of how this large language model works. ChatGPT, for instance, generates text one word at a time by determining what the next word should be in the context of what it has already written. While a human might write a story by thinking about entire phrases, ChatGPT only focuses on the next word, Brooks explained.

ChatGPT is built on GPT-3.5, a machine-learning model that has 175 billion parameters and was exposed to billions of pages of text on the web during training. (The newest iteration, GPT-4, is even larger.) It learns correlations between words in this massive corpus of text and uses this knowledge to propose what word might come next when given a prompt.
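
The loop below is a toy illustration of that word-by-word process, with a tiny hand-written bigram table standing in for the 175-billion-parameter model. It is in no way how ChatGPT is implemented, but the generation step, choosing each next word from learned word correlations given what has already been written, has the same shape.

```python
# Toy next-word generation: each step samples one word conditioned only on
# the previous word, from a small hand-written probability table.
import random

bigram_probs = {                      # hypothetical "learned" correlations
    "the":    {"robot": 0.6, "model": 0.4},
    "robot":  {"writes": 0.7, "learns": 0.3},
    "model":  {"writes": 0.5, "learns": 0.5},
    "writes": {"poetry": 1.0},
    "learns": {"poetry": 1.0},
}

def generate(prompt: str, n_words: int) -> str:
    words = prompt.split()
    for _ in range(n_words):
        options = bigram_probs.get(words[-1])
        if not options:               # no known continuation; stop early
            break
        nxt = random.choices(list(options), weights=list(options.values()))[0]
        words.append(nxt)             # the text grows one word at a time
    return " ".join(words)

print(generate("the", 4))             # e.g. "the robot writes poetry"
```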

The model has demonstrated some incredible capabilities, such as the ability to write a sonnet about robots in the style of Shakespeare’s famous Sonnet 18. During his talk, Brooks showcased the sonnet he asked ChatGPT to write side-by-side with his own sonnet.

But while researchers still don’t fully understand exactly how these models work, Brooks assured the audience that generative AI’s seemingly incredible capabilities are not magic, and it doesn’t mean these models can do anything.

His biggest fears about generative AI don’t revolve around models that could someday surpass human intelligence. Rather, he is most worried about researchers who may throw away decades of excellent work that was nearing a breakthrough, just to jump on shiny new advancements in generative AI; venture capital firms that blindly swarm toward technologies that can yield the highest margins; or the possibility that a whole generation of engineers will forget about other forms of software and AI.

At the end of the day, those who believe generative AI can solve the world’s problems and those who believe it will only generate new problems have at least one thing in common: Both groups tend to overestimate the technology, he said.

“What is the conceit with generative AI? The conceit is that it is somehow going to lead to artificial general intelligence. By itself, it is not,” Brooks said.

Following Brooks’ presentation, a group of MIT faculty spoke about their work using generative AI and participated in a panel discussion about future advances, important but underexplored research topics, and the challenges of AI regulation and policy.

The panel consisted of Jacob Andreas, an associate professor in the MIT Department of Electrical Engineering and Computer Science (EECS) and a member of CSAIL; Antonio Torralba, the Delta Electronics Professor of EECS and a member of CSAIL; Ev Fedorenko, an associate professor of brain and cognitive sciences and an investigator at the McGovern Institute for Brain Research at MIT; and Armando Solar-Lezama, a Distinguished Professor of Computing and associate director of CSAIL. It was moderated by William T. Freeman, the Thomas and Gerd Perkins Professor of EECS and a member of CSAIL.

The panelists discussed several potential future research directions around generative AI, including the possibility of integrating perceptual systems, drawing on human senses like touch and smell, rather than focusing primarily on language and images. The researchers also spoke about the importance of engaging with policymakers and the public to ensure generative AI tools are produced and deployed responsibly.

“One of the big risks with generative AI today is the risk of digital snake oil. There is a big risk of a lot of products going out that claim to do miraculous things but in the long run could be very harmful,” Solar-Lezama said.

The morning session concluded with an excerpt from the 1925 science fiction novel “Metropolis,” read by senior Joy Ma, a physics and theater arts major, followed by a roundtable discussion on the future of generative AI. The discussion included Joshua Tenenbaum, a professor in the Department of Brain and Cognitive Sciences and a member of CSAIL; Dina Katabi, the Thuan and Nicole Pham Professor in EECS and a principal investigator in CSAIL and the MIT Jameel Clinic; and Max Tegmark, professor of physics. It was moderated by Daniela Rus.

One focus of the discussion was the possibility of developing generative AI models that can go beyond what we can do as humans, such as tools that can sense someone’s emotions by using electromagnetic signals to understand how a person’s breathing and heart rate are changing.

But one key to integrating AI like this into the real world safely is to ensure that we can trust it, Tegmark said. If we know an AI tool will meet the specifications we insist on, then “we no longer have to be afraid of building really powerful systems that go out and do things for us in the world,” he said.

Tuning the mind to benefit mental health

This story also appears in the Winter 2024 issue of BrainScan.

___

Illustration of a woman sitting at the end of a dock with her head down, arms wrapped around her knees.
Mental health is the defining public health crisis of our time, according to U.S. Surgeon General Vivek Murthy, and the nation’s youth is at the center of this crisis.

Psychiatrists and pediatricians have sounded an alarm. The mental health of youth in the United States is worsening. Youth visits to emergency departments related to depression, anxiety, and behavioral challenges have been on the rise for years. Suicide rates among young people have escalated, too. Researchers have tracked these trends for more than a decade, and the Covid-19 pandemic only exacerbated the situation.

“It’s all over the news, how shockingly common mental health difficulties are,” says John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology at MIT and an investigator at the McGovern Institute. “It’s worsening by every measure.”

Experts worry that our mental health systems are inadequate to meet the growing need. “This has gone from bad to catastrophic, from my perspective,” says Susan Whitfield-Gabrieli, a professor of psychology at Northeastern University and a research affiliate at the McGovern Institute.

“We really need to come up with novel interventions that target the neural mechanisms that we believe potentiate depression and anxiety.”

Training the brain

One approach may be to help young people learn to modulate some of the relevant brain circuitry themselves. Evidence is accumulating that practicing mindfulness — focusing awareness on the present, typically through meditation — can change patterns of brain activity associated with emotions and mental health.

“There’s been a steady flow of moderate-size studies showing that when you help people gain mindfulness through training programs, you get all kinds of benefits in terms of people feeling less stress, less anxiety, fewer negative emotions, and sometimes more positive ones as well,” says Gabrieli, who is also a professor of brain and cognitive sciences at MIT. “Those are the things you wish for people.”

“If there were a medicine with as much evidence of its effectiveness as mindfulness, it would be flying off the shelves of every pharmacy.”
– John Gabrieli

Researchers have even begun testing mindfulness-based interventions head-to-head against standard treatments for psychiatric disorders. The results of recent studies involving hundreds of adults with anxiety disorders or depression are encouraging. “It’s just as good as the best medicines and the best behavioral treatments that we know a ton about,” Gabrieli says.

Much mindfulness research has focused on adults, but promising data about the benefits of mindfulness training for children and adolescents is emerging as well. In studies supported by the McGovern Institute’s Poitras Center for Psychiatric Disorders Research in 2019 and 2020, Gabrieli and Whitfield-Gabrieli found that sixth-graders in a Boston middle school who participated in eight weeks of mindfulness training experienced reductions in feelings of stress and increases in sustained attention. More recently, Gabrieli and Whitfield-Gabrieli’s teams have shown how new tools can support mindfulness training and make it accessible to more children and their families — from a smartphone app that can be used anywhere to real-time neurofeedback inside an MRI scanner.

Isaac Treves (center), a PhD student in the lab of John Gabrieli, is the lead author of two studies which found that mindfulness training may improve children’s mental health. Treves and his co-authors Kimberly Wang (left) and Cindy Li (right) also practice mindfulness in their daily lives. Photo: Steph Stevens

Mindfulness and mental health

Mindfulness is not just a practice but also a trait — an open, non-judgmental way of attending to experiences that some people exhibit more than others. By assessing individuals’ mindfulness with questionnaires that ask about attention and awareness, researchers have found that the trait is associated with many measures of mental health. Gabrieli and his team measured mindfulness in children between the ages of eight and ten and found it was highest in those who were most emotionally resilient to the stress they experienced during the Covid-19 pandemic. As the team reported this year in the journal PLOS One, children who were more mindful rated the impact of the pandemic on their own lives lower than other participants in the study. They also reported lower levels of stress, anxiety, and depression.

Breathe in, breathe out: Children enrolled in John Gabrieli’s mindfulness study learned to trace the outline of their fingers in rhythm with their in-and-out breathing pattern. This multisensory breathing technique has been shown to relieve anxiety and relax the body.

Mindfulness doesn’t come naturally to everyone, but brains are malleable, and both children and adults can cultivate mindfulness with training and practice. In their studies of middle schoolers, Gabrieli and Whitfield-Gabrieli showed that the emotional effects of mindfulness training corresponded to measurable changes in the brain: Functional MRI scans revealed changes in regions involved in stress, negative feelings, and focused attention.

Whitfield-Gabrieli says if mindfulness training makes kids more resilient, it could be a valuable tool for managing symptoms of anxiety and depression before they become severe. “I think it should be part of the standard school day,” she says. “I think we would have a much happier, healthier society if we could be doing this from the ground up.”

Data from Gabrieli’s lab suggests broadly implementing mindfulness training might even pay off in terms of academic achievement. His team found in a 2019 study that middle school students who reported greater levels of mindfulness had, on average, better grades, better scores on standardized tests, fewer absences, and fewer school suspensions than their peers.

Some schools have begun making mindfulness programs available to their students. But those programs don’t reach everyone, and their type and quality vary tremendously. Indeed, not every study of mindfulness training in schools has found the program to significantly benefit participants, which may be because not every approach to mindfulness training is equally effective.

“This is where I think the science matters,” Gabrieli says. “You have to find out what kinds of supports really work and you have to execute them reasonably.”

A recent report from Gabrieli’s lab offers encouraging news: Mindfulness training doesn’t have to be in person. Gabrieli and his team found that children can benefit from practicing mindfulness at home with the help of an app.

When the pandemic closed schools in 2020, school-based mindfulness programs came to an abrupt halt. Soon thereafter, a group called Inner Explorer had developed a smartphone app that could teach children mindfulness at home. Gabrieli and his team were eager to find out if this easy-access tool could effectively support children’s emotional well-being.

In October of this year, they reported in the journal Mindfulness that after 40 days of app use, children between the ages of eight and ten reported less stress than they had before beginning mindfulness training. Parents reported that their children were also experiencing fewer negative emotions, such as loneliness and fear.

The outcomes suggest a path toward making evidence-based mindfulness training for children broadly accessible. “Tons of people could do this,” says Gabrieli. “It’s super scalable. It doesn’t cost money; you don’t have to go somewhere. We’re very excited about that.”

Visualizing healthy minds

Mindfulness training may be even more effective when practitioners can visualize what’s happening in their brains. In Whitfield-Gabrieli’s lab, teenagers have had a chance to slide inside an MRI scanner and watch their brain activity shift in real time as they practiced mindfulness meditation. The visualization they see focuses on the brain’s default mode network (DMN), which is most active when attention is not focused on a particular task. Certain patterns of activity in the DMN have been linked to depression, anxiety, and other psychiatric conditions, and mindfulness training may help break these patterns.

McGovern research affiliate Susan Whitfield-Gabrieli in the Martinos Imaging Center. Photo: Caitlin Cunningham

Whitfield-Gabrieli explains that when the mind is free to wander, two hubs of the DMN become active. “Typically, that means we’re engaged in some kind of mental time travel,” she says. That might mean reminiscing about the past or planning for the future, but it can be more distressing when it turns into obsessive rumination or worry. In people with anxiety, depression, and psychosis, these network hubs are often hyperconnected.

“It’s almost as if they’re hijacked,” Whitfield-Gabrieli says. “The more they’re correlated, the more psychopathology one might be experiencing. We wanted to unlock that hyperconnectivity for kids who are suffering from depression and anxiety.” She hoped that by replacing thoughts of the past and the future with focus on the present, mindfulness meditation would rein in overactive DMNs, and she wanted a way to encourage kids to do exactly that.

The neurofeedback tool that she and her colleagues created focuses on the DMN as well as a separate brain region that is called on during attention-demanding tasks. Activity in those regions is monitored with functional MRI and displayed to users in a game-like visualization. Inside the scanner, participants see how that activity changes as they focus on a meditation or when their mind wanders. As their mind becomes more focused on the present moment, changes in brain activity move a ball toward a target.
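
A hypothetical sketch of that feedback loop appears below: a sliding-window correlation between two simulated DMN hub time series stands in for the connectivity measure, and lower connectivity translates to more progress toward the target. The window length, the mapping, and the simulated signals are all illustrative assumptions, not the lab’s actual software.

```python
# Illustrative neurofeedback mapping (assumed, not the lab's code): correlation
# between two DMN hub signals is converted to "progress toward the target,"
# so weaker hub coupling moves the ball closer to the goal.
import numpy as np

def ball_progress(hub_a: np.ndarray, hub_b: np.ndarray, window: int = 20) -> float:
    r = np.corrcoef(hub_a[-window:], hub_b[-window:])[0, 1]
    return 1.0 - (r + 1) / 2      # r = +1 -> no progress, r = -1 -> full progress

rng = np.random.default_rng(2)
shared = rng.standard_normal(200)                  # simulated common DMN drive
hub_a = shared + 0.3 * rng.standard_normal(200)    # two hyperconnected hubs
hub_b = shared + 0.3 * rng.standard_normal(200)
print(f"progress toward target: {ball_progress(hub_a, hub_b):.2f}")
```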

Whitfield-Gabrieli says the real-time feedback was motivating for the adolescents who participated in a recent study, all of whom had histories of anxiety or depression. “They’re training their brain to tune their mind, and they love it,” she says.

The default mode network (DMN) is a large-scale brain network that is active when a person is not focused on the outside world and the brain is at wakeful rest. The DMN is often over-engaged in adolescents with depression and anxiety, as well as teens at risk for these affective disorders (left). DMN activation and connectivity can be “tuned” to a healthier state through the practice of mindfulness (right).

In March, she and her team reported in Molecular Psychiatry that the neurofeedback tool helped those study participants reduce connectivity in the DMN and engage a more desirable brain state. It’s not the first success the team has had with the approach. Previously, they found that the decreases in DMN connectivity brought about by mindfulness meditation with neurofeedback were associated with reduced hallucinations for patients with schizophrenia. Testing the clinical benefits of the approach in teens is on the horizon; Whitfield-Gabrieli and her collaborators plan to investigate how mindfulness meditation with real-time neurofeedback affects depression symptoms in an upcoming clinical trial.

Whitfield-Gabrieli emphasizes that the neurofeedback is a training tool, helping users improve mindfulness techniques they can later call on anytime, anywhere. While that training currently requires time inside an MRI scanner, she says it may be possible to create an EEG-based version of the approach, which could be deployed in doctors’ offices and other more accessible settings.

Both Gabrieli and Whitfield-Gabrieli continue to explore how mindfulness training impacts different aspects of mental health, in both children and adults and with a range of psychiatric conditions. Whitfield-Gabrieli expects it will be one powerful tool for combating a youth mental health crisis for which there will be no single solution. “I think it’s going to take a village,” she says. “We are all going to have to work together, and we’ll have to come up with some really innovative ways to help.”

A new way to see the activity inside a living cell

Living cells are bombarded with many kinds of incoming molecular signals that influence their behavior. Being able to measure those signals and how cells respond to them through downstream molecular signaling networks could help scientists learn much more about how cells work, including what happens as they age or become diseased.

Right now, this kind of comprehensive study is not possible because current techniques for imaging cells are limited to just a handful of different molecule types within a cell at one time. However, MIT researchers have developed an alternative method that allows them to observe up to seven different molecules at a time, and potentially even more than that.

“There are many examples in biology where an event triggers a long downstream cascade of events, which then causes a specific cellular function,” says Edward Boyden, the Y. Eva Tan Professor in Neurotechnology. “How does that occur? It’s arguably one of the fundamental problems of biology, and so we wondered, could you simply watch it happen?”

The new approach makes use of green or red fluorescent molecules that flicker on and off at different rates. By imaging a cell over several seconds, minutes, or hours, and then extracting each of the fluorescent signals using a computational algorithm, the amount of each target protein can be tracked as it changes over time.

Boyden, who is also a professor of biological engineering and of brain and cognitive sciences at MIT, a Howard Hughes Medical Institute investigator, and a member of MIT’s McGovern Institute for Brain Research and Koch Institute for Integrative Cancer Research, as well as the co-director of the K. Lisa Yang Center for Bionics, is the senior author of the study, which appears today in Cell. MIT postdoc Yong Qian is the lead author of the paper.

Fluorescent signals

Labeling molecules inside cells with fluorescent proteins has allowed researchers to learn a great deal about the functions of many cellular molecules. This type of study is often done with green fluorescent protein (GFP), which was first deployed for imaging in the 1990s. Since then, several fluorescent proteins that glow in other colors have been developed for experimental use.

However, a typical light microscope can only distinguish two or three of these colors, allowing researchers only a tiny glimpse of the overall activity that is happening inside a cell. If they could track a greater number of labeled molecules, researchers could measure a brain cell’s response to different neurotransmitters during learning, for example, or investigate the signals that prompt a cancer cell to metastasize.

“Ideally, you would be able to watch the signals in a cell as they fluctuate in real time, and then you could understand how they relate to each other. That would tell you how the cell computes,” Boyden says. “The problem is that you can’t watch very many things at the same time.”

In 2020, Boyden’s lab developed a way to simultaneously image up to five different molecules within a cell, by targeting glowing reporters to distinct locations inside the cell. This approach, known as “spatial multiplexing,” allows researchers to distinguish signals for different molecules even though they may all be fluorescing the same color.

In the new study, the researchers took a different approach: Instead of distinguishing signals based on their physical location, they created fluorescent signals that vary over time. The technique relies on “switchable fluorophores” — fluorescent proteins that turn on and off at a specific rate. For this study, Boyden and his group members identified four green switchable fluorophores, and then engineered two more, all of which turn on and off at different rates. They also identified two red fluorescent proteins that switch at different rates, and engineered one additional red fluorophore.

Using four switchable fluorophores, MIT researchers were able to label and image four different kinases inside these cells (top four rows). In the bottom row, the cell nuclei are labeled in blue.
Image: Courtesy of the researchers

Each of these switchable fluorophores can be used to label a different type of molecule within a living cell, such as an enzyme, signaling protein, or part of the cell cytoskeleton. After imaging the cell for several minutes, hours, or even days, the researchers use a computational algorithm to pick out the specific signal from each fluorophore, analogous to how the human ear can pick out different frequencies of sound.

“In a symphony orchestra, you have high-pitched instruments, like the flute, and low-pitched instruments, like a tuba. And in the middle are instruments like the trumpet. They all have different sounds, and our ear sorts them out,” Boyden says.

The mathematical technique that the researchers used to analyze the fluorophore signals is known as linear unmixing. This method can extract different fluorophore signals, analogous to how a Fourier transform separates a piece of music into its component pitches.
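
Concretely, if each fluorophore’s on/off temporal signature is known, the observed fluorescence trace can be modeled as a weighted sum of those signatures, and least squares recovers the weights, which reflect how much of each labeled molecule is present. The sketch below illustrates this with synthetic square-wave signatures; the switching rates and noise level are made-up values, not the paper’s.

```python
# Minimal linear-unmixing sketch (assumed setup, not the paper's code): solve
# observed ~ signatures @ abundances for the per-fluorophore abundances.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0, 60, 0.5)                          # imaging time points (s)
# hypothetical reference signatures: square waves switching at distinct rates
signatures = np.stack(
    [(np.sin(2 * np.pi * f * t) > 0).astype(float) for f in (0.05, 0.11, 0.23)],
    axis=1,
)

true_abundance = np.array([2.0, 0.5, 1.2])         # per-fluorophore levels
observed = signatures @ true_abundance + 0.05 * rng.standard_normal(len(t))

estimated, *_ = np.linalg.lstsq(signatures, observed, rcond=None)
print(np.round(estimated, 2))                      # close to [2.0, 0.5, 1.2]
```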

Once this analysis is complete, the researchers can see when and where each of the fluorescently labeled molecules were found in the cell during the entire imaging period. The imaging itself can be done with a simple light microscope, with no specialized equipment required.

Biological phenomena

In this study, the researchers demonstrated their approach by labeling six different molecules involved in the cell division cycle, in mammalian cells. This allowed them to identify patterns in how the levels of enzymes called cyclin-dependent kinases change as a cell progresses through the cell cycle.

The researchers also showed that they could label other types of kinases, which are involved in nearly every aspect of cell signaling, as well as cell structures and organelles such as the cytoskeleton and mitochondria. In addition to their experiments using mammalian cells grown in a lab dish, the researchers showed that this technique could work in the brains of zebrafish larvae.

This method could be useful for observing how cells respond to any kind of input, such as nutrients, immune system factors, hormones, or neurotransmitters, according to the researchers. It could also be used to study how cells respond to changes in gene expression or genetic mutations. All of these factors play important roles in biological phenomena such as growth, aging, cancer, neurodegeneration, and memory formation.

“You could consider all of these phenomena to represent a general class of biological problem, where some short-term event — like eating a nutrient, learning something, or getting an infection — generates a long-term change,” Boyden says.

In addition to pursuing those types of studies, Boyden’s lab is also working on expanding the repertoire of switchable fluorophores so that they can study even more signals within a cell. They also hope to adapt the system so that it could be used in mouse models.

The research was funded by an Alana Fellowship, K. Lisa Yang, John Doerr, Jed McCaleb, James Fickel, Ashar Aziz, the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics at MIT, the Howard Hughes Medical Institute, and the National Institutes of Health.

Search algorithm reveals nearly 200 new kinds of CRISPR systems

Microbial sequence databases contain a wealth of information about enzymes and other molecules that could be adapted for biotechnology. But these databases have grown so large in recent years that they’ve become difficult to search efficiently for enzymes of interest.

Now, scientists at the Broad Institute of MIT and Harvard, the McGovern Institute for Brain Research at MIT, and the National Center for Biotechnology Information (NCBI) at the National Institutes of Health have developed a new search algorithm that has identified 188 new kinds of rare CRISPR systems in bacterial genomes, encompassing thousands of individual systems. The work appears today in Science.

The algorithm, which comes from the lab of CRISPR pioneer Feng Zhang, uses big-data clustering approaches to rapidly search massive amounts of genomic data. The team used their algorithm, called Fast Locality-Sensitive Hashing-based clustering (FLSHclust), to mine three major public databases that contain data from a wide range of unusual bacteria, including ones found in coal mines, breweries, Antarctic lakes, and dog saliva. The scientists found a surprising number and diversity of CRISPR systems, including ones that could make edits to DNA in human cells, others that can target RNA, and many with a variety of other functions.

The new systems could potentially be harnessed to edit mammalian cells with fewer off-target effects than current Cas9 systems. They could also one day be used as diagnostics or serve as molecular records of activity inside cells.

The researchers say their search highlights an unprecedented level of diversity and flexibility of CRISPR and that there are likely many more rare systems yet to be discovered as databases continue to grow.

“Biodiversity is such a treasure trove, and as we continue to sequence more genomes and metagenomic samples, there is a growing need for better tools, like FLSHclust, to search that sequence space to find the molecular gems,” said Zhang, a co-senior author on the study and a core institute member at the Broad.

Zhang is also an investigator at the McGovern Institute for Brain Research at MIT, the James and Patricia Poitras Professor of Neuroscience at MIT with joint appointments in the departments of Brain and Cognitive Sciences and Biological Engineering, and an investigator at the Howard Hughes Medical Institute. Eugene Koonin, a distinguished investigator at the NCBI, is co-senior author on the study as well.

Searching for CRISPR

CRISPR, which stands for Clustered Regularly Interspaced Short Palindromic Repeats, is a bacterial defense system that has been engineered into many tools for genome editing and diagnostics.

To mine databases of protein and nucleic acid sequences for novel CRISPR systems, the researchers developed an algorithm based on an approach borrowed from the big data community. This technique, called locality-sensitive hashing, clusters together objects that are similar but not exactly identical. Using this approach allowed the team to probe billions of protein and DNA sequences — from the NCBI, its Whole Genome Shotgun database, and the Joint Genome Institute — in weeks, whereas previous methods that look for identical objects would have taken months. They designed their algorithm to look for genes associated with CRISPR.
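
The snippet below sketches the general locality-sensitive-hashing idea with MinHash signatures over k-mer “shingles,” which send similar but not identical sequences into shared buckets without all-vs-all comparison. FLSHclust’s actual hashing scheme and scale go far beyond this; every parameter here (k = 4, 16 hashes, 4 bands) is an arbitrary illustration.

```python
# Conceptual MinHash/LSH sketch (illustrative only; not FLSHclust itself):
# similar sequences get similar signatures and tend to share a bucket.
import hashlib

def kmers(seq: str, k: int = 4) -> set[str]:
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def minhash(shingles: set[str], n_hashes: int = 16) -> tuple[int, ...]:
    # one salted hash per signature slot; the minimum over shingles is kept
    return tuple(
        min(int(hashlib.md5(f"{salt}:{s}".encode()).hexdigest(), 16) for s in shingles)
        for salt in range(n_hashes)
    )

seqs = ["MKVLATGCRISPR", "MKVLATGCRISPQ", "AAAAPPPPQQQQ"]  # toy "proteins"
buckets: dict[tuple, list[str]] = {}
for s in seqs:
    sig = minhash(kmers(s))
    for band in range(0, 16, 4):      # band the signature into 4-slot chunks
        buckets.setdefault((band, *sig[band:band + 4]), []).append(s)

# near-identical sequences typically land in at least one common bucket
print({tuple(v) for v in buckets.values() if len(v) > 1})
```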

“This new algorithm allows us to parse through data in a time frame that’s short enough that we can actually recover results and make biological hypotheses,” said Soumya Kannan, who is a co-first author on the study. Kannan was a graduate student in Zhang’s lab when the study began and is currently a postdoctoral researcher and Junior Fellow at Harvard University. Han Altae-Tran, a graduate student in Zhang’s lab during the study and currently a postdoctoral researcher at the University of Washington, was the study’s other co-first author.

“This is a testament to what you can do when you improve on the methods for exploration and use as much data as possible,” said Altae-Tran. “It’s really exciting to be able to improve the scale at which we search.”

New systems

In their analysis, Altae-Tran, Kannan, and their colleagues noticed that the thousands of CRISPR systems they found fell into a few existing and many new categories. They studied several of the new systems in greater detail in the lab.

They found several new variants of known Type I CRISPR systems, which use a guide RNA that is 32 base pairs long rather than the 20-nucleotide guide of Cas9. Because of their longer guide RNAs, these Type I systems could potentially be used to develop more precise gene-editing technology that is less prone to off-target editing. Zhang’s team showed that two of these systems could make short edits in the DNA of human cells. And because these Type I systems are similar in size to CRISPR-Cas9, they could likely be delivered to cells in animals or humans using the same gene-delivery technologies being used today for CRISPR.

One of the Type I systems also showed “collateral activity” — broad degradation of nucleic acids after the CRISPR protein binds its target. Scientists have used similar systems to make infectious disease diagnostics such as SHERLOCK, a tool capable of rapidly sensing a single molecule of DNA or RNA. Zhang’s team thinks the new systems could be adapted for diagnostic technologies as well.

The researchers also uncovered new mechanisms of action for some Type IV CRISPR systems, and a Type VII system that precisely targets RNA, which could potentially be used in RNA editing. Other systems could potentially be used as recording tools — a molecular document of when a gene was expressed — or as sensors of specific activity in a living cell.

Mining data

The scientists say their algorithm could aid in the search for other biochemical systems. “This search algorithm could be used by anyone who wants to work with these large databases for studying how proteins evolve or discovering new genes,” Altae-Tran said.

The researchers add that their findings illustrate not only how diverse CRISPR systems are, but also that most are rare and only found in unusual bacteria. “Some of these microbial systems were exclusively found in water from coal mines,” Kannan said. “If someone hadn’t been interested in that, we may never have seen those systems. Broadening our sampling diversity is really important to continue expanding the diversity of what we can discover.”

This work was supported by the Howard Hughes Medical Institute; K. Lisa Yang and Hock E. Tan Molecular Therapeutics Center at MIT; Broad Institute Programmable Therapeutics Gift Donors; The Pershing Square Foundation, William Ackman and Neri Oxman; James and Patricia Poitras; BT Charitable Foundation; Asness Family Foundation; Kenneth C. Griffin; the Phillips family; David Cheng; and Robert Metcalfe.

The brain may learn about the world the same way some computational models do

To make our way through the world, our brain must develop an intuitive understanding of the physical world around us, which we then use to interpret sensory information coming into the brain.

How does the brain develop that intuitive understanding? Many scientists believe that it may use a process similar to what’s known as “self-supervised learning.” This type of machine learning, originally developed as a way to create more efficient models for computer vision, allows computational models to learn about visual scenes based solely on the similarities and differences between them, with no labels or other information.

A pair of studies from researchers at the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center at MIT offers new evidence supporting this hypothesis. The researchers found that when they trained models known as neural networks using a particular type of self-supervised learning, the resulting models generated activity patterns very similar to those seen in the brains of animals that were performing the same tasks as the models.

The findings suggest that these models are able to learn representations of the physical world that they can use to make accurate predictions about what will happen in that world, and that the mammalian brain may be using the same strategy, the researchers say.

“The theme of our work is that AI designed to help build better robots ends up also being a framework to better understand the brain more generally,” says Aran Nayebi, a postdoc in the ICoN Center. “We can’t say if it’s the whole brain yet, but across scales and disparate brain areas, our results seem to be suggestive of an organizing principle.”

Nayebi is the lead author of one of the studies, co-authored with Rishi Rajalingham, a former MIT postdoc now at Meta Reality Labs, and senior authors Mehrdad Jazayeri, an associate professor of brain and cognitive sciences and a member of the McGovern Institute for Brain Research; and Robert Yang, an assistant professor of brain and cognitive sciences and an associate member of the McGovern Institute. Ila Fiete, director of the ICoN Center, a professor of brain and cognitive sciences, and an associate member of the McGovern Institute, is the senior author of the other study, which was co-led by Mikail Khona, an MIT graduate student, and Rylan Schaeffer, a former senior research associate at MIT.

Both studies will be presented at the 2023 Conference on Neural Information Processing Systems (NeurIPS) in December.

Modeling the physical world

Early models of computer vision mainly relied on supervised learning. Using this approach, models are trained to classify images that are each labeled with a name — cat, car, etc. The resulting models work well, but this type of training requires a great deal of human-labeled data.

To create a more efficient alternative, in recent years researchers have turned to models built through a technique known as contrastive self-supervised learning. This type of learning allows an algorithm to learn to classify objects based on how similar they are to each other, with no external labels provided.

“This is a very powerful method because you can now leverage very large modern data sets, especially videos, and really unlock their potential,” Nayebi says. “A lot of the modern AI that you see now, especially in the last couple years with ChatGPT and GPT-4, is a result of training a self-supervised objective function on a large-scale dataset to obtain a very flexible representation.”

These types of models, also called neural networks, consist of thousands or millions of processing units connected to each other. Each unit has connections of varying strengths to other units in the network. As the network analyzes huge amounts of data, the strengths of those connections change, and in this way the network learns to perform the desired task.

As the model performs a particular task, the activity patterns of different units within the network can be measured. Each unit’s activity can be represented as a firing pattern, similar to the firing patterns of neurons in the brain. Previous work from Nayebi and others has shown that self-supervised models of vision generate activity similar to that seen in the visual processing system of mammalian brains.
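
Comparisons like these are often made with representational similarity analysis (RSA), which asks whether stimuli that evoke similar activity in a model also evoke similar activity in the brain. The sketch below shows the basic computation on made-up data; it illustrates the general method, not the exact analysis pipeline used in these studies.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa_score(model_acts, neural_acts):
    """Representational similarity analysis (RSA), in sketch form.

    Rows are stimuli, columns are model units or recorded neurons.
    The score asks: do stimuli that look alike to the model also
    look alike to the brain?
    """
    model_rdm = pdist(model_acts, metric="correlation")
    neural_rdm = pdist(neural_acts, metric="correlation")
    rho, _ = spearmanr(model_rdm, neural_rdm)
    return rho

# Toy usage: both "systems" see the same 50 stimuli through different
# random projections of a shared latent structure, so RSA is high.
rng = np.random.default_rng(1)
latents = rng.normal(size=(50, 20))
model_acts = latents @ rng.normal(size=(20, 100))
neural_acts = latents @ rng.normal(size=(20, 30))
print(rsa_score(model_acts, neural_acts))
```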

In both of the new NeurIPS studies, the researchers set out to explore whether self-supervised computational models of other cognitive functions might also show similarities to the mammalian brain. In the study led by Nayebi, the researchers trained self-supervised models to predict the future state of their environment across hundreds of thousands of naturalistic videos depicting everyday scenarios.
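
At its core, that objective amounts to predicting what comes next from what has already been seen. The toy example below learns the dynamics of a drifting object from raw state sequences alone, with no labels. It is a drastic simplification of the actual model, which learned from the pixels of naturalistic videos rather than hand-fed states.

```python
import numpy as np

rng = np.random.default_rng(2)

def rollout(T=20):
    """A toy 'video': an object drifting with constant velocity.
    Each frame is summarized by its state (x, y, vx, vy); a real
    model would have to extract such a state from pixels itself."""
    x = rng.uniform(0.0, 1.0, size=2)
    v = rng.uniform(-0.05, 0.05, size=2)
    return np.stack([np.concatenate([x + t * v, v]) for t in range(T)])

# Self-supervised objective: predict the next state from the current
# one, using nothing but the sequences themselves (no labels). Here a
# linear forward model z_next = z_now @ W is fit by least squares.
seqs = [rollout() for _ in range(100)]
Z_now = np.concatenate([z[:-1] for z in seqs])
Z_next = np.concatenate([z[1:] for z in seqs])
W, *_ = np.linalg.lstsq(Z_now, Z_next, rcond=None)
print("mean prediction error:", np.abs(Z_now @ W - Z_next).mean())
```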

“For the last decade or so, the dominant method to build neural network models in cognitive neuroscience is to train these networks on individual cognitive tasks. But models trained this way rarely generalize to other tasks,” Yang says. “Here we test whether we can build models for some aspect of cognition by first training on naturalistic data using self-supervised learning, then evaluating in lab settings.”

Once the model was trained, the researchers had it generalize to a task they call “Mental-Pong.” This is similar to the video game Pong, where a player moves a paddle to hit a ball traveling across the screen. In the Mental-Pong version, the ball disappears shortly before hitting the paddle, so the player has to estimate its trajectory in order to hit the ball.

The researchers found that the model tracked the hidden ball’s trajectory with accuracy similar to that of neurons in the mammalian brain, which a previous study by Rajalingham and Jazayeri had shown simulate the ball’s trajectory — a cognitive phenomenon known as “mental simulation.” Furthermore, the neural activation patterns seen within the model were similar to those seen in the brains of animals as they played the game — specifically, in a part of the brain called the dorsomedial frontal cortex. No other class of computational model has matched the biological data as closely, the researchers say.

“There are many efforts in the machine learning community to create artificial intelligence,” Jazayeri says. “The relevance of these models to neurobiology hinges on their ability to additionally capture the inner workings of the brain. The fact that Aran’s model predicts neural data is really important as it suggests that we may be getting closer to building artificial systems that emulate natural intelligence.”

Navigating the world

The study led by Khona, Schaeffer, and Fiete focused on specialized neurons known as grid cells. These cells, located in the entorhinal cortex, help animals to navigate, working together with place cells located in the hippocampus.

While place cells fire whenever an animal is in a specific location, grid cells fire only when the animal is at one of the vertices of a triangular lattice. Groups of grid cells create overlapping lattices of different sizes, which allows them to encode a large number of positions using a relatively small number of cells.
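
A one-dimensional caricature shows why this scheme is so compact. In the sketch below, two hypothetical modules with coprime periods jointly assign a unique code to every position up to the product of their periods; real grid cells work in two dimensions with continuous phases, but the combinatorial advantage is the same.

```python
# Two hypothetical grid "modules" with coprime periods. The pair of
# firing phases uniquely labels every position up to the product of
# the periods (Chinese remainder theorem), so a few periodic cells
# can cover a large range. A 1-D caricature of the 2-D biology.
periods = (3, 5)
codes = {}
for position in range(periods[0] * periods[1]):
    phase = tuple(position % p for p in periods)
    assert phase not in codes, "code collision"
    codes[phase] = position
print(f"{len(codes)} distinct positions encoded by periods {periods}")
```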

In recent studies, researchers have trained supervised neural networks to mimic grid cell function by predicting an animal’s next location based on its starting point and velocity, a task known as path integration. However, these models hinged on access to privileged information about absolute space at all times — information that the animal does not have.
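
Path integration itself is simple to state: keep a running sum of your own velocity to estimate where you are, a strategy also known as dead reckoning. The sketch below shows the computation directly; the function and data are illustrative stand-ins, not a trained network.

```python
import numpy as np

def path_integrate(start, velocities, dt=1.0):
    """Dead reckoning: estimate position by accumulating self-motion.
    This is the task itself, not a trained network -- the networks in
    these studies learn to carry out a computation like this one."""
    return start + np.cumsum(velocities * dt, axis=0)

# Toy usage: a random 2-D walk defined only by its velocity sequence.
rng = np.random.default_rng(3)
velocities = rng.normal(scale=0.1, size=(100, 2))
trajectory = path_integrate(np.zeros(2), velocities)
print("estimated final position:", trajectory[-1])
```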

Inspired by the striking coding properties of the multiperiodic grid-cell code for space, the MIT team trained a contrastive self-supervised model both to perform this same path integration task and to represent space efficiently while doing so. For the training data, they used sequences of velocity inputs. The model learned to distinguish positions based on their similarity: nearby positions generated similar codes, while more distant positions generated increasingly dissimilar codes.

“It’s similar to training models on images, where if two images are both heads of cats, their codes should be similar, but if one is the head of a cat and one is a truck, then you want their codes to repel,” Khona says. “We’re taking that same idea but applying it to spatial trajectories.”
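
A rough sketch of that idea, applied to space, follows: positions within some radius of one another are treated as positives whose codes are attracted, and all other pairs as negatives whose codes are repelled. The radius, temperature, and exact loss form here are assumptions chosen for illustration, not the objective used in the study.

```python
import numpy as np

def spatial_contrastive_loss(codes, positions, radius=0.1, temp=0.1):
    """Contrastive objective over a spatial trajectory (sketch only).

    Codes of positions closer than `radius` are attracted; codes of
    all other position pairs are repelled, InfoNCE-style.
    """
    n = len(codes)
    sims = codes @ codes.T / temp
    np.fill_diagonal(sims, -np.inf)                 # ignore self-pairs
    dists = np.linalg.norm(positions[:, None] - positions[None], axis=-1)
    pos_mask = (dists < radius) & ~np.eye(n, dtype=bool)
    log_denom = np.log(np.exp(sims).sum(axis=1, keepdims=True))
    # -log p(nearby | all others), averaged over the nearby pairs.
    return np.mean((log_denom - sims)[pos_mask])

# Toy usage: a smooth 2-D trajectory with random (untrained) codes.
rng = np.random.default_rng(4)
positions = np.cumsum(rng.normal(scale=0.03, size=(50, 2)), axis=0)
codes = rng.normal(size=(50, 16))
codes /= np.linalg.norm(codes, axis=1, keepdims=True)
print(spatial_contrastive_loss(codes, positions))
```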

Once the model was trained, the researchers found that the activation patterns of the nodes within the model formed several lattice patterns with different periods, very similar to those formed by grid cells in the brain.

“What excites me about this work is that it makes connections between mathematical work on the striking information-theoretic properties of the grid cell code and the computation of path integration,” Fiete says. “While the mathematical work was analytic — what properties does the grid cell code possess? — the approach of optimizing coding efficiency through self-supervised learning and obtaining grid-like tuning is synthetic: It shows what properties might be necessary and sufficient to explain why the brain has grid cells.”

The research was funded by the K. Lisa Yang ICoN Center, the National Institutes of Health, the Simons Foundation, the McKnight Foundation, the McGovern Institute, and the Helen Hay Whitney Foundation.

A multifunctional tool for cognitive neuroscience

A team of researchers at MIT’s McGovern and Picower Institutes has advanced the clinical potential of a thin, flexible fiber designed to simultaneously monitor and manipulate neural activity at targeted sites in the brain. The collaborative team improved upon an earlier model of the multifunctional fiber, developed in the lab of McGovern Institute Associate Investigator Polina Anikeeva, to explore dynamic changes to neural signaling as large animals engage in a working memory task. The results appear Oct. 6 in Science Advances.

The new device, developed by Indie Garwood, who recently received her PhD in the Harvard-MIT Program in Health Sciences and Technology, includes four microelectrodes for detecting neural activity and two microfluidic channels through which drugs can be delivered. This means scientists can deliver a drug that alters neural signaling within a particular part of the brain, then monitor the consequences for local brain activity. The technology was a collaborative effort between Anikeeva, who is also the Matoula S. Salapatas Professor in Materials Science and Engineering and a professor of brain and cognitive sciences, and Picower Institute Investigators Emery Brown and Earl Miller, who jointly supervised Garwood in developing a multifunctional neurotechnology for larger, translational animal models, which are necessary to investigate the neural circuits that underlie high-level cognitive functions. With further development and testing, similar devices might one day be deployed to diagnose or treat brain disorders in human patients.

Brown is the Edward Hood Taplin Professor of Medical Engineering and Computational Neuroscience in the Picower Institute, the Institute for Medical Engineering and Science, and the Department of Brain and Cognitive Sciences, as well as an anesthesiologist at Massachusetts General Hospital and Harvard Medical School. Miller is the Picower Professor of Neuroscience and a professor of brain and cognitive sciences at MIT.

The new multifunctional fiber is not the first produced by Anikeeva and her team. An earlier model engineered in their lab has already reached the neuroscience community, whose members use it to simultaneously monitor and manipulate neural activity in the brains of mice and rats. But for studies in larger animals, the existing tools for delivering drugs to the brain were rigid, bulky devices that were both fragile and prone to causing tissue damage. A better tool was needed, both to advance cognitive neuroscience research and to set the stage for developing devices that can deliver drugs directly to the brains of patients and monitor the effects.

Like the devices that Anikeeva’s team designed for rodent studies, the new tool is created by first assembling a larger version of the fiber—a preform cylinder with multiple channels that is then heated and stretched until it is thin and long. As the channels narrow, microelectrodes are incorporated into the fiber. The final step is to link the electrodes in the fiber to a connector that will relay data collected inside the brain to a unit in the lab.

The final device is long enough to access areas deep in the brain of a large animal. It is built to withstand rigorous sterilization procedures and to stay in place even in an active animal. And it integrates directly with experimental systems that cognitive neuroscientists already use in their labs. “We really wanted this to be something that we could easily hand somebody and they’re going to know how to implement it in their system,” says Garwood, who led development of the device as a graduate student in Anikeeva’s lab.

Once the new device was developed, Garwood and colleagues in the Miller and Brown labs put it to work. They used the tool to study changes in neural activity as an animal completed a task requiring working memory. The fluid channels in the fiber were used to deliver small amounts of GABA, a neurotransmitter that dampens neuronal activity, to the animal’s premotor cortex, a part of the brain that helps plan movement. At the same time, the device recorded electrical activity from individual neurons, as well as broader patterns of activity in this part of the brain. By monitoring these signals over time, the team learned how neural circuits adapted to the local inhibition they had applied. In another experiment, the team used the device to record neural activity from the putamen, a region deep in the brain involved in reward processing and motivation.

The data collected by the device were extensive and complex, tracking changes that unfolded in the brain over seconds to hours. Interpreting those data required the team to devise new methods of data analysis, which Garwood developed in close collaboration with the Brown lab. Garwood says these methods will be shared with users of the new devices, providing “a roadmap for extracting all of these rich dynamics that you can get out of them.”

These successes, the researchers say, are an important step toward the development of tools to modulate and manipulate neuronal activity in the human brain to benefit patients. For example, they say, a multifunctional fiber might one day be used to more accurately pinpoint the origin of seizures in people with epilepsy, by testing the effects of activating or inhibiting specific brain cells.