Complex, unfamiliar sentences make the brain’s language network work harder

With help from an artificial language network, MIT neuroscientists have discovered what kind of sentences are most likely to fire up the brain’s key language processing centers.

The new study reveals that sentences that are more complex, either because of unusual grammar or unexpected meaning, generate stronger responses in these language processing centers. Sentences that are very straightforward barely engage these regions, and nonsensical sequences of words don’t do much for them either.

For example, the researchers found this brain network was most active when reading unusual sentences such as “Buy sell signals remains a particular,” taken from a publicly available language dataset called C4. However, it went quiet when reading something very straightforward, such as “We were sitting on the couch.”

“The input has to be language-like enough to engage the system,” says Evelina Fedorenko, Associate Professor of Neuroscience at MIT and a member of MIT’s McGovern Institute for Brain Research. “And then within that space, if things are really easy to process, then you don’t have much of a response. But if things get difficult, or surprising, if there’s an unusual construction or an unusual set of words that you’re maybe not very familiar with, then the network has to work harder.”

Fedorenko is the senior author of the study, which appears today in Nature Human Behaviour. MIT graduate student Greta Tuckute is the lead author of the paper.

Processing language

In this study, the researchers focused on language-processing regions found in the left hemisphere of the brain, which include Broca’s area as well as other parts of the left frontal and temporal lobes.

“This language network is highly selective to language, but it’s been harder to actually figure out what is going on in these language regions,” Tuckute says. “We wanted to discover what kinds of sentences, what kinds of linguistic input, drive the left hemisphere language network.”

The researchers began by compiling a set of 1,000 sentences taken from a wide variety of sources — fiction, transcriptions of spoken words, web text, and scientific articles, among many others.

Five human participants read each of the sentences while the researchers measured their language network activity using functional magnetic resonance imaging (fMRI). The researchers then fed those same 1,000 sentences into a large language model — a model similar to ChatGPT, which learns to generate and understand language by predicting the next word in huge amounts of text — and measured the activation patterns of the model in response to each sentence.

Once they had all of those data, the researchers trained a mapping model, known as an “encoding model,” which relates the activation patterns seen in the human brain with those observed in the artificial language model. Once trained, the model could predict how the human language network would respond to any new sentence, based on how the artificial language network responded to that sentence.
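The mapping step can be sketched in a few lines. This is a minimal illustration, not the authors’ actual pipeline: it assumes, hypothetically, that each sentence is summarized by a fixed-length activation vector from the language model and a single averaged fMRI response, and it uses closed-form ridge regression as a stand-in encoding model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 1,000 sentences, each represented by a
# language-model activation vector (here 64-dimensional) and a single
# averaged fMRI response from the language network.
n_sentences, n_features = 1000, 64
X = rng.normal(size=(n_sentences, n_features))        # LM activations
true_w = rng.normal(size=n_features)
y = X @ true_w + 0.1 * rng.normal(size=n_sentences)   # brain responses

# Encoding model: ridge regression from activations to brain response,
# solved in closed form: w = (X'X + lam*I)^-1 X'y.
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

# Predict the brain response to a new, unseen sentence from the
# language model's activations for that sentence.
x_new = rng.normal(size=n_features)
predicted_response = x_new @ w
```

Once such a mapping is fit, any candidate sentence can be scored without putting a person in the scanner, which is what makes the search for “drive” and “suppress” sentences possible.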

The researchers then used the encoding model to identify 500 new sentences that would generate maximal activity in the human brain (the “drive” sentences), as well as sentences that would elicit minimal activity in the brain’s language network (the “suppress” sentences).

In a group of three new human participants, the researchers found these new sentences did indeed drive and suppress brain activity as predicted.

“This ‘closed-loop’ modulation of brain activity during language processing is novel,” Tuckute says. “Our study shows that the model we’re using (that maps between language-model activations and brain responses) is accurate enough to do this. This is the first demonstration of this approach in brain areas implicated in higher-level cognition, such as the language network.”

Linguistic complexity

To figure out what made certain sentences drive activity more than others, the researchers analyzed the sentences based on 11 different linguistic properties, including grammaticality, plausibility, emotional valence (positive or negative), and how easy it is to visualize the sentence content.

For each of those properties, the researchers asked participants from crowd-sourcing platforms to rate the sentences. They also used a computational technique to quantify each sentence’s “surprisal,” or how unexpected it is compared with typical English sentences.
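Surprisal is just the negative log probability a model assigns to the words it sees. The toy sketch below uses a smoothed unigram model over a tiny corpus purely for illustration; a study like this one would instead take probabilities from a large language model.

```python
import math
from collections import Counter

# Toy corpus standing in for the text a language model is trained on.
corpus = "we were sitting on the couch and we were talking".split()
counts = Counter(corpus)
total = sum(counts.values())

def surprisal(word, alpha=1.0):
    # Surprisal in bits: -log2 P(word). Add-alpha smoothing gives
    # unseen words a finite (but large) value.
    p = (counts[word] + alpha) / (total + alpha * (len(counts) + 1))
    return -math.log2(p)

# A frequent word is less surprising than a word the model never saw.
print(surprisal("we"), surprisal("zeugma"))
```

The same idea extends to whole sentences by summing the surprisal of each word given its context, which is how unusual strings like “Buy sell signals remains a particular” end up with high scores.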

This analysis revealed that sentences with higher surprisal generate higher responses in the brain. This is consistent with previous studies showing people have more difficulty processing sentences with higher surprisal, the researchers say.

Another property that correlated with the language network’s responses was linguistic complexity, which the researchers measured by how closely a sentence adheres to the rules of English grammar and how plausible it is, that is, how much sense its content makes apart from the grammar.

Sentences at either end of the spectrum — either extremely simple, or so complex that they make no sense at all — evoked very little activation in the language network. The largest responses came from sentences that make some sense but require work to figure them out, such as “Jiffy Lube of — of therapies, yes,” which came from the Corpus of Contemporary American English dataset.

“We found that the sentences that elicit the highest brain response have a weird grammatical thing and/or a weird meaning,” Fedorenko says. “There’s something slightly unusual about these sentences.”

The researchers now plan to see if they can extend these findings in speakers of languages other than English. They also hope to explore what type of stimuli may activate language processing regions in the brain’s right hemisphere.

The research was funded by an Amazon Fellowship from the Science Hub, an International Doctoral Fellowship from the American Association of University Women, the MIT-IBM Watson AI Lab, the National Institutes of Health, the McGovern Institute, the Simons Center for the Social Brain, and MIT’s Department of Brain and Cognitive Sciences.

K. Lisa Yang Postbaccalaureate Program names new scholars

Funded by philanthropist Lisa Yang, the K. Lisa Yang Postbaccalaureate Scholar Program provides two years of paid laboratory experience, mentorship, and education to recent college graduates from backgrounds underrepresented in neuroscience. This year, two young researchers in McGovern Institute labs, Joseph Itiat and Sam Merrow, have been named Yang postbac scholars.

Itiat moved to the United States from Nigeria in 2019 to pursue a degree in psychology and cognitive neuroscience at Temple University. Today, he is a Yang postbac in John Gabrieli’s lab studying the relationship between learning and value processes and their influence on future-oriented decision-making. Ultimately, Itiat hopes to develop models that map the underlying mechanisms driving these processes.

“Being African, with limited research experience and little representation in the domain of neuroscience research,” Itiat says, “I chose to pursue a postbaccalaureate research program to prepare me for a top graduate school and a career in cognitive neuroscience.”

Merrow first fell in love with science while working at the Barrow Neurological Institute in Arizona during high school. After graduating from Simmons University in Boston, Massachusetts, Merrow joined Guoping Feng’s lab as a Yang postbac to pursue research on glial cells and brain disorders. “As a queer, nonbinary, LatinX person, I have not met anyone like me in my field, nor have I had role models that hold a similar identity to myself,” says Merrow.

“My dream is to one day become a professor, where I will be able to show others that science is for anyone.”

Previous Yang postbacs include Alex Negron, Zoe Pearce, Ajani Stewart, and Maya Taliaferro.

What does the future hold for generative AI?

Speaking at the “Generative AI: Shaping the Future” symposium on Nov. 28, the kickoff event of MIT’s Generative AI Week, keynote speaker and iRobot co-founder Rodney Brooks warned attendees against uncritically overestimating the capabilities of this emerging technology, which underpins increasingly powerful tools like OpenAI’s ChatGPT and Google’s Bard.

“Hype leads to hubris, and hubris leads to conceit, and conceit leads to failure,” cautioned Brooks, who is also a professor emeritus at MIT, a former director of the Computer Science and Artificial Intelligence Laboratory (CSAIL), and founder of Robust.AI.

“No one technology has ever surpassed everything else,” he added.

The symposium, which drew hundreds of attendees from academia and industry to the Institute’s Kresge Auditorium, was laced with messages of hope about the opportunities generative AI offers for making the world a better place, including through art and creativity, interspersed with cautionary tales about what could go wrong if these AI tools are not developed responsibly.

Generative AI is a term used to describe machine-learning models that learn to generate new material resembling the data they were trained on. These models have exhibited some incredible capabilities, such as the ability to produce human-like creative writing, translate languages, generate functional computer code, and craft realistic images from text prompts.

In her opening remarks to launch the symposium, MIT President Sally Kornbluth highlighted several projects faculty and students have undertaken to use generative AI to make a positive impact in the world. For example, the work of the Axim Collaborative, an online education initiative launched by MIT and Harvard, includes exploring the educational aspects of generative AI to help underserved students.

The Institute also recently announced seed grants for 27 interdisciplinary faculty research projects centered on how AI will transform people’s lives across society.

In hosting Generative AI Week, MIT hopes to not only showcase this type of innovation, but also generate “collaborative collisions” among attendees, Kornbluth said.

Collaboration involving academics, policymakers, and industry will be critical if we are to safely integrate a rapidly evolving technology like generative AI in ways that are humane and help humans solve problems, she told the audience.

“I honestly cannot think of a challenge more closely aligned with MIT’s mission. It is a profound responsibility, but I have every confidence that we can face it, if we face it head on and if we face it as a community,” she said.

While generative AI holds the potential to help solve some of the planet’s most pressing problems, the emergence of these powerful machine learning models has blurred the distinction between science fiction and reality, said CSAIL Director Daniela Rus in her opening remarks. It is no longer a question of whether we can make machines that produce new content, she said, but how we can use these tools to enhance businesses and ensure sustainability. 

“Today, we will discuss the possibility of a future where generative AI does not just exist as a technological marvel, but stands as a source of hope and a force for good,” said Rus, who is also the Andrew and Erna Viterbi Professor in the Department of Electrical Engineering and Computer Science.

But before the discussion dove deeply into the capabilities of generative AI, attendees were first asked to ponder their humanity, as MIT Professor Joshua Bennett read an original poem.

Bennett, a professor in the MIT Literature Section and Distinguished Chair of the Humanities, was asked to write a poem about what it means to be human, and drew inspiration from his daughter, who had been born three weeks earlier.

The poem told of his experiences as a boy watching Star Trek with his father and touched on the importance of passing traditions down to the next generation.

In his keynote remarks, Brooks set out to unpack some of the deep, scientific questions surrounding generative AI, as well as explore what the technology can tell us about ourselves.

To begin, he sought to dispel some of the mystery swirling around generative AI tools like ChatGPT by explaining the basics of how this large language model works. ChatGPT, for instance, generates text one word at a time by determining what the next word should be in the context of what it has already written. While a human might write a story by thinking about entire phrases, ChatGPT only focuses on the next word, Brooks explained.

ChatGPT 3.5 is built on a machine-learning model that has 175 billion parameters and has been exposed to billions of pages of text on the web during training. (The newest iteration, ChatGPT 4, is even larger.) It learns correlations between words in this massive corpus of text and uses this knowledge to propose what word might come next when given a prompt.
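The one-word-at-a-time loop Brooks described can be made concrete with a toy stand-in for the model: here a small lookup table plays the role of the 175-billion-parameter network, scoring only a single most-likely next word for each context. This is a sketch of the generation loop, not of how a real language model computes its probabilities.

```python
# Toy "model": maps the words generated so far to the most likely
# next word. A real LLM computes a probability distribution over its
# whole vocabulary at each step instead of a fixed lookup.
next_word = {
    ("the",): "cat",
    ("the", "cat"): "sat",
    ("the", "cat", "sat"): "down",
}

def generate(prompt, max_words=10):
    words = list(prompt)
    for _ in range(max_words):
        candidate = next_word.get(tuple(words))
        if candidate is None:   # model proposes nothing further
            break
        words.append(candidate)  # append one word, then repeat
    return " ".join(words)

print(generate(["the"]))  # the sentence is built one word at a time
```

The key point survives the simplification: at no step does the procedure plan a whole phrase; each word is chosen only in light of the words already emitted.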

The model has demonstrated some incredible capabilities, such as the ability to write a sonnet about robots in the style of Shakespeare’s famous Sonnet 18. During his talk, Brooks showcased the sonnet he asked ChatGPT to write side-by-side with his own sonnet.

But while researchers still don’t fully understand exactly how these models work, Brooks assured the audience that generative AI’s seemingly incredible capabilities are not magic, and it doesn’t mean these models can do anything.

His biggest fears about generative AI don’t revolve around models that could someday surpass human intelligence. Rather, he is most worried about researchers who may throw away decades of excellent work that was nearing a breakthrough, just to jump on shiny new advancements in generative AI; venture capital firms that blindly swarm toward technologies that can yield the highest margins; or the possibility that a whole generation of engineers will forget about other forms of software and AI.

At the end of the day, those who believe generative AI can solve the world’s problems and those who believe it will only generate new problems have at least one thing in common: Both groups tend to overestimate the technology, he said.

“What is the conceit with generative AI? The conceit is that it is somehow going to lead to artificial general intelligence. By itself, it is not,” Brooks said.

Following Brooks’ presentation, a group of MIT faculty spoke about their work using generative AI and participated in a panel discussion about future advances, important but underexplored research topics, and the challenges of AI regulation and policy.

The panel consisted of Jacob Andreas, an associate professor in the MIT Department of Electrical Engineering and Computer Science (EECS) and a member of CSAIL; Antonio Torralba, the Delta Electronics Professor of EECS and a member of CSAIL; Ev Fedorenko, an associate professor of brain and cognitive sciences and an investigator at the McGovern Institute for Brain Research at MIT; and Armando Solar-Lezama, a Distinguished Professor of Computing and associate director of CSAIL. It was moderated by William T. Freeman, the Thomas and Gerd Perkins Professor of EECS and a member of CSAIL.

The panelists discussed several potential future research directions around generative AI, including the possibility of integrating perceptual systems, drawing on human senses like touch and smell, rather than focusing primarily on language and images. The researchers also spoke about the importance of engaging with policymakers and the public to ensure generative AI tools are produced and deployed responsibly.

“One of the big risks with generative AI today is the risk of digital snake oil. There is a big risk of a lot of products going out that claim to do miraculous things but in the long run could be very harmful,” Solar-Lezama said.

The morning session concluded with an excerpt from the 1925 science fiction novel “Metropolis,” read by senior Joy Ma, a physics and theater arts major, followed by a roundtable discussion on the future of generative AI. The discussion included Joshua Tenenbaum, a professor in the Department of Brain and Cognitive Sciences and a member of CSAIL; Dina Katabi, the Thuan and Nicole Pham Professor in EECS and a principal investigator in CSAIL and the MIT Jameel Clinic; and Max Tegmark, professor of physics. It was moderated by Daniela Rus.

One focus of the discussion was the possibility of developing generative AI models that can go beyond what we can do as humans, such as tools that can sense someone’s emotions by using electromagnetic signals to understand how a person’s breathing and heart rate are changing.

But one key to integrating AI like this into the real world safely is to ensure that we can trust it, Tegmark said. If we know an AI tool will meet the specifications we insist on, then “we no longer have to be afraid of building really powerful systems that go out and do things for us in the world,” he said.

Tuning the mind to benefit mental health

This story also appears in the Winter 2024 issue of BrainScan.

___

Illustration of woman sitting at end of a dock with head down, arms wrapped around her knees.
Mental health is the defining public health crisis of our time, according to U.S. Surgeon General Vivek Murthy, and the nation’s youth is at the center of this crisis.

Psychiatrists and pediatricians have sounded an alarm. The mental health of youth in the United States is worsening. Youth visits to emergency departments related to depression, anxiety, and behavioral challenges have been on the rise for years. Suicide rates among young people have escalated, too. Researchers have tracked these trends for more than a decade, and the Covid-19 pandemic only exacerbated the situation.

“It’s all over the news, how shockingly common mental health difficulties are,” says John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology at MIT and an investigator at the McGovern Institute. “It’s worsening by every measure.”

Experts worry that our mental health systems are inadequate to meet the growing need. “This has gone from bad to catastrophic, from my perspective,” says Susan Whitfield-Gabrieli, a professor of psychology at Northeastern University and a research affiliate at the McGovern Institute.

“We really need to come up with novel interventions that target the neural mechanisms that we believe potentiate depression and anxiety.”

Training the brain

One approach may be to help young people learn to modulate some of the relevant brain circuitry themselves. Evidence is accumulating that practicing mindfulness — focusing awareness on the present, typically through meditation — can change patterns of brain activity associated with emotions and mental health.

“There’s been a steady flow of moderate-size studies showing that when you help people gain mindfulness through training programs, you get all kinds of benefits in terms of people feeling less stress, less anxiety, fewer negative emotions, and sometimes more positive ones as well,” says Gabrieli, who is also a professor of brain and cognitive sciences at MIT. “Those are the things you wish for people.”

“If there were a medicine with as much evidence of its effectiveness as mindfulness, it would be flying off the shelves of every pharmacy.”
– John Gabrieli

Researchers have even begun testing mindfulness-based interventions head-to-head against standard treatments for psychiatric disorders. The results of recent studies involving hundreds of adults with anxiety disorders or depression are encouraging. “It’s just as good as the best medicines and the best behavioral treatments that we know a ton about,” Gabrieli says.

Much mindfulness research has focused on adults, but promising data about the benefits of mindfulness training for children and adolescents is emerging as well. In studies supported by the McGovern Institute’s Poitras Center for Psychiatric Disorders Research in 2019 and 2020, Gabrieli and Whitfield-Gabrieli found that sixth-graders in a Boston middle school who participated in eight weeks of mindfulness training experienced reductions in feelings of stress and increases in sustained attention. More recently, Gabrieli and Whitfield-Gabrieli’s teams have shown how new tools can support mindfulness training and make it accessible to more children and their families — from a smartphone app that can be used anywhere to real-time neurofeedback inside an MRI scanner.

Three people practicing mindfulness in MIT Building 46. Woman on left is leaning on a railing, wearing headphones with eyes closed. Man seated in the center holds a bowl and a wooden spoon. Woman on right is seated with legs crossed and eyes closed.
Isaac Treves (center), a PhD student in the lab of John Gabrieli, is the lead author of two studies which found that mindfulness training may improve children’s mental health. Treves and his co-authors Kimberly Wang (left) and Cindy Li (right) also practice mindfulness in their daily lives. Photo: Steph Stevens

Mindfulness and mental health

Mindfulness is not just a practice, it is a trait — an open, non-judgmental way of attending to experiences that some people exhibit more than others. By assessing individuals’ mindfulness with questionnaires that ask about attention and awareness, researchers have found the trait associates with many measures of mental health. Gabrieli and his team measured mindfulness in children between the ages of eight and ten and found it was highest in those who were most emotionally resilient to the stress they experienced during the Covid-19 pandemic. As the team reported this year in the journal PLOS One, children who were more mindful rated the impact of the pandemic on their own lives lower than other participants in the study. They also reported lower levels of stress, anxiety, and depression.

Illustration of a finger tracing the outline of a hand. There is a circle next to the hand with text that says, "Breathe In, Breathe Out. Children enrolled in John Gabrieli’s mindfulness study learned to trace the outline of their fingers in rhythm with their in-and-out breathing pattern. This multisensory breathing technique has been shown to relieve anxiety and relax the body."

Mindfulness doesn’t come naturally to everyone, but brains are malleable, and both children and adults can cultivate mindfulness with training and practice. In their studies of middle schoolers, Gabrieli and Whitfield-Gabrieli showed that the emotional effects of mindfulness training corresponded to measurable changes in the brain: Functional MRI scans revealed changes in regions involved in stress, negative feelings, and focused attention.

Whitfield-Gabrieli says if mindfulness training makes kids more resilient, it could be a valuable tool for managing symptoms of anxiety and depression before they become severe. “I think it should be part of the standard school day,” she says. “I think we would have a much happier, healthier society if we could be doing this from the ground up.”

Data from Gabrieli’s lab suggests broadly implementing mindfulness training might even pay off in terms of academic achievement. His team found in a 2019 study that middle school students who reported greater levels of mindfulness had, on average, better grades, better scores on standardized tests, fewer absences, and fewer school suspensions than their peers.

Some schools have begun making mindfulness programs available to their students. But those programs don’t reach everyone, and their type and quality vary tremendously. Indeed, not every study of mindfulness training in schools has found the program to significantly benefit participants, which may be because not every approach to mindfulness training is equally effective.

“This is where I think the science matters,” Gabrieli says. “You have to find out what kinds of supports really work and you have to execute them reasonably.”

A recent report from Gabrieli’s lab offers encouraging news: mindfulness training doesn’t have to be in-person. Gabrieli and his team found that children can benefit from practicing mindfulness at home with the help of an app.

When the pandemic closed schools in 2020, school-based mindfulness programs came to an abrupt halt. Soon thereafter, a group called Inner Explorer developed a smartphone app that could teach children mindfulness at home. Gabrieli and his team were eager to find out whether this easy-access tool could effectively support children’s emotional well-being.

In October of this year, they reported in the journal Mindfulness that after 40 days of app use, children between the ages of eight and ten reported less stress than they had before beginning mindfulness training. Parents reported that their children were also experiencing fewer negative emotions, such as loneliness and fear.

The outcomes suggest a path toward making evidence-based mindfulness training for children broadly accessible. “Tons of people could do this,” says Gabrieli. “It’s super scalable. It doesn’t cost money; you don’t have to go somewhere. We’re very excited about that.”

Visualizing healthy minds

Mindfulness training may be even more effective when practitioners can visualize what’s happening in their brains. In Whitfield-Gabrieli’s lab, teenagers have had a chance to slide inside an MRI scanner and watch their brain activity shift in real time as they practiced mindfulness meditation. The visualization they see focuses on the brain’s default mode network (DMN), which is most active when attention is not focused on a particular task. Certain patterns of activity in the DMN have been linked to depression, anxiety, and other psychiatric conditions, and mindfulness training may help break these patterns.

McGovern research affiliate Susan Whitfield-Gabrieli in the Martinos Imaging Center. Photo: Caitlin Cunningham

Whitfield-Gabrieli explains that when the mind is free to wander, two hubs of the DMN become active. “Typically, that means we’re engaged in some kind of mental time travel,” she says. That might mean reminiscing about the past or planning for the future, but can be more distressing when it turns into obsessive rumination or worry. In people with anxiety, depression, and psychosis, these network hubs are often hyperconnected.

“It’s almost as if they’re hijacked,” Whitfield-Gabrieli says. “The more they’re correlated, the more psychopathology one might be experiencing. We wanted to unlock that hyperconnectivity for kids who are suffering from depression and anxiety.” She hoped that by replacing thoughts of the past and the future with focus on the present, mindfulness meditation would rein in overactive DMNs, and she wanted a way to encourage kids to do exactly that.

The neurofeedback tool that she and her colleagues created focuses on the DMN as well as a separate brain region that is called on during attention-demanding tasks. Activity in those regions is monitored with functional MRI and displayed to users in a game-like visualization. Inside the scanner, participants see how that activity changes as they focus on a meditation or when their mind wanders. As their mind becomes more focused on the present moment, changes in brain activity move a ball toward a target.
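As a rough sketch of how such a display might map brain signals to the ball, the function below contrasts an attention-network signal against a DMN signal and converts the result to a position on the track. The signal names, gain, and scaling are illustrative assumptions, not the study’s actual implementation.

```python
def ball_position(attention_signal, dmn_signal, gain=0.5):
    """Map an attention-vs-DMN contrast to a ball position in [0, 1],
    where 1.0 means the ball has reached the target.

    Both inputs are assumed to be normalized activity estimates from
    the real-time fMRI stream (hypothetical units)."""
    contrast = attention_signal - dmn_signal
    pos = 0.5 + gain * contrast          # 0.5 = neutral starting point
    return max(0.0, min(1.0, pos))       # clamp to the visible track
```

The design choice worth noting is that feedback is driven by a relative contrast rather than raw DMN activity alone, so the ball rewards engaging present-moment attention, not just any global change in signal.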

Whitfield-Gabrieli says the real-time feedback was motivating for the adolescents who participated in a recent study, all of whom had histories of anxiety or depression. “They’re training their brain to tune their mind, and they love it,” she says.

MRI images of two brains, one showing an active DMN and the other showing a healthy DMN.
The default mode network (DMN) is a large-scale brain network that is active when a person is not focused on the outside world and the brain is at wakeful rest. The DMN is often over-engaged in adolescents with depression and anxiety, as well as teens at risk for these affective disorders (left). DMN activation and connectivity can be “tuned” to a healthier state through the practice of mindfulness (right).

In March, she and her team reported in Molecular Psychiatry that the neurofeedback tool helped those study participants reduce connectivity in the DMN and engage a more desirable brain state. It’s not the first success the team has had with the approach. Previously, they found that the decreases in DMN connectivity brought about by mindfulness meditation with neurofeedback were associated with reduced hallucinations for patients with schizophrenia. Testing the clinical benefits of the approach in teens is on the horizon; Whitfield-Gabrieli and her collaborators plan to investigate how mindfulness meditation with real-time neurofeedback affects depression symptoms in an upcoming clinical trial.

Whitfield-Gabrieli emphasizes that the neurofeedback is a training tool, helping users improve mindfulness techniques they can later call on anytime, anywhere. While that training currently requires time inside an MRI scanner, she says it may be possible to create an EEG-based version of the approach, which could be deployed in doctors’ offices and other more accessible settings.

Both Gabrieli and Whitfield-Gabrieli continue to explore how mindfulness training impacts different aspects of mental health, in both children and adults and across a range of psychiatric conditions. Whitfield-Gabrieli expects it will be one powerful tool for combating a youth mental health crisis for which there will be no single solution. “I think it’s going to take a village,” she says. “We are all going to have to work together, and we’ll have to come up with some really innovative ways to help.”

Practicing mindfulness with an app may improve children’s mental health

Many studies have found that practicing mindfulness — defined as cultivating an open-minded attention to the present moment — has benefits for children. Children who receive mindfulness training at school have demonstrated improvements in attention and behavior, as well as greater mental health.

When the Covid-19 pandemic began in 2020, sending millions of students home from school, a group of MIT researchers wondered if remote, app-based mindfulness practices could offer similar benefits. In a study conducted during 2020 and 2021, they report that children who used a mindfulness app at home for 40 days showed improvements in several aspects of mental health, including reductions in stress and negative emotions such as loneliness and fear.

The findings suggest that remote, app-based mindfulness interventions, which could potentially reach a larger number of children than school-based approaches, could offer mental health benefits, the researchers say.

“There is growing and compelling scientific evidence that mindfulness can support mental well-being and promote mental health in diverse children and adults,” says John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology, a professor of brain and cognitive sciences at MIT, and the senior author of the study, which appears this week in the journal Mindfulness.

Researchers in Gabrieli’s lab also recently reported that children who showed higher levels of mindfulness were more emotionally resilient to the negative impacts of the Covid-19 pandemic.

“To some extent, the impact of Covid is out of your control as an individual, but your ability to respond to it and to interpret it may be something that mindfulness can help with,” says MIT graduate student Isaac Treves, who is the lead author of both studies.

Pandemic resilience

After the pandemic began in early 2020, Gabrieli’s lab decided to investigate the effects of mindfulness on children who had to leave school and isolate from friends. In a study that appeared in the journal PLOS One in July, the researchers explored whether mindfulness could boost children’s resilience to negative emotions that the pandemic generated, such as frustration and loneliness.

Working with students between 8 and 10 years old, the researchers measured the children’s mindfulness using a standardized assessment that captures their tendency to blame themselves, ruminate on negative thoughts, and suppress their feelings.

The researchers also asked the children questions about how much the pandemic had affected different aspects of their lives, as well as questions designed to assess their levels of anxiety, depression, stress, and negative emotions such as worry or fear.

Among children who showed the highest levels of mindfulness, there was no correlation between how much the pandemic impacted them and negative feelings. However, in children with lower levels of mindfulness, there was a strong correlation between Covid-19 impact and negative emotions.
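Statistically, the finding above is a moderation effect: the correlation between pandemic impact and negative emotion appears in the low-mindfulness subgroup but not in the high-mindfulness one. A minimal sketch of that subgroup comparison, using synthetic scores invented purely for illustration (not the study's data):

```python
import random
import statistics

def pearson_r(xs, ys):
    """Plain Pearson correlation, no external dependencies."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)
n = 80
pandemic_impact = [random.uniform(0, 10) for _ in range(n)]

# Hypothetical pattern mirroring the finding: in low-mindfulness children,
# negative emotion tracks pandemic impact; in high-mindfulness children it doesn't.
low_mindful_emotion = [0.8 * x + random.gauss(0, 1) for x in pandemic_impact]
high_mindful_emotion = [random.gauss(5, 1) for _ in pandemic_impact]

r_low = pearson_r(pandemic_impact, low_mindful_emotion)   # strong correlation
r_high = pearson_r(pandemic_impact, high_mindful_emotion)  # near zero
```

In this toy setup `r_low` comes out large and `r_high` near zero, which is the shape of the dissociation the study reports.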

The children in this study did not receive any kind of mindfulness training, so their responses reflect their tendency to be mindful at the time they answered the researchers’ questions. The findings suggest that children with higher levels of mindfulness were less likely to get caught up in negative emotions or blame themselves for the negative things they experienced during the pandemic.

“This paper was our best attempt to look at mindfulness specifically in the context of Covid and to think about what are the factors that may help children adapt to the changing circumstances,” Treves says. “The takeaway is not that we shouldn’t worry about pandemics because we can just help the kids with mindfulness. People are able to be resilient when they’re in systems that support them, and in families that support them.”

Remote interventions

The researchers then built on that study by exploring whether a remote, app-based intervention could effectively increase mindfulness and improve mental health. Researchers in Gabrieli’s lab have previously shown that students who received mindfulness training in middle school showed better academic performance, received fewer suspensions, and reported less stress than those who did not receive the training.

For the new study, reported today in Mindfulness, the researchers worked with the same children they had recruited for the PLOS One study and divided them into three groups of about 80 students each.

One group received mindfulness training through an app created by Inner Explorer, a nonprofit that also develops school-based meditation programs. Those children were instructed to engage in mindfulness training five days a week, including relaxation exercises, breathing exercises, and other forms of meditation.

For comparison purposes, the other two groups were asked to use an app for listening to audiobooks (not related to mindfulness). One group was simply given the audiobook app and encouraged to listen at their own pace, while the other group also had weekly one-on-one virtual meetings with a facilitator.

At the beginning and end of the study, the researchers evaluated each participant’s levels of mindfulness, along with measures of mental health such as anxiety, stress, and depression. They found that in all three groups, mental health improved over the course of the eight-week study, and each group also showed increases in mindfulness and prosociality (engaging in helpful behavior).

Additionally, children in the mindfulness group showed some improvements that the other groups didn’t, including a larger decrease in stress. Parents of children in the mindfulness group also reported that their children experienced greater decreases in negative emotions such as anger and sadness. Students who practiced the mindfulness exercises on the most days showed the greatest benefits.

The researchers were surprised to find no significant differences in measures of anxiety and depression between the mindfulness and audiobook groups. They hypothesize that this may be because students who interacted with a facilitator in one of the audiobook groups also experienced mental health benefits from that contact.

Overall, the findings suggest that there is value in remote, app-based mindfulness training, especially if children engage with the exercises consistently and receive encouragement from parents, the researchers say. Apps also offer the ability to reach a larger number of children than school-based programs, which require more training and resources.

“There are a lot of great ways to incorporate mindfulness training into schools, but in general, it’s more resource-intensive than having people download an app. So, in terms of pure scalability and cost-effectiveness, apps are useful,” Treves says. “Another good thing about apps is that the kids can go at their own pace and repeat practices that they like, so there’s more freedom of choice.”

The research was funded by the Chan Zuckerberg Initiative as part of the Reach Every Reader Project, the National Institutes of Health, and the National Science Foundation.

Re-imagining our theories of language

Over a decade ago, the neuroscientist Ev Fedorenko asked 48 English speakers to complete tasks like reading sentences, recalling information, solving math problems, and listening to music. As they did this, she scanned their brains using functional magnetic resonance imaging to see which circuits were activated. If, as linguists have proposed for decades, language is connected to thought in the human brain, then the language processing regions would be activated even during nonlinguistic tasks.

Fedorenko’s experiment, published in 2011 in the Proceedings of the National Academy of Sciences, showed that the brain’s language regions do not respond during arithmetic, musical processing, general working memory, or other nonlinguistic tasks. Contrary to what many linguists have claimed, complex thought and language are separate things; one does not require the other. “We have this highly specialized place in the brain that doesn’t respond to other activities,” says Fedorenko, who is an associate professor in the Department of Brain and Cognitive Sciences (BCS) and the McGovern Institute for Brain Research. “It’s not true that thought critically needs language.”

The design of the experiment, using neuroscience to understand how language works, how it evolved, and its relation to other cognitive functions, is at the heart of Fedorenko’s research. She is part of a unique intellectual triad at MIT’s Department of BCS, along with her colleagues Roger Levy and Ted Gibson. (Gibson and Fedorenko have been married since 2007). Together they have engaged in a years-long collaboration and built a significant body of research focused on some of the biggest questions in linguistics and human cognition. While working in three independent labs — EvLab, TedLab, and the Computational Psycholinguistics Lab — the researchers are motivated by a shared fascination with the human mind and how language works in the brain. “We have a great deal of interaction and collaboration,” says Levy. “It’s a very broadly collaborative, intellectually rich and diverse landscape.”

Using combinations of computational modeling, psycholinguistic experimentation, behavioral data, brain imaging, and large naturalistic language datasets, the researchers also share an answer to a fundamental question: What is the purpose of language? Of all the possible answers to why we have language, perhaps the simplest and most obvious is communication. “Believe it or not,” says Ted Gibson, “that is not the standard answer.”

Gibson first came to MIT in 1993 and joined the faculty of the Linguistics Department in 1997. Recalling the experience today, he describes it as frustrating. The field of linguistics at that time was dominated by the ideas of Noam Chomsky, one of the founders of MIT’s Graduate Program in Linguistics, who has been called the father of modern linguistics. Chomsky’s “nativist” theories posited that the purpose of language is the articulation of thought and that language capacity is built in before any learning takes place. But Gibson, with his training in math and computer science, felt that these ideas had never been satisfactorily tested. He believed that answering many outstanding questions about language required quantitative research, a departure from standard linguistic methodology. “There’s no reason to rely only on you and your friends, which is how linguistics has worked,” Gibson says. “The data you can get can be much broader if you crowdsource lots of people using experimental methods.” Chomsky’s ascendancy in linguistics presented Gibson with what he saw as a challenge and an opportunity. “I felt like I had to figure it out in detail and see if there was truth in these claims,” he says.

Three decades after he first joined MIT, Gibson believes that the collaborative research at BCS is persuasive and provocative, pointing to new ways of thinking about human culture and cognition. “Now we’re at a stage where it is not just arguments against. We have a lot of positive stuff saying what language is,” he explains. Levy adds: “I would say all three of us are of the view that communication plays a very important role in language learning and processing, but also in the structure of language itself.”

Levy points out that the three researchers completed PhDs in different subjects: Fedorenko in neuroscience, Gibson in computer science, Levy in linguistics. Yet for years before their paths finally converged at MIT, their shared interests in quantitative linguistic research led them to follow each other’s work closely and be influenced by it. The first collaboration between the three was in 2005 and focused on language processing in Russian relative clauses. Around that time, Gibson recalls, Levy was presenting what he describes as “lovely work” that was instrumental in helping him to understand the links between language structure and communication. “Communicative pressures drive the structures,” says Gibson. “Roger was crucial for that. He was the one helping me think about those things a long time ago.”

Levy’s lab is focused on the intersection of artificial intelligence, linguistics, and psychology, using natural language processing tools. “I try to use the tools that are afforded by mathematical and computer science approaches to language to formalize scientific hypotheses about language and the human mind and test those hypotheses,” he says.

Levy points to ongoing research between him and Gibson focused on language comprehension as an example of the benefits of collaboration. “One of the big questions is: When language understanding fails, why does it fail?” Together, the researchers have applied the concept of a “noisy channel,” first developed by the information theorist Claude Shannon in the 1950s, which says that information or messages are corrupted in transmission. “Language understanding unfolds over time, involving an ongoing integration of the past with the present,” says Levy. “Memory itself is an imperfect channel conveying the past from our brain a moment ago to our brain now in order to support successful language understanding.” Indeed, the richness of our linguistic environment, the experience of hundreds of millions of words by adulthood, may create a kind of statistical knowledge guiding our expectations, beliefs, predictions, and interpretations of linguistic meaning. “Statistical knowledge of language actually interacts with the constraints of our memory,” says Levy. “Our experience shapes our memory for language itself.”
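Levy's noisy-channel account can be made concrete with a toy Bayesian sketch: the comprehender recovers the intended word by combining a prior over words with a channel model of how messages get corrupted in transmission. The lexicon, prior probabilities, and edit-distance noise model below are invented for illustration, not drawn from the researchers' actual models:

```python
from functools import lru_cache

# Toy lexicon with hypothetical prior probabilities (word frequencies).
PRIOR = {"the": 0.30, "cat": 0.05, "sat": 0.04, "hat": 0.02, "cap": 0.01}

def edit_distance(a, b):
    """Levenshtein distance via memoized recursion."""
    @lru_cache(maxsize=None)
    def d(i, j):
        if i == 0: return j
        if j == 0: return i
        cost = 0 if a[i - 1] == b[j - 1] else 1
        return min(d(i - 1, j) + 1, d(i, j - 1) + 1, d(i - 1, j - 1) + cost)
    return d(len(a), len(b))

def likelihood(observed, intended, noise=0.1):
    """Channel model: each edit is an independent corruption event."""
    return noise ** edit_distance(observed, intended)

def recover(observed):
    """Bayes' rule: P(intended | observed) is proportional to
    P(intended) * P(observed | intended)."""
    scores = {w: PRIOR[w] * likelihood(observed, w) for w in PRIOR}
    total = sum(scores.values())
    return {w: s / total for w, s in scores.items()}

# A corrupted input like "zat" is pulled toward nearby, high-prior words.
posterior = recover("zat")
best = max(posterior, key=posterior.get)
```

The design choice that matters here is the tradeoff: a frequent word can win even at a slightly larger edit distance, which is how noisy-channel models explain why listeners "hear" plausible sentences in place of implausible ones.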

All three researchers say they share the belief that by following the evidence, they will eventually discover an even bigger and more complete story about language. “That’s how science goes,” says Fedorenko. “Ted trained me, along with Nancy Kanwisher, and both Ted and Roger are very data-driven. If the data is not giving you the answer you thought, you don’t just keep pushing your story. You think of new hypotheses. Almost everything I have done has been like that.” At times, Fedorenko’s research into parts of the brain’s language system has surprised her and forced her to abandon her hypotheses. “In a certain project I came in with a prior idea that there would be some separation between parts that cared about combinatorics versus word meanings,” she says, “but every little bit of the language system is sensitive to both. At some point, I was like, this is what the data is telling us, and we have to roll with it.”

The researchers’ work pointing to communication as the constitutive purpose of language opens new possibilities for probing and studying non-human communication. The standard claim is that human language has a drastically more extensive lexicon than animal communication systems, which are also said to lack grammar. “But many times, we don’t even know what other species are communicating,” says Gibson. “We say they can’t communicate, but we don’t know. We don’t speak their language.” Fedorenko hopes that more opportunities for cross-species linguistic comparisons will open up. “Understanding where things are similar and where things diverge would be super useful,” she says.

Meanwhile, the potential applications of language research are far-reaching. One of Levy’s current projects focuses on how people read, using machine learning algorithms informed by the psychology of eye movements to develop proficiency tests. By tracking the eye movements of people who speak English as a second language while they read English texts, Levy can predict how proficient they are, an approach that could one day replace the Test of English as a Foreign Language. “It’s an implicit measure of language rather than a much more game-able test,” he says.
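As a rough illustration of the idea, and not Levy's actual pipeline, one could fit a simple regression from eye-movement features to a proficiency score. The features (fixation duration, re-reading rate, skip rate) and the synthetic data are hypothetical stand-ins for real labeled recordings:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical eye-movement features per reader. The synthetic "proficiency"
# score is generated so that longer fixations and more re-reading mean lower
# proficiency -- a stand-in for real test-score labels.
n_readers = 200
fixation_ms = rng.normal(250, 40, n_readers)          # mean fixation duration
regression_rate = rng.uniform(0.05, 0.35, n_readers)  # rate of re-reading
skip_rate = rng.uniform(0.1, 0.5, n_readers)          # rate of skipped words
proficiency = (100 - 0.1 * fixation_ms - 60 * regression_rate
               + 20 * skip_rate + rng.normal(0, 2, n_readers))

# Fit a linear model by ordinary least squares: features -> proficiency.
X = np.column_stack([np.ones(n_readers), fixation_ms, regression_rate, skip_rate])
coef, *_ = np.linalg.lstsq(X, proficiency, rcond=None)

# Goodness of fit on the training data.
predicted = X @ coef
r2 = 1 - np.sum((proficiency - predicted) ** 2) / np.sum(
    (proficiency - np.mean(proficiency)) ** 2)
```

Even this linear sketch recovers the intended signs (longer fixations predict lower proficiency); a real system would use richer features, held-out evaluation, and more flexible models.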

The researchers agree that some of the most exciting opportunities in the neuroscience of language lie with large language models, which open the door to new questions and new discoveries. “In the neuroscience of language, the kind of stories that we’ve been able to tell about how the brain does language were limited to verbal, descriptive hypotheses,” says Fedorenko. Computationally implemented models are now remarkably good at language and show some degree of alignment with the brain, she adds. Researchers can now ask questions such as: What are the actual computations that cells are doing to get meaning from strings of words? “You can now use these models as tools to get insights into how humans might be processing language,” she says. “And you can take the models apart in ways you can’t take apart the brain.”
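One common way to use a language model as such a tool is to compute surprisal, the negative log probability of each word in its context, and compare it with brain responses. A self-contained sketch with a toy bigram model, a hand-built stand-in for the large language models the researchers actually use:

```python
import math
from collections import Counter

# A tiny corpus standing in for real training text (hypothetical).
corpus = "we were sitting on the couch . we were reading on the couch .".split()

bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)
vocab = len(unigrams)

def surprisal(prev, word):
    """-log2 P(word | prev) with add-one smoothing, in bits."""
    p = (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab)
    return -math.log2(p)

# A predictable continuation carries less surprisal than an unexpected one,
# paralleling the finding that harder-to-predict input drives the language
# network harder.
easy = surprisal("the", "couch")    # seen continuation: low surprisal
hard = surprisal("the", "reading")  # unseen continuation: high surprisal
```

In practice researchers extract per-word surprisal from much larger models and regress brain activity against it; the arithmetic, though, is exactly this.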

New Spanish-language neuroscience podcast flourishes in third season

A Spanish version of this news story can be found here. (Una versión en español de esta noticia se puede encontrar aquí.)

___

Sylvia Abente, a clinical neurologist at the Universidad Nacional de Asunción in Paraguay, investigates the range of symptoms that characterize epilepsy. She works with Indigenous peoples in Paraguay, and her fluency in Spanish and Guaraní, the two official languages of Paraguay, allows her to help patients find the words to describe their epilepsy symptoms so she can treat them.

Juan Carlos Caicedo Mera, a neuroscientist at the Universidad Externado de Colombia, uses rodent models to research the neurobiological effects of early life stress. He has been instrumental in raising public awareness about the biological and behavioral effects of early-age physical punishment, leading to policy changes aimed at reducing its prevalence as a cultural practice in Colombia.

Jessica Chomik-Morales (right) interviews Pedro Maldonado at the Biomedical Neuroscience Institute of Chile at the University of Chile. Photo: Jessica Chomik-Morales

Those are just two of the 33 neuroscientists in seven Latin American countries whom Jessica Chomik-Morales interviewed over 37 days for the expansive third season of her Spanish-language podcast, “Mi Última Neurona” (“My Last Neuron”), which launches Sept. 18 at 5 p.m. on YouTube. Each episode runs between 45 and 90 minutes.

“I wanted to shine a spotlight on their stories to dispel the misconception that excellent science can only be done in the United States and Europe,” says Chomik-Morales, “or that it isn’t being produced in South America because of financial and other barriers.”

A first-generation college graduate who grew up in Asunción, Paraguay, and Boca Raton, Florida, Chomik-Morales is now a postbaccalaureate research scholar at MIT. Here she works with Laura Schulz, professor of cognitive science, and Nancy Kanwisher, McGovern Institute investigator and the Walter A. Rosenblith Professor of Cognitive Neuroscience, using functional brain imaging to investigate how the brain explains the past, predicts the future, and intervenes on the present through causal reasoning.

“The podcast is for the general public and is suitable for all ages,” she says. “It explains neuroscience in a digestible way to inspire young people that they, too, can become scientists and to show the rich variety of research being done in listeners’ home countries.”

Journey of a lifetime

“Mi Última Neurona” began as an idea in 2021 and grew rapidly into a collection of conversations with prominent Hispanic scientists, including L. Rafael Reif, a Venezuelan-American electrical engineer and the 17th president of MIT.

Jessica Chomik-Morales (left) interviews the 17th president of MIT, L. Rafael Reif (right), for her podcast while Héctor De Jesús-Cortés (center) adjusts the microphone. Photo: Steph Stevens

Building on the professional relationships she established in seasons one and two, Chomik-Morales broadened her vision and assembled a list of potential guests in Latin America for season three. With research help from her scientific advisor, Héctor De Jesús-Cortés, an MIT postdoc from Puerto Rico, and financial support from the McGovern Institute, the Picower Institute for Learning and Memory, the Department of Brain and Cognitive Sciences, and MIT International Science and Technology Initiatives, Chomik-Morales lined up interviews with scientists in Mexico, Peru, Colombia, Chile, Argentina, Uruguay, and Paraguay during the summer of 2023.

Traveling by plane every four or five days, and garnering further referrals by word of mouth from one leg of the trip to the next, Chomik-Morales logged over 10,000 miles and collected 33 stories for her third season. The scientists’ areas of specialization run the gamut, from the social aspects of sleep/wake cycles to mood and personality disorders, and from linguistics and language in the brain to computational modeling as a research tool.

“If somebody studies depression and anxiety, I want to touch on their opinions regarding various therapies, including drugs, even microdosing with hallucinogens,” says Chomik-Morales. “These are the things people are talking about.” She’s not afraid to broach sensitive topics, like the relationship between hormones and sexual orientation, because “it’s important that people listen to experts talk about these things,” she says.

The tone of the interviews ranges from casual (“the researcher and I are like friends,” she says) to pedagogic (“professor to student”). The only constants are accessibility, with technical terms avoided, and the opening and closing questions of each interview. To start: “How did you get here? What drew you to neuroscience?” To end: “What advice would you give a young Latino student who is interested in STEM?”

She lets her listeners’ frame of reference be her guide. “If I didn’t understand something or thought it could be explained better, I’d say, ‘Let’s pause. What does this word mean?’” even if she knew the definition herself. She gives the example of MEG (magnetoencephalography), the measurement of the magnetic field generated by the electrical activity of neurons, which is usually combined with magnetic resonance imaging to produce magnetic source imaging. To bring the concept down to earth, she’d ask: “How does it work? Does this kind of scan hurt the patient?”

Paving the way for global networking

Chomik-Morales’s equipment was spare: three Yeti microphones and a Canon video camera connected to her laptop. The interviews took place in classrooms, university offices, researchers’ homes, and even outdoors, since no soundproof studios were available. She has been working with sound engineer David Samuel Torres, from Puerto Rico, to clean up the audio.

No technological limitations could obscure the significance of the project for the participating scientists.

Jessica Chomik-Morales (left) interviews Josefina Cruzat (right) at Adolfo Ibañez University in Chile. Photo: Jessica Chomik-Morales

“‘Mi Última Neurona’ showcases our diverse expertise on a global stage, providing a more accurate portrayal of the scientific landscape in Latin America,” says Constanza Baquedano, who is from Chile. “It’s a step toward creating a more inclusive representation in science.” Baquedano is an assistant professor of psychology at Universidad Adolfo Ibáñez, where she uses electrophysiology and electroencephalographic and behavioral measurements to investigate meditation and other contemplative states. “I was eager to be a part of a project that aimed to bring recognition to our shared experiences as Latin American women in the field of neuroscience.”

“Understanding the challenges and opportunities of neuroscientists working in Latin America is vital,” says Agustín Ibáñez, professor and director of the Latin American Brain Health Institute (BrainLat) at Universidad Adolfo Ibáñez in Chile. “This region, characterized by significant inequalities affecting brain health, also presents unique challenges in the field of neuroscience,” says Ibáñez, who is primarily interested in the intersection of social, cognitive, and affective neuroscience. “By focusing on Latin America, the podcast brings forth the narratives that often remain untold in the mainstream. That bridges gaps and paves the way for global networking.”

For her part, Chomik-Morales is hopeful that her podcast will generate a strong following in Latin America. “I am so grateful for the wonderful sponsorship from MIT,” says Chomik-Morales. “This is the most fulfilling thing I’ve ever done.”

Unpacking auditory hallucinations

Tamar Regev, the 2022–2024 Poitras Center Postdoctoral Fellow, has identified a new neural system that may shed light on the auditory hallucinations experienced by patients diagnosed with schizophrenia.

Tamar Regev is the 2022–2024 Poitras Center Postdoctoral Fellow in Ev Fedorenko’s lab at the McGovern Institute. Photo: Steph Stevens

“The system appears integral to prosody processing,” says Regev. “‘Prosody’ can be described as the melody of speech — auditory gestures that we use when we’re speaking to signal linguistic, emotional, and social information.” The prosody processing system Regev has uncovered is distinct from the lower-level auditory speech processing system as well as the higher-level language processing system. Regev aims to understand how the prosody system, along with the speech and language processing systems, may be impaired in neuropsychiatric disorders such as schizophrenia, especially when experienced with auditory hallucinations in the form of speech.

“Knowing which neural systems are affected by schizophrenia can lay the groundwork for future research into interventions that target the mechanisms underlying symptoms such as hallucinations,” says Regev. Passionate about bridging gaps between disciplines, she is collaborating with Ann Shinn, MD, MPH, of McLean Hospital’s Schizophrenia and Bipolar Disorder Research Program.

Regev’s graduate work at the Hebrew University of Jerusalem focused on exploring the auditory system with electroencephalography (EEG), which measures electrical activity in the brain using small electrodes attached to the scalp. She came to MIT to study under Evelina Fedorenko, a world leader in researching the cognitive and neural mechanisms underlying language processing. With Fedorenko she has learned to use functional magnetic resonance imaging (fMRI), which reveals the brain’s functional anatomy by measuring small changes in blood flow that occur with brain activity.

“I hope my research will lead to a better understanding of the neural architectures that underlie these disorders — and eventually help us as a society to better understand and accept special populations.” – Tamar Regev

“EEG has very good temporal resolution but poor spatial resolution, while fMRI provides a map of the brain showing where neural signals are coming from,” says Regev. “With fMRI I can connect my work on the auditory system with that on the language system.”

Regev developed a unique fMRI paradigm to do that. While her human subjects are in the scanner, she compares brain responses to speech with expressive prosody versus flat prosody to pinpoint the role of the prosody system among the auditory, speech, and language regions. She plans to apply her findings to a rich data set drawn from fMRI studies that Fedorenko and Shinn began a few years ago to investigate the neural basis of auditory hallucinations in patients with schizophrenia and bipolar disorder. Regev is exploring how the neural architecture may differ between control subjects and patients, between patients with and without auditory hallucinations, and between those with schizophrenia and those with bipolar disorder.

“This is the first time these questions are being asked using the individual-subject approach developed in the Fedorenko lab,” says Regev. The approach provides superior sensitivity, functional resolution, interpretability, and versatility compared with the group analyses of the past. “I hope my research will lead to a better understanding of the neural architectures that underlie these disorders,” says Regev, “and eventually help us as a society to better understand and accept special populations.”

Using the tools of neuroscience to personalize medicine

Graduate student Sadie Zacharek. Photo: Steph Stevens

Through summer internships as an undergraduate studying neuroscience at the University of Notre Dame, Sadie Zacharek developed interests ranging from neuroimaging to developmental psychopathologies, and from basic-science research to clinical translation. When she interviewed with John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology and Cognitive Neuroscience, for a position in his lab as a graduate fellow, everything came together.

“The brain provides a window not only into dysfunction but also into response to treatment,” she says. “John and I both wanted to explore how we might use neuroimaging as a step toward personalized medicine.”

Zacharek joined the Gabrieli lab in 2020 and currently holds the Sheldon and Janet Razin ’59 Fellowship for 2023–2024. In the Gabrieli lab, she has been designing and helping launch studies on the neural mechanisms driving childhood depression and social anxiety disorder, with the aim of developing strategies to predict which treatments will be most effective for individual patients.

Helping children and adults

“Depression in children is hugely understudied,” says Zacharek. “Most of the research has focused on adult and adolescent depression.” But the clinical presentation differs in the two groups, she says. “In children, irritability can be the primary presenting symptom rather than melancholy.” To get to the root of childhood depression, she is exploring both the brain basis of the disorder and how the parent-child relationship might influence symptoms. “Parents help children develop their emotion-regulation skills,” she says. “Knowing the underlying mechanisms could, in family-focused therapy, help them turn a ‘downward spiral’ toward irritability into an ‘upward spiral’ away from it.”

The studies she is conducting include functional magnetic resonance imaging (fMRI) of children to explore their brain responses to positive and negative stimuli, fMRI of both the child and parent to compare maps of their brains’ functional connectivity, and magnetic resonance spectroscopy to explore the neurochemical environment of both, including quantities of neurometabolites that indicate inflammation (higher levels have been found to correlate with depressive pathology).

“If we could find a normative range for neurochemicals and then see how far someone has deviated in depression, or a neural signature of elevated activity in a brain region, that could serve as a biomarker for future interventions,” she says. “Such a biomarker would be especially relevant for children given that they are less able to articulately convey their symptoms or internal experience.”

“The brain provides a window not only into dysfunction but also into response to treatment.” – Sadie Zacharek

Social anxiety disorder is a chronic and disabling condition that affects about 7.1 percent of U.S. adults. Treatment usually involves cognitive behavior therapy (CBT) and then, if there is limited response, the addition of a selective serotonin reuptake inhibitor (SSRI) as an anxiolytic.

But what if research could reveal the key neurocircuitry of social anxiety disorder as well as changes associated with treatment? That could open the door to predicting treatment outcome.

Zacharek is collecting neuroimaging data, as well as clinical assessments, from participants. Participants diagnosed with social anxiety disorder will then undergo 12 weeks of group CBT, followed by more data collection; those who do not benefit from the group CBT will then receive 12 weeks of individual CBT plus an SSRI. The results from those two time points will help determine the best treatment for each person.

“We hope to build a predictive model that could enable clinicians to scan a new patient and select the optimal treatment,” says Zacharek. “John’s many long-standing relationships with clinicians in this area make all of these translational studies possible.”