Do we only use 10 percent of our brain?

Movies like “Limitless” and “Lucy” play on the notion that humans use only 10 percent of their brains—and those who unlock a higher percentage wield powers like infinite memory or telekinesis. It’s enticing to think that so much of the brain remains untapped and is ripe for boosting human potential.

But the idea that we use 10 percent of our brain is 100 percent a myth.

In fact, scientists believe that we use our entire brain every day. Mila Halgren is a graduate student in the lab of Mark Harnett, an associate professor of brain and cognitive sciences and an investigator at the McGovern Institute. The Harnett lab studies the computational power of neurons, that is, how neural networks rapidly process massive amounts of information.

“All of our brain is constantly in use and consumes a tremendous amount of energy,” Halgren says. “Despite making up only two percent of our body weight, it devours 20 percent of our calories.” This doesn’t appear to change significantly with different tasks, from typing on a computer to doing yoga. “Even while we sleep, our entire brain remains intensely active.”

When did this myth take root?

Mila Halgren is a PhD student in MIT’s Department of Brain and Cognitive Sciences. Photo: Mila Halgren

The myth is thought to have gained traction when scientists first began exploring the brain’s abilities but lacked the tools to capture its exact workings. In 1907, William James, a founder of American psychology, suggested in his book “The Energies of Men” that “we are making use of only a small part of our possible mental and physical resources.” This influential work likely sparked the idea that humans access a mere fraction of the brain—setting this common misconception ablaze.

Brainpower lore even suggests that Albert Einstein credited his genius to being able to access more than 10 percent of his brain. However, no such quote has been documented, and this too is perhaps a myth of cosmic proportions.

Halgren believes that there may be some fact backing this fiction. “People may think our brain is underutilized in the sense that some neurons fire very infrequently—once every few minutes or less. But this isn’t true of most neurons, some of which fire hundreds of times per second,” she says.

In the nascent years of neuroscience, scientists also argued that a large portion of the brain must be inactive because some people experience brain injuries and can still function at a high level, like the famous case of Phineas Gage. Halgren points to the brain’s remarkable plasticity—the reshaping of neural connections. “Entire brain hemispheres can be removed during early childhood and the rest of the brain will rewire and compensate for the loss. In other words, the brain will use 100 percent of what it has, but can make do with less depending on which structures are damaged.”

Is there a limit to the brain?

If we indeed use our entire brain, can humans tease out any problem? Or, are there enigmas in the world that we will never unravel?

“This is still in contention,” Halgren says. “There may be certain problems that the human brain is fundamentally unable to solve, like how a mouse will never understand chemistry and a chimpanzee can’t do calculus.”

Can we increase our brainpower?

The brain may have its limits, but there are ways to boost our cognitive prowess to ace that midterm or crank up productivity in the workplace. According to Halgren, “You can increase your brainpower, but there’s no ‘trick’ that will allow you to do so. Like any organ in your body, the brain works best with proper sleep, exercise, low stress, and a well-balanced diet.”

The truth is, we may never rearrange furniture with our minds or foresee which team will win the Super Bowl. The idea of a largely latent brain is draped in fantasy, but debunking this myth speaks to the immense growth of neuroscience over the years—and the allure of other misconceptions that scientists have yet to demystify.

The brain runs an internal simulation to keep track of time

Clocks, computers, and metronomes can keep time with exquisite precision. But even in the absence of an external timekeeper, we can track time on our own. We know when minutes or hours have elapsed, and we can maintain a rhythm when we dance, sing, or play music. Now, neuroscientists at the National Autonomous University of Mexico and MIT’s McGovern Institute have discovered one way the brain keeps a beat: It runs an internal simulation, mentally recreating the perception of an external rhythm and preparing an appropriately timed response.

The discovery, reported January 10, 2024, in the journal Science Advances, illustrates how animals can think about imaginary events and use an internal model to guide their interactions with the world. “It’s a real indication of mental states as an independent driver of behavior,” says neuroscientist Mehrdad Jazayeri, an investigator at the McGovern Institute and an associate professor of brain and cognitive sciences at MIT.

Predicting the future

Jazayeri teamed up with Victor de Lafuente, a neuroscientist at the National Autonomous University of Mexico, to investigate the brain’s time-keeping ability. De Lafuente, who led the study, says they were motivated by curiosity about how the brain makes predictions and prepares for future states of the world.

De Lafuente and his team used a visual metronome to teach monkeys a simple rhythm, showing them a circle that moved between two positions on a screen to set a steady tempo. Then the metronome stopped. After a variable and unpredictable pause, the monkeys were asked to indicate where the circle would be if the metronome had carried on.

Monkeys do well at this task, successfully keeping time after the metronome stops. After the waiting period, they are usually able to identify the expected position of the circle, which they communicate by reaching towards a touchscreen.

To find out how the animals were keeping track of the metronome’s rhythm, de Lafuente’s group monitored their brain activity. In several key brain regions, they found rhythmic patterns of activity that oscillated at the same frequency as the metronome. This occurred while the monkeys watched the metronome. More remarkably, it continued after the metronome had stopped.

“The animal is seeing things going and then things stop. What we find in the brain is the continuation of that process in the animal’s mind,” Jazayeri says. “An entire network is replicating what it was doing.”

That was true in the visual cortex, where clusters of neurons respond to stimuli in specific spots within the eyes’ field of view. One set of cells in the visual cortex fired when the metronome’s circle was on the left of the screen; another set fired when the dot was on the right. As a monkey followed the visual metronome, the researchers could see these cells’ activity alternating rhythmically, tracking the movement. When the metronome stopped, the back-and-forth neural activity continued, maintaining the rhythm. “Once the stimulus was no longer visible, they were seeing the stimulus within their minds,” de Lafuente says.

They found something similar in the brain’s motor cortex, where movements are prepared and executed. De Lafuente explains that the monkeys are motionless for most of their time-keeping task; only when they are asked to indicate where the metronome’s circle should be do they move a hand to touch the screen. But the motor cortex was engaged even before it was time to move. “Within their brains there is a signal that is switching from the left to the right,” he says. “So the monkeys are thinking ‘left, right, left, right’—even when they are not moving and the world is constant.”

While some scientists have proposed that the brain may have a central time-keeping mechanism, the team’s findings indicate that entire networks can be called on to track the passage of time. The monkeys’ model of the future was surprisingly explicit, de Lafuente says, representing specific sensory stimuli and plans for movement. “This offers a potential solution to mentally tracking the dynamics in the world, which is to basically think about them in terms of how they actually would have happened,” Jazayeri says.


How the brain responds to reward is linked to socioeconomic background

MIT neuroscientists have found that the brain’s sensitivity to rewarding experiences — a critical factor in motivation and attention — can be shaped by socioeconomic conditions.

In a study of 12- to 14-year-olds whose socioeconomic status (SES) varied widely, the researchers found that children from lower SES backgrounds showed less sensitivity to reward than those from more affluent backgrounds.

Using functional magnetic resonance imaging (fMRI), the research team measured brain activity as the children played a guessing game in which they earned extra money for each correct guess. When participants from higher SES backgrounds guessed correctly, a part of the brain called the striatum, which is linked to reward, lit up much more than in children from lower SES backgrounds.

The brain imaging results also coincided with behavioral differences in how participants from lower and higher SES backgrounds responded to correct guesses. The findings suggest that lower SES circumstances may prompt the brain to adapt to the environment by dampening its response to rewards, which are often scarcer in low SES environments.

“If you’re in a highly resourced environment, with many rewards available, your brain gets tuned in a certain way. If you’re in an environment in which rewards are more scarce, then your brain accommodates the environment in which you live. Instead of being overresponsive to rewards, it seems like these brains, on average, are less responsive, because probably their environment has been less consistent in the availability of rewards,” says John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology, a professor of brain and cognitive sciences, and a member of MIT’s McGovern Institute for Brain Research.

Gabrieli and Rachel Romeo, a former MIT postdoc who is now an assistant professor in the Department of Human Development and Quantitative Methodology at the University of Maryland, are the senior authors of the study. MIT postdoc Alexandra Decker is the lead author of the paper, which appears today in the Journal of Neuroscience.

Reward response

Previous research has shown that children from lower SES backgrounds tend to perform worse on tests of attention and memory, and they are more likely to experience depression and anxiety. However, until now, few studies have looked at the possible association between SES and reward sensitivity.

In the new study, the researchers focused on a part of the brain called the striatum, which plays a significant role in reward response and decision-making. Studies in people and animal models have shown that this region becomes highly active during rewarding experiences.

To investigate potential links between reward sensitivity, the striatum, and socioeconomic status, the researchers recruited more than 100 adolescents from a range of SES backgrounds, as measured by household income and how much education their parents received.

Each of the participants underwent fMRI scanning while they played a guessing game. The participants were shown a series of numbers between 1 and 9, and before each trial, they were asked to guess whether the next number would be greater than or less than 5. They were told that for each correct guess, they would earn an extra dollar, and for each incorrect guess, they would lose 50 cents.

Unbeknownst to the participants, the game was set up to control whether the guess would be correct or incorrect. This allowed the researchers to ensure that each participant had a similar experience, including periods of abundant rewards and periods of few rewards. Everyone ended up winning the same amount of money (in addition to a stipend that each participant received for taking part in the study).

Previous work has shown that the brain appears to track the rate of rewards available. When rewards are abundant, people or animals tend to respond more quickly because they don’t want to miss out on the many available rewards. The researchers saw that in this study as well: When participants were in a period when most of their responses were correct, they tended to respond more quickly.

“If your brain is telling you there’s a really high chance that you’re going to receive a reward in this environment, it’s going to motivate you to collect rewards, because if you don’t act, you’re missing out on a lot of rewards,” Decker says.
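As a rough illustration of this reward-rate account, an exponential moving average of recent outcomes can stand in for the brain’s running estimate of how rewarding the environment is, with faster simulated responses when the estimate is high. This is a toy sketch, not the study’s analysis; the update rule, learning rate, and response-time mapping are all assumptions made for illustration.

```python
# Toy illustration of reward-rate tracking: an exponential moving
# average of recent outcomes stands in for the brain's estimate of
# how rewarding the environment currently is.

def update_reward_rate(rate, outcome, alpha=0.2):
    """Exponentially weighted estimate of the recent reward rate.

    rate    -- current estimate in [0, 1]
    outcome -- 1 for a rewarded (correct) trial, 0 otherwise
    alpha   -- learning rate: how quickly the estimate tracks change
    """
    return rate + alpha * (outcome - rate)

def response_time(rate, fast=0.4, slow=1.2):
    """Hypothetical mapping: richer environments -> faster responses (seconds)."""
    return slow - (slow - fast) * rate

# A block of mostly-correct trials followed by mostly-incorrect ones.
outcomes = [1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0]
rate = 0.5
rates = []
for o in outcomes:
    rate = update_reward_rate(rate, o)
    rates.append(rate)

# The estimated rate is high after the rich block and low after the lean
# one, so simulated responses are faster early in the session and slower late.
rich_rt = response_time(rates[3])
lean_rt = response_time(rates[-1])
```

In this toy model the estimate climbs during the rich block and decays during the lean one, so simulated response times shorten and lengthen accordingly, mirroring the behavioral pattern described above.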

Brain scans showed that the degree of activation in the striatum appeared to track fluctuations in the rate of rewards across time, which the researchers think could act as a motivational signal that there are many rewards to collect. The striatum lit up more during periods in which rewards were abundant and less during periods in which rewards were scarce. However, this effect was less pronounced in the children from lower SES backgrounds, suggesting their brains were less attuned to fluctuations in the rate of reward over time.

The researchers also found that during periods of scarce rewards, participants tended to take longer to respond after a correct guess, another phenomenon that has been shown before. It’s unknown exactly why this happens, but two possible explanations are that people are savoring their reward or that they are pausing to update the reward rate. However, once again, this effect was less pronounced in the children from lower SES backgrounds — that is, they did not pause as long after a correct guess during the scarce-reward periods.

“There was a reduced response to reward, which is really striking. It may be that if you’re from a lower SES environment, you’re not as hopeful that the next response will gain similar benefits, because you may have a less reliable environment for earning rewards,” Gabrieli says. “It just points out the power of the environment. In these adolescents, it’s shaping their psychological and brain response to reward opportunity.”

Environmental effects

The fMRI scans performed during the study also revealed that children from lower SES backgrounds showed less activation in the striatum when they guessed correctly, suggesting that their brains have a dampened response to reward.

The researchers hypothesize that these differences in reward sensitivity may have evolved over time, in response to the children’s environments.

“Socioeconomic status is associated with the degree to which you experience rewards over the course of your lifetime,” Decker says. “So, it’s possible that receiving a lot of rewards perhaps reinforces behaviors that make you receive more rewards, and somehow this tunes the brain to be more responsive to rewards. Whereas if you are in an environment where you receive fewer rewards, your brain might become, over time, less attuned to them.”

The study also points out the value of recruiting study subjects from a range of SES backgrounds, which takes more effort but yields important results, the researchers say.

“Historically, many studies have involved the easiest people to recruit, who tend to be people who come from advantaged environments. If we don’t make efforts to recruit diverse pools of participants, we almost always end up with children and adults who come from high-income, high-education environments,” Gabrieli says. “Until recently, we did not realize that principles of brain development vary in relation to the environment in which one grows up, and there was very little evidence about the influence of SES.”

The research was funded by the William and Flora Hewlett Foundation and a Natural Sciences and Engineering Research Council of Canada Postdoctoral Fellowship.

Study reveals a universal pattern of brain wave frequencies

Throughout the brain’s cortex, neurons are arranged in six distinctive layers, which can be readily seen with a microscope. A team of MIT and Vanderbilt University neuroscientists has now found that these layers also show distinct patterns of electrical activity, which are consistent over many brain regions and across several animal species, including humans.

The researchers found that in the topmost layers, neuron activity is dominated by rapid oscillations known as gamma waves. In the deeper layers, slower oscillations called alpha and beta waves predominate. The universality of these patterns suggests that these oscillations are likely playing an important role across the brain, the researchers say.

“When you see something that consistent and ubiquitous across cortex, it’s playing a very fundamental role in what the cortex does,” says Earl Miller, the Picower Professor of Neuroscience, a member of MIT’s Picower Institute for Learning and Memory, and one of the senior authors of the new study.

Imbalances in how these oscillations interact with each other may be involved in brain disorders such as attention deficit hyperactivity disorder, the researchers say.

“Overly synchronous neural activity is known to play a role in epilepsy, and now we suspect that different pathologies of synchrony may contribute to many brain disorders, including disorders of perception, attention, memory, and motor control. In an orchestra, one instrument played out of synchrony with the rest can disrupt the coherence of the entire piece of music,” says Robert Desimone, director of MIT’s McGovern Institute for Brain Research and one of the senior authors of the study.

André Bastos, an assistant professor of psychology at Vanderbilt University, is also a senior author of the open-access paper, which appears today in Nature Neuroscience. The lead authors of the paper are MIT research scientist Diego Mendoza-Halliday and MIT postdoc Alex Major.

Layers of activity

The human brain contains billions of neurons, each of which has its own electrical firing patterns. Together, groups of neurons with similar patterns generate oscillations of electrical activity, or brain waves, which can have different frequencies. Miller’s lab has previously shown that high-frequency gamma rhythms are associated with encoding and retrieving sensory information, while low-frequency beta rhythms act as a control mechanism that determines which information is read out from working memory.

His lab has also found that in certain parts of the prefrontal cortex, different brain layers show distinctive patterns of oscillation: faster oscillation at the surface and slower oscillation in the deep layers. One study, led by Bastos when he was a postdoc in Miller’s lab, showed that as animals performed working memory tasks, lower-frequency rhythms generated in deeper layers regulated the higher-frequency gamma rhythms generated in the superficial layers.

In addition to working memory, the brain’s cortex is also the seat of thought, planning, and high-level processing of emotion and sensory information. Throughout the regions involved in these functions, neurons are arranged in six layers, and each layer has its own distinctive combination of cell types and connections with other brain areas.

“The cortex is organized anatomically into six layers, no matter whether you look at mice or humans or any mammalian species, and this pattern is present in all cortical areas within each species,” Mendoza-Halliday says. “Unfortunately, a lot of studies of brain activity have been ignoring those layers because when you record the activity of neurons, it’s been difficult to understand where they are in the context of those layers.”

In the new paper, the researchers wanted to explore whether the layered oscillation pattern they had seen in the prefrontal cortex is more widespread, occurring across different parts of the cortex and across species.

Using a combination of data acquired in Miller’s lab, Desimone’s lab, and labs from collaborators at Vanderbilt, the Netherlands Institute for Neuroscience, and the University of Western Ontario, the researchers were able to analyze 14 different areas of the cortex, from four mammalian species. This data included recordings of electrical activity from three human patients who had electrodes inserted in the brain as part of a surgical procedure they were undergoing.

Recording from individual cortical layers has been difficult in the past, because each layer is less than a millimeter thick, so it’s hard to know which layer an electrode is recording from. For this study, electrical activity was recorded using special electrodes that record from all of the layers at once, then feed the data into a new computational algorithm the authors designed, termed FLIP (frequency-based layer identification procedure). This algorithm can determine which layer each signal came from.
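The idea behind frequency-based layer identification can be sketched on synthetic data. This is an illustrative toy, not the published FLIP algorithm; the channel count, band edges, and crossover rule are all assumptions. Each simulated channel carries a gamma rhythm that weakens with depth and a beta rhythm that strengthens with depth, and the sketch finds the depth where the dominant band flips:

```python
import numpy as np

# Toy FLIP-style analysis: find the cortical depth at which the dominant
# oscillation flips from gamma (superficial) to alpha/beta (deep).

rng = np.random.default_rng(0)
fs = 1000                      # sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)    # 2 seconds of signal
n_channels = 16                # electrode contacts, ordered superficial -> deep

signals = []
for ch in range(n_channels):
    depth = ch / (n_channels - 1)                      # 0 = surface, 1 = deep
    gamma = (1 - depth) * np.sin(2 * np.pi * 60 * t)   # 60 Hz, strong at surface
    beta = depth * np.sin(2 * np.pi * 20 * t)          # 20 Hz, strong at depth
    signals.append(gamma + beta + 0.1 * rng.standard_normal(t.size))
signals = np.array(signals)

def band_power(x, lo, hi):
    """Total spectral power in the [lo, hi] Hz band, via the FFT."""
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2
    return psd[(freqs >= lo) & (freqs <= hi)].sum()

gamma_p = np.array([band_power(s, 50, 80) for s in signals])
beta_p = np.array([band_power(s, 10, 30) for s in signals])

# Crossover: first channel (depth) where alpha/beta power exceeds gamma power.
crossover = int(np.argmax(beta_p > gamma_p))
```

On this synthetic probe, superficial channels are gamma-dominated, deep channels are beta-dominated, and the crossover channel marks the boundary, which is the kind of signature the algorithm exploits to assign recordings to layers.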

“More recent technology allows recording of all layers of cortex simultaneously. This paints a broader perspective of microcircuitry and allowed us to observe this layered pattern,” Major says. “This work is exciting because it is both informative of a fundamental microcircuit pattern and provides a robust new technique for studying the brain. It doesn’t matter if the brain is performing a task or at rest and can be observed in as little as five to 10 seconds.”

Across all species, in each region studied, the researchers found the same layered activity pattern.

“We did a mass analysis of all the data to see if we could find the same pattern in all areas of the cortex, and voilà, it was everywhere. That was a real indication that what had previously been seen in a couple of areas was representing a fundamental mechanism across the cortex,” Mendoza-Halliday says.

Maintaining balance

The findings support a model that Miller’s lab has previously put forth, which proposes that the brain’s spatial organization helps it incorporate new information, carried by high-frequency oscillations, into existing memories and brain processes, which are maintained by low-frequency oscillations. As information passes from layer to layer, input can be incorporated as needed to help the brain perform particular tasks, such as following a new cookie recipe or remembering a phone number.

“The consequence of a laminar separation of these frequencies, as we observed, may be to allow superficial layers to represent external sensory information with faster frequencies, and for deep layers to represent internal cognitive states with slower frequencies,” Bastos says. “The high-level implication is that the cortex has multiple mechanisms involving both anatomy and oscillations to separate ‘external’ from ‘internal’ information.”

Under this theory, imbalances between high- and low-frequency oscillations can lead either to attention deficits such as ADHD, when the higher frequencies dominate and too much sensory information gets in, or to delusional disorders such as schizophrenia, when the low-frequency oscillations are too strong and not enough sensory information gets in.

“The proper balance between the top-down control signals and the bottom-up sensory signals is important for everything the cortex does,” Miller says. “When the balance goes awry, you get a wide variety of neuropsychiatric disorders.”

The researchers are now exploring whether measuring these oscillations could help to diagnose these types of disorders. They are also investigating whether rebalancing the oscillations could alter behavior — an approach that could one day be used to treat attention deficits or other neurological disorders, the researchers say.

The researchers also hope to work with other labs to characterize the layered oscillation patterns in more detail across different brain regions.

“Our hope is that with enough of that standardized reporting, we will start to see common patterns of activity across different areas or functions that might reveal a common mechanism for computation that can be used for motor outputs, for vision, for memory and attention, et cetera,” Mendoza-Halliday says.

The research was funded by the U.S. Office of Naval Research, the U.S. National Institutes of Health, the U.S. National Eye Institute, the U.S. National Institute of Mental Health, the Picower Institute, a Simons Center for the Social Brain Postdoctoral Fellowship, and a Canadian Institutes of Health Postdoctoral Fellowship.

Complex, unfamiliar sentences make the brain’s language network work harder

With help from an artificial language network, MIT neuroscientists have discovered what kind of sentences are most likely to fire up the brain’s key language processing centers.

The new study reveals that sentences that are more complex, either because of unusual grammar or unexpected meaning, generate stronger responses in these language processing centers. Sentences that are very straightforward barely engage these regions, and nonsensical sequences of words don’t do much for them either.

For example, the researchers found this brain network was most active when reading unusual sentences such as “Buy sell signals remains a particular,” taken from a publicly available language dataset called C4. However, it went quiet when reading something very straightforward, such as “We were sitting on the couch.”

“The input has to be language-like enough to engage the system,” says Evelina Fedorenko, Associate Professor of Neuroscience at MIT and a member of MIT’s McGovern Institute for Brain Research. “And then within that space, if things are really easy to process, then you don’t have much of a response. But if things get difficult, or surprising, if there’s an unusual construction or an unusual set of words that you’re maybe not very familiar with, then the network has to work harder.”

Fedorenko is the senior author of the study, which appears today in Nature Human Behaviour. MIT graduate student Greta Tuckute is the lead author of the paper.

Processing language

In this study, the researchers focused on the language-processing regions found in the left hemisphere of the brain, which include Broca’s area as well as other parts of the left frontal and temporal lobes.

“This language network is highly selective to language, but it’s been harder to actually figure out what is going on in these language regions,” Tuckute says. “We wanted to discover what kinds of sentences, what kinds of linguistic input, drive the left hemisphere language network.”

The researchers began by compiling a set of 1,000 sentences taken from a wide variety of sources — fiction, transcriptions of spoken words, web text, and scientific articles, among many others.

Five human participants read each of the sentences while the researchers measured their language network activity using functional magnetic resonance imaging (fMRI). The researchers then fed those same 1,000 sentences into a large language model — a model similar to ChatGPT, which learns to generate and understand language from predicting the next word in huge amounts of text — and measured the activation patterns of the model in response to each sentence.

Once they had all of those data, the researchers trained a mapping model, known as an “encoding model,” which relates the activation patterns seen in the human brain with those observed in the artificial language model. Once trained, the model could predict how the human language network would respond to any new sentence based on how the artificial language network responded to these 1,000 sentences.

The researchers then used the encoding model to identify 500 new sentences that would generate maximal activity in the human brain (the “drive” sentences), as well as sentences that would elicit minimal activity in the brain’s language network (the “suppress” sentences).
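The encoding-model pipeline can be sketched with synthetic data. This is a minimal illustration, not the study’s code; the feature dimensions, ridge penalty, and noise level are arbitrary choices. Ridge regression maps “language-model activations” to a measured response, and the fitted map then ranks unseen sentences as candidate “drive” or “suppress” stimuli:

```python
import numpy as np

# Minimal encoding-model sketch on synthetic data: ridge regression maps
# language-model activations (features) to a scalar "brain response", then
# predicted responses rank new sentences as drive vs. suppress candidates.

rng = np.random.default_rng(1)
n_sentences, n_features = 1000, 50

X = rng.standard_normal((n_sentences, n_features))       # model activations
w_true = rng.standard_normal(n_features)                 # unknown true mapping
y = X @ w_true + 0.5 * rng.standard_normal(n_sentences)  # noisy measured response

# Closed-form ridge regression: w = (X^T X + lam * I)^-1 X^T y
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

# Score unseen "sentences" with the fitted model and pick the extremes.
X_new = rng.standard_normal((500, n_features))
pred = X_new @ w
drive = np.argsort(pred)[-5:]      # predicted to maximize the response
suppress = np.argsort(pred)[:5]    # predicted to minimize the response
```

The study’s crucial extra step, of course, is the closed loop: the selected sentences were then shown to new participants to test whether the predicted extremes held up in measured brain activity.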

In a group of three new human participants, the researchers found these new sentences did indeed drive and suppress brain activity as predicted.

“This ‘closed-loop’ modulation of brain activity during language processing is novel,” Tuckute says. “Our study shows that the model we’re using (that maps between language-model activations and brain responses) is accurate enough to do this. This is the first demonstration of this approach in brain areas implicated in higher-level cognition, such as the language network.”

Linguistic complexity

To figure out what made certain sentences drive activity more than others, the researchers analyzed the sentences based on 11 different linguistic properties, including grammaticality, plausibility, emotional valence (positive or negative), and how easy it is to visualize the sentence content.

For each of those properties, the researchers asked participants from crowd-sourcing platforms to rate the sentences. They also used a computational technique to quantify each sentence’s “surprisal,” or how unexpected it is relative to what typically occurs in language.

This analysis revealed that sentences with higher surprisal generate higher responses in the brain. This is consistent with previous studies showing people have more difficulty processing sentences with higher surprisal, the researchers say.
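Surprisal is conventionally defined as the negative log-probability of a word given its context: rare continuations carry high surprisal. The study used a neural language model for this; a toy bigram model, with made-up counts, shows the computation itself:

```python
import math

# Surprisal of a word is -log2 P(word | context), in bits. A toy bigram
# model (invented counts) stands in for the neural language model used
# in practice. Only bigrams present in the table should be queried here,
# since this toy model applies no smoothing for unseen pairs.

bigram_counts = {
    ("the", "dog"): 50,
    ("the", "committee"): 5,
    ("the", "jiffy"): 1,
}
context_total = sum(n for (ctx, _), n in bigram_counts.items() if ctx == "the")

def surprisal(context, word):
    """-log2 P(word | context): higher means more unexpected."""
    p = bigram_counts[(context, word)] / context_total
    return -math.log2(p)
```

Under these counts, “the dog” is nearly free to process while “the jiffy” costs several bits, which is the sense in which high-surprisal sentences demand more work from the language network.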

Another linguistic property that correlated with the language network’s responses was linguistic complexity, measured by how closely a sentence adheres to the rules of English grammar and how plausible it is, that is, how much sense its content makes apart from the grammar.

Sentences at either end of the spectrum — either extremely simple, or so complex that they make no sense at all — evoked very little activation in the language network. The largest responses came from sentences that make some sense but require work to figure them out, such as “Jiffy Lube of — of therapies, yes,” which came from the Corpus of Contemporary American English dataset.

“We found that the sentences that elicit the highest brain response have a weird grammatical thing and/or a weird meaning,” Fedorenko says. “There’s something slightly unusual about these sentences.”

The researchers now plan to see if they can extend these findings in speakers of languages other than English. They also hope to explore what type of stimuli may activate language processing regions in the brain’s right hemisphere.

The research was funded by an Amazon Fellowship from the Science Hub, an International Doctoral Fellowship from the American Association of University Women, the MIT-IBM Watson AI Lab, the National Institutes of Health, the McGovern Institute, the Simons Center for the Social Brain, and MIT’s Department of Brain and Cognitive Sciences.

K. Lisa Yang Postbaccalaureate Program names new scholars

Funded by philanthropist Lisa Yang, the K. Lisa Yang Postbaccalaureate Scholar Program provides two years of paid laboratory experience, mentorship, and education to recent college graduates from backgrounds underrepresented in neuroscience. This year, two young researchers in McGovern Institute labs, Joseph Itiat and Sam Merrow, have been named Yang postbac scholars.

Itiat moved to the United States from Nigeria in 2019 to pursue a degree in psychology and cognitive neuroscience at Temple University. Today, he is a Yang postbac in John Gabrieli’s lab studying the relationship between learning and value processes and their influence on future-oriented decision-making. Ultimately, Itiat hopes to develop models that map the underlying mechanisms driving these processes.

“Being African, with limited research experience and little representation in the domain of neuroscience research,” Itiat says, “I chose to pursue a postbaccalaureate research program to prepare me for a top graduate school and a career in cognitive neuroscience.”

Merrow first fell in love with science while working at the Barrow Neurological Institute in Arizona during high school. After graduating from Simmons University in Boston, Massachusetts, Merrow joined Guoping Feng’s lab as a Yang postbac to pursue research on glial cells and brain disorders. “As a queer, nonbinary, LatinX person, I have not met anyone like me in my field, nor have I had role models that hold a similar identity to myself,” says Merrow.

“My dream is to one day become a professor, where I will be able to show others that science is for anyone.”

Previous Yang postbacs include Alex Negron, Zoe Pearce, Ajani Stewart, and Maya Taliaferro.

What does the future hold for generative AI?

Speaking at the “Generative AI: Shaping the Future” symposium on Nov. 28, the kickoff event of MIT’s Generative AI Week, keynote speaker and iRobot co-founder Rodney Brooks warned attendees against uncritically overestimating the capabilities of this emerging technology, which underpins increasingly powerful tools like OpenAI’s ChatGPT and Google’s Bard.

“Hype leads to hubris, and hubris leads to conceit, and conceit leads to failure,” cautioned Brooks, who is also a professor emeritus at MIT, a former director of the Computer Science and Artificial Intelligence Laboratory (CSAIL), and founder of Robust.AI.

“No one technology has ever surpassed everything else,” he added.

The symposium, which drew hundreds of attendees from academia and industry to the Institute’s Kresge Auditorium, was laced with messages of hope about the opportunities generative AI offers for making the world a better place, including through art and creativity, interspersed with cautionary tales about what could go wrong if these AI tools are not developed responsibly.

Generative AI describes machine-learning models that learn to generate new material resembling the data they were trained on. These models have exhibited some incredible capabilities, such as producing human-like creative writing, translating languages, generating functional computer code, and crafting realistic images from text prompts.

In her opening remarks to launch the symposium, MIT President Sally Kornbluth highlighted several projects faculty and students have undertaken to use generative AI to make a positive impact in the world. For example, the work of the Axim Collaborative, an online education initiative launched by MIT and Harvard, includes exploring the educational aspects of generative AI to help underserved students.

The Institute also recently announced seed grants for 27 interdisciplinary faculty research projects centered on how AI will transform people’s lives across society.

In hosting Generative AI Week, MIT hopes to not only showcase this type of innovation, but also generate “collaborative collisions” among attendees, Kornbluth said.

Collaboration involving academics, policymakers, and industry will be critical if we are to safely integrate a rapidly evolving technology like generative AI in ways that are humane and help humans solve problems, she told the audience.

“I honestly cannot think of a challenge more closely aligned with MIT’s mission. It is a profound responsibility, but I have every confidence that we can face it, if we face it head on and if we face it as a community,” she said.

While generative AI holds the potential to help solve some of the planet’s most pressing problems, the emergence of these powerful machine learning models has blurred the distinction between science fiction and reality, said CSAIL Director Daniela Rus in her opening remarks. It is no longer a question of whether we can make machines that produce new content, she said, but how we can use these tools to enhance businesses and ensure sustainability. 

“Today, we will discuss the possibility of a future where generative AI does not just exist as a technological marvel, but stands as a source of hope and a force for good,” said Rus, who is also the Andrew and Erna Viterbi Professor in the Department of Electrical Engineering and Computer Science.

But before the discussion dove deeply into the capabilities of generative AI, attendees were first asked to ponder their humanity, as MIT Professor Joshua Bennett read an original poem.

Bennett, a professor in the MIT Literature Section and Distinguished Chair of the Humanities, was asked to write a poem about what it means to be human, and drew inspiration from his daughter, who was born three weeks ago.

The poem told of his experiences as a boy watching Star Trek with his father and touched on the importance of passing traditions down to the next generation.

In his keynote remarks, Brooks set out to unpack some of the deep, scientific questions surrounding generative AI, as well as explore what the technology can tell us about ourselves.

To begin, he sought to dispel some of the mystery swirling around generative AI tools like ChatGPT by explaining the basics of how this large language model works. ChatGPT, for instance, generates text one word at a time by determining what the next word should be in the context of what it has already written. While a human might write a story by thinking about entire phrases, ChatGPT only focuses on the next word, Brooks explained.

ChatGPT is built on GPT-3.5, a machine-learning model that has 175 billion parameters and was exposed to billions of pages of text on the web during training. (The newest iteration, GPT-4, is even larger.) The model learns correlations between words in this massive corpus of text and uses this knowledge to propose what word might come next when given a prompt.
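The one-word-at-a-time process Brooks described can be caricatured with a toy bigram model. This is a deliberately tiny, hypothetical sketch: real large language models learn these statistics across billions of parameters, not from a lookup table of word pairs, but the generation loop follows the same idea.

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus, then repeatedly
# emit the most frequent successor of the last word generated.
corpus = "the robot reads the book and the robot writes the story".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def generate(prompt_word, n_words=4):
    words = [prompt_word]
    for _ in range(n_words):
        options = successors.get(words[-1])
        if not options:  # dead end: no observed successor
            break
        words.append(options.most_common(1)[0][0])  # greedy: pick the likeliest
    return " ".join(words)

print(generate("the"))
```

A real model replaces the count table with a learned probability distribution over its entire vocabulary and usually samples from it rather than always taking the single likeliest word, but each step still conditions only on what has been generated so far.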

The model has demonstrated some incredible capabilities, such as the ability to write a sonnet about robots in the style of Shakespeare’s famous Sonnet 18. During his talk, Brooks showcased the sonnet he asked ChatGPT to write side-by-side with his own sonnet.

But while researchers still don’t fully understand exactly how these models work, Brooks assured the audience that generative AI’s seemingly incredible capabilities are not magic, and it doesn’t mean these models can do anything.

His biggest fears about generative AI don’t revolve around models that could someday surpass human intelligence. Rather, he is most worried about researchers who may throw away decades of excellent work that was nearing a breakthrough, just to jump on shiny new advancements in generative AI; venture capital firms that blindly swarm toward technologies that can yield the highest margins; or the possibility that a whole generation of engineers will forget about other forms of software and AI.

At the end of the day, those who believe generative AI can solve the world’s problems and those who believe it will only generate new problems have at least one thing in common: Both groups tend to overestimate the technology, he said.

“What is the conceit with generative AI? The conceit is that it is somehow going to lead to artificial general intelligence. By itself, it is not,” Brooks said.

Following Brooks’ presentation, a group of MIT faculty spoke about their work using generative AI and participated in a panel discussion about future advances, important but underexplored research topics, and the challenges of AI regulation and policy.

The panel consisted of Jacob Andreas, an associate professor in the MIT Department of Electrical Engineering and Computer Science (EECS) and a member of CSAIL; Antonio Torralba, the Delta Electronics Professor of EECS and a member of CSAIL; Ev Fedorenko, an associate professor of brain and cognitive sciences and an investigator at the McGovern Institute for Brain Research at MIT; and Armando Solar-Lezama, a Distinguished Professor of Computing and associate director of CSAIL. It was moderated by William T. Freeman, the Thomas and Gerd Perkins Professor of EECS and a member of CSAIL.

The panelists discussed several potential future research directions around generative AI, including the possibility of integrating perceptual systems, drawing on human senses like touch and smell, rather than focusing primarily on language and images. The researchers also spoke about the importance of engaging with policymakers and the public to ensure generative AI tools are produced and deployed responsibly.

“One of the big risks with generative AI today is the risk of digital snake oil. There is a big risk of a lot of products going out that claim to do miraculous things but in the long run could be very harmful,” Solar-Lezama said.

The morning session concluded with an excerpt from the 1925 science fiction novel “Metropolis,” read by senior Joy Ma, a physics and theater arts major, followed by a roundtable discussion on the future of generative AI. The discussion included Joshua Tenenbaum, a professor in the Department of Brain and Cognitive Sciences and a member of CSAIL; Dina Katabi, the Thuan and Nicole Pham Professor in EECS and a principal investigator in CSAIL and the MIT Jameel Clinic; and Max Tegmark, a professor of physics. It was moderated by Daniela Rus.

One focus of the discussion was the possibility of developing generative AI models that can go beyond what we can do as humans, such as tools that can sense someone’s emotions by using electromagnetic signals to understand how a person’s breathing and heart rate are changing.

But one key to integrating AI like this into the real world safely is to ensure that we can trust it, Tegmark said. If we know an AI tool will meet the specifications we insist on, then “we no longer have to be afraid of building really powerful systems that go out and do things for us in the world,” he said.

Tuning the mind to benefit mental health

This story also appears in the Winter 2024 issue of BrainScan.

___

Illustration of woman sitting at end of a dock with head down, arms wrapped around her knees.
Mental health is the defining public health crisis of our time, according to U.S. Surgeon General Vivek Murthy, and the nation’s youth is at the center of this crisis.

Psychiatrists and pediatricians have sounded an alarm. The mental health of youth in the United States is worsening. Youth visits to emergency departments related to depression, anxiety, and behavioral challenges have been on the rise for years. Suicide rates among young people have escalated, too. Researchers have tracked these trends for more than a decade, and the Covid-19 pandemic only exacerbated the situation.

“It’s all over the news, how shockingly common mental health difficulties are,” says John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology at MIT and an investigator at the McGovern Institute. “It’s worsening by every measure.”

Experts worry that our mental health systems are inadequate to meet the growing need. “This has gone from bad to catastrophic, from my perspective,” says Susan Whitfield-Gabrieli, a professor of psychology at Northeastern University and a research affiliate at the McGovern Institute.

“We really need to come up with novel interventions that target the neural mechanisms that we believe potentiate depression and anxiety.”

Training the brain

One approach may be to help young people learn to modulate some of the relevant brain circuitry themselves. Evidence is accumulating that practicing mindfulness — focusing awareness on the present, typically through meditation — can change patterns of brain activity associated with emotions and mental health.

“There’s been a steady flow of moderate-size studies showing that when you help people gain mindfulness through training programs, you get all kinds of benefits in terms of people feeling less stress, less anxiety, fewer negative emotions, and sometimes more positive ones as well,” says Gabrieli, who is also a professor of brain and cognitive sciences at MIT. “Those are the things you wish for people.”

“If there were a medicine with as much evidence of its effectiveness as mindfulness, it would be flying off the shelves of every pharmacy.”
– John Gabrieli

Researchers have even begun testing mindfulness-based interventions head-to-head against standard treatments for psychiatric disorders. The results of recent studies involving hundreds of adults with anxiety disorders or depression are encouraging. “It’s just as good as the best medicines and the best behavioral treatments that we know a ton about,” Gabrieli says.

Much mindfulness research has focused on adults, but promising data about the benefits of mindfulness training for children and adolescents is emerging as well. In studies supported by the McGovern Institute’s Poitras Center for Psychiatric Disorders Research in 2019 and 2020, Gabrieli and Whitfield-Gabrieli found that sixth-graders in a Boston middle school who participated in eight weeks of mindfulness training experienced reductions in feelings of stress and increases in sustained attention. More recently, Gabrieli and Whitfield-Gabrieli’s teams have shown how new tools can support mindfulness training and make it accessible to more children and their families — from a smartphone app that can be used anywhere to real-time neurofeedback inside an MRI scanner.

Three people practicing mindfulness in MIT Building 46. Woman on left is leaning on a railing, wearing headphones with eyes closed. Man seated in the center holds a bowl and a wooden spoon. Woman on right is seated with legs crossed and eyes closed.
Isaac Treves (center), a PhD student in the lab of John Gabrieli, is the lead author of two studies which found that mindfulness training may improve children’s mental health. Treves and his co-authors Kimberly Wang (left) and Cindy Li (right) also practice mindfulness in their daily lives. Photo: Steph Stevens

Mindfulness and mental health

Mindfulness is not just a practice, it is a trait — an open, non-judgmental way of attending to experiences that some people exhibit more than others. By assessing individuals’ mindfulness with questionnaires that ask about attention and awareness, researchers have found that the trait is associated with many measures of mental health. Gabrieli and his team measured mindfulness in children between the ages of eight and ten and found it was highest in those who were most emotionally resilient to the stress they experienced during the Covid-19 pandemic. As the team reported this year in the journal PLOS One, children who were more mindful rated the impact of the pandemic on their own lives lower than other participants in the study. They also reported lower levels of stress, anxiety, and depression.

Illustration of a finger tracing the outline of a hand. There is a circle next to the hand with text that says, "Breathe In, Breathe Out. Children enrolled in John Gabrieli’s mindfulness study learned to trace the outline of their fingers in rhythm with their in-and-out breathing pattern. This multisensory breathing technique has been shown to relieve anxiety and relax the body."

Mindfulness doesn’t come naturally to everyone, but brains are malleable, and both children and adults can cultivate mindfulness with training and practice. In their studies of middle schoolers, Gabrieli and Whitfield-Gabrieli showed that the emotional effects of mindfulness training corresponded to measurable changes in the brain: Functional MRI scans revealed changes in regions involved in stress, negative feelings, and focused attention.

Whitfield-Gabrieli says if mindfulness training makes kids more resilient, it could be a valuable tool for managing symptoms of anxiety and depression before they become severe. “I think it should be part of the standard school day,” she says. “I think we would have a much happier, healthier society if we could be doing this from the ground up.”

Data from Gabrieli’s lab suggests broadly implementing mindfulness training might even pay off in terms of academic achievement. His team found in a 2019 study that middle school students who reported greater levels of mindfulness had, on average, better grades, better scores on standardized tests, fewer absences, and fewer school suspensions than their peers.

Some schools have begun making mindfulness programs available to their students. But those programs don’t reach everyone, and their type and quality vary tremendously. Indeed, not every study of mindfulness training in schools has found the program to significantly benefit participants, which may be because not every approach to mindfulness training is equally effective.

“This is where I think the science matters,” Gabrieli says. “You have to find out what kinds of supports really work and you have to execute them reasonably.”

A recent report from Gabrieli’s lab offers encouraging news: mindfulness training doesn’t have to be in-person. Gabrieli and his team found that children can benefit from practicing mindfulness at home with the help of an app.

When the pandemic closed schools in 2020, school-based mindfulness programs came to an abrupt halt. Soon thereafter, a group called Inner Explorer developed a smartphone app that could teach children mindfulness at home. Gabrieli and his team were eager to find out if this easy-access tool could effectively support children’s emotional well-being.

In October of this year, they reported in the journal Mindfulness that after 40 days of app use, children between the ages of eight and ten reported less stress than they had before beginning mindfulness training. Parents reported that their children were also experiencing fewer negative emotions, such as loneliness and fear.

The outcomes suggest a path toward making evidence-based mindfulness training for children broadly accessible. “Tons of people could do this,” says Gabrieli. “It’s super scalable. It doesn’t cost money; you don’t have to go somewhere. We’re very excited about that.”

Visualizing healthy minds

Mindfulness training may be even more effective when practitioners can visualize what’s happening in their brains. In Whitfield-Gabrieli’s lab, teenagers have had a chance to slide inside an MRI scanner and watch their brain activity shift in real time as they practiced mindfulness meditation. The visualization they see focuses on the brain’s default mode network (DMN), which is most active when attention is not focused on a particular task. Certain patterns of activity in the DMN have been linked to depression, anxiety, and other psychiatric conditions, and mindfulness training may help break these patterns.

McGovern research affiliate Susan Whitfield-Gabrieli in the Martinos Imaging Center. Photo: Caitlin Cunningham

Whitfield-Gabrieli explains that when the mind is free to wander, two hubs of the DMN become active. “Typically, that means we’re engaged in some kind of mental time travel,” she says. That might mean reminiscing about the past or planning for the future, but it can be more distressing when it turns into obsessive rumination or worry. In people with anxiety, depression, and psychosis, these network hubs are often hyperconnected.

“It’s almost as if they’re hijacked,” Whitfield-Gabrieli says. “The more they’re correlated, the more psychopathology one might be experiencing. We wanted to unlock that hyperconnectivity for kids who are suffering from depression and anxiety.” She hoped that by replacing thoughts of the past and the future with focus on the present, mindfulness meditation would rein in overactive DMNs, and she wanted a way to encourage kids to do exactly that.

The neurofeedback tool that she and her colleagues created focuses on the DMN as well as a separate brain region that is called on during attention-demanding tasks. Activity in those regions is monitored with functional MRI and displayed to users in a game-like visualization. Inside the scanner, participants see how that activity changes as they focus on a meditation or when their mind wanders. As their mind becomes more focused on the present moment, changes in brain activity move a ball toward a target.

Whitfield-Gabrieli says the real-time feedback was motivating for the adolescents who participated in a recent study, all of whom had histories of anxiety or depression. “They’re training their brain to tune their mind, and they love it,” she says.

MRI images of two brains, one showing an active DMN and the other showing a healthy DMN.
The default mode network (DMN) is a large-scale brain network that is active when a person is not focused on the outside world and the brain is at wakeful rest. The DMN is often over-engaged in adolescents with depression and anxiety, as well as teens at risk for these affective disorders (left). DMN activation and connectivity can be “tuned” to a healthier state through the practice of mindfulness (right).

In March, she and her team reported in Molecular Psychiatry that the neurofeedback tool helped those study participants reduce connectivity in the DMN and engage a more desirable brain state. It’s not the first success the team has had with the approach. Previously, they found that the decreases in DMN connectivity brought about by mindfulness meditation with neurofeedback were associated with reduced hallucinations in patients with schizophrenia. Testing the clinical benefits of the approach in teens is on the horizon; Whitfield-Gabrieli and her collaborators plan to investigate how mindfulness meditation with real-time neurofeedback affects depression symptoms in an upcoming clinical trial.

Whitfield-Gabrieli emphasizes that the neurofeedback is a training tool, helping users improve mindfulness techniques they can later call on anytime, anywhere. While that training currently requires time inside an MRI scanner, she says it may be possible to create an EEG-based version of the approach, which could be deployed in doctors’ offices and other more accessible settings.

Both Gabrieli and Whitfield-Gabrieli continue to explore how mindfulness training impacts different aspects of mental health, in both children and adults and across a range of psychiatric conditions. Whitfield-Gabrieli expects it will be one powerful tool for combating a youth mental health crisis for which there will be no single solution. “I think it’s going to take a village,” she says. “We are all going to have to work together, and we’ll have to come up with some really innovative ways to help.”

Practicing mindfulness with an app may improve children’s mental health

Many studies have found that practicing mindfulness — defined as cultivating an open-minded attention to the present moment — has benefits for children. Children who receive mindfulness training at school have demonstrated improvements in attention and behavior, as well as greater mental health.

When the Covid-19 pandemic began in 2020, sending millions of students home from school, a group of MIT researchers wondered if remote, app-based mindfulness practices could offer similar benefits. In a study conducted during 2020 and 2021, they report that children who used a mindfulness app at home for 40 days showed improvements in several aspects of mental health, including reductions in stress and negative emotions such as loneliness and fear.

The findings suggest that remote, app-based mindfulness interventions, which could potentially reach a larger number of children than school-based approaches, could offer mental health benefits, the researchers say.

“There is growing and compelling scientific evidence that mindfulness can support mental well-being and promote mental health in diverse children and adults,” says John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology, a professor of brain and cognitive sciences at MIT, and the senior author of the study, which appears this week in the journal Mindfulness.

Researchers in Gabrieli’s lab also recently reported that children who showed higher levels of mindfulness were more emotionally resilient to the negative impacts of the Covid-19 pandemic.

“To some extent, the impact of Covid is out of your control as an individual, but your ability to respond to it and to interpret it may be something that mindfulness can help with,” says MIT graduate student Isaac Treves, who is the lead author of both studies.

Pandemic resilience

After the pandemic began in early 2020, Gabrieli’s lab decided to investigate the effects of mindfulness on children who had to leave school and isolate from friends. In a study that appeared in the journal PLOS One in July, the researchers explored whether mindfulness could boost children’s resilience to negative emotions that the pandemic generated, such as frustration and loneliness.

Working with students between 8 and 10 years old, the researchers measured the children’s mindfulness using a standardized assessment that captures their tendency to blame themselves, ruminate on negative thoughts, and suppress their feelings.

The researchers also asked the children questions about how much the pandemic had affected different aspects of their lives, as well as questions designed to assess their levels of anxiety, depression, stress, and negative emotions such as worry or fear.

Among children who showed the highest levels of mindfulness, there was no correlation between how much the pandemic impacted them and negative feelings. However, in children with lower levels of mindfulness, there was a strong correlation between Covid-19 impact and negative emotions.

The children in this study did not receive any kind of mindfulness training, so their responses reflect their tendency to be mindful at the time they answered the researchers’ questions. The findings suggest that children with higher levels of mindfulness were less likely to get caught up in negative emotions or blame themselves for the negative things they experienced during the pandemic.

“This paper was our best attempt to look at mindfulness specifically in the context of Covid and to think about what are the factors that may help children adapt to the changing circumstances,” Treves says. “The takeaway is not that we shouldn’t worry about pandemics because we can just help the kids with mindfulness. People are able to be resilient when they’re in systems that support them, and in families that support them.”

Remote interventions

The researchers then built on that study by exploring whether a remote, app-based intervention could effectively increase mindfulness and improve mental health. Researchers in Gabrieli’s lab have previously shown that students who received mindfulness training in middle school showed better academic performance, received fewer suspensions, and reported less stress than those who did not receive the training.

For the new study, reported today in Mindfulness, the researchers worked with the same children they had recruited for the PLOS One study and divided them into three groups of about 80 students each.

One group received mindfulness training through an app created by Inner Explorer, a nonprofit that also develops school-based meditation programs. Those children were instructed to engage in mindfulness training five days a week, including relaxation exercises, breathing exercises, and other forms of meditation.

For comparison purposes, the other two groups were asked to use an app for listening to audiobooks (not related to mindfulness). One group was simply given the audiobook app and encouraged to listen at their own pace, while the other group also had weekly one-on-one virtual meetings with a facilitator.

At the beginning and end of the study, the researchers evaluated each participant’s levels of mindfulness, along with measures of mental health such as anxiety, stress, and depression. They found that in all three groups, mental health improved over the course of the eight-week study, and each group also showed increases in mindfulness and prosociality (engaging in helpful behavior).

Additionally, children in the mindfulness group showed some improvements that the other groups didn’t, including a more significant decrease in stress. Parents in the mindfulness group also reported that their children experienced more significant decreases in negative emotions such as anger and sadness. Students who practiced the mindfulness exercises the most days showed the greatest benefits.

The researchers were surprised to see that there were no significant differences in measures of anxiety and depression between the mindfulness group and audiobook groups; they hypothesize that may be because students who interacted with a facilitator in one of the audiobook groups also experienced beneficial effects on their mental health.

Overall, the findings suggest that there is value in remote, app-based mindfulness training, especially if children engage with the exercises consistently and receive encouragement from parents, the researchers say. Apps also offer the ability to reach a larger number of children than school-based programs, which require more training and resources.

“There are a lot of great ways to incorporate mindfulness training into schools, but in general, it’s more resource-intensive than having people download an app. So, in terms of pure scalability and cost-effectiveness, apps are useful,” Treves says. “Another good thing about apps is that the kids can go at their own pace and repeat practices that they like, so there’s more freedom of choice.”

The research was funded by the Chan Zuckerberg Initiative as part of the Reach Every Reader Project, the National Institutes of Health, and the National Science Foundation.

Re-imagining our theories of language

Over a decade ago, the neuroscientist Ev Fedorenko asked 48 English speakers to complete tasks like reading sentences, recalling information, solving math problems, and listening to music. As they did this, she scanned their brains using functional magnetic resonance imaging to see which circuits were activated. If, as linguists have proposed for decades, language is connected to thought in the human brain, then the language processing regions would be activated even during nonlinguistic tasks.

Fedorenko’s experiment, published in 2011 in the Proceedings of the National Academy of Sciences, showed that when it comes to arithmetic, musical processing, general working memory, and other nonlinguistic tasks, language regions of the human brain showed no response. Contrary to what many linguists have claimed, complex thought and language are separate things. One does not require the other. “We have this highly specialized place in the brain that doesn’t respond to other activities,” says Fedorenko, who is an associate professor in the Department of Brain and Cognitive Sciences (BCS) and an investigator at the McGovern Institute for Brain Research. “It’s not true that thought critically needs language.”

The design of the experiment, using neuroscience to understand how language works, how it evolved, and its relation to other cognitive functions, is at the heart of Fedorenko’s research. She is part of a unique intellectual triad at MIT’s Department of BCS, along with her colleagues Roger Levy and Ted Gibson. (Gibson and Fedorenko have been married since 2007). Together they have engaged in a years-long collaboration and built a significant body of research focused on some of the biggest questions in linguistics and human cognition. While working in three independent labs — EvLab, TedLab, and the Computational Psycholinguistics Lab — the researchers are motivated by a shared fascination with the human mind and how language works in the brain. “We have a great deal of interaction and collaboration,” says Levy. “It’s a very broadly collaborative, intellectually rich and diverse landscape.”

Using combinations of computational modeling, psycholinguistic experimentation, behavioral data, brain imaging, and large naturalistic language datasets, the researchers also share an answer to a fundamental question: What is the purpose of language? Of all the possible answers to why we have language, perhaps the simplest and most obvious is communication. “Believe it or not,” says Ted Gibson, “that is not the standard answer.”

Gibson first came to MIT in 1993 and joined the faculty of the Linguistics Department in 1997. Recalling the experience today, he describes it as frustrating. The field of linguistics at that time was dominated by the ideas of Noam Chomsky, one of the founders of MIT’s Graduate Program in Linguistics, who has been called the father of modern linguistics. Chomsky’s “nativist” theories of language posited that the purpose of language is the articulation of thought and that language capacity is built in before any learning takes place. But Gibson, with his training in math and computer science, felt that researchers had never satisfyingly tested these ideas. He believed that finding the answer to many outstanding questions about language required quantitative research, a departure from standard linguistic methodology. “There’s no reason to rely only on you and your friends, which is how linguistics has worked,” Gibson says. “The data you can get can be much broader if you crowdsource lots of people using experimental methods.” Chomsky’s ascendancy in linguistics presented Gibson with what he saw as a challenge and an opportunity. “I felt like I had to figure it out in detail and see if there was truth in these claims,” he says.

Three decades after he first joined MIT, Gibson believes that the collaborative research at BCS is persuasive and provocative, pointing to new ways of thinking about human culture and cognition. “Now we’re at a stage where it is not just arguments against. We have a lot of positive stuff saying what language is,” he explains. Levy adds: “I would say all three of us are of the view that communication plays a very important role in language learning and processing, but also in the structure of language itself.”

Levy points out that the three researchers completed PhDs in different subjects: Fedorenko in neuroscience, Gibson in computer science, Levy in linguistics. Yet for years before their paths finally converged at MIT, their shared interest in quantitative linguistic research led them to follow each other’s work closely and be influenced by it. The first collaboration among the three came in 2005 and focused on language processing in Russian relative clauses. Around that time, Gibson recalls, Levy was presenting what he describes as “lovely work” that was instrumental in helping him understand the links between language structure and communication. “Communicative pressures drive the structures,” says Gibson. “Roger was crucial for that. He was the one helping me think about those things a long time ago.”

Levy’s lab is focused on the intersection of artificial intelligence, linguistics, and psychology, using natural language processing tools. “I try to use the tools that are afforded by mathematical and computer science approaches to language to formalize scientific hypotheses about language and the human mind and test those hypotheses,” he says.

Levy points to ongoing research between him and Gibson focused on language comprehension as an example of the benefits of collaboration. “One of the big questions is: When language understanding fails, why does it fail?” Together, the researchers have applied the concept of a “noisy channel,” first developed by the information theorist Claude Shannon in the 1940s, which holds that messages can be corrupted as they are transmitted. “Language understanding unfolds over time, involving an ongoing integration of the past with the present,” says Levy. “Memory itself is an imperfect channel conveying the past from our brain a moment ago to our brain now in order to support successful language understanding.” Indeed, the richness of our linguistic environment, the experience of hundreds of millions of words by adulthood, may create a kind of statistical knowledge guiding our expectations, beliefs, predictions, and interpretations of linguistic meaning. “Statistical knowledge of language actually interacts with the constraints of our memory,” says Levy. “Our experience shapes our memory for language itself.”
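The noisy-channel idea can be sketched in a few lines of code. Everything below is illustrative: the sentences, the prior probabilities, and the noise model are invented for this example, not drawn from the researchers' actual models.

```python
# Toy noisy-channel comprehension: a listener infers the intended
# sentence s from a perceived string p by Bayes' rule,
#   P(s | p) ∝ P(s) * P(p | s).

# Hypothetical prior: how plausible each intended sentence is a priori.
prior = {
    "the mother gave the candle to the daughter": 0.9,
    "the mother gave the daughter to the candle": 0.1,
}

# Hypothetical noise model: the chance the intended sentence survived
# transmission intact vs. was corrupted (e.g., two words swapped).
def likelihood(perceived, intended):
    return 0.8 if perceived == intended else 0.2

def interpret(perceived):
    scores = {s: prior[s] * likelihood(perceived, s) for s in prior}
    total = sum(scores.values())
    return {s: score / total for s, score in scores.items()}

# Even when the perceived string exactly matches the implausible
# sentence, the plausible reading can win once the prior is factored in.
posterior = interpret("the mother gave the daughter to the candle")
```

With these made-up numbers, the plausible interpretation ends up more probable than the literal (implausible) one, which is the signature noisy-channel effect: comprehenders silently "repair" unlikely sentences.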

All three researchers say they share the belief that by following the evidence, they will eventually discover an even bigger and more complete story about language. “That’s how science goes,” says Fedorenko. “Ted trained me, along with Nancy Kanwisher, and both Ted and Roger are very data-driven. If the data is not giving you the answer you thought, you don’t just keep pushing your story. You think of new hypotheses. Almost everything I have done has been like that.” At times, Fedorenko’s research into parts of the brain’s language system has surprised her and forced her to abandon her hypotheses. “In a certain project I came in with a prior idea that there would be some separation between parts that cared about combinatorics versus word meanings,” she says, “but every little bit of the language system is sensitive to both. At some point, I was like, this is what the data is telling us, and we have to roll with it.”

The researchers’ work pointing to communication as the constitutive purpose of language opens new possibilities for probing and studying non-human language. The standard claim is that human language has a drastically more extensive lexicon than any animal communication system, and that animals have no grammar. “But many times, we don’t even know what other species are communicating,” says Gibson. “We say they can’t communicate, but we don’t know. We don’t speak their language.” Fedorenko hopes that more opportunities to make cross-species linguistic comparisons will open up. “Understanding where things are similar and where things diverge would be super useful,” she says.

Meanwhile, the potential applications of language research are far-reaching. One of Levy’s current research projects focuses on how people read, using machine learning algorithms informed by the psychology of eye movements to develop proficiency tests. By tracking the eye movements of people who speak English as a second language while they read texts in English, Levy can predict how proficient they are in English, an approach that could one day replace the Test of English as a Foreign Language. “It’s an implicit measure of language rather than a much more game-able test,” he says.
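As a rough sketch of how such a system might work, one could reduce a reader's eye-movement record to a few summary features and score them. The feature names, weights, and fixation data below are entirely hypothetical; the actual models are trained with machine learning on real eye-tracking data.

```python
# Hypothetical pipeline: eye-movement record -> features -> proficiency score.

def extract_features(fixations):
    """fixations: list of (word_index, duration_ms) in reading order."""
    durations = [d for _, d in fixations]
    mean_dur = sum(durations) / len(durations)
    # A "regression" is a fixation that jumps back to an earlier word,
    # a classic marker of effortful reading.
    regressions = sum(
        1 for (w1, _), (w2, _) in zip(fixations, fixations[1:]) if w2 < w1
    )
    return {"mean_fixation_ms": mean_dur,
            "regression_rate": regressions / len(fixations)}

# Invented linear scoring: longer fixations and more regressions
# lower the predicted proficiency. Real weights would be learned.
def proficiency_score(features):
    return (100
            - 0.1 * features["mean_fixation_ms"]
            - 50 * features["regression_rate"])

# A made-up reading record: the reader jumps back from word 2 to word 1.
reader = [(0, 220), (1, 250), (2, 300), (1, 280), (3, 240)]
feats = extract_features(reader)
score = proficiency_score(feats)
```

The appeal of the approach, as Levy notes, is that the measure is implicit: the reader just reads, and the behavioral signature itself carries the information.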

The researchers agree that some of the most exciting opportunities in the neuroscience of language lie with large language models, which open the door to new questions and new discoveries. “In the neuroscience of language, the kind of stories that we’ve been able to tell about how the brain does language were limited to verbal, descriptive hypotheses,” says Fedorenko. Computationally implemented models are now amazingly good at language and show some degree of alignment to the brain, she adds. Researchers can now ask questions such as: What are the actual computations that cells are doing to get meaning from strings of words? “You can now use these models as tools to get insights into how humans might be processing language,” she says. “And you can take the models apart in ways you can’t take apart the brain.”
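The kind of model-to-brain comparison Fedorenko describes can be caricatured as a simple encoding model: fit a mapping from a quantity the language model computes to a measured neural response. The numbers below, a per-word "surprisal" from a model and a simulated brain response, are invented purely for illustration.

```python
# Toy encoding model: does a language-model-derived predictor (surprisal,
# i.e., how unexpected each word is) track a neural signal? We fit
# ordinary least squares for one predictor: y ≈ a * x + b.

surprisal = [1.2, 3.5, 0.8, 4.1, 2.0]   # hypothetical per-word LM output
brain_resp = [0.9, 2.6, 0.7, 3.0, 1.6]  # hypothetical neural measurements

n = len(surprisal)
mx = sum(surprisal) / n
my = sum(brain_resp) / n
a = (sum((x - mx) * (y - my) for x, y in zip(surprisal, brain_resp))
     / sum((x - mx) ** 2 for x in surprisal))
b = my - a * mx
```

A positive, well-fitting slope would suggest the model's word-by-word difficulty predicts the neural signal. Real studies do this at a vastly larger scale, with thousands of words, many recording sites, and regularized regression, but the logic is the same.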