The National Academy of Sciences (NAS) announced today that McGovern Investigator Evelina Fedorenko will receive a 2025 Troland Research Award for her groundbreaking contributions towards understanding the language network in the human brain.
The Troland Research Award is given annually to recognize unusual achievement by early-career researchers within the broad spectrum of experimental psychology.
Fedorenko, who is an associate professor of brain and cognitive sciences at MIT, is interested in how minds and brains create language. Her lab is unpacking the internal architecture of the brain’s language system and exploring the relationship between language and various cognitive, perceptual, and motor systems. Her novel methods combine precise measures of an individual’s brain organization with innovative computational modeling to make fundamental discoveries about the computations that underlie the uniquely human ability for language.
Fedorenko has shown that the language network is selective for language processing over diverse non-linguistic processes that have been argued to share computational demands with language, such as math, music, and social reasoning. Her work has also demonstrated that syntactic processing is not localized to any particular region within the language network; instead, every brain region that responds to syntactic processing is at least as sensitive to word meanings.
She has also shown that representations from neural network language models, such as ChatGPT, are similar to those in the brain’s language areas. Fedorenko has further highlighted that although language models can master linguistic rules and patterns, they are less effective at using language in real-world situations. In the human brain, that kind of functional competence is distinct from formal language competence, she says, requiring not just language-processing circuits but also brain areas that store knowledge of the world, reason, and interpret social interactions. Contrary to a prominent view that language is essential for thinking, Fedorenko argues that language is not the medium of thought and is primarily a tool for communication.
Ultimately, Fedorenko’s cutting-edge work is uncovering the computations and representations that fuel language processing in the brain. She will receive the Troland Award this April, during the annual meeting of the NAS in Washington DC.
McGovern investigator Nancy Kanwisher and her team have big questions about the nature of the human mind. Energized by Kanwisher’s enthusiasm for finding out how and why the brain works as it does, her team collaborates broadly and embraces various tools of neuroscience. But their core discoveries tend to emerge from pictures of the brain in action. For Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience at MIT, “there’s nothing like looking inside.”
Kanwisher and her colleagues have scanned the brains of hundreds of volunteers using functional magnetic resonance imaging (fMRI). With each scan, they collect a piece of insight into how the brain is organized.
Recognizing faces
By visualizing the parts of the brain that get involved in various mental activities — and, importantly, which do not — they’ve discovered that certain parts of the brain specialize in surprisingly specific tasks. Earlier this year Kanwisher was awarded the prestigious Kavli Prize in Neuroscience for the discovery of one of these hyper-specific regions: a small spot within the brain’s neocortex that recognizes faces.
Kanwisher found that this region, which she named the fusiform face area (FFA), is highly sensitive to images of faces and appears to be largely uninterested in other objects. Without the FFA, the brain struggles with facial recognition — an impairment seen in patients who have experienced damage to this part of the brain.
Beyond the FFA
Not everything in the brain is so specialized. Many areas participate in a range of cognitive processes, and even the most specialized modules, like the FFA, must work with other brain regions to process and use information. Plus, Kanwisher and her team have tracked brain activity during many functions without finding regions devoted exclusively to those tasks. (There doesn’t appear to be a part of the brain dedicated to recognizing snakes, for example).
Still, work in the Kanwisher lab demonstrates that as a specialized functional module within the brain, the FFA is not unique. In collaboration with McGovern colleagues Josh McDermott and Evelina Fedorenko, the group has found areas devoted to perceiving music and using language. There’s even a region dedicated to thinking about other people’s thoughts, identified by Rebecca Saxe in work she started as a graduate student in Kanwisher’s lab.
Having established these regions’ roles, Kanwisher and her collaborators are now looking at how and why they become so specialized. Meanwhile, the group has also turned its attention to a more complex function that seems to largely take place within a defined network: our intuitive sense of physics.
The brain’s game engine
Early in life, we begin to understand the nature of objects and materials, such as the fact that objects can support but not move through each other. Later, we intuitively understand how it feels to move on a slippery floor, what happens when moving objects collide, and where a tossed ball will fall. “You can’t do anything at all in the world without some understanding of the physics of the world you’re acting on,” Kanwisher says.
Kanwisher says MIT colleague Josh Tenenbaum first sparked her interest in intuitive physical reasoning. Tenenbaum and his students had been arguing that humans understand the physical world using a simulation system, much like the physics engines that video games use to generate realistic movement and interactions within virtual environments. Kanwisher decided to team up with Tenenbaum to test whether there really is a game engine in the head, and if so, what it computes and represents.
To find out, Kanwisher and her team have asked volunteers to evaluate various scenarios while in an MRI scanner — some that require physical reasoning and some that do not. They found sizable parts of the brain that participate in physical reasoning tasks but stay quiet during other kinds of thinking.
Research scientist RT Pramod says he was initially skeptical that the brain would dedicate special circuitry to the diverse tasks involved in our intuitive sense of physics — but he’s been convinced by the data he’s found. “I see consistent evidence that if you’re reasoning, if you’re thinking, or even if you’re looking at anything sort of ‘physics-y’ about the world, you will see activations in these regions and only in these regions — not anywhere else,” he says.
Pramod’s experiments also show that these regions are called on to make predictions about the physical world. When volunteers watch videos of objects whose trajectories portend a crash — but do not actually depict that crash — it is the physics network that signals what is about to happen. “Only these regions have this information, suggesting that maybe there is some truth to the physics engine hypothesis,” Pramod says.
Kanwisher says she doesn’t expect physical reasoning, which her group has tied to sizable swaths of the brain’s frontal and parietal cortex, to be executed by a module as distinct as the FFA. “It’s not going to be like one hyper-specific region and that’s all that happens there,” she says. “I think ultimately it’s much more interesting than that.”
To figure out what these regions can and cannot do, Kanwisher’s team has broadened the ways in which they ask volunteers to think about physics inside the MRI scanner. So far, Kanwisher says, the group’s tests have focused on rigid objects. But what about soft, squishy ones, or liquids?
Vivian Paulun, a postdoc working jointly with Kanwisher and Tenenbaum, is investigating whether our innate expectations about these kinds of materials arise within the network that the team has linked to physical reasoning about rigid objects. Another set of experiments will explore whether we use sounds, like that of a bouncing ball or a screeching car, to predict physical events with the same network that interprets visual cues.
Meanwhile, she is also excited about an opportunity to find out what happens when the brain’s physics network is damaged. With collaborators in England, the group plans to find out whether patients in whom a stroke has affected this part of the brain have specific deficits in physical reasoning.
Probing these questions could reveal fundamental truths about the human mind and intelligence. Pramod points out that it could also help advance artificial intelligence, which so far has been unable to match humans when it comes to physical reasoning. “Inferences that are sort of easy for us are still really difficult for even state-of-the-art computer vision,” he says. “If we want to get to a stage where we have really good machine learning algorithms that can interact with the world the way we do, I think we should first understand how the brain does it.”
Language is a defining feature of humanity, and for centuries, philosophers and scientists have contemplated its true purpose. We use language to share information and exchange ideas—but is it more than that? Do we use language not just to communicate, but to think?
In the June 19, 2024, issue of the journal Nature, McGovern Institute neuroscientist Evelina Fedorenko and colleagues argue that we do not. Language, they say, is primarily a tool for communication.
Fedorenko acknowledges that there is an intuitive link between language and thought. Many people experience an inner voice that seems to narrate their own thoughts. And it’s not unreasonable to expect that well-spoken, articulate individuals are also clear thinkers. But as compelling as these associations can be, they are not evidence that we actually use language to think.
“I think there are a few strands of intuition and confusions that have led people to believe very strongly that language is the medium of thought,” she says. “But when they are pulled apart thread by thread, they don’t really hold up to empirical scrutiny.”
Separating language and thought
For centuries, language’s potential role in facilitating thinking was nearly impossible to evaluate scientifically.
But neuroscientists and cognitive scientists now have tools that enable a more rigorous consideration of the idea. Evidence from both fields, which Fedorenko, MIT cognitive scientist and linguist Edward Gibson, and University of California Berkeley cognitive scientist Steven Piantadosi review in their Nature Perspective, supports the idea that language is a tool for communication, not for thought.
“What we’ve learned by using methods that actually tell us about the engagement of the linguistic processing mechanisms is that those mechanisms are not really engaged when we think,” Fedorenko says. Also, she adds, “you can take those mechanisms away, and it seems that thinking can go on just fine.”
Over the past 20 years, Fedorenko and other neuroscientists have advanced our understanding of what happens in the brain as it generates and understands language. Now, using functional MRI to find parts of the brain that are specifically engaged when someone reads or listens to sentences or passages, they can reliably identify an individual’s language-processing network. Then they can monitor those brain regions while the person performs other tasks, from solving a sudoku puzzle to reasoning about other people’s beliefs.
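The paragraph above describes a functional localizer approach. The sketch below is a rough illustration only, written in Python with synthetic numbers and made-up variable names, and is not the lab’s actual pipeline: it defines each person’s language-responsive voxels from a sentences-versus-control contrast, then measures how those same voxels respond during other tasks.

```python
# Minimal sketch (not the lab's actual pipeline) of the "functional localizer"
# logic: find each person's language-responsive voxels with a sentences-vs-
# control contrast, then ask how those same voxels respond during other tasks.
# All data here are synthetic; array and task names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 5000

# Per-voxel response estimates (e.g., GLM betas) from a localizer run.
beta_sentences = rng.normal(1.0, 1.0, n_voxels)
beta_nonwords = rng.normal(0.2, 1.0, n_voxels)

# Define this individual's language network as the top 10% of voxels
# for the sentences > nonwords contrast.
contrast = beta_sentences - beta_nonwords
language_voxels = contrast > np.percentile(contrast, 90)

# Per-voxel response estimates from independent runs of other tasks.
other_tasks = {
    "sudoku": rng.normal(0.0, 1.0, n_voxels),
    "theory_of_mind": rng.normal(0.0, 1.0, n_voxels),
    "listening_native": rng.normal(0.8, 1.0, n_voxels),
}

# Average the response of the individually defined language voxels in each task.
for task, betas in other_tasks.items():
    print(f"{task}: mean response in language network = "
          f"{betas[language_voxels].mean():+.2f}")
```

The key design choice is that the network is defined in each individual before any other task is examined, so the later comparisons are not biased toward finding a response.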
“Pretty much everything we’ve tested so far, we don’t see any evidence of the engagement of the language mechanisms,” Fedorenko says. “Your language system is basically silent when you do all sorts of thinking.”
That’s consistent with observations from people who have lost the ability to process language due to an injury or stroke. Severely affected patients can be completely unable to process words, yet this does not interfere with their ability to solve math problems, play chess, or plan for future events. “They can do all the things that they could do before their injury. They just can’t take those mental representations and convert them into a format which would allow them to talk about them with others,” Fedorenko says. “If language gives us the core representations that we use for reasoning, then…destroying the language system should lead to problems in thinking as well, and it really doesn’t.”
Conversely, intellectual impairments do not always associate with language impairment; people with intellectual disability disorders or neuropsychiatric disorders that limit their ability to think and reason do not necessarily have problems with basic linguistic functions. Just as language does not appear to be necessary for thought, Fedorenko and colleagues conclude that it is also not sufficient to produce clear thinking.
Language optimization
In addition to arguing that language is unlikely to be used for thinking, the scientists considered its suitability as a communication tool, drawing on findings from linguistic analyses. Analyses across dozens of diverse languages, both spoken and signed, have found recurring features that make them easy to produce and understand. “It turns out that pretty much any property you look at, you can find evidence that languages are optimized in a way that makes information transfer as efficient as possible,” Fedorenko says.
That’s not a new idea, but it has held up as linguists analyze larger corpora across more diverse sets of languages, which has become possible in recent years as the field has assembled corpora that are annotated for various linguistic features. Such studies find that across languages, sounds and words tend to be pieced together in ways that minimize effort for the language producer without muddling the message. For example, commonly used words tend to be short, while words whose meanings depend on one another tend to cluster close together in sentences. Likewise, linguists have noted features that help languages convey meaning despite potential “signal distortions,” whether due to attention lapses or ambient noise.
“All of these features seem to suggest that the forms of languages are optimized to make communication easier,” Fedorenko says, pointing out that such features would be irrelevant if language were primarily a tool for internal thought.
“Given that languages have all these properties, it’s likely that we use language for communication,” she says. She and her coauthors conclude that as a powerful tool for transmitting knowledge, language reflects the sophistication of human cognition—but does not give rise to it.
The Norwegian Academy of Science and Letters today announced the 2024 Kavli Prize Laureates in the fields of astrophysics, nanoscience, and neuroscience. The 2024 Kavli Prize in Neuroscience honors Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience at MIT and an investigator at the McGovern Institute, along with UC Berkeley neurobiologist Doris Tsao, and Rockefeller University neuroscientist Winrich Freiwald for their discovery of a highly localized and specialized system for representation of faces in human and non-human primate neocortex. The neuroscience laureates will share $1 million USD.
“Kanwisher, Freiwald, and Tsao together discovered a localized and specialized neocortical system for face recognition,” says Kristine Walhovd, Chair of the Kavli Neuroscience Committee. “Their outstanding research will ultimately further our understanding of recognition not only of faces, but objects and scenes.”
Overcoming failure
As a graduate student at MIT in the early days of functional brain imaging, Kanwisher was fascinated by the potential of the emerging technology to answer a suite of questions about the human mind. But a lack of brain imaging resources and a series of failed experiments led Kanwisher to consider leaving the field for good. She credits her advisor, MIT Professor of Psychology Molly Potter, for supporting her through this challenging time and for teaching her how to make powerful inferences about the inner workings of the mind from behavioral data alone.
After receiving her PhD from MIT, Kanwisher spent a year studying nuclear strategy with a MacArthur Foundation Fellowship in Peace and International Security, but eventually returned to science by accepting a faculty position at Harvard University where she could use the latest brain imaging technology to pursue the scientific questions that had always fascinated her.
Zeroing in on faces
Recognizing faces is important for social interaction in many animals. Previous work in human psychology and animal research had suggested the existence of a functionally specialized system for face recognition, but this system had not clearly been identified with brain imaging technology. It is here that Kanwisher saw her opportunity.
Using functional magnetic resonance imaging (fMRI), then a new method, Kanwisher’s team scanned people while they looked at faces and while they looked at objects, searching for brain regions that responded more to one than the other. They found a small patch of neocortex, now called the fusiform face area (FFA), that is dedicated specifically to face recognition. Because the precise location of this area varies from person to person, Kanwisher devised an analysis technique to localize specialized functional regions in each individual’s brain. This technique is now widely used and has been applied to domains beyond face recognition. Notably, Kanwisher’s first FFA paper was co-authored with Josh McDermott, then an undergraduate at Harvard University, who is now an associate investigator at the McGovern Institute and holds a faculty position alongside Kanwisher in MIT’s Department of Brain and Cognitive Sciences.
From humans to monkeys
Inspired by Kanwisher’s findings, Winrich Freiwald and Doris Tsao together used fMRI to localize similar face patches in macaque monkeys. They mapped out six distinct brain regions, known as the face patch system, characterizing these regions’ functional specialization and how they are connected. By recording the activity of individual brain cells, they revealed how cells in some face patches specialize in faces seen from particular views.
Tsao proceeded to identify how the face patches work together to identify a face, through a specific code that enables single cells to identify faces by assembling information about facial features. For example, some cells respond to the presence of hair, others to the distance between the eyes. Freiwald uncovered that a separate brain region, called the temporal pole, accelerates our recognition of familiar faces, and that some cells there are selectively responsive to familiar faces.
“It was a special thrill for me when Doris and Winrich found face patches in monkeys using fMRI,” says Kanwisher, whose lab at MIT’s McGovern Institute has gone on to uncover many other regions of the human brain that engage in specific aspects of perception and cognition. “They are scientific heroes to me, and it is a thrill to receive the Kavli Prize in neuroscience jointly with them.”
“Nancy and her students have identified neocortical subregions that differentially engage in the perception of faces, places, music and even what others think,” says McGovern Institute Director Robert Desimone. “We are delighted that her groundbreaking work into the functional organization of the human brain is being honored this year with the Kavli Prize.”
Together, the laureates, with their work on neocortical specialization for face recognition, have provided basic principles of neural organization which will further our understanding of how we perceive the world around us.
About the Kavli Prize
The Kavli Prize is a partnership among The Norwegian Academy of Science and Letters, The Norwegian Ministry of Education and Research, and The Kavli Foundation (USA). The Kavli Prize honors scientists for breakthroughs in astrophysics, nanoscience and neuroscience that transform our understanding of the big, the small and the complex. Three one-million-dollar prizes are awarded every other year in each of the three fields. The Norwegian Academy of Science and Letters selects the laureates based on recommendations from three independent prize committees whose members are nominated by The Chinese Academy of Sciences, The French Academy of Sciences, The Max Planck Society of Germany, The U.S. National Academy of Sciences, and The Royal Society, UK.
Scientists often label cells with proteins that glow, allowing them to track the growth of a tumor, or measure changes in gene expression that occur as cells differentiate.
While this technique works well in cells and some tissues of the body, it has been difficult to use for imaging structures deep within the brain, because the light scatters too much before it can be detected.
MIT engineers have now come up with a novel way to detect this type of light, known as bioluminescence, in the brain: They engineered blood vessels of the brain to express a protein that causes them to dilate in the presence of light. That dilation can then be observed with magnetic resonance imaging (MRI), allowing researchers to pinpoint the source of light.
“A well-known problem that we face in neuroscience, as well as other fields, is that it’s very difficult to use optical tools in deep tissue. One of the core objectives of our study was to come up with a way to image bioluminescent molecules in deep tissue with reasonably high resolution,” says Alan Jasanoff, an MIT professor of biological engineering, brain and cognitive sciences, and nuclear science and engineering.
The new technique developed by Jasanoff and his colleagues could enable researchers to explore the inner workings of the brain in more detail than has previously been possible.
Jasanoff, who is also an associate investigator at MIT’s McGovern Institute for Brain Research, is the senior author of the study, which appears today in Nature Biomedical Engineering. Former MIT postdocs Robert Ohlendorf and Nan Li are the lead authors of the paper.
Detecting light
Bioluminescent proteins are found in many organisms, including jellyfish and fireflies. Scientists use these proteins to label specific proteins or cells, whose glow can be detected by a luminometer. One of the proteins often used for this purpose is luciferase, which comes in a variety of forms that glow in different colors.
Jasanoff’s lab, which specializes in developing new ways to image the brain using MRI, wanted to find a way to detect luciferase deep within the brain. To achieve that, they came up with a method for transforming the blood vessels of the brain into light detectors. A popular form of MRI works by imaging changes in blood flow in the brain, so the researchers engineered the blood vessels themselves to respond to light by dilating.
“Blood vessels are a dominant source of imaging contrast in functional MRI and other non-invasive imaging techniques, so we thought we could convert the intrinsic ability of these techniques to image blood vessels into a means for imaging light, by photosensitizing the blood vessels themselves,” Jasanoff says.
To make the blood vessels sensitive to light, the researchers engineered them to express a bacterial protein called Beggiatoa photoactivated adenylate cyclase (bPAC). When exposed to light, this enzyme produces a molecule called cAMP, which causes blood vessels to dilate. When blood vessels dilate, it alters the balance of oxygenated and deoxygenated hemoglobin, which have different magnetic properties. This shift in magnetic properties can be detected by MRI.
bPAC responds specifically to blue light, which has a short wavelength, so it detects light generated within close range. The researchers used a viral vector to deliver the gene for bPAC specifically to the smooth muscle cells that make up blood vessels. When this vector was injected in rats, blood vessels throughout a large area of the brain became light-sensitive.
“Blood vessels form a network in the brain that is extremely dense. Every cell in the brain is within a couple dozen microns of a blood vessel,” Jasanoff says. “The way I like to describe our approach is that we essentially turn the vasculature of the brain into a three-dimensional camera.”
Once the blood vessels were sensitized to light, the researchers implanted cells that had been engineered to express luciferase if a substrate called CZT is present. In the rats, the researchers were able to detect luciferase by imaging the brain with MRI, which revealed dilated blood vessels.
Tracking changes in the brain
The researchers then tested whether their technique could detect light produced by the brain’s own cells, if they were engineered to express luciferase. They delivered the gene for a type of luciferase called GLuc to cells in a deep brain region known as the striatum. When the CZT substrate was injected into the animals, MRI imaging revealed the sites where light had been emitted.
This technique, which the researchers dubbed bioluminescence imaging using hemodynamics, or BLUsH, could be used in a variety of ways to help scientists learn more about the brain, Jasanoff says.
For one, it could be used to map changes in gene expression, by linking the expression of luciferase to a specific gene. This could help researchers observe how gene expression changes during embryonic development and cell differentiation, or when new memories form. Luciferase could also be used to map anatomical connections between cells or to reveal how cells communicate with each other.
The researchers now plan to explore some of those applications, as well as adapting the technique for use in mice and other animal models.
The research was funded by the U.S. National Institutes of Health, the G. Harold and Leila Y. Mathers Foundation, Lore Harp McGovern, Gardner Hendrie, a fellowship from the German Research Foundation, a Marie Sklodowska-Curie Fellowship from the European Union, and a Y. Eva Tan Fellowship and a J. Douglas Tan Fellowship, both from the McGovern Institute for Brain Research.
A new way of imaging the brain with magnetic resonance imaging (MRI) does not directly detect neural activity as originally reported, according to scientists at MIT’s McGovern Institute. The method, first described in 2022, generated excitement within the neuroscience community as a potentially transformative approach. But a study from the lab of McGovern Associate Investigator Alan Jasanoff, reported March 27, 2024, in the journal Science Advances, demonstrates that MRI signals produced by the new method are generated in large part by the imaging process itself, not neuronal activity.
Jasanoff explains that having a noninvasive means of seeing neuronal activity in the brain is a long-sought goal for neuroscientists. The functional MRI methods that researchers currently use to monitor brain activity don’t actually detect neural signaling. Instead, they use blood flow changes triggered by brain activity as a proxy. This reveals which parts of the brain are engaged during imaging, but it cannot pinpoint neural activity to precise locations, and it is too slow to truly track neurons’ rapid-fire communications.
So when a team of scientists reported in Science a new MRI method called DIANA, for “direct imaging of neuronal activity,” neuroscientists paid attention. The authors claimed that DIANA detected MRI signals in the brain that corresponded to the electrical signals of neurons, and that it acquired signals far faster than the methods now used for functional MRI.
“Everyone wants this,” Jasanoff says. “If we could look at the whole brain and follow its activity with millisecond precision and know that all the signals that we’re seeing have to do with cellular activity, this would be just wonderful. It could tell us all kinds of things about how the brain works and what goes wrong in disease.”
Jasanoff adds that from the initial report, it was not clear what brain changes DIANA was detecting to produce such a rapid readout of neural activity. Curious, he and his team began to experiment with the method. “We wanted to reproduce it, and we wanted to understand how it worked,” he says.
Decoding DIANA
Recreating the MRI procedure reported by DIANA’s developers, postdoctoral researcher Valerie Doan Phi Van imaged the brain of a rat as an electric stimulus was delivered to one paw. Phi Van says she was excited to see an MRI signal appear in the brain’s sensory cortex, exactly when and where neurons were expected to respond to the sensation on the paw. “I was able to reproduce it,” she says. “I could see the signal.”
With further tests of the system, however, her enthusiasm waned. To investigate the source of the signal, she disconnected the device used to stimulate the animal’s paw, then repeated the imaging. Again, signals showed up in the sensory processing part of the brain. But this time, there was no reason for neurons in that area to be activated. In fact, Phi Van found, the MRI produced the same kinds of signals when the animal inside the scanner was replaced with a tube of water. It was clear DIANA’s functional signals were not arising from neural activity.
Phi Van traced the source of the spurious signals to the pulse program that directs DIANA’s imaging process, detailing the sequence of steps the MRI scanner uses to collect data. Embedded within DIANA’s pulse program was a trigger for the device that delivers sensory input to the animal inside the scanner. That synchronizes the two processes, so the stimulation occurs at a precise moment during data acquisition. That trigger appeared to be causing the signals that DIANA’s developers had concluded indicated neural activity.
Phi Van altered the pulse program, changing the way the stimulator was triggered. Using the updated program, the MRI scanner detected no functional signal in the brain in response to the same paw stimulation that had produced a signal before. “If you take this part of the code out, then the signal will also be gone. So that means the signal we see is an artifact of the trigger,” she says.
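As a rough, hypothetical illustration of the control logic described above (synthetic data, not the study’s actual analysis), the sketch below averages stimulus-locked signal epochs from a run acquired with the trigger embedded in the pulse program and from a run acquired after the trigger was decoupled. A “response” that vanishes when only the trigger changes points to an acquisition artifact rather than neural activity.

```python
# Hypothetical sketch with synthetic data: compare stimulus-locked epoch
# averages between a run with the stimulator trigger embedded in the
# acquisition loop and a run with the trigger decoupled.
import numpy as np

rng = np.random.default_rng(1)
n_epochs, n_timepoints = 200, 40

def epoch_average(run):
    """Average signal across stimulus-locked epochs (epochs x timepoints)."""
    return run.mean(axis=0)

# Synthetic runs: the trigger-coupled run carries a small deflection locked
# to the trigger; the decoupled run is pure noise.
artifact = np.zeros(n_timepoints)
artifact[10:14] = 0.3
run_with_trigger = rng.normal(0, 1, (n_epochs, n_timepoints)) + artifact
run_trigger_removed = rng.normal(0, 1, (n_epochs, n_timepoints))

avg_with = epoch_average(run_with_trigger)
avg_without = epoch_average(run_trigger_removed)

print("peak deflection with embedded trigger :", round(avg_with.max(), 3))
print("peak deflection with trigger decoupled:", round(avg_without.max(), 3))
```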
Jasanoff and Phi Van went on to find reasons why other researchers have struggled to reproduce the results of the original DIANA report, noting that the trigger-generated signals can disappear with slight variations in the imaging process. With their postdoctoral colleague Sajal Sen, they also found evidence that cellular changes that DIANA’s developers had proposed might give rise to a functional MRI signal were not related to neuronal activity.
Jasanoff and Phi Van say it was important to share their findings with the research community, particularly as efforts continue to develop new neuroimaging methods. “If people want to try to repeat any part of the study or implement any kind of approach like this, they have to avoid falling into these pits,” Jasanoff says. He adds that they admire the authors of the original study for their ambition: “The community needs scientists who are willing to take risks to move the field ahead.”
A new study of people who speak many languages has found that there is something special about how the brain processes their native language.
In the brains of these polyglots — people who speak five or more languages — the same language regions light up when they listen to any of the languages that they speak. In general, this network responds more strongly to languages in which the speaker is more proficient, with one notable exception: the speaker’s native language. When listening to one’s native language, language network activity drops off significantly.
The findings suggest there is something unique about the first language one acquires, which allows the brain to process it with minimal effort, the researchers say.
“Something makes it a little bit easier to process — maybe it’s that you’ve spent more time using that language — and you get a dip in activity for the native language compared to other languages that you speak proficiently,” says Evelina Fedorenko, an associate professor of neuroscience at MIT, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.
Saima Malik-Moraleda, a graduate student in the Speech and Hearing Bioscience and Technology Program at Harvard University, and Olessia Jouravlev, a former MIT postdoc who is now an associate professor at Carleton University, are the lead authors of the paper, which appears today in the journal Cerebral Cortex.
Many languages, one network
The brain’s language processing network, located primarily in the left hemisphere, includes regions in the frontal and temporal lobes. In a 2021 study, Fedorenko’s lab found that when polyglots listened to their native language, their language network was less active than that of people who speak only one language listening to theirs.
In the new study, the researchers wanted to expand on that finding and explore what happens in the brains of polyglots as they listen to languages in which they have varying levels of proficiency. Studying polyglots can help researchers learn more about the functions of the language network, and how languages learned later in life might be represented differently than a native language or languages.
“With polyglots, you can do all of the comparisons within one person. You have languages that vary along a continuum, and you can try to see how the brain modulates responses as a function of proficiency,” Fedorenko says.
For the study, the researchers recruited 34 polyglots, each of whom had at least some degree of proficiency in five or more languages but were not bilingual or multilingual from infancy. Sixteen of the participants spoke 10 or more languages, including one who spoke 54 languages with at least some proficiency.
Each participant was scanned with functional magnetic resonance imaging (fMRI) as they listened to passages read in eight different languages. These included their native language, a language they were highly proficient in, a language they were moderately proficient in, and a language in which they described themselves as having low proficiency.
They were also scanned while listening to four languages they didn’t speak at all. Two of these were languages from the same family (such as Romance languages) as a language they could speak, and two were languages completely unrelated to any languages they spoke.
The passages used for the study came from two different sources, which the researchers had previously developed for other language studies. One was a set of Bible stories recorded in many different languages, and the other consisted of passages from “Alice in Wonderland” translated into many languages.
Brain scans revealed that the language network lit up the most when participants listened to languages in which they were the most proficient. However, that did not hold true for the participants’ native languages, which activated the language network much less than non-native languages in which they had similar proficiency. This suggests that people are so proficient in their native language that the language network doesn’t need to work very hard to interpret it.
“As you increase proficiency, you can engage linguistic computations to a greater extent, so you get these progressively stronger responses. But then if you compare a really high-proficiency language and a native language, it may be that the native language is just a little bit easier, possibly because you’ve had more experience with it,” Fedorenko says.
Brain engagement
The researchers saw a similar phenomenon when polyglots listened to languages that they don’t speak: Their language network was more engaged by languages related to a language they could understand than by completely unfamiliar languages.
“Here we’re getting a hint that the response in the language network scales up with how much you understand from the input,” Malik-Moraleda says. “We didn’t quantify the level of understanding here, but in the future we’re planning to evaluate how much people are truly understanding the passages that they’re listening to, and then see how that relates to the activation.”
The researchers also found that a brain network known as the multiple demand network, which turns on whenever the brain is performing a cognitively demanding task, also becomes activated when listening to languages other than one’s native language.
“What we’re seeing here is that the language regions are engaged when we process all these languages, and then there’s this other network that comes in for non-native languages to help you out because it’s a harder task,” Malik-Moraleda says.
In this study, most of the polyglots began studying their non-native languages as teenagers or adults, but in future work, the researchers hope to study people who learned multiple languages from a very young age. They also plan to study people who learned one language from infancy but moved to the United States at a very young age and began speaking English as their dominant language, while becoming less proficient in their native language, to help disentangle the effects of proficiency versus age of acquisition on brain responses.
The research was funded by the McGovern Institute for Brain Research, MIT’s Department of Brain and Cognitive Sciences, and the Simons Center for the Social Brain.
Using a novel microscopy technique, MIT and Brigham and Women’s Hospital/Harvard Medical School researchers have imaged human brain tissue in greater detail than ever before, revealing cells and structures that were not previously visible.
Among their findings, the researchers discovered that some “low-grade” brain tumors contain more putative aggressive tumor cells than expected, suggesting that some of these tumors may be more aggressive than previously thought.
The researchers hope that this technique could eventually be deployed to diagnose tumors, generate more accurate prognoses, and help doctors choose treatments.
“We’re starting to see how important the interactions of neurons and synapses with the surrounding brain are to the growth and progression of tumors. A lot of those things we really couldn’t see with conventional tools, but now we have a tool to look at those tissues at the nanoscale and try to understand these interactions,” says Pablo Valdes, a former MIT postdoc who is now an assistant professor of neuroscience at the University of Texas Medical Branch and the lead author of the study.
Edward Boyden, the Y. Eva Tan Professor in Neurotechnology at MIT; a professor of biological engineering, media arts and sciences, and brain and cognitive sciences; a Howard Hughes Medical Institute investigator; and a member of MIT’s McGovern Institute for Brain Research and Koch Institute for Integrative Cancer Research; and E. Antonio Chiocca, a professor of neurosurgery at Harvard Medical School and chair of neurosurgery at Brigham and Women’s Hospital, are the senior authors of the study, which appears today in Science Translational Medicine.
Making molecules visible
The new imaging method is based on expansion microscopy, a technique developed in Boyden’s lab in 2015 based on a simple premise: Instead of using powerful, expensive microscopes to obtain high-resolution images, the researchers devised a way to expand the tissue itself, allowing it to be imaged at very high resolution with a regular light microscope.
The technique works by embedding the tissue into a polymer that swells when water is added, and then softening up and breaking apart the proteins that normally hold tissue together. Then, adding water swells the polymer, pulling all the proteins apart from each other. This tissue enlargement allows researchers to obtain images with a resolution of around 70 nanometers, which was previously possible only with very specialized and expensive microscopes such as scanning electron microscopes.
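A back-of-the-envelope calculation shows how physical expansion translates into effective resolution. The numbers below are illustrative assumptions (a diffraction-limited resolution of roughly 300 nanometers and the roughly 4.5-fold linear expansion of the original protocol), not figures taken from the study.

```python
# Illustrative, assumed numbers: how expanding tissue improves the effective
# resolution achievable with a standard light microscope.
diffraction_limit_nm = 300   # approximate resolution of a conventional,
                             # diffraction-limited light microscope
linear_expansion = 4.5       # roughly the linear expansion factor of the
                             # original expansion microscopy protocol

effective_resolution_nm = diffraction_limit_nm / linear_expansion
print(f"Effective resolution ~ {effective_resolution_nm:.0f} nm")
# ~67 nm, consistent with the ~70 nm figure quoted above.
```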
In 2017, the Boyden lab developed a way to expand preserved human tissue specimens, but the chemical reagents that they used also destroyed the proteins that the researchers were interested in labeling. By labeling the proteins with fluorescent antibodies before expansion, the proteins’ location and identity could be visualized after the expansion process was complete. However, the antibodies typically used for this kind of labeling can’t easily squeeze through densely packed tissue before it’s expanded.
So, for this study, the authors devised a different tissue-softening protocol that breaks up the tissue but preserves proteins in the sample. After the tissue is expanded, proteins can be labeled with commercially available fluorescent antibodies. The researchers can then perform several rounds of imaging, with three or four different proteins labeled in each round. This labeling of proteins enables many more structures to be imaged, because once the tissue is expanded, antibodies can squeeze through and label proteins they couldn’t previously reach.
“We open up the space between the proteins so that we can get antibodies into crowded spaces that we couldn’t otherwise,” Valdes says. “We saw that we could expand the tissue, we could decrowd the proteins, and we could image many, many proteins in the same tissue by doing multiple rounds of staining.”
Working with MIT Assistant Professor Deblina Sarkar, the researchers demonstrated a form of this “decrowding” in 2022 using mouse tissue.
The new study resulted in a decrowding technique for use with human brain tissue samples that are used in clinical settings for pathological diagnosis and to guide treatment decisions. These samples can be more difficult to work with because they are usually embedded in paraffin and treated with other chemicals that need to be broken down before the tissue can be expanded.
In this study, the researchers labeled up to 16 different molecules per tissue sample. The molecules they targeted include markers for a variety of structures, including axons and synapses, as well as markers that identify cell types such as astrocytes and cells that form blood vessels. They also labeled molecules linked to tumor aggressiveness and neurodegeneration.
Using this approach, the researchers analyzed healthy brain tissue, along with samples from patients with two types of glioma — high-grade glioblastoma, which is the most aggressive primary brain tumor, with a poor prognosis, and low-grade gliomas, which are considered less aggressive.
“We wanted to look at brain tumors so that we can understand them better at the nanoscale level, and by doing that, to be able to develop better treatments and diagnoses in the future. At this point, it was more developing a tool to be able to understand them better, because currently in neuro-oncology, people haven’t done much in terms of super-resolution imaging,” Valdes says.
A diagnostic tool
To identify aggressive tumor cells in the gliomas they studied, the researchers labeled vimentin, a protein that is found in highly aggressive glioblastomas. To their surprise, they found many more vimentin-expressing tumor cells in low-grade gliomas than had been seen using any other method.
“This tells us something about the biology of these tumors, specifically, how some of them probably have a more aggressive nature than you would suspect by doing standard staining techniques,” Valdes says.
When glioma patients undergo surgery, tumor samples are preserved and analyzed using immunohistochemistry staining, which can reveal certain markers of aggressiveness, including some of the markers analyzed in this study.
“These are incurable brain cancers, and this type of discovery will allow us to figure out which cancer molecules to target so we can design better treatments. It also proves the profound impact of having clinicians like us at the Brigham and Women’s interacting with basic scientists such as Ed Boyden at MIT to discover new technologies that can improve patient lives,” Chiocca says.
The researchers hope their expansion microscopy technique could allow doctors to learn much more about patients’ tumors, helping them to determine how aggressive the tumor is and guiding treatment choices. Valdes now plans to do a larger study of tumor types to try to establish diagnostic guidelines based on the tumor traits that can be revealed using this technique.
“Our hope is that this is going to be a diagnostic tool to pick up marker cells, interactions, and so on, that we couldn’t before,” he says. “It’s a practical tool that will help the clinical world of neuro-oncology and neuropathology look at neurological diseases at the nanoscale like never before, because fundamentally it’s a very simple tool to use.”
Boyden’s lab also plans to use this technique to study other aspects of brain function, in healthy and diseased tissue.
“Being able to do nanoimaging is important because biology is about nanoscale things — genes, gene products, biomolecules — and they interact over nanoscale distances,” Boyden says. “We can study all sorts of nanoscale interactions, including synaptic changes, immune interactions, and changes that occur during cancer and aging.”
The research was funded by K. Lisa Yang, the Howard Hughes Medical Institute, John Doerr, Open Philanthropy, the Bill and Melinda Gates Foundation, the Koch Institute Frontier Research Program, the National Institutes of Health, and the Neurosurgery Research and Education Foundation.
MIT neuroscientists have found that the brain’s sensitivity to rewarding experiences — a critical factor in motivation and attention — can be shaped by socioeconomic conditions.
In a study of 12- to 14-year-olds whose socioeconomic status (SES) varied widely, the researchers found that children from lower SES backgrounds showed less sensitivity to reward than those from more affluent backgrounds.
Using functional magnetic resonance imaging (fMRI), the research team measured brain activity as the children played a guessing game in which they earned extra money for each correct guess. When participants from higher SES backgrounds guessed correctly, a part of the brain called the striatum, which is linked to reward, lit up much more than in children from lower SES backgrounds.
The brain imaging results also coincided with behavioral differences in how participants from lower and higher SES backgrounds responded to correct guesses. The findings suggest that lower SES circumstances may prompt the brain to adapt to the environment by dampening its response to rewards, which are often scarcer in low SES environments.
“If you’re in a highly resourced environment, with many rewards available, your brain gets tuned in a certain way. If you’re in an environment in which rewards are more scarce, then your brain accommodates the environment in which you live. Instead of being overresponsive to rewards, it seems like these brains, on average, are less responsive, because probably their environment has been less consistent in the availability of rewards,” says John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology, a professor of brain and cognitive sciences, and a member of MIT’s McGovern Institute for Brain Research.
Gabrieli and Rachel Romeo, a former MIT postdoc who is now an assistant professor in the Department of Human Development and Quantitative Methodology at the University of Maryland, are the senior authors of the study. MIT postdoc Alexandra Decker is the lead author of the paper, which appears today in the Journal of Neuroscience.
Reward response
Previous research has shown that children from lower SES backgrounds tend to perform worse on tests of attention and memory, and they are more likely to experience depression and anxiety. However, until now, few studies have looked at the possible association between SES and reward sensitivity.
In the new study, the researchers focused on a part of the brain called the striatum, which plays a significant role in reward response and decision-making. Studies in people and animal models have shown that this region becomes highly active during rewarding experiences.
To investigate potential links between reward sensitivity, the striatum, and socioeconomic status, the researchers recruited more than 100 adolescents from a range of SES backgrounds, as measured by household income and how much education their parents received.
Each of the participants underwent fMRI scanning while they played a guessing game. The participants were shown a series of numbers between 1 and 9, and before each trial, they were asked to guess whether the next number would be greater than or less than 5. They were told that for each correct guess, they would earn an extra dollar, and for each incorrect guess, they would lose 50 cents.
Unbeknownst to the participants, the game was set up to control whether the guess would be correct or incorrect. This allowed the researchers to ensure that each participant had a similar experience, which included periods of abundant rewards or few rewards. In the end, everyone ended up winning the same amount of money (in addition to a stipend that each participant received for participating in the study).
Previous work has shown that the brain appears to track the rate of rewards available. When rewards are abundant, people or animals tend to respond more quickly because they don’t want to miss out on the many available rewards. The researchers saw that in this study as well: When participants were in a period when most of their responses were correct, they tended to respond more quickly.
“If your brain is telling you there’s a really high chance that you’re going to receive a reward in this environment, it’s going to motivate you to collect rewards, because if you don’t act, you’re missing out on a lot of rewards,” Decker says.
Brain scans showed that the degree of activation in the striatum appeared to track fluctuations in the rate of rewards across time, which the researchers think could act as a motivational signal that there are many rewards to collect. The striatum lit up more during periods in which rewards were abundant and less during periods in which rewards were scarce. However, this effect was less pronounced in the children from lower SES backgrounds, suggesting their brains were less attuned to fluctuations in the rate of reward over time.
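One simple way to think about a running reward-rate signal is as an exponential moving average of recent outcomes. The sketch below is an illustration of that idea only, with made-up numbers; it is not the analysis used in the study.

```python
# Illustrative sketch (not the study's analysis) of tracking a running reward
# rate. An exponential moving average of recent outcomes serves as the
# "reward rate" signal; response speed is then modeled as increasing when
# that estimated rate is high. All numbers are made up.
import numpy as np

rng = np.random.default_rng(2)

# Rigged outcome schedule like the task described above: a block rich in
# "correct" feedback followed by a block where rewards are scarce.
outcomes = np.concatenate([rng.random(50) < 0.8,    # abundant-reward block
                           rng.random(50) < 0.2]    # scarce-reward block
                          ).astype(float)

alpha = 0.1          # learning rate of the running estimate
reward_rate = 0.5    # initial estimate
rates, speeds = [], []
for won in outcomes:
    reward_rate += alpha * (won - reward_rate)   # exponential moving average
    rates.append(reward_rate)
    speeds.append(1.0 + reward_rate)             # toy mapping: higher rate -> faster responding

print("mean estimated reward rate, abundant block:", round(np.mean(rates[:50]), 2))
print("mean estimated reward rate, scarce block  :", round(np.mean(rates[50:]), 2))
```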
The researchers also found that during periods of scarce rewards, participants tended to take longer to respond after a correct guess, another phenomenon that has been shown before. It’s unknown exactly why this happens, but two possible explanations are that people are savoring their reward or that they are pausing to update the reward rate. However, once again, this effect was less pronounced in the children from lower SES backgrounds — that is, they did not pause as long after a correct guess during the scarce-reward periods.
“There was a reduced response to reward, which is really striking. It may be that if you’re from a lower SES environment, you’re not as hopeful that the next response will gain similar benefits, because you may have a less reliable environment for earning rewards,” Gabrieli says. “It just points out the power of the environment. In these adolescents, it’s shaping their psychological and brain response to reward opportunity.”
Environmental effects
The fMRI scans performed during the study also revealed that children from lower SES backgrounds showed less activation in the striatum when they guessed correctly, suggesting that their brains have a dampened response to reward.
The researchers hypothesize that these differences in reward sensitivity may have developed over time, in response to the children’s environments.
“Socioeconomic status is associated with the degree to which you experience rewards over the course of your lifetime,” Decker says. “So, it’s possible that receiving a lot of rewards perhaps reinforces behaviors that make you receive more rewards, and somehow this tunes the brain to be more responsive to rewards. Whereas if you are in an environment where you receive fewer rewards, your brain might become, over time, less attuned to them.”
The study also points out the value of recruiting study subjects from a range of SES backgrounds, which takes more effort but yields important results, the researchers say.
“Historically, many studies have involved the easiest people to recruit, who tend to be people who come from advantaged environments. If we don’t make efforts to recruit diverse pools of participants, we almost always end up with children and adults who come from high-income, high-education environments,” Gabrieli says. “Until recently, we did not realize that principles of brain development vary in relation to the environment in which one grows up, and there was very little evidence about the influence of SES.”
The research was funded by the William and Flora Hewlett Foundation and a Natural Sciences and Engineering Research Council of Canada Postdoctoral Fellowship.
Psychiatrists and pediatricians have sounded an alarm. The mental health of youth in the United States is worsening. Youth visits to emergency departments related to depression, anxiety, and behavioral challenges have been on the rise for years. Suicide rates among young people have escalated, too. Researchers have tracked these trends for more than a decade, and the Covid-19 pandemic only exacerbated the situation.
“It’s all over the news, how shockingly common mental health difficulties are,” says John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology at MIT and an investigator at the McGovern Institute. “It’s worsening by every measure.”
Experts worry that our mental health systems are inadequate to meet the growing need. “This has gone from bad to catastrophic, from my perspective,” says Susan Whitfield-Gabrieli, a professor of psychology at Northeastern University and a research affiliate at the McGovern Institute. “We really need to come up with novel interventions that target the neural mechanisms that we believe potentiate depression and anxiety.”
Training the brain
One approach may be to help young people learn to modulate some of the relevant brain circuitry themselves. Evidence is accumulating that practicing mindfulness — focusing awareness on the present, typically through meditation — can change patterns of brain activity associated with emotions and mental health.
“There’s been a steady flow of moderate-size studies showing that when you help people gain mindfulness through training programs, you get all kinds of benefits in terms of people feeling less stress, less anxiety, fewer negative emotions, and sometimes more positive ones as well,” says Gabrieli, who is also a professor of brain and cognitive sciences at MIT. “Those are the things you wish for people.”
“If there were a medicine with as much evidence of its effectiveness as mindfulness, it would be flying off the shelves of every pharmacy.” – John Gabrieli
Researchers have even begun testing mindfulness-based interventions head-to-head against standard treatments for psychiatric disorders. The results of recent studies involving hundreds of adults with anxiety disorders or depression are encouraging. “It’s just as good as the best medicines and the best behavioral treatments that we know a ton about,” Gabrieli says.
Much mindfulness research has focused on adults, but promising data about the benefits of mindfulness training for children and adolescents is emerging as well. In studies supported by the McGovern Institute’s Poitras Center for Psychiatric Disorders Research in 2019 and 2020, Gabrieli and Whitfield-Gabrieli found that sixth-graders in a Boston middle school who participated in eight weeks of mindfulness training experienced reductions in feelings of stress and increases in sustained attention. More recently, Gabrieli and Whitfield-Gabrieli’s teams have shown how new tools can support mindfulness training and make it accessible to more children and their families — from a smartphone app that can be used anywhere to real-time neurofeedback inside an MRI scanner.
Mindfulness and mental health
Mindfulness is not just a practice; it is also a trait — an open, non-judgmental way of attending to experiences that some people exhibit more than others. By assessing individuals’ mindfulness with questionnaires that ask about attention and awareness, researchers have found that the trait is associated with many measures of mental health. Gabrieli and his team measured mindfulness in children between the ages of eight and ten and found it was highest in those who were most emotionally resilient to the stress they experienced during the Covid-19 pandemic. As the team reported this year in the journal PLOS ONE, children who were more mindful rated the impact of the pandemic on their own lives lower than other participants in the study. They also reported lower levels of stress, anxiety, and depression.
Mindfulness doesn’t come naturally to everyone, but brains are malleable, and both children and adults can cultivate mindfulness with training and practice. In their studies of middle schoolers, Gabrieli and Whitfield-Gabrieli showed that the emotional effects of mindfulness training corresponded to measurable changes in the brain: functional MRI scans revealed changes in regions involved in stress, negative feelings, and focused attention.
Whitfield-Gabrieli says if mindfulness training makes kids more resilient, it could be a valuable tool for managing symptoms of anxiety and depression before they become severe. “I think it should be part of the standard school day,” she says. “I think we would have a much happier, healthier society if we could be doing this from the ground up.”
Data from Gabrieli’s lab suggests broadly implementing mindfulness training might even pay off in terms of academic achievement. His team found in a 2019 study that middle school students who reported greater levels of mindfulness had, on average, better grades, better scores on standardized tests, fewer absences, and fewer school suspensions than their peers.
Some schools have begun making mindfulness programs available to their students. But those programs don’t reach everyone, and their type and quality vary tremendously. Indeed, not every study of school-based mindfulness training has found significant benefits for participants, which may be because not every approach to mindfulness training is equally effective.
“This is where I think the science matters,” Gabrieli says. “You have to find out what kinds of supports really work and you have to execute them reasonably.”
A recent report from Gabrieli’s lab offers encouraging news: mindfulness training doesn’t have to be in person. Gabrieli and his team found that children can benefit from practicing mindfulness at home with the help of an app.
When the pandemic closed schools in 2020, school-based mindfulness programs came to an abrupt halt. Soon thereafter, a group called Inner Explorer developed a smartphone app that could teach children mindfulness at home. Gabrieli and his team were eager to find out whether this easy-access tool could effectively support children’s emotional well-being.
In October of this year, they reported in the journal Mindfulness that after 40 days of app use, children between the ages of eight and ten reported less stress than they had before beginning mindfulness training. Parents reported that their children were also experiencing fewer negative emotions, such as loneliness and fear.
The outcomes suggest a path toward making evidence-based mindfulness training for children broadly accessible. “Tons of people could do this,” says Gabrieli. “It’s super scalable. It doesn’t cost money; you don’t have to go somewhere. We’re very excited about that.”
Visualizing healthy minds
Mindfulness training may be even more effective when practitioners can visualize what’s happening in their brains. In Whitfield-Gabrieli’s lab, teenagers have had a chance to slide inside an MRI scanner and watch their brain activity shift in real time as they practiced mindfulness meditation. The visualization they see focuses on the brain’s default mode network (DMN), which is most active when attention is not focused on a particular task. Certain patterns of activity in the DMN have been linked to depression, anxiety, and other psychiatric conditions, and mindfulness training may help break these patterns.
Whitfield-Gabrieli explains that when the mind is free to wander, two hubs of the DMN become active. “Typically, that means we’re engaged in some kind of mental time travel,” she says. That might mean reminiscing about the past or planning for the future, but it can become distressing when it turns into obsessive rumination or worry. In people with anxiety, depression, and psychosis, these network hubs are often hyperconnected.
“It’s almost as if they’re hijacked,” Whitfield-Gabrieli says. “The more they’re correlated, the more psychopathology one might be experiencing. We wanted to unlock that hyperconnectivity for kids who are suffering from depression and anxiety.” She hoped that by replacing thoughts of the past and the future with focus on the present, mindfulness meditation would rein in overactive DMNs, and she wanted a way to encourage kids to do exactly that.
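The hyperconnectivity Whitfield-Gabrieli describes is typically quantified as the correlation between the BOLD signal time courses of the two hubs: the more tightly their activity rises and falls together, the higher the connectivity. The sketch below illustrates that calculation; the hub names (medial prefrontal and posterior cingulate cortex) and the simulated signals are illustrative assumptions, not the lab’s actual data or pipeline.

```python
import numpy as np

def functional_connectivity(roi_a: np.ndarray, roi_b: np.ndarray) -> float:
    """Pearson correlation between two ROI time series (one value per fMRI volume)."""
    return float(np.corrcoef(roi_a, roi_b)[0, 1])

# Simulated, preprocessed BOLD signals for two DMN hubs (hypothetical data):
# medial prefrontal cortex (mpfc) and posterior cingulate cortex (pcc).
rng = np.random.default_rng(seed=0)
shared = rng.standard_normal(200)               # shared fluctuation couples the hubs
mpfc = shared + 0.5 * rng.standard_normal(200)
pcc = shared + 0.5 * rng.standard_normal(200)

print(f"DMN hub connectivity: r = {functional_connectivity(mpfc, pcc):.2f}")
# A higher r means the hubs are more tightly coupled -- the "hyperconnectivity"
# described above.
```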
The neurofeedback tool that she and her colleagues created focuses on the DMN as well as separate brain region that is called on during attention-demanding tasks. Activity in those regions is monitored with functional MRI and displayed to users in a game-like visualization. Inside the scanner, participants see how that activity changes as they focus on a meditation or when their mind wanders. As their mind becomes more focused on the present moment, changes in brain activity move a ball toward a target.
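The team’s software is not described here, but the basic feedback logic can be sketched: at each measurement, compare how engaged the DMN is relative to the attention-related region, and nudge the ball toward the target when attention wins out. Everything in the sketch below (function names, step size, the simplified signals) is a hypothetical illustration rather than the team’s actual tool.

```python
import numpy as np

def update_ball_position(ball: float, dmn: float, attention: float,
                         step: float = 0.05) -> float:
    """Move the ball toward the target (position 1.0) when the attention-related
    signal exceeds DMN activity, and away from it when the mind wanders.
    Signals are assumed to be z-scored estimates from real-time fMRI."""
    direction = np.sign(attention - dmn)
    return float(np.clip(ball + direction * step, 0.0, 1.0))

# Hypothetical feedback loop over a few fMRI volumes.
ball = 0.5
for dmn, attention in [(0.8, 0.2), (0.4, 0.6), (0.1, 0.9), (0.7, 0.3)]:
    ball = update_ball_position(ball, dmn, attention)
    print(f"ball position: {ball:.2f}")
```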
Whitfield-Gabrieli says the real-time feedback was motivating for the adolescents who participated in a recent study, all of whom had histories of anxiety or depression. “They’re training their brain to tune their mind, and they love it,” she says.
In March, she and her team reported in Molecular Psychiatry that the neurofeedback tool helped those study participants reduce connectivity in the DMN and engage a more desirable brain state. It’s not the first success the team has had with the approach. Previously, they found that the decreases in DMN connectivity brought about by mindfulness meditation with neurofeedback were associated with reduced hallucinations in patients with schizophrenia. Testing the clinical benefits of the approach in teens is on the horizon; Whitfield-Gabrieli and her collaborators plan to investigate how mindfulness meditation with real-time neurofeedback affects depression symptoms in an upcoming clinical trial.
Whitfield-Gabrieli emphasizes that the neurofeedback is a training tool, helping users improve mindfulness techniques they can later call on anytime, anywhere. While that training currently requires time inside an MRI scanner, she says it may be possible to create an EEG-based version of the approach, which could be deployed in doctors’ offices and other more accessible settings.
Both Gabrieli and Whitfield-Gabrieli continue to explore how mindfulness training impacts different aspects of mental health, in both children and adults and across a range of psychiatric conditions. Whitfield-Gabrieli expects it will be one powerful tool for combating a youth mental health crisis for which there will be no single solution. “I think it’s going to take a village,” she says. “We are all going to have to work together, and we’ll have to come up with some really innovative ways to help.”