Mehrdad Jazayeri selected as an HHMI investigator

The Howard Hughes Medical Institute (HHMI) has named McGovern Institute neuroscientist Mehrdad Jazayeri as one of 26 new HHMI investigators—a group of visionary scientists whom HHMI will support with more than $300 million over the next seven years.

Support from HHMI is intended to give its investigators, who work at institutions across the United States, the time and resources they need to push the boundaries of the biological sciences. Jazayeri, whose work integrates neurobiology with cognitive science and machine learning, plans to use that support to explore how the brain enables rapid learning and flexible behavior—central aspects of intelligence that have been difficult to study using traditional neuroscience approaches.

Jazayeri says he is delighted and honored by the news. “This is a recognition of my lab’s past accomplishments and the promise of the exciting research we want to embark on,” he says. “I am looking forward to engaging with this wonderful community and making new friends and colleagues while we elevate our science to the next level.”

An unexpected path

Jazayeri, who has been an investigator at the McGovern Institute since 2013, has already made a series of groundbreaking discoveries about how physiological processes in the brain give rise to the abilities of the mind. “That’s what we do really well,” he says. “We expose the computational link between abstract mental concepts, like belief, and electrical signals in the brain.”

Jazayeri’s expertise and enthusiasm for this work grew out of a curiosity that was sparked unexpectedly several years after he’d abandoned university education. He’d pursued undergraduate studies in electrical engineering, a path with good job prospects in Iran, where he lived. But the program, at Sharif University of Technology in Tehran, left him disenchanted. “It was an uninspiring experience,” he says. “It’s a top university and I went there excited, but I lost interest as I couldn’t think of a personally meaningful application for my engineering skills. So, after my undergrad, I started a string of random jobs, perhaps to search for my passion.”

A few years later, Jazayeri was trying something new, happily living and working at a banana farm near the Caspian Sea. The farm schedule allowed for leisure in the evenings, which he took advantage of by delving into boxes full of books that an uncle regularly sent him from London. The books were an unpredictable, eclectic mix. Jazayeri read them all—and it was those that talked about the brain that most captured his imagination.

Until then, he had never had much interest in biology. But when he read about neurological disorders and how scientists were studying the brain, he was captivated. The subject seemed to merge his inherent interest in philosophy with an analytical approach that he also loved. “These books made me think that you actually can understand this system at a more concrete level…you can put electrodes in the brain and listen to what neurons say,” he says. “It had never even occurred to me to think about those things.”

He wanted to know more. It took time to find a graduate program in neuroscience that would accept a student with his unconventional background, but eventually the University of Toronto accepted him into a master’s program after he crammed for and passed an undergraduate exam testing his knowledge of physiology. From there, he went on to earn a PhD in neuroscience from New York University studying visual perception, followed by a postdoctoral fellowship at the University of Washington where he studied time perception.

In 2013, Jazayeri joined MIT’s Department of Brain and Cognitive Sciences. At MIT, conversations with new colleagues quickly enriched the way he thought about the brain. “It is fascinating to listen to cognitive scientists’ ideas about the mind,” he says. “They have a rich and deep understanding of the mind but the language they use to describe the mind is not the language of the brain. Bridging this gap in language between neuroscience and cognitive science is at the core of research in my lab.”

His lab’s general approach has been to collect data on neural activity from humans and animals as they perform tasks that call on specific aspects of the mind. “We design tasks that are as simple as possible but get at the crux of the problems in cognitive science,” he explains. “Then we build models that help us connect abstract concepts and theories in cognitive science to signals and dynamics of neural activity in the brain.”

It’s an interdisciplinary approach that even calls on many of the engineering approaches that had failed to inspire him as a student. Students and postdocs in the lab bring a diverse set of knowledge and skills, and together the team has made significant contributions to neuroscience, cognitive science, and computational science.

With animals trained to reproduce a rhythm, they’ve shown how neurons adjust the speed of their signals to predict when something will occur, and what happens when the actual timing of a stimulus deviates from the brain’s expectations.

Studies of time interval predictions have also helped the team learn how the brain weighs different pieces of information as it assesses situations and makes decisions. This process, called Bayesian integration, shapes our beliefs and our confidence in those beliefs. “These are really fundamental concepts in cognitive sciences, and we can now say how neurons exactly do that,” he says.
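The arithmetic behind Bayesian integration is compact enough to sketch in code. The snippet below is purely illustrative (not the lab’s actual model): it combines a Gaussian prior over time intervals with a noisy measurement, and the resulting estimate is a precision-weighted average that is pulled toward the prior mean most strongly when the measurement is least reliable.

```python
def bayesian_interval_estimate(measured_ms, prior_mean_ms, prior_sd_ms, noise_sd_ms):
    """Combine a noisy interval measurement with a Gaussian prior.

    With a Gaussian prior and Gaussian measurement noise, the posterior is
    also Gaussian, and its mean is a precision-weighted average of the
    prior mean and the measurement (precision = 1 / variance).
    """
    prior_precision = 1.0 / prior_sd_ms ** 2
    measurement_precision = 1.0 / noise_sd_ms ** 2
    posterior_mean = (
        prior_precision * prior_mean_ms + measurement_precision * measured_ms
    ) / (prior_precision + measurement_precision)
    posterior_sd = (prior_precision + measurement_precision) ** -0.5
    return posterior_mean, posterior_sd

# A noisy 850 ms measurement is pulled toward the 750 ms prior mean;
# the noisier the measurement, the stronger the pull.
print(bayesian_interval_estimate(850.0, 750.0, 60.0, 80.0))
```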

More recently, by teaching animals to navigate a virtual environment, Jazayeri’s team has found activity in the brain that appears to call up a cognitive map of a space even when its features are not visible. The discovery helps reveal how the brain builds internal models and uses them to interact with the world.

A new paradigm

Jazayeri is proud of these achievements. But he knows that when it comes to understanding the power and complexity of cognition, something is missing.

“Two really important hallmarks of cognition are the ability to learn rapidly and generalize flexibly. If somebody can do that, we say they’re intelligent,” he says. It’s an ability we have from an early age. “If you bring a kid a bunch of toys, they don’t need several years of training, they just can play with the toys right away in very creative ways,” he says. In the wild, many animals are similarly adept at problem solving and finding uses for new tools. But when animals are trained for many months on a single task, as typically happens in a lab, they don’t behave as intelligently. “They become like an expert that does one thing well, but they’re no longer very flexible,” he says.

Figuring out how the brain adapts and acts flexibly in real-world situations is going to require a new approach. “What we have done is that we come up with a task, and then change the animal’s brain through learning to match our task,” he says. “What we now want to do is to add a new paradigm to our work, one in which we will devise the task such that it would match the animal’s brain.”

As an HHMI investigator, Jazayeri plans to take advantage of a host of new technologies to study the brain’s involvement in ecologically relevant behaviors. That means moving beyond the virtual scenarios and digital platforms that have been so widespread in neuroscience labs, including his own, and instead letting animals interact with real objects and environments. “The animal will use its eyes and hands to engage with physical objects in the real world,” he says.

To analyze and learn about animals’ behavior, the team plans detailed tracking of hand and eye movements, and even measurements of sensations that are felt through the hands as animals explore objects and work through problems. These activities are expected to engage the entire brain, so the team will broadly record and analyze neural activity.

Designing meaningful experiments and making sense of the data will be a deeply interdisciplinary endeavor, and Jazayeri knows working with a collaborative community of scientists will be essential. He’s looking forward to sharing the enormous amount of relevant data his lab expects to collect with the research community and getting others involved. Likewise, as a dedicated mentor, he is committed to training scientists who will continue and expand the work in the future.

He is enthusiastic about the opportunity to move into these bigger questions about cognition and intelligence, and support from HHMI comes at an opportune moment. “I think we have now built the infrastructure and conceptual frameworks to think about these problems, and technology for recording and tracking animals has developed a great deal, so we can now do more naturalistic experiments,” he says.

His passion for his work is one of many passions in his life. His love for family, friends, and art are just as deep, and making space to experience everything is a lifelong struggle. But he knows his zeal is infectious. “I think my love for science is probably one of the best motivators of people around me,” he says.

A new strategy to cope with emotional stress

Some people, especially those in public service, perform admirable feats—healthcare workers fighting to keep patients alive or first responders arriving at the scene of a car crash. But the emotional weight can become a mental burden. Research has shown that emergency personnel are at elevated risk for mental health challenges like post-traumatic stress disorder. How can people undergo such stressful experiences and also maintain their well-being?

A new study from the McGovern Institute reveals that a cognitive strategy focused on social good may be effective in helping people cope with distressing events. The research team found that the approach was comparable to another well-established emotion regulation strategy, unlocking a new tool for dealing with highly adverse situations.

“How you think can improve how you feel.”
– John Gabrieli

“This research suggests that the social good approach might be particularly useful in improving well-being for those constantly exposed to emotionally taxing events,” says John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology and a professor of brain and cognitive sciences at MIT, who is a senior author of the paper.

The study, published today in PLOS ONE, is the first to examine the efficacy of this cognitive strategy. Nancy Tsai, a postdoctoral research scientist in Gabrieli’s lab at the McGovern Institute, is the lead author of the paper.

Emotion regulation tools

Emotion regulation is the ability to mentally reframe how we experience emotions—a skill critical to maintaining good mental health. Doing so can make one feel better when dealing with adverse events, and emotion regulation has been shown to boost emotional, social, cognitive, and physiological outcomes across the lifespan.

MIT postdoctoral researcher Nancy Tsai. Photo: Steph Stevens

One emotion regulation strategy is “distancing,” where a person copes with a negative event by imagining it as happening far away, a long time ago, or from a third-person perspective. Distancing has been well-documented as a useful cognitive tool, but it may be less effective in certain situations, especially ones that are socially charged—like a firefighter rescuing a family from a burning home. Rather than distancing themselves, a person may instead be forced to engage directly with the situation.

“In these cases, the ‘social good’ approach may be a powerful alternative,” says Tsai. “When a person uses the social good method, they view a negative situation as an opportunity to help others or prevent further harm.” For example, a firefighter experiencing emotional distress might focus on the fact that their work enables them to save lives. The idea had yet to be backed by scientific investigation, so Tsai and her team, alongside Gabrieli, saw an opportunity to rigorously probe this strategy.

A novel study

The MIT researchers recruited a cohort of adults and had them complete a questionnaire to gather information including demographics, personality traits, and current well-being, as well as how they regulated their emotions and dealt with stress. The cohort was randomly split into two groups: a distancing group and a social good group. In the online study, each group was shown a series of images that were either neutral (such as fruit) or contained highly aversive content (such as bodily injury). Participants were fully informed of the types of images they might see and could opt out of the study at any time.

Each group was asked to use their assigned cognitive strategy to respond to half of the negative images. For example, while looking at a distressing image, a person in the distancing group could have imagined that it was a screenshot from a movie. Conversely, a subject in the social good group might have responded to the image by envisioning that they were a first responder saving people from harm. For the other half of the negative images, participants were asked to only look at them and pay close attention to their emotions. The researchers asked the participants how they felt after each image was shown.

Social good as a potent strategy

The MIT team found that both the distancing and social good approaches helped diminish negative emotions. Participants reported feeling better when they used these strategies after viewing adverse content compared to when they did not, and stated that both strategies were easy to implement.

The results also revealed that, overall, distancing yielded a stronger effect. Importantly, however, Tsai and Gabrieli believe that this study offers compelling evidence for social good as a powerful method better suited to situations when people cannot distance themselves, like rescuing someone from a car crash, “which is more probable for people in the real world,” notes Tsai. Moreover, the team discovered that people who most successfully used the social good approach were more likely to view stress as enhancing rather than debilitating. Tsai says this link may point to psychological mechanisms that underlie both emotion regulation and how people respond to stress.

“The social good approach may be a potent strategy to combat the immense emotional demands of certain professions.”
– John Gabrieli

Additionally, the results showed that older adults used the cognitive strategies more effectively than younger adults. The team suspects that this is because, as prior research has shown, older adults are more adept at regulating their emotions, likely owing to greater life experience. The authors note that successful emotion regulation also requires cognitive flexibility, or having a malleable mindset to adapt well to different situations.

“This is not to say that people, such as physicians, should reframe their emotions to the point where they fully detach themselves from negative situations,” says Gabrieli. “But our study shows that the social good approach may be a potent strategy to combat the immense emotional demands of certain professions.”

The MIT team says that future studies are needed to further validate this work, and that such research holds promise for uncovering new cognitive tools that equip individuals to take care of themselves as they bravely assume the challenge of taking care of others.

What is language for?

Language is a defining feature of humanity, and for centuries, philosophers and scientists have contemplated its true purpose. We use language to share information and exchange ideas—but is it more than that? Do we use language not just to communicate, but to think?

McGovern Investigator Ev Fedorenko in the Martinos Imaging Center at MIT. Photo: Caitlin Cunningham

In the June 19, 2024, issue of the journal Nature, McGovern Institute neuroscientist Evelina Fedorenko and colleagues argue that we do not. Language, they say, is primarily a tool for communication.

Fedorenko acknowledges that there is an intuitive link between language and thought. Many people experience an inner voice that seems to narrate their own thoughts. And it’s not unreasonable to expect that well-spoken, articulate individuals are also clear thinkers. But as compelling as these associations can be, they are not evidence that we actually use language to think.

“I think there are a few strands of intuition and confusions that have led people to believe very strongly that language is the medium of thought,” she says. “But when they are pulled apart thread by thread, they don’t really hold up to empirical scrutiny.”

Separating language and thought

For centuries, language’s potential role in facilitating thinking was nearly impossible to evaluate scientifically. But neuroscientists and cognitive scientists now have tools that enable a more rigorous consideration of the idea. Evidence from both fields, which Fedorenko, MIT cognitive scientist and linguist Edward Gibson, and University of California Berkeley cognitive scientist Steven Piantadosi review in their Nature Perspective, supports the idea that language is a tool for communication, not for thought.

“What we’ve learned by using methods that actually tell us about the engagement of the linguistic processing mechanisms is that those mechanisms are not really engaged when we think,” Fedorenko says. Also, she adds, “you can take those mechanisms away, and it seems that thinking can go on just fine.”

Over the past 20 years, Fedorenko and other neuroscientists have advanced our understanding of what happens in the brain as it generates and understands language. Now, using functional MRI to find parts of the brain that are specifically engaged when someone reads or listens to sentences or passages, they can reliably identify an individual’s language-processing network. Then they can monitor those brain regions while the person performs other tasks, from solving a sudoku puzzle to reasoning about other people’s beliefs.
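The logic of that localizer approach reduces to a per-voxel contrast, sketched below on synthetic data (a toy illustration, not the lab’s actual pipeline): find voxels that respond more to sentences than to a control condition, then ask whether those same voxels respond during a nonlinguistic task.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy responses: trials x voxels. Fifty "language" voxels prefer sentences.
sentences = rng.normal(1.0, 1.0, size=(40, 500))
sentences[:, :50] += 1.5
nonwords = rng.normal(1.0, 1.0, size=(40, 500))   # linguistic control condition
sudoku = rng.normal(1.0, 1.0, size=(40, 500))     # nonlinguistic thinking task

# Localizer contrast: voxels reliably more active for sentences than nonwords.
t, p = stats.ttest_ind(sentences, nonwords, axis=0)
language_voxels = (t > 0) & (p < 0.01)

# Test phase: does the language network respond during the nonlinguistic task?
diff = sudoku[:, language_voxels].mean(axis=1) - nonwords[:, language_voxels].mean(axis=1)
t2, p2 = stats.ttest_1samp(diff, 0.0)
print(f"{language_voxels.sum()} language voxels; sudoku vs. control: t={t2:.2f}, p={p2:.3f}")
```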

“Your language system is basically silent when you do all sorts of thinking.” – Ev Fedorenko

“Pretty much everything we’ve tested so far, we don’t see any evidence of the engagement of the language mechanisms,” Fedorenko says. “Your language system is basically silent when you do all sorts of thinking.”

That’s consistent with observations from people who have lost the ability to process language due to an injury or stroke. Severely affected patients can be completely unable to process words, yet this does not interfere with their ability to solve math problems, play chess, or plan for future events. “They can do all the things that they could do before their injury. They just can’t take those mental representations and convert them into a format which would allow them to talk about them with others,” Fedorenko says. “If language gives us the core representations that we use for reasoning, then…destroying the language system should lead to problems in thinking as well, and it really doesn’t.”

Conversely, intellectual impairments are not always associated with language impairment; people with intellectual disability disorders or neuropsychiatric disorders that limit their ability to think and reason do not necessarily have problems with basic linguistic functions. Just as language does not appear to be necessary for thought, Fedorenko and colleagues conclude that it is also not sufficient to produce clear thinking.

Language optimization

In addition to arguing that language is unlikely to be used for thinking, the scientists considered its suitability as a communication tool, drawing on findings from linguistic analyses. Analyses across dozens of diverse languages, both spoken and signed, have found recurring features that make them easy to produce and understand. “It turns out that pretty much any property you look at, you can find evidence that languages are optimized in a way that makes information transfer as efficient as possible,” Fedorenko says.

That’s not a new idea, but it has held up as linguists analyze larger corpora across more diverse sets of languages, which has become possible in recent years as the field has assembled corpora that are annotated for various linguistic features. Such studies find that across languages, sounds and words tend to be pieced together in ways that minimize effort for the language producer without muddling the message. For example, commonly used words tend to be short, while words whose meanings depend on one another tend to cluster close together in sentences. Likewise, linguists have noted features that help languages convey meaning despite potential “signal distortions,” whether due to attention lapses or ambient noise.
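One of those regularities, the tendency of frequent words to be short (Zipf’s law of abbreviation), is easy to check on any text. The sketch below uses a toy word list purely for illustration; on real corpora, the frequency-length correlation comes out reliably negative.

```python
from collections import Counter

from scipy.stats import spearmanr

text = ("the quick brown fox jumps over the lazy dog and the dog sleeps "
        "while the fox runs through the extraordinarily complicated underbrush")

counts = Counter(text.split())
frequencies = [counts[word] for word in counts]
lengths = [len(word) for word in counts]

# Under the efficiency view, frequent words should be short: rho < 0.
rho, p = spearmanr(frequencies, lengths)
print(f"frequency-length correlation: rho = {rho:.2f}")
```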

“All of these features seem to suggest that the forms of languages are optimized to make communication easier,” Fedorenko says, pointing out that such features would be irrelevant if language were primarily a tool for internal thought.

“Given that languages have all these properties, it’s likely that we use language for communication,” she says. She and her coauthors conclude that as a powerful tool for transmitting knowledge, language reflects the sophistication of human cognition—but does not give rise to it.

Nancy Kanwisher Shares 2024 Kavli Prize in Neuroscience

The Norwegian Academy of Science and Letters today announced the 2024 Kavli Prize Laureates in the fields of astrophysics, nanoscience, and neuroscience. The 2024 Kavli Prize in Neuroscience honors Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience at MIT and an investigator at the McGovern Institute, along with UC Berkeley neurobiologist Doris Tsao and Rockefeller University neuroscientist Winrich Freiwald for their discovery of a highly localized and specialized system for representation of faces in human and non-human primate neocortex. The three neuroscience laureates will share the $1 million prize.

“Kanwisher, Freiwald, and Tsao together discovered a localized and specialized neocortical system for face recognition,” says Kristine Walhovd, Chair of the Kavli Neuroscience Committee. “Their outstanding research will ultimately further our understanding of recognition not only of faces, but objects and scenes.”

Overcoming failure

As a graduate student at MIT in the early days of functional brain imaging, Kanwisher was fascinated by the potential of the emerging technology to answer a suite of questions about the human mind. But a lack of brain imaging resources and a series of failed experiments led Kanwisher to consider leaving the field for good. She credits her advisor, MIT Professor of Psychology Molly Potter, for supporting her through this challenging time and for teaching her how to make powerful inferences about the inner workings of the mind from behavioral data alone.

After receiving her PhD from MIT, Kanwisher spent a year studying nuclear strategy with a MacArthur Foundation Fellowship in Peace and International Security, but eventually returned to science by accepting a faculty position at Harvard University, where she could use the latest brain imaging technology to pursue the scientific questions that had always fascinated her.

Zeroing in on faces

Recognizing faces is important for social interaction in many animals. Previous work in human psychology and animal research had suggested the existence of a functionally specialized system for face recognition, but this system had not clearly been identified with brain imaging technology. It is here that Kanwisher saw her opportunity.

Using functional magnetic resonance imaging (fMRI), then a new method, Kanwisher’s team scanned people while they looked at faces and while they looked at objects, and searched for brain regions that responded more to one than the other. They found a small patch of neocortex, now called the fusiform face area (FFA), that is dedicated specifically to the task of face recognition. She found individual differences in the location of this area and devised an analysis technique to effectively localize specialized functional regions in the brain. This technique is now widely used and applied to domains beyond the face recognition system. Notably, Kanwisher’s first FFA paper was co-authored with Josh McDermott, who was an undergrad at Harvard University at the time, and who is now an associate investigator at the McGovern Institute and holds a faculty position alongside Kanwisher in MIT’s Department of Brain and Cognitive Sciences.

The Kanwisher lab at Harvard University circa 1996. From left to right: Nancy Kanwisher, Josh McDermott (then an undergrad), Marvin Chun (postdoc), Ewa Wojciulik (postdoc), and Jody Culham (grad student). Photo: Nancy Kanwisher

From humans to monkeys

Inspired by Kanwisher’s findings, Winrich Freiwald and Doris Tsao together used fMRI to localize similar face patches in macaque monkeys. They mapped out six distinct brain regions, known as the face patch system, including these regions’ functional specialization and how they are connected. By recording the activity of individual brain cells, they revealed how cells in some face patches specialize in faces with particular views.

Tsao proceeded to identify how the face patches work together to identify a face, through a specific code that enables single cells to identify faces by assembling information about facial features. For example, some cells respond to the presence of hair, others to the distance between the eyes. Freiwald uncovered that a separate brain region, called the temporal pole, accelerates our recognition of familiar faces, and that some cells are selectively responsive to familiar faces.

“It was a special thrill for me when Doris and Winrich found face patches in monkeys using fMRI,” says Kanwisher, whose lab at MIT’s McGovern Institute has gone on to uncover many other regions of the human brain that engage in specific aspects of perception and cognition. “They are scientific heroes to me, and it is a thrill to receive the Kavli Prize in neuroscience jointly with them.”

“Nancy and her students have identified neocortical subregions that differentially engage in the perception of faces, places, music and even what others think,” says McGovern Institute Director Robert Desimone. “We are delighted that her groundbreaking work into the functional organization of the human brain is being honored this year with the Kavli Prize.”

Together, the laureates, with their work on neocortical specialization for face recognition, have provided basic principles of neural organization that will further our understanding of how we perceive the world around us.

About the Kavli Prize

The Kavli Prize is a partnership among The Norwegian Academy of Science and Letters, The Norwegian Ministry of Education and Research, and The Kavli Foundation (USA). The Kavli Prize honors scientists for breakthroughs in astrophysics, nanoscience and neuroscience that transform our understanding of the big, the small and the complex. Three one-million-dollar prizes are awarded every other year in each of the three fields. The Norwegian Academy of Science and Letters selects the laureates based on recommendations from three independent prize committees whose members are nominated by The Chinese Academy of Sciences, The French Academy of Sciences, The Max Planck Society of Germany, The U.S. National Academy of Sciences, and The Royal Society, UK.

What is consciousness?

In the hit TV show “Westworld,” Dolores Abernathy, a golden-tressed belle, lives in the days when Manifest Destiny still echoed in America. She begins to notice unusual stirrings shaking up her quaint western town—and soon discovers that her skin is synthetic, and her mind, metal. She’s a cyborg meant to entertain humans. The key to her autonomy lies in reaching consciousness.

Shows like “Westworld” and other media probe the idea of consciousness, attempting to nail down a definition of the concept. However, though humans have ruminated on consciousness for centuries, we still don’t have a solid definition (even the Merriam-Webster dictionary lists five). One framework suggests that consciousness is any experience, from eating a candy bar to heartbreak. Another argues that it is how certain stimuli influence one’s behavior.

MIT graduate student Adam Eisen.

While some search for a philosophical explanation, MIT graduate student Adam Eisen seeks a scientific one.

Eisen studies consciousness in the labs of Ila Fiete, an associate investigator at the McGovern Institute, and Earl Miller, an investigator at the Picower Institute for Learning and Memory. His work melds seemingly opposite fields, using mathematical models to quantitatively explain, and thereby ground, the loftiness of consciousness.

In the Fiete lab, Eisen leverages computational methods to compare the brain’s electrical signals in an awake, conscious state to those in an unconscious state via anesthesia—which dampens communication between neurons so that people feel no pain or lose consciousness.

“What’s nice about anesthesia is that we have a reliable way of turning off consciousness,” says Eisen. “So we’re now able to ask: What’s the fluctuation of electrical activity in a conscious versus unconscious brain? By characterizing how these states vary—with the precision enabled by computational models—we can start to build a better intuition for what underlies consciousness.”

Theories of consciousness

How are scientists thinking about consciousness? Eisen says that there are four major theories circulating in the neuroscience sphere. These theories are outlined below.

Global workspace theory

Consider the placement of your tongue in your mouth. This sensory information is always there, but you only notice the sensation when you make the effort to think about it. How does this happen?

“Global workspace theory seeks to explain how information becomes available to our consciousness,” he says. “This is called access consciousness—the kind that stores information in your mind and makes it available for verbal report. In this view, sensory information is broadcasted to higher-level regions of the brain by a process called ignition.” The theory proposes that widespread jolts of neuronal activity or “spiking” are essential for ignition, much as a few claps can swell into full audience applause. It’s through ignition that we reach consciousness.

Eisen’s research in anesthesia suggests, though, that not just any spiking will do. There needs to be a balance: enough activity to spark ignition, but also enough stability such that the brain doesn’t lose its ability to respond to inputs and produce reliable computations to reach consciousness.

Higher order theories

Let’s say you’re listening to “Here Comes The Sun” by The Beatles. Your brain processes the medley of auditory stimuli; you hear the bouncy guitar, upbeat drums, and George Harrison’s perky vocals. You’re having a musical experience—what it’s like to listen to music. According to higher-order theories, such an experience unlocks consciousness.

“Higher-order theories posit that a conscious mental state involves having higher-order mental representations of stimuli—usually in the higher levels of the brain responsible for cognition—to experience the world,” Eisen says.

Integrated information theory

“Imagine jumping into a lake on a warm summer day. All components of that experience—the feeling of the sun on your skin and the coolness of the water as you submerge—come together to form your ‘phenomenal consciousness,’” Eisen says. If the day was slightly less sunny or the water a fraction warmer, he explains, the experience would be different.

“Integrated information theory suggests that phenomenal consciousness involves an experience that is irreducible, meaning that none of the components of that experience can be separated or altered without changing the experience itself,” he says.

Attention schema theory

Attention schema theory, Eisen explains, says ‘attention’ is the information that we are focused on in the world, while ‘awareness’ is the model we have of our attention. He cites an interesting psychology study to disentangle attention and awareness.

In the study, the researchers showed human subjects a mixed sequence of two numbers and six letters on a computer. The participants were asked to report back what the numbers were. While they were doing this task, faintly detectable dots moved across the screen in the background. The interesting part, Eisen notes, is that people weren’t aware of the dots—that is, they didn’t report that they saw them. But despite saying they didn’t see the dots, people performed worse on the task when the dots were present.

“This suggests that some of the subjects’ attention was allocated towards the dots, limiting their available attention for the actual task,” he says. “In this case, people’s awareness didn’t track their attention. The subjects were not aware of the dots, even though the study shows that the dots did indeed affect their attention.”

The science behind consciousness

Eisen notes that a solid understanding of the neural basis of consciousness has yet to be cemented. However, he and his research team are advancing in this quest. “In our work, we found that brain activity is more ‘unstable’ under anesthesia, meaning that it lacks the ability to recover from disturbances—like distractions or random fluctuations in activity—and regain a normal state,” he says.

He and his fellow researchers believe this is because the unconscious brain can’t reliably engage in computations like the conscious brain does, and sensory information gets lost in the noise. This crucial finding points to how the brain’s stability may be a cornerstone of consciousness.
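One generic way to make that notion of stability concrete (a sketch of the idea, not the study’s actual analysis) is to fit a linear dynamical model to recorded activity and inspect the model’s eigenvalues: the closer their magnitudes get to 1, the more slowly the system recovers from disturbances.

```python
import numpy as np

def stability_spectrum(X):
    """Fit x[t+1] = A x[t] by least squares and return |eigenvalues| of A.

    X has shape (T, n): n-dimensional neural activity over T time steps.
    Magnitudes below 1 mean perturbations die out (stable dynamics);
    magnitudes at or above 1 mean disturbances persist or grow.
    """
    past, future = X[:-1], X[1:]
    A, *_ = np.linalg.lstsq(past, future, rcond=None)  # past @ A ~ future
    return np.abs(np.linalg.eigvals(A.T))

# Toy demo: a weakly damped rotation, jostled by noise.
rng = np.random.default_rng(1)
theta, decay = 0.3, 0.98
A_true = decay * np.array([[np.cos(theta), -np.sin(theta)],
                           [np.sin(theta),  np.cos(theta)]])
x, trajectory = rng.normal(size=2), []
for _ in range(500):
    x = A_true @ x + rng.normal(scale=0.01, size=2)
    trajectory.append(x)
print(stability_spectrum(np.array(trajectory)))  # both magnitudes near 0.98
```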

There’s still more work to do, Eisen says. But eventually, he hopes that this research can help crack the enduring mystery of how consciousness shapes human existence. “There is so much complexity and depth to human experience, emotion, and thought. Through rigorous research, we may one day reveal the machinery that gives us our common humanity.”

Reevaluating an approach to functional brain imaging

A new way of imaging the brain with magnetic resonance imaging (MRI) does not directly detect neural activity as originally reported, according to scientists at MIT’s McGovern Institute. The method, first described in 2022, generated excitement within the neuroscience community as a potentially transformative approach. But a study from the lab of McGovern Associate Investigator Alan Jasanoff, reported March 27, 2024, in the journal Science Advances, demonstrates that MRI signals produced by the new method are generated in large part by the imaging process itself, not neuronal activity.

A man stands with his arms crossed in front of a board with mathematical equations written on it.
Alan Jasanoff, associate member of the McGovern Institute, and a professor of brain and cognitive sciences, biological engineering, and nuclear science and engineering at MIT. Photo: Justin Knight

Jasanoff explains that having a noninvasive means of seeing neuronal activity in the brain is a long-sought goal for neuroscientists. The functional MRI methods that researchers currently use to monitor brain activity don’t actually detect neural signaling. Instead, they use blood flow changes triggered by brain activity as a proxy. This reveals which parts of the brain are engaged during imaging, but it cannot pinpoint neural activity to precise locations, and it is too slow to truly track neurons’ rapid-fire communications.

So when a team of scientists reported in Science a new MRI method called DIANA, for “direct imaging of neuronal activity,” neuroscientists paid attention. The authors claimed that DIANA detected MRI signals in the brain that corresponded to the electrical signals of neurons, and that it acquired signals far faster than the methods now used for functional MRI.

“Everyone wants this,” Jasanoff says. “If we could look at the whole brain and follow its activity with millisecond precision and know that all the signals that we’re seeing have to do with cellular activity, this would be just wonderful. It could tell us all kinds of things about how the brain works and what goes wrong in disease.”

Jasanoff adds that from the initial report, it was not clear what brain changes DIANA was detecting to produce such a rapid readout of neural activity. Curious, he and his team began to experiment with the method. “We wanted to reproduce it, and we wanted to understand how it worked,” he says.

Decoding DIANA

Recreating the MRI procedure reported by DIANA’s developers, postdoctoral researcher Valerie Doan Phi Van imaged the brain of a rat as an electric stimulus was delivered to one paw. Phi Van says she was excited to see an MRI signal appear in the brain’s sensory cortex, exactly when and where neurons were expected to respond to the sensation on the paw. “I was able to reproduce it,” she says. “I could see the signal.”

With further tests of the system, however, her enthusiasm waned. To investigate the source of the signal, she disconnected the device used to stimulate the animal’s paw, then repeated the imaging. Again, signals showed up in the sensory processing part of the brain. But this time, there was no reason for neurons in that area to be activated. In fact, Phi Van found, the MRI produced the same kinds of signals when the animal inside the scanner was replaced with a tube of water. It was clear DIANA’s functional signals were not arising from neural activity.

Phi Van traced the source of the spurious signals to the pulse program that directs DIANA’s imaging process, detailing the sequence of steps the MRI scanner uses to collect data. Embedded within DIANA’s pulse program was a trigger for the device that delivers sensory input to the animal inside the scanner. That synchronizes the two processes, so the stimulation occurs at a precise moment during data acquisition. That trigger appeared to be causing the signals that DIANA’s developers had concluded indicated neural activity.

Phi Van altered the pulse program, changing the way the stimulator was triggered. Using the updated program, the MRI scanner detected no functional signal in the brain in response to the same paw stimulation that had produced a signal before. “If you take this part of the code out, then the signal will also be gone. So that means the signal we see is an artifact of the trigger,” she says.
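A toy simulation (illustrative only, unrelated to the actual pulse sequence) shows why a trigger embedded in the acquisition loop is so treacherous: averaging across trials preserves anything time-locked to data collection, artifact or not, while averaging away everything else.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_samples = 200, 100
artifact = np.zeros(n_samples)
artifact[40] = 0.5  # brief electronic glitch from the trigger line

def averaged_response(time_locked):
    """Average noisy acquisitions; the glitch survives only if time-locked."""
    trials = rng.normal(0.0, 1.0, size=(n_trials, n_samples))
    for i in range(n_trials):
        shift = 0 if time_locked else rng.integers(n_samples)  # decoupled timing
        trials[i] += np.roll(artifact, shift)
    return trials.mean(axis=0)

print("trigger in pulse program: peak =", averaged_response(True).max().round(2))
print("trigger decoupled:        peak =", averaged_response(False).max().round(2))
```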

Jasanoff and Phi Van went on to find reasons why other researchers have struggled to reproduce the results of the original DIANA report, noting that the trigger-generated signals can disappear with slight variations in the imaging process. With their postdoctoral colleague Sajal Sen, they also found evidence that cellular changes that DIANA’s developers had proposed might give rise to a functional MRI signal were not related to neuronal activity.

Jasanoff and Phi Van say it was important to share their findings with the research community, particularly as efforts continue to develop new neuroimaging methods. “If people want to try to repeat any part of the study or implement any kind of approach like this, they have to avoid falling into these pits,” Jasanoff says. He adds that they admire the authors of the original study for their ambition: “The community needs scientists who are willing to take risks to move the field ahead.”

Beyond the brain

This story also appears in the Spring 2024 issue of BrainScan.


Like many people, graduate student Guillermo Herrera-Arcos found himself working from home in the spring of 2020. Surrounded by equipment he’d hastily borrowed from the lab, he began testing electrical components he would need to control muscles in a new way. If it worked, he and colleagues in Hugh Herr’s lab might have found a promising strategy for restoring movement when signals from the brain fail to reach the muscles, such as after a spinal cord injury or stroke.

Guillermo Herrera-Arcos, a graduate student in Hugh Herr’s lab, is developing an optical technology with the potential to restore movement in people with spinal cord injury or stroke. Photo: Steph Stevens

Herrera-Arcos and Herr’s work is one way McGovern neuroscientists are working at the interface of brain and machine. Such work aims to enable better ways of understanding and treating injury and disease, offering scientists tools to manipulate neural signaling as well as to replace its function when it is lost.

Restoring movement

The system Herrera-Arcos and Herr were developing wouldn’t be the first to bypass the brain to move muscles. Neuroprosthetic devices that use electricity to stimulate muscle-activating motor neurons are sometimes used during rehabilitation from an injury, helping patients maintain muscle mass when they can’t use their muscles on their own. But existing neuroprostheses lack the precision of the body’s natural movement system. They send all-or-nothing signals that quickly tire muscles out.

Hugh Herr (left) and graduate student Guillermo Herrera-Arco at work in the lab. Photo: Steph Stevens

Researchers attribute that fatigue to an unnatural recruitment of neurons and muscle fibers. Electrical signals go straight to the largest, most powerful components of the system, even when smaller units could do the job. “You turn up the stimulus and you get no force, and then suddenly, you get too much force. And then fatigue, a lack of controllability, and so on,” Herr explains. The nervous system, in contrast, calls first on small motor units and recruits larger ones only when needed to generate more force.

Optical solution

In hopes of recreating this strategic pattern of muscle activation, Herr and Herrera-Arcos turned to a technique pioneered by McGovern Investigator Edward Boyden that has become a common research tool: controlling neural activity with light. To put neurons under their control, researchers equip them with light-sensitive proteins. The cells can then be switched on or off within milliseconds using an optic fiber.

When a return to the lab enabled Herr and Herrera-Arcos to test their idea, they were thrilled with the results. Using light to switch on motor neurons and stimulate a single muscle in mice, they recreated the nervous system’s natural muscle activation pattern. Consequently, fatigue did not set in nearly as quickly as it would with an electrically activated system. Herrera-Arcos says he set out to measure the force generated by the muscle and how long it took to fatigue, and he had to keep extending his experiments: After an hour of light stimulation, it was still going strong.

To optimize the force generated by the system, the researchers used feedback from the muscle to modulate the intensity of the neuron-activating light. Their success suggests this type of closed-loop system could enable fatigue-resistant neuroprostheses for muscle control.
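The feedback principle, if not the physiology, fits in a few lines of code. The sketch below is hypothetical (made-up gains and a toy first-order muscle model, not the team’s actual controller): a proportional-integral loop adjusts light intensity until the measured force tracks a target.

```python
def run_closed_loop(target_force, kp=2.0, ki=5.0, dt=0.001, steps=3000):
    """Minimal PI controller: light intensity in [0, 1] drives a toy muscle.

    The 'muscle' is a first-order system whose force lags the light drive
    with time constant tau; all parameters are illustrative, not measured.
    """
    tau = 0.05                      # toy muscle time constant, seconds
    intensity, integral, force = 0.0, 0.0, 0.0
    for _ in range(steps):
        force += dt / tau * (intensity - force)   # muscle responds to light
        error = target_force - force
        integral += error * dt
        intensity = min(1.0, max(0.0, kp * error + ki * integral))
    return force

print(f"steady-state force: {run_closed_loop(0.6):.3f}")  # settles near 0.6
```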

“The field has been struggling for many decades with the challenge of how to control living muscle tissue,” Herr says. “So the idea that this could be solved is very, very exciting.”

There’s work to be done to translate what the team has learned into practical neuroprosthetics for people who need them. To use light to stimulate human motor neurons, light-sensitive proteins will need to be delivered to those cells. Figuring out how to do that safely is a high priority at the K. Lisa Yang Center for Bionics, which Herr co-directs with Boyden, and might lead to better ways of obtaining tactile and proprioceptive feedback from prosthetic limbs, as well as to control muscles for the restoration of natural movements after spinal cord injury. “It would be a game changer for a number of conditions,” Herr says.

Gut-brain connection

While Herr’s team works where the nervous system meets the muscle, researchers in Polina Anikeeva’s lab are exploring the brain’s relationship with an often-overlooked part of the nervous system — the hundreds of millions of neurons in the gut.

“Classically, when we think of brain function in neuroscience, it is always studied in the framework of how the brain interacts with the surrounding environment and how it integrates different stimuli,” says Atharva Sahasrabudhe, a graduate student in the group. “But the brain does not function in a vacuum. It’s constantly getting and integrating signals from the peripheral organs.”

Atharva Sahasrabudhe holds some of the fiber technology he developed in the Anikeeva lab. Photo: Steph Stevens

The nervous system has a particularly pronounced presence in the gut. Neurons embedded within the walls of the gastrointestinal (GI) tract monitor local conditions and relay information to the brain. This mind-body connection may help explain the GI symptoms associated with some brain-related conditions, including Parkinson’s disease, mood disorders, and autism. Researchers have yet to untangle whether GI symptoms help drive these conditions, are a consequence of them, or are coincidental. Whatever the case, Anikeeva says, “if there is a GI connection, maybe we can tap into this connection to improve the quality of life of affected individuals.”

Flexible fibers

At the K. Lisa Yang Brain-Body Center that Anikeeva directs, studying how the gut communicates with the brain is a high priority. But most of neuroscientists’ tools are designed specifically to investigate the brain. To explore new territory, Sahasrabudhe devised a device that is compatible with the long and twisty GI tract of a mouse.

The new tool is a slender, flexible fiber equipped with light emitters for activating subsets of cells and tiny channels for delivering nutrients or drugs. To access neurons dispersed throughout the GI tract, the device’s wirelessly controlled components are embedded along its length. A more rigid probe at one end of the device is designed to monitor and manipulate neural activity in the brain, so researchers can follow the nervous system’s swift communications across the gut-brain axis.

Scientists on Anikeeva’s team are deploying the device to investigate how gut-brain communications contribute to several conditions. Postdoctoral researcher Sharmelee Selvaraji is focused on Parkinson’s disease. Like many scientists, she wonders whether the neurodegenerative movement disorder might actually start in the gut. There’s a molecular link: the misshapen protein that sickens brain cells in patients with Parkinson’s disease has been found aggregating in the gut, too. And the constipation and other GI problems that are common complaints for people with Parkinson’s disease usually start decades before the onset of motor symptoms. She hopes that by investigating gut-brain communications in a mouse model of the disease, she will uncover important clues about its origins and progression.

“We’re trying to observe the effects of Parkinson’s in the gut, and then eventually, we may be able to intervene at an earlier stage to slow down the disease progression, or even cure it,” says Selvaraji.

Meanwhile, colleagues in the lab are exploring related questions about gut-brain communications in mouse models of autism, anxiety disorders, and addiction. Others continue to focus on technology development, adding new capabilities to the gut-brain probe or applying similar engineering principles to new problems.

“We are realizing that the brain is very much connected to the rest of the body,” Anikeeva says. “There is now a lot of effort in the lab to create technology suitable for a variety of really interesting organs that will help us study brain-body connections.”

Researchers reveal roadmap for AI innovation in brain and language learning

One of the hallmarks of humanity is language, but now, powerful new artificial intelligence tools also compose poetry, write songs, and have extensive conversations with human users. Tools like ChatGPT and Gemini are widely available at the tap of a button — but just how smart are these AIs?

A new multidisciplinary research effort co-led by Anna (Anya) Ivanova, assistant professor in the School of Psychology at Georgia Tech, alongside Kyle Mahowald, an assistant professor in the Department of Linguistics at the University of Texas at Austin, is working to uncover just that.

Their results could lead to innovative AIs that are more similar to the human brain than ever before — and also help neuroscientists and psychologists who are unearthing the secrets of our own minds.

The study, “Dissociating Language and Thought in Large Language Models,” is published this week in the scientific journal Trends in Cognitive Sciences. The work is already making waves in the scientific community: an earlier preprint of the paper, released in January 2023, has already been cited more than 150 times by fellow researchers. The research team has continued to refine the research for this final journal publication.

“ChatGPT became available while we were finalizing the preprint,” explains Ivanova, who conducted the research while a postdoctoral researcher at MIT’s McGovern Institute. “Over the past year, we’ve had an opportunity to update our arguments in light of this newer generation of models, now including ChatGPT.”

Form versus function

The study focuses on large language models (LLMs), which include AIs like ChatGPT. LLMs are text prediction models that create writing by predicting which word comes next in a sentence — just as a cell phone or email service like Gmail might suggest the next word you want to write. However, while this type of language learning is extremely effective at creating coherent sentences, that doesn’t necessarily signify intelligence.
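Stripped of scale, that prediction loop looks like the sketch below, with a toy bigram table standing in for a real LLM: score the candidate next words given the context, convert the scores to probabilities, emit a word, and repeat.

```python
import numpy as np

# Toy "model": counts of which word follows which (an LLM learns a vastly
# richer version of this from billions of sentences).
bigram_counts = {
    "the": {"cat": 4, "dog": 2},
    "cat": {"sat": 3, "ran": 1},
    "dog": {"ran": 2, "sat": 1},
    "sat": {"down": 5},
    "ran": {"away": 5},
}

def next_word(context_word):
    """Return the most probable next word given the previous one."""
    options = bigram_counts[context_word]
    words = list(options)
    probs = np.array([options[w] for w in words], dtype=float)
    probs /= probs.sum()                  # counts -> probability distribution
    return words[int(np.argmax(probs))]  # greedy decoding

sentence = ["the"]
while sentence[-1] in bigram_counts:
    sentence.append(next_word(sentence[-1]))
print(" ".join(sentence))  # -> the cat sat down
```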

Ivanova’s team argues that formal competence — creating a well-structured, grammatically correct sentence — should be differentiated from functional competence — answering the right question, communicating the correct information, or appropriately communicating. They also found that while LLMs trained on text prediction are often very good at formal skills, they still struggle with functional skills.

“We humans have the tendency to conflate language and thought,” Ivanova says. “I think that’s an important thing to keep in mind as we’re trying to figure out what these models are capable of, because using that ability to be good at language, to be good at formal competence, leads many people to assume that AIs are also good at thinking — even when that’s not the case.

“It’s a heuristic that we developed when interacting with other humans over thousands of years of evolution, but now in some respects, that heuristic is broken,” Ivanova explains.

The distinction between formal and functional competence is also vital in rigorously testing an AI’s capabilities, Ivanova adds. Evaluations often don’t distinguish formal and functional competence, making it difficult to assess what factors are determining a model’s success or failure. The need to develop distinct tests is one of the team’s more widely accepted findings, and one that some researchers in the field have already begun to implement.

Creating a modular system

While the human tendency to conflate functional and formal competence may have hindered understanding of LLMs in the past, our human brains could also be the key to unlocking more powerful AIs.

Leveraging the tools of cognitive neuroscience while a postdoctoral associate at Massachusetts Institute of Technology (MIT), Ivanova and her team studied brain activity in neurotypical individuals via fMRI, and used behavioral assessments of individuals with brain damage to test the causal role of brain regions in language and cognition — both conducting new research and drawing on previous studies. The team’s results showed that human brains use different regions for functional and formal competence, further supporting this distinction in AIs.

“Our research shows that in the brain, there is a language processing module and separate modules for reasoning,” Ivanova says. This modularity could also serve as a blueprint for how to develop future AIs.

“Building on insights from human brains — where the language processing system is sharply distinct from the systems that support our ability to think — we argue that the language-thought distinction is conceptually important for thinking about, evaluating, and improving large language models, especially given recent efforts to imbue these models with human-like intelligence,” says Ivanova’s former advisor and study co-author Evelina Fedorenko, a professor of brain and cognitive sciences at MIT and a member of the McGovern Institute for Brain Research.

Developing AIs in the pattern of the human brain could help create more powerful systems — while also helping them dovetail more naturally with human users. “Generally, differences in a mechanism’s internal structure affect behavior,” Ivanova says. “Building a system that has a broad macroscopic organization similar to that of the human brain could help ensure that it might be more aligned with humans down the road.”

In the rapidly developing world of AI, these systems are ripe for experimentation. After the team’s preprint was published, OpenAI announced their intention to add plug-ins to their GPT models.

“That plug-in system is actually very similar to what we suggest,” Ivanova adds. “It takes a modularity approach where the language model can be an interface to another specialized module within a system.”

While the OpenAI plug-in system will include features like booking flights and ordering food, rather than cognitively inspired features, it demonstrates that “the approach has a lot of potential,” Ivanova says.
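As a loose illustration of that modular idea (hypothetical routing logic, not OpenAI’s actual plug-in API), a language model can act as the conversational front end while structured subtasks are handed off to specialized modules:

```python
import re

def calculator_module(expression):
    """Specialized, non-linguistic module: exact arithmetic on 'a + b' / 'a * b'."""
    a, op, b = re.fullmatch(r"(\d+)\s*([+*])\s*(\d+)", expression).groups()
    return str(int(a) + int(b)) if op == "+" else str(int(a) * int(b))

def language_interface(user_input):
    """Toy stand-in for an LLM front end that routes subtasks to modules."""
    match = re.search(r"\d+\s*[+*]\s*\d+", user_input)
    if match:  # delegate what language models are weak at to a dedicated module
        return f"The answer is {calculator_module(match.group(0))}."
    return "I can chat about it, but I hand arithmetic to a specialized module."

print(language_interface("What is 12 * 34?"))  # -> The answer is 408.
```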

The future of AI — and what it can tell us about ourselves

While our own brains might be the key to unlocking better, more powerful AIs, these AIs might also help us better understand ourselves. “When researchers try to study the brain and cognition, it’s often useful to have some smaller system where you can actually go in and poke around and see what’s going on before you get to the immense complexity,” Ivanova explains.

However, because human language is unique, animal and other model systems are difficult to relate to it. That’s where LLMs come in.

“There are lots of surprising similarities between how one would approach the study of the brain and the study of an artificial neural network” like a large language model, she adds. “They are both information processing systems that have biological or artificial neurons to perform computations.”

In many ways, the human brain is still a black box, but openly available AIs offer a unique opportunity to look inside a synthetic system’s inner workings, modify its variables, and explore these corresponding systems like never before.

“It’s a really wonderful model that we have a lot of control over,” Ivanova says. “Neural networks — they are amazing.”

Along with Anna (Anya) Ivanova, Kyle Mahowald, and Evelina Fedorenko, the research team also includes Idan Blank (University of California, Los Angeles), as well as Nancy Kanwisher and Joshua Tenenbaum (Massachusetts Institute of Technology).

Honoring a visionary

Today marks the 10th anniversary of the passing of Pat McGovern, an extraordinary visionary and philanthropist whose legacy continues to inspire and impact the world. As the founder of International Data Group (IDG)—a premier information technology organization—McGovern was not just a pioneering figure in the technology media world, but also a passionate advocate for using technology for the greater good.

Under McGovern’s leadership, IDG became a global powerhouse, launching iconic publications such as Computerworld, Macworld, and PCWorld. His foresight also led to the creation of IDG Ventures, a network of venture funds around the world, including the notable IDG Capital in Beijing.

Beyond his remarkable business acumen, McGovern, with his wife, Lore, co-founded the McGovern Institute for Brain Research at MIT in 2000. This institute has been at the forefront of neuroscience research, contributing to groundbreaking advancements in perception, attention, memory, and artificial intelligence (AI), as well as discoveries with direct translational impact, such as CRISPR technology. CRISPR discoveries made at the McGovern Institute are now licensed for the first clinical application of genome editing in sickle cell disease.

Pat McGovern’s commitment to bettering humanity is further evidenced by the Patrick J. McGovern Foundation, which works in partnership with public, private, and social institutions to drive progress on our most pressing challenges through the use of artificial intelligence, data science, and key emerging technologies.

Remembering Pat McGovern

On this solemn anniversary, we reflect on Pat McGovern’s enduring influence through the words of those who knew him best.

Lore Harp McGovern
Co-founder and board member of the McGovern Institute for Brain Research

“Technology was Pat’s medium, the platform on which he built his amazing company 60 years ago. But it was people who truly motivated Pat, and he empowered and encouraged them to reach for the stars. He lived by the motto, ‘let’s try it,’ and believed that nothing was out of bounds. His goal was to help create a more just and peaceful world, and establishing the McGovern Institute was our way to give back meaningfully to this world. I know he would be so proud of what has been achieved and what is yet to come.”

Robert Desimone
Director of the McGovern Institute for Brain Research

“Pat McGovern had a vision for an international community of scientists and students drawn together to collaborate on understanding the brain. This vision has been realized in the McGovern Institute, and we are now seeing the profound advances in our understanding of the brain and even clinical applications that Pat predicted would follow.”

Hugo Shong
Chairman of IDG Capital

“Pat’s impact on technology, science and research is immeasurable. A man of tremendous vision, he grew IDG out of Massachusetts and made it into one of the world’s most recognized brands in its space, forging partnerships and winning friends wherever he went. He applied that very same vision and energy to the McGovern Institute and the Patrick J. McGovern Foundation, in support of their impressive and necessary causes. I know he would be extremely proud of what both organizations have achieved thus far, and particularly how their work has broken technological frontiers and bettered the lives of millions.”

Vilas Dhar
President of the Patrick J. McGovern Foundation

“Patrick J. McGovern was more than a tech mogul; he was a visionary who believed in the power of information to empower people and improve societies. His work has had a profound effect on public policy and education, laying the groundwork for a more informed and connected world and guiding our work to ensure that artificial intelligence is used to sustain a human-centered world that creates economic and social opportunity for all. On a personal level, Pat’s leadership was characterized by a genuine care for his employees and a belief in their potential. He created a culture of curiosity, encouraging humanity to explore, innovate, and dream big. His spirit lives on in every philanthropic activity we undertake.”

Genevieve Juillard
CEO of IDG 

“The legacy of Pat McGovern is felt not just in Boston, but around the world—by the thousands of IDG customers and by people like me who have the privilege to work at IDG, 60 years after he founded it. His innovative spirit and unwavering commitment to excellence continue to inspire and guide us.”

Sudhir Sethi
Founder and Chairman of Chiratae Ventures (formerly IDG Ventures)

“Pat McGovern was a visionary who foresaw the potential of technology in India and nurtured the ecosystem as an active participant. Pat enabled a launchpad for Chiratae Ventures, empowering our journey to become the leading home-grown venture capital fund in India today. Pat is a role model to entrepreneurs worldwide, and we honor his legacy with our annual ‘Chiratae Ventures Patrick J. McGovern Awards’ that celebrate courage and the spirit of entrepreneurship.”

Marc Benioff
Founder and CEO of Salesforce
In the book “Future Forward,” Benioff wrote that “Pat McGovern was a gift to us all, a trailblazing visionary who showed an entire generation of entrepreneurs what it means to be a principle-based leader and how to lead with higher values.”

Pat McGovern’s memory lives on not just in the institutions and innovations he fostered, but in the countless lives he touched and transformed. Today, we celebrate a man who saw the future and helped us all move towards it with hope and determination.

Do we only use 10 percent of our brain?

Movies like “Limitless” and “Lucy” play on the notion that humans use only 10 percent of their brains—and those who unlock a higher percentage wield powers like infinite memory or telekinesis. It’s enticing to think that so much of the brain remains untapped and is ripe for boosting human potential.

But the idea that we use 10 percent of our brain is 100 percent a myth.

In fact, scientists believe that we use our entire brain every day. Mila Halgren is a graduate student in the lab of Mark Harnett, an associate professor of brain and cognitive sciences and an investigator at the McGovern Institute. The Harnett lab studies the computational power of neurons: how neural networks rapidly process massive amounts of information.

“All of our brain is constantly in use and consumes a tremendous amount of energy,” Halgren says. “Despite making up only two percent of our body weight, it devours 20 percent of our calories.” This doesn’t appear to change significantly with different tasks, from typing on a computer to doing yoga. “Even while we sleep, our entire brain remains intensely active.”
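
As a back-of-the-envelope check on those numbers (assuming a typical 2,000 kcal daily intake — my assumption, not a figure from Halgren), 20 percent of our calories works out to roughly 20 watts of continuous power:

```python
# Rough check of the "20 percent of our calories" figure, assuming a
# 2,000 kcal/day diet (an illustrative assumption, not from the article).
KCAL_PER_DAY = 2000          # assumed total daily intake
BRAIN_SHARE = 0.20           # from the article: ~20% goes to the brain
JOULES_PER_KCAL = 4184
SECONDS_PER_DAY = 86_400

brain_kcal = KCAL_PER_DAY * BRAIN_SHARE                        # ~400 kcal/day
brain_watts = brain_kcal * JOULES_PER_KCAL / SECONDS_PER_DAY   # ~19 W
print(f"{brain_kcal:.0f} kcal/day is about {brain_watts:.0f} W")
```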

When did this myth take root?

Mila Halgren is a PhD student in MIT’s Department of Brain and Cognitive Sciences. Photo: Mila Halgren

The myth is thought to have gained traction when scientists first began exploring the brain’s abilities but lacked the tools to capture its exact workings. In 1907, William James, a founder of American psychology, suggested in his book “The Energies of Men” that “we are making use of only a small part of our possible mental and physical resources.” This influential work likely planted the idea that humans access a mere fraction of the brain, and the misconception caught fire.

Brainpower lore even suggests that Albert Einstein credited his genius to being able to access more than 10 percent of his brain. However, no such quote has been documented, and this too is perhaps a myth of cosmic proportions.

Halgren believes that there may be some fact backing this fiction. “People may think our brain is underutilized in the sense that some neurons fire very infrequently—once every few minutes or less. But this isn’t true of most neurons, some of which fire hundreds of times per second,” she says.

In the nascent years of neuroscience, scientists also argued that a large portion of the brain must be inactive because some people experience brain injuries and can still function at a high level, like the famous case of Phineas Gage. Halgren points to the brain’s remarkable plasticity—the reshaping of neural connections. “Entire brain hemispheres can be removed during early childhood and the rest of the brain will rewire and compensate for the loss. In other words, the brain will use 100 percent of what it has, but can make do with less depending on which structures are damaged.”

Is there a limit to the brain?

If we indeed use our entire brain, can humans tease out any problem? Or, are there enigmas in the world that we will never unravel?

“This is still in contention,” Halgren says. “There may be certain problems that the human brain is fundamentally unable to solve, like how a mouse will never understand chemistry and a chimpanzee can’t do calculus.”

Can we increase our brainpower?

The brain may have its limits, but there are ways to boost our cognitive prowess to ace that midterm or crank up productivity in the workplace. According to Halgren, “You can increase your brainpower, but there’s no ‘trick’ that will allow you to do so. Like any organ in your body, the brain works best with proper sleep, exercise, low stress, and a well-balanced diet.”

The truth is, we may never rearrange furniture with our minds or foresee which team will win the Super Bowl. The idea of a largely latent brain is draped in fantasy, but debunking this myth speaks to the immense growth of neuroscience over the years—and the allure of other misconceptions that scientists have yet to demystify.