Nancy Kanwisher Shares 2024 Kavli Prize in Neuroscience

The Norwegian Academy of Science and Letters today announced the 2024 Kavli Prize Laureates in the fields of astrophysics, nanoscience, and neuroscience. The 2024 Kavli Prize in Neuroscience honors Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience at MIT and an investigator at the McGovern Institute, along with UC Berkeley neurobiologist Doris Tsao, and Rockefeller University neuroscientist Winrich Freiwald for their discovery of a highly localized and specialized system for representation of faces in human and non-human primate neocortex. The neuroscience laureates will share $1 million USD.

“Kanwisher, Freiwald, and Tsao together discovered a localized and specialized neocortical system for face recognition,” says Kristine Walhovd, Chair of the Kavli Neuroscience Committee. “Their outstanding research will ultimately further our understanding of recognition not only of faces, but objects and scenes.”

Overcoming failure

As a graduate student at MIT in the early days of functional brain imaging, Kanwisher was fascinated by the potential of the emerging technology to answer a suite of questions about the human mind. But a lack of brain imaging resources and a series of failed experiments led Kanwisher to consider leaving the field for good. She credits her advisor, MIT Professor of Psychology Molly Potter, for supporting her through this challenging time and for teaching her how to make powerful inferences about the inner workings of the mind from behavioral data alone.

After receiving her PhD from MIT, Kanwisher spent a year studying nuclear strategy with a MacArthur Foundation Fellowship in Peace and International Security, but eventually returned to science by accepting a faculty position at Harvard University where she could use the latest brain imaging technology to pursue the scientific questions that had always fascinated her.

Zeroing in on faces

Recognizing faces is important for social interaction in many animals. Previous work in human psychology and animal research had suggested the existence of a functionally specialized system for face recognition, but this system had not clearly been identified with brain imaging technology. It is here that Kanwisher saw her opportunity.

Using functional magnetic resonance imaging (fMRI), a new method at the time, Kanwisher’s team scanned people while they looked at faces and while they looked at objects, and searched for brain regions that responded more to one than the other. They found a small patch of neocortex, now called the fusiform face area (FFA), that is dedicated specifically to the task of face recognition. She found individual differences in the location of this area and devised an analysis technique to effectively localize specialized functional regions in the brain. This technique is now widely used and applied to domains beyond the face recognition system. Notably, Kanwisher’s first FFA paper was co-authored with Josh McDermott, who was then an undergraduate at Harvard University and is now an associate investigator at the McGovern Institute, holding a faculty position alongside Kanwisher in MIT’s Department of Brain and Cognitive Sciences.
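The contrast logic behind such a localizer can be sketched in a few lines. The following is a toy illustration with simulated data, not the lab’s actual analysis pipeline: each “voxel” gets responses on face trials and object trials, and a per-voxel t-statistic flags the face-selective patch.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_trials = 500, 40

# Simulated responses: unit mean everywhere, with a hypothetical
# "face patch" (the first 25 voxels) responding more to faces.
faces = rng.normal(1.0, 1.0, (n_vox, n_trials))
objects = rng.normal(1.0, 1.0, (n_vox, n_trials))
faces[:25] += 1.5

# Two-sample t statistic per voxel for the faces > objects contrast.
diff = faces.mean(axis=1) - objects.mean(axis=1)
se = np.sqrt(faces.var(axis=1, ddof=1) / n_trials
             + objects.var(axis=1, ddof=1) / n_trials)
t = diff / se

# Threshold the map (real studies also correct for multiple comparisons).
face_selective = np.where(t > 3.0)[0]
print(len(face_selective), "face-selective voxels found")
```

Kanwisher’s localizer approach runs this kind of contrast within each individual subject, which is what makes the region findable despite person-to-person differences in its exact location.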

A group of five scientists standing and smiling in front of a whiteboard.
The Kanwisher lab at Harvard University circa 1996. From left to right: Nancy Kanwisher, Josh McDermott (then an undergrad), Marvin Chun (postdoc), Ewa Wojciulik (postdoc), and Jody Culham (grad student). Photo: Nancy Kanwisher

From humans to monkeys

Inspired by Kanwisher’s findings, Winrich Freiwald and Doris Tsao together used fMRI to localize similar face patches in macaque monkeys. They mapped out six distinct brain regions, known as the face patch system, including these regions’ functional specialization and how they are connected. By recording the activity of individual brain cells, they revealed how cells in some face patches specialize in particular views of a face.

Tsao proceeded to identify how the face patches work together to identify a face, through a specific code that enables single cells to recognize faces by assembling information about facial features. For example, some cells respond to the presence of hair, others to the distance between the eyes. Freiwald uncovered that a separate brain region, called the temporal pole, accelerates our recognition of familiar faces, and that some cells are selectively responsive to familiar faces.

“It was a special thrill for me when Doris and Winrich found face patches in monkeys using fMRI,” says Kanwisher, whose lab at MIT’s McGovern Institute has gone on to uncover many other regions of the human brain that engage in specific aspects of perception and cognition. “They are scientific heroes to me, and it is a thrill to receive the Kavli Prize in neuroscience jointly with them.”

“Nancy and her students have identified neocortical subregions that differentially engage in the perception of faces, places, music and even what others think,” says McGovern Institute Director Robert Desimone. “We are delighted that her groundbreaking work into the functional organization of the human brain is being honored this year with the Kavli Prize.”

Together, the laureates, with their work on neocortical specialization for face recognition, have provided basic principles of neural organization which will further our understanding of how we perceive the world around us.

About the Kavli Prize

The Kavli Prize is a partnership among The Norwegian Academy of Science and Letters, The Norwegian Ministry of Education and Research, and The Kavli Foundation (USA). The Kavli Prize honors scientists for breakthroughs in astrophysics, nanoscience and neuroscience that transform our understanding of the big, the small and the complex. Three one-million-dollar prizes are awarded every other year in each of the three fields. The Norwegian Academy of Science and Letters selects the laureates based on recommendations from three independent prize committees whose members are nominated by The Chinese Academy of Sciences, The French Academy of Sciences, The Max Planck Society of Germany, The U.S. National Academy of Sciences, and The Royal Society, UK.

What is consciousness?

In the hit T.V. show “Westworld,” Dolores Abernathy, a golden-tressed belle, lives in the days when Manifest Destiny still echoed in America. She begins to notice unusual stirrings shaking up her quaint western town—and soon discovers that her skin is synthetic, and her mind, metal. She’s a cyborg meant to entertain humans. The key to her autonomy lies in reaching consciousness.

Shows like “Westworld” and other media probe the idea of consciousness, attempting to nail down a definition of the concept. However, though humans have ruminated on consciousness for centuries, we still don’t have a solid definition (even the Merriam-Webster dictionary lists five). One framework suggests that consciousness is any experience, from eating a candy bar to heartbreak. Another argues that it is how certain stimuli influence one’s behavior.

MIT graduate student Adam Eisen.

While some search for a philosophical explanation, MIT graduate student Adam Eisen seeks a scientific one.

Eisen studies consciousness in the labs of Ila Fiete, an associate investigator at the McGovern Institute, and Earl Miller, an investigator at the Picower Institute for Learning and Memory. His work melds seemingly opposite fields, using mathematical models to quantitatively explain, and thereby ground, the loftiness of consciousness.

In the Fiete lab, Eisen leverages computational methods to compare the brain’s electrical signals in an awake, conscious state to those in an unconscious state induced by anesthesia, which dampens communication between neurons so that patients feel no pain or lose consciousness.

“What’s nice about anesthesia is that we have a reliable way of turning off consciousness,” says Eisen.

“So we’re now able to ask: What’s the fluctuation of electrical activity in a conscious versus unconscious brain? By characterizing how these states vary—with the precision enabled by computational models—we can start to build a better intuition for what underlies consciousness.”

Theories of consciousness

How are scientists thinking about consciousness? Eisen says that there are four major theories circulating in the neuroscience sphere. These theories are outlined below.

Global workspace theory

Consider the placement of your tongue in your mouth. This sensory information is always there, but you only notice the sensation when you make the effort to think about it. How does this happen?

“Global workspace theory seeks to explain how information becomes available to our consciousness,” he says. “This is called access consciousness—the kind that stores information in your mind and makes it available for verbal report. In this view, sensory information is broadcasted to higher-level regions of the brain by a process called ignition.” The theory proposes that widespread jolts of neuronal activity or “spiking” are essential for ignition, much as a few claps can build into full audience applause. It’s through ignition that we reach consciousness.

Eisen’s research in anesthesia suggests, though, that not just any spiking will do. There needs to be a balance: enough activity to spark ignition, but also enough stability such that the brain doesn’t lose its ability to respond to inputs and produce reliable computations to reach consciousness.

Higher order theories

Let’s say you’re listening to “Here Comes The Sun” by The Beatles. Your brain processes the medley of auditory stimuli; you hear the bouncy guitar, upbeat drums, and George Harrison’s perky vocals. You’re having a musical experience—what it’s like to listen to music. According to higher-order theories, such an experience unlocks consciousness.

“Higher-order theories posit that a conscious mental state involves having higher-order mental representations of stimuli—usually in the higher levels of the brain responsible for cognition—to experience the world,” Eisen says.

Integrated information theory

“Imagine jumping into a lake on a warm summer day. All components of that experience—the feeling of the sun on your skin and the coolness of the water as you submerge—come together to form your ‘phenomenal consciousness,’” Eisen says. If the day was slightly less sunny or the water a fraction warmer, he explains, the experience would be different.

“Integrated information theory suggests that phenomenal consciousness involves an experience that is irreducible, meaning that none of the components of that experience can be separated or altered without changing the experience itself,” he says.

Attention schema theory

Attention schema theory, Eisen explains, says ‘attention’ is the information that we are focused on in the world, while ‘awareness’ is the model we have of our attention. He cites an interesting psychology study that disentangles attention and awareness.

In the study, the researchers showed human subjects a mixed sequence of two numbers and six letters on a computer. The participants were asked to report back what the numbers were. While they were doing this task, faintly detectable dots moved across the screen in the background. The interesting part, Eisen notes, is that people weren’t aware of the dots—that is, they didn’t report that they saw them. But despite saying they didn’t see the dots, people performed worse on the task when the dots were present.

“This suggests that some of the subjects’ attention was allocated towards the dots, limiting their available attention for the actual task,” he says. “In this case, people’s awareness didn’t track their attention. The subjects were not aware of the dots, even though the study shows that the dots did indeed affect their attention.”

The science behind consciousness

Eisen notes that a solid understanding of the neural basis of consciousness has yet to be cemented. However, he and his research team are advancing in this quest. “In our work, we found that brain activity is more ‘unstable’ under anesthesia, meaning that it lacks the ability to recover from disturbances—like distractions or random fluctuations in activity—and regain a normal state,” he says.

He and his fellow researchers believe this is because the unconscious brain can’t reliably engage in computations like the conscious brain does, and sensory information gets lost in the noise. This crucial finding points to how the brain’s stability may be a cornerstone of consciousness.
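A simplified way to make “stability” concrete is to fit a linear dynamical model to recorded activity and examine its largest eigenvalue: perturbations decay quickly when the spectral radius sits well below 1, and linger when it approaches 1. This toy sketch (simulated data, not the study’s actual models) recovers that distinction:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(A, T=2000, noise=0.1):
    """Simulate noisy linear dynamics x[t+1] = A @ x[t] + noise."""
    x = np.zeros((T, A.shape[0]))
    for t in range(1, T):
        x[t] = A @ x[t - 1] + noise * rng.standard_normal(A.shape[0])
    return x

def fitted_spectral_radius(x):
    """Fit the linear map from x[t] to x[t+1] by least squares;
    return the largest eigenvalue magnitude of the fitted map."""
    A_hat, *_ = np.linalg.lstsq(x[:-1], x[1:], rcond=None)
    return np.abs(np.linalg.eigvals(A_hat)).max()

stable = fitted_spectral_radius(simulate(0.60 * np.eye(2)))    # recovers quickly
unstable = fitted_spectral_radius(simulate(0.98 * np.eye(2)))  # lingers near 1
print(f"stable ~ {stable:.2f}, near-unstable ~ {unstable:.2f}")
```

In this framing, a conscious brain operates in the comfortably stable regime, while activity recorded under anesthesia behaves more like the second system: slow to recover from disturbances.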

There’s still more work to do, Eisen says. But eventually, he hopes that this research can help crack the enduring mystery of how consciousness shapes human existence. “There is so much complexity and depth to human experience, emotion, and thought. Through rigorous research, we may one day reveal the machinery that gives us our common humanity.”

Reevaluating an approach to functional brain imaging

A new way of imaging the brain with magnetic resonance imaging (MRI) does not directly detect neural activity as originally reported, according to scientists at MIT’s McGovern Institute. The method, first described in 2022, generated excitement within the neuroscience community as a potentially transformative approach. But a study from the lab of McGovern Associate Investigator Alan Jasanoff, reported March 27, 2024, in the journal Science Advances, demonstrates that MRI signals produced by the new method are generated in large part by the imaging process itself, not neuronal activity.

A man stands with his arms crossed in front of a board with mathematical equations written on it.
Alan Jasanoff, associate member of the McGovern Institute, and a professor of brain and cognitive sciences, biological engineering, and nuclear science and engineering at MIT. Photo: Justin Knight

Jasanoff explains that having a noninvasive means of seeing neuronal activity in the brain is a long-sought goal for neuroscientists. The functional MRI methods that researchers currently use to monitor brain activity don’t actually detect neural signaling. Instead, they use blood flow changes triggered by brain activity as a proxy. This reveals which parts of the brain are engaged during imaging, but it cannot pinpoint neural activity to precise locations, and it is too slow to truly track neurons’ rapid-fire communications.

So when a team of scientists reported in Science a new MRI method called DIANA, for “direct imaging of neuronal activity,” neuroscientists paid attention. The authors claimed that DIANA detected MRI signals in the brain that corresponded to the electrical signals of neurons, and that it acquired signals far faster than the methods now used for functional MRI.

“Everyone wants this,” Jasanoff says. “If we could look at the whole brain and follow its activity with millisecond precision and know that all the signals that we’re seeing have to do with cellular activity, this would be just wonderful. It could tell us all kinds of things about how the brain works and what goes wrong in disease.”

Jasanoff adds that from the initial report, it was not clear what brain changes DIANA was detecting to produce such a rapid readout of neural activity. Curious, he and his team began to experiment with the method. “We wanted to reproduce it, and we wanted to understand how it worked,” he says.

Decoding DIANA

Recreating the MRI procedure reported by DIANA’s developers, postdoctoral researcher Valerie Doan Phi Van imaged the brain of a rat as an electric stimulus was delivered to one paw. Phi Van says she was excited to see an MRI signal appear in the brain’s sensory cortex, exactly when and where neurons were expected to respond to the sensation on the paw. “I was able to reproduce it,” she says. “I could see the signal.”

With further tests of the system, however, her enthusiasm waned. To investigate the source of the signal, she disconnected the device used to stimulate the animal’s paw, then repeated the imaging. Again, signals showed up in the sensory processing part of the brain. But this time, there was no reason for neurons in that area to be activated. In fact, Phi Van found, the MRI produced the same kinds of signals when the animal inside the scanner was replaced with a tube of water. It was clear DIANA’s functional signals were not arising from neural activity.

Phi Van traced the source of the spurious signals to the pulse program that directs DIANA’s imaging process, detailing the sequence of steps the MRI scanner uses to collect data. Embedded within DIANA’s pulse program was a trigger for the device that delivers sensory input to the animal inside the scanner. That synchronizes the two processes, so the stimulation occurs at a precise moment during data acquisition. That trigger appeared to be causing signals that DIANA’s developers had concluded indicated neural activity.
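The general failure mode is easy to demonstrate with a toy simulation (illustrative only; it does not reproduce DIANA’s actual pulse sequence): if acquisition injects even a tiny deflection locked to the trigger time, trial averaging will sharpen it into a crisp “response” with no neural signal present at all.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_time = 200, 100

# Pure noise: the "subject" could just as well be a tube of water.
trials = rng.standard_normal((n_trials, n_time))

# A small deflection locked to the trigger at t = 50 (a hypothetical
# acquisition artifact, injected identically on every trial).
artifact = np.zeros(n_time)
artifact[50:55] = 0.2
trials += artifact

# Averaging across trials suppresses the noise but not the locked artifact,
# so a clear "response" emerges at the trigger time.
avg = trials.mean(axis=0)
print(f"baseline mean {avg[:50].mean():+.3f}, "
      f"trigger-window mean {avg[50:55].mean():+.3f}")
```

This is why the control experiments described here, such as disconnecting the stimulator, scanning a water phantom, and editing the trigger in the pulse program, are so diagnostic.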


Phi Van altered the pulse program, changing the way the stimulator was triggered. Using the updated program, the MRI scanner detected no functional signal in the brain in response to the same paw stimulation that had produced a signal before. “If you take this part of the code out, then the signal will also be gone. So that means the signal we see is an artifact of the trigger,” she says.

Jasanoff and Phi Van went on to find reasons why other researchers have struggled to reproduce the results of the original DIANA report, noting that the trigger-generated signals can disappear with slight variations in the imaging process. With their postdoctoral colleague Sajal Sen, they also found evidence that cellular changes that DIANA’s developers had proposed might give rise to a functional MRI signal were not related to neuronal activity.

Jasanoff and Phi Van say it was important to share their findings with the research community, particularly as efforts continue to develop new neuroimaging methods. “If people want to try to repeat any part of the study or implement any kind of approach like this, they have to avoid falling into these pits,” Jasanoff says. He adds that they admire the authors of the original study for their ambition: “The community needs scientists who are willing to take risks to move the field ahead.”

Beyond the brain

This story also appears in the Spring 2024 issue of BrainScan.

___

Like many people, graduate student Guillermo Herrera-Arcos found himself working from home in the spring of 2020. Surrounded by equipment he’d hastily borrowed from the lab, he began testing electrical components he would need to control muscles in a new way. If it worked, he and colleagues in Hugh Herr’s lab might have found a promising strategy for restoring movement when signals from the brain fail to reach the muscles, such as after a spinal cord injury or stroke.

Man holds a fiber that is illuminated with blue light at its tip.
Guillermo Herrera-Arcos, a graduate student in Hugh Herr’s lab, is developing an optical technology with the potential to restore movement in people with spinal cord injury or stroke. Photo: Steph Stevens

Herrera-Arcos and Herr’s work is one way McGovern neuroscientists are working at the interface of brain and machine. Such work aims to enable better ways of understanding and treating injury and disease, offering scientists tools to manipulate neural signaling as well as to replace its function when it is lost.

Restoring movement

The system Herrera-Arcos and Herr were developing wouldn’t be the first to bypass the brain to move muscles. Neuroprosthetic devices that use electricity to stimulate muscle-activating motor neurons are sometimes used during rehabilitation from an injury, helping patients maintain muscle mass when they can’t use their muscles on their own. But existing neuroprostheses lack the precision of the body’s natural movement system. They send all-or-nothing signals that quickly tire muscles out.

Two men looking at a computer screen, one points to the image on the screen.
Hugh Herr (left) and graduate student Guillermo Herrera-Arcos at work in the lab. Photo: Steph Stevens

Researchers attribute that fatigue to an unnatural recruitment of neurons and muscle fibers. Electrical signals go straight to the largest, most powerful components of the system, even when smaller units could do the job. “You turn up the stimulus and you get no force, and then suddenly, you get too much force. And then fatigue, a lack of controllability, and so on,” Herr explains. The nervous system, in contrast, calls first on small motor units and recruits larger ones only when needed to generate more force.
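The difference between the two recruitment orders can be sketched with hypothetical motor-unit forces. Under the size principle, cumulative force grows in small graded steps; electrical stimulation, recruiting the largest units first, jumps almost immediately to near-maximal force:

```python
import numpy as np

# Hypothetical twitch forces for six motor units (arbitrary units).
forces = np.array([1, 2, 4, 8, 16, 32], dtype=float)

# Natural (size principle): smallest units recruited first.
natural = np.cumsum(np.sort(forces))
# Electrical stimulation: largest, lowest-threshold axons recruited first.
electrical = np.cumsum(np.sort(forces)[::-1])

print("natural:   ", natural)     # [ 1.  3.  7. 15. 31. 63.]
print("electrical:", electrical)  # [32. 48. 56. 60. 62. 63.]
```

In this toy example the first electrical increment already delivers half of maximal force, matching Herr’s description of getting no force and then suddenly too much, while constantly driving the largest, most fatigable fibers.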

Optical solution

In hopes of recreating this strategic pattern of muscle activation, Herr and Herrera-Arcos turned to a technique pioneered by McGovern Investigator Edward Boyden that has become a mainstay of neuroscience research: controlling neural activity with light. To put neurons under their control, researchers equip them with light-sensitive proteins. The cells can then be switched on or off within milliseconds using an optic fiber.

When a return to the lab enabled Herr and Herrera-Arcos to test their idea, they were thrilled with the results. Using light to switch on motor neurons and stimulate a single muscle in mice, they recreated the nervous system’s natural muscle activation pattern. Consequently, fatigue did not set in nearly as quickly as it would with an electrically activated system. Herrera-Arcos says he set out to measure the force generated by the muscle and how long it took to fatigue, and he had to keep extending his experiments: after an hour of light stimulation, the muscle was still going strong.

To optimize the force generated by the system, the researchers used feedback from the muscle to modulate the intensity of the neuron-activating light. Their success suggests this type of closed-loop system could enable fatigue-resistant neuroprostheses for muscle control.
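Such a closed loop can be sketched as a simple proportional controller. This is a toy model with a made-up saturating force response, not the lab’s actual controller: the measured force error is fed back to adjust light intensity until the muscle holds a target force.

```python
def muscle_force(intensity):
    """Toy saturating force response to light intensity (hypothetical)."""
    return 100.0 * intensity / (intensity + 1.0)

target = 60.0    # desired force (arbitrary units)
intensity = 0.0  # optical stimulation intensity
gain = 0.01      # proportional feedback gain

for _ in range(200):
    error = target - muscle_force(intensity)
    intensity += gain * error  # raise the light when force falls short

print(f"force after feedback: {muscle_force(intensity):.1f}")
```

The feedback loop keeps drive as low as the task allows, which is one intuition for why a closed-loop optical system could resist fatigue better than fixed all-or-nothing electrical pulses.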

“The field has been struggling for many decades with the challenge of how to control living muscle tissue,” Herr says. “So the idea that this could be solved is very, very exciting.”

There’s work to be done to translate what the team has learned into practical neuroprosthetics for people who need them. To use light to stimulate human motor neurons, light-sensitive proteins will need to be delivered to those cells, and figuring out how to do that safely is a high priority at the K. Lisa Yang Center for Bionics, which Herr co-directs with Boyden. Success could lead to better ways of obtaining tactile and proprioceptive feedback from prosthetic limbs, as well as to control of muscles for the restoration of natural movement after spinal cord injury. “It would be a game changer for a number of conditions,” Herr says.

Gut-brain connection

While Herr’s team works where the nervous system meets the muscle, researchers in Polina Anikeeva’s lab are exploring the brain’s relationship with an often-overlooked part of the nervous system — the hundreds of millions of neurons in the gut.

“Classically, when we think of brain function in neuroscience, it is always studied in the framework of how the brain interacts with the surrounding environment and how it integrates different stimuli,” says Atharva Sahasrabudhe, a graduate student in the group. “But the brain does not function in a vacuum. It’s constantly getting and integrating signals from the peripheral organs.”

Man smiles at camera while holding up tiny devices.
Atharva Sahasrabudhe holds some of the fiber technology he developed in the Anikeeva lab. Photo: Steph Stevens

The nervous system has a particularly pronounced presence in the gut. Neurons embedded within the walls of the gastrointestinal (GI) tract monitor local conditions and relay information to the brain. This mind-body connection may help explain the GI symptoms associated with some brain-related conditions, including Parkinson’s disease, mood disorders, and autism. Researchers have yet to untangle whether GI symptoms help drive these conditions, are a consequence of them, or are coincidental. In any case, Anikeeva says, “if there is a GI connection, maybe we can tap into this connection to improve the quality of life of affected individuals.”

Flexible fibers

At the K. Lisa Yang Brain-Body Center that Anikeeva directs, studying how the gut communicates with the brain is a high priority. But most of neuroscientists’ tools are designed specifically to investigate the brain. To explore new territory, Sahasrabudhe devised a device that is compatible with the long and twisty GI tract of a mouse.

The new tool is a slender, flexible fiber equipped with light emitters for activating subsets of cells and tiny channels for delivering nutrients or drugs. Its wirelessly controlled components are embedded along its length so they can reach neurons dispersed throughout the GI tract. A more rigid probe at one end of the device is designed to monitor and manipulate neural activity in the brain, so researchers can follow the nervous system’s swift communications across the gut-brain axis.

Scientists on Anikeeva’s team are deploying the device to investigate how gut-brain communications contribute to several conditions. Postdoctoral researcher Sharmelee Selvaraji is focused on Parkinson’s disease. Like many scientists, she wonders whether the neurodegenerative movement disorder might actually start in the gut. There’s a molecular link: the misshapen protein that sickens brain cells in patients with Parkinson’s disease has been found aggregating in the gut, too. And the constipation and other GI problems that are common complaints for people with Parkinson’s disease usually start decades before the onset of motor symptoms. She hopes that by investigating gut-brain communications in a mouse model of the disease, she will uncover important clues about its origins and progression.

“We’re trying to observe the effects of Parkinson’s in the gut, and then eventually, we may be able to intervene at an earlier stage to slow down the disease progression, or even cure it,” says Selvaraji.

Meanwhile, colleagues in the lab are exploring related questions about gut-brain communications in mouse models of autism, anxiety disorders, and addiction. Others continue to focus on technology development, adding new capabilities to the gut-brain probe or applying similar engineering principles to new problems.

“We are realizing that the brain is very much connected to the rest of the body,” Anikeeva says. “There is now a lot of effort in the lab to create technology suitable for a variety of really interesting organs that will help us study brain-body connections.”

Researchers reveal roadmap for AI innovation in brain and language learning

One of the hallmarks of humanity is language, but now, powerful new artificial intelligence tools also compose poetry, write songs, and have extensive conversations with human users. Tools like ChatGPT and Gemini are widely available at the tap of a button — but just how smart are these AIs?

A new multidisciplinary research effort co-led by Anna (Anya) Ivanova, assistant professor in the School of Psychology at Georgia Tech, alongside Kyle Mahowald, an assistant professor in the Department of Linguistics at the University of Texas at Austin, is working to uncover just that.

Their results could lead to innovative AIs that are more similar to the human brain than ever before — and also help neuroscientists and psychologists who are unearthing the secrets of our own minds.

The study, “Dissociating Language and Thought in Large Language Models,” is published this week in the scientific journal Trends in Cognitive Sciences. The work is already making waves in the scientific community: an earlier preprint of the paper, released in January 2023, has been cited more than 150 times by fellow researchers. The team has continued to refine the work for this final journal publication.

“ChatGPT became available while we were finalizing the preprint,” explains Ivanova, who conducted the research while a postdoctoral researcher at MIT’s McGovern Institute. “Over the past year, we’ve had an opportunity to update our arguments in light of this newer generation of models, now including ChatGPT.”

Form versus function

The study focuses on large language models (LLMs), which include AIs like ChatGPT. LLMs are text prediction models that create writing by predicting which word comes next in a sentence — just as a cell phone keyboard or an email service like Gmail might suggest the next word you want to type. However, while this type of language learning is extremely effective at creating coherent sentences, that doesn’t necessarily signify intelligence.
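At its core, the prediction step asks: given the words so far, which word most often comes next? A minimal bigram illustration makes this concrete (a toy counting model, vastly simpler than an LLM’s learned neural network):

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows each word in the corpus.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(word):
    """Return the most frequent continuation seen in the corpus."""
    return bigrams[word].most_common(1)[0][0]

print(predict("the"))  # "cat" — it follows "the" twice, "mat" only once
```

An LLM replaces these counts with a neural network trained on vast amounts of text, but the objective is the same: score possible next words and emit likely ones.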

Ivanova’s team argues that formal competence — creating a well-structured, grammatically correct sentence — should be differentiated from functional competence — answering the right question, communicating the correct information, or appropriately communicating. They also found that while LLMs trained on text prediction are often very good at formal skills, they still struggle with functional skills.

“We humans have the tendency to conflate language and thought,” Ivanova says. “I think that’s an important thing to keep in mind as we’re trying to figure out what these models are capable of, because using that ability to be good at language, to be good at formal competence, leads many people to assume that AIs are also good at thinking — even when that’s not the case.

“It’s a heuristic that we developed when interacting with other humans over thousands of years of evolution, but now in some respects, that heuristic is broken,” Ivanova explains.

The distinction between formal and functional competence is also vital in rigorously testing an AI’s capabilities, Ivanova adds. Evaluations often don’t distinguish formal and functional competence, making it difficult to assess what factors are determining a model’s success or failure. The need to develop distinct tests is one of the team’s more widely accepted findings, and one that some researchers in the field have already begun to implement.

Creating a modular system

While the human tendency to conflate functional and formal competence may have hindered understanding of LLMs in the past, our human brains could also be the key to unlocking more powerful AIs.

Leveraging the tools of cognitive neuroscience while a postdoctoral associate at Massachusetts Institute of Technology (MIT), Ivanova and her team studied brain activity in neurotypical individuals via fMRI, and used behavioral assessments of individuals with brain damage to test the causal role of brain regions in language and cognition — both conducting new research and drawing on previous studies. The team’s results showed that human brains use different regions for functional and formal competence, further supporting this distinction in AIs.

“Our research shows that in the brain, there is a language processing module and separate modules for reasoning,” Ivanova says. This modularity could also serve as a blueprint for how to develop future AIs.

“Building on insights from human brains — where the language processing system is sharply distinct from the systems that support our ability to think — we argue that the language-thought distinction is conceptually important for thinking about, evaluating, and improving large language models, especially given recent efforts to imbue these models with human-like intelligence,” says Ivanova’s former advisor and study co-author Evelina Fedorenko, a professor of brain and cognitive sciences at MIT and a member of the McGovern Institute for Brain Research.

Developing AIs in the pattern of the human brain could help create more powerful systems — while also helping them dovetail more naturally with human users. “Generally, differences in a mechanism’s internal structure affect behavior,” Ivanova says. “Building a system that has a broad macroscopic organization similar to that of the human brain could help ensure that it might be more aligned with humans down the road.”

In the rapidly developing world of AI, these systems are ripe for experimentation. After the team’s preprint was published, OpenAI announced their intention to add plug-ins to their GPT models.

“That plug-in system is actually very similar to what we suggest,” Ivanova adds. “It takes a modularity approach where the language model can be an interface to another specialized module within a system.”

While the OpenAI plug-in system will include features like booking flights and ordering food, rather than cognitively inspired features, it demonstrates that “the approach has a lot of potential,” Ivanova says.
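The modular design Ivanova describes, a language model serving as an interface that hands requests off to specialized modules, can be sketched in a few lines of Python. Everything below (the module names, the keyword-based router, the toy arithmetic evaluator) is a hypothetical illustration of the idea, not OpenAI’s plug-in API:

```python
# Hypothetical sketch of a modular system: a language-facing interface
# dispatches each request to a specialized, non-linguistic module.
# The router and module names are illustrative assumptions only.

def math_module(query: str) -> str:
    """Stand-in 'reasoning' module: evaluates a simple arithmetic expression."""
    expr = query.split("compute")[-1].strip(" ?")
    # Restricted eval: no builtins available, arithmetic only (demo purposes).
    return str(eval(expr, {"__builtins__": {}}))

def chitchat_module(query: str) -> str:
    """Fallback 'language' module for purely linguistic requests."""
    return f"Paraphrasing your request: {query!r}"

def language_interface(query: str) -> str:
    """Route the query to whichever specialized module can handle it."""
    if "compute" in query:
        return math_module(query)
    return chitchat_module(query)

print(language_interface("please compute 2 + 3 * 4"))  # routed to math module
print(language_interface("tell me about brains"))      # handled linguistically
```

The design point is the separation itself: the interface only decides where a request belongs, while the competence to answer it lives in the specialized module.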

The future of AI — and what it can tell us about ourselves

While our own brains might be the key to unlocking better, more powerful AIs, these AIs might also help us better understand ourselves. “When researchers try to study the brain and cognition, it’s often useful to have some smaller system where you can actually go in and poke around and see what’s going on before you get to the immense complexity,” Ivanova explains.

However, because human language is unique, animal and other model systems are difficult to relate to it. That’s where LLMs come in.

“There are lots of surprising similarities between how one would approach the study of the brain and the study of an artificial neural network” like a large language model, she adds. “They are both information processing systems that have biological or artificial neurons to perform computations.”

In many ways, the human brain is still a black box, but openly available AIs offer a unique opportunity to look inside a synthetic system, modify its variables, and explore these corresponding processes like never before.

“It’s a really wonderful model that we have a lot of control over,” Ivanova says. “Neural networks — they are amazing.”

Along with Anna (Anya) Ivanova, Kyle Mahowald, and Evelina Fedorenko, the research team also includes Idan Blank (University of California, Los Angeles), as well as Nancy Kanwisher and Joshua Tenenbaum (Massachusetts Institute of Technology).

Honoring a visionary

Today marks the 10th anniversary of the passing of Pat McGovern, an extraordinary visionary and philanthropist whose legacy continues to inspire and impact the world. As the founder of International Data Group (IDG)—a premier information technology organization—McGovern was not just a pioneering figure in the technology media world, but also a passionate advocate for using technology for the greater good.

Under McGovern’s leadership, IDG became a global powerhouse, launching iconic publications such as Computerworld, Macworld, and PCWorld. His foresight also led to the creation of IDG Ventures, a network of venture funds around the world, including the notable IDG Capital in Beijing.

Beyond his remarkable business acumen, McGovern, with his wife, Lore, co-founded the McGovern Institute for Brain Research at MIT in 2000. This institute has been at the forefront of neuroscience research, contributing to groundbreaking advancements in perception, attention, memory, and artificial intelligence (AI), as well as discoveries with direct translational impact, such as CRISPR technology. CRISPR discoveries made at the McGovern Institute are now licensed for the first clinical application of genome editing in sickle cell disease.

Pat McGovern’s commitment to bettering humanity is further evidenced by the Patrick J. McGovern Foundation, which works in partnership with public, private, and social institutions to drive progress on our most pressing challenges through the use of artificial intelligence, data science, and key emerging technologies.

Remembering Pat McGovern

On this solemn anniversary, we reflect on Pat McGovern’s enduring influence through the words of those who knew him best.

Lore Harp McGovern
Co-founder and board member of the McGovern Institute for Brain Research

“Technology was Pat’s medium, the platform on which he built his amazing company 60 years ago. But it was people who truly motivated Pat, and he empowered and encouraged them to reach for the stars. He lived by the motto, ‘let’s try it,’ and believed that nothing was out of bounds. His goal was to help create a more just and peaceful world, and establishing the McGovern Institute was our way to give back meaningfully to this world. I know he would be so proud of what has been achieved and what is yet to come.”

Robert Desimone
Director of the McGovern Institute for Brain Research

“Pat McGovern had a vision for an international community of scientists and students drawn together to collaborate on understanding the brain.  This vision has been realized in the McGovern Institute, and we are now seeing the profound advances in our understanding of the brain and even clinical applications that Pat predicted would follow.”

Hugo Shong
Chairman of IDG Capital

“Pat’s impact on technology, science and research is immeasurable. A man of tremendous vision, he grew IDG out of Massachusetts and made it into one of the world’s most recognized brands in its space, forging partnerships and winning friends wherever he went. He applied that very same vision and energy to the McGovern Institute and the Patrick J. McGovern Foundation, in support of their impressive and necessary causes. I know he would be extremely proud of what both organizations have achieved thus far, and particularly how their work has broken technological frontiers and bettered the lives of millions.”

Vilas Dhar
President of the Patrick J. McGovern Foundation

“Patrick J. McGovern was more than a tech mogul; he was a visionary who believed in the power of information to empower people and improve societies. His work has had a profound effect on public policy and education, laying the groundwork for a more informed and connected world and guiding our work to ensure that artificial intelligence is used to sustain a human-centered world that creates economic and social opportunity for all.  On a personal level, Pat’s leadership was characterized by a genuine care for his employees and a belief in their potential. He created a culture of curiosity, encouraging humanity to explore, innovate, and dream big. His spirit lives on in every philanthropic activity we undertake.”

Genevieve Juillard
CEO of IDG 

“The legacy of Pat McGovern is felt not just in Boston, but around the world—by the thousands of IDG customers and by people like me who have the privilege to work at IDG, 60 years after he founded it. His innovative spirit and unwavering commitment to excellence continue to inspire and guide us.”

Sudhir Sethi
Founder and Chairman of Chiratae Ventures (formerly IDG Ventures)

“Pat McGovern was a visionary who foresaw the potential of technology in India and nurtured the ecosystem as an active participant. Pat enabled a launchpad for Chiratae Ventures, empowering our journey to become the leading home-grown venture capital fund in India today. Pat is a role model to entrepreneurs worldwide, and we honor his legacy with our annual ‘Chiratae Ventures Patrick J. McGovern Awards’ that celebrate courage and the spirit of entrepreneurship.”

Marc Benioff
Founder and CEO of Salesforce
wrote in the book “Future Forward” that “Pat McGovern was a gift to us all, a trailblazing visionary who showed an entire generation of entrepreneurs what it means to be a principle-based leader and how to lead with higher values.”

Pat McGovern’s memory lives on not just in the institutions and innovations he fostered, but in the countless lives he touched and transformed. Today, we celebrate a man who saw the future and helped us all move towards it with hope and determination.

Do we only use 10 percent of our brain?

Movies like “Limitless” and “Lucy” play on the notion that humans use only 10 percent of their brains—and those who unlock a higher percentage wield powers like infinite memory or telekinesis. It’s enticing to think that so much of the brain remains untapped and is ripe for boosting human potential.

But the idea that we use 10 percent of our brain is 100 percent a myth.

In fact, scientists believe that we use our entire brain every day. Mila Halgren is a graduate student in the lab of Mark Harnett, an associate professor of brain and cognitive sciences and an investigator at the McGovern Institute. The Harnett lab studies the computational power of neurons, that is, how neural networks rapidly process massive amounts of information.

“All of our brain is constantly in use and consumes a tremendous amount of energy,” Halgren says. “Despite making up only two percent of our body weight, it devours 20 percent of our calories.” This doesn’t appear to change significantly with different tasks, from typing on a computer to doing yoga. “Even while we sleep, our entire brain remains intensely active.”

When did this myth take root?

Portrait of scientist Mila Halgren
Mila Halgren is a PhD student in MIT’s Department of Brain and Cognitive Sciences. Photo: Mila Halgren

The myth is thought to have gained traction when scientists first began exploring the brain’s abilities but lacked the tools to capture its exact workings. In 1907, William James, a founder of American psychology, suggested in his book “The Energies of Men” that “we are making use of only a small part of our possible mental and physical resources.” This influential work likely sparked the idea that humans access a mere fraction of the brain—setting this common misconception ablaze.

Brainpower lore even suggests that Albert Einstein credited his genius to being able to access more than 10 percent of his brain. However, no such quote has been documented and this too is perhaps a myth of cosmic proportion.

Halgren believes that there may be some fact backing this fiction. “People may think our brain is underutilized in the sense that some neurons fire very infrequently—once every few minutes or less. But this isn’t true of most neurons, some of which fire hundreds of times per second,” she says.

In the nascent years of neuroscience, scientists also argued that a large portion of the brain must be inactive because some people experience brain injuries and can still function at a high level, like the famous case of Phineas Gage. Halgren points to the brain’s remarkable plasticity—the reshaping of neural connections. “Entire brain hemispheres can be removed during early childhood and the rest of the brain will rewire and compensate for the loss. In other words, the brain will use 100 percent of what it has, but can make do with less depending on which structures are damaged.”

Is there a limit to the brain?

If we indeed use our entire brain, can humans eventually solve any problem? Or are there enigmas in the world that we will never unravel?

“This is still in contention,” Halgren says. “There may be certain problems that the human brain is fundamentally unable to solve, like how a mouse will never understand chemistry and a chimpanzee can’t do calculus.”

Can we increase our brainpower?

The brain may have its limits, but there are ways to boost our cognitive prowess to ace that midterm or crank up productivity in the workplace. According to Halgren, “You can increase your brainpower, but there’s no ‘trick’ that will allow you to do so. Like any organ in your body, the brain works best with proper sleep, exercise, low stress, and a well-balanced diet.”

The truth is, we may never rearrange furniture with our minds or foresee which team will win the Super Bowl. The idea of a largely latent brain is draped in fantasy, but debunking this myth speaks to the immense growth of neuroscience over the years—and the allure of other misconceptions that scientists have yet to demystify.

The brain runs an internal simulation to keep track of time

Clocks, computers, and metronomes can keep time with exquisite precision. But even in the absence of an external timekeeper, we can track time on our own. We know when minutes or hours have elapsed, and we can maintain a rhythm when we dance, sing, or play music. Now, neuroscientists at the National Autonomous University of Mexico and MIT’s McGovern Institute have discovered one way the brain keeps a beat: It runs an internal simulation, mentally recreating the perception of an external rhythm and preparing an appropriately timed response.

The discovery, reported January 10, 2024, in the journal Science Advances, illustrates how animals can think about imaginary events and use an internal model to guide their interactions with the world. “It’s a real indication of mental states as an independent driver of behavior,” says neuroscientist Mehrdad Jazayeri, an investigator at the McGovern Institute and an associate professor of brain and cognitive sciences at MIT.

Predicting the future

Jazayeri teamed up with Victor de Lafuente, a neuroscientist at the National Autonomous University of Mexico, to investigate the brain’s time-keeping ability. De Lafuente, who led the study, says they were motivated by curiosity about how the brain makes predictions and prepares for future states of the world.

De Lafuente and his team used a visual metronome to teach monkeys a simple rhythm, showing them a circle that moved between two positions on a screen to set a steady tempo. Then the metronome stopped. After a variable and unpredictable pause, the monkeys were asked to indicate where the dot would be if the metronome had carried on.

Monkeys do well at this task, successfully keeping time after the metronome stops. After the waiting period, they are usually able to identify the expected position of the circle, which they communicate by reaching towards a touchscreen.

To find out how the animals were keeping track of the metronome’s rhythm, de Lafuente’s group monitored their brain activity. In several key brain regions, they found rhythmic patterns of activity that oscillated at the same frequency as the metronome. This occurred while the monkeys watched the metronome. More remarkably, it continued after the metronome had stopped.

“The animal is seeing things going and then things stop. What we find in the brain is the continuation of that process in the animal’s mind,” Jazayeri says. “An entire network is replicating what it was doing.”

That was true in the visual cortex, where clusters of neurons respond to stimuli in specific spots within the eyes’ field of view. One set of cells in the visual cortex fired when the metronome’s circle was on the left of the screen; another set fired when the dot was on the right. As a monkey followed the visual metronome, the researchers could see these cells’ activity alternating rhythmically, tracking the movement. When the metronome stopped, the back-and-forth neural activity continued, maintaining the rhythm. “Once the stimulus was no longer visible, they were seeing the stimulus within their minds,” de Lafuente says.

They found something similar in the brain’s motor cortex, where movements are prepared and executed. De Lafuente explains that the monkeys are motionless for most of their time-keeping task; only when they are asked to indicate where the metronome’s circle should be do they move a hand to touch the screen. But the motor cortex was engaged even before it was time to move. “Within their brains there is a signal that is switching from the left to the right,” he says. “So the monkeys are thinking ‘left, right, left, right’—even when they are not moving and the world is constant.”

While some scientists have proposed that the brain may have a central time-keeping mechanism, the team’s findings indicate that entire networks can be called on to track the passage of time. The monkeys’ model of the future was surprisingly explicit, de Lafuente says, representing specific sensory stimuli and plans for movement. “This offers a potential solution to mentally tracking the dynamics in the world, which is to basically think about them in terms of how they actually would have happened,” Jazayeri says.

 

Margaret Livingstone awarded the 2024 Scolnick Prize in Neuroscience

Today the McGovern Institute at MIT announces that the 2024 Edward M. Scolnick Prize in Neuroscience will be awarded to Margaret Livingstone, Takeda Professor of Neurobiology at Harvard Medical School. The Scolnick Prize is awarded annually by the McGovern Institute for outstanding achievements in neuroscience.

“Margaret Livingstone’s driven curiosity and original experimental approaches have led to fundamental advances in our understanding of visual perception,” says Robert Desimone, director of the McGovern Institute and chair of the selection committee. “In particular, she has made major advances in resolving a long-standing debate over whether the brain domains and neurons that are specifically tuned to detect facial features are present from birth or arise from experience. Her developmental research shows that the cerebral cortex already contains topographic sensory maps at birth but that domain-specific maps, for example to recognize facial-features, require experience and sensory input to develop normally.”

“Margaret Livingstone’s driven curiosity and original experimental approaches have led to fundamental advances in our understanding of visual perception.” — Robert Desimone

Livingstone received a BS from MIT in 1972 and, under the mentorship of Edward Kravitz, a PhD in neurobiology from Harvard University in 1981. Her doctoral research in lobsters showed that the biogenic amines serotonin and octopamine control context-dependent behaviors such as offensive versus defensive postures. She followed up on this discovery as a postdoctoral fellow with Prof. William Quinn at Princeton University, researching biogenic amine signaling in learning and memory. Using learning and memory mutants created in the fruit fly model, she identified defects in dopamine-synthesizing enzymes and in calcium-dependent enzymes that produce cAMP. Her results supported the then-burgeoning idea that biogenic amines, signaling through second messengers, enable behavioral plasticity.

To test whether biogenic amines also control neuronal function in mammals, Livingstone moved back to Harvard Medical School in 1983 to study the effects of sleep on visual processing with David Hubel, who was studying neuronal activity in the nonhuman primate visual cortex. Over the course of a 20-year collaboration, Livingstone and Hubel showed that the visual system is functionally and anatomically divided into parallel pathways that detect and process the distinct visual features of color, motion, and orientation.

Livingstone quickly rose through the academic ranks at Harvard to be appointed as an instructor and then assistant professor in 1983, associate professor in 1986 and full professor in 1988. With her own laboratory, Livingstone began to explore the organization of face-perception domains in the inferotemporal cortex of nonhuman primates. By combining single-cell recording and fMRI brain imaging data from the same animal, her then graduate student Doris Tsao, in collaboration with Winrich Freiwald, showed that an abundance of individual neurons within the face-recognition domain are tuned to a combination of facial features. These results helped to explain the long-standing question of how individual neurons show such exquisite selectivity to specific faces.

Three images of Mona Lisa, side by side, each with a different filter slightly obscuring the face.
Mona Lisa’s smile has been described as mysterious and fleeting because it seems to disappear when viewers look directly at it. Livingstone showed that Mona Lisa’s smile is more apparent in our peripheral vision than our central (or foveal) vision because our peripheral vision is more sensitive to low spatial frequencies, or shadows and shadings of black and white. These shadows make her lips seem to turn upward into a subtle smile. The three images above show the painting filtered to reveal very low spatial frequency features (left, with the smile more apparent) to high spatial frequency features (right, with the smile being less visible). Image: Margaret Livingstone
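The spatial-frequency separation described in the caption can be reproduced with a few lines of code: Gaussian blurring keeps the low spatial frequencies (the coarse shading that peripheral vision emphasizes), and subtracting the blur from the original leaves the high frequencies. This is a minimal sketch using a random array as a stand-in for the painting; the kernel size and sigma are arbitrary illustrative choices:

```python
# Minimal sketch: split an image into low and high spatial frequencies.
# A random array stands in for the painting; kernel size/sigma are arbitrary.
import numpy as np

def gaussian_kernel(size: int = 9, sigma: float = 2.0) -> np.ndarray:
    """Normalized 2-D Gaussian kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def low_pass(image: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Blur = keep only low spatial frequencies (the 'peripheral vision' view)."""
    k = gaussian_kernel(sigma=sigma)
    pad = k.shape[0] // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = (padded[i:i + k.shape[0], j:j + k.shape[1]] * k).sum()
    return out

rng = np.random.default_rng(0)
img = rng.random((32, 32))
low = low_pass(img)   # coarse shading, where the "smile" signal would live
high = img - low      # fine detail, what foveal vision emphasizes
print(low.std() < img.std())  # blurring reduces contrast: prints True
```

By construction the two bands sum back to the original image, which is why the three filtered versions of the painting together carry all of its detail.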

In researching face patches, Livingstone became fascinated with the question of whether face-perception domains are present from birth, as many scientists thought at the time. Livingstone and her postdoc Michael Arcaro carried out experiments that showed that the development of face patches requires visual exposure to faces in the early postnatal period. Moreover, they showed that entirely unnatural symbol-specific domains can form in animals that experienced intensive visual exposure to symbols early in development. Thus, experience is both necessary and sufficient for the formation of feature-specific domains in the inferotemporal cortex. Livingstone’s results support a consistent principle for the development of higher-level cortex, from a hard-wired sensory topographic map present at birth to the formation of experience-dependent domains that detect combined, stimulus-specific features.

Livingstone is also known for her scientifically based exploration of the visual arts. Her book “Vision and Art: The Biology of Seeing,” which has sold more than 40,000 copies to date, explores how both the techniques artists use and our anatomy and physiology influence our perception of art. Livingstone has presented this work to audiences around the country, from Pixar Studios, Microsoft, and IBM to The Metropolitan Museum of Art, The National Gallery, and The Hirshhorn Museum.

In 2014, Livingstone was awarded the Takeda Professorship of Neurobiology at Harvard Medical School. She was awarded the Mika Salpeter Lifetime Achievement Award from the Society for Neuroscience in 2011, the Grossman Award from the Society of Neurological Surgeons in 2013, and the Roberts Prize for Best Paper in Physics in Medicine and Biology in 2013 and 2016. Livingstone was elected fellow of the American Academy of Arts and Sciences in 2018 and of the National Academy of Sciences in 2020. She will be awarded the Scolnick Prize in the spring of 2024.

Calling neurons to attention

The world assaults our senses, exposing us to more noise and color and scents and sensations than we can fully comprehend. Our brains keep us tuned in to what’s important, letting less relevant sights and sounds fade into the background while we focus on the most salient features of our surroundings. Now, scientists at MIT’s McGovern Institute have a better understanding of how the brain manages this critical task of directing our attention.

In the January 15, 2023, issue of the journal Neuron, a team led by Diego Mendoza-Halliday, a research scientist in McGovern Institute Director Robert Desimone’s lab, reports on a group of neurons in the brain’s prefrontal cortex that are critical for directing an animal’s visual attention. Their findings not only demonstrate this brain region’s important role in guiding attention, but also help establish attention as a function that is distinct from other cognitive functions, such as short-term memory, in the brain.

Attention and working memory

Mendoza-Halliday, who is now an assistant professor at the University of Pittsburgh, explains that attention has a close relationship to working memory, which the brain uses to temporarily store information after our senses take it in. The two brain functions strongly influence one another: We’re more likely to remember something if we pay attention to it, and paying attention to certain features of our environment may involve representing those features in our working memory. For example, he explains, both attention and working memory are called on when searching for a triangular red keychain on a cluttered desk: “What my brain does is it remembers that my keyholder is red and it’s a triangle, and then builds a working memory representation and uses it as a search template. So now everything that is red and everything that is a triangle receives preferential processing, or is attended to.”

Working memory and attention are so closely associated that some neuroscientists have proposed that the brain calls on the same neural mechanisms to create them. “This has led to the belief that maybe attention and working memory are just two sides of the same coin—that they’re basically the same function in different modes,” Mendoza-Halliday says. His team’s findings, however, say otherwise.

Circuit manipulation

To study the origins of attention in the brain, Mendoza-Halliday and colleagues trained monkeys to focus their attention on a visual feature that matches a cue they have seen before. After seeing a set of dots move across the screen, they must call on their working memory to remember the direction of that movement for a few seconds while the screen goes blank. Then the experimenters present the animals with more moving dots, this time traveling in multiple directions. By focusing on the dots moving in the same direction as the first set they saw, the monkeys are able to recognize when those dots briefly accelerate. Reporting on the speed change earns the animals a reward.

While the monkeys performed this task, the researchers monitored cells in several brain regions, including the prefrontal cortex, which Desimone’s team has proposed plays a role in directing attention. The activity patterns they recorded suggested that distinct groups of cells participated in the attention and working memory aspects of the task.

To better understand those cells’ roles, the researchers manipulated their activity. They used optogenetics, an approach in which a light-sensitive protein is introduced into neurons so that they can be switched on or off with a pulse of light. Desimone’s lab, in collaboration with Edward Boyden, the Y. Eva Tan Professor in Neurotechnology at MIT and a member of the McGovern Institute, pioneered the use of optogenetics in primates. “Optogenetics allows us to distinguish between correlation and causality in neural circuits,” says Desimone, the Doris and Don Berkey Professor of Neuroscience and a professor of brain and cognitive sciences at MIT. “If we turn off a circuit using optogenetics, and the animal can no longer perform the task, that is good evidence for a causal role of the circuit.”

Using this optogenetic method, they switched off neurons in a specific portion of the brain’s lateral prefrontal cortex for a few hundred milliseconds at a time as the monkeys performed their dot-tracking task. The researchers found that they could switch off signaling from the lateral prefrontal cortex early, when the monkeys needed their working memory but had no dots to attend to, without interfering with the animals’ ability to complete the task. But when they blocked signaling when the monkeys needed to focus their attention, the animals performed poorly.

The team also monitored activity in the brain’s visual cortex during the moving-dot task. When the lateral prefrontal cortex was shut off, neurons in connected visual areas showed less heightened reactivity to movement in the direction the monkey was attending to. Mendoza-Halliday says this suggests that cells in the lateral prefrontal cortex are important for telling sensory-processing circuits which visual features to pay attention to.

The discovery that at least part of the brain’s lateral prefrontal cortex is critical for attention but not for working memory offers a new view of the relationship between the two. “It is a physiological demonstration that working memory and attention cannot be the same function, since they rely on partially separate neuronal populations and neural mechanisms,” Mendoza-Halliday says.