Reevaluating an approach to functional brain imaging

A new way of imaging the brain with magnetic resonance imaging (MRI) does not directly detect neural activity as originally reported, according to scientists at MIT’s McGovern Institute. The method, first described in 2022, generated excitement within the neuroscience community as a potentially transformative approach. But a study from the lab of McGovern Associate Investigator Alan Jasanoff, reported March 27, 2024, in the journal Science Advances, demonstrates that MRI signals produced by the new method are generated in large part by the imaging process itself, not neuronal activity.

Alan Jasanoff, associate investigator at the McGovern Institute and a professor of brain and cognitive sciences, biological engineering, and nuclear science and engineering at MIT. Photo: Justin Knight

Jasanoff explains that having a noninvasive means of seeing neuronal activity in the brain is a long-sought goal for neuroscientists. The functional MRI methods that researchers currently use to monitor brain activity don’t actually detect neural signaling. Instead, they use blood flow changes triggered by brain activity as a proxy. This reveals which parts of the brain are engaged during imaging, but it cannot pinpoint neural activity to precise locations, and it is too slow to truly track neurons’ rapid-fire communications.

So when a team of scientists reported in Science a new MRI method called DIANA, for “direct imaging of neuronal activity,” neuroscientists paid attention. The authors claimed that DIANA detected MRI signals in the brain that corresponded to the electrical signals of neurons, and that it acquired signals far faster than the methods now used for functional MRI.

“Everyone wants this,” Jasanoff says. “If we could look at the whole brain and follow its activity with millisecond precision and know that all the signals that we’re seeing have to do with cellular activity, this would be just wonderful. It could tell us all kinds of things about how the brain works and what goes wrong in disease.”

Jasanoff adds that from the initial report, it was not clear what brain changes DIANA was detecting to produce such a rapid readout of neural activity. Curious, he and his team began to experiment with the method. “We wanted to reproduce it, and we wanted to understand how it worked,” he says.

Decoding DIANA

Recreating the MRI procedure reported by DIANA’s developers, postdoctoral researcher Valerie Doan Phi Van imaged the brain of a rat as an electric stimulus was delivered to one paw. Phi Van says she was excited to see an MRI signal appear in the brain’s sensory cortex, exactly when and where neurons were expected to respond to the sensation on the paw. “I was able to reproduce it,” she says. “I could see the signal.”

With further tests of the system, however, her enthusiasm waned. To investigate the source of the signal, she disconnected the device used to stimulate the animal’s paw, then repeated the imaging. Again, signals showed up in the sensory processing part of the brain. But this time, there was no reason for neurons in that area to be activated. In fact, Phi Van found, the MRI produced the same kinds of signals when the animal inside the scanner was replaced with a tube of water. It was clear DIANA’s functional signals were not arising from neural activity.

Phi Van traced the source of the spurious signals to the pulse program that directs DIANA’s imaging process, which details the sequence of steps the MRI scanner follows to collect data. Embedded within DIANA’s pulse program was a trigger for the device that delivers sensory input to the animal inside the scanner. The trigger synchronizes the two processes, so the stimulation occurs at a precise moment during data acquisition. It appeared to be causing the signals that DIANA’s developers had interpreted as neural activity.
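
To see how such a trigger can masquerade as brain activity, consider a toy simulation (ours, not DIANA’s actual pulse program, with invented values): any glitch that is time-locked to acquisition survives trial averaging, while unsynchronized noise averages away.

```python
# Toy simulation: a small, trigger-locked glitch looks like a robust
# "response" after averaging across trials. Values are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_samples, trigger_idx = 200, 100, 40

trials = rng.normal(0.0, 1.0, (n_trials, n_samples))  # background noise
trials[:, trigger_idx] += 0.5  # tiny artifact at the trigger's fixed latency

avg = trials.mean(axis=0)  # noise shrinks ~1/sqrt(n_trials); the glitch doesn't
print(avg.argmax() == trigger_idx)  # the averaged "signal" peaks at the trigger
```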


Phi Van altered the pulse program, changing the way the stimulator was triggered. Using the updated program, the MRI scanner detected no functional signal in the brain in response to the same paw stimulation that had produced a signal before. “If you take this part of the code out, then the signal will also be gone. So that means the signal we see is an artifact of the trigger,” she says.

Jasanoff and Phi Van went on to find reasons why other researchers have struggled to reproduce the results of the original DIANA report, noting that the trigger-generated signals can disappear with slight variations in the imaging process. With their postdoctoral colleague Sajal Sen, they also found evidence that cellular changes that DIANA’s developers had proposed might give rise to a functional MRI signal were not related to neuronal activity.

Jasanoff and Phi Van say it was important to share their findings with the research community, particularly as efforts continue to develop new neuroimaging methods. “If people want to try to repeat any part of the study or implement any kind of approach like this, they have to avoid falling into these pits,” Jasanoff says. He adds that they admire the authors of the original study for their ambition: “The community needs scientists who are willing to take risks to move the field ahead.”

Beyond the brain


Like many people, graduate student Guillermo Herrera-Arcos found himself working from home in the spring of 2020. Surrounded by equipment he’d hastily borrowed from the lab, he began testing electrical components he would need to control muscles in a new way. If it worked, he and colleagues in Hugh Herr’s lab might have found a promising strategy for restoring movement when signals from the brain fail to reach the muscles, such as after a spinal cord injury or stroke.

Guillermo Herrera-Arcos, a graduate student in Hugh Herr’s lab, is developing an optical technology with the potential to restore movement in people with spinal cord injury or stroke. Photo: Steph Stevens

Herrera-Arcos and Herr’s project is one way McGovern neuroscientists are working at the interface of brain and machine. Research at this interface aims to enable better ways of understanding and treating injury and disease, giving scientists tools to manipulate neural signaling as well as to replace its function when it is lost.

Restoring movement

The system Herrera-Arcos and Herr were developing wouldn’t be the first to bypass the brain to move muscles. Neuroprosthetic devices that use electricity to stimulate muscle-activating motor neurons are sometimes used during rehabilitation from an injury, helping patients maintain muscle mass when they can’t use their muscles on their own. But existing neuroprostheses lack the precision of the body’s natural movement system. They send all-or-nothing signals that quickly tire muscles out.

Hugh Herr (left) and graduate student Guillermo Herrera-Arcos at work in the lab. Photo: Steph Stevens

Researchers attribute that fatigue to an unnatural recruitment of neurons and muscle fibers. Electrical signals go straight to the largest, most powerful components of the system, even when smaller units could do the job. “You turn up the stimulus and you get no force, and then suddenly, you get too much force. And then fatigue, a lack of controllability, and so on,” Herr explains. The nervous system, in contrast, calls first on small motor units and recruits larger ones only when needed to generate more force.
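
The contrast Herr describes can be sketched in a few lines of code; the unit sizes and threshold below are invented for illustration, not measured values.

```python
# Hypothetical motor units, ordered from smallest to largest peak force.
units = [1, 2, 4, 8, 16]  # arbitrary force units

def natural_recruitment(demand):
    """Size principle: recruit small units first, larger ones only as needed."""
    force = 0.0
    for u in units:
        if force >= demand:
            break
        force += min(u, demand - force)  # graded contribution
    return force

def electrical_recruitment(stimulus):
    """Crude model of external stimulation: past a threshold, the large,
    current-sensitive units all switch on at once."""
    return sum(units) if stimulus > 0.5 else 0.0

for demand in (0.5, 3.0, 10.0):
    print(demand, natural_recruitment(demand), electrical_recruitment(demand))
```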

Optical solution

In hopes of recreating this strategic pattern of muscle activation, Herr and Herrera-Arcos turned to a technique pioneered by McGovern Investigator Edward Boyden that has since become a common research tool: controlling neural activity with light. To put neurons under their control, researchers equip them with light-sensitive proteins. The cells can then be switched on or off within milliseconds using light delivered through an optical fiber.

When a return to the lab enabled Herr and Herrera-Arcos to test their idea, they were thrilled with the results. Using light to switch on motor neurons and stimulate a single muscle in mice, they recreated the nervous system’s natural muscle activation pattern. Consequently, fatigue did not set in nearly as quickly as it would with an electrically activated system. Herrera-Arcos says he set out to measure the force generated by the muscle and how long it took to fatigue, and he had to keep extending his experiments: after an hour of light stimulation, the muscle was still going strong.

To optimize the force generated by the system, the researchers used feedback from the muscle to modulate the intensity of the neuron-activating light. Their success suggests this type of closed-loop system could enable fatigue-resistant neuroprostheses for muscle control.
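
A minimal sketch of such a closed loop, assuming a measurable muscle force and an adjustable light intensity; the proportional-integral gains and the toy first-order “muscle” below are our own placeholders, not the team’s controller.

```python
def run_closed_loop(target_force, steps=1000, kp=0.5, ki=2.0, dt=0.01):
    """Modulate light intensity from force feedback to approach a target force."""
    intensity, force, integral = 0.0, 0.0, 0.0
    for _ in range(steps):
        error = target_force - force      # feedback: how far from desired force
        integral += error * dt
        intensity = max(0.0, kp * error + ki * integral)  # PI control of light
        force += (intensity - force) * 0.2  # toy muscle: force tracks the drive
    return force

print(run_closed_loop(10.0))  # force approaches the 10-unit target
```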

“The field has been struggling for many decades with the challenge of how to control living muscle tissue,” Herr says. “So the idea that this could be solved is very, very exciting.”

There’s work to be done to translate what the team has learned into practical neuroprosthetics for people who need them. To use light to stimulate human motor neurons, light-sensitive proteins will need to be delivered to those cells; figuring out how to do that safely is a high priority at the K. Lisa Yang Center for Bionics, which Herr co-directs with Boyden. The approach might lead to better ways of obtaining tactile and proprioceptive feedback from prosthetic limbs, as well as to controlling muscles to restore natural movement after spinal cord injury. “It would be a game changer for a number of conditions,” Herr says.

Gut-brain connection

While Herr’s team works where the nervous system meets the muscle, researchers in Polina Anikeeva’s lab are exploring the brain’s relationship with an often-overlooked part of the nervous system — the hundreds of millions of neurons in the gut.

“Classically, when we think of brain function in neuroscience, it is always studied in the framework of how the brain interacts with the surrounding environment and how it integrates different stimuli,” says Atharva Sahasrabudhe, a graduate student in the group. “But the brain does not function in a vacuum. It’s constantly getting and integrating signals from the peripheral organs.”

Atharva Sahasrabudhe holds some of the fiber technology he developed in the Anikeeva lab. Photo: Steph Stevens

The nervous system has a particularly pronounced presence in the gut. Neurons embedded within the walls of the gastrointestinal (GI) tract monitor local conditions and relay information to the brain. This mind-body connection may help explain the GI symptoms associated with some brain-related conditions, including Parkinson’s disease, mood disorders, and autism. Researchers have yet to untangle whether GI symptoms help drive these conditions, are a consequence of them, or are coincidental. Whichever it is, Anikeeva says, “if there is a GI connection, maybe we can tap into this connection to improve the quality of life of affected individuals.”

Flexible fibers

At the K. Lisa Yang Brain-Body Center that Anikeeva directs, studying how the gut communicates with the brain is a high priority. But most of neuroscientists’ tools are designed specifically to investigate the brain. To explore new territory, Sahasrabudhe devised a device that is compatible with the long and twisty GI tract of a mouse.

The new tool is a slender, flexible fiber equipped with light emitters for activating subsets of cells and tiny channels for delivering nutrients or drugs. Wirelessly controlled components are embedded along the fiber’s length, giving researchers access to neurons dispersed throughout the GI tract. A more rigid probe at one end of the device is designed to monitor and manipulate neural activity in the brain, so researchers can follow the nervous system’s swift communications across the gut-brain axis.

Scientists on Anikeeva’s team are deploying the device to investigate how gut-brain communications contribute to several conditions. Postdoctoral researcher Sharmelee Selvaraji is focused on Parkinson’s disease. Like many scientists, she wonders whether the neurodegenerative movement disorder might actually start in the gut. There’s a molecular link: the misshapen protein that sickens brain cells in patients with Parkinson’s disease has been found aggregating in the gut, too. And the constipation and other GI problems that are common complaints for people with Parkinson’s disease often begin years, even decades, before the onset of motor symptoms. She hopes that by investigating gut-brain communications in a mouse model of the disease, she will uncover important clues about its origins and progression.

“We’re trying to observe the effects of Parkinson’s in the gut, and then eventually, we may be able to intervene at an earlier stage to slow down the disease progression, or even cure it,” says Selvaraji.

Meanwhile, colleagues in the lab are exploring related questions about gut-brain communications in mouse models of autism, anxiety disorders, and addiction. Others continue to focus on technology development, adding new capabilities to the gut-brain probe or applying similar engineering principles to new problems.

“We are realizing that the brain is very much connected to the rest of the body,” Anikeeva says. “There is now a lot of effort in the lab to create technology suitable for a variety of really interesting organs that will help us study brain-body connections.”

Researchers reveal roadmap for AI innovation in brain and language learning

One of the hallmarks of humanity is language, but now, powerful new artificial intelligence tools also compose poetry, write songs, and have extensive conversations with human users. Tools like ChatGPT and Gemini are widely available at the tap of a button — but just how smart are these AIs?

A new multidisciplinary research effort co-led by Anna (Anya) Ivanova, assistant professor in the School of Psychology at Georgia Tech, alongside Kyle Mahowald, an assistant professor in the Department of Linguistics at the University of Texas at Austin, is working to uncover just that.

Their results could lead to innovative AIs that are more similar to the human brain than ever before — and also help neuroscientists and psychologists who are unearthing the secrets of our own minds.

The study, “Dissociating Language and Thought in Large Language Models,” is published this week in the journal Trends in Cognitive Sciences. The work is already making waves in the scientific community: an earlier preprint of the paper, released in January 2023, has been cited more than 150 times by fellow researchers. The team continued to refine the work for this final journal publication.

“ChatGPT became available while we were finalizing the preprint,” explains Ivanova, who conducted the research while a postdoctoral researcher at MIT’s McGovern Institute. “Over the past year, we’ve had an opportunity to update our arguments in light of this newer generation of models, now including ChatGPT.”

Form versus function

The study focuses on large language models (LLMs), which include AIs like ChatGPT. LLMs are text prediction models: they generate writing by predicting which word comes next in a sentence, much as a phone keyboard or an email service like Gmail suggests the next word you might type. While this kind of training is extremely effective at producing coherent sentences, it doesn’t necessarily signify intelligence.
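
The objective itself is easy to illustrate. Below is a toy sketch of next-word prediction using a bigram model over a made-up corpus; real LLMs use neural networks conditioned on long contexts, but the predict-the-next-word goal is the same.

```python
# Toy next-word predictor: count which word follows which, then convert
# the counts to probabilities. A miniature stand-in for the LLM objective.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1  # how often nxt follows prev

def next_word_probs(prev):
    c = counts[prev]
    total = sum(c.values())
    return {word: n / total for word, n in c.items()}

print(next_word_probs("the"))  # {'cat': 0.67, 'mat': 0.33} (approximately)
```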

Ivanova’s team argues that formal competence — creating a well-structured, grammatically correct sentence — should be differentiated from functional competence — answering the right question, communicating the correct information, or otherwise using language appropriately. They also found that while LLMs trained on text prediction are often very good at formal skills, they still struggle with functional skills.

“We humans have the tendency to conflate language and thought,” Ivanova says. “I think that’s an important thing to keep in mind as we’re trying to figure out what these models are capable of, because using that ability to be good at language, to be good at formal competence, leads many people to assume that AIs are also good at thinking — even when that’s not the case.

“It’s a heuristic that we developed when interacting with other humans over thousands of years of evolution, but now in some respects, that heuristic is broken,” Ivanova explains.

The distinction between formal and functional competence is also vital in rigorously testing an AI’s capabilities, Ivanova adds. Evaluations often don’t distinguish formal and functional competence, making it difficult to assess what factors are determining a model’s success or failure. The need to develop distinct tests is one of the team’s more widely accepted findings, and one that some researchers in the field have already begun to implement.

Creating a modular system

While the human tendency to conflate functional and formal competence may have hindered understanding of LLMs in the past, our human brains could also be the key to unlocking more powerful AIs.

Leveraging the tools of cognitive neuroscience while a postdoctoral associate at MIT, Ivanova and her team studied brain activity in neurotypical individuals via fMRI and used behavioral assessments of individuals with brain damage to test the causal role of brain regions in language and cognition, both conducting new research and drawing on previous studies. The team’s results showed that human brains use different regions for functional and formal competence, further supporting this distinction in AIs.

“Our research shows that in the brain, there is a language processing module and separate modules for reasoning,” Ivanova says. This modularity could also serve as a blueprint for how to develop future AIs.

“Building on insights from human brains — where the language processing system is sharply distinct from the systems that support our ability to think — we argue that the language-thought distinction is conceptually important for thinking about, evaluating, and improving large language models, especially given recent efforts to imbue these models with human-like intelligence,” says Ivanova’s former advisor and study co-author Evelina Fedorenko, a professor of brain and cognitive sciences at MIT and a member of the McGovern Institute for Brain Research.

Developing AIs in the pattern of the human brain could help create more powerful systems — while also helping them dovetail more naturally with human users. “Generally, differences in a mechanism’s internal structure affect behavior,” Ivanova says. “Building a system that has a broad macroscopic organization similar to that of the human brain could help ensure that it might be more aligned with humans down the road.”

In the rapidly developing world of AI, these systems are ripe for experimentation. After the team’s preprint was published, OpenAI announced its intention to add plug-ins to its GPT models.

“That plug-in system is actually very similar to what we suggest,” Ivanova adds. “It takes a modularity approach where the language model can be an interface to another specialized module within a system.”

While the OpenAI plug-in system will include features like booking flights and ordering food, rather than cognitively inspired features, it demonstrates that “the approach has a lot of potential,” Ivanova says.

The future of AI — and what it can tell us about ourselves

While our own brains might be the key to unlocking better, more powerful AIs, these AIs might also help us better understand ourselves. “When researchers try to study the brain and cognition, it’s often useful to have some smaller system where you can actually go in and poke around and see what’s going on before you get to the immense complexity,” Ivanova explains.

However, since human language is unique, animal and other model systems are of limited use for studying it. That’s where LLMs come in.

“There are lots of surprising similarities between how one would approach the study of the brain and the study of an artificial neural network” like a large language model, she adds. “They are both information processing systems that have biological or artificial neurons to perform computations.”

In many ways, the human brain is still a black box, but openly available AIs offer a unique opportunity to see a synthetic system’s inner workings, modify its variables, and explore these corresponding systems like never before.

“It’s a really wonderful model that we have a lot of control over,” Ivanova says. “Neural networks — they are amazing.”

Along with Anna (Anya) Ivanova, Kyle Mahowald, and Evelina Fedorenko, the research team also includes Idan Blank (University of California, Los Angeles), as well as Nancy Kanwisher and Joshua Tenenbaum (Massachusetts Institute of Technology).

Honoring a visionary

Today marks the 10th anniversary of the passing of Pat McGovern, an extraordinary visionary and philanthropist whose legacy continues to inspire and impact the world. As the founder of International Data Group (IDG)—a premier information technology organization—McGovern was not just a pioneering figure in the technology media world, but also a passionate advocate for using technology for the greater good.

Under McGovern’s leadership, IDG became a global powerhouse, launching iconic publications such as Computerworld, Macworld, and PCWorld. His foresight also led to the creation of IDG Ventures, a network of venture funds around the world, including the notable IDG Capital in Beijing.

Beyond his remarkable business acumen, McGovern, with his wife, Lore, co-founded the McGovern Institute for Brain Research at MIT in 2000. This institute has been at the forefront of neuroscience research, contributing to groundbreaking advancements in perception, attention, memory, and artificial intelligence (AI), as well as discoveries with direct translational impact, such as CRISPR technology. CRISPR discoveries made at the McGovern Institute are now licensed for the first clinical application of genome editing in sickle cell disease.

Pat McGovern’s commitment to bettering humanity is further evidenced by the Patrick J. McGovern Foundation, which works in partnership with public, private, and social institutions to drive progress on our most pressing challenges through the use of artificial intelligence, data science, and key emerging technologies.

Remembering Pat McGovern

On this solemn anniversary, we reflect on Pat McGovern’s enduring influence through the words of those who knew him best.

Lore Harp McGovern
Co-founder and board member of the McGovern Institute for Brain Research

“Technology was Pat’s medium, the platform on which he built his amazing company 60 years ago. But it was people who truly motivated Pat, and he empowered and encouraged them to reach for the stars. He lived by the motto, ‘let’s try it,’ and believed that nothing was out of bounds. His goal was to help create a more just and peaceful world, and establishing the McGovern Institute was our way to give back meaningfully to this world. I know he would be so proud of what has been achieved and what is yet to come.”

Robert Desimone
Director of the McGovern Institute for Brain Research

“Pat McGovern had a vision for an international community of scientists and students drawn together to collaborate on understanding the brain.  This vision has been realized in the McGovern Institute, and we are now seeing the profound advances in our understanding of the brain and even clinical applications that Pat predicted would follow.”

Hugo Shong
Chairman of IDG Capital

“Pat’s impact on technology, science and research is immeasurable. A man of tremendous vision, he grew IDG out of Massachusetts and made it into one of the world’s most recognized brands in its space, forging partnerships and winning friends wherever he went. He applied that very same vision and energy to the McGovern Institute and the Patrick J. McGovern Foundation, in support of their impressive and necessary causes. I know he would be extremely proud of what both organizations have achieved thus far, and particularly how their work has broken technological frontiers and bettered the lives of millions.”

Vilas Dhar
President of the Patrick J. McGovern Foundation

“Patrick J. McGovern was more than a tech mogul; he was a visionary who believed in the power of information to empower people and improve societies. His work has had a profound effect on public policy and education, laying the groundwork for a more informed and connected world and guiding our work to ensure that artificial intelligence is used to sustain a human-centered world that creates economic and social opportunity for all.  On a personal level, Pat’s leadership was characterized by a genuine care for his employees and a belief in their potential. He created a culture of curiosity, encouraging humanity to explore, innovate, and dream big. His spirit lives on in every philanthropic activity we undertake.”

Genevieve Juillard
CEO of IDG 

“The legacy of Pat McGovern is felt not just in Boston, but around the world—by the thousands of IDG customers and by people like me who have the privilege to work at IDG, 60 years after he founded it. His innovative spirit and unwavering commitment to excellence continue to inspire and guide us.”

Sudhir Sethi
Founder and Chairman of Chiratae Ventures (formerly IDG Ventures)

“Pat McGovern was a visionary who foresaw the potential of technology in India and nurtured the ecosystem as an active participant. Pat enabled a launchpad for Chiratae Ventures, empowering our journey to become the leading home-grown venture capital fund in India today. Pat is a role model to entrepreneurs worldwide, and we honor his legacy with our annual ‘Chiratae Ventures Patrick J. McGovern Awards’ that celebrate courage and the spirit of entrepreneurship.”

Marc Benioff
Founder and CEO of Salesforce
wrote in the book “Future Forward” that “Pat McGovern was a gift to us all, a trailblazing visionary who showed an entire generation of entrepreneurs what it means to be a principle-based leader and how to lead with higher values.”

Pat McGovern’s memory lives on not just in the institutions and innovations he fostered, but in the countless lives he touched and transformed. Today, we celebrate a man who saw the future and helped us all move towards it with hope and determination.

Do we only use 10 percent of our brain?

Movies like “Limitless” and “Lucy” play on the notion that humans use only 10 percent of their brains—and those who unlock a higher percentage wield powers like infinite memory or telekinesis. It’s enticing to think that so much of the brain remains untapped and is ripe for boosting human potential.

But the idea that we use 10 percent of our brain is 100 percent a myth.

In fact, scientists believe that we use our entire brain every day. Mila Halgren is a graduate student in the lab of Mark Harnett, an associate professor of brain and cognitive sciences and an investigator at the McGovern Institute. The Harnett lab studies the computational power of neurons, that is, how neural networks rapidly process massive amounts of information.

“All of our brain is constantly in use and consumes a tremendous amount of energy,” Halgren says. “Despite making up only two percent of our body weight, it devours 20 percent of our calories.” This doesn’t appear to change significantly with different tasks, from typing on a computer to doing yoga. “Even while we sleep, our entire brain remains intensely active.”

When did this myth take root?

Mila Halgren is a PhD student in MIT’s Department of Brain and Cognitive Sciences. Photo: Mila Halgren

The myth is thought to have gained traction when scientists first began exploring the brain’s abilities but lacked the tools to capture its exact workings. In 1907, William James, a founder of American psychology, suggested in his book “The Energies of Men” that “we are making use of only a small part of our possible mental and physical resources.” This influential work likely sparked the idea that humans access a mere fraction of the brain—setting this common misconception ablaze.

Brainpower lore even suggests that Albert Einstein credited his genius to being able to access more than 10 percent of his brain. However, no such quote has been documented, and this, too, is perhaps a myth of cosmic proportions.

Halgren believes that there may be some fact backing this fiction. “People may think our brain is underutilized in the sense that some neurons fire very infrequently—once every few minutes or less. But this isn’t true of most neurons, some of which fire hundreds of times per second,” she says.

In the nascent years of neuroscience, scientists also argued that a large portion of the brain must be inactive because some people experience brain injuries and can still function at a high level, like the famous case of Phineas Gage. Halgren points to the brain’s remarkable plasticity—the reshaping of neural connections. “Entire brain hemispheres can be removed during early childhood and the rest of the brain will rewire and compensate for the loss. In other words, the brain will use 100 percent of what it has, but can make do with less depending on which structures are damaged.”

Is there a limit to the brain?

If we indeed use our entire brain, can humans eventually solve any problem? Or are there enigmas in the world that we will never unravel?

“This is still in contention,” Halgren says. “There may be certain problems that the human brain is fundamentally unable to solve, like how a mouse will never understand chemistry and a chimpanzee can’t do calculus.”

Can we increase our brainpower?

The brain may have its limits, but there are ways to boost our cognitive prowess to ace that midterm or crank up productivity in the workplace. According to Halgren, “You can increase your brainpower, but there’s no ‘trick’ that will allow you to do so. Like any organ in your body, the brain works best with proper sleep, exercise, low stress, and a well-balanced diet.”

The truth is, we may never rearrange furniture with our minds or foresee which team will win the Super Bowl. The idea of a largely latent brain is draped in fantasy, but debunking this myth speaks to the immense growth of neuroscience over the years—and the allure of other misconceptions that scientists have yet to demystify.

The brain runs an internal simulation to keep track of time

Clocks, computers, and metronomes can keep time with exquisite precision. But even in the absence of an external timekeeper, we can track time on our own. We know when minutes or hours have elapsed, and we can maintain a rhythm when we dance, sing, or play music. Now, neuroscientists at the National Autonomous University of Mexico and MIT’s McGovern Institute have discovered one way the brain keeps a beat: It runs an internal simulation, mentally recreating the perception of an external rhythm and preparing an appropriately timed response.

The discovery, reported January 10, 2024, in the journal Science Advances, illustrates how animals can think about imaginary events and use an internal model to guide their interactions with the world. “It’s a real indication of mental states as an independent driver of behavior,” says neuroscientist Mehrdad Jazayeri, an investigator at the McGovern Institute and an associate professor of brain and cognitive sciences at MIT.

Predicting the future

Jazayeri teamed up with Victor de Lafuente, a neuroscientist at the National Autonomous University of Mexico, to investigate the brain’s time-keeping ability. De Lafuente, who led the study, says they were motivated by curiosity about how the brain makes predictions and prepares for future states of the world.

De Lafuente and his team used a visual metronome to teach monkeys a simple rhythm, showing them a circle that moved between two positions on a screen to set a steady tempo. Then the metronome stopped. After a variable and unpredictable pause, the monkeys were asked to indicate where the dot would be if the metronome had carried on.

Monkeys do well at this task, successfully keeping time after the metronome stops. After the waiting period, they are usually able to identify the expected position of the circle, which they communicate by reaching towards a touchscreen.

To find out how the animals were keeping track of the metronome’s rhythm, de Lafuente’s group monitored their brain activity. In several key brain regions, they found rhythmic patterns of activity that oscillated at the same frequency as the metronome. This occurred while the monkeys watched the metronome. More remarkably, it continued after the metronome had stopped.

“The animal is seeing things going and then things stop. What we find in the brain is the continuation of that process in the animal’s mind,” Jazayeri says. “An entire network is replicating what it was doing.”

That was true in the visual cortex, where clusters of neurons respond to stimuli in specific spots within the eyes’ field of view. One set of cells in the visual cortex fired when the metronome’s circle was on the left of the screen; another set fired when the dot was on the right. As a monkey followed the visual metronome, the researchers could see these cells’ activity alternating rhythmically, tracking the movement. When the metronome stopped, the back-and-forth neural activity continued, maintaining the rhythm. “Once the stimulus was no longer visible, they were seeing the stimulus within their minds,” de Lafuente says.

They found something similar in the brain’s motor cortex, where movements are prepared and executed. De Lafuente explains that the monkeys are motionless for most of their time-keeping task; only when they are asked to indicate where the metronome’s circle should be do they move a hand to touch the screen. But the motor cortex was engaged even before it was time to move. “Within their brains there is a signal that is switching from the left to the right,” he says. “So the monkeys are thinking ‘left, right, left, right’—even when they are not moving and the world is constant.”

While some scientists have proposed that the brain may have a central time-keeping mechanism, the team’s findings indicate that entire networks can be called on to track the passage of time. The monkeys’ model of the future was surprisingly explicit, de Lafuente says, representing specific sensory stimuli and plans for movement. “This offers a potential solution to mentally tracking the dynamics in the world, which is to basically think about them in terms of how they actually would have happened,” Jazayeri says.

 

Margaret Livingstone awarded the 2024 Scolnick Prize in Neuroscience

Today the McGovern Institute at MIT announces that the 2024 Edward M. Scolnick Prize in Neuroscience will be awarded to Margaret Livingstone, Takeda Professor of Neurobiology at Harvard Medical School. The Scolnick Prize is awarded annually by the McGovern Institute for outstanding achievements in neuroscience.

“Margaret Livingstone’s driven curiosity and original experimental approaches have led to fundamental advances in our understanding of visual perception,” says Robert Desimone, director of the McGovern Institute and chair of the selection committee. “In particular, she has made major advances in resolving a long-standing debate over whether the brain domains and neurons that are specifically tuned to detect facial features are present from birth or arise from experience. Her developmental research shows that the cerebral cortex already contains topographic sensory maps at birth, but that domain-specific maps, for example those for recognizing facial features, require experience and sensory input to develop normally.”


Livingstone received a BS from MIT in 1972 and, under the mentorship of Edward Kravitz, a PhD in neurobiology from Harvard University in 1981. Her doctoral research in lobsters showed that the biogenic amines serotonin and octopamine control context-dependent behaviors such as offensive versus defensive postures. She followed up on this discovery as a postdoctoral fellow by researching biogenic amine signaling in learning and memory with Prof. William Quinn at Princeton University. Using learning and memory mutants created in the fruit fly model, she identified defects in dopamine-synthesizing enzymes and in calcium-dependent enzymes that produce cAMP. Her results supported the then-burgeoning idea that biogenic amines, signaling through second messengers, enable behavioral plasticity.

To test whether biogenic amines also control neuronal function in mammals, Livingstone moved back to Harvard Medical School in 1983 to study the effects of sleep on visual processing with David Hubel, who was studying neuronal activity in the nonhuman primate visual cortex. Over the course of a 20-year collaboration, Livingstone and Hubel showed that the visual system is functionally and anatomically divided into parallel pathways that detect and process the distinct visual features of color, motion, and orientation.

Livingstone quickly rose through the academic ranks at Harvard, appointed as an instructor and then assistant professor in 1983, associate professor in 1986, and full professor in 1988. With her own laboratory, Livingstone began to explore the organization of face-perception domains in the inferotemporal cortex of nonhuman primates. By combining single-cell recording and fMRI brain imaging data from the same animal, her then graduate student Doris Tsao, in collaboration with Winrich Freiwald, showed that many individual neurons within the face-recognition domain are tuned to combinations of facial features. These results helped answer the long-standing question of how individual neurons show such exquisite selectivity for specific faces.

Mona Lisa’s smile has been described as mysterious and fleeting because it seems to disappear when viewers look directly at it. Livingstone showed that Mona Lisa’s smile is more apparent in our peripheral vision than our central (or foveal) vision because our peripheral vision is more sensitive to low spatial frequencies, or shadows and shadings of black and white. These shadows make her lips seem to turn upward into a subtle smile. The three images above show the painting filtered to reveal very low spatial frequency features (left, with the smile more apparent) to high spatial frequency features (right, with the smile being less visible). Image: Margaret Livingstone
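
For readers who want to try the effect themselves, here is a rough sketch of that kind of spatial-frequency filtering, using a Gaussian blur as the low-pass filter; the filename and sigma values are our placeholders, not those used to make the figure.

```python
# Rough sketch of low- vs. high-spatial-frequency views of an image.
# File name and sigma values are placeholders.
from imageio.v3 import imread
from scipy.ndimage import gaussian_filter

img = imread("mona_lisa.png").astype(float).mean(axis=-1)  # grayscale copy

low_pass = gaussian_filter(img, sigma=8)         # broad shadings survive
high_pass = img - gaussian_filter(img, sigma=2)  # fine edges survive

# Peripheral vision emphasizes something like low_pass, where the broad
# shadows around the mouth read as a smile; foveal vision resolves detail
# closer to high_pass, where the smile is less apparent.
```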

In researching face patches, Livingstone became fascinated with the question of whether face-perception domains are present from birth, as many scientists thought at the time. Livingstone and her postdoc Michael Arcaro carried out experiments showing that the development of face patches requires visual exposure to faces in the early postnatal period. Moreover, they showed that entirely unnatural symbol-specific domains can form in animals that experienced intensive visual exposure to symbols early in development. Thus, experience is both necessary and sufficient for the formation of feature-specific domains in the inferotemporal cortex. Livingstone’s results support a consistent principle for the development of higher-level cortex: from a hard-wired sensory topographic map present at birth to the formation of experience-dependent domains that detect combined, stimulus-specific features.

Livingstone is also known for her scientifically based exploration of the visual arts. Her book “Vision and Art: The Biology of Seeing,” which has sold more than 40,000 copies to date, explores how both the techniques artists use and our anatomy and physiology influence our perception of art. Livingstone has presented this work to audiences around the country, from Pixar Studios, Microsoft, and IBM to the Metropolitan Museum of Art, the National Gallery, and the Hirshhorn Museum.

In 2014, Livingstone was awarded the Takeda Professorship of Neurobiology at Harvard Medical School. She received the Mika Salpeter Lifetime Achievement Award from the Society for Neuroscience in 2011, the Grossman Award from the Society of Neurological Surgeons in 2013, and the Roberts Prize for Best Paper in Physics in Medicine and Biology in 2013 and 2016. Livingstone was elected a fellow of the American Academy of Arts and Sciences in 2018 and a member of the National Academy of Sciences in 2020. She will be awarded the Scolnick Prize in the spring of 2024.

Calling neurons to attention

The world assaults our senses, exposing us to more noise and color and scents and sensations than we can fully comprehend. Our brains keep us tuned in to what’s important, letting less relevant sights and sounds fade into the background while we focus on the most salient features of our surroundings. Now, scientists at MIT’s McGovern Institute have a better understanding of how the brain manages this critical task of directing our attention.

In the January 15, 2023, issue of the journal Neuron, a team led by Diego Mendoza-Halliday, a research scientist in McGovern Institute Director Robert Desimone’s lab, reports on a group of neurons in the brain’s prefrontal cortex that are critical for directing an animal’s visual attention. Their findings not only demonstrate this brain region’s important role in guiding attention, but also help establish attention as a function that is distinct from other cognitive functions, such as short-term memory, in the brain.

Attention and working memory

Mendoza-Halliday, who is now an assistant professor at the University of Pittsburgh, explains that attention has a close relationship to working memory, which the brain uses to temporarily store information after our senses take it in. The two brain functions strongly influence one another: We’re more likely to remember something if we pay attention to it, and paying attention to certain features of our environment may involve representing those features in our working memory. For example, he explains, both attention and working memory are called on when searching for a triangular red keychain on a cluttered desk: “What my brain does is it remembers that my keyholder is red and it’s a triangle, and then builds a working memory representation and uses it as a search template. So now everything that is red and everything that is a triangle receives preferential processing, or is attended to.”
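
That search-template idea is simple enough to caricature in code; the items and features below are invented to mirror the keychain example, not drawn from the study.

```python
# Toy "search template": items matching remembered features get a boost,
# standing in for the preferential processing described above.
desk = [
    {"name": "keychain", "color": "red", "shape": "triangle"},
    {"name": "notebook", "color": "blue", "shape": "rectangle"},
    {"name": "sticky note", "color": "red", "shape": "square"},
]

template = {"color": "red", "shape": "triangle"}  # held in working memory

def attentional_priority(item):
    # each feature that matches the template adds to the item's salience
    return sum(item[feature] == value for feature, value in template.items())

for item in sorted(desk, key=attentional_priority, reverse=True):
    print(item["name"], attentional_priority(item))  # keychain ranks first
```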

Working memory and attention are so closely associated that some neuroscientists have proposed that the brain calls on the same neural mechanisms to create them. “This has led to the belief that maybe attention and working memory are just two sides of the same coin—that they’re basically the same function in different modes,” Mendoza-Halliday says. His team’s findings, however, say otherwise.

Circuit manipulation

To study the origins of attention in the brain, Mendoza-Halliday and colleagues trained monkeys to focus their attention on a visual feature that matches a cue they have seen before. After seeing a set of dots move across the screen, they must call on their working memory to remember the direction of that movement for a few seconds while the screen goes blank. Then the experimenters present the animals with more moving dots, this time traveling in multiple directions. By focusing on the dots moving in the same direction as the first set they saw, the monkeys are able to recognize when those dots briefly accelerate. Reporting on the speed change earns the animals a reward.

While the monkeys performed this task, the researchers monitored cells in several brain regions, including the prefrontal cortex, which Desimone’s team has proposed plays a role in directing attention. The activity patterns they recorded suggested that distinct groups of cells participated in the attention and working memory aspects of the task.

To better understand those cells’ roles, the researchers manipulated their activity. They used optogenetics, an approach in which a light-sensitive protein is introduced into neurons so that they can be switched on or off with a pulse of light. Desimone’s lab, in collaboration with Edward Boyden, the Y. Eva Tan Professor in Neurotechnology at MIT and a member of the McGovern Institute, pioneered the use of optogenetics in primates. “Optogenetics allows us to distinguish between correlation and causality in neural circuits,” says Desimone, the Doris and Don Berkey Professor of Neuroscience and a professor of brain and cognitive sciences at MIT. “If we turn off a circuit using optogenetics, and the animal can no longer perform the task, that is good evidence for a causal role of the circuit.”

Using this optogenetic method, they switched off neurons in a specific portion of the brain’s lateral prefrontal cortex for a few hundred milliseconds at a time as the monkeys performed their dot-tracking task. The researchers found that they could switch off signaling from the lateral prefrontal cortex early, when the monkeys needed their working memory but had no dots to attend to, without interfering with the animals’ ability to complete the task. But when they blocked signaling when the monkeys needed to focus their attention, the animals performed poorly.

The team also monitored activity in the brain’s visual cortex during the moving-dot task. When the lateral prefrontal cortex was shut off, neurons in connected visual areas showed less heightened reactivity to movement in the direction the monkey was attending to. Mendoza-Halliday says this suggests that cells in the lateral prefrontal cortex are important for telling sensory-processing circuits which visual features to pay attention to.

The discovery that at least part of the brain’s lateral prefrontal cortex is critical for attention but not for working memory offers a new view of the relationship between the two. “It is a physiological demonstration that working memory and attention cannot be the same function, since they rely on partially separate neuronal populations and neural mechanisms,” Mendoza-Halliday says.

Mapping healthy cells’ connections in the brain

McGovern Institute Principal Research Scientist Ian Wickersham. Photo: Caitlin Cunningham

A new tool developed by researchers at MIT’s McGovern Institute gives neuroscientists the power to find connected neurons within the brain’s tangled network of cells, and then follow or manipulate those neurons over a prolonged period. Its development, led by Principal Research Scientist Ian Wickersham, transforms a powerful tool for exploring the anatomy of the brain into a sophisticated system for studying brain function.

Wickersham and colleagues have designed their system to enable long-term analysis and experiments on groups of neurons that reach through the brain to signal to select groups of cells. It is described in the January 11, 2024, issue of the journal Nature Neuroscience. “This second-generation system will allow imaging, recording, and control of identified networks of synaptically-connected neurons in the context of behavioral studies and other experimental designs lasting weeks, months, or years,” Wickersham says.

The system builds on an approach to anatomical tracing that Wickersham developed in 2007, as a graduate student in Edward Callaway’s lab at the Salk Institute for Biological Studies. Its key is a modified version of a rabies virus, whose natural—and deadly—life cycle involves traveling through the brain’s neural network.

Viral tracing

The rabies virus is useful for tracing neuronal connections because once it has infected the nervous system, it spreads through the neural network by co-opting the very junctions that neurons use to communicate with one another. Hopping across those junctions, or synapses, the virus can pass from cell to cell. Traveling in the opposite direction of neuronal signals, it reaches the brain, where it continues to spread.

Simplified illustration of rabies virus. Image: istockphoto

To use the rabies virus to identify specific connections within the brain, Wickersham modified it to limit its spread. His original tracing system uses a rabies virus that lacks an essential gene. When researchers deliver the modified virus to the neurons whose connections they want to map, they also instruct those neurons to make the protein encoded by the virus’s missing gene. That allows the virus to replicate and travel across the synapses that link an infected cell to others in the network. Once it is inside a new cell, the virus is deprived of the critical protein and can go no farther.

Under a microscope, a fluorescent protein delivered by the modified virus lights up, exposing infected cells: those to which the virus was originally delivered as well as any neurons that send it direct inputs. Because the virus crosses only one synapse after leaving the cell it originally infected, the technique is known as monosynaptic tracing.

Labs around the world now use this method to identify which brain cells send signals to a particular set of neurons. But while the virus used in the original system can’t spread through the brain like a natural rabies virus, it still sickens the cells it does infect. Infected cells usually die in about two weeks, and that has limited scientists’ ability to conduct further studies of the cells whose connections they trace. “If you want to then go on to manipulate those connected populations of cells, you have a very short time window,” Wickersham says.

Reducing toxicity

To keep cells healthy after monosynaptic tracing, Wickersham, postdoctoral researcher Lei Jin, and colleagues devised a new approach. They began by deleting a second gene from the modified virus they use to label cells. That gene encodes an enzyme the rabies virus needs to produce the proteins encoded in its own genome. As with the original system, neurons are instructed to create the virus’s missing proteins, equipping the virus to replicate inside those cells. In this case, this is done in mice that have been genetically modified to produce the second deleted viral gene in specific sets of neurons.

The initially-infected “starter cells” at the injection site in the substantia nigra, pars compacta. Blue: tyrosine hydroxylase immunostaining, showing dopaminergic cells; green: enhanced green fluorescent protein showing neurons able to be initially infected with the rabies virus; red: the red fluorescent protein tdTomato, reporting the presence of the second-generation rabies virus. Image: Ian Wickersham, Lei Jin

To limit toxicity, Wickersham and his team built in a control that allows researchers to switch off cells’ production of viral proteins once the virus has had time to replicate and begin its spread to connected neurons. With those proteins no longer available to support the viral life cycle, the tracing tool is rendered virtually harmless. After following mice for up to 10 weeks, the researchers detected minimal toxicity in neurons where monosynaptic tracing was initiated. And, Wickersham says, “as far as we can tell, the trans-synaptically labeled cells are completely unscathed.”

Transsynaptically labeled cells in the striatum, which provides input to the dopaminergic cells of the substantia nigra. These cells show no morphological abnormalities or any other indication of toxicity five weeks after the rabies virus injection. Image: Ian Wickersham, Lei Jin

That means neuroscientists can now pair monosynaptic tracing with many of neuroscience’s most powerful tools for functional studies. To facilitate those experiments, Wickersham’s team encoded enzymes called recombinases into their connection-tracing rabies virus, which enables the introduction of genetically encoded research tools to targeted cells. After tracing cells’ connections, researchers will be able to manipulate those neurons, follow their activity, and explore their contributions to animal behavior. Such experiments will deepen scientists’ understanding of the inputs select groups of neurons receive from elsewhere in the brain, as well as the cells that are sending those signals.

Jin, who is now a principal investigator at Lingang Laboratory in Shanghai, says colleagues are already eager to begin working with the new non-toxic tracing system. Meanwhile, Wickersham’s group has already started experimenting with a third-generation system, which they hope will improve efficiency and be even more powerful.

The promise of gene therapy

McGovern Institute Director Robert Desimone. Photo: Steph Stevens

As we start 2024, I hope you can join me in celebrating a historic recent advance: the FDA approval of Casgevy, a bold new treatment for devastating sickle cell disease and the world’s first approved CRISPR gene therapy.

We are proud to share that this pioneering therapy, developed by Vertex Pharmaceuticals and CRISPR Therapeutics, licenses the CRISPR discoveries of McGovern scientist and Poitras Professor of Neuroscience Feng Zhang.

It is amazing to think that Feng’s breakthrough work adapting CRISPR-Cas9 for genome editing in eukaryotic cells was published only 11 years ago today in Science.

Incredibly, CRISPR-Cas9 rapidly transitioned from proof-of-concept experiments to an approved treatment in just over a decade.

McGovern scientists are determined to maintain the momentum!

 


Our labs are creating new gene therapies that are already in clinical trials or preparing to enroll patients in trials. For instance, Feng Zhang’s team has developed therapies currently in clinical trials for lymphoblastic leukemia and beta thalassemia, while another McGovern researcher, Guoping Feng, the Poitras Professor of Brain and Cognitive Sciences at MIT, has made advancements that lay the groundwork for a new gene therapy to treat a severe form of autism spectrum disorder. It is expected to enter clinical trials later this year. Moreover, McGovern fellows Omar Abudayyeh and Jonathan Gootenberg created programmable genomic tools that are now licensed for use in monogenic liver diseases and autoimmune disorders.

These exciting innovations stem from your steadfast support of our high-risk, high-reward research. Your generosity is enabling our scientists to pursue basic research in other areas with potential therapeutic applications in the future, such as mechanisms of pain, addiction, the connections between the brain and gut, the workings of memory and attention, and the bi-directional influence of artificial intelligence on brain research. All of this fundamental research is being fueled by major new advances in technology, many of them developed here.

As we enter a new year filled with anticipation following this first approved CRISPR gene therapy, I want to express my heartfelt gratitude for your invaluable support in advancing our research programs. Your role in pushing our research to new heights is valued by all faculty, students, and researchers at the McGovern Institute. We can’t wait to share our continued progress with you.

Thank you again for partnering with us to make great scientific achievements possible.

With appreciation and best wishes,

Robert Desimone, PhD
Director, McGovern Institute
Doris and Don Berkey Professor of Neuroscience, MIT