What is consciousness?

In the hit TV show “Westworld,” Dolores Abernathy, a golden-tressed belle, lives in the days when Manifest Destiny still echoed in America. She begins to notice unusual stirrings shaking up her quaint western town—and soon discovers that her skin is synthetic, and her mind, metal. She’s an android meant to entertain humans. The key to her autonomy lies in reaching consciousness.

Shows like “Westworld” and other media probe the idea of consciousness, attempting to nail down a definition of the concept. However, though humans have ruminated on consciousness for centuries, we still don’t have a solid definition (even the Merriam-Webster dictionary lists five). One framework suggests that consciousness is any experience, from eating a candy bar to heartbreak. Another argues that it is how certain stimuli influence one’s behavior.

While some search for a philosophical explanation, MIT graduate student Adam Eisen seeks a scientific one.

Eisen studies consciousness in the lab of Ila Fiete, an associate investigator at the McGovern Institute. His work melds seemingly opposite fields, using mathematical models to quantitatively explain, and thereby ground, the loftiness of consciousness.

In the Fiete lab, Eisen leverages computational methods to compare the brain’s electrical signals in an awake, conscious state with those in an unconscious state induced by anesthesia, which dampens communication between neurons so that people feel no pain or lose consciousness.

“What’s nice about anesthesia is that we have a reliable way of turning off consciousness,” says Eisen.

“So we’re now able to ask: What’s the fluctuation of electrical activity in a conscious versus unconscious brain? By characterizing how these states vary—with the precision enabled by computational models—we can start to build a better intuition for what underlies consciousness.”

Theories of consciousness

How are scientists thinking about consciousness? Eisen says that there are four major theories circulating in the neuroscience sphere. These theories are outlined below.

Global workspace theory

Consider the placement of your tongue in your mouth. This sensory information is always there, but you only notice the sensation when you make the effort to think about it. How does this happen?

“Global workspace theory seeks to explain how information becomes available to our consciousness,” he says. “This is called access consciousness—the kind that stores information in your mind and makes it available for verbal report. In this view, sensory information is broadcast to higher-level regions of the brain by a process called ignition.” The theory proposes that widespread jolts of neuronal activity, or “spiking,” are essential for ignition, much as a few scattered claps can build into full audience applause. It’s through ignition that we reach consciousness.

Eisen’s research in anesthesia suggests, though, that not just any spiking will do. There needs to be a balance: enough activity to spark ignition, but also enough stability such that the brain doesn’t lose its ability to respond to inputs and produce reliable computations to reach consciousness.

Higher-order theories

Let’s say you’re listening to “Here Comes The Sun” by The Beatles. Your brain processes the medley of auditory stimuli; you hear the bouncy guitar, upbeat drums, and George Harrison’s perky vocals. You’re having a musical experience—what it’s like to listen to music. According to higher-order theories, such an experience unlocks consciousness.

“Higher-order theories posit that a conscious mental state involves having higher-order mental representations of stimuli—usually in the higher levels of the brain responsible for cognition—to experience the world,” Eisen says.

Integrated information theory

“Imagine jumping into a lake on a warm summer day. All components of that experience—the feeling of the sun on your skin and the coolness of the water as you submerge—come together to form your ‘phenomenal consciousness,’” Eisen says. If the day was slightly less sunny or the water a fraction warmer, he explains, the experience would be different.

“Integrated information theory suggests that phenomenal consciousness involves an experience that is irreducible, meaning that none of the components of that experience can be separated or altered without changing the experience itself,” he says.

Attention schema theory

Attention schema theory, Eisen explains, says that ‘attention’ is the information we are focused on in the world, while ‘awareness’ is the model we have of our own attention. He cites a psychology study designed to disentangle the two.

In the study, researchers showed human subjects a mixed sequence of two numbers and six letters on a computer screen and asked them to report back the numbers. While the participants performed this task, faintly detectable dots moved across the screen in the background. The interesting part, Eisen notes, is that the subjects weren’t aware of the dots—they didn’t report seeing them—yet they performed worse on the task when the dots were present.

“This suggests that some of the subjects’ attention was allocated towards the dots, limiting their available attention for the actual task,” he says. “In this case, people’s awareness didn’t track their attention. The subjects were not aware of the dots, even though the study shows that the dots did indeed affect their attention.”

The science behind consciousness

Eisen notes that a solid understanding of the neural basis of consciousness has yet to be cemented. However, he and his research team are advancing in this quest. “In our work, we found that brain activity is more ‘unstable’ under anesthesia, meaning that it lacks the ability to recover from disturbances—like distractions or random fluctuations in activity—and regain a normal state,” he says.

He and his fellow researchers believe this is because the unconscious brain can’t reliably engage in computations like the conscious brain does, and sensory information gets lost in the noise. This crucial finding points to how the brain’s stability may be a cornerstone of consciousness.
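
One common way to make this notion of stability precise is to fit a simple dynamical model to the recorded activity and ask whether disturbances decay over time. The sketch below is a minimal illustration of that idea, assuming a linear model; it is not the lab’s actual analysis pipeline, and the function names are ours.

```python
import numpy as np

def fit_linear_dynamics(X):
    """Least-squares fit of x[t+1] ~= A @ x[t].
    X: (T, n) array of neural activity (T time steps, n channels)."""
    X_past, X_next = X[:-1], X[1:]
    M, _, _, _ = np.linalg.lstsq(X_past, X_next, rcond=None)
    return M.T  # A, such that x[t+1] ~= A @ x[t]

def stability_index(A):
    """Magnitude of A's largest eigenvalue: below 1, disturbances decay
    and activity recovers; near or above 1, disturbances linger or grow."""
    return np.abs(np.linalg.eigvals(A)).max()

# Hypothetical usage, given (T, n) recordings from each state:
#   print(stability_index(fit_linear_dynamics(awake_activity)))
#   print(stability_index(fit_linear_dynamics(anesthesia_activity)))
```

In this picture, the “unstable” anesthetized brain corresponds to dynamics whose disturbances fail to die out, matching the lab’s description of activity that cannot recover and regain a normal state.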

There’s still more work to do, Eisen says. But eventually, he hopes that this research can help crack the enduring mystery of how consciousness shapes human existence. “There is so much complexity and depth to human experience, emotion, and thought. Through rigorous research, we may one day reveal the machinery that gives us our common humanity.”

Women in STEM — A celebration of excellence and curiosity

What better way to commemorate Women’s History Month and International Women’s Day than to give three of the world’s most accomplished scientists an opportunity to talk about their careers? On March 7, MindHandHeart invited professors Paula Hammond, Ann Graybiel, and Sangeeta Bhatia to share their career journeys, from the progress they have witnessed to the challenges they have faced as women in STEM. Their conversation was moderated by Mary Fuller, chair of the faculty and professor of literature.

Hammond, an Institute professor with appointments in the Department of Chemical Engineering and the Koch Institute for Integrative Cancer Research, reflected on the strides made by women faculty at MIT, while acknowledging ongoing challenges. “I think that we have advanced a great deal in the last few decades in terms of the numbers of women who are present, although we still have a long way to go,” Hammond noted in her opening. “We’ve seen a remarkable increase over the past couple of decades in our undergraduate population here at MIT, and now we’re beginning to see it in the graduate population, which is really exciting.” Hammond was recently appointed to the role of vice provost for faculty.

Ann Graybiel, also an Institute professor, who has appointments in the Department of Brain and Cognitive Sciences and the McGovern Institute for Brain Research, described growing up in the Deep South. “Girls can’t do science,” she remembers being told in school, and they “can’t do research.” Yet her father, a physician scientist, often took her with him to work and had her assist from a young age, eventually encouraging her directly to pursue a career in science. Graybiel, who first came to MIT in 1973, noted that she continued to face barriers and rejection throughout her career long after leaving the South, but that individual gestures of inspiration, generosity, or simple statements of “You can do it” from her peers helped her power through and continue in her scientific pursuits.

Sangeeta Bhatia, the John and Dorothy Wilson Professor of Health Sciences and Technology and Electrical Engineering and Computer Science, director of the Marble Center for Cancer Nanomedicine at the Koch Institute for Integrative Cancer Research, and a member of the Institute for Medical Engineering and Science, is also the mother of two teenage girls. She shared her perspective on balancing career and family life: “I wanted to pick up my kids from school and I wanted to know their friends. … I had a vision for the life that I wanted.” Setting boundaries at work, she noted, empowered her to achieve both personal and professional goals. Bhatia also described her collaboration with President Emerita Susan Hockfield and MIT Amgen Professor of Biology Emerita Nancy Hopkins to spearhead the Future Founders Initiative, which aims to boost the representation of female faculty members pursuing biotechnology ventures.

A video of the full panel discussion is available on the MindHandHeart YouTube channel.

From neurons to learning and memory

Mark Harnett, an associate professor at MIT, still remembers the first time he saw electrical activity spiking from a living neuron.

He was a senior at Reed College and had spent weeks building a patch clamp rig — an experimental setup with an electrode that can be used to gently probe a neuron and measure its electrical activity.

“The first time I stuck one of these electrodes onto one of these cells and could see the electrical activity happening in real time on the oscilloscope, I thought, ‘Oh my God, this is what I’m going to do for the rest of my life. This is the coolest thing I’ve ever seen!’” Harnett says.

Harnett, who recently earned tenure in MIT’s Department of Brain and Cognitive Sciences, now studies the electrical properties of neurons and how these properties enable neural circuits to perform the computations that give rise to brain functions such as learning, memory, and sensory perception.

“My lab’s ultimate goal is to understand how the cortex works,” Harnett says. “What are the computations? How do the cells and the circuits and the synapses support those computations? What are the molecular and structural substrates of learning and memory? How do those things interact with circuit dynamics to produce flexible, context-dependent computation?”

“We go after that by looking at molecules, like synaptic receptors and ion channels, all the way up to animal behavior, and building theoretical models of neural circuits,” he adds.

Influence on the mind

Harnett’s interest in science was sparked in middle school, when he had a teacher who made the subject come to life. “It was middle school science, which was a lot of just mixing random things together. It wasn’t anything particularly advanced, but it was really fun,” he says. “Our teacher was just super encouraging and inspirational, and she really sparked what became my lifelong interest in science.”

When Harnett was 11, his father got a new job at a technology company in Minneapolis and the family moved from New Jersey to Minnesota, which proved to be a difficult adjustment. When choosing a college, Harnett decided to go far away, and ended up choosing Reed College, a school in Portland, Oregon, that encourages a great deal of independence in both academics and personal development.

“Reed was really free,” he recalls. “It let you grow into who you wanted to be, and try things, both for what you wanted to do academically or artistically, but also the kind of person you wanted to be.”

While in college, Harnett enjoyed both biology and English, especially Shakespeare. His English professors encouraged him to go into science, believing that the field needed scientists who could write and think creatively. He was interested in neuroscience, but Reed didn’t have a neuroscience department, so he took the closest subject he could find — a course in neuropharmacology.

“That class totally blew my mind. It was just fascinating to think about all these pharmacological agents, be they from plants or synthetic or whatever, influencing how your mind worked,” Harnett says. “That class really changed my whole way of thinking about what I wanted to do, and that’s when I decided I wanted to become a neuroscientist.”

For his senior research thesis, Harnett joined an electrophysiology lab at Oregon Health and Science University (OHSU), working with Professor Larry Trussell, who studies synaptic transmission in the auditory system. That lab was where he first built and used a patch clamp rig to measure neuron activity.

After graduating from college, he spent a year as a research technician in a lab at the University of Minnesota, then returned to OHSU to work in a different research lab studying ion channels and synaptic physiology. Eventually he decided to go to graduate school, ending up at the University of Texas at Austin, where his future wife was studying public policy.

For his PhD research, he studied the neurons that release the neuromodulator dopamine and how they are affected by drugs of abuse and addiction. However, once he finished his degree, he decided to return to studying the biophysics of computation, which he pursued during a postdoc at the Howard Hughes Medical Institute Janelia Research Campus with Jeff Magee.

A broad approach

When he started his lab at MIT’s McGovern Institute in 2015, Harnett set out to expand his focus. While the physiology of ion channels and synapses forms the basis of much of his lab’s work, they connect these processes to neuronal computation, cortical circuit operation, and higher-level cognitive functions.

Electrical impulses that flow between neurons, allowing them to communicate with each other, are produced by ion channels that control the flow of ions such as potassium and sodium. In a 2021 study, Harnett and his students discovered that human neurons have a much smaller number of these channels than expected, compared to the neurons of other mammals.

This reduction in density may have evolved to help the brain operate more efficiently, allowing it to divert resources to other energy-intensive processes that are required to perform complex cognitive tasks. Harnett’s lab has also found that in human neurons, electrical signals weaken as they flow along dendrites, meaning that small sections of dendrites can form units that perform individual computations within a neuron.

Harnett’s lab also recently discovered, to their surprise, that the adult brain contains millions of “silent synapses” — immature connections that remain inactive until they’re recruited to help form new memories. The existence of these synapses offers a clue to how the adult brain is able to continually form new memories and learn new things without having to modify mature synapses.

Many of these projects fall into areas that Harnett didn’t necessarily envision himself working on when he began his faculty career, but they naturally grew out of the broad approach he wanted to take to studying the cortex. To that end, he sought to bring people to the lab who wanted to work at different levels — from molecular physiology up to behavior and computational modeling.

As a postdoc studying electrophysiology, Harnett spent most of his time working alone with his patch clamp device and two-photon microscope. While that type of work still goes on in his lab, the overall atmosphere is much more collaborative and convivial, and as a mentor, he likes to give his students broad leeway to come up with their own projects that fit in with the lab’s overall mission.

“I have this incredible, dynamic group that has been really great to work with. We take a broad approach to studying the cortex, and I think that’s what makes it fun,” he says. “Working with the folks that I’ve been able to recruit — grad students, techs, undergrads, and postdocs — is probably the thing that really matters the most to me.”

A new computational technique could make it easier to engineer useful proteins

To engineer proteins with useful functions, researchers usually begin with a natural protein that has a desirable function, such as emitting fluorescent light, and put it through many rounds of random mutation that eventually generate an optimized version of the protein.

This process has yielded optimized versions of many important proteins, including green fluorescent protein (GFP). However, for other proteins, it has proven difficult to generate an optimized version. MIT researchers have now developed a computational approach that makes it easier to predict mutations that will lead to better proteins, based on a relatively small amount of data.

Using this model, the researchers generated proteins with mutations that were predicted to lead to improved versions of GFP and a protein from adeno-associated virus (AAV), which is used to deliver DNA for gene therapy. They hope it could also be used to develop additional tools for neuroscience research and medical applications.

MIT Professor of Brain and Cognitive Sciences Ila Fiete in her lab at the McGovern Institute. Photo: Steph Stevens

“Protein design is a hard problem because the mapping from DNA sequence to protein structure and function is really complex. There might be a great protein 10 changes away in the sequence, but each intermediate change might correspond to a totally nonfunctional protein. It’s like trying to find your way to the river basin in a mountain range, when there are craggy peaks along the way that block your view. The current work tries to make the riverbed easier to find,” says Ila Fiete, a professor of brain and cognitive sciences at MIT, a member of MIT’s McGovern Institute for Brain Research, director of the K. Lisa Yang Integrative Computational Neuroscience Center, and one of the senior authors of the study.

Regina Barzilay, the School of Engineering Distinguished Professor for AI and Health at MIT, and Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering and Computer Science at MIT, are also senior authors of an open-access paper on the work, which will be presented at the International Conference on Learning Representations in May. MIT graduate students Andrew Kirjner and Jason Yim are the lead authors of the study. Other authors include Shahar Bracha, an MIT postdoc, and Raman Samusevich, a graduate student at Czech Technical University.

Optimizing proteins

Many naturally occurring proteins have functions that could make them useful for research or medical applications, but they need a little extra engineering to optimize them. In this study, the researchers were originally interested in developing proteins that could be used in living cells as voltage indicators. These proteins, produced by some bacteria and algae, emit fluorescent light when an electric potential is detected. If engineered for use in mammalian cells, such proteins could allow researchers to measure neuron activity without using electrodes.

While decades of research have gone into engineering these proteins to produce a stronger fluorescent signal, on a faster timescale, they haven’t become effective enough for widespread use. Bracha, who works in Edward Boyden’s lab at the McGovern Institute, reached out to Fiete’s lab to see if they could work together on a computational approach that might help speed up the process of optimizing the proteins.

“This work exemplifies the human serendipity that characterizes so much science discovery,” Fiete says.

“This work grew out of the Yang Tan Collective retreat, a scientific meeting of researchers from multiple centers at MIT with distinct missions unified by the shared support of K. Lisa Yang. We learned that some of our interests and tools in modeling how brains learn and optimize could be applied in the totally different domain of protein design, as being practiced in the Boyden lab.”

For any given protein that researchers might want to optimize, there is a nearly infinite number of possible sequences that could be generated by swapping in different amino acids at each position within the sequence. With so many possible variants, it is impossible to test all of them experimentally, so researchers have turned to computational modeling to try to predict which ones will work best.

In this study, the researchers set out to overcome those challenges, using data from GFP to develop and test a computational model that could predict better versions of the protein.

They began by training a type of model known as a convolutional neural network (CNN) on experimental data consisting of GFP sequences and their brightness — the feature that they wanted to optimize.
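
As a rough illustration of what such a model can look like, here is a minimal PyTorch sketch of a CNN that maps a one-hot-encoded protein sequence to a predicted brightness score. The architecture and every name in it are our assumptions for illustration, not the model reported in the paper.

```python
import torch
import torch.nn as nn

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def one_hot(seq):
    """Encode a protein sequence as a (20, length) one-hot tensor."""
    idx = torch.tensor([AMINO_ACIDS.index(a) for a in seq])
    return nn.functional.one_hot(idx, num_classes=20).T.float()

class BrightnessCNN(nn.Module):
    """Tiny 1D CNN regressor: one-hot sequence in, scalar brightness out."""
    def __init__(self, seq_len, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(20, channels, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(channels * seq_len, 1),
        )

    def forward(self, x):  # x: (batch, 20, seq_len)
        return self.net(x).squeeze(-1)

# Hypothetical training step on (sequence, brightness) pairs:
#   model = BrightnessCNN(seq_len=238)  # GFP is roughly 238 residues
#   loss = nn.functional.mse_loss(model(batch_x), batch_brightness)
```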

The model was able to create a “fitness landscape” — a three-dimensional map that depicts the fitness of a given protein and how much it differs from the original sequence — based on a relatively small amount of experimental data (from about 1,000 variants of GFP).

These landscapes contain peaks that represent fitter proteins and valleys that represent less fit proteins. Predicting the path that a protein needs to follow to reach the peaks of fitness can be difficult, because often a protein will need to undergo a mutation that makes it less fit before it reaches a nearby peak of higher fitness. To overcome this problem, the researchers used an existing computational technique to “smooth” the fitness landscape.

Once these small bumps in the landscape were smoothed, the researchers retrained the CNN model and found that it was able to reach greater fitness peaks more easily. The model was able to predict optimized GFP sequences that differed from the starting sequence by as many as seven amino acids, and the best of these proteins were estimated to be about 2.5 times fitter than the original.

“Once we have this landscape that represents what the model thinks is nearby, we smooth it out and then we retrain the model on the smoother version of the landscape,” Kirjner says. “Now there is a smooth path from your starting point to the top, which the model is now able to reach by iteratively making small improvements. The same is often impossible for unsmoothed landscapes.”
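
In code, the loop Kirjner describes might look something like the toy sketch below. The simple neighbor-averaging smoother and greedy hill climb, along with all the names, are our illustrative stand-ins; the paper’s actual smoothing technique is more sophisticated.

```python
import numpy as np

def smooth_landscape(fitness, neighbors, alpha=0.5, n_iters=10):
    """Repeatedly blend each variant's predicted fitness with the mean
    fitness of its single-mutation neighbors, flattening small bumps
    that would otherwise trap a greedy search.
    fitness: dict mapping sequence -> predicted fitness.
    neighbors: callable mapping a sequence to its single-mutation variants."""
    f = dict(fitness)
    for _ in range(n_iters):
        f = {
            s: (1 - alpha) * f[s]
               + alpha * np.mean([f[n] for n in neighbors(s) if n in f] or [f[s]])
            for s in f
        }
    return f

def greedy_ascent(start, f, neighbors):
    """Step to the best-scoring single-mutation neighbor until no
    neighbor improves on the current sequence."""
    seq = start
    while True:
        scored = [n for n in neighbors(seq) if n in f]
        if not scored or max(f[n] for n in scored) <= f[seq]:
            return seq
        seq = max(scored, key=lambda n: f[n])
```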

Proof-of-concept

The researchers also showed that this approach worked well in identifying new sequences for the viral capsid of adeno-associated virus (AAV), a viral vector that is commonly used to deliver DNA. In that case, they optimized the capsid for its ability to package a DNA payload.

“We used GFP and AAV as a proof-of-concept to show that this is a method that works on data sets that are very well-characterized, and because of that, it should be applicable to other protein engineering problems,” Bracha says.

The researchers now plan to use this computational technique on data that Bracha has been generating on voltage indicator proteins.

“Dozens of labs have been working on that for two decades, and still there isn’t anything better,” she says. “The hope is that now, with the generation of a smaller data set, we could train a model in silico and make predictions that could be better than the past two decades of manual testing.”

The research was funded, in part, by the U.S. National Science Foundation, the Machine Learning for Pharmaceutical Discovery and Synthesis consortium, the Abdul Latif Jameel Clinic for Machine Learning in Health, the DTRA Discovery of Medical Countermeasures Against New and Emerging Threats program, the DARPA Accelerated Molecular Discovery program, the Sanofi Computational Antibody Design grant, the U.S. Office of Naval Research, the Howard Hughes Medical Institute, the National Institutes of Health, the K. Lisa Yang ICoN Center, and the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics at MIT.

Reevaluating an approach to functional brain imaging

A new way of imaging the brain with magnetic resonance imaging (MRI) does not directly detect neural activity as originally reported, according to scientists at MIT’s McGovern Institute. The method, first described in 2022, generated excitement within the neuroscience community as a potentially transformative approach. But a study from the lab of McGovern Associate Investigator Alan Jasanoff, reported March 27, 2024, in the journal Science Advances, demonstrates that MRI signals produced by the new method are generated in large part by the imaging process itself, not neuronal activity.

Alan Jasanoff, associate member of the McGovern Institute, and a professor of brain and cognitive sciences, biological engineering, and nuclear science and engineering at MIT. Photo: Justin Knight

Jasanoff explains that having a noninvasive means of seeing neuronal activity in the brain is a long-sought goal for neuroscientists. The functional MRI methods that researchers currently use to monitor brain activity don’t actually detect neural signaling. Instead, they use blood flow changes triggered by brain activity as a proxy. This reveals which parts of the brain are engaged during imaging, but it cannot pinpoint neural activity to precise locations, and it is too slow to truly track neurons’ rapid-fire communications.

So when a team of scientists reported in Science a new MRI method called DIANA, for “direct imaging of neuronal activity,” neuroscientists paid attention. The authors claimed that DIANA detected MRI signals in the brain that corresponded to the electrical signals of neurons, and that it acquired signals far faster than the methods now used for functional MRI.

“Everyone wants this,” Jasanoff says. “If we could look at the whole brain and follow its activity with millisecond precision and know that all the signals that we’re seeing have to do with cellular activity, this would be just wonderful. It could tell us all kinds of things about how the brain works and what goes wrong in disease.”

Jasanoff adds that from the initial report, it was not clear what brain changes DIANA was detecting to produce such a rapid readout of neural activity. Curious, he and his team began to experiment with the method. “We wanted to reproduce it, and we wanted to understand how it worked,” he says.

Decoding DIANA

Recreating the MRI procedure reported by DIANA’s developers, postdoctoral researcher Valerie Doan Phi Van imaged the brain of a rat as an electric stimulus was delivered to one paw. Phi Van says she was excited to see an MRI signal appear in the brain’s sensory cortex, exactly when and where neurons were expected to respond to the sensation on the paw. “I was able to reproduce it,” she says. “I could see the signal.”

With further tests of the system, however, her enthusiasm waned. To investigate the source of the signal, she disconnected the device used to stimulate the animal’s paw, then repeated the imaging. Again, signals showed up in the sensory processing part of the brain. But this time, there was no reason for neurons in that area to be activated. In fact, Phi Van found, the MRI produced the same kinds of signals when the animal inside the scanner was replaced with a tube of water. It was clear DIANA’s functional signals were not arising from neural activity.

Phi Van traced the source of the spurious signals to the pulse program that directs DIANA’s imaging process, detailing the sequence of steps the MRI scanner uses to collect data. Embedded within DIANA’s pulse program was a trigger for the device that delivers sensory input to the animal inside the scanner. That synchronizes the two processes, so the stimulation occurs at a precise moment during data acquisition. That trigger appeared to be causing signals that DIANA’s developers had concluded indicated neural activity.
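
The reason an acquisition-locked trigger can masquerade as a neural response comes down to trial averaging: noise cancels out across repeated trials, but anything time-locked to the trigger survives. The toy simulation below (our illustration, not the paper’s analysis) shows how a small electronic artifact can produce a clean-looking “response” from pure noise, much like the tube-of-water control.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_samples = 200, 100
trigger_idx = 40  # sample at which the stimulator trigger fires

# Pure noise "recordings": no subject at all, as in the water-tube control.
trials = rng.normal(0.0, 1.0, size=(n_trials, n_samples))

# Hypothetical trigger-coupled artifact: a small deflection at a fixed
# latency after the trigger, identical on every trial.
artifact = np.zeros(n_samples)
artifact[trigger_idx:trigger_idx + 5] = 0.3
trials += artifact

# Averaging suppresses noise by ~1/sqrt(n_trials) but leaves the
# time-locked artifact intact, so it stands out like a real response.
mean_trace = trials.mean(axis=0)
print(f"peak of averaged trace: {mean_trace.max():.2f} (artifact height 0.3)")
```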

Phi Van altered the pulse program, changing the way the stimulator was triggered. Using the updated program, the MRI scanner detected no functional signal in the brain in response to the same paw stimulation that had produced a signal before. “If you take this part of the code out, then the signal will also be gone. So that means the signal we see is an artifact of the trigger,” she says.

Jasanoff and Phi Van went on to find reasons why other researchers have struggled to reproduce the results of the original DIANA report, noting that the trigger-generated signals can disappear with slight variations in the imaging process. With their postdoctoral colleague Sajal Sen, they also found evidence that cellular changes that DIANA’s developers had proposed might give rise to a functional MRI signal were not related to neuronal activity.

Jasanoff and Phi Van say it was important to share their findings with the research community, particularly as efforts continue to develop new neuroimaging methods. “If people want to try to repeat any part of the study or implement any kind of approach like this, they have to avoid falling into these pits,” Jasanoff says. He adds that they admire the authors of the original study for their ambition: “The community needs scientists who are willing to take risks to move the field ahead.”

Beyond the brain

This story also appears in the Spring 2024 issue of BrainScan.

Like many people, graduate student Guillermo Herrera-Arcos found himself working from home in the spring of 2020. Surrounded by equipment he’d hastily borrowed from the lab, he began testing electrical components he would need to control muscles in a new way. If it worked, he and colleagues in Hugh Herr’s lab might have found a promising strategy for restoring movement when signals from the brain fail to reach the muscles, such as after a spinal cord injury or stroke.

Guillermo Herrera-Arcos, a graduate student in Hugh Herr’s lab, is developing an optical technology with the potential to restore movement in people with spinal cord injury or stroke. Photo: Steph Stevens

Herrera-Arcos and Herr’s work is one way McGovern neuroscientists are working at the interface of brain and machine. Such work aims to enable better ways of understanding and treating injury and disease, offering scientists tools to manipulate neural signaling as well as to replace its function when it is lost.

Restoring movement

The system Herrera-Arcos and Herr were developing wouldn’t be the first to bypass the brain to move muscles. Neuroprosthetic devices that use electricity to stimulate muscle-activating motor neurons are sometimes used during rehabilitation from an injury, helping patients maintain muscle mass when they can’t use their muscles on their own. But existing neuroprostheses lack the precision of the body’s natural movement system. They send all-or-nothing signals that quickly tire muscles out.

Hugh Herr (left) and graduate student Guillermo Herrera-Arcos at work in the lab. Photo: Steph Stevens

Researchers attribute that fatigue to an unnatural recruitment of neurons and muscle fibers. Electrical signals go straight to the largest, most powerful components of the system, even when smaller units could do the job. “You turn up the stimulus and you get no force, and then suddenly, you get too much force. And then fatigue, a lack of controllability, and so on,” Herr explains. The nervous system, in contrast, calls first on small motor units and recruits larger ones only when needed to generate more force.

Optical solution

In hopes of recreating this strategic pattern of muscle activation, Herr and Herrera-Arcos turned to optogenetics, a technique pioneered by McGovern Investigator Edward Boyden that has become a common research tool: controlling neural activity with light. To put neurons under their control, researchers equip them with light-sensitive proteins. The cells can then be switched on or off within milliseconds using an optic fiber.

When a return to the lab enabled Herr and Herrera-Arcos to test their idea, they were thrilled with the results. Using light to switch on motor neurons and stimulate a single muscle in mice, they recreated the nervous system’s natural muscle activation pattern. Consequently, fatigue did not set in nearly as quickly as it would with an electrically-activated system. Herrera-Arcos says he set out to measure the force generated by the muscle and how long it took to fatigue, and he had to keep extending his experiments: After an hour of light stimulation, it was still going strong.

To optimize the force generated by the system, the researchers used feedback from the muscle to modulate the intensity of the neuron-activating light. Their success suggests this type of closed-loop system could enable fatigue-resistant neuroprostheses for muscle control.
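
A minimal sketch of one way such a feedback update could work, assuming a plain proportional controller (the study’s actual control law may differ, and all names here are illustrative):

```python
def update_light_intensity(force_measured, force_target, intensity,
                           gain=0.01, i_min=0.0, i_max=1.0):
    """One step of a proportional feedback loop: raise the optogenetic
    stimulation intensity when measured muscle force is below target,
    lower it when above. Illustrative only, not the study's controller."""
    error = force_target - force_measured
    return min(i_max, max(i_min, intensity + gain * error))

# Hypothetical use inside an acquisition loop:
#   intensity = update_light_intensity(read_force_sensor(), target_force,
#                                      intensity)
```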

“The field has been struggling for many decades with the challenge of how to control living muscle tissue,” Herr says. “So the idea that this could be solved is very, very exciting.”

There’s work to be done to translate what the team has learned into practical neuroprosthetics for people who need them. To use light to stimulate human motor neurons, light-sensitive proteins will need to be delivered to those cells. Figuring out how to do that safely is a high priority at the K. Lisa Yang Center for Bionics, which Herr co-directs with Boyden, and might lead to better ways of obtaining tactile and proprioceptive feedback from prosthetic limbs, as well as to control muscles for the restoration of natural movements after spinal cord injury. “It would be a game changer for a number of conditions,” Herr says.

Gut-brain connection

While Herr’s team works where the nervous system meets the muscle, researchers in Polina Anikeeva’s lab are exploring the brain’s relationship with an often-overlooked part of the nervous system — the hundreds of millions of neurons in the gut.

“Classically, when we think of brain function in neuroscience, it is always studied in the framework of how the brain interacts with the surrounding environment and how it integrates different stimuli,” says Atharva Sahasrabudhe, a graduate student in the group. “But the brain does not function in a vacuum. It’s constantly getting and integrating signals from the peripheral organs.”

Atharva Sahasrabudhe holds some of the fiber technology he developed in the Anikeeva lab. Photo: Steph Stevens

The nervous system has a particularly pronounced presence in the gut. Neurons embedded within the walls of the gastrointestinal (GI) tract monitor local conditions and relay information to the brain. This mind-body connection may help explain the GI symptoms associated with some brain-related conditions, including Parkinson’s disease, mood disorders, and autism. Researchers have yet to untangle whether GI symptoms help drive these conditions, are a consequence of them, or are coincidental. Whatever the relationship, Anikeeva says, “if there is a GI connection, maybe we can tap into this connection to improve the quality of life of affected individuals.”

Flexible fibers

At the K. Lisa Yang Brain-Body Center that Anikeeva directs, studying how the gut communicates with the brain is a high priority. But most of neuroscientists’ tools are designed specifically to investigate the brain. To explore new territory, Sahasrabudhe devised a device that is compatible with the long and twisty GI tract of a mouse.

The new tool is a slender, flexible fiber equipped with light emitters for activating subsets of cells and tiny channels for delivering nutrients or drugs. To access neurons dispersed throughout the GI tract, its wirelessly controlled components are embedded along its length. A more rigid probe at one end of the device is designed to monitor and manipulate neural activity in the brain, so researchers can follow the nervous system’s swift communications across the gut-brain axis.

Scientists on Anikeeva’s team are deploying the device to investigate how gut-brain communications contribute to several conditions. Postdoctoral researcher Sharmelee Selvaraji is focused on Parkinson’s disease. Like many scientists, she wonders whether the neurodegenerative movement disorder might actually start in the gut. There’s a molecular link: the misshapen protein that sickens brain cells in patients with Parkinson’s disease has been found aggregating in the gut, too. And the constipation and other GI problems that are common complaints for people with Parkinson’s disease usually start decades before the onset of motor symptoms. She hopes that by investigating gut-brain communications in a mouse model of the disease, she will uncover important clues about its origins and progression.

“We’re trying to observe the effects of Parkinson’s in the gut, and then eventually, we may be able to intervene at an earlier stage to slow down the disease progression, or even cure it,” says Selvaraji.

Meanwhile, colleagues in the lab are exploring related questions about gut-brain communications in mouse models of autism, anxiety disorders, and addiction. Others continue to focus on technology development, adding new capabilities to the gut-brain probe or applying similar engineering principles to new problems.

“We are realizing that the brain is very much connected to the rest of the body,” Anikeeva says. “There is now a lot of effort in the lab to create technology suitable for a variety of really interesting organs that will help us study brain-body connections.”

Researchers reveal roadmap for AI innovation in brain and language learning

One of the hallmarks of humanity is language, but now, powerful new artificial intelligence tools also compose poetry, write songs, and have extensive conversations with human users. Tools like ChatGPT and Gemini are widely available at the tap of a button — but just how smart are these AIs?

A new multidisciplinary research effort co-led by Anna (Anya) Ivanova, assistant professor in the School of Psychology at Georgia Tech, alongside Kyle Mahowald, an assistant professor in the Department of Linguistics at the University of Texas at Austin, is working to uncover just that.

Their results could lead to innovative AIs that are more similar to the human brain than ever before — and also help neuroscientists and psychologists who are unearthing the secrets of our own minds.

The study, “Dissociating Language and Thought in Large Language Models,” is published this week in the scientific journal Trends in Cognitive Sciences. The work is already making waves in the scientific community: an earlier preprint of the paper, released in January 2023, has been cited more than 150 times by fellow researchers. The research team continued to refine the work for this final journal publication.

“ChatGPT became available while we were finalizing the preprint,” explains Ivanova, who conducted the research while a postdoctoral researcher at MIT’s McGovern Institute. “Over the past year, we’ve had an opportunity to update our arguments in light of this newer generation of models, now including ChatGPT.”

Form versus function

The study focuses on large language models (LLMs), which include AIs like ChatGPT. LLMs are text-prediction models that create writing by predicting which word comes next in a sentence — just as a cell phone or an email service like Gmail suggests the next word you might want to write. However, while this type of language learning is extremely effective at creating coherent sentences, that doesn’t necessarily signify intelligence.
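
To make “text prediction” concrete, here is a toy version of the core step. The vocabulary and scores are invented for illustration; a real LLM would compute the scores with a trained transformer conditioned on all the preceding text.

```python
import torch

vocab = ["sun", "rain", "comes", "here", "the"]
logits = torch.tensor([2.5, 0.1, 1.0, 0.3, 0.8])  # made-up scores per word

probs = torch.softmax(logits, dim=0)               # scores -> probabilities
next_word = vocab[torch.multinomial(probs, 1).item()]
print("predicted next word:", next_word)
```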

Ivanova’s team argues that formal competence — creating a well-structured, grammatically correct sentence — should be differentiated from functional competence — answering the right question, communicating the correct information, or otherwise communicating appropriately. They also found that while LLMs trained on text prediction are often very good at formal skills, they still struggle with functional skills.

“We humans have the tendency to conflate language and thought,” Ivanova says. “I think that’s an important thing to keep in mind as we’re trying to figure out what these models are capable of, because using that ability to be good at language, to be good at formal competence, leads many people to assume that AIs are also good at thinking — even when that’s not the case. It’s a heuristic that we developed when interacting with other humans over thousands of years of evolution, but now, in some respects, that heuristic is broken,” she explains.

The distinction between formal and functional competence is also vital in rigorously testing an AI’s capabilities, Ivanova adds. Evaluations often don’t distinguish formal and functional competence, making it difficult to assess what factors are determining a model’s success or failure. The need to develop distinct tests is one of the team’s more widely accepted findings, and one that some researchers in the field have already begun to implement.

Creating a modular system

While the human tendency to conflate functional and formal competence may have hindered understanding of LLMs in the past, our human brains could also be the key to unlocking more powerful AIs.

Leveraging the tools of cognitive neuroscience while a postdoctoral associate at Massachusetts Institute of Technology (MIT), Ivanova and her team studied brain activity in neurotypical individuals via fMRI, and used behavioral assessments of individuals with brain damage to test the causal role of brain regions in language and cognition — both conducting new research and drawing on previous studies. The team’s results showed that human brains use different regions for functional and formal competence, further supporting this distinction in AIs.

“Our research shows that in the brain, there is a language processing module and separate modules for reasoning,” Ivanova says. This modularity could also serve as a blueprint for how to develop future AIs.

“Building on insights from human brains — where the language processing system is sharply distinct from the systems that support our ability to think — we argue that the language-thought distinction is conceptually important for thinking about, evaluating, and improving large language models, especially given recent efforts to imbue these models with human-like intelligence,” says Ivanova’s former advisor and study co-author Evelina Fedorenko, a professor of brain and cognitive sciences at MIT and a member of the McGovern Institute for Brain Research.

Developing AIs in the pattern of the human brain could help create more powerful systems — while also helping them dovetail more naturally with human users. “Generally, differences in a mechanism’s internal structure affect behavior,” Ivanova says. “Building a system that has a broad macroscopic organization similar to that of the human brain could help ensure that it might be more aligned with humans down the road.”

In the rapidly developing world of AI, these systems are ripe for experimentation. After the team’s preprint was published, OpenAI announced its intention to add plug-ins to its GPT models.

“That plug-in system is actually very similar to what we suggest,” Ivanova adds. “It takes a modularity approach where the language model can be an interface to another specialized module within a system.”

While the OpenAI plug-in system will include features like booking flights and ordering food, rather than cognitively inspired features, it demonstrates that “the approach has a lot of potential,” Ivanova says.

The future of AI — and what it can tell us about ourselves

While our own brains might be the key to unlocking better, more powerful AIs, these AIs might also help us better understand ourselves. “When researchers try to study the brain and cognition, it’s often useful to have some smaller system where you can actually go in and poke around and see what’s going on before you get to the immense complexity,” Ivanova explains.

However, since human language is unique, animal and other model systems are harder to relate to it. That’s where LLMs come in.

“There are lots of surprising similarities between how one would approach the study of the brain and the study of an artificial neural network” like a large language model, she adds. “They are both information processing systems that have biological or artificial neurons to perform computations.”

In many ways, the human brain is still a black box, but openly available AIs offer a unique opportunity to see a synthetic system’s inner workings, modify its variables, and explore these corresponding systems like never before.

“It’s a really wonderful model that we have a lot of control over,” Ivanova says. “Neural networks — they are amazing.”

Along with Anna (Anya) Ivanova, Kyle Mahowald, and Evelina Fedorenko, the research team also includes Idan Blank (University of California, Los Angeles), as well as Nancy Kanwisher and Joshua Tenenbaum (Massachusetts Institute of Technology).

Honoring a visionary

Today marks the 10th anniversary of the passing of Pat McGovern, an extraordinary visionary and philanthropist whose legacy continues to inspire and impact the world. As the founder of International Data Group (IDG)—a premier information technology organization—McGovern was not just a pioneering figure in the technology media world, but also a passionate advocate for using technology for the greater good.

Under McGovern’s leadership, IDG became a global powerhouse, launching iconic publications such as Computerworld, Macworld, and PCWorld. His foresight also led to the creation of IDG Ventures, a network of venture funds around the world, including the notable IDG Capital in Beijing.

Beyond his remarkable business acumen, McGovern, with his wife, Lore, co-founded the McGovern Institute for Brain Research at MIT in 2000. This institute has been at the forefront of neuroscience research, contributing to groundbreaking advancements in perception, attention, memory, and artificial intelligence (AI), as well as discoveries with direct translational impact, such as CRISPR technology. CRISPR discoveries made at the McGovern Institute are now licensed for the first clinical application of genome editing in sickle cell disease.

Pat McGovern’s commitment to bettering humanity is further evidenced by the Patrick J. McGovern Foundation, which works in partnership with public, private, and social institutions to drive progress on our most pressing challenges through the use of artificial intelligence, data science, and key emerging technologies.

Remembering Pat McGovern

On this solemn anniversary, we reflect on Pat McGovern’s enduring influence through the words of those who knew him best.

Lore Harp McGovern
Co-founder and board member of the McGovern Institute for Brain Research

“Technology was Pat’s medium, the platform on which he built his amazing company 60 years ago. But it was people who truly motivated Pat, and he empowered and encouraged them to reach for the stars. He lived by the motto, ‘let’s try it,’ and believed that nothing was out of bounds. His goal was to help create a more just and peaceful world, and establishing the McGovern Institute was our way to give back meaningfully to this world. I know he would be so proud of what has been achieved and what is yet to come.”

Robert Desimone
Director of the McGovern Institute for Brain Research

“Pat McGovern had a vision for an international community of scientists and students drawn together to collaborate on understanding the brain. This vision has been realized in the McGovern Institute, and we are now seeing the profound advances in our understanding of the brain and even clinical applications that Pat predicted would follow.”

Hugo Shong
Chairman of IDG Capital

“Pat’s impact on technology, science and research is immeasurable. A man of tremendous vision, he grew IDG out of Massachusetts and made it into one of the world’s most recognized brands in its space, forging partnerships and winning friends wherever he went. He applied that very same vision and energy to the McGovern Institute and the Patrick J. McGovern Foundation, in support of their impressive and necessary causes. I know he would be extremely proud of what both organizations have achieved thus far, and particularly how their work has broken technological frontiers and bettered the lives of millions.”

Vilas Dhar
President of the Patrick J. McGovern Foundation

“Patrick J. McGovern was more than a tech mogul; he was a visionary who believed in the power of information to empower people and improve societies. His work has had a profound effect on public policy and education, laying the groundwork for a more informed and connected world and guiding our work to ensure that artificial intelligence is used to sustain a human-centered world that creates economic and social opportunity for all. On a personal level, Pat’s leadership was characterized by a genuine care for his employees and a belief in their potential. He created a culture of curiosity, encouraging humanity to explore, innovate, and dream big. His spirit lives on in every philanthropic activity we undertake.”

Genevieve Juillard
CEO of IDG 

“The legacy of Pat McGovern is felt not just in Boston, but around the world—by the thousands of IDG customers and by people like me who have the privilege to work at IDG, 60 years after he founded it. His innovative spirit and unwavering commitment to excellence continue to inspire and guide us.”

Sudhir Sethi
Founder and Chairman of Chiratae Ventures (formerly IDG Ventures)

“Pat McGovern was a visionary who foresaw the potential of technology in India and nurtured the ecosystem as an active participant. Pat enabled a launchpad for Chiratae Ventures, empowering our journey to become the leading home-grown venture capital fund in India today. Pat is a role model to entrepreneurs worldwide, and we honor his legacy with our annual ‘Chiratae Ventures Patrick J. McGovern Awards’ that celebrate courage and the spirit of entrepreneurship.”

Marc Benioff
Founder and CEO of Salesforce
wrote in the book “Future Forward” that “Pat McGovern was a gift to us all, a trailblazing visionary who showed an entire generation of entrepreneurs what it means to be a principle-based leader and how to lead with higher values.”

Pat McGovern’s memory lives on not just in the institutions and innovations he fostered, but in the countless lives he touched and transformed. Today, we celebrate a man who saw the future and helped us all move towards it with hope and determination.

For people who speak many languages, there’s something special about their native tongue

A new study of people who speak many languages has found that there is something special about how the brain processes their native language.

In the brains of these polyglots — people who speak five or more languages — the same language regions light up when they listen to any of the languages that they speak. In general, this network responds more strongly to languages in which the speaker is more proficient, with one notable exception: the speaker’s native language. When listening to one’s native language, language network activity drops off significantly.

The findings suggest there is something unique about the first language one acquires, which allows the brain to process it with minimal effort, the researchers say.

“Something makes it a little bit easier to process — maybe it’s that you’ve spent more time using that language — and you get a dip in activity for the native language compared to other languages that you speak proficiently,” says Evelina Fedorenko, an associate professor of neuroscience at MIT, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Saima Malik-Moraleda, a graduate student in the Speech and Hearing Bioscience and Technology Program at Harvard University, and Olessia Jouravlev, a former MIT postdoc who is now an associate professor at Carleton University, are the lead authors of the paper, which appears today in the journal Cerebral Cortex.

Many languages, one network

McGovern Investigator Ev Fedorenko in the Martinos Imaging Center at MIT. Photo: Caitlin Cunningham

The brain’s language processing network, located primarily in the left hemisphere, includes regions in the frontal and temporal lobes. In a 2021 study, Fedorenko’s lab found that in the brains of polyglots, the language network was less active when listening to their native language than the language networks of people who speak only one language.

In the new study, the researchers wanted to expand on that finding and explore what happens in the brains of polyglots as they listen to languages in which they have varying levels of proficiency. Studying polyglots can help researchers learn more about the functions of the language network, and how languages learned later in life might be represented differently than a native language or languages.

“With polyglots, you can do all of the comparisons within one person. You have languages that vary along a continuum, and you can try to see how the brain modulates responses as a function of proficiency,” Fedorenko says.

For the study, the researchers recruited 34 polyglots, each of whom had at least some degree of proficiency in five or more languages but were not bilingual or multilingual from infancy. Sixteen of the participants spoke 10 or more languages, including one who spoke 54 languages with at least some proficiency.

Each participant was scanned with functional magnetic resonance imaging (fMRI) as they listened to passages read in eight different languages. These included their native language, a language they were highly proficient in, a language they were moderately proficient in, and a language in which they described themselves as having low proficiency.

They were also scanned while listening to four languages they didn’t speak at all. Two of these were languages from the same family (such as Romance languages) as a language they could speak, and two were languages completely unrelated to any languages they spoke.

The passages used for the study came from two different sources, which the researchers had previously developed for other language studies. One was a set of Bible stories recorded in many different languages, and the other consisted of passages from “Alice in Wonderland” translated into many languages.

Brain scans revealed that the language network lit up the most when participants listened to languages in which they were the most proficient. However, that did not hold true for the participants’ native languages, which activated the language network much less than non-native languages in which they had similar proficiency. This suggests that people are so proficient in their native language that the language network doesn’t need to work very hard to interpret it.

“As you increase proficiency, you can engage linguistic computations to a greater extent, so you get these progressively stronger responses. But then if you compare a really high-proficiency language and a native language, it may be that the native language is just a little bit easier, possibly because you’ve had more experience with it,” Fedorenko says.

Brain engagement

The researchers saw a similar phenomenon when polyglots listened to languages that they don’t speak: Their language network was more engaged when they listened to languages related to one they could understand than when they listened to completely unfamiliar languages.

“Here we’re getting a hint that the response in the language network scales up with how much you understand from the input,” Malik-Moraleda says. “We didn’t quantify the level of understanding here, but in the future we’re planning to evaluate how much people are truly understanding the passages that they’re listening to, and then see how that relates to the activation.”

The researchers also found that a brain network known as the multiple demand network, which turns on whenever the brain is performing a cognitively demanding task, also becomes activated when listening to languages other than one’s native language.

“What we’re seeing here is that the language regions are engaged when we process all these languages, and then there’s this other network that comes in for non-native languages to help you out because it’s a harder task,” Malik-Moraleda says.

In this study, most of the polyglots began studying their non-native languages as teenagers or adults, but in future work, the researchers hope to study people who learned multiple languages from a very young age. They also plan to study people who learned one language from infancy but moved to the United States at a very young age and began speaking English as their dominant language, while becoming less proficient in their native language, to help disentangle the effects of proficiency versus age of acquisition on brain responses.

The research was funded by the McGovern Institute for Brain Research, MIT’s Department of Brain and Cognitive Sciences, and the Simons Center for the Social Brain.

How the brain coordinates speaking and breathing

MIT researchers have discovered a brain circuit that drives vocalization and ensures that you talk only when you breathe out, and stop talking when you breathe in.

McGovern Investigator Fan Wang. Photo: Caitlin Cunningham

The newly discovered circuit controls two actions that are required for vocalization: narrowing of the larynx and exhaling air from the lungs. The researchers also found that this vocalization circuit is under the command of a brainstem region that regulates the breathing rhythm, which ensures that breathing remains dominant over speech.

“When you need to breathe in, you have to stop vocalization. We found that the neurons that control vocalization receive direct inhibitory input from the breathing rhythm generator,” says Fan Wang, an MIT professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Jaehong Park, a Duke University graduate student who is currently a visiting student at MIT, is the lead author of the study, which appears today in Science. Other authors of the paper include MIT technical associates Seonmi Choi and Andrew Harrahill, former MIT research scientist Jun Takatoh, and Duke University researchers Shengli Zhao and Bao-Xia Han.

Vocalization control

Located in the larynx, the vocal cords are two muscular bands that can open and close. When they are mostly closed, or adducted, air exhaled from the lungs generates sound as it passes through the cords.

The MIT team set out to study how the brain controls this vocalization process, using a mouse model. Mice communicate with each other using sounds known as ultrasonic vocalizations (USVs), which they produce using the unique whistling mechanism of exhaling air through a small hole between nearly closed vocal cords.

“We wanted to understand what are the neurons that control the vocal cord adduction, and then how do those neurons interact with the breathing circuit?” Wang says.

To figure that out, the researchers used a technique that allows them to map the synaptic connections between neurons. They knew that vocal cord adduction is controlled by laryngeal motor neurons, so they began by tracing backward to find the neurons that innervate those motor neurons.

This revealed that one major source of input is a group of premotor neurons found in the hindbrain region called the retroambiguus nucleus (RAm). Previous studies have shown that this area is involved in vocalization, but it wasn’t known exactly which part of the RAm was required or how it enabled sound production.

Laryngeal premotor neurons (green) and Fos (magenta) labeling in the RAm. Image: Fan Wang

The researchers found that these synaptic tracing-labeled RAm neurons were strongly activated during USVs. This observation prompted the team to use an activity-dependent method to target these vocalization-specific RAm neurons, termed RAmVOC. They used chemogenetics and optogenetics to explore what would happen if they silenced or stimulated their activity. When the researchers blocked the RAmVOC neurons, the mice were no longer able to produce USVs or any other kind of vocalization. Their vocal cords did not close, and their abdominal muscles did not contract, as they normally do during exhalation for vocalization.

Conversely, when the RAmVOC neurons were activated, the vocal cords closed, the mice exhaled, and USVs were produced. However, if the stimulation lasted two seconds or longer, these USVs would be interrupted by inhalations, suggesting that the process is under the control of the same part of the brain that regulates breathing.

“Breathing is a survival need,” Wang says. “Even though these neurons are sufficient to elicit vocalization, they are under the control of breathing, which can override our optogenetic stimulation.”

Rhythm generation

Additional synaptic mapping revealed that neurons in a part of the brainstem called the pre-Bötzinger complex, which acts as a rhythm generator for inhalation, provide direct inhibitory input to the RAmVOC neurons.

“The pre-Bötzinger complex generates inhalation rhythms automatically and continuously, and the inhibitory neurons in that region project to these vocalization premotor neurons and essentially can shut them down,” Wang says.

This ensures that breathing remains dominant over speech production, and that we have to pause to breathe while speaking.

The researchers believe that although human speech production is more complex than mouse vocalization, the circuit they identified in mice plays a conserved role in speech production and breathing in humans.

“Even though the exact mechanism and complexity of vocalization in mice and humans is really different, the fundamental vocalization process, called phonation, which requires vocal cord closure and the exhalation of air, is shared in both the human and the mouse,” Park says.

The researchers now hope to study how other functions such as coughing and swallowing food may be affected by the brain circuits that control breathing and vocalization.

The research was funded by the National Institutes of Health.