Mutations of a gene called Foxp2 have been linked to a type of speech disorder called apraxia that makes it difficult to produce sequences of sound. A new study from MIT and National Yang Ming Chiao Tung University sheds light on how this gene controls the ability to produce speech.
In a study of mice, the researchers found that mutations in Foxp2 disrupt the formation of dendrites and neuronal synapses in the brain’s striatum, which plays important roles in the control of movement. Mice with these mutations also showed impairments in their ability to produce the high-frequency sounds that they use to communicate with other mice.
Those malfunctions arise because Foxp2 mutations prevent the proper assembly of motor proteins, which move molecules within cells, the researchers found.
“These mice have abnormal vocalizations, and in the striatum there are many cellular abnormalities,” says Ann Graybiel, an MIT Institute Professor, a member of MIT’s McGovern Institute for Brain Research, and an author of the paper. “This was an exciting finding. Who would have thought that a speech problem might come from little motors inside cells?”
Fu-Chin Liu PhD ’91, a professor at National Yang Ming Chiao Tung University in Taiwan, is the senior author of the study, which appears today in the journal Brain. Liu and Graybiel also worked together on a 2016 study of the potential link between Foxp2 and autism spectrum disorder. The lead authors of the new Brain paper are Hsiao-Ying Kuo and Shih-Yun Chen of National Yang Ming Chiao Tung University.
Speech control
Children with Foxp2-associated apraxia tend to begin speaking later than other children, and their speech is often difficult to understand. The disorder is believed to arise from impairments in brain regions, such as the striatum, that control the movements of the lips, mouth, and tongue. Foxp2 is also expressed in the brains of songbirds such as zebra finches and is critical to those birds’ ability to learn songs.
Foxp2 encodes a transcription factor, meaning that it can control the expression of many other target genes. Many species express Foxp2, but humans have a special form of Foxp2. In a 2014 study, Graybiel and colleagues found evidence that the human form of Foxp2, when expressed in mice, allowed the mice to accelerate the switch from declarative to procedural types of learning.
In that study, the researchers showed that mice engineered to express the human version of Foxp2, which differs from the mouse version by only two DNA base pairs, were much better at learning mazes and performing other tasks that require turning repeated actions into behavioral routines. Mice with human-like Foxp2 also had longer dendrites — the slender extensions that help neurons form synapses — in the striatum, which is involved in habit formation as well as motor control.
In the new study, the researchers wanted to explore how the Foxp2 mutation that has been linked with apraxia affects speech production, using ultrasonic vocalizations in mice as a proxy for speech. Many rodents and other animals such as bats produce these vocalizations to communicate with each other.
While previous studies, including the work by Liu and Graybiel in 2016, had suggested that Foxp2 affects dendrite growth and synapse formation, the mechanism for how that occurs was not known. In the new study, led by Liu, the researchers investigated one proposed mechanism, which is that Foxp2 affects motor proteins.
One of these molecular motors is the dynein protein complex, a large cluster of proteins that is responsible for shuttling molecules along microtubule scaffolds within cells.
“All kinds of molecules get shunted around to different places in our cells, and that’s certainly true of neurons,” Graybiel says. “There’s an army of tiny molecules that move molecules around in the cytoplasm or put them into the membrane. In a neuron, they may send molecules from the cell body all the way down the axons.”
A delicate balance
The dynein complex is made up of several other proteins. The most important of these is a protein called dynactin1, which interacts with microtubules, enabling the dynein motor to move along microtubules. In the new study, the researchers found that dynactin1 is one of the major targets of the Foxp2 transcription factor.
The researchers focused on the striatum, one of the regions where Foxp2 is most often found, and showed that the mutated version of Foxp2 is unable to suppress dynactin1 production. Without that brake in place, cells generate too much dynactin1. This upsets the delicate balance of dynein-dynactin1, which prevents the dynein motor from moving along microtubules.
Those motors are needed to shuttle molecules that are necessary for dendrite growth and synapse formation on dendrites. With those molecules stranded in the cell body, neurons are unable to form synapses to generate the proper electrophysiological signals they need to make speech production possible.
Mice with the mutated version of Foxp2 had abnormal ultrasonic vocalizations, which typically have a frequency of around 22 to 50 kilohertz. The researchers showed that they could reverse these vocalization impairments and the deficits in the molecular motor activity, dendritic growth, and electrophysiological activity by turning down the gene that encodes dynactin1.
Mutations of Foxp2 can also contribute to autism spectrum disorders and Huntington’s disease, through mechanisms that Liu and Graybiel previously studied in their 2016 paper and that many other research groups are now exploring. Liu’s lab is also investigating the potential role of abnormal Foxp2 expression in the subthalamic nucleus of the brain as a possible factor in Parkinson’s disease.
The research was funded by the Ministry of Science and Technology of Taiwan, the Ministry of Education of Taiwan, the U.S. National Institute of Mental Health, the Saks Kavanaugh Foundation, the Kristin R. Pressman and Jessica J. Pourian ’13 Fund, and Stephen and Anne Kott.
When he turned his ankle five years ago as an undergraduate playing pickup basketball at the University of Illinois, Wei-Chen (Eric) Wang SM ’22 knew his life would change in certain ways. For one thing, Wang, then a computer science major, wouldn’t be playing basketball anytime soon. He also assumed, correctly, that he might require physical therapy (PT).
What he did not foresee was that this minor injury would influence his career trajectory. While lying on the PT bench, Wang began to wonder: “Can I replicate what the therapist is doing using a robot?” It was an idle thought at the time. Today, however, his research involves robots and movement, closely related to what had seemed a passing fancy.
Wang continued his focus on computer science as an MIT graduate student, receiving his master’s in 2022 before deciding to pursue work of a more applied nature. He met Nidhi Seethapathi, who had joined MIT’s faculty a few months earlier as an assistant professor in electrical engineering and computer science and in brain and cognitive sciences, and he was intrigued by the notion of creating robots that could illuminate the key principles of movement—knowledge that might someday help people regain the ability to move comfortably after suffering from injury, stroke, or disease.
As the first PhD student in Seethapathi’s group and a MathWorks Fellow, Wang is charged with building machine learning-based models that can accurately predict and reproduce human movements. He will then use computer-simulated environments to visualize and evaluate the performance of these models.
To begin, he needs to gather data about specific human movements. One potential data collection method involves the placement of sensors or markers on different parts of the body to pinpoint their precise positions at any given moment. He can then try to calculate those positions in the future, as dictated by the equations of motion in physics.
The other method relies on computer vision-powered software that can automatically convert video footage to motion data. Wang prefers the latter approach, which he considers more natural. “We just look at what humans are doing and try to learn from that directly,” he explains. That’s also where machine learning comes in. “We use machine-learning tools to extract data from the video, and those data become the input to our model,” he adds. The model, in this case, is just another term for the robot brain.
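To make the video-based approach concrete, here is a minimal, hypothetical sketch (not the lab's actual pipeline) of how per-frame joint keypoints extracted from video by a pose-estimation tool might be turned into the kind of position-and-velocity features a movement model could take as input; the array shapes and frame rate are assumptions for illustration.

```python
# Hypothetical sketch (not the lab's actual pipeline): convert per-frame joint
# keypoints extracted from video by a pose-estimation tool into the kind of
# position-and-velocity features a movement model could take as input.
# Keypoints are assumed to be an array of shape (frames, joints, 2) in pixels.
import numpy as np

def keypoints_to_features(keypoints: np.ndarray, fps: float = 30.0) -> np.ndarray:
    """Stack joint positions and finite-difference velocities for each frame."""
    dt = 1.0 / fps
    positions = keypoints.reshape(keypoints.shape[0], -1)   # (frames, joints * 2)
    velocities = np.gradient(positions, dt, axis=0)         # same shape as positions
    return np.concatenate([positions, velocities], axis=1)  # (frames, 2 * joints * 2)

# Example with synthetic data: 90 frames, 17 joints tracked in 2-D.
demo_keypoints = np.random.rand(90, 17, 2)
features = keypoints_to_features(demo_keypoints)
print(features.shape)  # (90, 68)
```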
The near-term goal is not to make robots more natural, Wang notes. “We’re using [simulated] robots to understand how humans are moving and eventually to explain any kind of movement—or at least that’s the hope. That said, based on the general principles we’re able to abstract, we might someday build robots that can move more naturally.”
Wang is also collaborating on a project headed by postdoctoral fellow Antoine De Comité that focuses on robotic retrieval of objects—the movements required to remove books from a library shelf, for example, or to grab a drink from a refrigerator. While robots routinely excel at tasks such as grasping an object on a tabletop, performing naturalistic movements in three dimensions remains challenging.
Wang describes a video shown by a Stanford University scientist in which a robot destroyed a refrigerator while attempting to extract a beer. He and De Comité hope for better results with robots that have undergone reinforcement learning—an approach using deep learning in which desired motions are rewarded or reinforced whereas unwanted motions are discouraged.
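As a rough illustration of how rewarding desired motions and discouraging unwanted ones is encoded in practice, the hedged sketch below shows a toy reward function for an object-retrieval task; the state variables, weights, and penalty values are invented for the example and are not taken from the project described above.

```python
# Toy reward function for an object-retrieval task, illustrating how desired
# motions are rewarded and unwanted ones discouraged in reinforcement learning.
# The state variables, weights, and penalties are invented for this example.
import numpy as np

def retrieval_reward(gripper_pos, object_pos, collision: bool, joint_effort) -> float:
    """Reward moving the gripper toward the object; penalize collisions and effort."""
    distance = np.linalg.norm(np.asarray(gripper_pos) - np.asarray(object_pos))
    reward = -distance                                       # closer is better
    reward -= 0.01 * float(np.sum(np.square(joint_effort)))  # prefer smooth, low-effort motion
    if collision:                                            # e.g., hitting the shelf or fridge door
        reward -= 10.0                                       # strongly discourage unwanted contact
    return reward

print(retrieval_reward(gripper_pos=[0.2, 0.0, 0.5], object_pos=[0.2, 0.1, 0.5],
                       collision=False, joint_effort=[0.1, 0.0, 0.2]))
```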
If they succeed in designing a robot that can safely retrieve a beer, Wang says, then more important and delicate tasks could be within reach. Someday, a robot at PT might guide a patient through knee exercises or apply ultrasound to an arthritic elbow.
In Sierra Leone, war and illness have left up to 40,000 people requiring orthotics and prosthetics services, but there is a profound lack of access to specialized care, says Francesca Riccio-Ackerman, a biomedical engineer and PhD student studying health equity and health systems. There is just one fully certified prosthetist available for the thousands of patients in the African nation who are living with amputation, she notes. The ideal ratio is one prosthetist for every 250 patients, according to the World Health Organization and the International Society for Prosthetics and Orthotics.
The data point is significant for Riccio-Ackerman, who conducts research in the MIT Media Lab’s Biomechatronics Group and in the K. Lisa Yang Center for Bionics, both of which aim to improve translation of assistive technologies to people with disabilities. “We’re really focused on improving and augmenting human mobility,” she says. For Riccio-Ackerman, part of the quest to improve human mobility means ensuring that the people who need access to prosthetic care can get it—for the duration of their lives.
In September 2021, the Yang Center provided funding for Riccio-Ackerman to travel to Sierra Leone, where she witnessed the lingering physical effects of a brutal decade-long civil war that ended in 2002. Prosthetic and orthotic care in the country, where a vast number of patients are also disabled by untreated polio or diabetes, has become more elusive, she says, as global media attention on the war’s aftermath has subsided. “People with amputation need low-level, consistent care for years. There really needs to be a long-term investment in improving this.”
Through the Yang Center and supported by a fellowship from the new MIT Morningside Academy for Design, Riccio-Ackerman is designing and building a sustainable care and delivery model in Sierra Leone that aims to multiply the production of prosthetic limbs and strengthen the country’s prosthetic sector. “[We’re working] to improve access to orthotic and prosthetic services,” she says.
She is also helping to establish a supply chain for prosthetic limb and orthotic brace parts and equipping clinics with machines and infrastructure to serve more patients. In January 2023, her team launched a four-year collaboration with the Sierra Leone Ministry of Health and Sanitation. One of the goals of the joint effort is to enable Sierra Leoneans to obtain professional prosthetics training, so they can care for their own community without leaving home.
From engineering to economics
Riccio-Ackerman was drawn to issues around human mobility after witnessing her aunt suffer from rheumatoid arthritis. “My aunt was young, but she looked like she was 80 or 90. She was sick, in pain, in a wheelchair— a young spirit in an old body,” she says.
As a biomedical engineering undergraduate student at Florida International University, Riccio-Ackerman worked on clinical trials for neural-enabled myoelectric arms controlled by nerves in the body. She says that the technology was thrilling yet heartbreaking. She would often have to explain to patients who participated in testing that they couldn’t take the devices home and that they may never be covered by insurance.
Riccio-Ackerman began asking questions: “What factors determine who gets an amputation? Why are we making devices that are so expensive and inaccessible?” This sense of injustice inspired her to pivot away from device design and toward a master’s degree in health economics and policy at the SDA Bocconi School of Management in Milan.
She began work as a research specialist with Hugh Herr SM ’93, professor of arts and sciences at the MIT Media Lab and codirector of the Yang Center, helping to study communities that were medically neglected in prosthetic care. “I knew that the devices weren’t getting to the people who need them, and I didn’t know if the best way to solve it was through engineering,” Riccio-Ackerman explains.
While Riccio-Ackerman’s PhD should be finished within three years, she’s only at the beginning of her health care equity work. “We’re forging ahead in Sierra Leone and thinking about translating our strategy and methodologies to other communities around the globe that could benefit,” she says. “We hope to be able to do this in many, many countries in the future.”
In early December 2022, a middle-aged woman from California arrived at Boston’s Brigham and Women’s Hospital for the amputation of her right leg below the knee following an accident. This was no ordinary procedure. At the end of her residual limb, surgeons attached a titanium fixture through which they threaded eight thin, electrically conductive wires. These flexible leads, implanted on her leg muscles, would, in the coming months, connect to a robotic, battery-powered prosthetic ankle and foot.
The goal of this unprecedented surgery, driven by MIT researchers from the K. Lisa Yang Center for Bionics at MIT, was the restoration of near-natural function to the patient, enabling her to sense and control the position and motion of her ankle and foot—even with her eyes closed.
In the K. Lisa Yang Center for Bionics, codirector Hugh Herr SM ’93 and graduate student Christopher Shallal are working to return mobility to people disabled by disease or physical trauma. Photo: Tony Luong
“The brain knows exactly how to control the limb, and it doesn’t matter whether it is flesh and bone or made of titanium, silicon, and carbon composite,” says Hugh Herr SM ’93, professor of media arts and sciences, head of the MIT Media Lab’s Biomechatronics Group, codirector of the Yang Center, and an associate member of MIT’s McGovern Institute for Brain Research.
For Herr, in attendance during that long day, the surgery represented a critical milestone in a decades-long mission to develop technologies returning mobility to people disabled by disease or physical trauma. His research combines a dizzying range of disciplines—electrical, mechanical, tissue, and biomedical engineering, as well as neuroscience and robotics—and has yielded pathbreaking results. Herr’s more than 100 patents include a computer-controlled knee and powered ankle-foot prosthesis and have enabled thousands of people around the world to live more on their own terms, including Herr.
Surmounting catastrophe
For much of Herr’s life, “go” meant “up.”
“Starting when I was eight, I developed an extraordinary passion, an absolute obsession, for climbing; it’s all I thought about in life,” says Herr. He aspired “to be the best climber in the world,” a goal he nearly achieved in his teenage years, enthralled by the “purity” of ascending mountains ropeless and solo in record times, by “a vertical dance, a balance between physicality and mind control.”
McGovern Institute Associate Investigator Hugh Herr. Photo: Jimmy Day / MIT Media Lab
At 17, Herr became disoriented while climbing New Hampshire’s Mt. Washington during a blizzard. Days in the cold permanently damaged his legs, which had to be amputated below his knees. His rescue cost another man’s life, and Herr was despondent, disappointed in himself, and fearful for his future.
Then, following months of rehabilitation, he felt compelled to test himself. His first weekend home, when he couldn’t walk without canes and crutches, he headed back to the mountains. “I hobbled to the base of this vertical cliff and started ascending,” he recalls. “It brought me joy to realize that I was still me, the same person.”
But he also recognized that as a person with amputated limbs, he faced severe disadvantages. “Society doesn’t look kindly on people with unusual bodies; we are viewed as crippled and weak, and that did not sit well with me.” Unable to tolerate both the new physical and social constraints on his life, Herr determined to view his disability not as a loss but as an opportunity. “I think the rage was the catapult that led me to do something that was without precedent,” he says.
Lifelike limb
On hand in the surgical theater in December was a member of Herr’s Biomechatronics Group for whom the bionic limb procedure also held special resonance. Christopher Shallal, a second-year graduate student in the Harvard-MIT Health Sciences and Technology program who received bilateral lower limb amputations at birth, worked alongside surgeon Matthew Carty testing the electric leads before implantation in the patient. Shallal found this, his first direct involvement with a reconstruction surgery, deeply fulfilling.
“Ever since I was a kid, I’ve wanted to do medicine plus engineering,” says Shallal. “I’m really excited to work on this bionic limb reconstruction, which will probably be one of the most advanced systems yet in terms of neural interfacing and control, with a far greater range of motion possible.”
Herr and Shallal are working on a next-generation, biomimetic limb with implanted sensors that can relay signals between the external prosthesis and muscles in the remaining limb. Photo: Tony Luong
Like other Herr lab designs, the new prosthesis features onboard, battery-powered propulsion, microprocessors, and tunable actuators. But this next-generation, biomimetic limb represents a major leap forward, replacing electrodes sited on a patient’s skin, subject to sweat and other environmental threats, with implanted sensors that can relay signals between the external prosthesis and muscles in the remaining limb.
This system takes advantage of a breakthrough technique invented several years ago by the Herr lab called CMI (for cutaneous mechanoneural interface), which constructs muscle-skin-nerve bundles at the amputation site. Muscle actuators controlled by computers on board the external prosthesis apply forces on skin cells implanted within the amputated residuum when a person with amputation touches an object with their prosthesis.
With CMI and electric leads connecting the prosthesis to these muscle actuators within the residual limb, the researchers hypothesize that a person with an amputation will be able to “feel” their prosthetic leg step onto the ground. This sensory capability is the holy grail for persons with major limb loss. After recovery from her surgery, the woman from California will be wearing Herr’s latest state-of-the-art prosthetic system in the lab.
‘Tinkering’ with the body
Not all artificial limbs emulate those that humans are born with. “You can make them however you want, swapping them in and out depending on what you want to do, and they can take you anywhere,” Herr says. Committed to extreme climbing even after his accident, Herr came up with special limbs that became a commercial hit early in his career. His designs made it possible for someone with amputated legs to run and dance.
But he also knew the day-to-day discomfort of navigating on flatter earth with most prostheses. He won his first patent during his senior year of college for a fluid-controlled socket attachment designed to reduce the pain of walking. Growing up in a Mennonite family skilled in handcrafting things they needed, and in a larger community that was disdainful of technology, Herr says he had “difficulty trusting machines.” Yet by the time he began his master’s program at MIT, intent on liberating persons with limb amputation to live more fully in the world, he had embraced the tools of science and engineering as the means to this end.
For Shallal, Herr was an early icon, and his inventions and climbing exploits served as inspiration. “I’d known about Hugh since middle school; he was famous among those with amputations,” he says. “As a kid, I liked tinkering with things, and I kind of saw my body as a canvas, a place where I could explore different boundaries and expand possibilities for myself and others with amputations.” In school, Shallal sometimes encountered resistance to his prostheses. “People would say I couldn’t do certain things, like running and playing different sports, and I found these barriers frustrating,” he says. “I did things in my own way and didn’t want people to pity me.”
In fact, Shallal felt he could do some things better than his peers. In high school, he used a 3-D printer to make a mobile phone charger case he could plug into his prosthesis. “As a kid, I would wear long pants to hide my legs, but as the technology got cooler, I started wearing shorts,” he says. “I got comfortable and liked kind of showing off my legs.”
Global impact
December’s surgery was the first phase in the bionic limb project. Shallal will be following up with the patient over many months, ensuring that the connections between her limb and implanted sensors function and provide appropriate sensorimotor data for the built-in processor. Research on this and other patients to determine the impact of these limbs on gait and ease of managing slopes, for instance, will form the basis for Shallal’s dissertation.
“After graduation, I’d be really interested in translating technology out of the lab, maybe doing a startup related to neural interfacing technology,” he says. “I watched Inspector Gadget on television when I was a kid. Making the tool you need at the time you need it to fix problems would be my dream.”
Herr will be overseeing Shallal’s work, as well as a suite of research efforts propelled by other graduate students, postdocs, and research scientists that together promise to strengthen the technology behind this generation of biomimetic prostheses.
One example: devising an innovative method for measuring muscle length and velocity with tiny implanted magnets. In work published in November 2022, researchers including Herr; project lead Cameron Taylor SM ’16, PhD ’20, a research associate in the Biomechatronics Group; and Brown University partners demonstrated that this new tool, magnetomicrometry, yields the kind of high-resolution data necessary for even more precise bionic limb control. The Herr lab awaits FDA approval on human implantation of the magnetic beads.
These intertwined initiatives are central to the ambitious mission of the K. Lisa Yang Center for Bionics, established with a $24 million gift from Yang in 2021 to tackle transformative bionic interventions to address an extensive range of human limitations.
Herr is committed to making the broadest possible impact with his technologies. “Shoes and braces hurt, so my group is developing the science of comfort—designing mechanical parts that attach to the body and transfer loads without causing pain.” These inventions may prove useful not just to people living with amputation but to patients suffering from arthritis or other diseases affecting muscles, joints, and bones, whether in lower limbs or arms and hands.
The Yang Center aims to make prosthetic and orthotic devices more accessible globally, so Herr’s group is ramping up services in Sierra Leone, where civil war left tens of thousands missing limbs after devastating machete attacks. “We’re educating clinicians, helping with supply chain infrastructure, introducing novel assistive technology, and developing mobile delivery platforms,” he says.
In the end, says Herr, “I want to be in the business of designing not more and more powerful tools but designing new bodies.” Herr uses himself as an example: “I walk on two very powerful robots, but they’re not linked to my skeleton, or to my brain, so when I walk it feels like I’m on powerful machines that are not me. What I want is such a marriage between human physiology and electromechanics that a person feels at one with the synthetic, designed content of their body.”
Nidhi Seethapathi was first drawn to using powerful yet simple models to understand elaborate patterns when she learned about Newton’s laws of motion as a high school student in India. She was fascinated by the idea that wonderfully complex behaviors can arise from a set of objects that follow a few elementary rules.
Now an assistant professor at MIT, Seethapathi seeks to capture the intricacies of movement in the real world, using computational modeling as well as input from theory and experimentation. “[Theoretical physicist and Nobel laureate] Richard Feynman ’39 once said, ‘What I cannot create, I do not understand,’” Seethapathi says. “In that same spirit, the way I try to understand movement is by building models that move the way we do.”
Models of locomotion in the real world
Seethapathi—who holds a shared faculty position between the Department of Brain and Cognitive Sciences and the Department of Electrical Engineering and Computer Science’s Faculty of Artificial Intelligence + Decision-Making, which is housed in the Schwarzman College of Computing and the School of Engineering—recalls a moment during her undergraduate years studying mechanical engineering in Mumbai when a professor asked students to pick an aspect of movement to examine in detail. While most of her peers chose to analyze machines, Seethapathi selected the human hand. She was astounded by its versatility, she says, and by the number of variables, referred to by scientists as “degrees of freedom,” that are needed to characterize routine manual tasks. The assignment made her realize that she wanted to explore the diverse ways in which the entire human body can move.
Also an investigator at the McGovern Institute for Brain Research, Seethapathi pursued graduate research at The Ohio State University Movement Lab, where her goal was to identify the key elements of human locomotion. At that time, most people in the field were analyzing simple movements, she says, “but I was interested in broadening the scope of my models to include real-world behavior. Given that movement is so ubiquitous, I wondered: What can this model say about everyday life?”
After earning her PhD from Ohio State in 2018, Seethapathi continued this line of research as a postdoctoral fellow at the University of Pennsylvania. New computer vision tools to track human movement from video footage had just entered the scene, and during her time at UPenn, Seethapathi sought to expand her skillset to include computer vision and applications to movement rehabilitation.
At MIT, Seethapathi continues to extend the range of her studies of human movement, looking at how locomotion can evolve as people grow and age, and how it can adapt to anatomical changes and even adjust to shifts in weather, which can alter ground conditions. Her investigations now encompass other species as part of an effort to determine how creatures with different morphologies and habitats regulate their movements.
The models Seethapathi and her team create make predictions about human movements that can later be verified or refuted by empirical tests. While relatively simple experiments can be carried out on treadmills, her group is developing measurement systems incorporating wearable sensors and video-based sensing to measure movement data that have traditionally been hard to obtain outside the laboratory.
Although Seethapathi says she is primarily driven to uncover the fundamental principles that govern movement behavior, she believes her work also has practical applications.
“When people are treated for a movement disorder, the goal is to impact their movements in the real world,” she says. “We can use our predictive models to see how a particular intervention will affect a person’s trajectory. The hope is that our models can help put the individual on the right track to recovery as early as possible.”
Eight MIT faculty members are among more than 250 leaders from academia, the arts, industry, public policy, and research elected to the American Academy of Arts and Sciences, the academy announced April 19.
One of the nation’s most prestigious honorary societies, the academy is also a leading center for independent policy research. Members contribute to academy publications, as well as studies of science and technology policy, energy and global security, social policy and American institutions, the humanities and culture, and education.
Those elected from MIT in 2023 are:
Arnaud Costinot, professor of economics;
James J. DiCarlo, Peter de Florez Professor of Brain and Cognitive Sciences, director of the MIT Quest for Intelligence, and McGovern Institute Investigator;
Piotr Indyk, the Thomas D. and Virginia W. Cabot Professor of Electrical Engineering and Computer Science;
Senthil Todadri, professor of physics;
Evelyn N. Wang, Ford Professor of Engineering (on leave) and director of the Department of Energy’s Advanced Research Projects Agency-Energy;
Boleslaw Wyslouch, professor of physics and director of the Laboratory for Nuclear Science and Bates Research and Engineering Center;
Yukiko Yamashita, professor of biology and core member of the Whitehead Institute; and
Wei Zhang, professor of mathematics.
“With the election of these members, the academy is honoring excellence, innovation, and leadership and recognizing a broad array of stellar accomplishments. We hope every new member celebrates this achievement and joins our work advancing the common good,” says David W. Oxtoby, president of the academy.
Since its founding in 1780, the academy has elected leading thinkers from each generation, including George Washington and Benjamin Franklin in the 18th century, Maria Mitchell and Daniel Webster in the 19th century, and Toni Morrison and Albert Einstein in the 20th century. The current membership includes more than 250 Nobel and Pulitzer Prize winners.
Real-time feedback about brain activity can help adolescents with depression or anxiety quiet their minds, according to a new study from MIT scientists. The researchers, led by McGovern research affiliate Susan Whitfield-Gabrieli, have used functional magnetic resonance imaging (fMRI) to show patients what’s happening in their brain as they practice mindfulness inside the scanner and to encourage them to focus on the present. They report in the journal Molecular Psychiatry that doing so settles down neural networks that are associated with symptoms of depression.
McGovern research affiliate Susan Whitfield-Gabrieli in the Martinos Imaging Center.
“We know this mindfulness meditation is really good for kids and teens, and we think this real-time fMRI neurofeedback is really a way to engage them and provide a visual representation of how they’re doing,” says Whitfield-Gabrieli. “And once we train people how to do mindfulness meditation, they can do it on their own at any time, wherever they are.”
The approach could be a valuable tool to alleviate or prevent depression in young people, which has been on the rise in recent years and escalated alarmingly during the Covid-19 pandemic. “This has gone from bad to catastrophic, in my perspective,” Whitfield-Gabrieli says. “We have to think out of the box and come up with some really innovative ways to help.”
Default mode network
Mindfulness meditation, in which practitioners focus their awareness on the present moment, can modulate activity within the brain’s default mode network, which is so named because it is most active when a person is not focused on any particular task. Two hubs within the default mode network, the medial prefrontal cortex and the posterior cingulate cortex, are of particular interest to Whitfield-Gabrieli and her colleagues, due to a potential role in the symptoms of depression and anxiety.
“These two core hubs are very engaged when we’re thinking about the past or the future and we’re not really engaged in the present moment,” she explains. “If we’re in a healthy state of mind, we may be reminiscing about the past or planning for the future. But if we’re depressed, that reminiscing may turn into rumination or obsessively rehashing the past. If we’re particularly anxious, we may be obsessively worrying about the future.”
Whitfield-Gabrieli explains that these key hubs are often hyperconnected in people with anxiety and depression. The more tightly correlated the activity of the two regions is, the worse a person’s symptoms are likely to be. Mindfulness, she says, can help interrupt that hyperconnectivity.
“Mindfulness really helps to focus on the now, which just precludes all of this mind wandering and repetitive negative thinking,” she explains. In fact, she and her colleagues have found that mindfulness practice can reduce stress and improve attention in children. But she acknowledges that it can be difficult to engage young people and help them focus on the practice.
Tuning the mind
To help people visualize the benefits of their mindfulness practice, the researchers developed a game that can be played while an MRI scanner tracks a person’s brain activity. On a screen inside the scanner, the participant sees a ball and two circles. The circle at the top of the screen represents a desirable state in which the activity of the brain’s default mode network has been reduced, and the activity of a network the brain uses to focus on attention-demanding tasks—the frontoparietal network—has increased. An initial fMRI scan identifies these networks in each individual’s brain, creating a customized mental map on which the game is based.
As the person practices mindfulness meditation, which they learn prior to entering the scanner, the default mode network in the brain quiets while the frontoparietal network activates. When the scanner detects this change, the ball moves and eventually enters its target. With an initial success, the target shrinks, encouraging even more focus. When the participant’s mind wanders from their task, the default mode network activation increases (relative to the frontoparietal network) and the ball moves down towards the second circle, which represents an undesirable state. “Basically, they’re just moving this ball with their brain,” Whitfield-Gabrieli says. “They’re training their brain to tune their mind. And they love it.”
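A minimal sketch of the feedback logic described above, with invented numbers and function names: in the actual study the activation values come from real-time fMRI in each participant's individually mapped networks, but the ball's position is driven by the balance between frontoparietal and default mode network activity.

```python
# Illustrative sketch only (not the study's software): the feedback ball's
# vertical position is driven by the balance between frontoparietal network
# (FPN) and default mode network (DMN) activation. The numbers below are
# invented for the example.
import numpy as np

def ball_position(dmn_activity: float, fpn_activity: float) -> float:
    """Return a position in [-1, 1]; +1 is the desirable state (FPN up, DMN down)."""
    score = fpn_activity - dmn_activity
    return float(np.tanh(score))  # squash so the ball stays on screen

print(ball_position(dmn_activity=0.2, fpn_activity=0.8))  # focused: ball rises toward the target
print(ball_position(dmn_activity=0.9, fpn_activity=0.3))  # mind-wandering: ball drops
```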
Nine individuals between the ages of 17 and 19 with a history of major depression or anxiety disorders tried this new approach to mindfulness training, and for each of them, Whitfield-Gabrieli’s team saw a reduction in connectivity within the default mode network. Now they are working to determine whether an electroencephalogram, in which brain activity is measured with noninvasive electrodes, can be used to provide similar neurofeedback during mindfulness training—an approach that could be more accessible for broad clinical use.
Whitfield-Gabrieli notes that hyperconnectivity in the default mode network is also associated with psychosis, and she and her team have found that mindfulness meditation with real-time fMRI feedback can help reduce symptoms in adults with schizophrenia. Future studies are planned to investigate how the method impacts teens’ ability to establish a mindfulness practice and its potential effects on depression symptoms.
Researchers at the McGovern Institute and the Broad Institute of MIT and Harvard have harnessed a natural bacterial system to develop a new protein delivery approach that works in human cells and animals. The technology, described today in Nature, can be programmed to deliver a variety of proteins, including ones for gene editing, to different cell types. The system could potentially be a safe and efficient way to deliver gene therapies and cancer therapies.
Led by McGovern Institute investigator and Broad Institute core member Feng Zhang, the team took advantage of a tiny syringe-like injection structure, produced by a bacterium, that naturally binds to insect cells and injects a protein payload into them. The researchers used the artificial intelligence tool AlphaFold to engineer these syringe structures to deliver a range of useful proteins to both human cells and cells in live mice.
“This is a really beautiful example of how protein engineering can alter the biological activity of a natural system,” said Joseph Kreitz, the study’s first author and a graduate student in Zhang’s lab. “I think it substantiates protein engineering as a useful tool in bioengineering and the development of new therapeutic systems.”
“Delivery of therapeutic molecules is a major bottleneck for medicine, and we will need a deep bench of options to get these powerful new therapies into the right cells in the body,” added Zhang. “By learning from how nature transports proteins, we were able to develop a new platform that can help address this gap.”
Zhang is senior author on the study and is also the James and Patricia Poitras Professor of Neuroscience at MIT and an investigator at the Howard Hughes Medical Institute.
Injection via contraction
Graduate student Joseph Kreitz holds a 3D printed bacteriophage. Photo: Steph Stevens
Symbiotic bacteria use the roughly 100-nanometer-long syringe-like machines to inject proteins into host cells to help adjust the biology of their surroundings and enhance their survival. These machines, called extracellular contractile injection systems (eCISs), consist of a rigid tube inside a sheath that contracts, driving a spike on the end of the tube through the cell membrane. This forces protein cargo inside the tube to enter the cell.
On the outside of one end of the eCIS are tail fibers that recognize specific receptors on the cell surface and latch on. Previous research has shown that eCISs can naturally target insect and mouse cells, but Kreitz thought it might be possible to modify them to deliver proteins to human cells by reengineering the tail fibers to bind to different receptors.
Using AlphaFold, which predicts a protein’s structure from its amino acid sequence, the researchers redesigned tail fibers of an eCIS produced by Photorhabdus bacteria to bind to human cells. By reengineering another part of the complex, the scientists tricked the syringe into delivering a protein of their choosing, in some cases with remarkably high efficiency.
The team made eCISs that targeted cancer cells expressing the EGF receptor and showed that they killed almost 100 percent of the cells, but did not affect cells without the receptor. Though efficiency depends in part on the receptor the system is designed to target, Kreitz says that the findings demonstrate the promise of the system with thoughtful engineering.
Photorhabdus virulence cassettes (green) binding to insect cells (blue) prior to injection of payload proteins. Image: Joseph Kreitz | McGovern Institute, Broad Institute
The researchers also used an eCIS to deliver proteins to the brain in live mice — where it didn’t provoke a detectable immune response, suggesting that eCISs could one day be used to safely deliver gene therapies to humans.
Packaging proteins
Kreitz says the eCIS system is versatile, and the team has already used it to deliver a range of cargos including base editor proteins (which can make single-letter changes to DNA), proteins that are toxic to cancer cells, and Cas9, a large DNA-cutting enzyme used in many gene editing systems.
Cancer cells killed by programmed Photorhabdus virulence cassettes (PVCs), imaged with a scanning electron microscope. Image: Joseph Kreitz | McGovern Institute, Broad Institute
In the future, Kreitz says researchers could engineer other components of the eCIS system to tune other properties, or to deliver other cargos such as DNA or RNA. He also wants to better understand the function of these systems in nature.
“We and others have shown that this type of system is incredibly diverse across the biosphere, but they are not very well characterized,” Kreitz said. “And we believe this type of system plays really important roles in biology that are yet to be explored.”
This work was supported in part by the National Institutes of Health, Howard Hughes Medical Institute, Poitras Center for Psychiatric Disorders Research at MIT, Hock E. Tan and K. Lisa Yang Center for Autism Research at MIT, K. Lisa Yang and Hock E. Tan Molecular Therapeutics Center at MIT, K. Lisa Yang Brain-Body Center at MIT, Broad Institute Programmable Therapeutics Gift Donors, The Pershing Square Foundation, William Ackman, Neri Oxman, J. and P. Poitras, Kenneth C. Griffin, BT Charitable Foundation, the Asness Family Foundation, the Phillips family, D. Cheng, and R. Metcalfe.
Artificial intelligence seems to have gotten a lot smarter recently. AI technologies are increasingly integrated into our lives — improving our weather forecasts, finding efficient routes through traffic, personalizing the ads we see and our experiences with social media.
Watercolor image of a robot with a human brain, created using the AI system DALL·E 2.
But with the debut of powerful new chatbots like ChatGPT, millions of people have begun interacting with AI tools that seem convincingly human-like. Neuroscientists are taking note — and beginning to dig into what these tools tell us about intelligence and the human brain.
The essence of human intelligence is hard to pin down, let alone engineer. McGovern scientists say there are many kinds of intelligence, and as humans, we call on many different kinds of knowledge and ways of thinking. ChatGPT’s ability to carry on natural conversations with its users has led some to speculate the computer model is sentient, but McGovern neuroscientists insist that the AI technology cannot think for itself.
Still, they say, the field may have reached a turning point.
“I still don’t believe that we can make something that is indistinguishable from a human. I think we’re a long way from that. But for the first time in my life I think there is a small, nonzero chance that it may happen in the next year,” says McGovern founding member Tomaso Poggio, who has studied both human intelligence and machine learning for more than 40 years.
Different sort of intelligence
Developed by the company OpenAI, ChatGPT is an example of a deep neural network, a type of machine learning system that has made its way into virtually every aspect of science and technology. These models learn to perform various tasks by identifying patterns in large datasets. ChatGPT works by scouring texts and detecting and replicating the ways language is used. Drawing on language patterns it finds across the internet, ChatGPT can design you a meal plan, teach you about rocket science, or write a high school-level essay about Mark Twain. With all of the internet as a training tool, models like this have gotten so good at what they do that they can seem all-knowing.
Nonetheless, language models have a restricted skill set. Play with ChatGPT long enough and it will surely give you some wrong information, even if its fluency makes its words deceptively convincing. “These models don’t know about the world, they don’t know about other people’s mental states, they don’t know how things are beyond whatever they can gather from how words go together,” says Postdoctoral Associate Anna Ivanova, who works with McGovern Investigators Evelina Fedorenko and Nancy Kanwisher as well as Jacob Andreas in MIT’s Computer Science and Artificial Intelligence Laboratory.
Such a model, the researchers say, cannot replicate the complex information processing that happens in the human brain. That doesn’t mean language models can’t be intelligent — but theirs is a different sort of intelligence than our own. “I think that there is an infinite number of different forms of intelligence,” says Poggio. “Engineers have been inventing some of these forms of intelligence since the beginning of the computers. ChatGPT is one. But it is very far from human intelligence.”
Under the hood
Just as there are many forms of intelligence, there are also many types of deep learning models — and McGovern researchers are studying the internals of these models to better understand the human brain.
A watercolor painting of a robot generated by DALL·E 2.
“These AI models are, in a way, computational hypotheses for what the brain is doing,” Kanwisher says. “Up until a few years ago, we didn’t really have complete computational models of what might be going on in language processing or vision. Once you have a way of generating actual precise models and testing them against real data, you’re kind of off and running in a way that we weren’t ten years ago.”
Artificial neural networks echo the design of the brain in that they are made of densely interconnected networks of simple units that organize themselves — but Poggio says it’s not yet entirely clear how they work.
No one expects that brains and machines will work in exactly the same ways, though some types of deep learning models are more humanlike in their internals than others. For example, a computer vision model developed by McGovern Investigator James DiCarlo responds to images in ways that closely parallel the activity in the visual cortex of animals who are seeing the same thing. DiCarlo’s team can even use their model’s predictions to create an image that will activate specific neurons in an animal’s brain.
Still, there is reason to be cautious in interpreting what artificial neural networks tell us about biology. “We shouldn’t just automatically assume that if we trained a deep network on a task, that it’s going to look like the brain,” says McGovern Associate Investigator Ila Fiete. Fiete acknowledges that it’s tempting to think of neural networks as models of the brain itself due to their architectural similarities — but she says so far, that idea remains largely untested.
McGovern Institute Associate Investigator Ila Fiete builds theoretical models of the brain. Photo: Caitlin Cunningham
She and her colleagues recently experimented with neural networks that estimate an object’s position in space by integrating information about its changing velocity.
In the brain, specialized neurons known as grid cells carry out this calculation, keeping us aware of where we are as we move through the world. Other researchers had reported not only that neural networks can do this successfully, but that those that do include components that behave remarkably like grid cells. They had argued that the need to do this kind of path integration must be the reason our brains have grid cells — but Fiete’s team found that artificial networks don’t need to mimic the brain to accomplish this brain-like task. They found that many neural networks can solve the same problem without grid cell-like elements.
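For readers unfamiliar with the task, here is a minimal sketch of path integration itself, the computation grid cells and these trained networks are thought to support: integrate velocity over time to keep a running estimate of position. The trajectory and sampling rate below are placeholders, not data from the study.

```python
# Minimal numpy sketch of the path-integration task (an assumed setup, not
# Fiete's code): integrate velocity over time to keep a running estimate of
# position. Trajectory and sampling rate are placeholders.
import numpy as np

def integrate_path(velocities: np.ndarray, dt: float, start=(0.0, 0.0)) -> np.ndarray:
    """Cumulatively sum velocity * dt to estimate position at every timestep."""
    return np.asarray(start) + np.cumsum(velocities * dt, axis=0)

# Example: 100 timesteps of 2-D velocity sampled at 10 Hz.
velocities = np.random.randn(100, 2) * 0.1
positions = integrate_path(velocities, dt=0.1)
print(positions[-1])  # estimated final location
```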
One way investigators might generate deep learning models that do work like the brain is to give them a problem that is so complex that there is only one way of solving it, Fiete says.
Language, she acknowledges, might be that complex.
“This is clearly an example of a super-rich task,” she says. “I think on that front, there is a hope that they’re solving such an incredibly difficult task that maybe there is a sense in which they mirror the brain.”
Language parallels
In Fedorenko’s lab, where researchers are focused on identifying and understanding the brain’s language processing circuitry, they have found that some language models do, in fact, mimic certain aspects of human language processing. Many of the most effective models are trained to do a single task: make predictions about word use. That’s what your phone is doing when it suggests words for your text message as you type. Models that are good at this, it turns out, can apply this skill to carrying on conversations, composing essays, and using language in other useful ways. Neuroscientists have found evidence that humans, too, rely on word prediction as a part of language processing.
Fedorenko and her team compared the activity of language models to the brain activity of people as they read or listened to words, sentences, and stories, and found that some models were a better match to human neural responses than others. “The models that do better on this relatively unsophisticated task — just guess what comes next — also do better at capturing human neural responses,” Fedorenko says.
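The comparison Fedorenko describes is typically made with an encoding-model analysis. The sketch below is a hedged, generic illustration of that logic rather than the lab's pipeline: fit a linear (ridge) map from a language model's sentence activations to measured brain responses, then score how well held-out responses are predicted. All arrays and dimensions are random placeholders.

```python
# Generic encoding-model sketch (illustration only, not the lab's pipeline):
# fit a linear map from language-model activations to measured brain responses,
# then score predictions on held-out sentences. Because the arrays below are
# random placeholders, the printed score will hover near zero.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
model_acts = rng.standard_normal((200, 768))   # 200 sentences x language-model features
brain_resp = rng.standard_normal((200, 50))    # 200 sentences x recorded brain channels

train, test = slice(0, 150), slice(150, 200)
encoder = Ridge(alpha=1.0).fit(model_acts[train], brain_resp[train])
pred = encoder.predict(model_acts[test])

# Correlate predicted and observed responses for each channel, then average.
scores = [np.corrcoef(pred[:, i], brain_resp[test][:, i])[0, 1]
          for i in range(brain_resp.shape[1])]
print(np.mean(scores))
```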
A watercolor painting of a language model, generated by DALL·E 2.
It’s a compelling parallel, suggesting computational models and the human brain may have arrived at a similar solution to a problem, even in the face of the biological constraints that have shaped the latter. For Fedorenko and her team, it’s sparked new ideas that they will explore, in part, by modifying existing language models — possibly to more closely mimic the brain.
With so much still unknown about how both human and artificial neural networks learn, Fedorenko says it’s hard to predict what it will take to make language models work and behave more like the human brain. One possibility they are exploring is training a model in a way that more closely mirrors the way children learn language early in life.
Another question, she says, is whether language models might behave more like humans if they had a more limited recall of their own conversations. “All of the state-of-the-art language models keep track of really, really long linguistic contexts. Humans don’t do that,” she says.
Chatbots can retain long strings of dialogue, using those words to tailor their responses as a conversation progresses, she explains. Humans, on the other hand, must cope with a more limited memory. While we can keep track of information as it is conveyed, we only store a string of about eight words as we listen or read. “We get linguistic input, we crunch it up, we extract some kind of meaning representation, presumably in some more abstract format, and then we discard the exact linguistic stream because we don’t need it anymore,” Fedorenko explains.
Language models aren’t able to fill in gaps in conversation with their own knowledge and awareness in the same way a person can, Ivanova adds. “That’s why so far they have to keep track of every single input word,” she says. “If we want a model that models specifically the [human] language network, we don’t need to have this large context window. It would be very cool to train those models on those short windows of context and see if it’s more similar to the language network.”
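A toy illustration of the short-context idea Ivanova describes, assuming simple word-level tokenization: keep only the most recent handful of words as the model's visible context before predicting the next one. The window of eight words comes from the estimate quoted above; everything else is invented for the example.

```python
# Toy illustration of a short context window: keep only the most recent words
# as the visible context before predicting the next one. The window of eight
# words follows the estimate quoted in the text; the sentence is invented.
def truncate_context(tokens: list, window: int = 8) -> list:
    """Return only the last `window` tokens as the visible context."""
    return tokens[-window:]

sentence = "we get linguistic input we crunch it up and extract some kind of meaning".split()
print(truncate_context(sentence))  # the model would see only these eight words
```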
Multimodal intelligence
Despite these parallels, Fedorenko’s lab has also shown that there are plenty of things language circuits do not do. The brain calls on other circuits to solve math problems, write computer code, and carry out myriad other cognitive processes. Their work makes it clear that in the brain, language and thought are not the same.
That’s borne out by what cognitive neuroscientists like Kanwisher have learned about the functional organization of the human brain, where circuit components are dedicated to surprisingly specific tasks, from language processing to face recognition.
“The upshot of cognitive neuroscience over the last 25 years is that the human brain really has quite a degree of modular organization,” Kanwisher says. “You can look at the brain and say, ‘what does it tell us about the nature of intelligence?’ Well, intelligence is made up of a whole bunch of things.”
In generating this image from the text prompt, “a watercolor painting of a woman looking in a mirror and seeing a robot,” DALL·E 2 incorrectly placed the woman (not the robot) in the mirror, highlighting one of the weaknesses of current deep learning models.
In January, Fedorenko, Kanwisher, Ivanova, and colleagues shared an extensive analysis of the capabilities of large language models. After assessing models’ performance on various language-related tasks, they found that despite their mastery of linguistic rules and patterns, such models don’t do a good job using language in real-world situations. From a neuroscience perspective, that kind of functional competence is distinct from formal language competence, calling on not just language-processing circuits but also parts of the brain that store knowledge of the world, reason, and interpret social interactions.
Language is a powerful tool for understanding the world, they say, but it has limits.
“If you train on language prediction alone, you can learn to mimic certain aspects of thinking,” Ivanova says. “But it’s not enough. You need a multimodal system to carry out truly intelligent behavior.”
The team concluded that while AI language models do a very good job using language, they are incomplete models of human thought. For machines to truly think like humans, Ivanova says, they will need a combination of different neural nets all working together, in the same way different networks in the human brain work together to achieve complex cognitive tasks in the real world.
It remains to be seen whether such models would excel in the tech world, but they could prove valuable for revealing insights into human cognition — perhaps in ways that will inform engineers as they strive to build systems that better replicate human intelligence.
The McGovern Institute announced today that the 2023 Edward M. Scolnick Prize in Neuroscience will be awarded to neurobiologist Yang Dan. Dan holds the Nan Fung Life Sciences Chancellor’s Chair in Neuroscience at the University of California, Berkeley, and has been a Howard Hughes Investigator since 2008. The Scolnick Prize is awarded annually by the McGovern Institute for outstanding achievements in neuroscience.
“Yang Dan’s systems-level experimentation to identify the cell types and circuits that control sleep cycles represents the highest level of neuroscience research,” says Robert Desimone, McGovern Institute director and chair of the selection committee. “Her work has defined precise mechanisms for how motor behaviors are suppressed during sleep and activated during arousal, with potential implications for the design of more targeted sedatives and the treatment of sleep disorders.”
Significance of sleep
Dan received a BS in Physics in 1988 from Peking University in China. She then moved to the US to obtain her PhD in neurobiology from Columbia University, in 1994, under the mentorship of Professor Mu-Ming Poo. Her doctoral research focused on mechanisms of plasticity at the neuromuscular synapse and was published in Science, Nature, and Neuron. During this time, she showed that the quantal release of neurotransmitters is not unique to neuronal cell types and, as one example, that retrograde signaling from muscle cells regulates the synaptic strength of the neuromuscular junction. For her postdoctoral training, Dan joined Clay Reid’s lab at The Rockefeller University and then accompanied Reid’s move to Harvard Medical School a short time later. Within just over two years, Dan had collected and analyzed neuronal recording data to support and develop key computational models of visual information coding – her two papers describing this work have been cited, together, over 900 times.
Yang Dan started her own laboratory in January 1997 when she joined the faculty of UC Berkeley’s Department of Molecular and Cell Biology as an assistant professor; she became a full professor in 2005. Dan’s lab became known for discoveries of how sensory inputs, especially visual inputs, are processed by the brain to influence behavior. Using electrophysiological recordings in model animals and computational analyses, her group worked out rules for how synaptic plasticity and neural connectivity, at the microcircuit and brain-wide level, contribute to learning and goal-directed behaviors.
Sleep recordings in various animal models and humans, shown in a research review by Yang Dan (2019 Annual Review of Neuroscience). (a) In nonmammalian animals such as jellyfish, Caenorhabditis elegans, Drosophila, and zebrafish, locomotor assay is used to measure sleep. (b) Examples of mouse EEG and EMG recordings during wakefulness and NREM and REM sleep. (c) Example polysomnography recordings from a healthy human subject during wakefulness and NREM (stage 3) and phasic REM sleep.
The Dan lab carved out a new research direction upon their discovery of mechanisms controlling rapid eye movement (REM) sleep, a state in which the brain is active and neuroplastic despite minimal sensory input. In their 2015 Nature paper, Dan’s group showed that, in mice, optogenetic activation of inhibitory neurons that project forward from the brainstem to the middle of the brain can instantaneously induce REM sleep. Since then, the Dan lab has published nearly a dozen primary research papers on the sleep-wake cycle that capitalize on the latest neural engineering techniques to record and control specific cell types and circuits in the brain. Most recently, she reported the discovery of neurons in the midbrain that receive wide-ranging inputs to coordinate active suppression of movement during REM and non-REM sleep with the release of movement during arousal. This circuit is key to the ability, known to exist in most animals, to experience sleep and even vivid dreaming without acting out. Dan’s discoveries are paving the way to a holistic understanding, from the molecular to macrocircuit levels, of how our bodies regulate sleep, an evolutionarily conserved behavior that is essential for survival.
Awards and honors
Dan was appointed as a Howard Hughes Medical Institute Investigator in 2008 and elected to the US National Academy of Sciences in 2018. She was awarded the Li Ka Shing Women in Science Award in 2007 and a Research Award for Innovation in Neuroscience from the Society for Neuroscience in 2009. She teaches summer courses at institutes around the world and has mentored 16 graduate students and 27 postdoctoral researchers, 25 of whom now run their own independent laboratories. Currently, Dan serves on the editorial boards of top-ranked science journals including Cell, Neuron, PNAS, and Current Opinion in Neurobiology.
Yang Dan will be awarded the Scolnick Prize on Wednesday, June 7, 2023. At 4:00 pm on that day, she will deliver a lecture titled “The how and why of sleep,” to be followed by a reception at the McGovern Institute, 43 Vassar Street (building 46, room 3002) in Cambridge. The event is free and open to the public.