Studies of unusual brains reveal critical insights into brain organization, function

EG (a pseudonym) is an accomplished woman in her early 60s: she is a college graduate and has an advanced professional degree. She has a stellar vocabulary—in the 98th percentile, according to tests—and has mastered a foreign language (Russian) to the point that she sometimes dreams in it.

She has also, likely since birth, been missing her left temporal lobe, a part of the brain known to be critical for language.

In 2016, EG contacted McGovern Institute Investigator Evelina Fedorenko, who studies the computations and brain regions that underlie language processing, to see if her team might be interested in including her in their research.

“EG didn’t know about her missing temporal lobe until age 25, when she had a brain scan for an unrelated reason,” says Fedorenko, the Frederick A. (1971) and Carole J. Middleton Career Development Associate Professor of Neuroscience at MIT. “As with many cases of early brain damage, she had no linguistic or cognitive deficits, but brains like hers are invaluable for understanding how cognitive functions reorganize in the tissue that remains.”

“I told her we definitely wanted to study her brain.” – Ev Fedorenko

Previous studies have shown that language processing relies on an interconnected network of frontal and temporal regions in the left hemisphere of the brain. EG’s unique brain presented an opportunity for Fedorenko’s team to explore how language develops in the absence of the temporal part of these core language regions.

Greta Tuckute, a graduate student in the Fedorenko lab, is the first author of the Neuropsychologia study. Photo: Caitlin Cunningham

Their results appeared recently in the journal Neuropsychologia. They found, for the first time, that temporal language regions appear to be critical for the emergence of frontal language regions in the same hemisphere — meaning, without a left temporal lobe, EG’s intact frontal lobe did not develop a capacity for language.

The results also reveal much more: EG’s language system resides happily in her right hemisphere. “Our findings provide both visual and statistical proof of the brain’s remarkable plasticity, its ability to reorganize, in the face of extensive early damage,” says Greta Tuckute, a graduate student in the Fedorenko lab and first author of the paper.

In an introduction to the study, EG herself puts the social implications of the findings starkly. “Please do not call my brain abnormal, that creeps me out,” she writes. “My brain is atypical. If not for accidentally finding these differences, no one would pick me out of a crowd as likely to have these, or any other differences that make me unique.”

How we process language

The frontal and temporal lobes are part of the cerebrum, the largest part of the brain. The cerebrum controls many functions, including the five senses, language, working memory, personality, movement, learning, and reasoning. It is divided into two hemispheres, the left and the right, by a deep longitudinal fissure. The two hemispheres communicate via a thick bundle of nerve fibers called the corpus callosum. Each hemisphere comprises four main lobes—frontal, parietal, temporal, and occipital. Core parts of the language network reside in the frontal and temporal lobes.

Core parts of the language network (shown in teal) reside in the left frontal and temporal lobes. Image: Ev Fedorenko

In most individuals, the language system develops in both the right and left hemispheres, with the left side dominant from an early age. The frontal lobe develops more slowly than the temporal lobe. Together, the interconnected frontal and temporal language areas enable us to understand and produce words, phrases, and sentences.

How, then, did EG, with no left temporal lobe, come to speak, comprehend, and remember verbal information (even a foreign language!) with such proficiency?

Simply put, the right hemisphere took over: “EG has a completely well-functioning neurotypical-like language system in her right hemisphere,” says Tuckute. “It is incredible that a person can use a single hemisphere—and the right hemisphere at that, which in most people is not the dominant hemisphere where language is processed—and be perfectly fine.”

Journey into EG’s brain

In the study, the researchers conducted two scans of EG’s brain using functional magnetic resonance imaging (fMRI), one in 2016 and one in 2019, and had her complete a range of behavioral tests. fMRI measures the level of blood oxygenation across the brain and can be used to make inferences about where neural activity is taking place. The researchers also scanned the brains of 151 “neurotypical” people. The large number of participants, combined with robust task paradigms and rigorous statistical analyses, made it possible to draw conclusions from a single case such as EG.
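How can conclusions be drawn from one brain compared with 151? A common approach in single-case neuropsychology is Crawford and Howell’s modified t-test, which asks whether an individual’s score is unusual relative to a control sample. The sketch below uses simulated numbers to illustrate that general logic; it is not necessarily the analysis reported in the Neuropsychologia paper.

```python
# Minimal sketch (simulated values): comparing one individual's response against a
# control group with Crawford & Howell's modified t-test. Illustrative only.
import numpy as np
from scipy import stats

def single_case_t(case_value, control_values):
    """Is one case's score unusual relative to a control sample?"""
    controls = np.asarray(control_values, dtype=float)
    n = controls.size
    mean, sd = controls.mean(), controls.std(ddof=1)
    t = (case_value - mean) / (sd * np.sqrt((n + 1) / n))
    p = 2 * stats.t.sf(abs(t), df=n - 1)  # two-tailed p-value, n-1 degrees of freedom
    return t, p

# Hypothetical example: one participant's effect size vs. 151 simulated controls
rng = np.random.default_rng(0)
controls = rng.normal(loc=1.0, scale=0.4, size=151)
t, p = single_case_t(2.1, controls)
print(f"t = {t:.2f}, p = {p:.4f}")
```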

Magnetic resonance image of EG’s brain showing missing left temporal lobe. Image: Fedorenko Lab

Fedorenko is a staunch advocate of the single-case-study approach—common in medicine but currently rare in neuroscience. “Unusual brains—and unusual individuals more broadly—can provide critical insights into brain organization and function that we simply cannot gain by looking at more typical brains,” she says. Studying individual brains with fMRI, however, requires paradigms that work robustly at the single-brain level. This is not true of most paradigms used in the field, which require averaging many brains together to obtain an effect. Developing individual-level fMRI paradigms for language research was a focus of Fedorenko’s early work, although the main reason for doing so had nothing to do with studying atypical brains: individual-level analyses are simply better—they are more sensitive and their results are more interpretable and meaningful.

“Looking at high-quality data in an individual participant versus looking at a group-level map is akin to using a high-precision microscope versus looking with a naked myopic eye, when all you see is a blur,” she wrote in an article published in Current Opinion in Behavioral Sciences in 2021. Having developed and validated such paradigms, though, Fedorenko and her group are now able to probe interesting brains.

While in the scanner, each participant performed a task that Fedorenko began developing more than a decade ago. They were presented with a series of words that form real, meaningful sentences, and with a series of “nonwords”—strings of letters that are pronounceable but without meaning. In typical brains, language areas respond more strongly when participants read sentences compared to when they read nonword sequences.
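For readers curious how such a contrast is turned into a map of language regions, the sketch below shows the basic idea with simulated data: average each voxel’s response to the two conditions and keep the voxels that respond most strongly to sentences over nonwords. The threshold and numbers are invented for illustration, not taken from the lab’s actual pipeline.

```python
# Minimal sketch (simulated data): a sentences > nonwords contrast for localizing
# language-responsive voxels in one brain. Values and threshold are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 10_000
resp_sentences = rng.normal(1.0, 0.5, n_voxels)  # mean response to sentences (a.u.)
resp_nonwords = rng.normal(0.6, 0.5, n_voxels)   # mean response to nonword strings

contrast = resp_sentences - resp_nonwords                  # sentences > nonwords
language_voxels = contrast > np.percentile(contrast, 90)   # keep top 10% of voxels

print(f"{language_voxels.sum()} putative language-responsive voxels selected")
```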

Similarly, in response to the real sentences, the language regions in EG’s right frontal and temporal lobes lit up—they were bursting with activity—while the left frontal lobe regions remained silent. In the neurotypical participants, the language regions in both the left and right frontal and temporal lobes lit up, with the left areas outshining the right.

fMRI showing EG’s language activation on the brain surface. The right frontal lobe shows robust activations, while the left frontal lobe shows no language-responsive areas. Image: Fedorenko lab

“EG showed a very strong response in the right temporal and frontal regions that process language,” says Tuckute. “And if you look at the controls, whose language dominant hemisphere is in the left, EG’s response in her right hemisphere was similar—or even higher—compared to theirs, just on the opposite side.”

Leaving no stone unturned, the researchers next asked whether the lack of language responses in EG’s left frontal lobe might be due to a general lack of response to cognitive tasks rather than just to language. So they conducted a non-language, working-memory task: they had EG and the neurotypical participants perform arithmetic addition problems while in the scanner. In typical brains, this task elicits responses in frontal and parietal areas in both hemispheres.

Not only did regions of EG’s right frontal lobe light up in response to the task, those in her left frontal lobe did, too. “Both EG’s language-dominant (right) hemisphere, and her non-language-dominant (left) hemisphere showed robust responses to this working-memory task,” says Tuckute. “So, yes, there’s definitely cognitive processing going on there. This selective lack of language responses in EG’s left frontal lobe led us to conclude that, for language, you need the temporal language region to ‘wire up’ the frontal language region.”

Next steps

In science, the answer to one question opens the door to untold more. “In EG, language took over a large chunk of the right frontal and temporal lobes,” says Fedorenko. “So what happens to the functions that in neurotypical individuals generally live in the right hemisphere?”

Many of those, she says, are social functions. The team has already tested EG on social tasks and is currently exploring how those social functions cohabit with the language ones in her right hemisphere. How can they all fit? Do some of the social functions have to migrate to other parts of the brain? They are also working with EG’s family: they have now scanned EG’s three siblings (one of whom is missing most of her right temporal lobe; the other two are neurotypical) and her father (also neurotypical).

The “Interesting Brains Project” website details current projects, findings, and ways to participate.

The project has now grown to include many other individuals with interesting brains, who contacted Fedorenko after some of this work was covered by news outlets. It promises to provide unique insights into how our plastic brains reorganize and adapt to various circumstances.

 

New collaboration aims to strengthen orthotic and prosthetic care in Sierra Leone

MIT’s K. Lisa Yang Center for Bionics has entered into a collaboration with the Government of Sierra Leone to strengthen the capabilities and services of that country’s orthotic and prosthetic (O&P) sector. Tens of thousands of people in Sierra Leone are in need of orthotic braces and artificial limbs, but access to such specialized medical care in this African nation is limited.

The agreement, reached between MIT, the Center for Bionics, and Sierra Leone’s Ministry of Health and Sanitation (MoHS), takes the form of a detailed memorandum of understanding that establishes a four-year program. The collaborators aim to strengthen Sierra Leone’s O&P sector through six key objectives: data collection and clinic operations, education, supply chain, infrastructure, new technologies, and mobile delivery of services.

Project Objectives

  1. Data Collection and Clinic Operations: collect comprehensive data on epidemiology, need, utilization, and access for O&P services across the country
  2. Education: create an inclusive education and training program for the people of Sierra Leone, to enable sustainable and independent operation of O&P services
  3. Supply Chain: establish supply chains for prosthetic and orthotic components, parts, and materials for fabrication of devices
  4. Infrastructure: prepare infrastructure (e.g., physical space, sufficient water, power and internet) to support increased production and services
  5. New Technologies: develop and translate innovative technologies with potential to improve O&P clinic operations and management, patient mobility, and the design or fabrication of devices
  6. Mobile Delivery: support outreach services and mobile delivery of care for patients in rural and difficult-to-reach areas

Working together, MIT’s bionics center and Sierra Leone’s MoHS aim to sustainably double the production and distribution of O&P services at Sierra Leone’s National Rehabilitation Centre and Bo Clinics over the next four years.

The team of MIT scientists who will be implementing this novel collaboration is led by Hugh Herr, MIT Professor of Media Arts and Sciences. Herr, himself a double amputee, serves as co-director of the K. Lisa Yang Center for Bionics, and heads the renowned Biomechatronics research group at the MIT Media Lab.

“From educational services, to supply chain, to new technology, this important MOU with the government of Sierra Leone will enable the Center to develop a broad, integrative approach to the orthotic and prosthetic sector within Sierra Leone, strengthening services and restoring much needed care to its citizens,” notes Professor Herr.

Sierra Leone’s Honorable Minister of Health Dr. Austin Demby also states: “As the Ministry of Health and Sanitation continues to galvanize efforts towards the attainment of Universal Health Coverage through the life stages approach, this collaboration will foster access, innovation and capacity building in the Orthotic and Prosthetic division. The ministry is pleased to work with and learn from MIT over the next four years in building resilient health systems, especially for vulnerable groups.”

“Our team at MIT brings together expertise across disciplines from global health systems to engineering and design,” added Francesca Riccio-Ackerman, the graduate student lead for the MIT Sierra Leone project. “This allows us to craft an innovative strategy with Sierra Leone’s Ministry of Health and Sanitation. Together we aim to improve available orthotic and prosthetic care for people with disabilities.”

The K. Lisa Yang Center for Bionics at the Massachusetts Institute of Technology pioneers transformational bionic interventions across a broad range of conditions affecting the body and mind. Based on fundamental scientific principles, the Center seeks to develop neural and mechanical interfaces for human-machine communications; integrate these interfaces into novel bionic platforms; perform clinical trials to accelerate the deployment of bionic products by the private sector; and leverage novel and durable, but affordable, materials and manufacturing processes to ensure equitable access to the latest bionic technology by all impacted individuals, especially those in developing countries. 

Sierra Leone’s Ministry of Health and Sanitation is responsible for health service delivery across the country, as well as regulation of the health sector to meet the health needs of its citizenry. 

For more information about this project, please visit: https://mitmedialab.info/prosforallproj2

 

Season’s Greetings from the McGovern Institute

This year’s holiday video (shown above) was inspired by Ev Fedorenko’s July 2022 Nature Neuroscience paper, which found similar patterns of brain activation and language selectivity across speakers of 45 different languages.

Universal language network

Ev Fedorenko uses the widely translated book “Alice in Wonderland” to test brain responses to different languages. Photo: Caitlin Cunningham

Over several decades, neuroscientists have created a well-defined map of the brain’s “language network,” or the regions of the brain that are specialized for processing language. Found primarily in the left hemisphere, this network includes regions within Broca’s area, as well as in other parts of the frontal and temporal lobes. Although roughly 7,000 languages are currently spoken and signed across the globe, the vast majority of those mapping studies have been done in English speakers as they listened to or read English texts.

To truly understand the cognitive and neural mechanisms that allow us to learn and process such diverse languages, Fedorenko and her team scanned the brains of speakers of 45 different languages while they listened to Alice in Wonderland in their native language. The results show that the speakers’ language networks appear to be essentially the same as those of native English speakers — suggesting that the location and key properties of the language network are universal.

The many languages of McGovern

English may be the primary language used by McGovern researchers, but more than 35 other languages are spoken by scientists and engineers at the McGovern Institute. Our holiday video features 30 of these researchers saying Happy New Year in their native (or learned) language. Below is the complete list of languages included in our video. Each entry includes more about the speaker of that particular language and the meaning behind their new year’s greeting.

Brains on conlangs

For a few days in November, the McGovern Institute hummed with invented languages. Strangers greeted one another in Esperanto; trivia games were played in High Valyrian; Klingon and Na’vi were heard inside MRI scanners. Creators and users of these constructed languages (conlangs) had gathered at MIT in the name of neuroscience. McGovern Institute investigator Evelina Fedorenko and her team wanted to know what happened in their brains when they heard and understood these “foreign” tongues.

The constructed languages spoken by attendees had all been created for specific purposes. Most, like the Na’vi language spoken in the movie Avatar, had given identity and voice to the inhabitants of fictional worlds, while Esperanto was created to reduce barriers to international communication. But despite their distinct origins, a familiar pattern of activity emerged when researchers scanned speakers’ brains. The brain, they found, processes constructed languages with the same network of areas it uses for languages that evolved naturally over millennia.

The meaning of language

“There’s all these things that people call language,” Fedorenko says. “Music is a kind of language and math is a kind of language.” But the brain processes these metaphorical languages differently than it does the languages humans use to communicate broadly about the world. To neuroscientists like Fedorenko, they can’t legitimately be considered languages at all. In contrast, she says, “these constructed languages seem really quite like natural languages.”

The “Brains on Conlangs” event that Fedorenko’s team hosted was part of its ongoing effort to understand the way language is generated and understood by the brain. Her lab and others have identified specific brain regions involved in linguistic processing, but it’s not yet clear how universal the language network is. Most studies of language cognition have focused on languages widely spoken in well-resourced parts of the world—primarily English, German, and Dutch. There are thousands of languages—spoken or signed—that have not been included.

Brain activation in a Klingon speaker while listening to English (left) and Klingon (right). Image: Saima Malik-Moraleda

Fedorenko and her team are deliberately taking a broader approach. “If we’re making claims about language as a whole, it’s kind of weird to make it based on a handful of languages,” she says. “So we’re trying to create tools and collect some data on as many languages as possible.”

So far, they have found that the language networks used by native speakers of dozens of different languages do share key architectural similarities. And by including a more diverse set of languages in their research, Fedorenko and her team can begin to explore how the brain makes sense of linguistic features that are not part of English or other well studied languages. The Brains on Conlangs event was a chance to expand their studies even further.

Connecting conlangs

Nearly 50 speakers of Esperanto, Klingon, High Valyrian, Dothraki, and Na’vi attended Brains on Conlangs, drawn by the opportunity to connect with other speakers, hear from language creators, and contribute to the science. Graduate student Saima Malik-Moraleda and postbac research assistant Maya Taliaferro, along with other members of both the Fedorenko lab and brain and cognitive sciences professor Ted Gibson’s lab, and with help from Steve Shannon, Operations Manager of the Martinos Imaging Center, worked tirelessly to collect data from all participants. Two MRI scanners ran nearly continuously as speakers listened to passages in their chosen languages and researchers captured images of the brain’s response. To enable the research team to find the language-specific network in each person’s brain, participants also performed other tasks inside the scanner, including a memory task and listening to muffled audio in which the constructed languages were spoken, but unintelligible. They performed language tasks in English, as well.

To understand how the brain processes constructed languages (conlangs), McGovern Investigator Ev Fedorenko (center) gathered with conlang creators/speakers Marc Okrand (Klingon), Paul Frommer (Na’vi), Damian Blasi, Jessie Sams (méníshè), David Peterson (High Valyrian and Dothraki) and Arika Okrent at the McGovern Institute for the “Brains on Conlangs” event in November 2022. Photo: Elise Malvicini

Prior to the study, Fedorenko says, she had suspected constructed languages would activate the brain’s natural language-processing network, but she couldn’t be sure. Another possibility was that languages like Klingon and Esperanto would be handled instead by a problem-solving network known to be used when people work with some other so-called “languages,” like mathematics or computer programming. But once the data was in, the answer was clear. The five constructed languages included in the study all activated the brain’s language network.

That makes sense, Fedorenko says, because like natural languages, constructed languages enable people to communicate by associating words or signs with objects and ideas. Any language is essentially a way of mapping forms to meanings, she says. “You can construe it as a set of memories of how a particular sequence of sounds corresponds to some meaning. You’re learning meanings of words and constructions, and how to put them together to get more complex meanings. And it seems like the brain’s language system is very well suited for that set of computations.”

The ways we move

This story originally appeared in the Winter 2023 issue of BrainScan.
***

Many people barely consider how their bodies move — at least not until movement becomes more difficult due to injury or disease. But the McGovern scientists who are working to understand human movement and restore it after it has been lost know that the way we move is an engineering marvel. Muscles, bones, brain, and nerves work together to navigate and interact with an ever-changing environment, making constant but often imperceptible adjustments to carry out our goals. It’s an efficient and highly adaptable system, and the way it’s put together is not at all intuitive, says Hugh Herr, a new associate investigator at the Institute.

That’s why Herr, who also co-directs MIT’s new K. Lisa Yang Center for Bionics, looks to biology to guide the development of artificial limbs that aim to give people the same agency, control, and comfort as natural limbs. McGovern Associate Investigator Nidhi Seethapathi, who like Herr joined the Institute in September, is also interested in understanding human movement in all its complexity. She is coming at the problem from a different direction, using computational modeling to predict how and why we move the way we do.

Moving through change

The computational models that Seethapathi builds in her lab aim to predict how humans will move under different conditions. If a person is placed in an unfamiliar environment and asked to navigate a course under time pressure, what path will they take? How will they move their limbs, and what forces will they exert? How will their movements change as they become more comfortable on the terrain?

McGovern Associate Investigator Nidhi Seethapathi with lab members (from left to right) Inseung Kang, Nikasha Patel, Antoine De Comite, Eric Wang, and Crista Falk. Photo: Steph Stevens

Seethapathi uses the principles of robotics to build models that answer these questions, then tests them by placing real people in the same scenarios and monitoring their movements. So far, that has mostly meant inviting study subjects to her lab, but as she expands her models to predict more complex movements, she will begin monitoring people’s activity in the real world, over longer time periods than laboratory experiments typically allow.

Seethapathi’s hope is that her findings will inform the way doctors, therapists, and engineers help patients regain control over their movements after an injury or stroke, or learn to live with movement disorders like Parkinson’s disease. To make a real difference, she stresses, it’s important to bring studies of human movement out of the lab, where subjects are often limited to simple tasks like walking on a treadmill, into more natural settings. “When we’re talking about doing physical therapy, neuromotor rehabilitation, robotic exoskeletons — any way of helping people move better — we want to do it in the real world, for everyday, complex tasks,” she says.

“When we’re talking about helping people move better, we want to do it in the real world, for everyday, complex tasks,” says Seethapathi.

Seethapathi’s work is already revealing how the brain directs movement in the face of competing priorities. For example, she has found that when people are given a time constraint for traveling a particular distance, they walk faster than their usual, comfortable pace — so much so that they often expend more energy than necessary and arrive at their destination a bit early. Her models suggest that people pick up their pace more than they need to because humans’ internal estimations of time are imprecise.
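A standard textbook model of walking energetics (not necessarily the specific model used in Seethapathi’s lab) makes the trade-off concrete: if metabolic power grows with the square of walking speed, the energy spent per unit distance has a single minimum at one comfortable speed, as the short derivation below shows.

```latex
% Illustrative cost-of-transport model: a is a fixed metabolic rate, b a speed cost.
\dot{E}(v) = a + b\,v^{2}
\quad\Rightarrow\quad
\frac{E}{d}(v) = \frac{a}{v} + b\,v,
\qquad
\frac{d}{dv}\!\left(\frac{E}{d}\right) = -\frac{a}{v^{2}} + b = 0
\;\Rightarrow\;
v^{*} = \sqrt{a/b}.
```

Walking faster than this optimum to beat a deadline costs extra energy for every meter traveled, consistent with the observation that time-pressured walkers overshoot the required pace and arrive early.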

Her team is also learning how movements change as a person becomes familiar with an environment or task. She says that, with enough practice, people find an efficient way to move. “If you’re walking in a straight line for a very long time, then you seem to pick the movement that is optimal for that long-distance walk,” she explains. But in the real world, things are always changing — both in the body and in the environment. So Seethapathi models how people behave when they must move in a new way or navigate a new environment. “In these kinds of conditions, people eventually wind up on an energy-optimal solution,” she says. “But initially, they pick something that prevents them from falling down.”

To capture the complexity of human movement, Seethapathi and her team are devising new tools that will let them monitor people’s movements outside the lab. They are also drawing on data from other fields, from architecture to physical therapy, and even from studies of other animals. “If I have general principles, they should be able to tell me how modifications in the body or in how the brain is connected to the body would lead to different movements,” she says. “I’m really excited about generalizing these principles across timescales and species.”

Building new bodies

In Herr’s lab, a deepening understanding of human movement is helping drive the development of increasingly sophisticated artificial limbs and other wearable robots. The team designs devices that interface directly with a user’s nervous system, so they are not only guided by the brain’s motor control systems, but also send information back to the brain.

Herr, a double amputee with two artificial legs of his own, says prosthetic devices are getting better at replicating natural movements, guided by signals from the brain. Mimicking the design and neural signals found in biology can even give those devices much of the extraordinary adaptability of natural human movement. As an example, Herr notes that his legs effortlessly navigate varied terrain. “There’s adaptive, stabilizing features, and the machine doesn’t have to detect every pothole and pebble and banana peel on the ground, because the morphology and the nervous system control is so inherently adaptive,” he says.

McGovern Associate Investigator Hugh Herr at work in the K. Lisa Yang Center for Bionics at MIT. Photo: Jimmy Day/Media Lab

But, he notes, the field of bionics is in its infancy, and there’s lots of room for improvement. “It’s only a matter of time before a robotic knee, for example, can be as good as the biological knee or better,” he says. “But the problem is the human attached to that knee won’t feel it’s their knee until they can feel it, and until their central nervous system has complete agency over that knee. So if you want to actually build new bodies and not just more and more powerful tools for humans, you have to link to the brain bidirectionally.”

Herr’s team has found that surgically restoring natural connections between pairs of muscles that normally work in opposition to move a limb, such as the arm’s biceps and triceps, gives the central nervous system signals about how that limb is moving, even when a natural limb is gone. The idea takes a cue from the work of McGovern Emeritus Investigator Emilio Bizzi, who found that the coordinated activation of groups of muscles by the nervous system, called muscle synergies, is important for motor control.

“It’s only a matter of time before a robotic knee can be as good as the biological knee or better,” says Herr.

“When a person thinks and moves their phantom limb, those muscle pairings move dynamically, so they feel, in a natural way, the limb moving — even though the limb is not there,” Herr explains. He adds that when those proprioceptive signals communicate instead how an artificial limb is moving, a person experiences “great agency and ownership” of that limb. Now, his group is working to develop sensors that detect and relay information usually processed by sensory neurons in the skin, so prosthetic devices can also perceive pressure and touch.

At the same time, they’re working to improve the mechanical interface between wearable robots and the body to optimize comfort and fit — whether that’s by using detailed anatomical imaging to guide the design of an individual’s device or by engineering devices that integrate directly with a person’s skeleton. There’s no “average” human, Herr says, and effective technologies must meet individual needs, not just for fit, but also for function. He also stresses the importance of planning for cost-effective mass production, because the need for these technologies is so great.

“The amount of human suffering caused by the lack of technology to address disability is really beyond comprehension,” he says. He expects tremendous progress in the growing field of bionics in the coming decades, but he’s impatient. “I think in 50 years, when scientists look back to this era, it’ll be laughable,” he says. “I’m always anxiously wanting to be in the future.”

Machine learning can predict bipolar disorder in children and teens

Bipolar disorder often begins in childhood or adolescence, triggering dramatic mood shifts and intense emotions that cause problems at home and school. But the condition is often overlooked or misdiagnosed until patients are older. New research suggests that machine learning, a type of artificial intelligence, could help by identifying children who are at risk of bipolar disorder so doctors are better prepared to recognize the condition if it develops.

On October 13, 2022, researchers led by McGovern Institute investigator John Gabrieli and collaborators at Massachusetts General Hospital reported in the Journal of Psychiatric Research that when presented with clinical data on nearly 500 children and teenagers, a machine learning model was able to identify about 75 percent of those who were later diagnosed with bipolar disorder. The approach performs better than any other method of predicting bipolar disorder, and could be used to develop a simple risk calculator for health care providers.

Gabrieli says such a tool would be particularly valuable because bipolar disorder is less common in children than conditions like major depression, with which it shares symptoms, and attention-deficit/ hyperactivity disorder (ADHD), with which it often co-occurs. “Humans are not well tuned to watch out for rare events,” he says. “If you have a decent measure, it’s so much easier for a machine to identify than humans. And in this particular case, [the machine learning prediction] was surprisingly robust.”

Detecting bipolar disorder

Mai Uchida, Director of Massachusetts General Hospital’s Child Depression Program, says that nearly two percent of youth worldwide are estimated to have bipolar disorder, but diagnosing pediatric bipolar disorder can be challenging. A certain amount of emotional turmoil is to be expected in children and teenagers, and even when moods become seriously disruptive, children with bipolar disorder are often initially diagnosed with major depression or ADHD. That’s a problem, because the medications used to treat those conditions often worsen the symptoms of bipolar disorder. Tailoring treatment to a diagnosis of bipolar disorder, in contrast, can lead to significant improvements for patients and their families. “When we can give them a little bit of ease and give them a little bit of control over themselves, it really goes a long way,” Uchida says.

In fact, a poor response to antidepressants or ADHD medications can help point a psychiatrist toward a diagnosis of bipolar disorder. So too can a child’s family history, in addition to their own behavior and psychiatric history. But, Uchida says, “it’s kind of up to the individual clinician to pick up on these things.”

Uchida and Gabrieli wondered whether machine learning, which can find patterns in large, complex datasets, could focus in on the most relevant features to identify individuals with bipolar disorder. To find out, they turned to data from a study that began in the 1990s. The study, headed by Joseph Biederman, Chief of the Clinical and Research Programs in Pediatric Psychopharmacology and Adult ADHD at Massachusetts General Hospital, had collected extensive psychiatric assessments of hundreds of children with and without ADHD, then followed those individuals for ten years.

To explore whether machine learning could find predictors of bipolar disorder within that data, Gabrieli, Uchida, and colleagues focused on 492 children and teenagers without ADHD, who were recruited to the study as controls. Over the ten years of the study, 45 of those individuals developed bipolar disorder.

Within the data collected at the study’s outset, the machine learning model was able to find patterns associated with a later diagnosis of bipolar disorder. A few behavioral measures turned out to be particularly relevant to the model’s predictions: children and teens with combined problems with attention, aggression, and anxiety were most likely to later be diagnosed with bipolar disorder. These indicators were all picked up by a standard assessment tool called the Child Behavior Checklist.
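To make the approach concrete, here is a minimal, hypothetical sketch of this kind of prediction problem: a classifier trained on baseline behavioral scores (stand-ins for the attention, aggression, and anxiety scales mentioned above) to flag children later diagnosed with bipolar disorder. The data are simulated and the model choice is illustrative; the study’s actual pipeline may differ.

```python
# Hypothetical sketch: predicting a rare later diagnosis from baseline behavioral
# scores. Simulated data; not the study's actual model or dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 492                                   # children/teens followed for ten years
X = rng.normal(size=(n, 3))               # columns: attention, aggression, anxiety
risk = X.sum(axis=1) + rng.normal(scale=2.0, size=n)
y = (risk > np.percentile(risk, 91)).astype(int)   # ~9% later diagnosed (45/492)

model = LogisticRegression(class_weight="balanced")  # account for the rare positive class
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"Cross-validated AUC: {auc.mean():.2f}")
```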

Uchida and Gabrieli say the machine learning model could be integrated into the medical record system to help pediatricians and child psychiatrists catch early warning signs of bipolar disorder. “The information that’s collected could alert a clinician to the possibility of a bipolar disorder developing,” Uchida says. “Then at least they’re aware of the risk, and they may be able to maybe pick up on some of the deterioration when it’s happening and think about either referring them or treating it themselves.”

How touch dampens the brain’s response to painful stimuli

McGovern Investigator Fan Wang. Photo: Caitlin Cunningham

When we press our temples to soothe an aching head or rub an elbow after an unexpected blow, it often brings some relief. Pain-responsive cells in the brain quiet down when these neurons also receive touch inputs, say scientists at MIT’s McGovern Institute, who for the first time have watched this phenomenon play out in the brains of mice.

The team’s discovery, reported November 16, 2022, in the journal Science Advances, offers researchers a deeper understanding of the complicated relationship between pain and touch and could offer some insights into chronic pain in humans. “We’re interested in this because it’s a common human experience,” says McGovern Investigator Fan Wang. “When some part of your body hurts, you rub it, right? We know touch can alleviate pain in this way.” But, she says, the phenomenon has been very difficult for neuroscientists to study.

Modeling pain relief

Touch-mediated pain relief may begin in the spinal cord, where prior studies have found pain-responsive neurons whose signals are dampened in response to touch. But there have been hints that the brain was involved too. Wang says this aspect of the response has been largely unexplored, because it can be hard to monitor the brain’s response to painful stimuli amidst all the other neural activity happening there—particularly when an animal moves.

So while her team knew that mice respond to a potentially painful stimulus on the cheek by wiping their faces with their paws, they couldn’t follow the specific pain response in the animals’ brains to see if that rubbing helped settle it down. “If you look at the brain when an animal is rubbing the face, movement and touch signals completely overwhelm any possible pain signal,” Wang explains.

She and her colleagues have found a way around this obstacle. Instead of studying the effects of face-rubbing, they have focused their attention on a subtler form of touch: the gentle vibrations produced by the movement of the animals’ whiskers. Mice use their whiskers to explore, moving them back and forth in a rhythmic motion known as whisking to feel out their environment. This motion activates touch receptors in the face and sends information to the brain in the form of vibrotactile signals. The human brain receives the same kind of touch signals when a person shakes their hand as they pull it back from a painfully hot pan—another way we seek touch-mediated pain relief.

“If you look at the brain when an animal is rubbing the face, movement and touch signals completely overwhelm any possible pain signal,” says Wang.

Wang and her colleagues found that this whisker movement alters the way mice respond to bothersome heat or a poke on the face—both of which usually lead to face rubbing. “When the unpleasant stimuli were applied in the presence of their self-generated vibrotactile whisking…they respond much less,” she says. Sometimes, she says, whisking animals entirely ignore these painful stimuli.

In the brain’s somatosensory cortex, where touch and pain signals are processed, the team found signaling changes that seem to underlie this effect. “The cells that preferentially respond to heat and poking are less frequently activated when the mice are whisking,” Wang says. “They’re less likely to show responses to painful stimuli.” Even when whisking animals did rub their faces in response to painful stimuli, the team found that neurons in the brain took more time to adopt the firing patterns associated with that rubbing movement. “When there is a pain stimulation, usually the trajectory of the population dynamics quickly moved to wiping. But if you already have whisking, that takes much longer,” Wang says.

Wang notes that even in the fraction of a second before provoked mice begin rubbing their faces, when the animals are relatively still, it can be difficult to sort out which brain signals are related to perceiving heat and poking and which are involved in whisker movement. Her team developed computational tools to disentangle these signals, and is hoping other neuroscientists will use the new algorithms to make sense of their own data.

Whisking’s effects on pain signaling seem to depend on dedicated touch-processing circuitry that sends tactile information to the somatosensory cortex from a brain region called the ventral posterior thalamus. When the researchers blocked that pathway, whisking no longer dampened the animals’ response to painful stimuli. Now, Wang says, she and her team are eager to learn how this circuitry works with other parts of the brain to modulate the perception and response to painful stimuli.

Wang says the new findings might shed light on a condition called thalamic pain syndrome, a chronic pain disorder that can develop in patients after a stroke that affects the brain’s thalamus. “Such strokes may impair the functions of thalamic circuits that normally relay pure touch signals and dampen painful signals to the cortex,” she says.

RNA-activated protein cutter protects bacteria from infection

Our growing understanding of the ways bacteria defend themselves against viruses continues to change the way scientists work and offer new opportunities to improve human health. Ancient immune systems known as CRISPR systems have already been widely adopted as powerful genome editing tools, and the CRISPR toolkit is continuing to expand. Now, scientists at MIT’s McGovern Institute have uncovered an unexpected and potentially useful tool that some bacteria use to respond to infection: an RNA-activated protein-cutting enzyme.

McGovern Fellows Jonathan Gootenberg and Omar Abudayyeh in their lab. Photo: Caitlin Cunningham

The enzyme is part of a CRISPR system discovered last year by McGovern Fellows Omar Abudayyeh and Jonathan Gootenberg. The system, found in bacteria from Tokyo Bay, originally caught their interest because of the precision with which its RNA-activated enzyme cuts RNA. That enzyme, Cas7-11, is considered a promising tool for editing RNA for both research and potential therapeutics. Now, the same researchers have taken a closer look at this bacterial immune system and found that once Cas7-11 has been activated by the right RNA, it also turns on an enzyme that snips apart a particular bacterial protein.

That makes the Cas7-11 system notably more complex than better-studied CRISPR systems, which protect bacteria simply by chopping up the genetic material of an invading virus. “This is a much more elegant and complex signaling mechanism to really defend the bacteria,” Abudayyeh says. A team led by Abudayyeh, Gootenberg, and collaborator Hiroshi Nishimasu at the University of Tokyo report these findings in the November 3, 2022, issue of the journal Science.

Protease programming

The team’s experiments reveal that in bacteria, activation of the protein-cutting enzyme, known as a protease, triggers a series of events that ultimately slow the organism’s growth. But the components of the CRISPR system can be engineered to achieve different outcomes. Gootenberg and Abudayyeh have already programmed the RNA-activated protease to report on the presence of specific RNAs in mammalian cells. With further adaptations, they say it might one day be used to diagnose or treat disease.

The discovery grew out of the researchers’ curiosity about how bacteria protect themselves from infection using Cas7-11. They knew that the enzyme was capable of cutting viral RNA, but there were hints that something more might be going on. They wondered whether a set of genes that clustered near the Cas7-11 gene might also be involved in the bacteria’s infection response, and when graduate students Cian Schmitt-Ulms and Kaiyi Jiang began experimenting with those proteins, they discovered that they worked with Cas7-11 to execute a surprisingly elaborate response to a target RNA.

One of those proteins was the protease Csx29. In the team’s test tube experiments, neither Csx29 nor Cas7-11 could cut anything on its own—but in the presence of a target RNA, Cas7-11 switched the protease on. Even then, when the researchers mixed the protease with Cas7-11 and its RNA target and allowed them to mingle with other proteins, most of the proteins remained intact. But one, a protein called Csx30, was reliably snipped apart by the protein-cutting enzyme.

Their experiments had uncovered an enzyme that cut a specific protein, but only in the presence of its particular target RNA. It was unusual—and potentially useful. “That was when we knew we were onto something,” Abudayyeh says.

As the team continued to explore the system, they found that Csx29’s RNA-activated cut frees a fragment of Csx30 that then works with other bacterial proteins to execute a key aspect of the bacteria’s response to infection—slowing down growth. “Our growth experiments suggest that the cleavage is modulating the bacteria’s stress response in some way,” Gootenberg says.

The scientists quickly recognized that this RNA-activated protease could have uses beyond its natural role in antiviral defense. They have shown that the system can be adapted so that when the protease cuts Csx30 in the presence of its target RNA, it generates an easy-to-detect fluorescent signal. Because Cas7-11 can be directed to recognize any target RNA, researchers can program the system to detect and report on any RNA of interest. And even though the original system evolved in bacteria, this RNA sensor works well in mammalian cells.

Gootenberg and Abudayyeh say understanding this surprisingly elaborate CRISPR system opens new possibilities by adding to scientists’ growing toolkit of RNA-guided enzymes. “We’re excited to see how people use these tools and how they innovate on them,” Gootenberg says. It’s easy to imagine both diagnostic and therapeutic applications, they say. For example, an RNA sensor could detect signatures of disease in patient samples or limit delivery of a potential therapy to specific types of cells, enabling that drug to carry out its work without side effects.

In addition to Gootenberg, Abudayyeh, Schmitt-Ulms, and Jiang, Abudayyeh-Gootenberg lab postdoc Nathan Wenyuan Zhou contributed to the project. This work was supported by NIH grants 1R21-AI149694, R01-EB031957, and R56-HG011857, the McGovern Institute Neurotechnology (MINT) program, the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics in Neuroscience, the G. Harold & Leila Y. Mathers Charitable Foundation, the MIT John W. Jarve (1978) Seed Fund for Science Innovation, the Cystic Fibrosis Foundation, Google Ventures, Impetus Grants, the NHGRI/TDCC Opportunity Fund, and the McGovern Institute.

RNA-sensing system controls protein expression in cells based on specific cell states

Researchers at the Broad Institute of MIT and Harvard and the McGovern Institute for Brain Research at MIT have developed a system that can detect a particular RNA sequence in live cells and produce a protein of interest in response. Using the technology, the team showed how they could identify specific cell types, detect and measure changes in the expression of individual genes, track transcriptional states, and control the production of proteins encoded by synthetic mRNA.

The platform, called Reprogrammable ADAR Sensors, or RADARS, even allowed the team to target and kill a specific cell type. The team said RADARS could one day help researchers detect and selectively kill tumor cells, or edit the genome in specific cells. The study appears today in Nature Biotechnology and was led by co-first authors Kaiyi Jiang (MIT), Jeremy Koob (Broad), Xi Chen (Broad), Rohan Krajeski (MIT), and Yifan Zhang (Broad).

“One of the revolutions in genomics has been the ability to sequence the transcriptomes of cells,” said Fei Chen, a core institute member at the Broad, Merkin Fellow, assistant professor at Harvard University, and co-corresponding author on the study. “That has really allowed us to learn about cell types and states. But, often, we haven’t been able to manipulate those cells specifically. RADARS is a big step in that direction.”

“Right now, the tools that we have to leverage cell markers are hard to develop and engineer,” added Omar Abudayyeh, a McGovern Institute Fellow and co-corresponding author on the study. “We really wanted to make a programmable way of sensing and responding to a cell state.”

Jonathan Gootenberg, who is also a McGovern Institute Fellow and co-corresponding author, says that their team was eager to build a tool to take advantage of all the data provided by single-cell RNA sequencing, which has revealed a vast array of cell types and cell states in the body.

“We wanted to ask how we could manipulate cellular identities in a way that was as easy as editing the genome with CRISPR,” he said. “And we’re excited to see what the field does with it.” 

Study authors (from left to right) Omar Abudayyeh, Jonathan Gootenberg, and Fei Chen. Photo: Namrita Sengupta

Repurposing RNA editing

The RADARS platform generates a desired protein when it detects a specific RNA by taking advantage of RNA editing that occurs naturally in cells.

The system consists of an RNA containing two components: a guide region, which binds to the target RNA sequence that scientists want to sense in cells, and a payload region, which encodes the protein of interest, such as a fluorescent signal or a cell-killing enzyme. When the guide RNA binds to the target RNA, this generates a short double-stranded RNA sequence containing a mismatch between two bases in the sequence — adenosine (A) and cytosine (C). This mismatch attracts a naturally occurring family of RNA-editing proteins called adenosine deaminases acting on RNA (ADARs).

In RADARS, the A-C mismatch appears within a “stop signal” in the guide RNA, which prevents the production of the desired payload protein. The ADARs edit and inactivate the stop signal, allowing for the translation of that protein. The order of these molecular events is key to RADARS’s function as a sensor; the protein of interest is produced only after the guide RNA binds to the target RNA and the ADARs disable the stop signal.
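As a conceptual illustration of that ordering, the toy sketch below captures the logic in a few lines: the payload can appear only after the guide has bound its target and ADAR editing has converted the UAG stop signal into a readable codon. The function and sequences are invented for illustration; real RADARS designs involve full RNA sequences and cellular machinery.

```python
# Toy sketch of the RADARS ordering of events. Invented for illustration only.
def radars_payload_expressed(guide_binds_target: bool, stop_codon: str = "UAG") -> bool:
    """Payload is translated only if binding enables ADAR editing of the stop signal."""
    if not guide_binds_target:
        return False                          # no duplex, no A-C mismatch, no editing
    edited = stop_codon.replace("A", "G", 1)  # ADAR: A -> inosine, read as G
    return edited != "UAG"                    # UAG -> UGG lets the ribosome read through

print(radars_payload_expressed(False))  # False: target RNA absent, payload stays off
print(radars_payload_expressed(True))   # True: target bound, stop signal edited away
```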

The team tested RADARS in different cell types and with different target sequences and protein products. They found that RADARS distinguished between kidney, uterine, and liver cells, and could produce different fluorescent signals as well as a caspase, an enzyme that kills cells. RADARS also measured gene expression over a large dynamic range, demonstrating their utility as sensors.

Most systems successfully detected target sequences using the cell’s native ADAR proteins, but the team found that supplementing the cells with additional ADAR proteins increased the strength of the signal. Abudayyeh says both of these cases are potentially useful; taking advantage of the cell’s native editing proteins would minimize the chance of off-target editing in therapeutic applications, but supplementing them could help produce stronger effects when RADARS are used as a research tool in the lab.

On the radar

Abudayyeh, Chen, and Gootenberg say that because both the guide RNA and payload RNA are modifiable, others can easily redesign RADARS to target different cell types and produce different signals or payloads. They also engineered more complex RADARS, in which cells produced a protein if they sensed two RNA sequences and another if they sensed either one RNA or another. The team adds that similar RADARS could help scientists detect more than one cell type at the same time, as well as complex cell states that can’t be defined by a single RNA transcript.

Ultimately, the researchers hope to develop a set of design rules so that others can more easily develop RADARS for their own experiments. They suggest other scientists could use RADARS to manipulate immune cell states, track neuronal activity in response to stimuli, or deliver therapeutic mRNA to specific tissues.

“We think this is a really interesting paradigm for controlling gene expression,” said Chen. “We can’t even anticipate what the best applications will be. That really comes from the combination of people with interesting biology and the tools you develop.”

This work was supported by the McGovern Institute Neurotechnology (MINT) program, the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics in Neuroscience, the G. Harold & Leila Y. Mathers Charitable Foundation, Massachusetts Institute of Technology, Impetus Grants, the Cystic Fibrosis Foundation, Google Ventures, FastGrants, the McGovern Institute, National Institutes of Health, the Burroughs Wellcome Fund, the Searle Scholars Foundation, the Harvard Stem Cell Institute, and the Merkin Institute.

Personal pursuits

This story originally appeared in the Fall 2022 issue of BrainScan.

***

Many neuroscientists were drawn to their careers out of curiosity and wonder. Their deep desire to understand how the brain works drew them into the lab and keeps them coming back, digging deeper and exploring more each day. But for some, the work is more personal.

Several McGovern faculty say they entered their field because someone in their lives was dealing with a brain disorder that they wanted to better understand. They are committed to unraveling the basic biology of those conditions, knowing that knowledge is essential to guide the development of better treatments.

The distance from basic research to clinical progress is shortening, and many young neuroscientists hope not just to deepen scientific understanding of the brain, but to have direct impact on the lives of patients. Some want to know why people they love are suffering from neurological disorders or mental illness; others seek to understand the ways in which their own brains work differently than others. But above all, they want better treatments for people affected by such disorders.

Seeking answers

That’s true for Kian Caplan, a graduate student in MIT’s Department of Brain and Cognitive Sciences who was diagnosed with Tourette syndrome around age 13. At the time, learning that the repetitive, uncontrollable movements and vocal tics he had been making for most of his life were caused by a neurological disorder was something of a relief. But it didn’t take long for Caplan to realize his diagnosis came with few answers.

Graduate student Kian Caplan studies the brain circuits associated with Tourette syndrome and obsessive-compulsive disorder in Guoping Feng and Fan Wang’s labs at the McGovern Institute. Photo: Steph Stevens

Tourette syndrome has been estimated to occur in about six of every 1,000 children, but its neurobiology remains poorly understood.

“The doctors couldn’t really explain why I can’t control the movements and sounds I make,” he says. “They couldn’t really explain why my symptoms wax and wane, or why the tics I have aren’t always the same.”

That lack of understanding is not just frustrating for curious kids like Caplan. It means that researchers have been unable to develop treatments that target the root cause of Tourette syndrome. Drugs that dampen signaling in parts of the brain that control movement can help suppress tics, but not without significant side effects. Caplan has tried those drugs. For him, he says, “they’re not worth the suppression.”

Advised by Fan Wang and McGovern Associate Director Guoping Feng, Caplan is looking for answers. A mouse model of obsessive-compulsive disorder developed in Feng’s lab was recently found to exhibit repetitive movements similar to those of people with Tourette syndrome, and Caplan is working to characterize those tic-like movements. He will use the mouse model to examine the brain circuits underlying the two conditions, which often co-occur in people. Broadly, researchers think Tourette syndrome arises due to dysregulation of cortico-striatal-thalamo-cortical circuits, which connect distant parts of the brain to control movement. Caplan and Wang suspect that the brainstem — a structure found where the brain connects to the spinal cord, known for organizing motor movement into different modules — is probably involved, too.

Wang’s research group studies the brainstem’s role in movement, but she says that like most researchers, she hadn’t considered its role in Tourette syndrome until Caplan joined her lab. That’s one reason Caplan, who has long been a mentor and advocate for students with neurodevelopmental disorders, thinks neuroscience needs more neurodiversity.

“I think we need more representation in basic science research by the people who actually live with those conditions,” he says. Their experiences can lead to insights that may be inaccessible to others, he says, but significant barriers in academia often prevent this kind of representation. Caplan wants to see institutions make systemic changes to ensure that neurodiverse and otherwise minority individuals are able to thrive in academia. “I’m not an exception,” he says, “there should be more people like me here, but the present system makes that incredibly difficult.”

Overcoming adversity

Like Caplan, Lace Riggs faced significant challenges in her pursuit to study the brain. She grew up in Southern California’s Inland Empire, where issues of social disparity, chronic stress, drug addiction, and mental illness were a part of everyday life.

Postdoctoral fellow Lace Riggs studies the origins of neurodevelopmental conditions in Guoping Feng’s lab at the McGovern Institute. Photo: Lace Riggs

“Living in severe poverty and relying on government assistance without access to adequate education and resources led everyone I know and love to suffer tremendously, myself included,” says Riggs, a postdoctoral fellow in the Feng lab.

“There are not a lot of people like me who make it to this stage,” says Riggs, who has lost friends and family members to addiction, mental illness, and suicide. “There’s a reason for that,” she adds. “It’s really, really difficult to get through the educational system and to overcome socioeconomic barriers.”

Today, Riggs is investigating the origins of neurodevelopmental conditions, hoping to pave the way to better treatments for brain disorders by uncovering the molecular changes that alter the structure and function of neural circuits.

Riggs says that the adversities she faced early in life offered valuable insights in the pursuit of these goals. She first became interested in the brain because she wanted to understand how our experiences have a lasting impact on who we are — including in ways that leave people vulnerable to psychiatric problems.

“While the need for more effective treatments led me to become interested in psychiatry, my fascination with the brain’s unique ability to adapt is what led me to neuroscience,” says Riggs.

After finishing high school, Riggs attended California State University in San Bernardino and became the only member of her family to attend university or attempt a four-year degree. Today, she spends her days working with mice that carry mutations linked to autism or ADHD in humans, studying the animals’ behavior and monitoring their neural activity. She expects that aberrant neural circuit activity in these conditions may also contribute to mood disorders, whose origins are harder to tease apart because they often arise when genetic and environmental factors intersect. Ultimately, Riggs says, she wants to understand how our genes dictate whether an experience will alter neural signaling and impact mental health in a long-lasting way.

Riggs uses patch clamp electrophysiology to record the strength of inhibitory and excitatory synaptic input onto individual neurons (white arrow) in an animal model of autism. Image: Lace Riggs

“If we understand how these long-lasting synaptic changes come about, then we might be able to leverage these mechanisms to develop new and more effective treatments.”

While the turmoil of her childhood is in the past, Riggs says it is not forgotten — in part, because of its lasting effects on her own mental health.  She talks openly about her ongoing struggle with social anxiety and complex post-traumatic stress disorder because she is passionate about dismantling the stigma surrounding these conditions. “It’s something I have to deal with every day,” Riggs says. That means coping with symptoms like difficulty concentrating, hypervigilance, and heightened sensitivity to stress. “It’s like a constant hum in the background of my life, it never stops,” she says.

“I urge all of us to strive, not only to make scientific discoveries to move the field forward,” says Riggs, “but to improve the accessibility of this career to those whose lived experiences are required to truly accomplish that goal.”