Not every reader’s struggle is the same

Many children struggle to learn to read, and studies have shown that students from a lower socioeconomic status (SES) background are more likely to have difficulty than those from a higher SES background.

MIT neuroscientists have now discovered that the types of difficulties that lower-SES students have with reading, and the underlying brain signatures, are, on average, different from those of higher-SES students who struggle with reading.

In a new study, which included brain scans of more than 150 children as they performed tasks related to reading, researchers found that when students from higher SES backgrounds struggled with reading, it could usually be explained by differences in their ability to piece sounds together into words, a skill known as phonological processing.

However, when students from lower SES backgrounds struggled, it was best explained by differences in their ability to rapidly name words or letters, a task associated with orthographic processing, or visual interpretation of words and letters. This pattern was further confirmed by brain activation during phonological and orthographic processing.

These differences suggest that different types of interventions may be needed for different groups of children, the researchers say. The study also highlights the importance of including a wide range of SES levels in studies of reading or other types of academic learning.

“Within the neuroscience realm, we tend to rely on convenience samples of participants, so a lot of our understanding of the neuroscience components of reading in general, and reading disabilities in particular, tends to be based on higher-SES families,” says Rachel Romeo, a former graduate student in the Harvard-MIT Program in Health Sciences and Technology and the lead author of the study. “If we only look at these nonrepresentative samples, we can come away with a relatively biased view of how the brain works.”

Romeo is now an assistant professor in the Department of Human Development and Quantitative Methodology at the University of Maryland. John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology and a professor of brain and cognitive sciences at MIT, is the senior author of the paper, which appears today in the journal Developmental Cognitive Neuroscience.

Components of reading

For many years, researchers have known that children’s scores on standardized assessments of reading are correlated with socioeconomic factors such as school spending per student or the number of children at the school who qualify for free or reduced-price lunches.

Studies of children who struggle with reading, mostly done in higher-SES environments, have shown that the aspect of reading they struggle with most is phonological awareness: the understanding of how sounds combine to make a word, and how sounds can be split up and swapped in or out to make new words.

“That’s a key component of reading, and difficulty with phonological processing is often one of the hallmarks of dyslexia or other reading disorders,” Romeo says.

In the new study, the MIT team wanted to explore how SES might affect phonological processing as well as another key aspect of reading, orthographic processing. This relates more to the visual components of reading, including the ability to identify letters and read words.

To do the study, the researchers recruited first- and second-grade students from the Boston area, making an effort to include a range of SES levels. For the purposes of this study, SES was assessed by parents’ total years of formal education, a commonly used measure of family SES.

“We went into this not necessarily with any hypothesis about how SES might relate to the two types of processing, but just trying to understand whether SES might be impacting one or the other more, or if it affects both types the same,” Romeo says.

The researchers first gave each child a series of standardized tests designed to measure either phonological processing or orthographic processing. Then, they performed fMRI scans of each child while they carried out additional phonological or orthographic tasks.

The initial series of tests allowed the researchers to determine each child’s abilities for both types of processing, and the brain scans allowed them to measure brain activity in parts of the brain linked with each type of processing.

The results showed that at the higher end of the SES spectrum, differences in phonological processing ability accounted for most of the differences between good readers and struggling readers. This is consistent with the findings of previous studies of reading difficulty. In those children, the researchers also found greater differences in activity in the parts of the brain responsible for phonological processing.

However, the outcomes were different when the researchers analyzed the lower end of the SES spectrum. There, the researchers found that variance in orthographic processing ability accounted for most of the differences between good readers and struggling readers. MRI scans of these children revealed greater differences in brain activity in parts of the brain that are involved in orthographic processing.
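The kind of question the analysis asks, which skill accounts for more of the variance in reading outcomes within a group, can be illustrated with a toy sketch. This is our illustration, not the study's analysis, and the data below are simulated, not from the study.

```python
import numpy as np

def r_squared(x, y):
    """Fraction of variance in y explained by a linear fit on x."""
    r = np.corrcoef(x, y)[0, 1]
    return r ** 2

rng = np.random.default_rng(1)
phon = rng.normal(size=200)   # simulated phonological-processing scores
orth = rng.normal(size=200)   # simulated orthographic-processing scores

# In this made-up group, reading ability depends mostly on phonology.
reading = 0.8 * phon + 0.2 * orth + rng.normal(scale=0.5, size=200)

# Phonological skill then explains more of the variance in reading.
assert r_squared(phon, reading) > r_squared(orth, reading)
```

The study's finding is that which predictor dominates this comparison differs, on average, between higher-SES and lower-SES groups.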

Optimizing interventions

There are many possible reasons why a lower SES background might lead to difficulties in orthographic processing, the researchers say. These might include less exposure to books at home, or limited access to libraries and other resources that promote literacy. For children from this background who struggle with reading, different types of interventions might benefit them more than the ones typically used for children who have difficulty with phonological processing.

In a 2017 study, Gabrieli, Romeo, and others found that a summer reading intervention that focused on helping students develop the sensory and cognitive processing necessary for reading was more beneficial for students from lower-SES backgrounds than for children from higher-SES backgrounds. Those findings also support the idea that tailored interventions may be necessary for individual students, they say.

“There are two major reasons we understand that cause children to struggle as they learn to read in these early grades. One of them is learning differences, most prominently dyslexia, and the other one is socioeconomic disadvantage,” Gabrieli says. “In my mind, schools have to help all these kinds of kids become the best readers they can, so recognizing the source or sources of reading difficulty ought to inform practices and policies that are sensitive to these differences and optimize supportive interventions.”

Gabrieli and Romeo are now working with researchers at the Harvard University Graduate School of Education to evaluate language and reading interventions that could better prepare preschool children from lower SES backgrounds to learn to read. In her new lab at the University of Maryland, Romeo also plans to further delve into how different aspects of low SES contribute to different areas of language and literacy development.

“No matter why a child is struggling with reading, they need the education and the attention to support them. Studies that try to tease out the underlying factors can help us in tailoring educational interventions to what a child needs,” she says.

The research was funded by the Ellison Medical Foundation, the Halis Family Foundation, and the National Institutes of Health.

RNA-activated protein cutter protects bacteria from infection

Our growing understanding of the ways bacteria defend themselves against viruses continues to change the way scientists work and to offer new opportunities to improve human health. Ancient immune systems known as CRISPR systems have already been widely adopted as powerful genome editing tools, and the CRISPR toolkit is continuing to expand. Now, scientists at MIT’s McGovern Institute have uncovered an unexpected and potentially useful tool that some bacteria use to respond to infection: an RNA-activated protein-cutting enzyme.

McGovern Fellows Jonathan Gootenberg and Omar Abudayyeh in their lab. Photo: Caitlin Cunningham

The enzyme is part of a CRISPR system discovered last year by McGovern Fellows Omar Abudayyeh and Jonathan Gootenberg. The system, found in bacteria from Tokyo Bay, originally caught their interest because of the precision with which its RNA-activated enzyme cuts RNA. That enzyme, Cas7-11, is considered a promising tool for editing RNA for both research and potential therapeutics. Now, the same researchers have taken a closer look at this bacterial immune system and found that once Cas7-11 has been activated by the right RNA, it also turns on an enzyme that snips apart a particular bacterial protein.

That makes the Cas7-11 system notably more complex than better-studied CRISPR systems, which protect bacteria simply by chopping up the genetic material of an invading virus. “This is a much more elegant and complex signaling mechanism to really defend the bacteria,” Abudayyeh says. A team led by Abudayyeh, Gootenberg, and collaborator Hiroshi Nishimasu at the University of Tokyo reports these findings in the November 3, 2022, issue of the journal Science.

Protease programming

The team’s experiments reveal that in bacteria, activation of the protein-cutting enzyme, known as a protease, triggers a series of events that ultimately slow the organism’s growth. But the components of the CRISPR system can be engineered to achieve different outcomes. Gootenberg and Abudayyeh have already programmed the RNA-activated protease to report on the presence of specific RNAs in mammalian cells. With further adaptations, they say it might one day be used to diagnose or treat disease.

The discovery grew out of the researchers’ curiosity about how bacteria protect themselves from infection using Cas7-11. They knew that the enzyme was capable of cutting viral RNA, but there were hints that something more might be going on. They wondered whether a set of genes that clustered near the Cas7-11 gene might also be involved in the bacteria’s infection response, and when graduate students Cian Schmitt-Ulms and Kaiyi Jiang began experimenting with those proteins, they discovered that they worked with Cas7-11 to execute a surprisingly elaborate response to a target RNA.

One of those proteins was the protease Csx29. In the team’s test tube experiments, Csx29 couldn’t cut anything on its own, but in the presence of a target RNA, Cas7-11 switched the protease on. Even then, when the researchers mixed the protease with Cas7-11 and its RNA target and allowed them to mingle with other proteins, most of the proteins remained intact. But one, a protein called Csx30, was reliably snipped apart by the protein-cutting enzyme.

Their experiments had uncovered an enzyme that cut a specific protein, but only in the presence of its particular target RNA. It was unusual—and potentially useful. “That was when we knew we were onto something,” Abudayyeh says.

As the team continued to explore the system, they found that Csx29’s RNA-activated cut frees a fragment of Csx30 that then works with other bacterial proteins to execute a key aspect of the bacteria’s response to infection—slowing down growth. “Our growth experiments suggest that the cleavage is modulating the bacteria’s stress response in some way,” Gootenberg says.

The scientists quickly recognized that this RNA-activated protease could have uses beyond its natural role in antiviral defense. They have shown that the system can be adapted so that when the protease cuts Csx30 in the presence of its target RNA, it generates an easy-to-detect fluorescent signal. Because Cas7-11 can be directed to recognize any target RNA, researchers can program the system to detect and report on any RNA of interest. And even though the original system evolved in bacteria, this RNA sensor works well in mammalian cells.

Gootenberg and Abudayyeh say understanding this surprisingly elaborate CRISPR system opens new possibilities by adding to scientists’ growing toolkit of RNA-guided enzymes. “We’re excited to see how people use these tools and how they innovate on them,” Gootenberg says. It’s easy to imagine both diagnostic and therapeutic applications, they say. For example, an RNA sensor could detect signatures of disease in patient samples, or limit delivery of a potential therapy to specific types of cells, enabling that drug to carry out its work with fewer side effects.

In addition to Gootenberg, Abudayyeh, Schmitt-Ulms, and Jiang, Abudayyeh-Gootenberg lab postdoc Nathan Wenyuan Zhou contributed to the project. This work was supported by NIH grants 1R21-AI149694, R01-EB031957, and R56-HG011857, the McGovern Institute Neurotechnology (MINT) program, the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics in Neuroscience, the G. Harold & Leila Y. Mathers Charitable Foundation, the MIT John W. Jarve (1978) Seed Fund for Science Innovation, the Cystic Fibrosis Foundation, Google Ventures, Impetus Grants, the NHGRI/TDCC Opportunity Fund, and the McGovern Institute.

Study urges caution when comparing neural networks to the brain

Neural networks, a type of computing system loosely modeled on the organization of the human brain, form the basis of many artificial intelligence systems for applications such as speech recognition, computer vision, and medical image analysis.

In the field of neuroscience, researchers often use neural networks to try to model the same kind of tasks that the brain performs, in hopes that the models could suggest new hypotheses regarding how the brain itself performs those tasks. However, a group of researchers at MIT is urging that more caution should be taken when interpreting these models.

In an analysis of more than 11,000 neural networks that were trained to simulate the function of grid cells — key components of the brain’s navigation system — the researchers found that neural networks only produced grid-cell-like activity when they were given very specific constraints that are not found in biological systems.

“What this suggests is that in order to obtain a result with grid cells, the researchers training the models needed to bake in those results with specific, biologically implausible implementation choices,” says Rylan Schaeffer, a former senior research associate at MIT.

Without those constraints, the MIT team found that very few neural networks generated grid-cell-like activity, suggesting that these models do not necessarily generate useful predictions of how the brain works.

Schaeffer, who is now a graduate student in computer science at Stanford University, is the lead author of the new study, which will be presented at the 2022 Conference on Neural Information Processing Systems this month. Ila Fiete, a professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research, is the senior author of the paper. Mikail Khona, an MIT graduate student in physics, is also an author.

Ila Fiete leads a discussion in her lab at the McGovern Institute. Photo: Steph Stevens

Modeling grid cells

Neural networks, which researchers have been using for decades to perform a variety of computational tasks, consist of thousands or millions of processing units connected to each other. Each unit has connections of varying strengths to other units in the network. As the network analyzes huge amounts of data, the strengths of those connections change as the network learns to perform the desired task.

In this study, the researchers focused on neural networks that have been developed to mimic the function of the brain’s grid cells, which are found in the entorhinal cortex of the mammalian brain. Together with place cells, found in the hippocampus, grid cells form a brain circuit that helps animals know where they are and how to navigate to a different location.

Place cells have been shown to fire whenever an animal is in a specific location, and each place cell may respond to more than one location. Grid cells, on the other hand, work very differently. As an animal moves through a space such as a room, grid cells fire only when the animal is at one of the vertices of a triangular lattice. Different groups of grid cells create lattices of slightly different dimensions, which overlap each other. This allows grid cells to encode a large number of unique positions using a relatively small number of cells.
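The efficiency of this multi-scale code can be shown with a toy sketch (ours, not the study's): each grid module reports position only modulo its lattice spacing, so combining a few modules with different spacings distinguishes many positions with few cells, much like a residue number system. The spacings below are arbitrary illustrative values.

```python
# Each "module" reports only the phase of a position within its spacing.
def grid_code(position, spacings):
    """Return the phase of `position` within each grid module."""
    return tuple(position % s for s in spacings)

spacings = [3, 4, 5]  # hypothetical module spacings (arbitrary units)

# With coprime spacings, every integer position up to their product
# (3 * 4 * 5 = 60 here) gets a unique combination of phases.
codes = {grid_code(p, spacings) for p in range(60)}
assert len(codes) == 60
```

Three modules of a handful of cells each thus cover 60 positions, far more than the number of cells used.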

This type of location encoding also makes it possible to predict an animal’s next location based on a given starting point and a velocity. In several recent studies, researchers have trained neural networks to perform this same task, which is known as path integration.

To train neural networks to perform this task, researchers feed the network a starting point and a velocity that varies over time. The model essentially mimics the activity of an animal roaming through a space, and calculates updated positions as it moves. As the model performs the task, the activity patterns of different units within the network can be measured. Each unit’s activity can be represented as a firing pattern, similar to the firing patterns of neurons in the brain.
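The target computation itself is simple to write down: integrate velocity over time from a known start. A trained network must learn to reproduce this mapping. The sketch below is our illustration of the ground-truth task, not the models used in the study, and the simulated trajectory is made up.

```python
import numpy as np

def path_integrate(start, velocities, dt=0.1):
    """Ground-truth positions from a start point and a velocity sequence."""
    return start + np.cumsum(velocities * dt, axis=0)

rng = np.random.default_rng(0)
start = np.array([0.0, 0.0])
velocities = rng.normal(size=(100, 2))   # simulated 2-D velocity over time
positions = path_integrate(start, velocities)

# The final position is the start plus the total integrated displacement.
assert np.allclose(positions[-1], start + velocities.sum(axis=0) * 0.1)
```

The open question the study probes is what internal representations a network discovers while learning this mapping, and under which conditions those representations resemble grid cells.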

In several previous studies, researchers have reported that their models produced units with activity patterns that closely mimic the firing patterns of grid cells. These studies concluded that grid-cell-like representations would naturally emerge in any neural network trained to perform the path integration task.

However, the MIT researchers found very different results. In an analysis of more than 11,000 neural networks that they trained on path integration, they found that while nearly 90 percent of them learned the task successfully, only about 10 percent of those networks generated activity patterns that could be classified as grid-cell-like. That figure even includes networks in which only a single unit achieved a high grid score.

The earlier studies were more likely to generate grid-cell-like activity only because of the constraints that researchers built into those models, according to the MIT team.

“Earlier studies have presented this story that if you train networks to path integrate, you’re going to get grid cells. What we found is that instead, you have to make this long sequence of choices of parameters, which we know are inconsistent with the biology, and then in a small sliver of those parameters, you will get the desired result,” Schaeffer says.

More biological models

One of the constraints found in earlier studies is that the researchers required the model to convert velocity into a unique position, reported by one network unit that corresponds to a place cell. For this to happen, the researchers also required that each place cell correspond to only one location, which is not how biological place cells work: Studies have shown that place cells in the hippocampus can respond to up to 20 different locations, not just one.

When the MIT team adjusted the models so that place cells were more like biological place cells, the models were still able to perform the path integration task, but they no longer produced grid-cell-like activity. Grid-cell-like activity also disappeared when the researchers instructed the models to generate different types of location output, such as location on a grid with X and Y axes, or location as a distance and angle relative to a home point.

“If the only thing that you ask this network to do is path integrate, and you impose a set of very specific, not physiological requirements on the readout unit, then it’s possible to obtain grid cells,” says Fiete, who is also the director of the K. Lisa Yang Integrative Computational Neuroscience Center at MIT. “But if you relax any of these aspects of this readout unit, that strongly degrades the ability of the network to produce grid cells. In fact, usually they don’t, even though they still solve the path integration task.”

Therefore, if the researchers had not already known that grid cells exist, and guided the models to produce them, it is very unlikely that grid-cell-like activity would have emerged as a natural consequence of model training.

The researchers say that their findings suggest that more caution is warranted when interpreting neural network models of the brain.

“When you use deep learning models, they can be a powerful tool, but one has to be very circumspect in interpreting them and in determining whether they are truly making de novo predictions, or even shedding light on what it is that the brain is optimizing,” Fiete says.

Kenneth Harris, a professor of quantitative neuroscience at University College London, says he hopes the new study will encourage neuroscientists to be more careful when stating what can be shown by analogies between neural networks and the brain.

“Neural networks can be a useful source of predictions. If you want to learn how the brain solves a computation, you can train a network to perform it, then test the hypothesis that the brain works the same way. Whether the hypothesis is confirmed or not, you will learn something,” says Harris, who was not involved in the study. “This paper shows that ‘postdiction’ is less powerful: Neural networks have many parameters, so getting them to replicate an existing result is not as surprising.”

When using these models to make predictions about how the brain works, it’s important to take into account realistic, known biological constraints when building the models, the MIT researchers say. They are now working on models of grid cells that they hope will generate more accurate predictions of how grid cells in the brain work.

“Deep learning models will give us insight about the brain, but only after you inject a lot of biological knowledge into the model,” Khona says. “If you use the correct constraints, then the models can give you a brain-like solution.”

The research was funded by the Office of Naval Research, the National Science Foundation, the Simons Foundation through the Simons Collaboration on the Global Brain, and the Howard Hughes Medical Institute through the Faculty Scholars Program. Mikail Khona was supported by the MathWorks Science Fellowship.

RNA-sensing system controls protein expression in cells based on specific cell states

Researchers at the Broad Institute of MIT and Harvard and the McGovern Institute for Brain Research at MIT have developed a system that can detect a particular RNA sequence in live cells and produce a protein of interest in response. Using the technology, the team showed how they could identify specific cell types, detect and measure changes in the expression of individual genes, track transcriptional states, and control the production of proteins encoded by synthetic mRNA.

The platform, called Reprogrammable ADAR Sensors, or RADARS, even allowed the team to target and kill a specific cell type. The team said RADARS could one day help researchers detect and selectively kill tumor cells, or edit the genome in specific cells. The study appears today in Nature Biotechnology and was led by co-first authors Kaiyi Jiang (MIT), Jeremy Koob (Broad), Xi Chen (Broad), Rohan Krajeski (MIT), and Yifan Zhang (Broad).

“One of the revolutions in genomics has been the ability to sequence the transcriptomes of cells,” said Fei Chen, a core institute member at the Broad, Merkin Fellow, assistant professor at Harvard University, and co-corresponding author on the study. “That has really allowed us to learn about cell types and states. But, often, we haven’t been able to manipulate those cells specifically. RADARS is a big step in that direction.”

“Right now, the tools that we have to leverage cell markers are hard to develop and engineer,” added Omar Abudayyeh, a McGovern Institute Fellow and co-corresponding author on the study. “We really wanted to make a programmable way of sensing and responding to a cell state.”

Jonathan Gootenberg, who is also a McGovern Institute Fellow and co-corresponding author, says that their team was eager to build a tool to take advantage of all the data provided by single-cell RNA sequencing, which has revealed a vast array of cell types and cell states in the body.

“We wanted to ask how we could manipulate cellular identities in a way that was as easy as editing the genome with CRISPR,” he said. “And we’re excited to see what the field does with it.” 

Study authors (from left to right) Omar Abudayyeh, Jonathan Gootenberg, and Fei Chen. Photo: Namrita Sengupta

Repurposing RNA editing

The RADARS platform generates a desired protein when it detects a specific RNA by taking advantage of RNA editing that occurs naturally in cells.

The system consists of an RNA containing two components: a guide region, which binds to the target RNA sequence that scientists want to sense in cells, and a payload region, which encodes the protein of interest, such as a fluorescent signal or a cell-killing enzyme. When the guide RNA binds to the target RNA, this generates a short double-stranded RNA sequence containing a mismatch between two bases in the sequence — adenosine (A) and cytosine (C). This mismatch attracts a naturally occurring family of RNA-editing proteins called adenosine deaminases acting on RNA (ADARs).

In RADARS, the A-C mismatch appears within a “stop signal” in the guide RNA, which prevents the production of the desired payload protein. The ADARs edit and inactivate the stop signal, allowing for the translation of that protein. The order of these molecular events is key to RADARS’s function as a sensor; the protein of interest is produced only after the guide RNA binds to the target RNA and the ADARs disable the stop signal.
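The logic of this sense-and-respond switch can be sketched schematically (our toy code, not the paper's): the guide carries a UAG stop codon whose adenosine sits opposite a cytosine in the target, forming the A-C mismatch; ADAR editing converts that A to inosine, which is read as G, turning UAG (stop) into UGG (tryptophan) so translation can run through into the payload. The sequence below is a made-up toy, not one from the study.

```python
STOP_CODONS = {"UAA", "UAG", "UGA"}

def adar_edit(guide, target_bound):
    """Edit the stop codon's A to G only when the guide is bound to its target."""
    if not target_bound:
        return guide
    return guide.replace("UAG", "UGG", 1)  # A-to-I(G) edit at the stop signal

def translates_through(rna):
    """True if no in-frame stop codon blocks translation of the payload."""
    codons = [rna[i:i + 3] for i in range(0, len(rna) - 2, 3)]
    return not any(c in STOP_CODONS for c in codons)

guide = "AUGUAGGGC"  # toy sequence: start codon, stop signal, then payload
assert not translates_through(adar_edit(guide, target_bound=False))
assert translates_through(adar_edit(guide, target_bound=True))
```

Only the bound state yields the edited, stop-free message, which is what makes the payload protein a readout of the target RNA's presence.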

The team tested RADARS in different cell types and with different target sequences and protein products. They found that RADARS distinguished between kidney, uterine, and liver cells, and could produce different fluorescent signals as well as a caspase, an enzyme that kills cells. RADARS also measured gene expression over a large dynamic range, demonstrating their utility as sensors.

Most systems successfully detected target sequences using the cell’s native ADAR proteins, but the team found that supplementing the cells with additional ADAR proteins increased the strength of the signal. Abudayyeh says both of these cases are potentially useful; taking advantage of the cell’s native editing proteins would minimize the chance of off-target editing in therapeutic applications, but supplementing them could help produce stronger effects when RADARS are used as a research tool in the lab.

On the radar

Abudayyeh, Chen, and Gootenberg say that because both the guide RNA and payload RNA are modifiable, others can easily redesign RADARS to target different cell types and produce different signals or payloads. They also engineered more complex RADARS, in which cells produce one protein only when they sense two RNA sequences together, or produce one of two proteins depending on which of two RNAs they sense. The team adds that similar RADARS could help scientists detect more than one cell type at the same time, as well as complex cell states that can’t be defined by a single RNA transcript.

Ultimately, the researchers hope to develop a set of design rules so that others can more easily develop RADARS for their own experiments. They suggest other scientists could use RADARS to manipulate immune cell states, track neuronal activity in response to stimuli, or deliver therapeutic mRNA to specific tissues.

“We think this is a really interesting paradigm for controlling gene expression,” said Chen. “We can’t even anticipate what the best applications will be. That really comes from the combination of people with interesting biology and the tools you develop.”

This work was supported by the McGovern Institute Neurotechnology (MINT) program, the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics in Neuroscience, the G. Harold & Leila Y. Mathers Charitable Foundation, Massachusetts Institute of Technology, Impetus Grants, the Cystic Fibrosis Foundation, Google Ventures, FastGrants, the McGovern Institute, National Institutes of Health, the Burroughs Wellcome Fund, the Searle Scholars Foundation, the Harvard Stem Cell Institute, and the Merkin Institute.

Magnetic sensors track muscle length

Using a simple set of magnets, MIT researchers have come up with a sophisticated way to monitor muscle movements, which they hope will make it easier for people with amputations to control their prosthetic limbs.

In a new pair of papers, the researchers demonstrated the accuracy and safety of their magnet-based system, which can track the length of muscles during movement. The studies, performed in animals, offer hope that this strategy could be used to help people with prosthetic devices control them in a way that more closely mimics natural limb movement.

“These recent results demonstrate that this tool can be used outside the lab to track muscle movement during natural activity, and they also suggest that the magnetic implants are stable and biocompatible and that they don’t cause discomfort,” says Cameron Taylor, an MIT research scientist and co-lead author of both papers.

McGovern Institute Associate Investigator Hugh Herr. Photo: Jimmy Day / MIT Media Lab

In one of the studies, the researchers showed that they could accurately measure the lengths of turkeys’ calf muscles as the birds ran, jumped, and performed other natural movements. In the other study, they showed that the small magnetic beads used for the measurements do not cause inflammation or other adverse effects when implanted in muscle.

“I am very excited for the clinical potential of this new technology to improve the control and efficacy of bionic limbs for persons with limb-loss,” says Hugh Herr, a professor of media arts and sciences, co-director of the K. Lisa Yang Center for Bionics at MIT, and an associate member of MIT’s McGovern Institute for Brain Research.

Herr is a senior author of both papers, which appear today in the journal Frontiers in Bioengineering and Biotechnology. Thomas Roberts, a professor of ecology, evolution, and organismal biology at Brown University, is a senior author of the measurement study.

Tracking movement

Currently, powered prosthetic limbs are usually controlled using an approach known as surface electromyography (EMG). Electrodes attached to the surface of the skin or surgically implanted in the residual muscle of the amputated limb measure electrical signals from a person’s muscles, which are fed into the prosthesis to help it move the way the person wearing the limb intends.

However, that approach does not capture any information about muscle length or velocity, which could help to make the prosthetic movements more accurate.

Several years ago, the MIT team began working on a novel way to perform those kinds of muscle measurements, using an approach that they call magnetomicrometry. This strategy takes advantage of the permanent magnetic fields surrounding small beads implanted in a muscle. Using a credit-card-sized, compass-like sensor attached to the outside of the body, their system can track the distances between the two magnets. When a muscle contracts, the magnets move closer together, and when it lengthens, they move farther apart.

The new muscle measuring approach takes advantage of the magnetic fields of two small beads implanted in a muscle. Using a small sensor attached to the outside of the body, the system can track the distances between the two magnets as the muscle contracts and lengthens. Image: Hugh Herr
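The core idea can be sketched in simplified form (ours, not the team's tracking algorithm): the field of a small magnet falls off roughly as one over distance cubed, so a calibrated field measurement can be inverted into a distance estimate. Units and the calibration constant below are arbitrary.

```python
def field_at(distance, k=1.0):
    """Dipole-like field magnitude at `distance` (arbitrary units)."""
    return k / distance ** 3

def distance_from_field(b, k=1.0):
    """Invert the 1/r^3 falloff to recover a distance from a field reading."""
    return (k / b) ** (1.0 / 3.0)

# Round-trip check: a field reading taken at distance 2.0 inverts back
# to a distance estimate of 2.0.
b = field_at(2.0)
assert abs(distance_from_field(b) - 2.0) < 1e-9
```

The real system combines multiple magnetometer readings to localize the magnets in three dimensions, but the steep falloff of the field is what makes precise, fast distance tracking possible from outside the body.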

In a study published last year, the researchers showed that this system could be used to accurately measure small ankle movements when the beads were implanted in the calf muscles of turkeys. In one of the new studies, the researchers set out to see if the system could make accurate measurements during more natural movements in a nonlaboratory setting.

To do that, they created an obstacle course of ramps for the turkeys to climb and boxes for them to jump on and off of. The researchers used their magnetic sensor to track muscle movements during these activities, and found that the system could calculate muscle lengths in less than a millisecond.

They also compared their data to measurements taken using a more traditional approach known as fluoromicrometry, a type of X-ray technology that requires much larger equipment than magnetomicrometry. The magnetomicrometry measurements varied from those generated by fluoromicrometry by less than a millimeter, on average.

“We’re able to provide the muscle-length tracking functionality of the room-sized X-ray equipment using a much smaller, portable package, and we’re able to collect the data continuously instead of being limited to the 10-second bursts that fluoromicrometry is limited to,” Taylor says.

Seong Ho Yeon, an MIT graduate student, is also a co-lead author of the measurement study. Other authors include MIT Research Support Associate Ellen Clarrissimeaux and former Brown University postdoc Mary Kate O’Donnell.

Biocompatibility

In the second paper, the researchers focused on the biocompatibility of the implants. They found that the magnets did not generate tissue scarring, inflammation, or other harmful effects. They also showed that the implanted magnets did not alter the turkeys’ gaits, suggesting they did not produce discomfort. William Clark, a postdoc at Brown, is the co-lead author of the biocompatibility study.

The researchers also showed that the implants remained stable for eight months, the length of the study, and did not migrate toward each other, as long as they were implanted at least 3 centimeters apart. The researchers envision that the beads, which consist of a magnetic core coated with gold and a polymer called Parylene, could remain in tissue indefinitely once implanted.

“Magnets don’t require an external power source, and after implanting them into the muscle, they can maintain the full strength of their magnetic field throughout the lifetime of the patient,” Taylor says.

The researchers are now planning to seek FDA approval to test the system in people with prosthetic limbs. They hope to use the sensor to control prostheses similar to the way surface EMG is used now: Measurements regarding the length of muscles will be fed into the control system of a prosthesis to help guide it to the position that the wearer intends.

“The place where this technology fills a need is in communicating those muscle lengths and velocities to a wearable robot, so that the robot can perform in a way that works in tandem with the human,” Taylor says. “We hope that magnetomicrometry will enable a person to control a wearable robot with the same comfort level and the same ease as someone would control their own limb.”

In addition to prosthetic limbs, those wearable robots could include robotic exoskeletons, which are worn outside the body to help people move their legs or arms more easily.

The research was funded by the Salah Foundation, the K. Lisa Yang Center for Bionics at MIT, the MIT Media Lab Consortia, the National Institutes of Health, and the National Science Foundation.

Unlocking the mysteries of how neurons learn

When he matriculated in 2019 as a graduate student, Raúl Mojica Soto-Albors was no stranger to MIT. He’d spent time here on multiple occasions as an undergraduate at the University of Puerto Rico at Mayagüez, including eight months in 2018 as a displaced student after Hurricane Maria in 2017. Those experiences — including participating in the MIT Summer Research Bio Program (MSRP-Bio), which offers a funded summer research experience to underrepresented minorities and other underserved students — not only changed his course of study; they also empowered him to pursue a PhD.

“The summer program eased a lot of my worries about what science would be like, because I had never been immersed in an environment like MIT’s,” he says. “I thought it would be too intense and I wouldn’t be able to make it. But, in reality, it is just a bunch of people following their passions. And so, as long as you are following your passion, you are going to be pretty happy and productive.”

Mojica is now following his passion as a doctoral student in the MIT Department of Brain and Cognitive Sciences, using a complex electrophysiology method termed “patch clamp” to investigate neuronal activity in vivo. “It has all the stuff which we historically have not paid much attention to,” he explains. “Neuroscientists have been very focused on the spiking of the neuron. But I am concentrating instead on patterns in the subthreshold activity of neurons.”

Opening a door to neuroscience

Mojica’s affinity for science blossomed in childhood. Even though his parents encouraged him, he says, “It was a bit difficult as I did not have someone in science in my family. There was no one [like that] who I could go to for guidance.” In college, he became interested in the parameters of human behavior and decided to major in psychology. At the same time, he was curious about biology. “As I was learning about psychology,” he says, “I kept wondering how we, as human beings, emerge from such a mess of interacting neurons.”

His journey at MIT began in January 2017, when he was invited to attend the Center for Brains, Minds and Machines Quantitative Biology Methods Program, an intensive, weeklong program offered to underrepresented students of color to prepare them for scientific careers. Even though he had taken a Python class at the University of Puerto Rico and completed some online courses, he says, “This was the first instance where I had to develop my own tools and learn how to use a programming language to my advantage.”

The program also dramatically changed the course of his undergraduate career, thanks to conversations with Mandana Sassanfar, a biology lecturer and the program’s coordinator, about his future goals. “She advised me to change majors to biology, as the psychology component is a little bit easier to read up on than missing the foundational biology classes,” he says. She also recommended that he apply to MSRP.

Mojica promptly took her advice, and he returned to MIT in the summer of 2017 as an MSRP student working in the lab of Associate Professor Mark Harnett in the Department of Brain and Cognitive Sciences and the McGovern Institute. There, he focused on performing calcium imaging on the retrosplenial cortex to understand the role of neurons in navigating a complex spatial environment. The experience was eye-opening; there are very few specialized programs at UPRM, notes Mojica, which limited his exposure to interdisciplinary subjects. “That was my door into neuroscience, which I otherwise would have never been able to get into.”

Weathering the storm

Mojica returned home to begin his senior year, but shortly thereafter, in September 2017, Hurricane Maria hit Puerto Rico and devastated the community. “The island was dealing with blackouts almost a year after the hurricane, and they are still dealing with them today. It makes it really difficult, for example, for people who rely on electricity for oxygen or to refrigerate their diabetes medicine,” he says. “[My family] was lucky to have electricity reliably four months after the hurricane. But I had a lot of people around me who spent eight, nine, 10 months without electricity,” he says.

The hurricane’s destruction disrupted every aspect of life, including education. MIT responded by hosting several 2017 MSRP students from Puerto Rico for the spring semester, including Mojica. He moved back to campus in February 2018, finished up his fall term university exams, and took classes and did research throughout the spring and summer of that year.

“That was when I first got some culture shock and felt homesick,” he notes. Thankfully, he was not alone. He befriended another student from Puerto Rico who helped him through that tough time. They understood and supported each other, as both of their families were navigating the challenges of a post-hurricane island. Mojica says, “We had just come out of this mess of the hurricane, and we came [to MIT] and everything was perfect. … It was jarring.”

Despite the immense upheaval in his life, Mojica was determined to pursue a PhD. “I didn’t want to just consume knowledge for the rest of my life,” he says. “I wanted to produce knowledge. I wanted to be on the cutting-edge of something.”

Paying it forward

Now a fourth-year PhD candidate in the Harnett Lab, he’s doing just that, utilizing a classical method termed “patch clamp electrophysiology” in novel ways to investigate neuronal learning. The patch clamp technique allows him to observe activity below the threshold of neuronal firing in mice, something that no other method can do.
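To see why spike-focused recordings miss this information, consider a toy voltage trace: a spike detector reports only threshold crossings, so any fluctuation that stays below threshold — however structured — is invisible to it. This is purely an illustrative sketch; the threshold value and the traces are made up, not data from the Harnett Lab.

```python
SPIKE_THRESHOLD = -20.0  # mV; an assumed detection threshold well above resting potential

def spike_samples(trace):
    """Return sample indices where the membrane potential crosses threshold upward."""
    return [i for i in range(1, len(trace))
            if trace[i] >= SPIKE_THRESHOLD > trace[i - 1]]

# A clear subthreshold depolarization (-70 mV up to -55 mV and back)
# never crosses threshold, so a spike-based analysis records nothing.
subthreshold = [-70, -66, -60, -55, -60, -70]
assert spike_samples(subthreshold) == []

# The same trace with one spike riding on top: the detector sees only that event.
with_spike = [-70, -66, -60, 30, -60, -70]
assert spike_samples(with_spike) == [3]
```

Patch clamp recordings capture the full voltage trace, subthreshold wiggles included, which is what makes the method suited to Mojica’s questions.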

“I am studying how single neurons learn and adapt, or plasticize,” Mojica explains. “If I present something new and unexpected to the animal, how does a cell respond? And if I stimulate the cell, can I make it learn something that it didn’t respond to before?” This research could have implications for patient recovery after severe brain injuries. “Plasticity is a crucial aspect of brain function. If we could figure out how neurons learn, or even how to plasticize them, we could speed up recovery from life-threatening loss of brain tissue, for example,” he says.

In addition to research, Mojica’s passion for mentorship shines through. His voice lifts as he describes one of his undergraduate mentees, Gabriella, who is now a full-time graduate student in the Harnett lab. He currently mentors MSRP students and advises prospective PhD students on their applications. “When I was navigating the PhD process, I did not have people like me serving as my own mentors,” he notes.

Mojica knows firsthand the impact of mentoring. Even though he never had anyone who could provide guidance about science, his childhood music teacher played an extremely influential role in his early career and always encouraged him to pursue his passions. “He had a lot of knowledge in how to navigate the complicated mess of being 17 or 18 and figuring out what you want to devote the rest of your life to,” he recalls fondly.

Although he’s not sure about his future professional plans, one thing is clear for Mojica: “A big part of it will be mentoring the people who come from similar backgrounds to mine who have less access to opportunities. I want to keep that front and center.”

Understanding reality through algorithms

Although Fernanda De La Torre still has several years left in her graduate studies, she’s already dreaming big when it comes to what the future has in store for her.

“I dream of opening up a school one day where I could bring this world of understanding of cognition and perception into places that would never have contact with this,” she says.

It’s that kind of ambitious thinking that’s gotten De La Torre, a doctoral student in MIT’s Department of Brain and Cognitive Sciences, to this point. A recent recipient of the prestigious Paul and Daisy Soros Fellowship for New Americans, De La Torre has found at MIT a supportive, creative research environment that’s allowed her to delve into the cutting-edge science of artificial intelligence. But she’s still driven by an innate curiosity about human imagination and a desire to bring that knowledge to the communities in which she grew up.

An unconventional path to neuroscience

De La Torre’s first exposure to neuroscience wasn’t in the classroom, but in her daily life. As a child, she watched her younger sister struggle with epilepsy. At 12, she crossed into the United States from Mexico illegally to reunite with her mother, exposing her to a whole new language and culture. Once in the States, she had to grapple with her mother’s shifting personality in the midst of an abusive relationship. “All of these different things I was seeing around me drove me to want to better understand how psychology works,” De La Torre says, “to understand how the mind works, and how it is that we can all be in the same environment and feel very different things.”

But finding an outlet for that intellectual curiosity was challenging. As an undocumented immigrant, her access to financial aid was limited. Her high school was also underfunded and lacked elective options. Mentors along the way, though, encouraged the aspiring scientist, and through a program at her school, she was able to take community college courses to fulfill basic educational requirements.

It took an inspiring amount of dedication to her education, but De La Torre made it to Kansas State University for her undergraduate studies, where she majored in computer science and math. At Kansas State, she was able to get her first real taste of research. “I was just fascinated by the questions they were asking and this entire space I hadn’t encountered,” says De La Torre of her experience working in a visual cognition lab and discovering the field of computational neuroscience.

Although Kansas State didn’t have a dedicated neuroscience program, her research experience in cognition led her to a machine learning lab led by William Hsu, a computer science professor. There, De La Torre became enamored by the possibilities of using computation to model the human brain. Hsu’s support also convinced her that a scientific career was a possibility. “He always made me feel like I was capable of tackling big questions,” she says fondly.

With the confidence imparted in her at Kansas State, De La Torre came to MIT in 2019 as a post-baccalaureate student in the lab of Tomaso Poggio, the Eugene McDermott Professor of Brain and Cognitive Sciences and an investigator at the McGovern Institute for Brain Research. With Poggio, also the director of the Center for Brains, Minds and Machines, De La Torre began working on deep-learning theory, an area of machine learning focused on how artificial neural networks modeled on the brain learn to recognize patterns.

“It’s a very interesting question because we’re starting to use them everywhere,” says De La Torre of neural networks, listing off examples from self-driving cars to medicine. “But, at the same time, we don’t fully understand how these networks can go from knowing nothing and just being a bunch of numbers to outputting things that make sense.”

Her experience as a post-bac was De La Torre’s first real opportunity to apply the technical computer skills she developed as an undergraduate to neuroscience. It was also the first time she could fully focus on research. “That was the first time that I had access to health insurance and a stable salary. That was, in itself, sort of life-changing,” she says. “But on the research side, it was very intimidating at first. I was anxious, and I wasn’t sure that I belonged here.”

Fortunately, De La Torre says she was able to overcome those insecurities, both through a growing unabashed enthusiasm for the field and through the support of Poggio and her other colleagues in MIT’s Department of Brain and Cognitive Sciences. When the opportunity came to apply to the department’s PhD program, she jumped on it. “It was just knowing these kinds of mentors are here and that they cared about their students,” says De La Torre of her decision to stay on at MIT for graduate studies. “That was really meaningful.”

Expanding notions of reality and imagination

In her two years so far in the graduate program, De La Torre’s work has expanded the understanding of neural networks and their applications to the study of the human brain. Working with Guangyu Robert Yang, an associate investigator at the McGovern Institute and an assistant professor in the departments of Brain and Cognitive Sciences and Electrical Engineering and Computer Science, she’s engaged in what she describes as more philosophical questions about how one develops a sense of self as an independent being. She’s interested in how that self-consciousness develops and why it might be useful.

De La Torre’s primary advisor, though, is Professor Josh McDermott, who leads the Laboratory for Computational Audition. With McDermott, De La Torre is attempting to understand how the brain integrates vision and sound. While combining sensory inputs may seem like a basic process, there are many unanswered questions about how our brains combine multiple signals into a coherent impression, or percept, of the world. Many of the questions are raised by audiovisual illusions in which what we hear changes what we see. For example, if one sees a video of two discs passing each other, but the clip contains the sound of a collision, the brain will perceive that the discs are bouncing off, rather than passing through each other. Given an ambiguous image, that simple auditory cue is all it takes to create a different perception of reality.
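One textbook way to formalize this kind of combination is maximum-likelihood cue integration, in which each sense contributes in proportion to its reliability (the inverse of its noise variance). The sketch below is a standard toy model, not the modeling approach from De La Torre’s study, and the numbers are invented for illustration.

```python
def fuse(mu_v, var_v, mu_a, var_a):
    """Inverse-variance-weighted fusion of a visual and an auditory estimate."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_a)  # visual weight = relative reliability
    mu = w_v * mu_v + (1 - w_v) * mu_a           # fused estimate
    var = 1 / (1 / var_v + 1 / var_a)            # fused estimate is less noisy than either cue
    return mu, var

# An ambiguous visual cue (high variance) is dominated by a crisp sound:
# the fused percept lands much closer to the auditory estimate.
mu, var = fuse(mu_v=0.3, var_v=0.04, mu_a=0.7, var_a=0.01)
assert abs(mu - 0.62) < 1e-6
assert var < min(0.04, 0.01)
```

Under this model, a reliable auditory cue can legitimately reshape an ambiguous visual percept — one plausible account of why a collision sound makes the discs appear to bounce.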

“There’s something interesting happening where our brains are receiving two signals telling us different things and, yet, we have to combine them somehow to make sense of the world,” De La Torre says.

De La Torre is using behavioral experiments to probe how the human brain makes sense of multisensory cues to construct a particular perception. To do so, she’s created various scenes of objects interacting in 3D space, paired with different sounds, asking research participants to describe characteristics of the scene. For example, in one experiment, she combines visuals of a block moving across a surface at different speeds with various scraping sounds, asking participants to estimate how rough the surface is. Eventually she hopes to take the experiment into virtual reality, where participants will physically push blocks in response to how rough they perceive the surface to be, rather than just reporting on what they experience.

Once she’s collected data, she’ll move into the modeling phase of the research, evaluating whether multisensory neural networks perceive illusions the way humans do. “What we want to do is model exactly what’s happening,” says De La Torre. “How is it that we’re receiving these two signals, integrating them and, at the same time, using all of our prior knowledge and inferences of physics to really make sense of the world?”

Although her two strands of research with Yang and McDermott may seem distinct, she sees clear connections between the two. Both projects are about grasping what artificial neural networks are capable of and what they tell us about the brain. At a more fundamental level, she says that how the brain perceives the world from different sensory cues might be part of what gives people a sense of self. Sensory perception is about constructing a cohesive, unitary sense of the world from multiple sources of sensory data. Similarly, she argues, “the sense of self is really a combination of actions, plans, goals, emotions, all of these different things that are components of their own, but somehow create a unitary being.”

It’s a fitting sentiment for De La Torre, who has been working to make sense of and integrate different aspects of her own life. Working in the Computational Audition lab, for example, she’s started experimenting with combining electronic music with folk music from her native Mexico, connecting her “two worlds,” as she says. Having the space to undertake those kinds of intellectual explorations, and colleagues who encourage it, is one of De La Torre’s favorite parts of MIT.

“Beyond professors, there’s also a lot of students whose way of thinking just amazes me,” she says. “I see a lot of goodness and excitement for science and a little bit of — it’s not nerdiness, but a love for very niche things — and I just kind of love that.”

A “golden era” to study the brain

As an undergraduate, Mitch Murdock was a rare science-humanities double major, specializing in both English and molecular, cellular, and developmental biology at Yale University. Today, as a doctoral student in the MIT Department of Brain and Cognitive Sciences, he sees obvious ways that his English education expanded his horizons as a neuroscientist.

“One of my favorite parts of English was trying to explore interiority, and how people have really complicated experiences inside their heads,” Murdock explains. “I was excited about trying to bridge that gap between internal experiences of the world and that actual biological substrate of the brain.”

Though he can see those connections now, it wasn’t until after Yale that Murdock became interested in brain sciences. As an undergraduate, he was in a traditional molecular biology lab. He even planned to stay there after graduation as a research technician; fortunately, though, he says his advisor Ron Breaker encouraged him to explore the field. That’s how Murdock ended up in a new lab run by Conor Liston, an associate professor at Weill Cornell Medicine, who studies how factors such as stress and sleep regulate the modeling of brain circuits.

It was in Liston’s lab that Murdock was first exposed to neuroscience and began to see the brain as the biological basis of the philosophical questions about experience and emotion that interested him. “It was really in his lab where I thought, ‘Wow, this is so cool. I have to do a PhD studying neuroscience,’” Murdock laughs.

During his time as a research technician, Murdock examined the impact of chronic stress on brain activity in mice. Specifically, he was interested in ketamine, a fast-acting antidepressant prone to being abused, with the hope that better understanding how ketamine works will help scientists find safer alternatives. He focused on dendritic spines, small protrusions on the dendrites of neurons that help transmit electrical signals between neurons and provide a physical substrate for memory storage. His findings, Murdock explains, suggested that ketamine works by restoring dendritic spines that can be lost after periods of chronic stress.

After three years at Weill Cornell, Murdock decided to pursue doctoral studies in neuroscience, hoping to continue some of the work he started with Liston. He chose MIT because of the research being done on dendritic spines in the lab of Elly Nedivi, the William R. (1964) and Linda R. Young Professor of Neuroscience in The Picower Institute for Learning and Memory.

Once again, though, the opportunity to explore a wider set of interests fortuitously led Murdock to a new passion. During lab rotations at the beginning of his PhD program, Murdock spent time shadowing a physician at Massachusetts General Hospital who was working with Alzheimer’s disease patients.

“Everyone knows that Alzheimer’s doesn’t have a cure. But I realized that, really, if you have Alzheimer’s disease, there’s very little that can be done,” he says. “That was a big wake-up call for me.”

After that experience, Murdock strategically planned his remaining lab rotations, eventually settling into the lab of Li-Huei Tsai, the Picower Professor of Neuroscience and the director of the Picower Institute. For the past five years, Murdock has worked with Tsai on various strands of Alzheimer’s research.

In one project, for example, members of the Tsai lab have shown how certain kinds of non-invasive light and sound stimulation induce brain activity that can improve memory loss in mouse models of Alzheimer’s. Scientists think that, during sleep, small movements in blood vessels drive cerebrospinal fluid into the brain, which, in turn, flushes out toxic metabolic waste. Murdock’s research suggests that certain kinds of stimulation might drive a similar process, flushing out waste that can exacerbate memory loss.

Much of his work is focused on the activity of single cells in the brain. Are certain neurons or types of neurons genetically predisposed to degenerate, or do they break down randomly? Why do certain subtypes of cells appear to be dysfunctional earlier on in the course of Alzheimer’s disease? How do changes in blood flow in vascular cells affect degeneration? All of these questions, Murdock believes, will help scientists better understand the causes of Alzheimer’s, which will translate eventually into developing cures and therapies.

To answer these questions, Murdock relies on new single-cell sequencing techniques that he says have changed the way we think about the brain. “This has been a big advance for the field, because we know there are a lot of different cell types in the brain, and we think that they might contribute differentially to Alzheimer’s disease risk,” says Murdock. “We can’t think of the brain as only about neurons.”

Murdock says that that kind of “big-picture” approach — thinking about the brain as a compilation of many different cell types that are all interacting — is the central tenet of his research. To look at the brain in the kind of detail that approach requires, Murdock works with Ed Boyden, the Y. Eva Tan Professor in Neurotechnology, a professor of biological engineering and brain and cognitive sciences at MIT, a Howard Hughes Medical Institute investigator, and a member of MIT’s McGovern Institute for Brain Research and Koch Institute for Integrative Cancer Research. Working with Boyden has allowed Murdock to use new technologies such as expansion microscopy and genetically encoded sensors to aid his research.

That kind of new technology, he adds, has helped blow the field wide open. “This is such a cool time to be a neuroscientist because the tools available now make this a golden era to study the brain.” That rapid intellectual expansion applies to the study of Alzheimer’s as well, including newly understood connections between the immune system and Alzheimer’s — an area in which Murdock says he hopes to continue after graduation.

Right now, though, Murdock is focused on a review paper synthesizing some of the latest research. Given the mountains of new Alzheimer’s work coming out each year, he admits that synthesizing all the data is a bit “crazy,” but he couldn’t be happier to be in the middle of it. “There’s just so much that we are learning about the brain from these new techniques, and it’s just so exciting.”

Personal pursuits

This story originally appeared in the Fall 2022 issue of BrainScan.

***

Many neuroscientists were drawn to their careers out of curiosity and wonder. Their deep desire to understand how the brain works drew them into the lab and keeps them coming back, digging deeper and exploring more each day. But for some, the work is more personal.

Several McGovern faculty say they entered their field because someone in their lives was dealing with a brain disorder that they wanted to better understand. They are committed to unraveling the basic biology of those conditions, knowing that knowledge is essential to guide the development of better treatments.

The distance from basic research to clinical progress is shortening, and many young neuroscientists hope not just to deepen scientific understanding of the brain, but to have direct impact on the lives of patients. Some want to know why people they love are suffering from neurological disorders or mental illness; others seek to understand the ways in which their own brains work differently than others. But above all, they want better treatments for people affected by such disorders.

Seeking answers

That’s true for Kian Caplan, a graduate student in MIT’s Department of Brain and Cognitive Sciences who was diagnosed with Tourette syndrome around age 13. At the time, learning that the repetitive, uncontrollable movements and vocal tics he had been making for most of his life were caused by a neurological disorder was something of a relief. But it didn’t take long for Caplan to realize his diagnosis came with few answers.

Graduate student Kian Caplan studies the brain circuits associated with Tourette syndrome and obsessive-compulsive disorder in Guoping Feng and Fan Wang’s labs at the McGovern Institute. Photo: Steph Stevens

Tourette syndrome has been estimated to occur in about six of every 1,000 children, but its neurobiology remains poorly understood.

“The doctors couldn’t really explain why I can’t control the movements and sounds I make,” he says. “They couldn’t really explain why my symptoms wax and wane, or why the tics I have aren’t always the same.”

That lack of understanding is not just frustrating for curious kids like Caplan. It means that researchers have been unable to develop treatments that target the root cause of Tourette syndrome. Drugs that dampen signaling in parts of the brain that control movement can help suppress tics, but not without significant side effects. Caplan has tried those drugs. For him, he says, “they’re not worth the suppression.”

Advised by Fan Wang and McGovern Associate Director Guoping Feng, Caplan is looking for answers. A mouse model of obsessive-compulsive disorder developed in Feng’s lab was recently found to exhibit repetitive movements similar to those of people with Tourette syndrome, and Caplan is working to characterize those tic-like movements. He will use the mouse model to examine the brain circuits underlying the two conditions, which often co-occur in people. Broadly, researchers think Tourette syndrome arises due to dysregulation of cortico-striatal-thalamo-cortical circuits, which connect distant parts of the brain to control movement. Caplan and Wang suspect that the brainstem — a structure found where the brain connects to the spinal cord, known for organizing motor movement into different modules — is probably involved, too.

Wang’s research group studies the brainstem’s role in movement, but she says that like most researchers, she hadn’t considered its role in Tourette syndrome until Caplan joined her lab. That’s one reason Caplan, who has long been a mentor and advocate for students with neurodevelopmental disorders, thinks neuroscience needs more neurodiversity.

“I think we need more representation in basic science research by the people who actually live with those conditions,” he says. Their experiences can lead to insights that may be inaccessible to others, he says, but significant barriers in academia often prevent this kind of representation. Caplan wants to see institutions make systemic changes to ensure that neurodiverse and otherwise minority individuals are able to thrive in academia. “I’m not an exception,” he says. “There should be more people like me here, but the present system makes that incredibly difficult.”

Overcoming adversity

Like Caplan, Lace Riggs faced significant challenges in her pursuit to study the brain. She grew up in Southern California’s Inland Empire, where issues of social disparity, chronic stress, drug addiction, and mental illness were a part of everyday life.

Postdoctoral fellow Lace Riggs studies the origins of neurodevelopmental conditions in Guoping Feng’s lab at the McGovern Institute. Photo: Lace Riggs

“Living in severe poverty and relying on government assistance without access to adequate education and resources led everyone I know and love to suffer tremendously, myself included,” says Riggs, a postdoctoral fellow in the Feng lab.

“There are not a lot of people like me who make it to this stage,” says Riggs, who has lost friends and family members to addiction, mental illness, and suicide. “There’s a reason for that,” she adds. “It’s really, really difficult to get through the educational system and to overcome socioeconomic barriers.”

Today, Riggs is investigating the origins of neurodevelopmental conditions, hoping to pave the way to better treatments for brain disorders by uncovering the molecular changes that alter the structure and function of neural circuits.

Riggs says that the adversities she faced early in life offered valuable insights in the pursuit of these goals. She first became interested in the brain because she wanted to understand how our experiences have a lasting impact on who we are — including in ways that leave people vulnerable to psychiatric problems.

“While the need for more effective treatments led me to become interested in psychiatry, my fascination with the brain’s unique ability to adapt is what led me to neuroscience,” says Riggs.

After finishing high school, Riggs attended California State University, San Bernardino, and became the only member of her family to attend university or attempt a four-year degree. Today, she spends her days working with mice that carry mutations linked to autism or ADHD in humans, studying the animals’ behavior and monitoring their neural activity. She expects that aberrant neural circuit activity in these conditions may also contribute to mood disorders, whose origins are harder to tease apart because they often arise when genetic and environmental factors intersect. Ultimately, Riggs says, she wants to understand how our genes dictate whether an experience will alter neural signaling and impact mental health in a long-lasting way.

Riggs uses patch clamp electrophysiology to record the strength of inhibitory and excitatory synaptic input onto individual neurons (white arrow) in an animal model of autism. Image: Lace Riggs

“If we understand how these long-lasting synaptic changes come about, then we might be able to leverage these mechanisms to develop new and more effective treatments.”

While the turmoil of her childhood is in the past, Riggs says it is not forgotten — in part, because of its lasting effects on her own mental health. She talks openly about her ongoing struggle with social anxiety and complex post-traumatic stress disorder because she is passionate about dismantling the stigma surrounding these conditions. “It’s something I have to deal with every day,” Riggs says. That means coping with symptoms like difficulty concentrating, hypervigilance, and heightened sensitivity to stress. “It’s like a constant hum in the background of my life, it never stops,” she says.

“I urge all of us to strive, not only to make scientific discoveries to move the field forward,” says Riggs, “but to improve the accessibility of this career to those whose lived experiences are required to truly accomplish that goal.”

Modeling the social mind

Typically, it would take two graduate students to do the research that Setayesh Radkani is doing.

Driven by an insatiable curiosity about the human mind, she is working on two PhD thesis projects in two different cognitive neuroscience labs at MIT. For one, she is studying punishment as a social tool to influence others. For the other, she is uncovering the neural processes underlying social learning — that is, learning from others. By piecing together these two research programs, Radkani is hoping to gain a better understanding of the mechanisms underpinning social influence in the mind and brain.

Radkani lived in Iran for most of her life, growing up alongside her younger brother in Tehran. The two spent a lot of time together and have long been each other’s best friends. Her father is a civil engineer, and her mother is a midwife. Her parents always encouraged her to explore new things and follow her own path, even if it wasn’t quite what they imagined for her. And her uncle helped cultivate her sense of curiosity, teaching her to “always ask why” as a way to understand how the world works.

Growing up, Radkani most loved learning about human psychology and using math to model the world around her. But she thought it was impossible to combine her two interests. Prioritizing math, she pursued a bachelor’s degree in electrical engineering at the Sharif University of Technology in Iran.

Then, late in her undergraduate studies, Radkani took a psychology course and discovered the field of cognitive neuroscience, in which scientists mathematically model the human mind and brain. She also spent a summer working in a computational neuroscience lab at the Swiss Federal Institute of Technology in Lausanne. Seeing a way to combine her interests, she decided to pivot and pursue the subject in graduate school.

An experience leading a project in her engineering ethics course during her final year of undergrad further helped her discover some of the questions that would eventually form the basis of her PhD. The project investigated why some students cheat and how to change this.

“Through this project I learned how complicated it is to understand the reasons that people engage in immoral behavior, and even more complicated than that is how to devise policies and react in these situations in order to change people’s attitudes,” Radkani says. “It was this experience that made me realize that I’m interested in studying the human social and moral mind.”

She began looking into social cognitive neuroscience research and stumbled upon a relevant TED talk by Rebecca Saxe, the John W. Jarve Professor in Brain and Cognitive Sciences at MIT, who would eventually become one of Radkani’s research advisors. Radkani knew immediately that she wanted to work with Saxe. But she needed to first get into the BCS PhD program at MIT, a challenging obstacle given her minimal background in the field.

After two application cycles and a year’s worth of graduate courses in cognitive neuroscience, Radkani was accepted into the program. But to come to MIT, she had to leave her family behind. Coming from Iran, Radkani has a single-entry visa, making it difficult for her to travel outside the U.S. She hasn’t been able to visit her family since starting her PhD and won’t be able to until at least after she graduates. Her visa also limits her research contributions, restricting her from attending conferences outside the U.S. “That is definitely a huge burden on my education and on my mental health,” she says.

Still, Radkani is grateful to be at MIT, indulging her curiosity in the human social mind. And she’s thankful for her supportive family, who she calls over FaceTime every day.

Modeling how people think about punishment

In Saxe’s lab, Radkani is researching how people approach and react to punishment, through behavioral studies and neuroimaging. By synthesizing these findings, she’s developing a computational model of the mind that characterizes how people make decisions in situations involving punishment, such as when a parent disciplines a child, when someone punishes their romantic partner, or when the criminal justice system sentences a defendant. With this model, Radkani says she hopes to better understand “when and why punishment works in changing behavior and influencing beliefs about right and wrong, and why sometimes it fails.”

Punishment isn’t a new research topic in cognitive neuroscience, Radkani says, but previous studies have often focused only on people’s behavior in punitive situations, without considering the thought processes that underlie those behaviors. Characterizing these thought processes, though, is key to understanding whether punishment in a situation can be effective in changing people’s attitudes.

People bring their prior beliefs into a punitive situation. Apart from moral beliefs about the appropriateness of different behaviors, “you have beliefs about the characteristics of the people involved, and you have theories about their intentions and motivations,” Radkani says. “All those come together to determine what you do or how you are influenced by punishment,” given the circumstances. Punishers decide a suitable punishment based on their interpretation of the situation, in light of their beliefs. Targets of punishment then decide whether they’ll change their attitude as a result of the punishment, depending on their own beliefs. Even outside observers make decisions, choosing whether to keep or change their moral beliefs based on what they see.

To capture these decision-making processes, Radkani is developing a computational model of the mind for punitive situations. The model mathematically represents people’s beliefs and how they interact with certain features of the situation to shape their decisions. The model then predicts a punisher’s decisions, and how punishment will influence the target and observers. Through this model, Radkani will provide a foundational understanding of how people think in various punitive situations.
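The article doesn’t publish the model’s equations, but the kind of belief-updating it describes — observers weighing a punishment against what they believe about the punisher’s motives — can be illustrated with a minimal Bayesian sketch. Everything here (the single belief variable, the `legitimacy` parameter, the numbers) is a hypothetical simplification for illustration, not Radkani’s actual model.

```python
# Illustrative sketch only: a minimal Bayesian observer updating a moral
# belief after seeing a punishment. The parameters are hypothetical and
# not drawn from the model described in the article.

def update_belief(prior_wrong: float, punished: bool, legitimacy: float) -> float:
    """Posterior probability that an act is wrong, given whether it was
    punished and how legitimate the observer believes the punisher to be.

    legitimacy = P(punish | act is wrong). A fair punisher reliably
    punishes wrongdoing; a capricious one punishes at chance, which
    makes the punishment uninformative.
    """
    # Likelihood of punishment if the act is NOT wrong (spite, error, bias).
    p_punish_if_ok = 1.0 - legitimacy

    if punished:
        num = legitimacy * prior_wrong
        den = num + p_punish_if_ok * (1.0 - prior_wrong)
    else:
        num = (1.0 - legitimacy) * prior_wrong
        den = num + (1.0 - p_punish_if_ok) * (1.0 - prior_wrong)
    return num / den

# Starting from an undecided prior (0.5), a trusted punisher shifts the
# observer's belief strongly, while a distrusted (chance-level) punisher
# leaves it unchanged.
belief_trusted = update_belief(prior_wrong=0.5, punished=True, legitimacy=0.9)
belief_distrusted = update_belief(prior_wrong=0.5, punished=True, legitimacy=0.5)
```

The sketch captures one point from the passage above: the same punishment moves different observers’ beliefs by different amounts, depending entirely on their prior beliefs about the people involved.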

Researching the neural mechanisms of social learning

In parallel, working in the lab of Professor Mehrdad Jazayeri, Radkani is studying social learning, uncovering its underlying neural processes. Through social learning, people learn from other people’s experiences and decisions, and incorporate this socially acquired knowledge into their own decisions or beliefs.

Humans are extraordinary in their social learning abilities; however, our primary form of learning, shared by all other animals, is learning from self-experience. To investigate how learning from others is similar to or different from learning from our own experiences, Radkani has designed a two-player video game that involves both types of learning. During the game, she and her collaborators in Jazayeri’s lab record neural activity in the brain. By analyzing these neural measurements, they plan to uncover the computations carried out by neural circuits during social learning, and compare those to learning from self-experience.
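To make the self-versus-social comparison concrete, here is a toy delta-rule learner that updates a value estimate either from its own rewards or from identical rewards observed in a partner, with the social updates weighted less. The task, the update rule, and the “social discount” are hypothetical simplifications for illustration; they are not the design of the two-player game described here.

```python
# Illustrative sketch only: contrasting learning from self-experience
# with learning from another agent's observed outcomes, using a simple
# delta rule. All parameters are hypothetical.

def update_value(value: float, reward: float, lr: float) -> float:
    """One delta-rule step: V <- V + lr * (R - V)."""
    return value + lr * (reward - value)

def run_learner(rewards, lr_self=0.3, lr_social=0.15):
    """Track two estimates of the same option's value: one updated from
    the agent's own rewards, one from the same rewards observed in a
    partner, learned at a discounted rate (a hypothetical assumption
    that socially acquired evidence is weighted less)."""
    v_self, v_social = 0.0, 0.0
    for r in rewards:
        v_self = update_value(v_self, r, lr_self)
        v_social = update_value(v_social, r, lr_social)
    return v_self, v_social

# After ten identical rewards, both estimates approach the true value,
# but the self-experience estimate converges faster.
v_self, v_social = run_learner([1.0] * 10)
```

In a sketch like this, the two learning channels differ only in a scalar weight; part of what makes the comparison scientifically interesting is asking whether real neural circuits treat the two sources of evidence as the same computation with different gains, or as different computations altogether.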

Radkani first became curious about this comparison as a way to understand why people sometimes draw contrasting conclusions from very similar situations. “For example, if I get Covid from going to a restaurant, I’ll blame the restaurant and say it was not clean,” Radkani says. “But if I hear the same thing happen to my friend, I’ll say it’s because they were not careful.” Radkani wanted to know the root causes of this mismatch in how other people’s experiences affect our beliefs and judgments differently from our own similar experiences, particularly because it can lead to “errors that color the way that we judge other people,” she says.

By combining her two research projects, Radkani hopes to better understand how social influence works, particularly in moral situations. From there, she has a slew of research questions that she’s eager to investigate, including: How do people choose who to trust? And which types of people tend to be the most influential? As Radkani’s research grows, so does her curiosity.