Not every reader’s struggle is the same

Many children struggle to learn to read, and studies have shown that students from a lower socioeconomic status (SES) background are more likely to have difficulty than those from a higher SES background.

MIT neuroscientists have now discovered that the types of difficulties that lower-SES students have with reading, and the underlying brain signatures, are, on average, different from those of higher-SES students who struggle with reading.

In a new study, which included brain scans of more than 150 children as they performed tasks related to reading, researchers found that when students from higher SES backgrounds struggled with reading, it could usually be explained by differences in their ability to piece sounds together into words, a skill known as phonological processing.

However, when students from lower SES backgrounds struggled, it was best explained by differences in their ability to rapidly name words or letters, a task associated with orthographic processing, or visual interpretation of words and letters. This pattern was further confirmed by brain activation during phonological and orthographic processing.

These differences suggest that different types of interventions may be needed for different groups of children, the researchers say. The study also highlights the importance of including a wide range of SES levels in studies of reading or other types of academic learning.

“Within the neuroscience realm, we tend to rely on convenience samples of participants, so a lot of our understanding of the neuroscience components of reading in general, and reading disabilities in particular, tends to be based on higher-SES families,” says Rachel Romeo, a former graduate student in the Harvard-MIT Program in Health Sciences and Technology and the lead author of the study. “If we only look at these nonrepresentative samples, we can come away with a relatively biased view of how the brain works.”

Romeo is now an assistant professor in the Department of Human Development and Quantitative Methodology at the University of Maryland. John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology and a professor of brain and cognitive sciences at MIT, is the senior author of the paper, which appears today in the journal Developmental Cognitive Neuroscience.

Components of reading

For many years, researchers have known that children’s scores on standardized assessments of reading are correlated with socioeconomic factors such as school spending per student or the number of children at the school who qualify for free or reduced-price lunches.

Studies of children who struggle with reading, mostly done in higher-SES environments, have shown that the aspect of reading they struggle with most is phonological awareness: the understanding of how sounds combine to make a word, and how sounds can be split up and swapped in or out to make new words.

“That’s a key component of reading, and difficulty with phonological processing is often one of the hallmarks of dyslexia or other reading disorders,” Romeo says.

In the new study, the MIT team wanted to explore how SES might affect phonological processing as well as another key aspect of reading, orthographic processing. This relates more to the visual components of reading, including the ability to identify letters and read words.

To do the study, the researchers recruited first- and second-grade students from the Boston area, making an effort to include a range of SES levels. For the purposes of this study, SES was assessed by parents’ total years of formal education, which is commonly used as a measure of a family’s SES.

“We went into this not necessarily with any hypothesis about how SES might relate to the two types of processing, but just trying to understand whether SES might be impacting one or the other more, or if it affects both types the same,” Romeo says.

The researchers first gave each child a series of standardized tests designed to measure either phonological processing or orthographic processing. Then, they performed fMRI scans of each child while they carried out additional phonological or orthographic tasks.

The initial series of tests allowed the researchers to determine each child’s abilities for both types of processing, and the brain scans allowed them to measure brain activity in parts of the brain linked with each type of processing.

The results showed that at the higher end of the SES spectrum, differences in phonological processing ability accounted for most of the differences between good readers and struggling readers. This is consistent with the findings of previous studies of reading difficulty. In those children, the researchers also found greater differences in activity in the parts of the brain responsible for phonological processing.

However, the outcomes were different when the researchers analyzed the lower end of the SES spectrum. There, the researchers found that variance in orthographic processing ability accounted for most of the differences between good readers and struggling readers. MRI scans of these children revealed greater differences in brain activity in parts of the brain that are involved in orthographic processing.
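To make the analytic logic concrete, here is a hedged, simulated sketch (not the study’s code or data) of the kind of question being asked: within each SES group, how much of the variance in a reading outcome does each skill explain on its own? The group labels, effect sizes, and variable names below are hypothetical.

```python
# Simulated illustration of comparing variance explained (R^2) by
# phonological vs. orthographic scores within two SES groups.
# All data and effect sizes here are made up for demonstration.
import numpy as np

def r_squared(x, y):
    """Variance in y explained by a one-predictor linear fit on x."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1 - residuals.var() / y.var()

rng = np.random.default_rng(1)
n = 80
# Hypothetical higher-SES group: reading tracks phonological skill.
phon_hi, orth_hi = rng.normal(size=n), rng.normal(size=n)
reading_hi = 0.8 * phon_hi + 0.2 * orth_hi + rng.normal(scale=0.5, size=n)
# Hypothetical lower-SES group: reading tracks rapid-naming (orthographic) skill.
phon_lo, orth_lo = rng.normal(size=n), rng.normal(size=n)
reading_lo = 0.2 * phon_lo + 0.8 * orth_lo + rng.normal(scale=0.5, size=n)

print("higher SES  phon R^2:", round(r_squared(phon_hi, reading_hi), 2),
      " orth R^2:", round(r_squared(orth_hi, reading_hi), 2))
print("lower SES   phon R^2:", round(r_squared(phon_lo, reading_lo), 2),
      " orth R^2:", round(r_squared(orth_lo, reading_lo), 2))
```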

Optimizing interventions

There are many possible reasons why a lower SES background might lead to difficulties in orthographic processing, the researchers say: less exposure to books at home, for example, or limited access to libraries and other resources that promote literacy. For children from these backgrounds who struggle with reading, different types of interventions might be more beneficial than the ones typically used for children who have difficulty with phonological processing.

In a 2017 study, Gabrieli, Romeo, and others found that a summer reading intervention focused on helping students develop the sensory and cognitive processing necessary for reading was more beneficial for students from lower-SES backgrounds than for children from higher-SES backgrounds. Those findings also support the idea that tailored interventions may be necessary for individual students, they say.

“There are two major reasons we understand that cause children to struggle as they learn to read in these early grades. One of them is learning differences, most prominently dyslexia, and the other one is socioeconomic disadvantage,” Gabrieli says. “In my mind, schools have to help all these kinds of kids become the best readers they can, so recognizing the source or sources of reading difficulty ought to inform practices and policies that are sensitive to these differences and optimize supportive interventions.”

Gabrieli and Romeo are now working with researchers at the Harvard University Graduate School of Education to evaluate language and reading interventions that could better prepare preschool children from lower SES backgrounds to learn to read. In her new lab at the University of Maryland, Romeo also plans to further delve into how different aspects of low SES contribute to different areas of language and literacy development.

“No matter why a child is struggling with reading, they need the education and the attention to support them. Studies that try to tease out the underlying factors can help us in tailoring educational interventions to what a child needs,” she says.

The research was funded by the Ellison Medical Foundation, the Halis Family Foundation, and the National Institutes of Health.

Study urges caution when comparing neural networks to the brain

Neural networks, a type of computing system loosely modeled on the organization of the human brain, form the basis of many artificial intelligence systems for applications such as speech recognition, computer vision, and medical image analysis.

In the field of neuroscience, researchers often use neural networks to try to model the same kind of tasks that the brain performs, in hopes that the models could suggest new hypotheses regarding how the brain itself performs those tasks. However, a group of researchers at MIT is urging that more caution should be taken when interpreting these models.

In an analysis of more than 11,000 neural networks that were trained to simulate the function of grid cells — key components of the brain’s navigation system — the researchers found that neural networks only produced grid-cell-like activity when they were given very specific constraints that are not found in biological systems.

“What this suggests is that in order to obtain a result with grid cells, the researchers training the models needed to bake in those results with specific, biologically implausible implementation choices,” says Rylan Schaeffer, a former senior research associate at MIT.

Without those constraints, the MIT team found that very few neural networks generated grid-cell-like activity, suggesting that these models do not necessarily generate useful predictions of how the brain works.

Schaeffer, who is now a graduate student in computer science at Stanford University, is the lead author of the new study, which will be presented at the 2022 Conference on Neural Information Processing Systems this month. Ila Fiete, a professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research, is the senior author of the paper. Mikail Khona, an MIT graduate student in physics, is also an author.

Ila Fiete leads a discussion in her lab at the McGovern Institute. Photo: Steph Stevens

Modeling grid cells

Neural networks, which researchers have been using for decades to perform a variety of computational tasks, consist of thousands or millions of processing units connected to each other. Each node has connections of varying strengths to other nodes in the network. As the network analyzes huge amounts of data, the strengths of those connections change as the network learns to perform the desired task.

In this study, the researchers focused on neural networks that have been developed to mimic the function of the brain’s grid cells, which are found in the entorhinal cortex of the mammalian brain. Together with place cells, found in the hippocampus, grid cells form a brain circuit that helps animals know where they are and how to navigate to a different location.

Place cells have been shown to fire whenever an animal is in a specific location, and each place cell may respond to more than one location. Grid cells, on the other hand, work very differently. As an animal moves through a space such as a room, grid cells fire only when the animal is at one of the vertices of a triangular lattice. Different groups of grid cells create lattices of slightly different dimensions, which overlap each other. This allows grid cells to encode a large number of unique positions using a relatively small number of cells.
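As a rough illustration of this encoding scheme (not code or parameters from the study), the firing field of an idealized grid cell is often modeled as the sum of three plane waves whose directions are 60 degrees apart, which produces peaks on a triangular lattice; varying the spacing mimics the overlapping grid-cell modules described above. The function and parameter names here are hypothetical.

```python
# Idealized grid-cell firing field: three cosine plane waves 60 degrees
# apart sum to a hexagonally repeating pattern. Illustrative only.
import numpy as np

def grid_cell_rate(positions, spacing=0.5, orientation=0.0, phase=(0.0, 0.0)):
    """Firing rate (roughly 0-1) at 2D positions for one grid cell whose
    lattice has the given spacing (meters), orientation, and phase offset."""
    pos = np.atleast_2d(positions) - np.asarray(phase)
    k = 4 * np.pi / (np.sqrt(3) * spacing)     # wave number for this spacing
    angles = orientation + np.array([0.0, np.pi / 3, 2 * np.pi / 3])
    rate = sum(np.cos(k * (pos[:, 0] * np.cos(a) + pos[:, 1] * np.sin(a)))
               for a in angles)
    return (rate + 1.5) / 4.5                  # rescale roughly into [0, 1]

# Two modules with different spacings respond differently to the same
# locations; together, the overlapping lattices disambiguate position.
xy = [[0.1, 0.2], [0.6, 0.2], [1.1, 0.2]]
print(grid_cell_rate(xy, spacing=0.4))
print(grid_cell_rate(xy, spacing=0.6))
```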

This type of location encoding also makes it possible to predict an animal’s next location based on a given starting point and a velocity. In several recent studies, researchers have trained neural networks to perform this same task, which is known as path integration.

To train a neural network to perform this task, researchers feed it a starting point and a velocity that varies over time. The model essentially mimics the activity of an animal roaming through a space, calculating updated positions as it moves. As the model performs the task, the activity patterns of different units within the network can be measured. Each unit’s activity can be represented as a firing pattern, similar to the firing patterns of neurons in the brain.
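The following minimal sketch shows one common way such a path-integration setup is wired up: a recurrent network receives the time-varying velocity plus an encoding of the starting position and is trained to report position at every step. It is an illustrative assumption, not the code used in the studies discussed here, and every name and hyperparameter is a placeholder.

```python
# Minimal path-integration training sketch (illustrative, not from the paper).
import torch
import torch.nn as nn

class PathIntegrator(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.rnn = nn.RNN(input_size=2, hidden_size=hidden, batch_first=True)
        self.init_map = nn.Linear(2, hidden)   # encode the starting position
        self.readout = nn.Linear(hidden, 2)    # decode position at every step

    def forward(self, velocities, start_pos):
        h0 = torch.tanh(self.init_map(start_pos)).unsqueeze(0)
        states, _ = self.rnn(velocities, h0)
        return self.readout(states)

def simulated_trajectories(batch=32, steps=50, dt=0.1):
    """Random-walk velocities and the ground-truth integrated positions."""
    v = torch.randn(batch, steps, 2) * 0.2
    start = torch.rand(batch, 2)
    pos = start.unsqueeze(1) + torch.cumsum(v * dt, dim=1)
    return v, start, pos

model = PathIntegrator()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):                        # short demo training loop
    v, start, target = simulated_trajectories()
    loss = nn.functional.mse_loss(model(v, start), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

After training, the hidden-unit activations (`states` in the sketch) are the quantities whose spatial firing patterns get compared to grid cells.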

In several previous studies, researchers have reported that their models produced units with activity patterns that closely mimic the firing patterns of grid cells. These studies concluded that grid-cell-like representations would naturally emerge in any neural network trained to perform the path integration task.

However, the MIT researchers found very different results. In an analysis of more than 11,000 neural networks that they trained on path integration, they found that while nearly 90 percent of them learned the task successfully, only about 10 percent of those networks generated activity patterns that could be classified as grid-cell-like. That figure includes networks in which even a single unit achieved a high grid score.

According to the MIT team, the earlier studies were more likely to produce grid-cell-like activity only because of the constraints that the researchers built into those models.

“Earlier studies have presented this story that if you train networks to path integrate, you’re going to get grid cells. What we found is that instead, you have to make this long sequence of choices of parameters, which we know are inconsistent with the biology, and then in a small sliver of those parameters, you will get the desired result,” Schaeffer says.

More biological models

One of the constraints found in earlier studies is that the researchers required the model to convert velocity into a unique position, reported by one network unit that corresponds to a place cell. For this to happen, the researchers also required that each place cell correspond to only one location, which is not how biological place cells work: Studies have shown that place cells in the hippocampus can respond to up to 20 different locations, not just one.

When the MIT team adjusted the models so that place cells were more like biological place cells, the models were still able to perform the path integration task, but they no longer produced grid-cell-like activity. Grid-cell-like activity also disappeared when the researchers instructed the models to generate different types of location output, such as location on a grid with X and Y axes, or location as a distance and angle relative to a home point.

“If the only thing that you ask this network to do is path integrate, and you impose a set of very specific, not physiological requirements on the readout unit, then it’s possible to obtain grid cells,” says Fiete, who is also the director of the K. Lisa Yang Integrative Computational Neuroscience Center at MIT. “But if you relax any of these aspects of this readout unit, that strongly degrades the ability of the network to produce grid cells. In fact, usually they don’t, even though they still solve the path integration task.”

Therefore, if the researchers had not already known that grid cells exist, and guided the models to produce them, it would be very unlikely for grid-cell-like activity to appear as a natural consequence of the training.

The researchers say that their findings suggest that more caution is warranted when interpreting neural network models of the brain.

“When you use deep learning models, they can be a powerful tool, but one has to be very circumspect in interpreting them and in determining whether they are truly making de novo predictions, or even shedding light on what it is that the brain is optimizing,” Fiete says.

Kenneth Harris, a professor of quantitative neuroscience at University College London, says he hopes the new study will encourage neuroscientists to be more careful when stating what can be shown by analogies between neural networks and the brain.

“Neural networks can be a useful source of predictions. If you want to learn how the brain solves a computation, you can train a network to perform it, then test the hypothesis that the brain works the same way. Whether the hypothesis is confirmed or not, you will learn something,” says Harris, who was not involved in the study. “This paper shows that ‘postdiction’ is less powerful: Neural networks have many parameters, so getting them to replicate an existing result is not as surprising.”

When using these models to make predictions about how the brain works, it’s important to take into account realistic, known biological constraints when building the models, the MIT researchers say. They are now working on models of grid cells that they hope will generate more accurate predictions of how grid cells in the brain work.

“Deep learning models will give us insight about the brain, but only after you inject a lot of biological knowledge into the model,” Khona says. “If you use the correct constraints, then the models can give you a brain-like solution.”

The research was funded by the Office of Naval Research, the National Science Foundation, the Simons Foundation through the Simons Collaboration on the Global Brain, and the Howard Hughes Medical Institute through the Faculty Scholars Program. Mikail Khona was supported by the MathWorks Science Fellowship.

Magnetic sensors track muscle length

Using a simple set of magnets, MIT researchers have come up with a sophisticated way to monitor muscle movements, which they hope will make it easier for people with amputations to control their prosthetic limbs.

In a new pair of papers, the researchers demonstrated the accuracy and safety of their magnet-based system, which can track the length of muscles during movement. The studies, performed in animals, offer hope that this strategy could be used to help people with prosthetic devices control them in a way that more closely mimics natural limb movement.

“These recent results demonstrate that this tool can be used outside the lab to track muscle movement during natural activity, and they also suggest that the magnetic implants are stable and biocompatible and that they don’t cause discomfort,” says Cameron Taylor, an MIT research scientist and co-lead author of both papers.

McGovern Institute Associate Investigator Hugh Herr. Photo: Jimmy Day / MIT Media Lab

In one of the studies, the researchers showed that they could accurately measure the lengths of turkeys’ calf muscles as the birds ran, jumped, and performed other natural movements. In the other study, they showed that the small magnetic beads used for the measurements do not cause inflammation or other adverse effects when implanted in muscle.

“I am very excited for the clinical potential of this new technology to improve the control and efficacy of bionic limbs for persons with limb-loss,” says Hugh Herr, a professor of media arts and sciences, co-director of the K. Lisa Yang Center for Bionics at MIT, and an associate member of MIT’s McGovern Institute for Brain Research.

Herr is a senior author of both papers, which appear today in the journal Frontiers in Bioengineering and Biotechnology. Thomas Roberts, a professor of ecology, evolution, and organismal biology at Brown University, is a senior author of the measurement study.

Tracking movement

Currently, powered prosthetic limbs are usually controlled using an approach known as surface electromyography (EMG). Electrodes attached to the surface of the skin or surgically implanted in the residual muscle of the amputated limb measure electrical signals from a person’s muscles, which are fed into the prosthesis to help it move the way the person wearing the limb intends.

However, that approach does not take into account any information about the muscle length or velocity, which could help to make the prosthetic movements more accurate.

Several years ago, the MIT team began working on a novel way to perform those kinds of muscle measurements, using an approach that they call magnetomicrometry. This strategy takes advantage of the permanent magnetic fields surrounding small beads implanted in a muscle. Using a credit-card-sized, compass-like sensor attached to the outside of the body, their system can track the distance between two such magnets. When a muscle contracts, the magnets move closer together, and when it stretches, they move farther apart.

The new muscle-measuring approach takes advantage of the permanent magnetic fields of two small beads implanted in a muscle. Using a small sensor attached to the outside of the body, the system can track the distance between the two magnets as the muscle contracts and stretches. Image: Hugh Herr
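As a rough sketch of why an external sensor can recover distance at all, the field of a small magnet measured along its axis falls off with the cube of distance, so a field reading can be inverted into a distance estimate. This is only the textbook dipole relation under assumed values, not the team’s actual multi-sensor tracking algorithm, and the magnetic moment below is an arbitrary placeholder.

```python
# Toy on-axis dipole calculation (illustrative; not the magnetomicrometry
# algorithm). The magnetic moment value is an arbitrary placeholder.
import numpy as np

MU0 = 4 * np.pi * 1e-7       # vacuum permeability (T*m/A)
MOMENT = 5e-3                # magnetic moment of the bead (A*m^2), assumed

def axial_field(r_m):
    """On-axis dipole field magnitude (tesla) at distance r_m (meters)."""
    return MU0 * 2 * MOMENT / (4 * np.pi * r_m ** 3)

def distance_from_field(b_t):
    """Invert the on-axis dipole relation to recover distance (meters)."""
    return (MU0 * 2 * MOMENT / (4 * np.pi * b_t)) ** (1.0 / 3.0)

true_r = 0.04                            # 4 cm sensor-to-magnet distance
measured = axial_field(true_r)           # what an ideal sensor would read
print(distance_from_field(measured))     # recovers ~0.04 m
```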

In a study published last year, the researchers showed that this system could be used to accurately measure small ankle movements when the beads were implanted in the calf muscles of turkeys. In one of the new studies, the researchers set out to see if the system could make accurate measurements during more natural movements in a nonlaboratory setting.

To do that, they created an obstacle course of ramps for the turkeys to climb and boxes for them to jump on and off of. The researchers used their magnetic sensor to track muscle movements during these activities, and found that the system could calculate muscle lengths in less than a millisecond.

They also compared their data to measurements taken using a more traditional approach known as fluoromicrometry, a type of X-ray technology that requires much larger equipment than magnetomicrometry. The magnetomicrometry measurements varied from those generated by fluoromicrometry by less than a millimeter, on average.

“We’re able to provide the muscle-length tracking functionality of the room-sized X-ray equipment using a much smaller, portable package, and we’re able to collect the data continuously instead of being limited to the 10-second bursts that fluoromicrometry allows,” Taylor says.

Seong Ho Yeon, an MIT graduate student, is also a co-lead author of the measurement study. Other authors include MIT Research Support Associate Ellen Clarrissimeaux and former Brown University postdoc Mary Kate O’Donnell.

Biocompatibility

In the second paper, the researchers focused on the biocompatibility of the implants. They found that the magnets did not generate tissue scarring, inflammation, or other harmful effects. They also showed that the implanted magnets did not alter the turkeys’ gaits, suggesting they did not produce discomfort. William Clark, a postdoc at Brown, is the co-lead author of the biocompatibility study.

The researchers also showed that the implants remained stable for eight months, the length of the study, and did not migrate toward each other, as long as they were implanted at least 3 centimeters apart. The researchers envision that the beads, which consist of a magnetic core coated with gold and a polymer called Parylene, could remain in tissue indefinitely once implanted.

“Magnets don’t require an external power source, and after implanting them into the muscle, they can maintain the full strength of their magnetic field throughout the lifetime of the patient,” Taylor says.

The researchers are now planning to seek FDA approval to test the system in people with prosthetic limbs. They hope to use the sensor to control prostheses similar to the way surface EMG is used now: Measurements regarding the length of muscles will be fed into the control system of a prosthesis to help guide it to the position that the wearer intends.

“The place where this technology fills a need is in communicating those muscle lengths and velocities to a wearable robot, so that the robot can perform in a way that works in tandem with the human,” Taylor says. “We hope that magnetomicrometry will enable a person to control a wearable robot with the same comfort level and the same ease as someone would control their own limb.”

In addition to prosthetic limbs, those wearable robots could include robotic exoskeletons, which are worn outside the body to help people move their legs or arms more easily.

The research was funded by the Salah Foundation, the K. Lisa Yang Center for Bionics at MIT, the MIT Media Lab Consortia, the National Institutes of Health, and the National Science Foundation.

Unlocking the mysteries of how neurons learn

When he matriculated in 2019 as a graduate student, Raúl Mojica Soto-Albors was no stranger to MIT. He’d spent time here on multiple occasions as an undergraduate at the University of Puerto Rico at Mayagüez, including eight months in 2018 as a displaced student after Hurricane Maria in 2017. Those experiences — including participating in the MIT Summer Research Bio Program (MSRP-Bio), which offers a funded summer research experience to underrepresented minorities and other underserved students — not only changed his course of study; they also empowered him to pursue a PhD.

“The summer program eased a lot of my worries about what science would be like, because I had never been immersed in an environment like MIT’s,” he says. “I thought it would be too intense and I wouldn’t be able to make it. But, in reality, it is just a bunch of people following their passions. And so, as long as you are following your passion, you are going to be pretty happy and productive.”

Mojica is now following his passion as a doctoral student in the MIT Department of Brain and Cognitive Sciences, using a complex electrophysiology method termed “patch clamp” to investigate neuronal activity in vivo. “It has all the stuff which we historically have not paid much attention to,” he explains. “Neuroscientists have been very focused on the spiking of the neuron. But I am concentrating instead on patterns in the subthreshold activity of neurons.”

Opening a door to neuroscience

Mojica’s affinity for science blossomed in childhood. Even though his parents encouraged him, he says, “It was a bit difficult as I did not have someone in science in my family. There was no one [like that] who I could go to for guidance.” In college, he became interested in the parameters of human behavior and decided to major in psychology. At the same time, he was curious about biology. “As I was learning about psychology,” he says, “I kept wondering how we, as human beings, emerge from such a mess of interacting neurons.”

His journey at MIT began in January 2017, when he was invited to attend the Center for Brains, Minds and Machines Quantitative Biology Methods Program, an intensive, weeklong program offered to underrepresented students of color to prepare them for scientific careers. Even though he had taken a Python class at the University of Puerto Rico and completed some online courses, he says, “This was the first instance where I had to develop my own tools and learn how to use a programming language to my advantage.”

The program also dramatically changed the course of his undergraduate career, thanks to conversations with Mandana Sassanfar, a biology lecturer and the program’s coordinator, about his future goals. “She advised me to change majors to biology, as the psychology component is a little bit easier to read up on than missing the foundational biology classes,” he says. She also recommended that he apply to MSRP.

Mojica promptly took her advice, and he returned to MIT in the summer of 2017 as an MSRP student working in the lab of Associate Professor Mark Harnett in the Department of Brain and Cognitive Sciences and the McGovern Institute. There, he focused on performing calcium imaging in the retrosplenial cortex to understand the role of neurons in navigating a complex spatial environment. The experience was eye-opening; there are very few specialized programs at UPRM, notes Mojica, which limited his exposure to interdisciplinary subjects. “That was my door into neuroscience, which I otherwise would have never been able to get into.”

Weathering the storm

Mojica returned home to begin his senior year, but shortly thereafter, in September 2017, Hurricane Maria hit Puerto Rico and devastated the community. “The island was dealing with blackouts almost a year after the hurricane, and they are still dealing with them today. It makes it really difficult, for example, for people who rely on electricity for oxygen or to refrigerate their diabetes medicine,” he says. “[My family] was lucky to have electricity reliably four months after the hurricane. But I had a lot of people around me who spent eight, nine, 10 months without electricity,” he says.

The hurricane’s destruction disrupted every aspect of life, including education. MIT offered its educational resources by hosting several 2017 MSRP students from Puerto Rico for the spring semester, including Mojica. He moved back to campus in February 2018, finished up his fall term university exams, and took classes and did research throughout the spring and summer of that year.

“That was when I first got some culture shock and felt homesick,” he notes. Thankfully, he was not alone. He befriended another student from Puerto Rico who helped him through that tough time. They understood and supported each other, as both of their families were navigating the challenges of a post-hurricane island. Mojica says, “We had just come out of this mess of the hurricane, and we came [to MIT] and everything was perfect. … It was jarring.”

Despite the immense upheaval in his life, Mojica was determined to pursue a PhD. “I didn’t want to just consume knowledge for the rest of my life,” he says. “I wanted to produce knowledge. I wanted to be on the cutting-edge of something.”

Paying it forward

Now a fourth-year PhD candidate in the Harnett Lab, he’s doing just that, utilizing a classical method termed “patch clamp electrophysiology” in novel ways to investigate neuronal learning. The patch clamp technique allows him to observe activity below the threshold of neuronal firing in mice, something that no other method can do.

“I am studying how single neurons learn and adapt, or plasticize,” Mojica explains. “If I present something new and unexpected to the animal, how does a cell respond? And if I stimulate the cell, can I make it learn something that it didn’t respond to before?” This research could have implications for patient recovery after severe brain injuries. “Plasticity is a crucial aspect of brain function. If we could figure out how neurons learn, or even how to plasticize them, we could speed up recovery from life-threatening loss of brain tissue, for example,” he says.

In addition to research, Mojica’s passion for mentorship shines through. His voice lifts as he describes one of his undergraduate mentees, Gabriella, who is now a full-time graduate student in the Harnett lab. He currently mentors MSRP students and advises prospective PhD students on their applications. “When I was navigating the PhD process, I did not have people like me serving as my own mentors,” he notes.

Mojica knows firsthand the impact of mentoring. Even though he never had anyone who could provide guidance about science, his childhood music teacher played an extremely influential role in his early career and always encouraged him to pursue his passions. “He had a lot of knowledge in how to navigate the complicated mess of being 17 or 18 and figuring out what you want to devote the rest of your life to,” he recalls fondly.

Although he’s not sure about his future professional plans, one thing is clear for Mojica: “A big part of it will be mentoring the people who come from similar backgrounds to mine who have less access to opportunities. I want to keep that front and center.”

Understanding reality through algorithms

Although Fernanda De La Torre still has several years left in her graduate studies, she’s already dreaming big when it comes to what the future has in store for her.

“I dream of opening up a school one day where I could bring this world of understanding of cognition and perception into places that would never have contact with this,” she says.

It’s that kind of ambitious thinking that’s gotten De La Torre, a doctoral student in MIT’s Department of Brain and Cognitive Sciences, to this point. A recent recipient of the prestigious Paul and Daisy Soros Fellowship for New Americans, De La Torre has found at MIT a supportive, creative research environment that’s allowed her to delve into the cutting-edge science of artificial intelligence. But she’s still driven by an innate curiosity about human imagination and a desire to bring that knowledge to the communities in which she grew up.

An unconventional path to neuroscience

De La Torre’s first exposure to neuroscience wasn’t in the classroom, but in her daily life. As a child, she watched her younger sister struggle with epilepsy. At 12, she crossed into the United States from Mexico illegally to reunite with her mother, a move that exposed her to a whole new language and culture. Once in the States, she had to grapple with her mother’s shifting personality in the midst of an abusive relationship. “All of these different things I was seeing around me drove me to want to better understand how psychology works,” De La Torre says, “to understand how the mind works, and how it is that we can all be in the same environment and feel very different things.”

But finding an outlet for that intellectual curiosity was challenging. As an undocumented immigrant, her access to financial aid was limited. Her high school was also underfunded and lacked elective options. Mentors along the way, though, encouraged the aspiring scientist, and through a program at her school, she was able to take community college courses to fulfill basic educational requirements.

It took an inspiring amount of dedication to her education, but De La Torre made it to Kansas State University for her undergraduate studies, where she majored in computer science and math. At Kansas State, she was able to get her first real taste of research. “I was just fascinated by the questions they were asking and this entire space I hadn’t encountered,” says De La Torre of her experience working in a visual cognition lab and discovering the field of computational neuroscience.

Although Kansas State didn’t have a dedicated neuroscience program, her research experience in cognition led her to a machine learning lab led by William Hsu, a computer science professor. There, De La Torre became enamored by the possibilities of using computation to model the human brain. Hsu’s support also convinced her that a scientific career was a possibility. “He always made me feel like I was capable of tackling big questions,” she says fondly.

With the confidence imparted in her at Kansas State, De La Torre came to MIT in 2019 as a post-baccalaureate student in the lab of Tomaso Poggio, the Eugene McDermott Professor of Brain and Cognitive Sciences and an investigator at the McGovern Institute for Brain Research. With Poggio, also the director of the Center for Brains, Minds and Machines, De La Torre began working on deep-learning theory, an area of machine learning focused on how artificial neural networks modeled on the brain learn to recognize patterns.

“It’s a very interesting question because we’re starting to use them everywhere,” says De La Torre of neural networks, listing off examples from self-driving cars to medicine. “But, at the same time, we don’t fully understand how these networks can go from knowing nothing and just being a bunch of numbers to outputting things that make sense.”

Her experience as a post-bac was De La Torre’s first real opportunity to apply the technical computer skills she developed as an undergraduate to neuroscience. It was also the first time she could fully focus on research. “That was the first time that I had access to health insurance and a stable salary. That was, in itself, sort of life-changing,” she says. “But on the research side, it was very intimidating at first. I was anxious, and I wasn’t sure that I belonged here.”

Fortunately, De La Torre says she was able to overcome those insecurities, both through a growing unabashed enthusiasm for the field and through the support of Poggio and her other colleagues in MIT’s Department of Brain and Cognitive Sciences. When the opportunity came to apply to the department’s PhD program, she jumped on it. “It was just knowing these kinds of mentors are here and that they cared about their students,” says De La Torre of her decision to stay on at MIT for graduate studies. “That was really meaningful.”

Expanding notions of reality and imagination

In her two years so far in the graduate program, De La Torre’s work has expanded the understanding of neural networks and their applications to the study of the human brain. Working with Guangyu Robert Yang, an associate investigator at the McGovern Institute and an assistant professor in the departments of Brain and Cognitive Sciences and Electrical Engineering and Computer Science, she’s engaged in what she describes as more philosophical questions about how one develops a sense of self as an independent being. She’s interested in how that self-consciousness develops and why it might be useful.

De La Torre’s primary advisor, though, is Professor Josh McDermott, who leads the Laboratory for Computational Audition. With McDermott, De La Torre is attempting to understand how the brain integrates vision and sound. While combining sensory inputs may seem like a basic process, there are many unanswered questions about how our brains combine multiple signals into a coherent impression, or percept, of the world. Many of these questions are raised by audiovisual illusions in which what we hear changes what we see. For example, if one sees a video of two discs passing each other, but the clip contains the sound of a collision, the brain will perceive that the discs are bouncing off each other rather than passing through each other. Given an ambiguous image, that simple auditory cue is all it takes to create a different perception of reality.

“There’s something interesting happening where our brains are receiving two signals telling us different things and, yet, we have to combine them somehow to make sense of the world,” De La Torre says.

De La Torre is using behavioral experiments to probe how the human brain makes sense of multisensory cues to construct a particular perception. To do so, she’s created various scenes of objects interacting in 3D space paired with different sounds, asking research participants to describe characteristics of the scene. For example, in one experiment, she combines visuals of a block moving across a surface at different speeds with various scraping sounds, asking participants to estimate how rough the surface is. Eventually she hopes to take the experiment into virtual reality, where participants will physically push blocks in response to how rough they perceive the surface to be, rather than just reporting on what they experience.

Once she’s collected data, she’ll move into the modeling phase of the research, evaluating whether multisensory neural networks perceive illusions the way humans do. “What we want to do is model exactly what’s happening,” says De La Torre. “How is it that we’re receiving these two signals, integrating them and, at the same time, using all of our prior knowledge and inferences of physics to really make sense of the world?”

Although her two strands of research with Yang and McDermott may seem distinct, she sees clear connections between the two. Both projects are about grasping what artificial neural networks are capable of and what they tell us about the brain. At a more fundamental level, she says that how the brain perceives the world from different sensory cues might be part of what gives people a sense of self. Sensory perception is about constructing a cohesive, unitary sense of the world from multiple sources of sensory data. Similarly, she argues, “the sense of self is really a combination of actions, plans, goals, emotions, all of these different things that are components of their own, but somehow create a unitary being.”

It’s a fitting sentiment for De La Torre, who has been working to make sense of and integrate different aspects of her own life. Working in the Computational Audition lab, for example, she’s started experimenting with combining electronic music with folk music from her native Mexico, connecting her “two worlds,” as she says. Having the space to undertake those kinds of intellectual explorations, and colleagues who encourage it, is one of De La Torre’s favorite parts of MIT.

“Beyond professors, there’s also a lot of students whose way of thinking just amazes me,” she says. “I see a lot of goodness and excitement for science and a little bit of — it’s not nerdiness, but a love for very niche things — and I just kind of love that.”

A “golden era” to study the brain

As an undergraduate, Mitch Murdock was a rare science-humanities double major, specializing in both English and molecular, cellular, and developmental biology at Yale University. Today, as a doctoral student in the MIT Department of Brain and Cognitive Sciences, he sees obvious ways that his English education expanded his horizons as a neuroscientist.

“One of my favorite parts of English was trying to explore interiority, and how people have really complicated experiences inside their heads,” Murdock explains. “I was excited about trying to bridge that gap between internal experiences of the world and that actual biological substrate of the brain.”

Though he can see those connections now, it wasn’t until after Yale that Murdock became interested in brain sciences. As an undergraduate, he was in a traditional molecular biology lab. He even planned to stay there after graduation as a research technician; fortunately, though, he says his advisor Ron Breaker encouraged him to explore the field. That’s how Murdock ended up in a new lab run by Conor Liston, an associate professor at Weill Cornell Medicine, who studies how factors such as stress and sleep regulate the remodeling of brain circuits.

It was in Liston’s lab that Murdock was first exposed to neuroscience and began to see the brain as the biological basis of the philosophical questions about experience and emotion that interested him. “It was really in his lab where I thought, ‘Wow, this is so cool. I have to do a PhD studying neuroscience,’” Murdock laughs.

During his time as a research technician, Murdock examined the impact of chronic stress on brain activity in mice. Specifically, he was interested in ketamine, a fast-acting antidepressant prone to being abused, with the hope that better understanding how ketamine works would help scientists find safer alternatives. He focused on dendritic spines, small protrusions on a neuron’s dendrites that help transmit electrical signals between neurons and provide a physical substrate for memory storage. His findings, Murdock explains, suggested that ketamine works by recovering dendritic spines that can be lost after periods of chronic stress.

After three years at Weill Cornell, Murdock decided to pursue doctoral studies in neuroscience, hoping to continue some of the work he started with Liston. He chose MIT because of the research being done on dendritic spines in the lab of Elly Nedivi, the William R. (1964) and Linda R. Young Professor of Neuroscience in The Picower Institute for Learning and Memory.

Once again, though, the opportunity to explore a wider set of interests fortuitously led Murdock to a new passion. During lab rotations at the beginning of his PhD program, Murdock spent time shadowing a physician at Massachusetts General Hospital who was working with Alzheimer’s disease patients.

“Everyone knows that Alzheimer’s doesn’t have a cure. But I realized that, really, if you have Alzheimer’s disease, there’s very little that can be done,” he says. “That was a big wake-up call for me.”

After that experience, Murdock strategically planned his remaining lab rotations, eventually settling into the lab of Li-Huei Tsai, the Picower Professor of Neuroscience and the director of the Picower Institute. For the past five years, Murdock has worked with Tsai on various strands of Alzheimer’s research.

In one project, for example, members of the Tsai lab have shown how certain kinds of non-invasive light and sound stimulation induce brain activity that can improve memory loss in mouse models of Alzheimer’s. Scientists think that, during sleep, small movements in blood vessels drive cerebrospinal fluid into the brain, which, in turn, flushes out toxic metabolic waste. Murdock’s research suggests that certain kinds of stimulation might drive a similar process, flushing out waste that can exacerbate memory loss.

Much of his work is focused on the activity of single cells in the brain. Are certain neurons or types of neurons genetically predisposed to degenerate, or do they break down randomly? Why do certain subtypes of cells appear to be dysfunctional earlier on in the course of Alzheimer’s disease? How do changes in blood flow in vascular cells affect degeneration? All of these questions, Murdock believes, will help scientists better understand the causes of Alzheimer’s, which will translate eventually into developing cures and therapies.

To answer these questions, Murdock relies on new single-cell sequencing techniques that he says have changed the way we think about the brain. “This has been a big advance for the field, because we know there are a lot of different cell types in the brain, and we think that they might contribute differentially to Alzheimer’s disease risk,” says Murdock. “We can’t think of the brain as only about neurons.”

Murdock says that that kind of “big-picture” approach — thinking about the brain as a compilation of many different cell types that are all interacting — is the central tenet of his research. To look at the brain in the kind of detail that approach requires, Murdock works with Ed Boyden, the Y. Eva Tan Professor in Neurotechnology, a professor of biological engineering and brain and cognitive sciences at MIT, a Howard Hughes Medical Institute investigator, and a member of MIT’s McGovern Institute for Brain Research and Koch Institute for Integrative Cancer Research. Working with Boyden has allowed Murdock to use new technologies such as expansion microscopy and genetically encoded sensors to aid his research.

That kind of new technology, he adds, has helped blow the field wide open. “This is such a cool time to be a neuroscientist because the tools available now make this a golden era to study the brain.” That rapid intellectual expansion applies to the study of Alzheimer’s as well, including newly understood connections between the immune system and Alzheimer’s — an area in which Murdock says he hopes to continue after graduation.

Right now, though, Murdock is focused on a review paper synthesizing some of the latest research. Given the mountains of new Alzheimer’s work coming out each year, he admits that synthesizing all the data is a bit “crazy,” but he couldn’t be happier to be in the middle of it. “There’s just so much that we are learning about the brain from these new techniques, and it’s just so exciting.”

Modeling the social mind

Typically, it would take two graduate students to do the research that Setayesh Radkani is doing.

Driven by an insatiable curiosity about the human mind, she is working on two PhD thesis projects in two different cognitive neuroscience labs at MIT. For one, she is studying punishment as a social tool to influence others. For the other, she is uncovering the neural processes underlying social learning — that is, learning from others. By piecing together these two research programs, Radkani is hoping to gain a better understanding of the mechanisms underpinning social influence in the mind and brain.

Radkani lived in Iran for most of her life, growing up alongside her younger brother in Tehran. The two spent a lot of time together and have long been each other’s best friends. Her father is a civil engineer, and her mother is a midwife. Her parents always encouraged her to explore new things and follow her own path, even if it wasn’t quite what they imagined for her. And her uncle helped cultivate her sense of curiosity, teaching her to “always ask why” as a way to understand how the world works.

Growing up, Radkani most loved learning about human psychology and using math to model the world around her. But she thought it was impossible to combine her two interests. Prioritizing math, she pursued a bachelor’s degree in electrical engineering at the Sharif University of Technology in Iran.

Then, late in her undergraduate studies, Radkani took a psychology course and discovered the field of cognitive neuroscience, in which scientists mathematically model the human mind and brain. She also spent a summer working in a computational neuroscience lab at the Swiss Federal Institute of Technology in Lausanne. Seeing a way to combine her interests, she decided to pivot and pursue the subject in graduate school.

An experience leading a project in her engineering ethics course during her final year of undergrad further helped her discover some of the questions that would eventually form the basis of her PhD. The project investigated why some students cheat and how to change this.

“Through this project I learned how complicated it is to understand the reasons that people engage in immoral behavior, and even more complicated than that is how to devise policies and react in these situations in order to change people’s attitudes,” Radkani says. “It was this experience that made me realize that I’m interested in studying the human social and moral mind.”

She began looking into social cognitive neuroscience research and stumbled upon a relevant TED talk by Rebecca Saxe, the John W. Jarve Professor in Brain and Cognitive Sciences at MIT, who would eventually become one of Radkani’s research advisors. Radkani knew immediately that she wanted to work with Saxe. But she needed to first get into the BCS PhD program at MIT, a challenging obstacle given her minimal background in the field.

After two application cycles and a year’s worth of graduate courses in cognitive neuroscience, Radkani was accepted into the program. But to come to MIT, she had to leave her family behind. Coming from Iran, Radkani has a single-entry visa, making it difficult for her to travel outside the U.S. She hasn’t been able to visit her family since starting her PhD and won’t be able to until at least after she graduates. Her visa also limits her research contributions, restricting her from attending conferences outside the U.S. “That is definitely a huge burden on my education and on my mental health,” she says.

Still, Radkani is grateful to be at MIT, indulging her curiosity in the human social mind. And she’s thankful for her supportive family, who she calls over FaceTime every day.

Modeling how people think about punishment

In Saxe’s lab, Radkani is researching how people approach and react to punishment, through behavioral studies and neuroimaging. By synthesizing these findings, she’s developing a computational model of the mind that characterizes how people make decisions in situations involving punishment, such as when a parent disciplines a child, when someone punishes their romantic partner, or when the criminal justice system sentences a defendant. With this model, Radkani says she hopes to better understand “when and why punishment works in changing behavior and influencing beliefs about right and wrong, and why sometimes it fails.”

Punishment isn’t a new research topic in cognitive neuroscience, Radkani says, but in previous studies, scientists have often only focused on people’s behavior in punitive situations and haven’t considered the thought processes that underlie those behaviors. Characterizing these thought processes, though, is key to understanding whether punishment in a situation can be effective in changing people’s attitudes.

People bring their prior beliefs into a punitive situation. Apart from moral beliefs about the appropriateness of different behaviors, “you have beliefs about the characteristics of the people involved, and you have theories about their intentions and motivations,” Radkani says. “All those come together to determine what you do or how you are influenced by punishment,” given the circumstances. Punishers decide a suitable punishment based on their interpretation of the situation, in light of their beliefs. Targets of punishment then decide whether they’ll change their attitude as a result of the punishment, depending on their own beliefs. Even outside observers make decisions, choosing whether to keep or change their moral beliefs based on what they see.

To capture these decision-making processes, Radkani is developing a computational model of the mind for punitive situations. The model mathematically represents people’s beliefs and how they interact with certain features of the situation to shape their decisions. The model then predicts a punisher’s decisions, and how punishment will influence the target and observers. Through this model, Radkani will provide a foundational understanding of how people think in various punitive situations.

Researching the neural mechanisms of social learning

In parallel, working in the lab of Professor Mehrdad Jazayeri, Radkani is studying social learning, uncovering its underlying neural processes. Through social learning, people learn from other people’s experiences and decisions, and incorporate this socially acquired knowledge into their own decisions or beliefs.

Humans are extraordinary in their social learning abilities. However, our primary form of learning, shared by all other animals, is learning from self-experience. To investigate how learning from others is similar to or different from learning from our own experiences, Radkani has designed a two-player video game that involves both types of learning. During the game, she and her collaborators in Jazayeri’s lab record neural activity in the brain. By analyzing these neural measurements, they plan to uncover the computations carried out by neural circuits during social learning, and compare those to learning from self-experience.

Radkani first became curious about this comparison as a way to understand why people sometimes draw contrasting conclusions from very similar situations. “For example, if I get Covid from going to a restaurant, I’ll blame the restaurant and say it was not clean,” Radkani says. “But if I hear the same thing happen to my friend, I’ll say it’s because they were not careful.” Radkani wanted to know the root causes of this mismatch in how other people’s experiences affect our beliefs and judgements differently from our own similar experiences, particularly because it can lead to “errors that color the way that we judge other people,” she says.

By combining her two research projects, Radkani hopes to better understand how social influence works, particularly in moral situations. From there, she has a slew of research questions that she’s eager to investigate, including: How do people choose who to trust? And which types of people tend to be the most influential? As Radkani’s research grows, so does her curiosity.

Studies of autism tend to exclude women, researchers find

In recent years, researchers who study autism have made an effort to include more women and girls in their studies. However, despite these efforts, most studies of autism consistently enroll small numbers of female subjects or exclude them altogether, according to a new study from MIT.

The researchers found that a screening test commonly used to determine eligibility for studies of autism consistently winnows out a much higher percentage of women than men, creating a “leaky pipeline” that results in severe underrepresentation of women in studies of autism.

This lack of representation makes it more difficult to develop useful interventions or provide accurate diagnoses for girls and women, the researchers say.

“I think the findings favor having a more inclusive approach and widening the lens to end up being less biased in terms of who participates in research,” says John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology and a professor of brain and cognitive sciences at MIT. “The more we understand autism in men and women and nonbinary individuals, the better services and more accurate diagnoses we can provide.”

Gabrieli, who is also a member of MIT’s McGovern Institute for Brain Research, is the senior author of the study, which appears in the journal Autism Research. Anila D’Mello, a former MIT postdoc who is now an assistant professor at the University of Texas Southwestern, is the lead author of the paper. MIT Technical Associate Isabelle Frosch, Research Coordinator Cindy Li, and Research Specialist Annie Cardinaux are also authors of the paper.

Gabrieli lab researchers Annie Cardinaux (left), Anila D’Mello (center), Cindy Li (right), and Isabelle Frosch (not pictured) have uncovered sex biases in ASD research. Photo: Steph Stevens

Screening out females

Autism spectrum disorders are diagnosed based on observation of traits such as repetitive behaviors and difficulty with language and social interaction. Doctors may use a variety of screening tests to help them make a diagnosis, but these screens are not required.

For research studies of autism, it is routine to use a screening test called the Autism Diagnostic Observation Schedule (ADOS) to determine eligibility for the study. This test, which assesses social interaction, communication, play, and repetitive behaviors, provides a quantitative score in each category, and only participants who reach certain scores qualify for inclusion in studies.

While doing a study exploring how quickly the brains of autistic adults adapt to novel events in the environment, scientists in Gabrieli’s lab began to notice that the ADOS appeared to have unequal effects on male and female participation in research. As the study progressed, D’Mello noticed some significant brain differences between the male and female subjects in the study.

To investigate these differences further, D’Mello tried to find more female participants using an MIT database of autistic adults who have expressed interest in participating in research studies. However, when she sorted through the subjects, she found that only about half of the women in the database had met the ADOS cutoff scores typically required for inclusion in autism studies, compared to 80 percent of the males.

“We realized then that there’s a discrepancy and that the ADOS is essentially screening out who eventually participated in research,” D’Mello says. “We were really surprised at how many males we retained and how many females we lost to the ADOS.”

To see if this phenomenon was more widespread, the researchers looked at six publicly available datasets, which include more than 40,000 adults who have been diagnosed as autistic. For some of these datasets, participants were screened with ADOS to determine their eligibility to participate in studies, while for others, a “community diagnosis” — diagnosis from a doctor or other health care provider — was sufficient.

The researchers found that in datasets that required ADOS screening for eligibility, the ratio of male to female participants ended up being around 8:1, while in those that required only a community diagnosis the ratios ranged from about 2:1 to 1:1.
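To see how differential pass rates compound into skewed samples, the following sketch works through the arithmetic. The retention rates are the ones quoted above (about 80 percent of males versus about 50 percent of females meeting the ADOS cutoffs); the size and composition of the starting recruitment pool are hypothetical, chosen only to illustrate the effect.

```python
# Illustrative back-of-envelope calculation. The retention rates come from the
# figures quoted in this article; the starting pool is hypothetical.

def sample_after_screening(n_male, n_female, male_pass_rate, female_pass_rate):
    """Return the number of males and females retained, and the resulting ratio."""
    males_kept = n_male * male_pass_rate
    females_kept = n_female * female_pass_rate
    return males_kept, females_kept, males_kept / females_kept

# Hypothetical recruitment pool that is already skewed 4:1 male:female.
males_kept, females_kept, ratio = sample_after_screening(
    n_male=400, n_female=100, male_pass_rate=0.8, female_pass_rate=0.5
)

print(f"Males retained:   {males_kept:.0f}")    # 320
print(f"Females retained: {females_kept:.0f}")  # 50
print(f"Post-screening male:female ratio is about {ratio:.1f}:1")  # about 6.4:1
```

The exact numbers depend on the starting pool, but the direction of the effect matches the datasets above: a cutoff that removes females at a higher rate than males pushes an already skewed pool toward ratios like the roughly 8:1 seen in ADOS-screened studies.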

Previous studies have found differences in behavioral patterns between autistic men and women, but the ADOS test was originally developed using a largely male sample, which may explain why it often excludes women from research studies, D’Mello says.

“There were few females in the sample that was used to create this assessment, so it might be that it’s not great at picking up the female phenotype, which may differ in certain ways — primarily in domains like social communication,” she says.

Effects of exclusion

Failure to include more women and girls in studies of autism may contribute to shortcomings in the definitions of the disorder, the researchers say.

“The way we think about it is that the field evolved perhaps an implicit bias in how autism is defined, and it was driven disproportionately by analysis of males, and recruitment of males, and so on,” Gabrieli says. “So, the definition doesn’t fit as well, on average, with the different expression of autism that seems to be more common in females.”

This implicit bias has led to documented difficulties in receiving a diagnosis for girls and women, even when their symptoms are the same as those presented by autistic boys and men.

“Many females might be missed altogether in terms of diagnoses, and then our study shows that in the research setting, what is already a small pool gets whittled down at a much larger rate than that of males,” D’Mello says.

Excluding girls and women from this kind of research study can lead to treatments that don’t work as well for them, and it contributes to the perception that autism doesn’t affect women as much as men.

“The goal is that research should directly inform treatment, therapies, and public perception,” D’Mello says. “If the research is saying that there aren’t females with autism, or that the brain basis of autism only looks like the patterns established in males, then you’re not really helping females as much as you could be, and you’re not really getting at the truth of what the disorder might be.”

The researchers now plan to further explore some of the gender- and sex-based differences that appear in autism, and how they arise. They also plan to expand the gender categories that they include. In the current study, the surveys that each participant filled out asked them to choose male or female, but the researchers have updated their questionnaire to include nonbinary and transgender options.

The research was funded by the Hock E. Tan and K. Lisa Yang Center for Autism Research, the Simons Center for the Social Brain at MIT, and the National Institute of Mental Health.

How the brain generates rhythmic behavior

Many of our bodily functions, such as walking, breathing, and chewing, are controlled by brain circuits called central oscillators, which generate rhythmic firing patterns that regulate these behaviors.

MIT neuroscientists have now discovered the neuronal identity and mechanism underlying one of these circuits: an oscillator that controls the rhythmic back-and-forth sweeping of tactile whiskers, or whisking, in mice. This is the first time that any such oscillator has been fully characterized in mammals.

The MIT team found that the whisking oscillator consists of a population of inhibitory neurons in the brainstem that fires rhythmic bursts during whisking. As each neuron fires, it also inhibits some of the other neurons in the network, allowing the overall population to generate a synchronous rhythm that retracts the whiskers from their protracted positions.

“We have defined a mammalian oscillator molecularly, electrophysiologically, functionally, and mechanistically,” says Fan Wang, an MIT professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research. “It’s very exciting to see a clearly defined circuit and mechanism of how rhythm is generated in a mammal.”

Wang is the senior author of the study, which appears today in Nature. The lead authors of the paper are MIT research scientists Jun Takatoh and Vincent Prevosto.

Rhythmic behavior

Most of the research that clearly identified central oscillator circuits has been done in invertebrates. For example, Eve Marder’s lab at Brandeis University found cells in the stomatogastric ganglion in lobsters and crabs that generate oscillatory activity to control rhythmic motion of the digestive tract.

Characterizing oscillators in mammals, especially in awake behaving animals, has proven to be highly challenging. The oscillator that controls walking is believed to be distributed throughout the spinal cord, making it difficult to precisely identify the neurons and circuits involved. The oscillator that generates rhythmic breathing is located in a part of the brain stem called the pre-Bötzinger complex, but the exact identity of the oscillator neurons is not fully understood.

“There haven’t been detailed studies in awake behaving animals, where one can record from molecularly identified oscillator cells and manipulate them in a precise way,” Wang says.

Whisking is a prominent rhythmic exploratory behavior in many mammals, which use their tactile whiskers to detect objects and sense textures. In mice, whiskers extend and retract at a frequency of about 12 cycles per second. Several years ago, Wang’s lab set out to identify the cells and the mechanism that control this oscillation.

To find the location of the whisking oscillator, the researchers traced back from the motor neurons that innervate whisker muscles. Using a modified rabies virus that infects axons, the researchers were able to label a group of cells presynaptic to these motor neurons in a part of the brainstem called the vibrissa intermediate reticular nucleus (vIRt). This finding was consistent with previous studies showing that damage to this part of the brain eliminates whisking.

The researchers then found that about half of these vIRt neurons express a protein called parvalbumin, and that this subpopulation of cells drives the rhythmic motion of the whiskers. When these neurons are silenced, whisking activity is abolished.

Next, the researchers recorded electrical activity from these parvalbumin-expressing vIRt neurons in the brainstem of awake mice, a technically challenging task, and found that these neurons indeed have bursts of activity only during the whisker retraction period. Because these neurons provide inhibitory synaptic inputs to whisker motor neurons, it follows that rhythmic whisking is generated by a constant motor neuron protraction signal interrupted by the rhythmic retraction signal from these oscillator cells.

“That was a super satisfying and rewarding moment, to see that these cells are indeed the oscillator cells, because they fire rhythmically, they fire in the retraction phase, and they’re inhibitory neurons,” Wang says.

A maximum projection image showing tracked whiskers on the mouse muzzle. The right (control) side shows the back-and-forth rhythmic sweeping of the whiskers, while on the experimental side, where the whisking oscillator neurons are silenced, the whiskers move very little. Image: Wang Lab

“New principles”

The oscillatory bursting pattern of vIRt cells is initiated at the start of whisking. When the whiskers are not moving, these neurons fire continuously. When the researchers blocked vIRt neurons from inhibiting each other, the rhythm disappeared, and instead the oscillator neurons simply increased their rate of continuous firing.

This type of network, known as a recurrent inhibitory network, differs from the types of oscillators that have been seen in the stomatogastric neurons of lobsters, in which neurons intrinsically generate their own rhythm.

“Now we have found a mammalian network oscillator that is formed by all inhibitory neurons,” Wang says.
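As a rough intuition for how a network built entirely of inhibitory neurons can generate rhythm, the toy rate model below simulates a single tonically driven population that inhibits itself with a short synaptic delay. This is only a sketch, not the authors' circuit model or the collaborators' detailed model described below; the drive, gain, time constant, and delay are hypothetical. With the recurrent inhibition intact, the delayed negative feedback makes the firing rate swing rhythmically; with the inhibition removed, the population simply fires continuously at a higher rate, mirroring the experimental observation above.

```python
import numpy as np

# Toy rate model of a self-inhibiting population (hypothetical parameters).
# The rate r obeys  tau * dr/dt = -r + relu(drive - w * r(t - delay)),
# i.e., delayed negative feedback, which oscillates when the gain w is large.

def simulate(w_inhibition, drive=10.0, tau=0.005, delay=0.010,
             dt=0.0005, t_max=0.5):
    n_steps = int(t_max / dt)
    delay_steps = int(delay / dt)
    r = np.zeros(n_steps)
    for t in range(1, n_steps):
        r_delayed = r[t - delay_steps] if t >= delay_steps else 0.0
        drive_now = max(drive - w_inhibition * r_delayed, 0.0)  # rectified input
        r[t] = r[t - 1] + dt / tau * (-r[t - 1] + drive_now)
    return r

coupled = simulate(w_inhibition=3.0)   # recurrent inhibition intact
blocked = simulate(w_inhibition=0.0)   # recurrent inhibition removed

print("Inhibition intact: rate swings between "
      f"{coupled[200:].min():.1f} and {coupled[200:].max():.1f} (rhythmic)")
print("Inhibition removed: rate settles near "
      f"{blocked[-1]:.1f} (continuous, elevated firing)")
```

The point of the sketch is only that rhythm can emerge from the wiring of non-rhythmic, mutually inhibiting cells, rather than from pacemaker properties of the cells themselves.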

The MIT scientists also collaborated with a team of theorists led by David Golomb at Ben-Gurion University in Israel and David Kleinfeld at the University of California at San Diego. The theorists created a detailed computational model outlining how whisking is controlled, which fits well with all experimental data. A paper describing that model will appear in an upcoming issue of Neuron.

Wang’s lab now plans to investigate other types of oscillatory circuits in mice, including those that control chewing and licking.

“We are very excited to find oscillators of these feeding behaviors and compare and contrast to the whisking oscillator, because they are all in the brain stem, and we want to know whether there’s some common theme or if there are many different ways to generate oscillators,” she says.

The research was funded by the National Institutes of Health.

Microscopy technique reveals hidden nanostructures in cells and tissues

Inside a living cell, proteins and other molecules are often tightly packed together. These dense clusters can be difficult to image because the fluorescent labels used to make them visible can’t wedge themselves in between the molecules.

MIT researchers have now developed a novel way to overcome this limitation and make those “invisible” molecules visible. Their technique allows them to “de-crowd” the molecules by expanding a cell or tissue sample before labeling the molecules, which makes the molecules more accessible to fluorescent tags.

This method, which builds on a widely used technique known as expansion microscopy previously developed at MIT, should allow scientists to visualize molecules and cellular structures that have never been seen before.

“It’s becoming clear that the expansion process will reveal many new biological discoveries. If biologists and clinicians have been studying a protein in the brain or another biological specimen, and they’re labeling it the regular way, they might be missing entire categories of phenomena,” says Edward Boyden, the Y. Eva Tan Professor in Neurotechnology, a professor of biological engineering and brain and cognitive sciences at MIT, a Howard Hughes Medical Institute investigator, and a member of MIT’s McGovern Institute for Brain Research and Koch Institute for Integrative Cancer Research.

Using this technique, Boyden and his colleagues showed that they could image a nanostructure found in the synapses of neurons. They also imaged the structure of Alzheimer’s-linked amyloid beta plaques in greater detail than has been possible before.

“Our technology, which we named expansion revealing, enables visualization of these nanostructures, which previously remained hidden, using hardware easily available in academic labs,” says Deblina Sarkar, an assistant professor in the Media Lab and one of the lead authors of the study.

The senior authors of the study are Boyden; Li-Huei Tsai, director of MIT’s Picower Institute for Learning and Memory; and Thomas Blanpied, a professor of physiology at the University of Maryland. Other lead authors include Jinyoung Kang, an MIT postdoc, and Asmamaw Wassie, a recent MIT PhD recipient. The study appears today in Nature Biomedical Engineering.

De-crowding

Imaging a specific protein or other molecule inside a cell requires labeling it with a fluorescent tag carried by an antibody that binds to the target. Antibodies are about 10 nanometers long, while typical cellular proteins are about 2 to 5 nanometers in diameter, so if the target proteins are too densely packed, the antibodies can’t get to them.

This has been an obstacle to traditional imaging and also to the original version of expansion microscopy, which Boyden first developed in 2015. In the original version of expansion microscopy, researchers attached fluorescent labels to molecules of interest before they expanded the tissue. The labeling was done first, in part because the researchers had to use an enzyme to chop up proteins in the sample so the tissue could be expanded. This meant that the proteins couldn’t be labeled after the tissue was expanded.

To overcome that obstacle, the researchers had to find a way to expand the tissue while leaving the proteins intact. They used heat instead of enzymes to soften the tissue, allowing the tissue to expand 20-fold without being destroyed. Then, the separated proteins could be labeled with fluorescent tags after expansion.
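As a back-of-envelope illustration of why expansion makes these targets reachable, the short sketch below scales some hypothetical protein-to-protein gaps by the 20-fold linear expansion quoted above and compares the result to the roughly 10-nanometer length of an antibody. The gap values are illustrative examples, not measurements from the study.

```python
# Illustrative arithmetic only: the expansion factor and antibody size are the
# figures quoted in the text; the protein-to-protein gaps are hypothetical.

ANTIBODY_LENGTH_NM = 10   # approximate length of an antibody
EXPANSION_FACTOR = 20     # linear expansion factor reported for the technique

for gap_nm in (1, 2, 5):  # hypothetical gaps between densely packed proteins
    expanded_gap = gap_nm * EXPANSION_FACTOR
    fits = expanded_gap >= ANTIBODY_LENGTH_NM
    print(f"original gap {gap_nm} nm -> {expanded_gap} nm after expansion; "
          f"antibody fits: {fits}")
```

Even a gap of a nanometer or two, far too small for an antibody before expansion, becomes tens of nanometers wide afterward, which is why labeling after expansion reaches proteins that labeling before expansion would miss.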

With so many more proteins accessible for labeling, the researchers were able to identify tiny cellular structures within synapses, the connections between neurons that are densely packed with proteins. They labeled and imaged seven different synaptic proteins, which allowed them to visualize, in detail, “nanocolumns” consisting of calcium channels aligned with other synaptic proteins. These nanocolumns, which are believed to help make synaptic communication more efficient, were first discovered by Blanpied’s lab in 2016.

“This technology can be used to answer a lot of biological questions about dysfunction in synaptic proteins, which are involved in neurodegenerative diseases,” Kang says. “Until now there has been no tool to visualize synapses very well.”

New patterns

The researchers also used their new technique to image amyloid beta, a peptide that forms plaques in the brains of Alzheimer’s patients. Using brain tissue from mice, they found that amyloid beta forms periodic nanoclusters, which had not been seen before, and that these clusters also include potassium channels. They also found amyloid beta molecules that formed helical structures along axons.

“In this paper, we don’t speculate as to what that biology might mean, but we show that it exists. That is just one example of the new patterns that we can see,” says Margaret Schroeder, an MIT graduate student who is also an author of the paper.

Sarkar says that she is fascinated by the nanoscale biomolecular patterns that this technology unveils. “With a background in nanoelectronics, I have developed electronic chips that require extremely precise alignment, in the nanofab. But when I see that in our brain Mother Nature has arranged biomolecules with such nanoscale precision, that really blows my mind,” she says.

Boyden and his group members are now working with other labs to study cellular structures such as protein aggregates linked to Parkinson’s and other diseases. In other projects, they are studying pathogens that infect cells and molecules that are involved in aging in the brain. Preliminary results from these studies have also revealed novel structures, Boyden says.

“Time and time again, you see things that are truly shocking,” he says. “It shows us how much we are missing with classical unexpanded staining.”

The researchers are also working on modifying the technique so they can image up to 20 proteins at a time. They are also working on adapting their process so that it can be used on human tissue samples.

Sarkar and her team, on the other hand, are developing tiny wirelessly powered nanoelectronic devices which could be distributed in the brain. They plan to integrate these devices with expansion revealing. “This can combine the intelligence of nanoelectronics with the nanoscopy prowess of expansion technology, for an integrated functional and structural understanding of the brain,” Sarkar says.

The research was funded by the National Institutes of Health, the National Science Foundation, the Ludwig Family Foundation, the JPB Foundation, the Open Philanthropy Project, John Doerr, Lisa Yang and the Tan-Yang Center for Autism Research at MIT, the U.S. Army Research Office, Charles Hieken, Tom Stocky, Kathleen Octavio, Lore McGovern, Good Ventures, and HHMI.