Self-assembling proteins can store cellular “memories”

As cells perform their everyday functions, they turn on a variety of genes and cellular pathways. MIT engineers have now coaxed cells to inscribe the history of these events in a long protein chain that can be imaged using a light microscope.

Cells programmed to produce these chains continuously add building blocks that encode particular cellular events. Later, the ordered protein chains can be labeled with fluorescent molecules and read under a microscope, allowing researchers to reconstruct the timing of the events.

This technique could help shed light on the steps that underlie processes such as memory formation, response to drug treatment, and gene expression.

“There are a lot of changes that happen at organ or body scale, over hours to weeks, which cannot be tracked over time,” says Edward Boyden, the Y. Eva Tan Professor in Neurotechnology, a professor of biological engineering and brain and cognitive sciences at MIT, a Howard Hughes Medical Institute investigator, and a member of MIT’s McGovern Institute for Brain Research and Koch Institute for Integrative Cancer Research.

If the technique could be extended to work over longer time periods, it could also be used to study processes such as aging and disease progression, the researchers say.

Boyden is the senior author of the study, which appears today in Nature Biotechnology. Changyang Linghu, a former J. Douglas Tan Postdoctoral Fellow at the McGovern Institute, who is now an assistant professor at the University of Michigan, is the lead author of the paper.

Cellular history

Biological systems such as organs contain many different kinds of cells, all of which have distinctive functions. One way to study these functions is to image proteins, RNA, or other molecules inside the cells, which provide hints about what the cells are doing. However, most methods for doing this offer only a glimpse of a single moment in time, or don’t work well with very large populations of cells.

“Biological systems are often composed of a large number of different types of cells. For example, the human brain has 86 billion cells,” Linghu says. “To understand those kinds of biological systems, we need to observe physiological events over time in these large cell populations.”

To achieve that, the research team came up with the idea of recording cellular events as a series of protein subunits that are continuously added to a chain. To create their chains, the researchers used engineered protein subunits, not normally found in living cells, that can self-assemble into long filaments.

The researchers designed a genetically encoded system in which one of these subunits is continuously produced inside cells, while the other is generated only when a specific event occurs. Each subunit also contains a very short peptide called an epitope tag — in this case, the researchers chose tags called HA and V5. Each of these tags can bind to a different fluorescent antibody, making it easy to visualize the tags later on and determine the sequence of the protein subunits.

For this study, the researchers made production of the V5-containing subunit contingent on the activation of a gene called c-fos, which is involved in encoding new memories. HA-tagged subunits make up most of the chain, but whenever the V5 tag shows up in the chain, that means that c-fos was activated during that time.
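To make the readout concrete, here is a minimal sketch of how an ordered tag readout could be decoded into an event history. The tag list, the assumption of a constant subunit production rate, and the `hours_per_subunit` parameter are all illustrative, not values from the paper.

```python
# Minimal sketch: reconstructing event timing from an ordered tag readout.
# Assumptions (not from the paper): the chain has already been read out as a
# list of epitope tags in the order they were added, and HA subunits are
# produced at a roughly constant rate, so chain position maps to elapsed time.

def reconstruct_events(tags, hours_per_subunit=1.0, event_tag="V5"):
    """Map each event-linked subunit to an approximate time of occurrence."""
    return [i * hours_per_subunit for i, tag in enumerate(tags) if tag == event_tag]

chain = ["HA", "HA", "HA", "V5", "HA", "HA", "V5", "HA"]
print(reconstruct_events(chain))  # -> [3.0, 6.0]: c-fos active around hours 3 and 6
```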

“We’re hoping to use this kind of protein self-assembly to record activity in every single cell,” Linghu says. “It’s not only a snapshot in time, but also records past history, just like how tree rings can permanently store information over time as the wood grows.”

Recording events

In this study, the researchers first used their system to record activation of c-fos in neurons growing in a lab dish. Chemically stimulating the neurons switched on c-fos, which caused the V5-tagged subunit to be added to the protein chain.

To explore whether this approach could work in the brains of animals, the researchers programmed brain cells of mice to generate protein chains that would reveal when the animals were exposed to a particular drug. Later, the researchers were able to detect that exposure by preserving the tissue and analyzing it with a light microscope.

The researchers designed their system to be modular, so that different epitope tags can be swapped in, or different types of cellular events can be detected, including, in principle, cell division or activation of enzymes called protein kinases, which help control many cellular pathways.

The researchers also hope to extend the recording period that they can achieve. In this study, they recorded events for several days before imaging the tissue. There is a tradeoff between the amount of time that can be recorded and the time resolution, or frequency of event recording, because the length of the protein chain is limited by the size of the cell.

“The total amount of information it could store is fixed, but we could in principle slow down or increase the speed of the growth of the chain,” Linghu says. “If we want to record for a longer time, we could slow down the synthesis so that it will reach the size of the cell within, let’s say two weeks. In that way we could record longer, but with less time resolution.”
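The tradeoff Linghu describes comes down to simple arithmetic: a fixed chain capacity divided across a longer recording window means each subunit stands for a longer stretch of time. In this sketch the chain capacity is a made-up number used only to show the effect.

```python
# Illustrative arithmetic for the capacity/resolution tradeoff described
# above. The subunit capacity is a hypothetical number, not a measured value.
chain_capacity = 500  # max subunits before the chain fills the cell

for recording_days in (3, 14):
    hours = recording_days * 24
    resolution_h = hours / chain_capacity  # time represented by each subunit
    print(f"{recording_days} days -> ~{resolution_h:.2f} h per subunit")
# Longer recordings with the same capacity give coarser time resolution.
```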

The researchers are also working on engineering the system so that it can record multiple types of events in the same chain, by increasing the number of different subunits that can be incorporated.

The research was funded by the Hock E. Tan and K. Lisa Yang Center for Autism Research, John Doerr, the National Institutes of Health, the National Science Foundation, the U.S. Army Research Office, and the Howard Hughes Medical Institute.

New sensor uses MRI to detect light deep in the brain

Using a specialized MRI sensor, MIT researchers have shown that they can detect light deep within tissues such as the brain.

Imaging light in deep tissues is extremely difficult because as light travels into tissue, much of it is either absorbed or scattered. The MIT team overcame that obstacle by designing a sensor that converts light into a magnetic signal that can be detected by MRI (magnetic resonance imaging).

This type of sensor could be used to map light emitted by optical fibers implanted in the brain, such as the fibers used to stimulate neurons during optogenetic experiments. With further development, it could also prove useful for monitoring patients who receive light-based therapies for cancer, the researchers say.

“We can image the distribution of light in tissue, and that’s important because people who use light to stimulate tissue or to measure from tissue often don’t quite know where the light is going, where they’re stimulating, or where the light is coming from. Our tool can be used to address those unknowns,” says Alan Jasanoff, an MIT professor of biological engineering, brain and cognitive sciences, and nuclear science and engineering.

Jasanoff, who is also an associate investigator at MIT’s McGovern Institute for Brain Research, is the senior author of the study, which appears today in Nature Biomedical Engineering. Jacob Simon PhD ’21 and MIT postdoc Miriam Schwalm are the paper’s lead authors, and Johannes Morstein and Dirk Trauner of New York University are also authors of the paper.

A light-sensitive probe

Scientists have been using light to study living cells for hundreds of years, dating back to the late 1500s, when the light microscope was invented. This kind of microscopy allows researchers to peer inside cells and thin slices of tissue, but not deep inside an organism.

“One of the persistent problems in using light, especially in the life sciences, is that it doesn’t do a very good job penetrating many materials,” Jasanoff says. “Biological materials absorb light and scatter light, and the combination of those things prevents us from using most types of optical imaging for anything that involves focusing in deep tissue.”

To overcome that limitation, Jasanoff and his students decided to design a sensor that could transform light into a magnetic signal.

“We wanted to create a magnetic sensor that responds to light locally, and therefore is not subject to absorbance or scattering. Then this light detector can be imaged using MRI,” he says.

Jasanoff’s lab has previously developed MRI probes that can interact with a variety of molecules in the brain, including dopamine and calcium. When these probes bind to their targets, it affects the sensors’ magnetic interactions with the surrounding tissue, dimming or brightening the MRI signal.

To make a light-sensitive MRI probe, the researchers decided to encase magnetic particles in a nanoparticle called a liposome. The liposomes used in this study are made from specialized light-sensitive lipids that Trauner had previously developed. When these lipids are exposed to a certain wavelength of light, the liposomes become more permeable to water, or “leaky.” This allows the magnetic particles inside to interact with water and generate a signal detectable by MRI.

The particles, which the researchers called liposomal nanoparticle reporters (LisNR), can switch from permeable to impermeable depending on the type of light they’re exposed to. In this study, the researchers created particles that become leaky when exposed to ultraviolet light, and then become impermeable again when exposed to blue light. The researchers also showed that the particles could respond to other wavelengths of light.
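The switching behavior can be summarized as a small state machine. The sketch below is a toy model of the behavior described above only; the wavelength cutoffs are illustrative placeholders, not the paper’s measured values.

```python
# Toy state model of the LisNR switching described above: UV light makes the
# liposome permeable ("leaky"), blue light makes it impermeable again.
# The wavelength thresholds are illustrative placeholders.

def lisnr_state(state: str, wavelength_nm: float) -> str:
    """Toy switching rule: UV opens the liposome, blue closes it."""
    if wavelength_nm < 400:          # ultraviolet -> permeable ("leaky")
        return "permeable"
    if 450 <= wavelength_nm < 495:   # blue -> impermeable again
        return "impermeable"
    return state                     # other wavelengths: unchanged in this toy

state = "impermeable"
for wl in (365, 470, 520):
    state = lisnr_state(state, wl)
    print(f"{wl} nm -> {state}")
```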

“This paper shows a novel sensor to enable photon detection with MRI through the brain. This illuminating work introduces a new avenue to bridge photon and proton-driven neuroimaging studies,” says Xin Yu, an assistant professor of radiology at Harvard Medical School, who was not involved in the study.

Mapping light

The researchers tested the sensors in the brains of rats — specifically, in a part of the brain called the striatum, which is involved in planning movement and responding to reward. After injecting the particles throughout the striatum, the researchers were able to map the distribution of light from an optical fiber implanted nearby.

The fiber they used is similar to those used for optogenetic stimulation, so this kind of sensing could be useful to researchers who perform optogenetic experiments in the brain, Jasanoff says.

“We don’t expect that everybody doing optogenetics will use this for every experiment — it’s more something that you would do once in a while, to see whether a paradigm that you’re using is really producing the profile of light that you think it should be,” Jasanoff says.

In the future, this type of sensor could also be useful for monitoring patients receiving treatments that involve light, such as photodynamic therapy, which uses light from a laser or LED to kill cancer cells.

The researchers are now working on similar probes that could be used to detect light emitted by luciferases, a family of glowing proteins that are often used in biological experiments. These proteins can be used to reveal whether a particular gene is activated or not, but currently they can only be imaged in superficial tissue or cells grown in a lab dish.

Jasanoff also hopes to use the strategy used for the LisNR sensor to design MRI probes that can detect stimuli other than light, such as neurochemicals or other molecules found in the brain.

“We think that the principle that we use to construct these sensors is quite broad and can be used for other purposes too,” he says.

The research was funded by the National Institutes of Health, the G. Harold and Leila Y. Mathers Foundation, a Friends of the McGovern Fellowship from the McGovern Institute for Brain Research, the MIT Neurobiological Engineering Training Program, and a Marie Curie Individual Fellowship from the European Commission.

Season’s Greetings from the McGovern Institute

This year’s holiday video (shown above) was inspired by Ev Fedorenko’s July 2022 Nature Neuroscience paper, which found similar patterns of brain activation and language selectivity across speakers of 45 different languages.

Universal language network

Ev Fedorenko uses the widely translated book “Alice in Wonderland” to test brain responses to different languages. Photo: Caitlin Cunningham

Over several decades, neuroscientists have created a well-defined map of the brain’s “language network,” or the regions of the brain that are specialized for processing language. Found primarily in the left hemisphere, this network includes regions within Broca’s area, as well as in other parts of the frontal and temporal lobes. Although roughly 7,000 languages are currently spoken and signed across the globe, the vast majority of those mapping studies have been done in English speakers as they listened to or read English texts.

To truly understand the cognitive and neural mechanisms that allow us to learn and process such diverse languages, Fedorenko and her team scanned the brains of speakers of 45 different languages while they listened to Alice in Wonderland in their native language. The results show that the speakers’ language networks appear to be essentially the same as those of native English speakers — which suggests that the location and key properties of the language network appear to be universal.

The many languages of McGovern

English may be the primary language used by McGovern researchers, but more than 35 other languages are spoken by scientists and engineers at the McGovern Institute. Our holiday video features 30 of these researchers saying Happy New Year in their native (or learned) language.

Not every reader’s struggle is the same

Many children struggle to learn to read, and studies have shown that students from a lower socioeconomic status (SES) background are more likely to have difficulty than those from a higher SES background.

MIT neuroscientists have now discovered that the types of difficulties that lower-SES students have with reading, and the underlying brain signatures, are, on average, different from those of higher-SES students who struggle with reading.

In a new study, which included brain scans of more than 150 children as they performed tasks related to reading, researchers found that when students from higher SES backgrounds struggled with reading, it could usually be explained by differences in their ability to piece sounds together into words, a skill known as phonological processing.

However, when students from lower SES backgrounds struggled, it was best explained by differences in their ability to rapidly name words or letters, a task associated with orthographic processing, or visual interpretation of words and letters. This pattern was further confirmed by brain activation during phonological and orthographic processing.

These differences suggest that different types of interventions may be needed for different groups of children, the researchers say. The study also highlights the importance of including a wide range of SES levels in studies of reading or other types of academic learning.

“Within the neuroscience realm, we tend to rely on convenience samples of participants, so a lot of our understanding of the neuroscience components of reading in general, and reading disabilities in particular, tends to be based on higher-SES families,” says Rachel Romeo, a former graduate student in the Harvard-MIT Program in Health Sciences and Technology and the lead author of the study. “If we only look at these nonrepresentative samples, we can come away with a relatively biased view of how the brain works.”

Romeo is now an assistant professor in the Department of Human Development and Quantitative Methodology at the University of Maryland. John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology and a professor of brain and cognitive sciences at MIT, is the senior author of the paper, which appears today in the journal Developmental Cognitive Neuroscience.

Components of reading

For many years, researchers have known that children’s scores on standardized assessments of reading are correlated with socioeconomic factors such as school spending per student or the number of children at the school who qualify for free or reduced-price lunches.

Studies of children who struggle with reading, mostly done in higher-SES environments, have shown that the aspect of reading they struggle with most is phonological awareness: the understanding of how sounds combine to make a word, and how sounds can be split up and swapped in or out to make new words.

“That’s a key component of reading, and difficulty with phonological processing is often one of the hallmarks of dyslexia or other reading disorders,” Romeo says.

In the new study, the MIT team wanted to explore how SES might affect phonological processing as well as another key aspect of reading, orthographic processing. This relates more to the visual components of reading, including the ability to identify letters and read words.

To do the study, the researchers recruited first- and second-grade students from the Boston area, making an effort to include a range of SES levels. SES was assessed by parents’ total years of formal education, a commonly used proxy for family SES.

“We went into this not necessarily with any hypothesis about how SES might relate to the two types of processing, but just trying to understand whether SES might be impacting one or the other more, or if it affects both types the same,” Romeo says.

The researchers first gave each child a series of standardized tests designed to measure either phonological processing or orthographic processing. Then, they performed fMRI scans of each child while they carried out additional phonological or orthographic tasks.

The initial series of tests allowed the researchers to determine each child’s abilities for both types of processing, and the brain scans allowed them to measure brain activity in parts of the brain linked with each type of processing.

The results showed that at the higher end of the SES spectrum, differences in phonological processing ability accounted for most of the differences between good readers and struggling readers. This is consistent with the findings of previous studies of reading difficulty. In those children, the researchers also found greater differences in activity in the parts of the brain responsible for phonological processing.

However, the outcomes were different when the researchers analyzed the lower end of the SES spectrum. There, the researchers found that variance in orthographic processing ability accounted for most of the differences between good readers and struggling readers. MRI scans of these children revealed greater differences in brain activity in parts of the brain that are involved in orthographic processing.
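One way to picture this analysis is as a pair of regressions run separately within each SES subgroup, asking how much reading variance each skill explains. The sketch below uses simulated data with weights chosen to mimic the reported pattern; the variable names and numbers are illustrative, not the study’s.

```python
# Sketch of the group-wise analysis: regress reading outcomes on phonological
# and orthographic scores separately within SES subgroups and compare the
# variance each predictor explains. All data here are simulated.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)

def r_squared(predictor, outcome):
    X = predictor.reshape(-1, 1)
    return LinearRegression().fit(X, outcome).score(X, outcome)

for group, (w_phon, w_orth) in {"higher-SES": (0.8, 0.2),
                                "lower-SES": (0.2, 0.8)}.items():
    phon = rng.normal(size=75)   # phonological processing scores
    orth = rng.normal(size=75)   # orthographic processing scores
    # Simulated weights mimic the reported pattern: phonology dominates in
    # the higher-SES group, orthography in the lower-SES group.
    reading = w_phon * phon + w_orth * orth + rng.normal(0, 0.3, size=75)
    print(group, f"R2(phon)={r_squared(phon, reading):.2f}",
          f"R2(orth)={r_squared(orth, reading):.2f}")
```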

Optimizing interventions

There are many possible reasons why a lower SES background might lead to difficulties in orthographic processing, the researchers say. It might be less exposure to books at home, or limited access to libraries and other resources that promote literacy. For children from this background who struggle with reading, different types of interventions might benefit them more than the ones typically used for children who have difficulty with phonological processing.

In a 2017 study, Gabrieli, Romeo, and others found that a summer reading intervention that focused on helping students develop the sensory and cognitive processing necessary for reading was more beneficial for students from lower-SES backgrounds than for children from higher-SES backgrounds. Those findings also support the idea that tailored interventions may be necessary for individual students, they say.

“There are two major reasons we understand that cause children to struggle as they learn to read in these early grades. One of them is learning differences, most prominently dyslexia, and the other one is socioeconomic disadvantage,” Gabrieli says. “In my mind, schools have to help all these kinds of kids become the best readers they can, so recognizing the source or sources of reading difficulty ought to inform practices and policies that are sensitive to these differences and optimize supportive interventions.”

Gabrieli and Romeo are now working with researchers at the Harvard University Graduate School of Education to evaluate language and reading interventions that could better prepare preschool children from lower SES backgrounds to learn to read. In her new lab at the University of Maryland, Romeo also plans to further delve into how different aspects of low SES contribute to different areas of language and literacy development.

“No matter why a child is struggling with reading, they need the education and the attention to support them. Studies that try to tease out the underlying factors can help us in tailoring educational interventions to what a child needs,” she says.

The research was funded by the Ellison Medical Foundation, the Halis Family Foundation, and the National Institutes of Health.

Studies of autism tend to exclude women, researchers find

In recent years, researchers who study autism have made an effort to include more women and girls in their studies. However, despite these efforts, most studies of autism consistently enroll small numbers of female subjects or exclude them altogether, according to a new study from MIT.

The researchers found that a screening test commonly used to determine eligibility for studies of autism consistently winnows out a much higher percentage of women than men, creating a “leaky pipeline” that results in severe underrepresentation of women in studies of autism.

This lack of representation makes it more difficult to develop useful interventions or provide accurate diagnoses for girls and women, the researchers say.

“I think the findings favor having a more inclusive approach and widening the lens to end up being less biased in terms of who participates in research,” says John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology and a professor of brain and cognitive sciences at MIT. “The more we understand autism in men and women and nonbinary individuals, the better services and more accurate diagnoses we can provide.”

Gabrieli, who is also a member of MIT’s McGovern Institute for Brain Research, is the senior author of the study, which appears in the journal Autism Research. Anila D’Mello, a former MIT postdoc who is now an assistant professor at the University of Texas Southwestern, is the lead author of the paper. MIT Technical Associate Isabelle Frosch, Research Coordinator Cindy Li, and Research Specialist Annie Cardinaux are also authors of the paper.

Gabrieli lab researchers Annie Cardinaux (left), Anila D’Mello (center), Cindy Li (right), and Isabelle Frosch (not pictured) have uncovered sex biases in ASD research. Photo: Steph Stevens

Screening out females

Autism spectrum disorders are diagnosed based on observation of traits such as repetitive behaviors and difficulty with language and social interaction. Doctors may use a variety of screening tests to help them make a diagnosis, but these screens are not required.

For research studies of autism, it is routine to use a screening test called the Autism Diagnostic Observation Schedule (ADOS) to determine eligibility for the study. This test, which assesses social interaction, communication, play, and repetitive behaviors, provides a quantitative score in each category, and only participants who reach certain scores qualify for inclusion in studies.

While doing a study exploring how quickly the brains of autistic adults adapt to novel events in the environment, scientists in Gabrieli’s lab began to notice that the ADOS appeared to have unequal effects on male and female participation in research. As the study progressed, D’Mello noticed some significant brain differences between the male and female subjects in the study.

To investigate these differences further, D’Mello tried to find more female participants using an MIT database of autistic adults who have expressed interest in participating in research studies. However, when she sorted through the subjects, she found that only about half of the women in the database had met the ADOS cutoff scores typically required for inclusion in autism studies, compared to 80 percent of the males.

“We realized then that there’s a discrepancy and that the ADOS is essentially screening out who eventually participated in research,” D’Mello says. “We were really surprised at how many males we retained and how many females we lost to the ADOS.”

To see if this phenomenon was more widespread, the researchers looked at six publicly available datasets, which include more than 40,000 adults who have been diagnosed as autistic. For some of these datasets, participants were screened with ADOS to determine their eligibility to participate in studies, while for others, a “community diagnosis” — diagnosis from a doctor or other health care provider — was sufficient.

The researchers found that in datasets that required ADOS screening for eligibility, the ratio of male to female participants ended up being around 8:1, while in those that required only a community diagnosis the ratios ranged from about 2:1 to 1:1.
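The “leaky pipeline” is easy to see with back-of-the-envelope arithmetic. In the sketch below, the pass rates come from the article’s findings, while the size and sex ratio of the starting pool are illustrative assumptions.

```python
# Sketch of the "leaky pipeline": differential screening pass rates turn an
# already skewed diagnosis pool into a far more skewed study sample. The
# approximate pass rates (~80% of males, ~50% of females) come from the
# article; the starting pool and its 4:1 ratio are illustrative assumptions.
males, females = 400, 100
pass_male, pass_female = 0.80, 0.50

enrolled_m = males * pass_male        # 320 males retained
enrolled_f = females * pass_female    # 50 females retained
print(f"post-screen ratio ~ {enrolled_m / enrolled_f:.1f}:1")  # ~6.4:1
```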

Previous studies have found differences between behavioral patterns in autistic men and women, but the ADOS test was originally developed using a largely male sample, which may explain why it often excludes women from research studies, D’Mello says.

“There were few females in the sample that was used to create this assessment, so it might be that it’s not great at picking up the female phenotype, which may differ in certain ways — primarily in domains like social communication,” she says.

Effects of exclusion

Failure to include more women and girls in studies of autism may contribute to shortcomings in the definitions of the disorder, the researchers say.

“The way we think about it is that the field evolved perhaps an implicit bias in how autism is defined, and it was driven disproportionately by analysis of males, and recruitment of males, and so on,” Gabrieli says. “So, the definition doesn’t fit as well, on average, with the different expression of autism that seems to be more common in females.”

This implicit bias has led to documented difficulties in receiving a diagnosis for girls and women, even when their symptoms are the same as those presented by autistic boys and men.

“Many females might be missed altogether in terms of diagnoses, and then our study shows that in the research setting, what is already a small pool gets whittled down at a much larger rate than that of males,” D’Mello says.

Excluding girls and women from this kind of research study can lead to treatments that don’t work as well for them, and it contributes to the perception that autism doesn’t affect women as much as men.

“The goal is that research should directly inform treatment, therapies, and public perception,” D’Mello says. “If the research is saying that there aren’t females with autism, or that the brain basis of autism only looks like the patterns established in males, then you’re not really helping females as much as you could be, and you’re not really getting at the truth of what the disorder might be.”

The researchers now plan to further explore some of the gender- and sex-based differences that appear in autism, and how they arise. They also plan to expand the gender categories that they include. In the current study, the surveys that each participant filled out asked them to choose male or female, but the researchers have updated their questionnaire to include nonbinary and transgender options.

The research was funded by the Hock E. Tan and K. Lisa Yang Center for Autism Research, the Simons Center for the Social Brain at MIT, and the National Institute of Mental Health.

These neurons have food on the brain

A gooey slice of pizza. A pile of crispy French fries. Ice cream dripping down a cone on a hot summer day. When you look at any of these foods, a specialized part of your visual cortex lights up, according to a new study from MIT neuroscientists.

This newly discovered population of food-responsive neurons is located in the ventral visual stream, alongside populations that respond specifically to faces, bodies, places, and words. The unexpected finding may reflect the special significance of food in human culture, the researchers say.

“Food is central to human social interactions and cultural practices. It’s not just sustenance,” says Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience and a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines. “Food is core to so many elements of our cultural identity, religious practice, and social interactions, and many other things that humans do.”

The findings, based on an analysis of a large public database of human brain responses to a set of 10,000 images, raise many additional questions about how and why this neural population develops. In future studies, the researchers hope to explore how people’s responses to certain foods might differ depending on their likes and dislikes, or their familiarity with certain types of food.

MIT postdoc Meenakshi Khosla is the lead author of the paper, along with MIT research scientist N. Apurva Ratan Murty. The study appears today in the journal Current Biology.

Visual categories

More than 20 years ago, while studying the ventral visual stream, the part of the brain that recognizes objects, Kanwisher discovered cortical regions that respond selectively to faces. Later, she and other scientists discovered other regions that respond selectively to places, bodies, or words. Most of those areas were discovered when researchers specifically set out to look for them. However, that hypothesis-driven approach can limit what you end up finding, Kanwisher says.

“There could be other things that we might not think to look for,” she says. “And even when we find something, how do we know that that’s actually part of the basic dominant structure of that pathway, and not something we found just because we were looking for it?”

To try to uncover the fundamental structure of the ventral visual stream, Kanwisher and Khosla decided to analyze a large, publicly available dataset of full-brain functional magnetic resonance imaging (fMRI) responses from eight human subjects as they viewed thousands of images.

“We wanted to see when we apply a data-driven, hypothesis-free strategy, what kinds of selectivities pop up, and whether those are consistent with what had been discovered before. A second goal was to see if we could discover novel selectivities that either haven’t been hypothesized before, or that have remained hidden due to the lower spatial resolution of fMRI data,” Khosla says.

To do that, the researchers applied a mathematical method that allows them to discover neural populations that can’t be identified from traditional fMRI data. An fMRI image is made up of many voxels — three-dimensional units that represent a cube of brain tissue. Each voxel contains hundreds of thousands of neurons, and if some of those neurons belong to smaller populations that respond to one type of visual input, their responses may be drowned out by other populations within the same voxel.

The new analytical method, which Kanwisher’s lab has previously used on fMRI data from the auditory cortex, can tease out responses of neural populations within each voxel of fMRI data.
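As a rough illustration of this kind of voxel decomposition, the sketch below factors a voxels-by-images response matrix into a small number of shared components, using non-negative matrix factorization as a stand-in for the paper’s own decomposition method; the data and dimensions are placeholders.

```python
# Illustrative voxel decomposition, using NMF as a stand-in for the paper's
# own method. Rows are voxels, columns are viewed images (downsized here;
# the real dataset spans ~10,000 images).
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
voxel_responses = rng.random((1000, 500))   # placeholder fMRI response matrix

# Express each voxel as a non-negative mixture of a few latent components,
# so populations mixed within one voxel can be teased apart.
model = NMF(n_components=5, init="nndsvda", max_iter=300, random_state=0)
voxel_weights = model.fit_transform(voxel_responses)  # voxels x components
component_profiles = model.components_                # components x images
print(voxel_weights.shape, component_profiles.shape)
```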

Using this approach, the researchers found four populations that corresponded to previously identified clusters that respond to faces, places, bodies, and words. “That tells us that this method works, and it tells us that the things that we found before are not just obscure properties of that pathway, but major, dominant properties,” Kanwisher says.

Intriguingly, a fifth population also emerged, and this one appeared to be selective for images of food.

“We were first quite puzzled by this because food is not a visually homogenous category,” Khosla says. “Things like apples and corn and pasta all look so unlike each other, yet we found a single population that responds similarly to all these diverse food items.”

The food-specific population, which the researchers call the ventral food component (VFC), appears to be spread across two clusters of neurons, located on either side of the fusiform face area (FFA), the brain’s face-selective region. The fact that the food-specific populations are spread out between other category-specific populations may help explain why they have not been seen before, the researchers say.

“We think that food selectivity had been harder to characterize before because the populations that are selective for food are intermingled with other nearby populations that have distinct responses to other stimulus attributes. The low spatial resolution of fMRI prevents us from seeing this selectivity because the responses of different neural populations get mixed in a voxel,” Khosla says.

“The technique which the researchers used to identify category-sensitive cells or areas is impressive, and it recovered known category-sensitive systems, making the food category findings most impressive,” says Paul Rozin, a professor of psychology at the University of Pennsylvania, who was not involved in the study. “I can’t imagine a way for the brain to reliably identify the diversity of foods based on sensory features. That makes this all the more fascinating, and likely to clue us in about something really new.”

Food vs non-food

The researchers also used the data to train a computational model of the VFC, based on previous models Murty had developed for the brain’s face and place recognition areas. This allowed the researchers to run additional experiments and predict the responses of the VFC. In one experiment, they fed the model matched images of food and non-food items that looked very similar — for example, a banana and a yellow crescent moon.

“Those matched stimuli have very similar visual properties, but the main attribute in which they differ is edible versus inedible,” Khosla says. “We could feed those arbitrary stimuli through the predictive model and see whether it would still respond more to food than non-food, without having to collect the fMRI data.”
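A minimal sketch of that matched-stimuli test might look like the following, where `EncodingModel` is a hypothetical placeholder for the trained predictive model rather than the paper’s actual implementation, and the image filenames are illustrative.

```python
# Sketch of the matched-stimuli test: visually similar food/non-food pairs
# are fed through a model that predicts VFC responses, and the predictions
# are compared, with no new fMRI data needed at this step.
class EncodingModel:
    """Hypothetical stand-in for the trained VFC encoding model."""
    def predict(self, image_path: str) -> float:
        # Placeholder: a real model would map image features to a predicted
        # VFC activation; the values below are arbitrary stand-ins.
        return 1.0 if image_path in ("banana.jpg", "pizza.jpg") else 0.3

model = EncodingModel()
pairs = [("banana.jpg", "crescent_moon.jpg"),   # food vs. look-alike non-food
         ("pizza.jpg", "dartboard.jpg")]
for food, lookalike in pairs:
    diff = model.predict(food) - model.predict(lookalike)
    print(f"{food} vs {lookalike}: predicted VFC difference {diff:+.1f}")
```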

They could also use the computational model to analyze much larger datasets, consisting of millions of images. Those simulations helped to confirm that the VFC is highly selective for images of food.

From their analysis of the human fMRI data, the researchers found that in some subjects, the VFC responded slightly more to processed foods such as pizza than to unprocessed foods like apples. In the future they hope to explore how factors such as familiarity and like or dislike of a particular food might affect individuals’ responses to that food.

They also hope to study when and how this region becomes specialized during early childhood, and what other parts of the brain it communicates with. Another question is whether this food-selective population will be seen in other animals such as monkeys, which do not attach the cultural significance to food that humans do.

The research was funded by the National Institutes of Health, the National Eye Institute, and the National Science Foundation through the MIT Center for Brains, Minds, and Machines.

Whether speaking Turkish or Norwegian, the brain’s language network looks the same

Over several decades, neuroscientists have created a well-defined map of the brain’s “language network,” or the regions of the brain that are specialized for processing language. Found primarily in the left hemisphere, this network includes regions within Broca’s area, as well as in other parts of the frontal and temporal lobes.

However, the vast majority of those mapping studies have been done in English speakers as they listened to or read English texts. MIT neuroscientists have now performed brain imaging studies of speakers of 45 different languages. The results show that the speakers’ language networks appear to be essentially the same as those of native English speakers.

The findings, while not surprising, establish that the location and key properties of the language network appear to be universal. The work also lays the groundwork for future studies of linguistic elements that would be difficult or impossible to study in English speakers because English doesn’t have those features.

“This study is very foundational, extending some findings from English to a broad range of languages,” says Evelina Fedorenko, the Frederick A. and Carole J. Middleton Career Development Associate Professor of Neuroscience at MIT and a member of MIT’s McGovern Institute for Brain Research. “The hope is that now that we see that the basic properties seem to be general across languages, we can ask about potential differences between languages and language families in how they are implemented in the brain, and we can study phenomena that don’t really exist in English.”

Fedorenko is the senior author of the study, which appears today in Nature Neuroscience. Saima Malik-Moraleda, a PhD student in the Speech and Hearing Bioscience and Technology program at Harvard University, and Dima Ayyash, a former research assistant, are the lead authors of the paper.

Mapping language networks

The precise locations and shapes of language areas differ across individuals, so to find the language network, researchers ask each person to perform a language task while scanning their brains with functional magnetic resonance imaging (fMRI). Listening to or reading sentences in one’s native language should activate the language network. To distinguish this network from other brain regions, researchers also ask participants to perform tasks that should not activate it, such as listening to an unfamiliar language or solving math problems.

Several years ago, Fedorenko began designing these “localizer” tasks for speakers of languages other than English. While most studies of the language network have used English speakers as subjects, English does not include many features commonly seen in other languages. For example, in English, word order tends to be fixed, while in other languages there is more flexibility in how words are ordered. Many of those languages instead use the addition of morphemes, or segments of words, to convey additional meaning and relationships between words.

“There has been growing awareness for many years of the need to look at more languages, if you want to make claims about how language works, as opposed to how English works,” Fedorenko says. “We thought it would be useful to develop tools to allow people to rigorously study language processing in the brain in other parts of the world. There’s now access to brain imaging technologies in many countries, but the basic paradigms that you would need to find the language-responsive areas in a person are just not there.”

For the new study, the researchers performed brain imaging of two speakers of each of 45 different languages, representing 12 different language families. Their goal was to see if key properties of the language network, such as location, left lateralization, and selectivity, were the same in those participants as in people whose native language is English.

The researchers decided to use “Alice in Wonderland” as the text that everyone would listen to, because it is one of the most widely translated works of fiction in the world. They selected 24 short passages and three long passages, each of which was recorded by a native speaker of the language. Each participant also heard nonsensical passages, which should not activate the language network, and was asked to do a variety of other cognitive tasks that should not activate it.
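Conceptually, the localizer boils down to a per-voxel contrast between the language and control conditions. The sketch below shows one common way such a contrast can be computed; the data shapes and threshold are illustrative, not the study’s actual analysis pipeline.

```python
# Minimal localizer contrast: compare each voxel's response to
# native-language passages against nonsense passages, keeping voxels that
# respond significantly more to language. Data here are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_voxels = 5000
language = rng.normal(1.0, 1.0, size=(24, n_voxels))  # native-language passages
nonsense = rng.normal(0.0, 1.0, size=(24, n_voxels))  # nonsense control passages

t, p = stats.ttest_ind(language, nonsense, axis=0)
language_mask = (t > 0) & (p < 0.001)   # voxels responding more to language
print(language_mask.sum(), "putative language-network voxels")
```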

The team found that the language networks of these participants were located in approximately the same brain regions, and showed the same selectivity, as those of native English speakers.

“Language areas are selective,” Malik-Moraleda says. “They shouldn’t be responding during other tasks such as a spatial working memory task, and that was what we found across the speakers of 45 languages that we tested.”

Additionally, language regions that are typically activated together in English speakers, such as the frontal language areas and temporal language areas, were similarly synchronized in speakers of other languages.

The researchers also showed that among all of the subjects, the small amount of variation they saw between individuals who speak different languages was the same as the amount of variation that would typically be seen between native English speakers.

Similarities and differences

While the findings suggest that the overall architecture of the language network is similar across speakers of different languages, that doesn’t mean that there are no differences at all, Fedorenko says. As one example, researchers could now look for differences in speakers of languages that predominantly use morphemes, rather than word order, to help determine the meaning of a sentence.

“There are all sorts of interesting questions you can ask about morphological processing that don’t really make sense to ask in English, because it has much less morphology,” Fedorenko says.

Another possibility is studying whether speakers of languages that use differences in tone to convey different word meanings would have a language network with stronger links to auditory brain regions that encode pitch.

Right now, Fedorenko’s lab is working on a study in which they are comparing the ‘temporal receptive fields’ of speakers of six typologically different languages, including Turkish, Mandarin, and Finnish. The temporal receptive field is a measure of how many words the language processing system can handle at a time, and for English, it has been shown to be six to eight words long.

“The language system seems to be working on chunks of just a few words long, and we’re trying to see if this constraint is universal across these other languages that we’re testing,” Fedorenko says.

The researchers are also working on creating language localizer tasks and finding study participants representing additional languages beyond the 45 from this study.

The research was funded by the National Institutes of Health and research funds from MIT’s Department of Brain and Cognitive Sciences, the McGovern Institute, and the Simons Center for the Social Brain. Malik-Moraleda was funded by a la Caixa Fellowship and a Friends of McGovern fellowship.

A voice for change — in Spanish

Jessica Chomik-Morales had a bicultural childhood. She was born in Boca Raton, Florida, where her parents had come seeking a better education for their daughter than she would have access to in Paraguay. But when she wasn’t in school, Chomik-Morales was back in that small, South American country with her family. One of the consequences of growing up in two cultures was an early interest in human behavior. “I was always in observer mode,” Chomik-Morales says, recalling how she would tune in to the nuances of social interactions in order to adapt and fit in.

Today, that fascination with human behavior is driving Chomik-Morales as she works with MIT professor of cognitive science Laura Schulz and Walter A. Rosenblith Professor of Cognitive Neuroscience and McGovern Institute for Brain Research investigator Nancy Kanwisher as a post-baccalaureate research scholar, using functional brain imaging to investigate how the brain recognizes and understands causal relationships. Since arriving at MIT last fall, she’s worked with study volunteers to collect functional MRI (fMRI) scans and used computational approaches to interpret the images. She’s also refined her own goals for the future.

Jessica Chomik-Morales (right) with postdoctoral associate Héctor De Jesús-Cortés. Photo: Steph Stevens

She plans to pursue a career in clinical neuropsychology, which will merge her curiosity about the biological basis of behavior with a strong desire to work directly with people. “I’d love to see what kind of questions I could answer about the neural mechanisms driving outlier behavior using fMRI coupled with cognitive assessment,” she says. And she’s confident that her experience in MIT’s two-year post-baccalaureate program will help her get there. “It’s given me the tools I need, and the techniques and methods and good scientific practice,” she says. “I’m learning that all here. And I think it’s going to make me a more successful scientist in grad school.”

The road to MIT

Chomik-Morales’s path to MIT was not a straightforward trajectory through the U.S. school system. When her mom, and later her dad, were unable to return to the U.S., she started eighth grade in the capital city of Asunción. It did not go well. She spent nearly every afternoon in the principal’s office, and soon her father was encouraging her to return to the United States. “You are an American,” he told her. “You have a right to the educational system there.”

Back in Florida, Chomik-Morales became a dedicated student, even while she worked assorted jobs and shuffled between the homes of families who were willing to host her. “I had to grow up,” she says. “My parents are sacrificing everything just so I can have a chance to be somebody. People don’t get out of Paraguay often, because there aren’t opportunities and it’s a very poor country. I was given an opportunity, and if I waste that, then that is disrespect not only to my parents, but to my lineage, to my country.”

As she graduated from high school and went on to earn a degree in cognitive neuroscience at Florida Atlantic University, Chomik-Morales found herself experiencing things that were completely foreign to her family. Though she spoke daily with her mom via WhatsApp, it was hard to share what she was learning in school or what she was doing in the lab. And while they celebrated her academic achievements, Chomik-Morales knew they didn’t really understand them. “Neither of my parents went to college,” she says. “My mom told me that she never thought twice about learning about neuroscience. She had this misconception that it was something that she would never be able to digest.”

Chomik-Morales believes that the wonders of neuroscience are for everybody. But she also knows that Spanish speakers like her mom have few opportunities to hear the kinds of accessible, engaging stories that might draw them in. So she’s working to change that. With support from the McGovern Institute and the National Science Foundation-funded Science and Technology Center for Brains, Minds, and Machines, Chomik-Morales is hosting and producing a weekly podcast called “Mi Última Neurona” (“My Last Neuron”), which brings conversations with neuroscientists to Spanish speakers around the world.

Listeners hear how researchers at MIT and other institutions are exploring big concepts like consciousness and neurodegeneration, and learn about the approaches they use to study the brain in humans, animals, and computational models. Chomik-Morales wants listeners to get to know neuroscientists on a personal level too, so she talks with her guests about their career paths, their lives outside the lab, and often, their experiences as immigrants in the United States.

After recording an interview with Chomik-Morales that delved into science, art, and the educational system in his home country of Peru, postdoc Arturo Deza thinks “Mi Última Neurona” has the potential to inspire Spanish speakers in Latin America, as well as immigrants in other countries. “Even if you’re not a scientist, it’s really going to captivate you and you’re going to get something out of it,” he says. To that point, Chomik-Morales’s mother has quickly become an enthusiastic listener, and even begun seeking out resources to learn more about the brain on her own.

Chomik-Morales hopes the stories her guests share on “Mi Última Neurona” will inspire a future generation of Hispanic neuroscientists. She also wants listeners to know that a career in science doesn’t have to mean leaving their country behind. “Gain whatever you need to gain from outside, and then, if it’s what you desire, you’re able to go back and help your own community,” she says. With “Mi Última Neurona,” she adds, she feels she is giving back to her roots.

Unexpected synergy

This story originally appeared in the Spring 2022 issue of BrainScan.

***

Recent results from cognitive neuroscientist Nancy Kanwisher’s lab have left her pondering the role of music in human evolution. “Music is this big mystery,” she says. “Every human society that’s been studied has music. No other animals have music in the way that humans do. And nobody knows why humans have music at all. This has been a puzzle for centuries.”

MIT neuroscientist and McGovern Investigator Nancy Kanwisher. Photo: Jussi Puikkonen/KNAW

Some biologists and anthropologists have reasoned that since there’s no clear evolutionary advantage for humans’ unique ability to create and respond to music, these abilities must have emerged when humans began to repurpose other brain functions. To appreciate song, they’ve proposed, we draw on parts of the brain dedicated to speech and language. It makes sense, Kanwisher says: music and language are both complex, uniquely human ways of communicating. “It’s very sensible to think that there might be common machinery,” she says. “But there isn’t.”

That conclusion is based on her team’s 2015 discovery of neurons in the human brain that respond only to music. They first became clued in to these music-sensitive cells when they asked volunteers to listen to a diverse panel of sounds inside an MRI scanner. Functional brain imaging picked up signals suggesting that some neurons were specialized to detect only music, but the broad map of brain activity generated by an fMRI couldn’t pinpoint those cells.

Singing in the brain

Kanwisher’s team wanted to know more, but neuroscientists who study the human brain can’t always probe its circuitry with the exactitude of their colleagues who study the brains of mice or rats. They can’t insert electrodes into human brains to monitor the neurons they’re interested in. Neurosurgeons, however, sometimes do — and thus, collaborating with neurosurgeons has created unique opportunities for Kanwisher and other McGovern investigators to learn about the human brain.

Kanwisher’s team collaborated with clinicians at Albany Medical Center to work with patients who are undergoing monitoring prior to surgical treatment for epilepsy. Before operating, a neurosurgeon must identify the spot in their patient’s brain that is triggering seizures. This means inserting electrodes into the brain to monitor specific areas over a few days or weeks. The electrodes they implant pinpoint activity far more precisely, both spatially and temporally, than an MRI. And with patients’ permission, researchers like Kanwisher can take advantage of the information they collect.

“The intracranial recording from human brains that’s possible from collaboration with neurosurgeons is extremely precious to us,” Kanwisher says. “All of the research is kind of opportunistic, on whatever the surgeons are doing for clinical reasons. But sometimes we get really lucky and the electrodes are right in an area where we have long-standing scientific questions that those data can answer.”

Song-selective neural population (yellow) in the “inflated” human brain. Image: Sam Norman-Haignere

The unexpected discovery of song-specific neurons, led by postdoctoral researcher Sam Norman-Haignere, who is now an assistant professor at the University of Rochester Medical Center, emerged from such a collaboration. The team worked with patients at Albany Medical Center whose presurgical monitoring encompassed the auditory-processing part of the brain that they were curious about. Sure enough, certain electrodes picked up activity only when patients were listening to music. The data indicated that in some of those locations, it didn’t matter what kind of music was playing: the cells fired in response to a range of sounds that included flute solos, heavy metal, and rap. But other locations became active exclusively in response to vocal music. “We did not have that hypothesis at all,” Kanwisher says. “It really took our breath away.”

When that discovery is considered along with findings from McGovern colleague Ev Fedorenko, who has shown that the brain’s language-processing regions do not respond to music, Kanwisher says it’s now clear that music and language are segregated in the human brain. The origins of our unique appreciation for music, however, remain a mystery.

Clinical advantage

Clinical collaborations are also important to researchers in Ann Graybiel’s lab, who rely largely on model organisms like mice and rats to investigate the fine details of neural circuits. Working with clinicians helps keep them focused on answering questions that matter to patients.

In studying how the brain makes decisions, the Graybiel lab has zeroed in on connections that are vital for making choices that carry both positive and negative consequences. This is the kind of decision-making that you might call on when considering whether to accept a job that pays more but will be more demanding than your current position, for example. In experiments with rats, mice, and monkeys, they’ve identified different neurons dedicated to triggering opposing actions, “approach” or “avoid,” in these complex decision-making tasks. They’ve also found evidence that both age and stress change how the brain deals with these kinds of decisions.

In work led by former Graybiel lab research scientist Ken-ichi Amemori, the team has collaborated with psychiatrist Diego Pizzagalli at McLean Hospital to learn what happens in the human brain when people make these complex decisions.

By monitoring brain activity as people made decisions inside an MRI scanner, the team identified regions that lit up when people chose to “approach” or “avoid.” They also found parallel activity patterns in monkeys that performed the same task, supporting the relevance of animal studies to understanding this circuitry.

In people diagnosed with major depression, however, the brain responded to approach-avoidance conflict somewhat differently. Certain areas were not activated as strongly as they were in people without depression, regardless of whether subjects ultimately chose to “approach” or “avoid.” The team suspects that some of these differences might reflect a stronger tendency toward avoidance, in which potential rewards are less influential for decision-making, while an individual is experiencing major depression.

The brain activity associated with approach-avoidance conflict in humans appears to align with what Graybiel’s team has seen in mice, although clinical imaging cannot reveal nearly as much detail about the involved circuits. Graybiel says that gives her confidence that what they are learning in the lab, where they can manipulate and study neural circuits with precision, is important. “I think there’s no doubt that this is relevant to humans,” she says. “I want to get as far into the mechanisms as possible, because maybe we’ll hit something that’s therapeutically valuable, or maybe we will really get an intuition about how parts of the brain work. I think that will help people.”

An optimized solution for face recognition

The human brain seems to care a lot about faces. It’s dedicated a specific area to identifying them, and the neurons there are so good at their job that most of us can readily recognize thousands of individuals. With artificial intelligence, computers can now recognize faces with a similar efficiency—and neuroscientists at MIT’s McGovern Institute have found that a computational network trained to identify faces and other objects discovers a surprisingly brain-like strategy to sort them all out.

The finding, reported March 16, 2022, in Science Advances, suggests that the millions of years of evolution that have shaped circuits in the human brain have optimized our system for facial recognition.

“The human brain’s solution is to segregate the processing of faces from the processing of objects,” explains Katharina Dobs, who led the study as a postdoctoral researcher in McGovern investigator Nancy Kanwisher’s lab. The artificial network that she trained did the same. “And that’s the same solution that we hypothesize any system that’s trained to recognize faces and to categorize objects would find,” she adds.

“These two completely different systems have figured out what a—if not the—good solution is. And that feels very profound,” says Kanwisher.

Functionally specific brain regions

More than twenty years ago, Kanwisher’s team discovered a small spot in the brain’s temporal lobe that responds specifically to faces. This region, which they named the fusiform face area, is one of many brain regions Kanwisher and others have found that are dedicated to specific tasks, such as the detection of written words, the perception of vocal songs, and understanding language.

Kanwisher says that as she has explored how the human brain is organized, she has always been curious about the reasons for that organization. Does the brain really need special machinery for facial recognition and other functions? “‘Why questions’ are very difficult in science,” she says. But with a sophisticated type of machine learning called a deep neural network, her team could at least find out how a different system would handle a similar task.

Dobs, who is now a research group leader at Justus Liebig University Giessen in Germany, assembled hundreds of thousands of images with which to train a deep neural network in face and object recognition. The collection included the faces of more than 1,700 different people and hundreds of different kinds of objects, from chairs to cheeseburgers. All of these were presented to the network, with no clues about which was which. “We never told the system that some of those are faces, and some of those are objects. So it’s basically just one big task,” Dobs says. “It needs to recognize a face identity, as well as a bike or a pen.”
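A minimal sketch of this “one big task” setup might look like the following, with a single output layer spanning both face identities and object categories. The architecture, the object-class count, and the training details are illustrative assumptions, not the paper’s exact configuration.

```python
# Minimal sketch of the "one big task" setup: one network, one output layer
# covering both face identities and object categories, with nothing marking
# which labels are faces. Architecture and hyperparameters are illustrative.
import torch
import torch.nn as nn
from torchvision.models import resnet18

n_face_identities, n_object_classes = 1700, 300   # object count is assumed
model = resnet18(num_classes=n_face_identities + n_object_classes)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

images = torch.randn(8, 3, 224, 224)                              # placeholder batch
labels = torch.randint(0, n_face_identities + n_object_classes, (8,))
loss = criterion(model(images), labels)   # faces and objects share one loss
loss.backward()
optimizer.step()
print(float(loss))
```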

Visualization of the preferred stimulus for example face-ranked filters. While filters in early layers (e.g., Conv5) were maximally activated by simple features, filters responded to features that appear somewhat like face parts (e.g., nose and eyes) in mid-level layers (e.g., Conv9) and appear to represent faces in a more holistic manner in late convolutional layers. Image: Kanwisher lab

As the program learned to identify the objects and faces, it organized itself into an information-processing network that included units specifically dedicated to face recognition. Like the brain, this specialization occurred during the later stages of image processing. In both the brain and the artificial network, early steps in facial recognition involve more general vision-processing machinery, and the final stages rely on face-dedicated components.

It’s not known how face-processing machinery arises in a developing brain, but based on their findings, Kanwisher and Dobs say networks don’t necessarily require an innate face-processing mechanism to acquire that specialization. “We didn’t build anything face-ish into our network,” Kanwisher says. “The networks managed to segregate themselves without being given a face-specific nudge.”

Kanwisher says it was thrilling seeing the deep neural network segregate itself into separate parts for face and object recognition. “That’s what we’ve been looking at in the brain for twenty-some years,” she says. “Why do we have a separate system for face recognition in the brain? This tells me it is because that is what an optimized solution looks like.”

Now, she is eager to use deep neural nets to ask similar questions about why other brain functions are organized the way they are. “We have a new way to ask why the brain is organized the way it is,” she says. “How much of the structure we see in human brains will arise spontaneously by training networks to do comparable tasks?”