A “golden era” to study the brain

As an undergraduate, Mitch Murdock was a rare science-humanities double major, specializing in both English and molecular, cellular, and developmental biology at Yale University. Today, as a doctoral student in the MIT Department of Brain and Cognitive Sciences, he sees obvious ways that his English education expanded his horizons as a neuroscientist.

“One of my favorite parts of English was trying to explore interiority, and how people have really complicated experiences inside their heads,” Murdock explains. “I was excited about trying to bridge that gap between internal experiences of the world and that actual biological substrate of the brain.”

Though he can see those connections now, it wasn’t until after Yale that Murdock became interested in brain sciences. As an undergraduate, he was in a traditional molecular biology lab. He even planned to stay there after graduation as a research technician; fortunately, though, he says his advisor Ron Breaker encouraged him to explore the field. That’s how Murdock ended up in a new lab run by Conor Liston, an associate professor at Weill Cornell Medicine, who studies how factors such as stress and sleep regulate the remodeling of brain circuits.

It was in Liston’s lab that Murdock was first exposed to neuroscience and began to see the brain as the biological basis of the philosophical questions about experience and emotion that interested him. “It was really in his lab where I thought, ‘Wow, this is so cool. I have to do a PhD studying neuroscience,’” Murdock laughs.

During his time as a research technician, Murdock examined the impact of chronic stress on brain activity in mice. Specifically, he was interested in ketamine, a fast-acting antidepressant with potential for abuse, in the hope that a better understanding of how ketamine works would help scientists find safer alternatives. He focused on dendritic spines, small protrusions on neurons’ dendrites that receive signals from other neurons and provide a physical substrate for memory storage. His findings, Murdock explains, suggested that ketamine works by restoring dendritic spines that can be lost after periods of chronic stress.

After three years at Weill Cornell, Murdock decided to pursue doctoral studies in neuroscience, hoping to continue some of the work he started with Liston. He chose MIT because of the research being done on dendritic spines in the lab of Elly Nedivi, the William R. (1964) and Linda R. Young Professor of Neuroscience in The Picower Institute for Learning and Memory.

Once again, though, the opportunity to explore a wider set of interests fortuitously led Murdock to a new passion. During lab rotations at the beginning of his PhD program, Murdock spent time shadowing a physician at Massachusetts General Hospital who was working with Alzheimer’s disease patients.

“Everyone knows that Alzheimer’s doesn’t have a cure. But I realized that, really, if you have Alzheimer’s disease, there’s very little that can be done,” he says. “That was a big wake-up call for me.”

After that experience, Murdock strategically planned his remaining lab rotations, eventually settling into the lab of Li-Huei Tsai, the Picower Professor of Neuroscience and the director of the Picower Institute. For the past five years, Murdock has worked with Tsai on various strands of Alzheimer’s research.

In one project, for example, members of the Tsai lab have shown how certain kinds of non-invasive light and sound stimulation induce brain activity that can improve memory loss in mouse models of Alzheimer’s. Scientists think that, during sleep, small movements in blood vessels drive cerebrospinal fluid into the brain, which, in turn, flushes out toxic metabolic waste. Murdock’s research suggests that certain kinds of stimulation might drive a similar process, flushing out waste that can exacerbate memory loss.

Much of his work is focused on the activity of single cells in the brain. Are certain neurons or types of neurons genetically predisposed to degenerate, or do they break down randomly? Why do certain subtypes of cells appear to become dysfunctional earlier in the course of Alzheimer’s disease? How do changes in blood flow in vascular cells affect degeneration? Answering these questions, Murdock believes, will help scientists better understand the causes of Alzheimer’s, understanding that could eventually translate into therapies and, ultimately, cures.

To answer these questions, Murdock relies on new single-cell sequencing techniques that he says have changed the way we think about the brain. “This has been a big advance for the field, because we know there are a lot of different cell types in the brain, and we think that they might contribute differentially to Alzheimer’s disease risk,” says Murdock. “We can’t think of the brain as only about neurons.”

Murdock says that this kind of “big-picture” approach — thinking about the brain as a compilation of many different cell types that are all interacting — is the central tenet of his research. To look at the brain in the kind of detail that approach requires, Murdock works with Ed Boyden, the Y. Eva Tan Professor in Neurotechnology, a professor of biological engineering and brain and cognitive sciences at MIT, a Howard Hughes Medical Institute investigator, and a member of MIT’s McGovern Institute for Brain Research and Koch Institute for Integrative Cancer Research. Working with Boyden has allowed Murdock to use new technologies such as expansion microscopy and genetically encoded sensors to aid his research.

That kind of new technology, he adds, has helped blow the field wide open. “This is such a cool time to be a neuroscientist because the tools available now make this a golden era to study the brain.” That rapid intellectual expansion applies to the study of Alzheimer’s as well, including newly understood connections between the immune system and Alzheimer’s — an area in which Murdock says he hopes to continue after graduation.

Right now, though, Murdock is focused on a review paper synthesizing some of the latest research. Given the mountains of new Alzheimer’s work coming out each year, he admits that synthesizing all the data is a bit “crazy,” but he couldn’t be happier to be in the middle of it. “There’s just so much that we are learning about the brain from these new techniques, and it’s just so exciting.”

Modeling the social mind

Typically, it would take two graduate students to do the research that Setayesh Radkani is doing.

Driven by an insatiable curiosity about the human mind, she is working on two PhD thesis projects in two different cognitive neuroscience labs at MIT. For one, she is studying punishment as a social tool to influence others. For the other, she is uncovering the neural processes underlying social learning — that is, learning from others. By piecing together these two research programs, Radkani is hoping to gain a better understanding of the mechanisms underpinning social influence in the mind and brain.

Radkani lived in Iran for most of her life, growing up alongside her younger brother in Tehran. The two spent a lot of time together and have long been each other’s best friends. Her father is a civil engineer, and her mother is a midwife. Her parents always encouraged her to explore new things and follow her own path, even if it wasn’t quite what they imagined for her. And her uncle helped cultivate her sense of curiosity, teaching her to “always ask why” as a way to understand how the world works.

Growing up, Radkani most loved learning about human psychology and using math to model the world around her. But she thought it was impossible to combine her two interests. Prioritizing math, she pursued a bachelor’s degree in electrical engineering at the Sharif University of Technology in Iran.

Then, late in her undergraduate studies, Radkani took a psychology course and discovered the field of cognitive neuroscience, in which scientists mathematically model the human mind and brain. She also spent a summer working in a computational neuroscience lab at the Swiss Federal Institute of Technology in Lausanne. Seeing a way to combine her interests, she decided to pivot and pursue the subject in graduate school.

A project she led in an engineering ethics course during her final undergraduate year further helped her discover some of the questions that would eventually form the basis of her PhD. The project investigated why some students cheat and how to change that behavior.

“Through this project I learned how complicated it is to understand the reasons that people engage in immoral behavior, and even more complicated than that is how to devise policies and react in these situations in order to change people’s attitudes,” Radkani says. “It was this experience that made me realize that I’m interested in studying the human social and moral mind.”

She began looking into social cognitive neuroscience research and stumbled upon a relevant TED talk by Rebecca Saxe, the John W. Jarve Professor in Brain and Cognitive Sciences at MIT, who would eventually become one of Radkani’s research advisors. Radkani knew immediately that she wanted to work with Saxe. But she needed to first get into the BCS PhD program at MIT, a challenging obstacle given her minimal background in the field.

After two application cycles and a year’s worth of graduate courses in cognitive neuroscience, Radkani was accepted into the program. But to come to MIT, she had to leave her family behind. Coming from Iran, Radkani has a single-entry visa, making it difficult for her to travel outside the U.S. She hasn’t been able to visit her family since starting her PhD and won’t be able to until at least after she graduates. Her visa also limits her research contributions, restricting her from attending conferences outside the U.S. “That is definitely a huge burden on my education and on my mental health,” she says.

Still, Radkani is grateful to be at MIT, indulging her curiosity in the human social mind. And she’s thankful for her supportive family, whom she calls over FaceTime every day.

Modeling how people think about punishment

In Saxe’s lab, Radkani is researching how people approach and react to punishment, through behavioral studies and neuroimaging. By synthesizing these findings, she’s developing a computational model of the mind that characterizes how people make decisions in situations involving punishment, such as when a parent disciplines a child, when someone punishes their romantic partner, or when the criminal justice system sentences a defendant. With this model, Radkani says she hopes to better understand “when and why punishment works in changing behavior and influencing beliefs about right and wrong, and why sometimes it fails.”

Punishment isn’t a new research topic in cognitive neuroscience, Radkani says, but in previous studies, scientists have often only focused on people’s behavior in punitive situations and haven’t considered the thought processes that underlie those behaviors. Characterizing these thought processes, though, is key to understanding whether punishment in a situation can be effective in changing people’s attitudes.

People bring their prior beliefs into a punitive situation. Apart from moral beliefs about the appropriateness of different behaviors, “you have beliefs about the characteristics of the people involved, and you have theories about their intentions and motivations,” Radkani says. “All those come together to determine what you do or how you are influenced by punishment,” given the circumstances. Punishers decide a suitable punishment based on their interpretation of the situation, in light of their beliefs. Targets of punishment then decide whether they’ll change their attitude as a result of the punishment, depending on their own beliefs. Even outside observers make decisions, choosing whether to keep or change their moral beliefs based on what they see.

To capture these decision-making processes, Radkani is developing a computational model of the mind for punitive situations. The model mathematically represents people’s beliefs and how they interact with certain features of the situation to shape their decisions. The model then predicts a punisher’s decisions, and how punishment will influence the target and observers. Through this model, Radkani will provide a foundational understanding of how people think in various punitive situations.
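In spirit, the model treats a punishment as evidence that a target or observer weighs against their prior beliefs. The snippet below is a deliberately minimal sketch of that logic, not Radkani’s actual model; the variables (a prior belief that an act was wrong, a belief about whether the punisher acts from legitimate motives) and the numbers are illustrative assumptions.

```python
# Minimal illustrative sketch (not the lab's actual model) of how prior
# beliefs might shape a punishment target's response. All names and numbers
# are assumptions chosen for illustration.

def update_wrongness_belief(prior_wrong, punishment_severity, belief_punisher_legit):
    """Return the target's updated belief that their action was wrong.

    prior_wrong: prior probability (0-1) that the action was wrong
    punishment_severity: observed punishment strength (0-1)
    belief_punisher_legit: probability (0-1) that the punisher acts from
        legitimate moral motives rather than self-interest
    """
    # A legitimate punisher's severity is assumed to track wrongness;
    # an illegitimate punisher's severity carries no information (0.5 either way).
    p_punish_given_wrong = belief_punisher_legit * punishment_severity + \
        (1 - belief_punisher_legit) * 0.5
    p_punish_given_not_wrong = belief_punisher_legit * (1 - punishment_severity) + \
        (1 - belief_punisher_legit) * 0.5

    # Bayesian update of the belief that the act was wrong
    numerator = p_punish_given_wrong * prior_wrong
    denominator = numerator + p_punish_given_not_wrong * (1 - prior_wrong)
    return numerator / denominator

# The same punishment moves beliefs far more when the punisher is seen as legitimate.
print(update_wrongness_belief(0.3, 0.8, belief_punisher_legit=0.9))  # ~0.59
print(update_wrongness_belief(0.3, 0.8, belief_punisher_legit=0.1))  # ~0.33
```

In this toy version, the identical punishment shifts the target’s moral belief substantially when the punisher is judged legitimate and barely at all otherwise, which is the kind of belief-dependent effect a full model of punitive situations needs to capture.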

Researching the neural mechanisms of social learning

In parallel, working in the lab of Professor Mehrdad Jazayeri, Radkani is studying social learning, uncovering its underlying neural processes. Through social learning, people learn from other people’s experiences and decisions, and incorporate this socially acquired knowledge into their own decisions or beliefs.

Humans are extraordinary in their social learning abilities; however, our primary form of learning, shared with other animals, is learning from self-experience. To investigate how learning from others is similar to or different from learning from our own experiences, Radkani has designed a two-player video game that involves both types of learning. During the game, she and her collaborators in Jazayeri’s lab record neural activity in the brain. By analyzing these neural measurements, they plan to uncover the computations carried out by neural circuits during social learning, and to compare those to learning from self-experience.

Radkani first became curious about this comparison as a way to understand why people sometimes draw contrasting conclusions from very similar situations. “For example, if I get Covid from going to a restaurant, I’ll blame the restaurant and say it was not clean,” Radkani says. “But if I hear the same thing happen to my friend, I’ll say it’s because they were not careful.” Radkani wanted to know the root causes of this mismatch in how other people’s experiences affect our beliefs and judgments differently from our own similar experiences, particularly because it can lead to “errors that color the way that we judge other people,” she says.

By combining her two research projects, Radkani hopes to better understand how social influence works, particularly in moral situations. From there, she has a slew of research questions that she’s eager to investigate, including: How do people choose who to trust? And which types of people tend to be the most influential? As Radkani’s research grows, so does her curiosity.

Studies of autism tend to exclude women, researchers find

In recent years, researchers who study autism have made an effort to include more women and girls in their studies. Despite these efforts, however, most studies of autism consistently enroll small numbers of female subjects or exclude them altogether, according to a new study from MIT.

The researchers found that a screening test commonly used to determine eligibility for studies of autism consistently winnows out a much higher percentage of women than men, creating a “leaky pipeline” that results in severe underrepresentation of women in studies of autism.

This lack of representation makes it more difficult to develop useful interventions or provide accurate diagnoses for girls and women, the researchers say.

“I think the findings favor having a more inclusive approach and widening the lens to end up being less biased in terms of who participates in research,” says John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology and a professor of brain and cognitive sciences at MIT. “The more we understand autism in men and women and nonbinary individuals, the better services and more accurate diagnoses we can provide.”

Gabrieli, who is also a member of MIT’s McGovern Institute for Brain Research, is the senior author of the study, which appears in the journal Autism Research. Anila D’Mello, a former MIT postdoc who is now an assistant professor at the University of Texas Southwestern, is the lead author of the paper. MIT Technical Associate Isabelle Frosch, Research Coordinator Cindy Li, and Research Specialist Annie Cardinaux are also authors of the paper.

Gabrieli lab researchers Annie Cardinaux (left), Anila D’Mello (center), Cindy Li (right), and Isabelle Frosch (not pictured) have uncovered sex biases in ASD research. Photo: Steph Stevens

Screening out females

Autism spectrum disorders are diagnosed based on observation of traits such as repetitive behaviors and difficulty with language and social interaction. Doctors may use a variety of screening tests to help them make a diagnosis, but these screens are not required.

For research studies of autism, it is routine to use a screening test called the Autism Diagnostic Observation Schedule (ADOS) to determine eligibility for the study. This test, which assesses social interaction, communication, play, and repetitive behaviors, provides a quantitative score in each category, and only participants who reach certain scores qualify for inclusion in studies.

While doing a study exploring how quickly the brains of autistic adults adapt to novel events in the environment, scientists in Gabrieli’s lab began to notice that the ADOS appeared to have unequal effects on male and female participation in research. As the study progressed, D’Mello noticed some significant brain differences between the male and female subjects in the study.

To investigate these differences further, D’Mello tried to find more female participants using an MIT database of autistic adults who have expressed interest in participating in research studies. However, when she sorted through the subjects, she found that only about half of the women in the database had met the ADOS cutoff scores typically required for inclusion in autism studies, compared to 80 percent of the males.

“We realized then that there’s a discrepancy and that the ADOS is essentially screening out who eventually participated in research,” D’Mello says. “We were really surprised at how many males we retained and how many females we lost to the ADOS.”

To see if this phenomenon was more widespread, the researchers looked at six publicly available datasets, which include more than 40,000 adults who have been diagnosed as autistic. For some of these datasets, participants were screened with ADOS to determine their eligibility to participate in studies, while for others, a “community diagnosis” — diagnosis from a doctor or other health care provider — was sufficient.

The researchers found that in datasets that required ADOS screening for eligibility, the ratio of male to female participants ended up being around 8:1, while in those that required only a community diagnosis the ratios ranged from about 2:1 to 1:1.
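The arithmetic behind that leaky pipeline is simple. The toy calculation below is not the study’s analysis code; it just shows how a cutoff that retains about 80 percent of autistic males but only about half of autistic females, roughly the rates seen in the MIT database, turns an already skewed recruitment pool into a much more male-biased sample. The pool sizes are made up for illustration.

```python
# Toy illustration (not the study's data or code) of the "leaky pipeline":
# a screening cutoff that passes ~80% of autistic males but only ~50% of
# autistic females inflates the male:female ratio of the final sample.

def post_screening_ratio(n_male, n_female, male_pass_rate, female_pass_rate):
    """Male:female ratio among participants who clear the screening cutoff."""
    males_retained = n_male * male_pass_rate
    females_retained = n_female * female_pass_rate
    return males_retained / females_retained

# Hypothetical recruitment pool that already skews 3:1 male before screening.
print(post_screening_ratio(300, 100, 0.8, 0.5))  # 4.8 -> roughly 5:1 after an ADOS-style cutoff
print(post_screening_ratio(300, 100, 1.0, 1.0))  # 3.0 -> 3:1 with community diagnosis only
```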

Previous studies have found differences between behavioral patterns in autistic men and women, but the ADOS test was originally developed using a largely male sample, which may explain why it often excludes women from research studies, D’Mello says.

“There were few females in the sample that was used to create this assessment, so it might be that it’s not great at picking up the female phenotype, which may differ in certain ways — primarily in domains like social communication,” she says.

Effects of exclusion

Failure to include more women and girls in studies of autism may contribute to shortcomings in the definitions of the disorder, the researchers say.

“The way we think about it is that the field evolved perhaps an implicit bias in how autism is defined, and it was driven disproportionately by analysis of males, and recruitment of males, and so on,” Gabrieli says. “So, the definition doesn’t fit as well, on average, with the different expression of autism that seems to be more common in females.”

This implicit bias has led to documented difficulties in receiving a diagnosis for girls and women, even when their symptoms are the same as those presented by autistic boys and men.

“Many females might be missed altogether in terms of diagnoses, and then our study shows that in the research setting, what is already a small pool gets whittled down at a much larger rate than that of males,” D’Mello says.

Excluding girls and women from this kind of research study can lead to treatments that don’t work as well for them, and it contributes to the perception that autism doesn’t affect women as much as men.

“The goal is that research should directly inform treatment, therapies, and public perception,” D’Mello says. “If the research is saying that there aren’t females with autism, or that the brain basis of autism only looks like the patterns established in males, then you’re not really helping females as much as you could be, and you’re not really getting at the truth of what the disorder might be.”

The researchers now plan to further explore some of the gender- and sex-based differences that appear in autism, and how they arise. They also plan to expand the gender categories that they include. In the current study, the surveys that each participant filled out asked them to choose male or female, but the researchers have updated their questionnaire to include nonbinary and transgender options.

The research was funded by the Hock E. Tan and K. Lisa Yang Center for Autism Research, the Simons Center for the Social Brain at MIT, and the National Institute of Mental Health.

How the brain generates rhythmic behavior

Many of our bodily functions, such as walking, breathing, and chewing, are controlled by brain circuits called central oscillators, which generate rhythmic firing patterns that regulate these behaviors.

MIT neuroscientists have now discovered the neuronal identity and mechanism underlying one of these circuits: an oscillator that controls the rhythmic back-and-forth sweeping of tactile whiskers, or whisking, in mice. This is the first time that any such oscillator has been fully characterized in mammals.

The MIT team found that the whisking oscillator consists of a population of inhibitory neurons in the brainstem that fires rhythmic bursts during whisking. As each neuron fires, it also inhibits some of the other neurons in the network, allowing the overall population to generate a synchronous rhythm that retracts the whiskers from their protracted positions.

“We have defined a mammalian oscillator molecularly, electrophysiologically, functionally, and mechanistically,” says Fan Wang, an MIT professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research. “It’s very exciting to see a clearly defined circuit and mechanism of how rhythm is generated in a mammal.”

Wang is the senior author of the study, which appears today in Nature. The lead authors of the paper are MIT research scientists Jun Takatoh and Vincent Prevosto.

Rhythmic behavior

Most of the research that clearly identified central oscillator circuits has been done in invertebrates. For example, Eve Marder’s lab at Brandeis University found cells in the stomatogastric ganglion in lobsters and crabs that generate oscillatory activity to control rhythmic motion of the digestive tract.

Characterizing oscillators in mammals, especially in awake behaving animals, has proven to be highly challenging. The oscillator that controls walking is believed to be distributed throughout the spinal cord, making it difficult to precisely identify the neurons and circuits involved. The oscillator that generates rhythmic breathing is located in a part of the brain stem called the pre-Bötzinger complex, but the exact identity of the oscillator neurons is not fully understood.

“There haven’t been detailed studies in awake behaving animals, where one can record from molecularly identified oscillator cells and manipulate them in a precise way,” Wang says.

Whisking is a prominent rhythmic exploratory behavior in many mammals, which use their tactile whiskers to detect objects and sense textures. In mice, whiskers extend and retract at a frequency of about 12 cycles per second. Several years ago, Wang’s lab set out to identify the cells and the mechanism that control this oscillation.

To find the location of the whisking oscillator, the researchers traced back from the motor neurons that innervate whisker muscles. Using a modified rabies virus that infects axons, the researchers were able to label a group of cells presynaptic to these motor neurons in a part of the brainstem called the vibrissa intermediate reticular nucleus (vIRt). This finding was consistent with previous studies showing that damage to this part of the brain eliminates whisking.

The researchers then found that about half of these vIRt neurons express a protein called parvalbumin, and that this subpopulation of cells drives the rhythmic motion of the whiskers. When these neurons are silenced, whisking activity is abolished.

Next, the researchers recorded electrical activity from these parvalbumin-expressing vIRt neurons in the brainstem of awake mice, a technically challenging task, and found that these neurons indeed have bursts of activity only during the whisker retraction period. Because these neurons provide inhibitory synaptic inputs to whisker motor neurons, it follows that rhythmic whisking is generated by a constant motor neuron protraction signal interrupted by the rhythmic retraction signal from these oscillator cells.

“That was a super satisfying and rewarding moment, to see that these cells are indeed the oscillator cells, because they fire rhythmically, they fire in the retraction phase, and they’re inhibitory neurons,” Wang says.

A maximum projection image showing tracked whiskers on the mouse muzzle. The right (control) side shows the back-and-forth rhythmic sweeping of the whiskers, while on the experimental side, where the whisking oscillator neurons are silenced, the whiskers move very little. Image: Wang Lab

“New principles”

The oscillatory bursting pattern of vIRt cells is initiated at the start of whisking. When the whiskers are not moving, these neurons fire continuously. When the researchers blocked vIRt neurons from inhibiting each other, the rhythm disappeared, and instead the oscillator neurons simply increased their rate of continuous firing.

This type of network, known as a recurrent inhibitory network, differs from the oscillators that have been seen in the stomatogastric neurons of lobsters, in which neurons intrinsically generate their own rhythm.
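The core logic of such a network can be reproduced in a toy firing-rate simulation. The sketch below is purely illustrative, not the detailed published model (a separate theoretical paper, described below, provides that); the parameters and the rectified-rate equation are assumptions. Constant drive combined with delayed recurrent inhibition produces rhythmic bursts, and removing the inhibition leaves only steady, continuous firing, mirroring the experimental observation.

```python
import numpy as np

# Toy rate model of a recurrent inhibitory oscillator (illustrative only;
# parameters are assumptions, not values from the study).

def simulate(w_inhib, drive=1.0, tau=5.0, delay=10.0, dt=0.1, t_max=500.0):
    """Population firing rate under constant drive and delayed self-inhibition.

    tau     : rate time constant (ms)
    delay   : effective feedback delay of the recurrent inhibition (ms)
    w_inhib : strength of recurrent inhibition within the population
    """
    steps = int(t_max / dt)
    delay_steps = int(delay / dt)
    r = np.zeros(steps)
    for t in range(1, steps):
        r_delayed = r[t - delay_steps] if t >= delay_steps else 0.0
        net_input = max(drive - w_inhib * r_delayed, 0.0)  # rectified net drive
        r[t] = r[t - 1] + dt * (-r[t - 1] + net_input) / tau
    return r

rhythmic = simulate(w_inhib=5.0)  # bursts: the rate rises, is shut off by delayed inhibition, recovers
tonic = simulate(w_inhib=0.0)     # inhibition removed: the rate settles into continuous firing
print(rhythmic[-1000:].std() > 10 * tonic[-1000:].std())  # True: rhythm appears only with inhibition
```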

“Now we have found a mammalian network oscillator that is formed by all inhibitory neurons,” Wang says.

The MIT scientists also collaborated with a team of theorists led by David Golomb at Ben-Gurion University, Israel, and David Kleinfeld at the University of California at San Diego. The theorists created a detailed computational model outlining how whisking is controlled, which fits well with all of the experimental data. A paper describing that model will appear in an upcoming issue of Neuron.

Wang’s lab now plans to investigate other types of oscillatory circuits in mice, including those that control chewing and licking.

“We are very excited to find oscillators of these feeding behaviors and compare and contrast to the whisking oscillator, because they are all in the brain stem, and we want to know whether there’s some common theme or if there are many different ways to generate oscillators,” she says.

The research was funded by the National Institutes of Health.

Microscopy technique reveals hidden nanostructures in cells and tissues

Inside a living cell, proteins and other molecules are often tightly packed together. These dense clusters can be difficult to image because the fluorescent labels used to make them visible can’t wedge themselves in between the molecules.

MIT researchers have now developed a novel way to overcome this limitation and make those “invisible” molecules visible. Their technique allows them to “de-crowd” the molecules by expanding a cell or tissue sample before labeling the molecules, which makes the molecules more accessible to fluorescent tags.

This method, which builds on a widely used technique known as expansion microscopy previously developed at MIT, should allow scientists to visualize molecules and cellular structures that have never been seen before.

“It’s becoming clear that the expansion process will reveal many new biological discoveries. If biologists and clinicians have been studying a protein in the brain or another biological specimen, and they’re labeling it the regular way, they might be missing entire categories of phenomena,” says Edward Boyden, the Y. Eva Tan Professor in Neurotechnology, a professor of biological engineering and brain and cognitive sciences at MIT, a Howard Hughes Medical Institute investigator, and a member of MIT’s McGovern Institute for Brain Research and Koch Institute for Integrative Cancer Research.

Using this technique, Boyden and his colleagues showed that they could image a nanostructure found in the synapses of neurons. They also imaged the structure of Alzheimer’s-linked amyloid beta plaques in greater detail than has been possible before.

“Our technology, which we named expansion revealing, enables visualization of these nanostructures, which previously remained hidden, using hardware easily available in academic labs,” says Deblina Sarkar, an assistant professor in the Media Lab and one of the lead authors of the study.

The senior authors of the study are Boyden; Li-Huei Tsai, director of MIT’s Picower Institute for Learning and Memory; and Thomas Blanpied, a professor of physiology at the University of Maryland. Other lead authors include Jinyoung Kang, an MIT postdoc, and Asmamaw Wassie, a recent MIT PhD recipient. The study appears today in Nature Biomedical Engineering.

De-crowding

Imaging a specific protein or other molecule inside a cell requires labeling it with a fluorescent tag carried by an antibody that binds to the target. Antibodies are about 10 nanometers long, while typical cellular proteins are usually about 2 to 5 nanometers in diameter, so if the target proteins are too densely packed, the antibodies can’t get to them.

This has been an obstacle to traditional imaging and also to the original version of expansion microscopy, which Boyden first developed in 2015. In the original version of expansion microscopy, researchers attached fluorescent labels to molecules of interest before they expanded the tissue. The labeling was done first, in part because the researchers had to use an enzyme to chop up proteins in the sample so the tissue could be expanded. This meant that the proteins couldn’t be labeled after the tissue was expanded.

To overcome that obstacle, the researchers had to find a way to expand the tissue while leaving the proteins intact. They used heat instead of enzymes to soften the tissue, allowing the tissue to expand 20-fold without being destroyed. Then, the separated proteins could be labeled with fluorescent tags after expansion.

With so many more proteins accessible for labeling, the researchers were able to identify tiny cellular structures within synapses, the connections between neurons that are densely packed with proteins. They labeled and imaged seven different synaptic proteins, which allowed them to visualize, in detail, “nanocolumns” consisting of calcium channels aligned with other synaptic proteins. These nanocolumns, which are believed to help make synaptic communication more efficient, were first discovered by Blanpied’s lab in 2016.

“This technology can be used to answer a lot of biological questions about dysfunction in synaptic proteins, which are involved in neurodegenerative diseases,” Kang says. “Until now there has been no tool to visualize synapses very well.”

New patterns

The researchers also used their new technique to image beta amyloid, a peptide that forms plaques in the brains of Alzheimer’s patients. Using brain tissue from mice, the researchers found that amyloid beta forms periodic nanoclusters, which had not been seen before. These clusters of amyloid beta also include potassium channels. The researchers also found amyloid beta molecules that formed helical structures along axons.

“In this paper, we don’t speculate as to what that biology might mean, but we show that it exists. That is just one example of the new patterns that we can see,” says Margaret Schroeder, an MIT graduate student who is also an author of the paper.

Sarkar says that she is fascinated by the nanoscale biomolecular patterns that this technology unveils. “With a background in nanoelectronics, I have developed electronic chips that require extremely precise alignment, in the nanofab. But when I see that in our brain Mother Nature has arranged biomolecules with such nanoscale precision, that really blows my mind,” she says.

Boyden and his group members are now working with other labs to study cellular structures such as protein aggregates linked to Parkinson’s and other diseases. In other projects, they are studying pathogens that infect cells and molecules that are involved in aging in the brain. Preliminary results from these studies have also revealed novel structures, Boyden says.

“Time and time again, you see things that are truly shocking,” he says. “It shows us how much we are missing with classical unexpanded staining.”

The researchers are also working on modifying the technique so they can image up to 20 proteins at a time. They are also working on adapting their process so that it can be used on human tissue samples.

Sarkar and her team, on the other hand, are developing tiny wirelessly powered nanoelectronic devices which could be distributed in the brain. They plan to integrate these devices with expansion revealing. “This can combine the intelligence of nanoelectronics with the nanoscopy prowess of expansion technology, for an integrated functional and structural understanding of the brain,” Sarkar says.

The research was funded by the National Institutes of Health, the National Science Foundation, the Ludwig Family Foundation, the JPB Foundation, the Open Philanthropy Project, John Doerr, Lisa Yang and the Tan-Yang Center for Autism Research at MIT, the U.S. Army Research Office, Charles Hieken, Tom Stocky, Kathleen Octavio, Lore McGovern, Good Ventures, and HHMI.

These neurons have food on the brain

A gooey slice of pizza. A pile of crispy French fries. Ice cream dripping down a cone on a hot summer day. When you look at any of these foods, a specialized part of your visual cortex lights up, according to a new study from MIT neuroscientists.

This newly discovered population of food-responsive neurons is located in the ventral visual stream, alongside populations that respond specifically to faces, bodies, places, and words. The unexpected finding may reflect the special significance of food in human culture, the researchers say.

“Food is central to human social interactions and cultural practices. It’s not just sustenance,” says Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience and a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines. “Food is core to so many elements of our cultural identity, religious practice, and social interactions, and many other things that humans do.”

The findings, based on an analysis of a large public database of human brain responses to a set of 10,000 images, raise many additional questions about how and why this neural population develops. In future studies, the researchers hope to explore how people’s responses to certain foods might differ depending on their likes and dislikes, or their familiarity with certain types of food.

MIT postdoc Meenakshi Khosla is the lead author of the paper, along with MIT research scientist N. Apurva Ratan Murty. The study appears today in the journal Current Biology.

Visual categories

More than 20 years ago, while studying the ventral visual stream, the part of the brain that recognizes objects, Kanwisher discovered cortical regions that respond selectively to faces. Later, she and other scientists discovered other regions that respond selectively to places, bodies, or words. Most of those areas were discovered when researchers specifically set out to look for them. However, that hypothesis-driven approach can limit what you end up finding, Kanwisher says.

“There could be other things that we might not think to look for,” she says. “And even when we find something, how do we know that that’s actually part of the basic dominant structure of that pathway, and not something we found just because we were looking for it?”

To try to uncover the fundamental structure of the ventral visual stream, Kanwisher and Khosla decided to analyze a large, publicly available dataset of full-brain functional magnetic resonance imaging (fMRI) responses from eight human subjects as they viewed thousands of images.

“We wanted to see when we apply a data-driven, hypothesis-free strategy, what kinds of selectivities pop up, and whether those are consistent with what had been discovered before. A second goal was to see if we could discover novel selectivities that either haven’t been hypothesized before, or that have remained hidden due to the lower spatial resolution of fMRI data,” Khosla says.

To do that, the researchers applied a mathematical method that allows them to discover neural populations that can’t be identified from traditional fMRI data. An fMRI image is made up of many voxels — three-dimensional units that represent a cube of brain tissue. Each voxel contains hundreds of thousands of neurons, and if some of those neurons belong to smaller populations that respond to one type of visual input, their responses may be drowned out by other populations within the same voxel.

The new analytical method, which Kanwisher’s lab has previously used on fMRI data from the auditory cortex, can tease out responses of neural populations within each voxel of fMRI data.
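Conceptually, the approach factors the large voxels-by-images response matrix into a small number of shared response components plus a weight for each component in each voxel. The sketch below illustrates that general idea with an off-the-shelf non-negative matrix factorization applied to random placeholder data; the study’s actual decomposition method differs in its details.

```python
import numpy as np
from sklearn.decomposition import NMF

# Illustrative decomposition of a (voxels x images) response matrix into a few
# shared components. Random data stand in for real fMRI responses, and NMF
# stands in for the study's own decomposition method.

n_voxels, n_images, n_components = 1000, 2000, 6
responses = np.abs(np.random.randn(n_voxels, n_images))  # placeholder response matrix

model = NMF(n_components=n_components, init="nndsvda", max_iter=300)
voxel_weights = model.fit_transform(responses)   # (n_voxels, n_components): how much each voxel expresses each component
component_profiles = model.components_           # (n_components, n_images): each component's response to every image

# With real data, inspecting which images most strongly drive a component is
# how a category preference (faces, places, bodies, words, food) would show up.
top_images_per_component = np.argsort(component_profiles, axis=1)[:, ::-1][:, :20]
```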

Using this approach, the researchers found four populations that corresponded to previously identified clusters that respond to faces, places, bodies, and words. “That tells us that this method works, and it tells us that the things that we found before are not just obscure properties of that pathway, but major, dominant properties,” Kanwisher says.

Intriguingly, a fifth population also emerged, and this one appeared to be selective for images of food.

“We were first quite puzzled by this because food is not a visually homogenous category,” Khosla says. “Things like apples and corn and pasta all look so unlike each other, yet we found a single population that responds similarly to all these diverse food items.”

The food-specific population, which the researchers call the ventral food component (VFC), appears to be spread across two clusters of neurons, located on either side of the fusiform face area (FFA). The fact that the food-specific populations are spread out between other category-specific populations may help explain why they have not been seen before, the researchers say.

“We think that food selectivity had been harder to characterize before because the populations that are selective for food are intermingled with other nearby populations that have distinct responses to other stimulus attributes. The low spatial resolution of fMRI prevents us from seeing this selectivity because the responses of different neural populations get mixed within a voxel,” Khosla says.

“The technique which the researchers used to identify category-sensitive cells or areas is impressive, and it recovered known category-sensitive systems, making the food category findings most impressive,” says Paul Rozin, a professor of psychology at the University of Pennsylvania, who was not involved in the study. “I can’t imagine a way for the brain to reliably identify the diversity of foods based on sensory features. That makes this all the more fascinating, and likely to clue us in about something really new.”

Food vs non-food

The researchers also used the data to train a computational model of the VFC, based on previous models Murty had developed for the brain’s face and place recognition areas. This allowed the researchers to run additional experiments and predict the responses of the VFC. In one experiment, they fed the model matched images of food and non-food items that looked very similar — for example, a banana and a yellow crescent moon.

“Those matched stimuli have very similar visual properties, but the main attribute in which they differ is edible versus inedible,” Khosla says. “We could feed those arbitrary stimuli through the predictive model and see whether it would still respond more to food than non-food, without having to collect the fMRI data.”

They could also use the computational model to analyze much larger datasets, consisting of millions of images. Those simulations helped to confirm that the VFC is highly selective for images of food.
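The workflow is essentially that of an encoding model: fit a mapping from image features to the measured VFC response, then query the fitted model with new, matched stimuli without collecting additional fMRI data. The sketch below shows that logic with random placeholders standing in for image features and responses; it is not the lab’s actual model.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Schematic encoding-model workflow with placeholder data (not the lab's model).
rng = np.random.default_rng(0)
n_train_images, n_features = 8000, 512

train_features = rng.standard_normal((n_train_images, n_features))  # e.g., features from a vision model
vfc_responses = rng.standard_normal(n_train_images)                 # measured VFC component responses

# Fit a linear encoding model mapping image features to the VFC response.
encoder = Ridge(alpha=1.0).fit(train_features, vfc_responses)

# Probe the fitted model with matched stimuli never shown in the scanner,
# e.g., features of a banana vs. a visually similar yellow crescent moon.
banana_features = rng.standard_normal((1, n_features))
crescent_features = rng.standard_normal((1, n_features))

pred_food = encoder.predict(banana_features)[0]
pred_nonfood = encoder.predict(crescent_features)[0]
print(f"predicted VFC response: food={pred_food:.2f}, non-food={pred_nonfood:.2f}")
```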

From their analysis of the human fMRI data, the researchers found that in some subjects, the VFC responded slightly more to processed foods such as pizza than to unprocessed foods such as apples. In the future they hope to explore how factors such as familiarity and like or dislike of a particular food might affect individuals’ responses to that food.

They also hope to study when and how this region becomes specialized during early childhood, and what other parts of the brain it communicates with. Another question is whether this food-selective population will be seen in other animals such as monkeys, who do not attach the cultural significance to food that humans do.

The research was funded by the National Institutes of Health, the National Eye Institute, and the National Science Foundation through the MIT Center for Brains, Minds, and Machines.

Whether speaking Turkish or Norwegian, the brain’s language network looks the same

Over several decades, neuroscientists have created a well-defined map of the brain’s “language network,” or the regions of the brain that are specialized for processing language. Found primarily in the left hemisphere, this network includes regions within Broca’s area, as well as in other parts of the frontal and temporal lobes.

However, the vast majority of those mapping studies have been done in English speakers as they listened to or read English texts. MIT neuroscientists have now performed brain imaging studies of speakers of 45 different languages. The results show that the speakers’ language networks appear to be essentially the same as those of native English speakers.

The findings, while not surprising, establish that the location and key properties of the language network appear to be universal. The work also lays the groundwork for future studies of linguistic elements that would be difficult or impossible to study in English speakers because English doesn’t have those features.

“This study is very foundational, extending some findings from English to a broad range of languages,” says Evelina Fedorenko, the Frederick A. and Carole J. Middleton Career Development Associate Professor of Neuroscience at MIT and a member of MIT’s McGovern Institute for Brain Research. “The hope is that now that we see that the basic properties seem to be general across languages, we can ask about potential differences between languages and language families in how they are implemented in the brain, and we can study phenomena that don’t really exist in English.”

Fedorenko is the senior author of the study, which appears today in Nature Neuroscience. Saima Malik-Moraleda, a PhD student in the Speech and Hearing Bioscience and Technology program at Harvard University, and Dima Ayyash, a former research assistant, are the lead authors of the paper.

Mapping language networks

The precise locations and shapes of language areas differ across individuals, so to find the language network, researchers ask each person to perform a language task while scanning their brains with functional magnetic resonance imaging (fMRI). Listening to or reading sentences in one’s native language should activate the language network. To distinguish this network from other brain regions, researchers also ask participants to perform tasks that should not activate it, such as listening to an unfamiliar language or solving math problems.
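In practice, a localizer boils down to a per-voxel contrast: compare each voxel’s response during the language condition with its response during a control condition, and keep the voxels where the language response is reliably higher. The sketch below illustrates that contrast on random placeholder data; it is a conceptual illustration, not the published analysis pipeline.

```python
import numpy as np
from scipy import stats

# Conceptual localizer contrast with placeholder data (not the published pipeline).
rng = np.random.default_rng(1)
n_voxels, n_blocks = 20000, 16

language_blocks = rng.standard_normal((n_voxels, n_blocks)) + 0.2  # e.g., listening to intact sentences
control_blocks = rng.standard_normal((n_voxels, n_blocks))         # e.g., nonsense passages or another control task

# Paired t-test per voxel: language condition vs. control condition.
t_vals, p_vals = stats.ttest_rel(language_blocks, control_blocks, axis=1)
language_mask = (t_vals > 0) & (p_vals < 0.001)  # subject-specific "language network" voxels
print(f"{language_mask.sum()} voxels pass the language > control contrast")
```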

Several years ago, Fedorenko began designing these “localizer” tasks for speakers of languages other than English. While most studies of the language network have used English speakers as subjects, English does not include many features commonly seen in other languages. For example, in English, word order tends to be fixed, while in other languages there is more flexibility in how words are ordered. Many of those languages instead use the addition of morphemes, or segments of words, to convey additional meaning and relationships between words.

“There has been growing awareness for many years of the need to look at more languages, if you want to make claims about how language works, as opposed to how English works,” Fedorenko says. “We thought it would be useful to develop tools to allow people to rigorously study language processing in the brain in other parts of the world. There’s now access to brain imaging technologies in many countries, but the basic paradigms that you would need to find the language-responsive areas in a person are just not there.”

For the new study, the researchers performed brain imaging of two native speakers of each of 45 different languages, representing 12 different language families. Their goal was to see if key properties of the language network, such as location, left lateralization, and selectivity, were the same in those participants as in people whose native language is English.

The researchers decided to use “Alice in Wonderland” as the text that everyone would listen to, because it is one of the most widely translated works of fiction in the world. They selected 24 short passages and three long passages, each of which was recorded by a native speaker of the language. Each participant also heard nonsensical passages, which should not activate the language network, and was asked to do a variety of other cognitive tasks that should not activate it.

The team found that the language networks of participants in this study were found in approximately the same brain regions, and had the same selectivity, as those of native speakers of English.

“Language areas are selective,” Malik-Moraleda says. “They shouldn’t be responding during other tasks such as a spatial working memory task, and that was what we found across the speakers of 45 languages that we tested.”

Additionally, language regions that are typically activated together in English speakers, such as the frontal language areas and temporal language areas, were similarly synchronized in speakers of other languages.

The researchers also showed that among all of the subjects, the small amount of variation they saw between individuals who speak different languages was the same as the amount of variation that would typically be seen between native English speakers.

Similarities and differences

While the findings suggest that the overall architecture of the language network is similar across speakers of different languages, that doesn’t mean that there are no differences at all, Fedorenko says. As one example, researchers could now look for differences in speakers of languages that predominantly use morphemes, rather than word order, to help determine the meaning of a sentence.

“There are all sorts of interesting questions you can ask about morphological processing that don’t really make sense to ask in English, because it has much less morphology,” Fedorenko says.

Another possibility is studying whether speakers of languages that use differences in tone to convey different word meanings would have a language network with stronger links to auditory brain regions that encode pitch.

Right now, Fedorenko’s lab is working on a study in which they are comparing the ‘temporal receptive fields’ of speakers of six typologically different languages, including Turkish, Mandarin, and Finnish. The temporal receptive field is a measure of how many words the language processing system can handle at a time, and for English, it has been shown to be six to eight words long.

“The language system seems to be working on chunks of just a few words long, and we’re trying to see if this constraint is universal across these other languages that we’re testing,” Fedorenko says.

The researchers are also working on creating language localizer tasks and finding study participants representing additional languages beyond the 45 from this study.

The research was funded by the National Institutes of Health and research funds from MIT’s Department of Brain and Cognitive Sciences, the McGovern Institute, and the Simons Center for the Social Brain. Malik-Moraleda was funded by a la Caixa Fellowship and a Friends of McGovern fellowship.

Three distinct brain circuits in the thalamus contribute to Parkinson’s symptoms

Parkinson’s disease is best known as a disorder of movement. Patients often experience tremors, loss of balance, and difficulty initiating movement. The disease also has lesser-known nonmotor symptoms, including depression.

In a study of a small region of the thalamus, MIT neuroscientists have now identified three distinct circuits that influence the development of both motor and nonmotor symptoms of Parkinson’s. Furthermore, they found that by manipulating these circuits, they could reverse Parkinson’s symptoms in mice.

The findings suggest that those circuits could be good targets for new drugs that could help combat many of the symptoms of Parkinson’s disease, the researchers say.

“We know that the thalamus is important in Parkinson’s disease, but a key question is how you can put together a circuit that can explain many different things happening in Parkinson’s disease. Understanding different symptoms at a circuit level can help guide us in the development of better therapeutics,” says Guoping Feng, the James W. and Patricia T. Poitras Professor in Brain and Cognitive Sciences at MIT, a member of the Broad Institute of Harvard and MIT, and the associate director of the McGovern Institute for Brain Research at MIT.

Feng is the senior author of the study, which appears today in Nature. Ying Zhang, a J. Douglas Tan Postdoctoral Fellow at the McGovern Institute, and Dheeraj Roy, a NIH K99 Awardee and a McGovern Fellow at the Broad Institute, are the lead authors of the paper.

Tracing circuits

The thalamus consists of several different regions that perform a variety of functions. Many of these, including the parafascicular (PF) thalamus, help to control movement. Degeneration of these structures is often seen in patients with Parkinson’s disease, which is thought to contribute to their motor symptoms.

In this study, the MIT team set out to try to trace how the PF thalamus is connected to other brain regions, in hopes of learning more about its functions. They found that neurons of the PF thalamus project to three different parts of the basal ganglia, a cluster of structures involved in motor control and other functions: the caudate putamen (CPu), the subthalamic nucleus (STN), and the nucleus accumbens (NAc).

“We started with showing these different circuits, and we demonstrated that they’re mostly nonoverlapping, which strongly suggests that they have distinct functions,” Roy says.

Further studies revealed those functions. The circuit that projects to the CPu appears to be involved in general locomotion, and functions to dampen movement. When the researchers inhibited this circuit, mice spent more time moving around the cage they were in.

The circuit that extends into the STN, on the other hand, is important for motor learning — the ability to learn a new motor skill through practice. The researchers found that this circuit is necessary for a task in which the mice learn to balance on a rod that spins with increasing speed.

Lastly, the researchers found that, unlike the others, the circuit that connects the PF thalamus to the NAc is not involved in motor activity. Instead, it appears to be linked to motivation. Inhibiting this circuit generates depression-like behaviors in healthy mice, and they will no longer seek a reward such as sugar water.

Druggable targets

Once the researchers established the functions of these three circuits, they decided to explore how they might be affected in Parkinson’s disease. To do that, they used a mouse model of Parkinson’s, in which dopamine-producing neurons in the midbrain are lost.

They found that in this Parkinson’s model, the connection between the PF thalamus and the CPu was enhanced, and that this led to a decrease in overall movement. Additionally, the connections from the PF thalamus to the STN were weakened, which made it more difficult for the mice to learn the accelerating rod task.

Lastly, the researchers showed that in the Parkinson’s model, connections from the PF thalamus to the NAc were also interrupted, and that this led to depression-like symptoms in the mice, including loss of motivation.

Using chemogenetics or optogenetics, which allow them to control neuronal activity with a drug or with light, the researchers found that they could manipulate each of these three circuits and, in doing so, reverse each set of Parkinson’s symptoms. Then, they decided to look for molecular targets that might be “druggable,” and found that each of the three PF thalamus regions has cells that express different types of cholinergic receptors, which are activated by the neurotransmitter acetylcholine. By blocking or activating those receptors, depending on the circuit, they were also able to reverse the Parkinson’s symptoms.

“We found three distinct cholinergic receptors that can be expressed in these three different PF circuits, and if we use antagonists or agonists to modulate these three different PF populations, we can rescue movement, motor learning, and also depression-like behavior in PD mice,” Zhang says.

Parkinson’s patients are usually treated with L-dopa, a precursor of dopamine. While this drug helps patients regain motor control, it doesn’t help with motor learning or any nonmotor symptoms, and over time, patients become resistant to it.

The researchers hope that the circuits they characterized in this study could be targets for new Parkinson’s therapies. The types of neurons that they identified in the circuits of the mouse brain are also found in the nonhuman primate brain, and the researchers are now using RNA sequencing to find genes that are expressed specifically in those cells.

“RNA-sequencing technology will allow us to do a much more detailed molecular analysis in a cell-type specific way,” Feng says. “There may be better druggable targets in these cells, and once you know the specific cell types you want to modulate, you can identify all kinds of potential targets in them.”

The research was funded, in part, by the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics in Neuroscience at MIT, the Stanley Center for Psychiatric Research at the Broad Institute, the James and Patricia Poitras Center for Psychiatric Disorders Research at MIT, the National Institutes of Health BRAIN Initiative, and the National Institute of Mental Health.

New research center focused on brain-body relationship established at MIT

The inextricable link between our brains and our bodies has been gaining increasing recognition among researchers and clinicians over recent years. Studies have shown that the brain-body pathway is bidirectional — meaning that our mental state can influence our physical health and vice versa. But exactly how the two interact is less clear.

A new research center at MIT, funded by a $38 million gift to the McGovern Institute for Brain Research from philanthropist K. Lisa Yang, aims to unlock this mystery by creating and applying novel tools to explore the multidirectional, multilevel interplay between the brain and other body organ systems. This gift expands Yang’s exceptional philanthropic support of human health and basic science research at MIT over the past five years.

“Lisa Yang’s visionary gift enables MIT scientists and engineers to pioneer revolutionary technologies and undertake rigorous investigations into the brain’s complex relationship with other organ systems,” says MIT President L. Rafael Reif.  “Lisa’s tremendous generosity empowers MIT scientists to make pivotal breakthroughs in brain and biomedical research and, collectively, improve human health on a grand scale.”

The K. Lisa Yang Brain-Body Center will be directed by Polina Anikeeva, professor of materials science and engineering and brain and cognitive sciences at MIT and an associate investigator at the McGovern Institute. The center will harness the power of MIT’s collaborative, interdisciplinary life sciences research and engineering community to focus on complex conditions and diseases affecting both the body and brain, with a goal of unearthing knowledge of biological mechanisms that will lead to promising therapeutic options.

“Under Professor Anikeeva’s brilliant leadership, this wellspring of resources will encourage the very best work of MIT faculty, graduate fellows, and researchers — and ultimately make a real impact on the lives of many,” Reif adds.

Mouse small intestine stained to reveal cell nuclei (blue) and peripheral nerve fibers (red).
Image: Polina Anikeeva, Marie Manthey, Kareena Villalobos

Center goals  

Initial projects in the center will focus on four major lines of research:

  • Gut-Brain: Anikeeva’s group will expand a toolbox of new technologies and apply these tools to examine major neurobiological questions about gut-brain pathways and connections in the context of autism spectrum disorders, Parkinson’s disease, and affective disorders.
  • Aging: CRISPR pioneer Feng Zhang, the James and Patricia Poitras Professor of Neuroscience at MIT and investigator at the McGovern Institute, will lead a group in developing molecular tools for precision epigenomic editing and erasing accumulated “errors” of time, injury, or disease in various types of cells and tissues.
  • Pain: The lab of Fan Wang, investigator at the McGovern Institute and professor of brain and cognitive sciences, will design new tools and imaging methods to study autonomic responses, sympathetic-parasympathetic system balance, and brain-autonomic nervous system interactions, including how pain influences these interactions.
  • Acupuncture: Wang will also collaborate with Hilda (“Scooter”) Holcombe, a veterinarian in MIT’s Division of Comparative Medicine, to advance techniques for documenting changes in brain and peripheral tissues induced by acupuncture in mouse models. If successful, these techniques could lay the groundwork for a deeper understanding of the mechanisms of acupuncture, specifically how the treatment stimulates the nervous system and restores function.

A key component of the K. Lisa Yang Brain-Body Center will be a focus on educating and training the brightest young minds who aspire to make true breakthroughs for individuals living with complex and often devastating diseases. A portion of center funding will endow the new K. Lisa Yang Brain-Body Fellows Program, which will support four annual fellowships for MIT graduate students and postdocs working to advance understanding of conditions that affect both the body and brain.

Mens sana in corpore sano

“A phrase I remember reading in secondary school has always stuck with me: ‘mens sana in corpore sano,’ or ‘a healthy mind in a healthy body,’” says Lisa Yang, a former investment banker committed to advocacy for individuals with visible and invisible disabilities. “When we look at how stress, nutrition, pain, immunity, and other complex factors impact our health, we truly see how inextricably linked our brains and bodies are. I am eager to help MIT scientists and engineers decode these links and make real headway in creating therapeutic strategies that result in longer, healthier lives.”

“This center marks a once-in-a-lifetime opportunity for labs like mine to conduct bold and risky studies into the complexities of brain-body connections,” says Anikeeva, who works at the intersection of materials science, electronics, and neurobiology. “The K. Lisa Yang Brain-Body Center will offer a pathbreaking, holistic approach that bridges multiple fields of study. I have no doubt that the center will result in revolutionary strides in our understanding of the inextricable bonds between the brain and the body’s peripheral organ systems, and a bold new way of thinking in how we approach human health overall.”

Lindsay Case and Guangyu Robert Yang named 2022 Searle Scholars

MIT cell biologist Lindsay Case and computational neuroscientist Guangyu Robert Yang have been named 2022 Searle Scholars, an award given annually to 15 outstanding U.S. assistant professors who have high potential for ongoing innovative research contributions in medicine, chemistry, or the biological sciences.

Case is an assistant professor of biology, while Yang is an assistant professor of brain and cognitive sciences and electrical engineering and computer science, and an associate investigator at the McGovern Institute for Brain Research. They will each receive $300,000 in flexible funding to support their high-risk, high-reward work over the next three years.

Lindsay Case

Case arrived at MIT in 2021, after completing a postdoc at the University of Texas Southwestern Medical Center in the lab of Michael Rosen. Prior to that, she earned her PhD from the University of North Carolina at Chapel Hill, working in the lab of Clare Waterman at the National Heart, Lung, and Blood Institute.

Situated in MIT’s Building 68, Case’s lab studies how molecules within cells organize themselves, and how such organization begets cellular function. Oftentimes, molecules will assemble at the cell’s plasma membrane — a complex signaling platform where hundreds of receptors sense information from outside the cell and initiate cellular changes in response. Through her experiments, Case has found that molecules at the plasma membrane can undergo a process known as phase separation, condensing to form liquid-like droplets.

As a Searle Scholar, Case is investigating the role that phase separation plays in regulating a specific class of signaling molecules called kinases. Her team will take a multidisciplinary approach to probe what happens when kinases phase separate into signaling clusters, and what cellular changes occur as a result. Because phase separation is emerging as a promising new target for small molecule therapies, this work will help identify kinases that are strong candidates for new therapeutic interventions to treat diseases such as cancer.

“I am honored to be recognized by the Searle Scholars Program, and thrilled to join such an incredible community of scientists,” Case says. “This support will enable my group to broaden our research efforts and take our preliminary findings in exciting new directions. I look forward to better understanding how phase separation impacts cellular function.”

Guangyu Robert Yang

Before coming to MIT in 2021, Yang trained in physics at Peking University, obtained a PhD in computational neuroscience at New York University with Xiao-Jing Wang, and further trained as a postdoc at the Center for Theoretical Neuroscience of Columbia University, as an intern at Google Brain, and as a junior fellow at the Simons Society of Fellows.

His research team at MIT, the MetaConscious Group, develops models of mental functions by incorporating multiple interacting modules. They are designing pipelines to process and compare large-scale experimental datasets that span modalities ranging from behavioral data to neural activity data to molecular data. These datasets are then integrated to train individual computational modules on the experimental tasks that were evaluated, such as vision, memory, or movement.

Ultimately, Yang seeks to combine these modules into a “network of networks” that models higher-level brain functions, such as the ability to flexibly and rapidly learn a variety of tasks. Such integrative models have been rare because, until recently, it was not possible to acquire data spanning modalities and brain regions in real time as animals perform tasks. The time is finally right for integrative network models: computational models that incorporate such multisystem, multilevel datasets will allow scientists to make new predictions about the neural basis of cognition and open a window to a mathematical understanding of the mind.
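
As a rough illustration of the “network of networks” idea, the sketch below composes two small, independently defined modules and couples their hidden states through a shared linear router. The module names, sizes, and routing scheme are illustrative assumptions, not the MetaConscious Group’s actual architecture; in practice, each module would first be trained on its own task and dataset before being wired together.

import torch
import torch.nn as nn

class FunctionalModule(nn.Module):
    """One functional module (e.g., 'vision' or 'memory'), modeled as a small recurrent network."""
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.cell = nn.GRUCell(in_dim, hidden_dim)

    def forward(self, x, h):
        return self.cell(x, h)

class NetworkOfNetworks(nn.Module):
    """Composes modules; a linear 'router' lets their hidden states interact."""
    def __init__(self, module_cfg, out_dim):
        super().__init__()
        self.mods = nn.ModuleDict(
            {name: FunctionalModule(i, h) for name, (i, h) in module_cfg.items()}
        )
        total_h = sum(h for _, h in module_cfg.values())
        self.router = nn.Linear(total_h, total_h)   # cross-module coupling
        self.readout = nn.Linear(total_h, out_dim)  # task output

    def forward(self, inputs, hidden):
        # Advance each module one step on its own input stream.
        new_h = {name: mod(inputs[name], hidden[name]) for name, mod in self.mods.items()}
        joint = torch.cat(list(new_h.values()), dim=-1)
        mixed = torch.tanh(self.router(joint))      # exchange information across modules
        return self.readout(mixed), new_h

# Toy usage: two modules, a batch of 4, one time step.
cfg = {"vision": (16, 32), "memory": (8, 32)}
net = NetworkOfNetworks(cfg, out_dim=10)
x = {"vision": torch.randn(4, 16), "memory": torch.randn(4, 8)}
h = {"vision": torch.zeros(4, 32), "memory": torch.zeros(4, 32)}
y, h = net(x, h)
print(y.shape)  # torch.Size([4, 10])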

“This is a new research direction for me, and I think for the field too. It comes with many exciting opportunities as well as challenges. Having this recognition from the Searle Scholars program really gives me extra courage to take on the uncertainties and challenges,” says Yang.

Since 1981, 647 scientists have been named Searle Scholars. Including this year, the program has awarded more than $147 million. Eighty-five Searle Scholars have been inducted into the National Academy of Sciences. Twenty scholars have been recognized with a MacArthur Fellowship, known as the “genius grant,” and two Searle Scholars have been awarded the Nobel Prize in Chemistry. The Searle Scholars Program is funded through the Searle Funds at The Chicago Community Trust and administered by Kinship Foundation.