Method offers inexpensive imaging at the scale of virus particles

Using an ordinary light microscope, MIT engineers have devised a technique for imaging biological samples with accuracy at the scale of 10 nanometers — which should enable them to image viruses and potentially even single biomolecules, the researchers say.

The new technique builds on expansion microscopy, an approach that involves embedding biological samples in a hydrogel and then expanding them before imaging them with a microscope. For the latest version of the technique, the researchers developed a new type of hydrogel that maintains a more uniform configuration, allowing for greater accuracy in imaging tiny structures.

This degree of accuracy could open the door to studying the basic molecular interactions that make life possible, says Edward Boyden, the Y. Eva Tan Professor in Neurotechnology, a professor of biological engineering and brain and cognitive sciences at MIT, and a member of MIT’s McGovern Institute for Brain Research and Koch Institute for Integrative Cancer Research.

“If you could see individual molecules and identify what kind they are, with single-digit-nanometer accuracy, then you might be able to actually look at the structure of life. And structure, as a century of modern biology has told us, governs function,” says Boyden, who is the senior author of the new study.

The lead authors of the paper, which appears today in Nature Nanotechnology, are MIT Research Scientist Ruixuan Gao and Chih-Chieh “Jay” Yu PhD ’20. Other authors include Linyi Gao PhD ’20; former MIT postdoc Kiryl Piatkevich; Rachael Neve, director of the Gene Technology Core at Massachusetts General Hospital; James Munro, an associate professor of microbiology and physiological systems at University of Massachusetts Medical School; and Srigokul Upadhyayula, a former assistant professor of pediatrics at Harvard Medical School and an assistant professor in residence of cell and developmental biology at the University of California at Berkeley.

Low cost, high resolution

Many labs around the world have begun using expansion microscopy since Boyden’s lab first introduced it in 2015. With this technique, researchers physically enlarge their samples about fourfold in linear dimension before imaging them, allowing them to generate high-resolution images without expensive equipment. Boyden’s lab has also developed methods for labeling proteins, RNA, and other molecules in a sample so that they can be imaged after expansion.
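
The resolution gain from physical expansion follows from simple arithmetic: features separated by a distance d in the sample sit d times the expansion factor apart after expansion, so a diffraction-limited microscope resolves proportionally finer detail in the original tissue. A minimal sketch (the ~300 nm diffraction limit is a typical textbook figure, not a number from the study):

```python
def effective_resolution(optical_limit_nm: float, expansion_factor: float) -> float:
    """Smallest original-sample distance resolvable after physical expansion.

    Features separated by d in the sample are d * expansion_factor apart
    after expansion, so the resolvable distance shrinks by that factor.
    """
    return optical_limit_nm / expansion_factor

# A conventional light microscope is diffraction-limited to roughly 300 nm.
print(effective_resolution(300.0, 4.0))     # one ~4x expansion round
print(effective_resolution(300.0, 4.0**2))  # two iterative rounds (~16x)
```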

“Hundreds of groups are doing expansion microscopy. There’s clearly pent-up demand for an easy, inexpensive method of nanoimaging,” Boyden says. “Now the question is, how good can we get? Can we get down to single-molecule accuracy? Because in the end, you want to reach a resolution that gets down to the fundamental building blocks of life.”

Other techniques such as electron microscopy and super-resolution imaging offer high resolution, but the equipment required is expensive and not widely accessible. Expansion microscopy, however, enables high-resolution imaging with an ordinary light microscope.

In a 2017 paper, Boyden’s lab demonstrated resolution of around 20 nanometers, using a process in which samples were expanded twice before imaging. This approach, as well as the earlier versions of expansion microscopy, relies on an absorbent polymer made from sodium polyacrylate, assembled using a method called free radical synthesis. These gels swell when exposed to water; however, one limitation of these gels is that they are not completely uniform in structure or density. This irregularity leads to small distortions in the shape of the sample when it’s expanded, limiting the accuracy that can be achieved.

To overcome this, the researchers developed a new gel called tetra-gel, which forms a more predictable structure. By combining tetrahedral PEG molecules with tetrahedral sodium polyacrylates, the researchers were able to create a lattice-like structure that is much more uniform than the free-radical synthesized sodium polyacrylate hydrogels they previously used.

Three-dimensional (3D) rendered movie of envelope proteins of a herpes simplex virus type 1 (HSV-1) virion expanded by tetra-gel (TG)-based three-round iterative expansion. The deconvolved puncta (white), the overlay of the deconvolved puncta (white) and the fitted centroids (red), and the extracted centroids (red) are shown from left to right. Expansion factor, 38.3×. Scale bars, 100 nm.
Credit: Ruixuan Gao and Boyden Lab

The researchers demonstrated the accuracy of this approach by using it to expand particles of herpes simplex virus type 1 (HSV-1), which have a distinctive spherical shape. After expanding the virus particles, the researchers compared the shapes to the shapes obtained by electron microscopy and found that the distortion was lower than that seen with previous versions of expansion microscopy, allowing them to achieve an accuracy of about 10 nanometers.
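
One standard way to quantify this kind of validation is to fit labeled puncta to an ideal sphere and report the root-mean-square radial deviation. A toy sketch on synthetic points (the radius, jitter, and point count below are invented for illustration, not data from the paper):

```python
import math
import random

def rms_radial_deviation(points, center, radius):
    """RMS deviation of point-to-center distances from a reference radius."""
    devs = [math.dist(p, center) - radius for p in points]
    return math.sqrt(sum(d * d for d in devs) / len(devs))

# Synthetic "puncta" near a 100 nm shell, with 5 nm of jitter standing in
# for expansion-induced distortion (illustrative numbers only).
random.seed(0)
points = []
for _ in range(500):
    x, y, z = (random.gauss(0, 1) for _ in range(3))
    n = math.sqrt(x * x + y * y + z * z)
    r = 100.0 + random.gauss(0, 5.0)
    points.append((r * x / n, r * y / n, r * z / n))

print(round(rms_radial_deviation(points, (0.0, 0.0, 0.0), 100.0), 1))
```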

“We can look at how the arrangements of these proteins change as they are expanded and evaluate how close they are to the spherical shape. That’s how we validated it and determined how faithfully we can preserve the nanostructure of the shapes and the relative spatial arrangements of these molecules,” Ruixuan Gao says.

Single molecules

The researchers also used their new hydrogel to expand cells, including human kidney cells and mouse brain cells. They are now working on ways to improve the accuracy to the point where they can image individual molecules within such cells. One limitation on this degree of accuracy is the size of the antibodies used to label molecules in the cell, which are about 10 to 20 nanometers long. To image individual molecules, the researchers would likely need to create smaller labels or to add the labels after expansion was complete.

Left, HeLa cell with two-color labeling of clathrin-coated pits/vesicles and microtubules, expanded by TG-based two-round iterative expansion. Expansion factor, 15.6×. Scale bar, 10 μm (156 μm). Right, magnified view of the boxed region for each color channel. Scale bars, 1 μm (15.6 μm). Image: Boyden Lab

They are also exploring whether other types of polymers, or modified versions of the tetra-gel polymer, could help them realize greater accuracy.

If they can achieve accuracy down to single molecules, many new frontiers could be explored, Boyden says. For example, scientists could glimpse how different molecules interact with each other, which could shed light on cell signaling pathways, immune response activation, synaptic communication, drug-target interactions, and many other biological phenomena.

“We’d love to look at regions of a cell, like the synapse between two neurons, or other molecules involved in cell-cell signaling, and to figure out how all the parts talk to each other,” he says. “How do they work together and how do they go wrong in diseases?”

The research was funded by Lisa Yang, John Doerr, Open Philanthropy, the National Institutes of Health, the Howard Hughes Medical Institute Simons Faculty Scholars Program, the Intelligence Advanced Research Projects Activity, the U.S. Army Research Laboratory, the US-Israel Binational Science Foundation, the National Science Foundation, the Friends of the McGovern Fellowship, and the Fellows program of the Image and Data Analysis Core at Harvard Medical School.

What’s happening in your brain when you’re spacing out?

This story is adapted from a News@Northeastern post.

We all do it. One second you’re fully focused on the task in front of you, a conversation with a friend, or a professor’s lecture, and the next second your mind is wandering to your dinner plans.

But how does that happen?

“We spend so much of our daily lives engaged in things that are completely unrelated to what’s in front of us,” says Aaron Kucyi, neuroscientist and principal research scientist in the department of psychology at Northeastern. “And we know very little about how it works in the brain.”

So Kucyi and colleagues at Massachusetts General Hospital, Boston University, and the McGovern Institute at MIT started scanning people’s brains using functional magnetic resonance imaging (fMRI) to get an inside look. Their results, published Friday in the journal Nature Communications, add complexity to our understanding of the wandering mind.

It turns out that spacing out might not deserve the bad reputation that it receives. Many more parts of the brain seem to be engaged in mind-wandering than previously thought, supporting the idea that it’s actually a quite dynamic and fundamental function of our psychology.

“Many of those things that we do when we’re spacing out are very adaptive and important to our lives,” says Kucyi, the paper’s first author. We might be drafting an email in our heads while in the shower, or trying to remember the host’s spouse’s name while getting dressed for a party. Moments when our minds wander can allow space for creativity and planning for the future, he says, so it makes sense that many parts of the brain would be engaged in that kind of thinking.

But mind wandering may also be detrimental, especially for those suffering from mental illness, explains the study’s senior author, Susan Whitfield-Gabrieli. “For many of us, mind wandering may be a healthy, positive and constructive experience, like reminiscing about the past, planning for the future, or engaging in creative thinking,” says Whitfield-Gabrieli, a professor of psychology at Northeastern University and a McGovern Institute research affiliate. “But for those suffering from mental illness such as depression, anxiety or psychosis, reminiscing about the past may transform into ruminating about the past, planning for the future may become obsessively worrying about the future and creative thinking may evolve into delusional thinking.”

Identifying the brain circuits associated with mind wandering, she says, may reveal new targets and better treatment options for people suffering from these disorders.

McGovern research affiliate Susan Whitfield-Gabrieli in the Martinos Imaging Center.

Inside the wandering mind

To study wandering minds, the researchers first had to set up a situation in which people were likely to lose focus. They recruited test subjects at the McGovern Institute’s Martinos Imaging Center to complete a simple, repetitive, and rather boring task. With an fMRI scanner mapping their brain activity, participants were instructed to press a button whenever an image of a city scene appeared on a screen in front of them and withhold a response when a mountain image appeared.

Throughout the experiment, the subjects were asked whether they were focused on the task at hand. If a subject said their mind was wandering, the researchers took a close look at their brain scans from right before they reported loss of focus. The data was then fed into a machine-learning algorithm to identify patterns in the neurological connections involved in mind-wandering (called “stimulus-independent, task-unrelated thought” by the scientists).

Scientists previously identified a specialized system in the brain considered to be responsible for mind-wandering. Called the “default mode network,” these parts of the brain activated when someone’s thoughts were drifting away from their immediate surroundings and deactivated when they were focused. The other parts of the brain, that theory went, were quiet when the mind was wandering, says Kucyi.

The researchers used a technique called “connectome-based predictive modeling” to identify patterns in the brain connections involved in mind-wandering. Image courtesy of the researchers.

The “default mode network” did light up in Kucyi’s data. But parts of the brain associated with other functions also appeared to activate when his subjects reported that their minds had wandered.

For example, the “default mode network” and networks in the brain related to controlling or maintaining a train of thought also seemed to be communicating with one another, perhaps helping explain the ability to go down a rabbit hole in your mind when you’re distracted from a task. There was also a noticeable lack of communication between the “default mode network” and the systems associated with sensory input, which makes sense, as the mind is wandering away from the person’s immediate environment.

“It makes sense that virtually the whole brain is involved,” Kucyi says. “Mind-wandering is a very complex operation in the brain and involves drawing from our memory, making predictions about the future, dynamically switching between topics that we’re thinking about, fluctuations in our mood, and engaging in vivid visual imagery while ignoring immediate visual input,” just to name a few functions.

The “default mode network” still seems to be key, Kucyi says. Computational analysis suggests that if you took the regions of the brain in that network out of the equation, the other brain regions would not be able to pick up the slack in mind-wandering.

Kucyi, however, didn’t just want to identify regions of the brain that lit up when someone said their mind was wandering. He also wanted to use that generalized pattern of brain activity to predict whether a subject would say that their focus had drifted away from the task in front of them.

That’s where the machine-learning analysis of the data came in. The idea, Kucyi says, is that “you could bring a new person into the scanner and not even ask them whether they were mind-wandering or not, and have a good estimate from their brain data whether they were.”
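
The logic of that kind of prediction can be sketched in a few lines: turn each scan into connectivity features (correlations between regional time courses), then classify a new scan by how its features compare with scans labeled “on task” or “mind-wandering.” This nearest-centroid toy only illustrates the idea; the study itself used connectome-based predictive modeling, and all numbers below are made up:

```python
import math

def connectivity_features(timeseries):
    """Pearson correlations between each pair of regional time courses --
    a minimal stand-in for a functional connectome."""
    def corr(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        sa = math.sqrt(sum((x - ma) ** 2 for x in a))
        sb = math.sqrt(sum((y - mb) ** 2 for y in b))
        return cov / (sa * sb)
    r = len(timeseries)
    return [corr(timeseries[i], timeseries[j])
            for i in range(r) for j in range(i + 1, r)]

def predict_state(train_X, train_y, x):
    """Label a new scan by its distance to each class's mean feature vector
    (a toy nearest-centroid classifier, not the study's actual model)."""
    centroids = {}
    for label in set(train_y):
        rows = [f for f, y in zip(train_X, train_y) if y == label]
        centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]
    return min(centroids, key=lambda lb: math.dist(x, centroids[lb]))

# Made-up two-edge "connectomes" labeled by self-reported attention state:
train_X = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
train_y = ["on-task", "on-task", "wandering", "wandering"]
print(predict_state(train_X, train_y, [0.15, 0.85]))  # -> wandering
```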

The ADHD brain

To test the patterns identified through machine learning, the researchers brought in a new set of test subjects – people diagnosed with ADHD. When the fMRI scans lit up the parts of the brain Kucyi and his colleagues had identified as engaged in mind-wandering in the first part of the study, the new test subjects reported that their thoughts had drifted from the images of cities and mountains in front of them. It worked.

Kucyi doesn’t expect fMRI scans to become a new way to diagnose ADHD, however. That wasn’t the goal. Perhaps down the road it could be used to help develop treatments, he suggests. But this study was focused on “informing the biological mechanisms behind it.”

John Gabrieli, a co-author on the study and director of the imaging center at MIT’s McGovern Institute, adds that “there is recent evidence that ADHD patients with more mind-wandering have many more everyday practical and clinical difficulties than ADHD patients with less mind-wandering. This is the first evidence about the brain basis for that important difference, and points to what neural systems ought to be the targets of intervention to help ADHD patients who struggle the most.”

For Kucyi, the study of “mind-wandering” goes beyond ADHD. And the contents of those straying thoughts may be telling, he says.

“We just asked people whether they were focused on the task or away from the task, but we have no idea what they were thinking about,” he says. “What are people thinking about? For example, are those more positive thoughts or negative thoughts?” Such answers, which he hopes to explore in future research, could help scientists better understand other pathologies such as depression and anxiety, which often involve rumination on upsetting or worrisome thoughts.

Whitfield-Gabrieli and her team are already exploring whether behavioral interventions, such as mindfulness-based real-time fMRI neurofeedback, can be used to help train people suffering from mental illness to modulate their own brain networks and reduce hallucinations, ruminations, and other troubling symptoms.

“We hope that our research will have clinical implications that extend far beyond the potential for identifying treatment targets for ADHD,” she says.

Individual neurons responsible for complex social reasoning in humans identified

This story is adapted from a January 27, 2021 press release from Massachusetts General Hospital.

The ability to understand others’ hidden thoughts and beliefs is an essential component of human social behavior. Now, neuroscientists have for the first time identified specific neurons critical for social reasoning, a cognitive process that requires individuals to acknowledge and predict others’ hidden beliefs and thoughts.

The findings, published in Nature, open new avenues of study into disorders that affect social behavior, according to the authors.

In the study, a team of Harvard Medical School investigators based at Massachusetts General Hospital and colleagues from MIT took a rare look at how individual neurons represent the beliefs of others. They did so by recording neuron activity in patients undergoing neurosurgery to alleviate symptoms of motor disorders such as Parkinson’s disease.

Theory of mind

The research team, which included McGovern scientists Ev Fedorenko and Rebecca Saxe, focused on a complex social cognitive process called “theory of mind.” To illustrate this, let’s say a friend appears to be sad on her birthday. One may infer she is sad because she didn’t get a present or she is upset at growing older.

“When we interact, we must be able to form predictions about another person’s unstated intentions and thoughts,” said senior author Ziv Williams, HMS associate professor of neurosurgery at Mass General. “This ability requires us to paint a mental picture of someone’s beliefs, which involves acknowledging that those beliefs may be different from our own and assessing whether they are true or false.”

This social reasoning process develops during early childhood and is fundamental to successful social behavior. Individuals with autism, schizophrenia, bipolar affective disorder, and traumatic brain injuries are believed to have a deficit of theory-of-mind ability.

For the study, 15 patients agreed to perform brief behavioral tasks before undergoing neurosurgery for placement of deep-brain stimulation for motor disorders. Microelectrodes inserted into the dorsomedial prefrontal cortex recorded the activity of individual neurons as patients listened to short narratives and answered questions about them.

For example, participants were presented with the following scenario to evaluate how they considered another’s belief of reality: “You and Tom see a jar on the table. After Tom leaves, you move the jar to a cabinet. Where does Tom believe the jar to be?”
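
At the single-neuron level, selectivity for another person’s belief state can be quantified by comparing a cell’s firing rates across conditions; a Welch t-statistic is one standard choice. A sketch on invented firing rates (these numbers are illustrative, not data from the study):

```python
import math

def selectivity_t(rates_a, rates_b):
    """Welch t-statistic comparing a neuron's firing rates (spikes/s)
    across two trial conditions, e.g. false-belief vs. true-belief trials."""
    na, nb = len(rates_a), len(rates_b)
    ma, mb = sum(rates_a) / na, sum(rates_b) / nb
    va = sum((x - ma) ** 2 for x in rates_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in rates_b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Hypothetical rates for one neuron that fires more on false-belief trials:
false_belief_trials = [12.1, 14.3, 13.8, 15.0, 12.9]
true_belief_trials = [6.2, 7.1, 5.8, 6.9, 6.4]
print(round(selectivity_t(false_belief_trials, true_belief_trials), 2))
```

A large positive or negative statistic flags the neuron as modulated by belief condition; identical distributions yield a statistic near zero.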

Social computation

The participants had to make inferences about another’s beliefs after hearing each story. The experiment did not change the planned surgical approach or alter clinical care.

“Our study provides evidence to support theory of mind by individual neurons,” said study first author Mohsen Jamali, HMS instructor in neurosurgery at Mass General. “Until now, it wasn’t clear whether or how neurons were able to perform these social cognitive computations.”

The investigators found that some neurons are specialized and respond only when assessing another’s belief as false, for example. Other neurons encode information to distinguish one person’s beliefs from another’s. Still other neurons create a representation of a specific item, such as a cup or food item, mentioned in the story. Some neurons may multitask and aren’t dedicated solely to social reasoning.

“Each neuron is encoding different bits of information,” Jamali said. “By combining the computations of all the neurons, you get a very detailed representation of the contents of another’s beliefs and an accurate prediction of whether they are true or false.”

Now that scientists understand the basic cellular mechanism that underlies human theory of mind, they have an operational framework to begin investigating disorders in which social behavior is affected, according to Williams.

“Understanding social reasoning is also important to many different fields, such as child development, economics, and sociology, and could help in the development of more effective treatments for conditions such as autism spectrum disorder,” Williams said.

Previous research on the cognitive processes that underlie theory of mind has involved functional MRI studies, where scientists watch which parts of the brain are active as volunteers perform cognitive tasks.

But the imaging studies capture the activity of many thousands of neurons all at once. In contrast, Williams and colleagues recorded the computations of individual neurons. This provided a detailed picture of how neurons encode social information.

“Individual neurons, even within a small area of the brain, are doing very different things, not all of which are involved in social reasoning,” Williams said. “Without delving into the computations of single cells, it’s very hard to build an understanding of the complex cognitive processes underlying human social behavior and how they go awry in mental disorders.”

Adapted from a Mass General news release.

Two MIT Brain and Cognitive Sciences faculty members earn funding from the G. Harold and Leila Y. Mathers Foundation

Two MIT neuroscientists have received grants from the G. Harold and Leila Y. Mathers Foundation to screen for genes that could help brain cells withstand Parkinson’s disease and to map how gene expression changes in the brain in response to drugs of abuse.

Myriam Heiman, an associate professor in MIT’s Department of Brain and Cognitive Sciences and a core member of the Picower Institute for Learning and Memory and the Broad Institute of MIT and Harvard, and Alan Jasanoff, who is also a professor in biological engineering, brain and cognitive sciences, nuclear science and engineering and an associate investigator at the McGovern Institute for Brain Research, each received three-year awards that formally begin January 1, 2021.

Jasanoff, who also directs MIT’s Center for Neurobiological Engineering, is known for developing sensors that monitor molecular hallmarks of neural activity in the living brain, in real time, via noninvasive MRI brain scanning. One of the MRI-detectable sensors that he has developed is for dopamine, a neuromodulator that is key to learning what behaviors and contexts lead to reward. Addictive drugs artificially drive dopamine release, thereby hijacking the brain’s reward prediction system. Studies have shown that dopamine and drugs of abuse activate gene transcription in specific brain regions, and that this gene expression changes as animals are repeatedly exposed to drugs. Despite the important implications of these neuroplastic changes for the process of addiction, in which drug-seeking behaviors become compulsive, there are no effective tools available to measure gene expression across the brain in real time.

Cerebral vasculature in mouse brain. The Jasanoff lab hopes to develop a method for mapping gene expression in the brain with similar labeling characteristics.
Image: Alan Jasanoff

With the new Mathers funding, Jasanoff is developing new MRI-detectable sensors for gene expression. With these cutting-edge tools, Jasanoff proposes to make an activity atlas of how the brain responds to drugs of abuse, both upon initial exposure and over repeated doses that simulate the experiences of drug-addicted individuals.

“Our studies will relate drug-induced brain activity to longer term changes that reshape the brain in addiction,” says Jasanoff. “We hope these studies will suggest new biomarkers or treatments.”

Dopamine-producing neurons in a brain region called the substantia nigra are known to be especially vulnerable to dying in Parkinson’s disease, leading to the severe motor difficulties experienced during the progression of the incurable, chronic neurodegenerative disorder. The field knows little about what puts specific cells at such dire risk, or what molecular mechanisms might help them resist the disease.

In her research on Huntington’s disease, another incurable neurodegenerative disorder in which a specific neuron population in the striatum is especially vulnerable, Heiman has been able to use an innovative method her lab pioneered to discover genes whose expression promotes neuron survival, yielding potential new drug targets. The technique involves conducting an unbiased screen in which her lab knocks out each of the 22,000 genes expressed in the mouse brain one by one in neurons in disease model mice and healthy controls. The technique allows her to determine which genes, when missing, contribute to neuron death amid disease, and therefore which genes are particularly needed for survival. The products of those genes can then be evaluated as drug targets.

With the new Mathers award, Heiman plans to apply the method to study Parkinson’s disease.
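
The readout of such a screen can be summarized gene by gene: compare how often cells carrying each knockout survive in disease-model versus control brains, and flag genes whose loss is selectively poorly tolerated under disease conditions. A minimal sketch of that scoring step (gene names and counts are invented for illustration):

```python
import math

def depletion_log2fc(counts_disease, counts_control, pseudocount=1.0):
    """Per-gene log2 fold change of surviving knockout-cell counts in
    disease-model vs. control brains. Strongly negative values flag genes
    whose loss is selectively lethal in disease (candidate survival genes)."""
    return {
        gene: math.log2((counts_disease[gene] + pseudocount) /
                        (counts_control[gene] + pseudocount))
        for gene in counts_disease
    }

# Hypothetical counts of surviving cells per knockout (illustrative only):
disease = {"geneA": 5, "geneB": 100, "geneC": 98}
control = {"geneA": 99, "geneB": 103, "geneC": 101}
scores = depletion_log2fc(disease, control)
hits = [g for g, s in sorted(scores.items(), key=lambda kv: kv[1]) if s < -1]
print(hits)  # geneA is strongly depleted in the disease model
```

The pseudocount avoids division by zero for knockouts with no surviving cells; real screen analyses add statistical testing on top of this ranking.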

An immunofluorescence image taken in a brain region called the substantia nigra (SN) highlights tyrosine hydroxylase, a protein expressed by dopamine neurons. This type of neuron in the SN is especially vulnerable to neurodegeneration in Parkinson’s disease. Image: Preston Ge/Heiman Lab

“There is currently no molecular explanation for the brain cell loss seen in Parkinson’s disease or a cure for this devastating disease,” Heiman said. “This award will allow us to perform unbiased, genome-wide genetic screens in the brains of mouse models of Parkinson’s disease, probing for genes that allow brain cells to survive the effects of cellular perturbations associated with Parkinson’s disease. I’m extremely grateful for this generous support and recognition of our work from the Mathers Foundation, and hope that our study will elucidate new therapeutic targets for the treatment and even prevention of Parkinson’s disease.”

To the brain, reading computer code is not the same as reading language

In some ways, learning to program a computer is similar to learning a new language. It requires learning new symbols and terms, which must be organized correctly to instruct the computer what to do. The computer code must also be clear enough that other programmers can read and understand it.

In spite of those similarities, MIT neuroscientists have found that reading computer code does not activate the regions of the brain that are involved in language processing. Instead, it activates a distributed network called the multiple demand network, which is also recruited for complex cognitive tasks such as solving math problems or crossword puzzles.

However, although reading computer code activates the multiple demand network, it appears to rely more on different parts of the network than math or logic problems do, suggesting that coding does not precisely replicate the cognitive demands of mathematics either.

“Understanding computer code seems to be its own thing. It’s not the same as language, and it’s not the same as math and logic,” says Anna Ivanova, an MIT graduate student and the lead author of the study.

Evelina Fedorenko, the Frederick A. and Carole J. Middleton Career Development Associate Professor of Neuroscience and a member of the McGovern Institute for Brain Research, is the senior author of the paper, which appears today in eLife. Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory and Tufts University were also involved in the study.

Language and cognition

McGovern Investigator Ev Fedorenko in the Martinos Imaging Center at MIT. Photo: Caitlin Cunningham

A major focus of Fedorenko’s research is the relationship between language and other cognitive functions. In particular, she has been studying the question of whether other functions rely on the brain’s language network, which includes Broca’s area and other regions in the left hemisphere of the brain. In previous work, her lab has shown that music and math do not appear to activate this language network.

“Here, we were interested in exploring the relationship between language and computer programming, partially because computer programming is such a new invention that we know that there couldn’t be any hardwired mechanisms that make us good programmers,” Ivanova says.

There are two schools of thought regarding how the brain learns to code, she says. One holds that in order to be good at programming, you must be good at math. The other suggests that because of the parallels between coding and language, language skills might be more relevant. To shed light on this issue, the researchers set out to study whether brain activity patterns while reading computer code would overlap with language-related brain activity.

The two programming languages that the researchers focused on in this study are known for their readability — Python and ScratchJr, a visual programming language designed for children ages 5 and older. The subjects in the study were all young adults proficient in the language they were being tested on. While the programmers lay in a functional magnetic resonance imaging (fMRI) scanner, the researchers showed them snippets of code and asked them to predict what action the code would produce.

The researchers saw little to no response to code in the language regions of the brain. Instead, they found that the coding task mainly activated the so-called multiple demand network. This network, whose activity is spread throughout the frontal and parietal lobes of the brain, is typically recruited for tasks that require holding many pieces of information in mind at once, and is responsible for our ability to perform a wide variety of mental tasks.

“It does pretty much anything that’s cognitively challenging, that makes you think hard,” says Ivanova, who was also named one of the McGovern Institute’s rising stars in neuroscience.

Previous studies have shown that math and logic problems seem to rely mainly on the multiple demand regions in the left hemisphere, while tasks that involve spatial navigation activate the right hemisphere more than the left. The MIT team found that reading computer code appears to activate both the left and right sides of the multiple demand network, and ScratchJr activated the right side slightly more than the left. This finding goes against the hypothesis that math and coding rely on the same brain mechanisms.
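
Left-right comparisons like this are often summarized with a standard laterality index, (L - R) / (L + R), where +1 means fully left-lateralized, -1 fully right-lateralized, and 0 bilateral. A sketch with made-up activation magnitudes (the study did not report these particular numbers):

```python
def laterality_index(left: float, right: float) -> float:
    """(L - R) / (L + R): +1 fully left-lateralized, -1 fully right,
    0 perfectly bilateral."""
    return (left - right) / (left + right)

# Illustrative, invented activation magnitudes:
print(laterality_index(1.2, 0.4))  # left-dominant pattern, as for math/logic
print(laterality_index(0.8, 0.8))  # bilateral pattern, as for code reading
```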

Effects of experience

The researchers say that while they didn’t identify any regions that appear to be exclusively devoted to programming, such specialized brain activity might develop in people who have much more coding experience.

“It’s possible that if you take people who are professional programmers, who have spent 30 or 40 years coding in a particular language, you may start seeing some specialization, or some crystallization of parts of the multiple demand system,” Fedorenko says. “In people who are familiar with coding and can efficiently do these tasks, but have had relatively limited experience, it just doesn’t seem like you see any specialization yet.”

In a companion paper appearing in the same issue of eLife, a team of researchers from Johns Hopkins University also reported that solving code problems activates the multiple demand network rather than the language regions.

The findings suggest there isn’t a definitive answer to whether coding should be taught as a math-based skill or a language-based skill. In part, that’s because learning to program may draw on both language and multiple demand systems, even if — once learned — programming doesn’t rely on the language regions, the researchers say.

“There have been claims from both camps — it has to be together with math, it has to be together with language,” Ivanova says. “But it looks like computer science educators will have to develop their own approaches for teaching code most effectively.”

The research was funded by the National Science Foundation, the Department of the Brain and Cognitive Sciences at MIT, and the McGovern Institute for Brain Research.

A hunger for social contact

Since the coronavirus pandemic began in the spring, many people have only seen their close friends and loved ones during video calls, if at all. A new study from MIT finds that the longings we feel during this kind of social isolation share a neural basis with the food cravings we feel when hungry.

The researchers found that after one day of total isolation, the sight of people having fun together activates the same brain region that lights up when someone who hasn’t eaten all day sees a picture of a plate of cheesy pasta.

“People who are forced to be isolated crave social interactions similarly to the way a hungry person craves food.”

“Our finding fits the intuitive idea that positive social interactions are a basic human need, and acute loneliness is an aversive state that motivates people to repair what is lacking, similar to hunger,” says Rebecca Saxe, the John W. Jarve Professor of Brain and Cognitive Sciences at MIT, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

The research team collected the data for this study in 2018 and 2019, long before the coronavirus pandemic and resulting lockdowns. Their new findings, described today in Nature Neuroscience, are part of a larger research program focusing on how social stress affects people’s behavior and motivation.

Former MIT postdoc Livia Tomova, who is now a research associate at Cambridge University, is the lead author of the paper. Other authors include Kimberly Wang, a McGovern Institute research associate; Todd Thompson, a McGovern Institute scientist; Atsushi Takahashi, assistant director of the Martinos Imaging Center; Gillian Matthews, a research scientist at the Salk Institute for Biological Studies; and Kay Tye, a professor at the Salk Institute.

Social craving

The new study was partly inspired by a recent paper from Tye, a former member of MIT’s Picower Institute for Learning and Memory. In that 2016 study, she and Matthews, then an MIT postdoc, identified a cluster of neurons in the brains of mice that represent feelings of loneliness and generate a drive for social interaction following isolation. Studies in humans have shown that being deprived of social contact can lead to emotional distress, but the neurological basis of these feelings is not well-known.

“We wanted to see if we could experimentally induce a certain kind of social stress, where we would have control over what the social stress was,” Saxe says. “It’s a stronger intervention of social isolation than anyone had tried before.”

To create that isolation environment, the researchers enlisted healthy volunteers, who were mainly college students, and confined them to a windowless room on MIT’s campus for 10 hours. They were not allowed to use their phones, but the room did have a computer that they could use to contact the researchers if necessary.

“There were a whole bunch of interventions we used to make sure that it would really feel strange and different and isolated,” Saxe says. “They had to let us know when they were going to the bathroom so we could make sure it was empty. We delivered food to the door and then texted them when it was there so they could go get it. They really were not allowed to see people.”

After the 10-hour isolation ended, each participant was scanned in an MRI machine. This posed additional challenges, as the researchers wanted to avoid any social contact during the scanning. Before the isolation period began, each subject was trained on how to get into the machine, so that they could do it by themselves, without any help from the researcher.

“Normally, getting somebody into an MRI machine is actually a really social process. We engage in all kinds of social interactions to make sure people understand what we’re asking them, that they feel safe, that they know we’re there,” Saxe says. “In this case, the subjects had to do it all by themselves, while the researcher, who was gowned and masked, just stood silently by and watched.”

Each of the 40 participants also underwent 10 hours of fasting, on a different day. After the 10-hour period of isolation or fasting, the participants were scanned while looking at images of food, images of people interacting, and neutral images such as flowers. The researchers focused on a part of the brain called the substantia nigra, a tiny structure located in the midbrain, which has previously been linked with hunger cravings and drug cravings. The substantia nigra is also believed to share evolutionary origins with a brain region in mice called the dorsal raphe nucleus, which is the area that Tye’s lab showed was active following social isolation in their 2016 study.

The researchers hypothesized that when socially isolated subjects saw photos of people enjoying social interactions, the “craving signal” in their substantia nigra would be similar to the signal produced when they saw pictures of food after fasting. This was indeed the case. Furthermore, the amount of activation in the substantia nigra was correlated with how strongly the participants rated their feelings of craving either food or social interaction.
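The correlation described above can be sketched in a few lines; this is a minimal illustration with made-up numbers (one activation value and one craving rating per participant), not the study’s actual data or statistical pipeline:

```python
import numpy as np

# Hypothetical per-participant values (the study had 40 participants):
# substantia nigra activation while viewing social images after isolation
activation = np.array([0.12, 0.35, 0.08, 0.41, 0.27, 0.19, 0.33, 0.05])
# self-reported craving ratings from the same participants
craving = np.array([2.1, 4.0, 1.5, 4.6, 3.2, 2.4, 3.9, 1.2])

# Pearson correlation between brain activation and reported craving
r = np.corrcoef(activation, craving)[0, 1]
print(f"r = {r:.2f}")
```

A strongly positive `r` would mirror the reported result: the more activation, the stronger the self-reported craving.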

Degrees of loneliness

The researchers also found that people’s responses to isolation varied depending on their normal levels of loneliness. People who reported feeling chronically isolated months before the study was done showed weaker cravings for social interaction after the 10-hour isolation period than people who reported a richer social life.

“For people who reported that their lives were really full of satisfying social interactions, this intervention had a bigger effect on their brains and on their self-reports,” Saxe says.

The researchers also looked at activation patterns in other parts of the brain, including the striatum and the cortex, and found that hunger and isolation each activated distinct areas of those regions. That suggests that those areas are more specialized to respond to different types of longings, while the substantia nigra produces a more general signal representing a variety of cravings.

Now that the researchers have established that they can observe the effects of social isolation on brain activity, Saxe says they can try to answer many additional questions. Those questions include how social isolation affects people’s behavior, whether virtual social contacts such as video calls help to alleviate cravings for social interaction, and how isolation affects different age groups.

The researchers also hope to study whether the brain responses that they saw in this study could be used to predict how the same participants responded to being isolated during the lockdowns imposed during the early stages of the coronavirus pandemic.

The research was funded by a SFARI Explorer Grant from the Simons Foundation, a MINT grant from the McGovern Institute, the National Institutes of Health, including an NIH Pioneer Award, a Max Kade Foundation Fellowship, and an Erwin Schroedinger Fellowship from the Austrian Science Fund.

Face-specific brain area responds to faces even in people born blind

More than 20 years ago, neuroscientist Nancy Kanwisher and others discovered that a small section of the brain located near the base of the skull responds much more strongly to faces than to other objects we see. This area, known as the fusiform face area (FFA), is believed to be specialized for identifying faces.

Now, in a surprising new finding, Kanwisher and her colleagues have shown that this same region also becomes active in people who have been blind since birth, when they touch a three-dimensional model of a face with their hands. The finding suggests that this area does not require visual experience to develop a preference for faces.

“That doesn’t mean that visual input doesn’t play a role in sighted subjects — it probably does,” she says. “What we showed here is that visual input is not necessary to develop this particular patch, in the same location, with the same selectivity for faces. That was pretty astonishing.”

Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience and a member of MIT’s McGovern Institute for Brain Research, is the senior author of the study. N. Apurva Ratan Murty, an MIT postdoc, is the lead author of the study, which appears this week in the Proceedings of the National Academy of Sciences. Other authors of the paper include Santani Teng, a former MIT postdoc; Aude Oliva, a senior research scientist, co-director of the MIT Quest for Intelligence, and MIT director of the MIT-IBM Watson AI Lab; and David Beeler and Anna Mynick, both former lab technicians.

Selective for faces

Studying people who were born blind allowed the researchers to tackle longstanding questions regarding how specialization arises in the brain. In this case, they were specifically investigating face perception, but the same unanswered questions apply to many other aspects of human cognition, Kanwisher says.

“This is part of a broader question that scientists and philosophers have been asking themselves for hundreds of years, about where the structure of the mind and brain comes from,” she says. “To what extent are we products of experience, and to what extent do we have built-in structure? This is a version of that question asking about the particular role of visual experience in constructing the face area.”

The new work builds on a 2017 study from researchers in Belgium. In that study, congenitally blind subjects were scanned with functional magnetic resonance imaging (fMRI) as they listened to a variety of sounds, some related to faces (such as laughing or chewing), and others not. That study found higher responses in the vicinity of the FFA to face-related sounds than to sounds such as a ball bouncing or hands clapping.

In the new study, the MIT team wanted to use tactile experience to measure more directly how the brains of blind people respond to faces. They created a ring of 3D-printed objects that included faces, hands, chairs, and mazes, and rotated them so that the subject could handle each one while in the fMRI scanner.

They began with normally sighted subjects and found that when they handled the 3D objects, a small area that corresponded to the location of the FFA was preferentially active when the subjects touched the faces, compared to when they touched other objects. This activity, which was weaker than the signal produced when sighted subjects looked at faces, was not surprising to see, Kanwisher says.

“We know that people engage in visual imagery, and we know from prior studies that visual imagery can activate the FFA. So the fact that you see the response with touch in a sighted person is not shocking because they’re visually imagining what they’re feeling,” she says.

The researchers then performed the same experiments, using tactile input only, with 15 subjects who reported being blind since birth. To their surprise, they found that the brain showed face-specific activity in the same area as the sighted subjects, at levels similar to when sighted people handled the 3D-printed faces.

“When we saw it in the first few subjects, it was really shocking, because no one had seen individual face-specific activations in the fusiform gyrus in blind subjects previously,” Murty says.

Patterns of connection

The researchers also explored several hypotheses that have been put forward to explain why face-selectivity always seems to develop in the same region of the brain. One prominent hypothesis suggests that the FFA develops face-selectivity because it receives visual input from the fovea (the center of the retina), and we tend to focus on faces at the center of our visual field. However, since this region developed in blind people with no foveal input, the new findings do not support this idea.

Another hypothesis is that the FFA has a natural preference for curved shapes. To test that idea, the researchers performed another set of experiments in which they asked the blind subjects to handle a variety of 3D-printed shapes, including cubes, spheres, and eggs. They found that the FFA did not show any preference for the curved objects over the cube-shaped objects.

The researchers did find evidence for a third hypothesis, which is that face selectivity arises in the FFA because of its connections to other parts of the brain. They were able to measure the FFA’s “connectivity fingerprint” — a measure of the correlation between activity in the FFA and activity in other parts of the brain — in both blind and sighted subjects.

They then used the data from each group to train a computer model to predict the exact location of the brain’s selective response to faces based on the FFA connectivity fingerprint. They found that when the model was trained on data from sighted subjects, it could accurately predict the results in blind subjects, and vice versa. They also found evidence that connections to the frontal and parietal lobes of the brain, which are involved in high-level processing of sensory information, may be the most important in determining the role of the FFA.
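In spirit, this cross-group prediction resembles the sketch below: fit a regression from each voxel’s connectivity fingerprint to its face selectivity in one group, then test the fitted model on the other group. Everything here — the simulated data, the number of voxels and connectivity targets, and the simple ridge solver — is an illustrative stand-in, not the study’s actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

n_voxels, n_targets = 500, 20  # candidate voxels; connectivity targets
true_w = rng.normal(size=n_targets)  # shared fingerprint-to-selectivity map

def simulate_group(noise=0.3):
    """Connectivity fingerprints X and face selectivity y for one group."""
    X = rng.normal(size=(n_voxels, n_targets))
    y = X @ true_w + noise * rng.normal(size=n_voxels)
    return X, y

X_sighted, y_sighted = simulate_group()
X_blind, y_blind = simulate_group()

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X'X + lam*I)^-1 X'y."""
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)

# Train on one group, predict selectivity in the other
w = ridge_fit(X_sighted, y_sighted)
pred = X_blind @ w
r = np.corrcoef(pred, y_blind)[0, 1]
print(f"cross-group prediction r = {r:.2f}")
```

If the same connectivity-to-selectivity relationship holds in both groups, as the study found, the model trained on one group predicts the other well.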

“It’s suggestive of this very interesting story that the brain wires itself up in development not just by taking perceptual information and doing statistics on the input and allocating patches of brain, according to some kind of broadly agnostic statistical procedure,” Kanwisher says. “Rather, there are endogenous constraints in the brain present at birth, in this case, in the form of connections to higher-level brain regions, and these connections are perhaps playing a causal role in its development.”

The research was funded by the National Institutes of Health Shared Instrumentation Grant to the Athinoula Martinos Center at MIT, a National Eye Institute Training Grant, the Smith-Kettlewell Eye Research Institute’s Rehabilitation Engineering Research Center, an Office of Naval Research Vannevar Bush Faculty Fellowship, an NIH Pioneer Award, and a National Science Foundation Science and Technology Center Grant.


Learning from social isolation

“Livia Tomova, a postdoc in the Saxe Lab, recently completed a study about social isolation and its impact on the brain. Michelle Hung and I had a lot of exposure to her research in the lab. When “social distancing” measures hit MIT, we tried to process how the implementation of these policies would impact the landscape of our social lives.

We came up with some hypotheses and agreed that the coronavirus pandemic would fundamentally change life as we know it.

So we developed a survey to measure how the social behavior of MIT students, postdocs, and staff changes over the course of the pandemic. Our study is still in its very early stages, but it has been an incredibly fulfilling experience to be a part of Michelle’s development as a scientist.

Heather Kosakowski’s daughter in Woods Hole, Massachusetts. Photo: Heather Kosakowski

After the undergraduates left, graduate students were also strongly urged to leave graduate student housing. My daughter (age 11) and I live in a 28th-floor apartment and her school was canceled. One of my advisors, Nancy Kanwisher, had a vacant apartment in Woods Hole that she offered to let lab members stay in. As more and more resources for children were being shut down, I decided to take her up on the offer. Woods Hole is my daughter’s absolute favorite place, and I feel extremely lucky to have such a generous option. My daughter has been coping really well with all of these changes.

While my research is at an exciting stage, I miss being on campus with the students from my cohort and my lab mates and my weekly in-person meetings with my advisors. One way I’ve been coping with this reality is by listening to stories of other people’s experiences. We are all human and we are all in the midst of a pandemic, but we are all experiencing the pandemic in different ways. I find the diversity of our experience intriguing. I have been fortunate to have friends write stories about their experiences so that I can post them on my blog. I only have a handful of stories right now, but it has been really fun for me to listen, and humbling for me to share each individual’s unique experience.”


Heather Kosakowski is a graduate student in the labs of Rebecca Saxe and Nancy Kanwisher, where she studies the infant brain and the developmental origins of object recognition, language, and music. Heather is also a Marine Corps veteran and single mom who manages a blog that “ties together different aspects of my experience, past and present, with the hopes that it might make someone else out there feel less alone.”

#WeAreMcGovern

How dopamine drives brain activity

Using a specialized magnetic resonance imaging (MRI) sensor, MIT neuroscientists have discovered how dopamine released deep within the brain influences both nearby and distant brain regions.

Dopamine plays many roles in the brain, most notably related to movement, motivation, and reinforcement of behavior. However, until now it has been difficult to study precisely how a flood of dopamine affects neural activity throughout the brain. Using their new technique, the MIT team found that dopamine appears to exert significant effects in two regions of the brain’s cortex, including the motor cortex.

“There has been a lot of work on the immediate cellular consequences of dopamine release, but here what we’re looking at are the consequences of what dopamine is doing on a more brain-wide level,” says Alan Jasanoff, an MIT professor of biological engineering, brain and cognitive sciences, and nuclear science and engineering. Jasanoff is also an associate member of MIT’s McGovern Institute for Brain Research and the senior author of the study.

The MIT team found that in addition to the motor cortex, the remote brain area most affected by dopamine is the insular cortex. This region is critical for many cognitive functions related to perception of the body’s internal states, including physical and emotional states.

MIT postdoc Nan Li is the lead author of the study, which appears today in Nature.

Tracking dopamine

Like other neurotransmitters, dopamine helps neurons to communicate with each other over short distances. Dopamine holds particular interest for neuroscientists because of its role in motivation, addiction, and several neurodegenerative disorders, including Parkinson’s disease. Most of the brain’s dopamine is produced in the midbrain by neurons that connect to the striatum, where the dopamine is released.

For many years, Jasanoff’s lab has been developing tools to study how molecular phenomena such as neurotransmitter release affect brain-wide functions. At the molecular scale, existing techniques can reveal how dopamine affects individual cells, and at the scale of the entire brain, functional magnetic resonance imaging (fMRI) can reveal how active a particular brain region is. However, it has been difficult for neuroscientists to determine how single-cell activity and brain-wide function are linked.

“There have been very few brain-wide studies of dopaminergic function or really any neurochemical function, in large part because the tools aren’t there,” Jasanoff says. “We’re trying to fill in the gaps.”

About 10 years ago, his lab developed MRI sensors that consist of magnetic proteins that can bind to dopamine. When this binding occurs, the sensors’ magnetic interactions with surrounding tissue weaken, dimming the tissue’s MRI signal. This allows researchers to continuously monitor dopamine levels in a specific part of the brain.
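The sensor’s readout can be caricatured with a simple saturation-binding model: as dopamine concentration rises, a larger fraction of the sensor protein is bound, and the local MRI signal dims in proportion. The dissociation constant and dimming scale below are invented for illustration and are not the published sensor parameters:

```python
def bound_fraction(dopamine_uM, kd_uM=5.0):
    """Fraction of sensor bound, simple saturation-binding curve.

    kd_uM is a hypothetical dissociation constant, not a measured value.
    """
    return dopamine_uM / (dopamine_uM + kd_uM)

def mri_signal(dopamine_uM, baseline=1.0, max_dimming=0.2):
    """MRI signal dims in proportion to the bound fraction of sensor."""
    return baseline * (1.0 - max_dimming * bound_fraction(dopamine_uM))

# More dopamine -> more sensor bound -> dimmer signal
for c in (0.0, 1.0, 5.0, 50.0):
    print(f"{c:5.1f} uM dopamine -> signal {mri_signal(c):.3f}")
```

Reading dopamine out of the scan then amounts to inverting this curve: a measured signal drop maps back to a bound fraction, and hence to a concentration estimate.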

In their new study, Li and Jasanoff set out to analyze how dopamine released in the striatum of rats influences neural function both locally and in other brain regions. First, they injected their dopamine sensors into the striatum, which is located deep within the brain and plays an important role in controlling movement. Then they electrically stimulated a part of the brain called the lateral hypothalamus, which is a common experimental technique for rewarding behavior and inducing the brain to produce dopamine.

Next, the researchers used their dopamine sensor to measure dopamine levels throughout the striatum. They also performed traditional fMRI to measure neural activity in each part of the striatum. To their surprise, they found that high dopamine concentrations did not make neurons more active. However, higher dopamine levels did make the neurons remain active for a longer period of time.

“When dopamine was released, there was a longer duration of activity, suggesting a longer response to the reward,” Jasanoff says. “That may have something to do with how dopamine promotes learning, which is one of its key functions.”

Long-range effects

After analyzing dopamine release in the striatum, the researchers set out to determine how this dopamine might affect more distant locations in the brain. To do that, they performed traditional fMRI imaging on the brain while also mapping dopamine release in the striatum. “By combining these techniques we could probe these phenomena in a way that hasn’t been done before,” Jasanoff says.

The regions that showed the biggest surges in activity in response to dopamine were the motor cortex and the insular cortex. If confirmed in additional studies, the findings could help researchers understand the effects of dopamine in the human brain, including its roles in addiction and learning.

“Our results could lead to biomarkers that could be seen in fMRI data, and these correlates of dopaminergic function could be useful for analyzing animal and human fMRI,” Jasanoff says.

The research was funded by the National Institutes of Health and a Stanley Fahn Research Fellowship from the Parkinson’s Disease Foundation.

Uncovering the functional architecture of a historic brain area

In 1840 a patient named Leborgne was admitted to a hospital near Paris: he was only able to repeat the word “Tan.” This loss of speech drew the attention of Paul Broca who, after Leborgne’s death, identified lesions in his frontal lobe in the left hemisphere. These results echoed earlier findings from French neurologist Marc Dax. The region is now known as “Broca’s area,” and its roles have been extended to mental functions far beyond speech articulation, so much so that its underlying functional organization has become a source of discussion and some confusion.

McGovern Investigator Ev Fedorenko is now calling, in a paper in Trends in Cognitive Sciences, for recognition that Broca’s area consists of functionally distinct, specialized regions, with one sub-region very much dedicated to language processing.

“Broca’s area is one of the first regions you learn about in introductory psychology and neuroscience classes, and arguably laid the foundation for human cognitive neuroscience,” explains Ev Fedorenko, who is also an assistant professor in MIT’s Department of Brain and Cognitive Sciences. “This patch of cortex and its connections with other brain areas and networks provides a microcosm for probing some core questions about the human brain.”

Broca’s area, shown in red. Image: Wikimedia

Language is a uniquely human capability, and thus the discovery of Broca’s area immediately captured the attention of researchers.

“Because language is universal across cultures, but unique to the human species, studying Broca’s area and constraining theories of language accordingly promises to provide a window into one of the central abilities that make humans so special,” explains co-author Idan Blank, a former postdoc at the McGovern Institute who is now an assistant professor of psychology at UCLA.

Function over form

Broca’s area is found in the posterior portion of the left inferior frontal gyrus (LIFG). Arguments and theories abound as to its function. Some consider the region to be dedicated to language or syntactic processing, others argue that it processes multiple types of inputs, and still others argue that it works at a high level, implementing working memory and cognitive control. Is Broca’s area a highly specialized circuit, dedicated to the human-specific capacity for language and largely independent from the rest of high-level cognition, or is it a CPU-like region, overseeing diverse aspects of the mind and orchestrating their operations?

“Patient investigations and neuroimaging studies have now associated Broca’s region with many processes,” explains Blank. “On the one hand, its language-related functions have expanded far beyond articulation, on the other, non-linguistic functions within Broca’s area—fluid intelligence and problem solving, working memory, goal-directed behavior, inhibition, etc.—are fundamental to ‘all of cognition.’”

While brain anatomy is a common path to defining subregions in Broca’s area, Fedorenko and Blank argue that this approach can instead muddy the waters. In fact, the anatomy of the brain, in terms of the cortical folds and visible landmarks that originally stuck out to anatomists, varies from individual to individual in its alignment with the underlying functions of brain regions. While these variations might seem small, they can have a huge impact on conclusions about functional regions drawn with traditional analysis methods. This means that the same bit of anatomy (like, say, the posterior portion of a gyrus) could be doing different things in different brains.

“In both investigations of patients with brain damage and much of brain imaging work, a lot of confusion has stemmed from the use of macroanatomical areas (like the inferior frontal gyrus (IFG)) as ‘units of analysis’,” explains Fedorenko. “When some researchers found IFG activation for a syntactic manipulation, and others for a working memory manipulation, the field jumped to the conclusion that syntactic processing relies on working memory. But these effects might actually be arising in totally distinct parts of the IFG.”

The only way to circumvent this problem is to turn to functional data and aggregate information from functionally defined areas across individuals. Using this approach, across four lines of evidence from the last decade, Fedorenko and Blank came to a clear conclusion: Broca’s area is not a monolithic region with a single function, but contains distinct areas, one dedicated to language processing, and another that supports domain-general functions like working memory.
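A toy simulation (entirely illustrative, not from the paper) makes the logic concrete: if each subject’s functionally selective patch sits in a slightly different anatomical spot, averaging the same voxel across subjects washes the effect out, while averaging within each subject’s own functionally defined region recovers it:

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_voxels, patch = 20, 60, 6

# Each subject has a small selective patch whose anatomical position
# jitters from person to person (inter-individual variability).
effect = np.zeros((n_subjects, n_voxels))
for s in range(n_subjects):
    start = 20 + rng.integers(-8, 9)  # jittered patch location
    effect[s, start:start + patch] = 1.0
effect += 0.2 * rng.normal(size=effect.shape)  # measurement noise

# Anatomical approach: average the same voxel across all subjects,
# then take the strongest voxel-wise group effect.
anatomical_peak = effect.mean(axis=0).max()

# Functional approach: find each subject's own strongest voxels first,
# then average the response within each subject's functional ROI.
functional_mean = np.mean(
    [np.sort(effect[s])[-patch:].mean() for s in range(n_subjects)]
)

print(f"anatomical-average peak: {anatomical_peak:.2f}")
print(f"functional-ROI mean:     {functional_mean:.2f}")
```

The functional-ROI estimate stays near the true patch response, while the anatomical average is diluted by misaligned patches — the core of Fedorenko and Blank’s argument against macroanatomical units of analysis.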

“We just have to stop referring to macroanatomical brain regions (like gyri and sulci, or their parts) when talking about the functional architecture of the brain,” explains Fedorenko. “I am delighted to see that more and more labs across the world are recognizing the inter-individual variability that characterizes the human brain — this shift is putting us on the right path to making fundamental discoveries about how our brain works.”

Indeed, accounting for distinct functional regions, within Broca’s area and elsewhere, seems essential going forward if we are to truly understand the complexity of the human brain.