How motion conveys emotion in the face

While a static emoji can stand in for emotion, in real life we constantly read the feelings of others through subtle facial movements. The lift of an eyebrow, the flicker around the lips as a smile emerges, a subtle change around the eyes (or a sudden roll of the eyes) all feed into our ability to understand the emotional state, and the attitude, of others toward us. Ben Deen and Rebecca Saxe have now monitored changes in brain activity as subjects followed face movements in movies of avatars. Their findings argue that we can generalize across the movements of individual face parts in other people, but that a particular cortical region, the face-responsive superior temporal sulcus (fSTS), also responds to isolated movements of individual face parts. Indeed, the fSTS seems to be tied to kinematics, the movement of individual face parts, more than to the implied emotional cause of that movement.

We know that the brain responds to dynamic changes in facial expression, and that these are associated with activity in the fSTS, but how do calculations of these movements play out in the brain?

Do we understand emotional changes by adding up individual features (lifting of eyebrows + rounding of mouth = surprise), or are we assessing the entire face in a more holistic way that results in more generalized representations? McGovern Investigator Rebecca Saxe and her graduate student Ben Deen set out to answer this question using behavioral analysis and brain imaging, specifically fMRI.

“We had a good sense of what stimuli the fSTS responds strongly to,” explains Ben Deen, “but didn’t really have any sense of how those inputs are processed in the region – what sorts of features are represented, whether the representation is more abstract or more tied to visual features, etc. The hope was to use multivoxel pattern analysis, which has proven to be a remarkably useful method for characterizing representational content, to address these questions and get a better sense of what the region is actually doing.”

Facial movements were conveyed to subjects using animated “avatars.” By presenting avatars that made isolated eye and eyebrow movements (brow raise, eye closing, eye roll, scowl) or mouth movements (smile, frown, mouth opening, snarl), as well as composites of these movements, the researchers were able to assess whether our interpretation of a combined movement is distinct from the sum of its parts. To do this, Deen and Saxe first took a behavioral approach in which people reported which combinations of eye and mouth movements they saw, either in a whole avatar face or in one where the top and bottom halves of the face were misaligned. They found that movement in the mouth region can influence perception of movement in the eye region, arguably due to some degree of holistic processing. The authors then asked whether there were cortical differences between viewing isolated and combined face part movements. They found that activity patterns in the fSTS, but not in other brain regions, discriminated between different facial movements. Indeed, they could decode which part of the avatar’s face was perceived as moving from fSTS activity, and they could model the fSTS response to combined movements as a linear combination of the responses to individual face parts. In short, although the behavioral data indicate that complex facial movement is processed holistically, isolated parts-based representations are clearly also present, a sort of intermediate state.
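To make these two analysis ideas concrete, here is a minimal, hypothetical sketch (not the authors' actual pipeline) of how one might decode which face part moved from multivoxel patterns with a simple correlation-based classifier, and how one might test whether the pattern evoked by a combined movement is approximated by a linear combination of the patterns evoked by its parts. The voxel patterns, noise levels, and weights are all simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 100

# Hypothetical "true" fSTS patterns for an isolated eye movement and an
# isolated mouth movement (purely simulated; not real data).
eye_pattern = rng.normal(size=n_voxels)
mouth_pattern = rng.normal(size=n_voxels)

def noisy(pattern, noise=0.8):
    """Simulate one measured multivoxel response: pattern plus measurement noise."""
    return pattern + rng.normal(scale=noise, size=pattern.shape)

# (1) Correlation-based decoding: label a new trial by whichever condition
# template its voxel pattern correlates with most strongly.
templates = {"eye": eye_pattern, "mouth": mouth_pattern}

def decode(trial):
    corrs = {name: np.corrcoef(trial, tmpl)[0, 1] for name, tmpl in templates.items()}
    return max(corrs, key=corrs.get)

trials = [("eye", noisy(eye_pattern)) for _ in range(50)] + \
         [("mouth", noisy(mouth_pattern)) for _ in range(50)]
accuracy = np.mean([decode(x) == label for label, x in trials])
print(f"decoding accuracy: {accuracy:.2f}")

# (2) Linear model of the combined-movement response: fit weights w such that
# response(eye + mouth) ~ w_eye * response(eye) + w_mouth * response(mouth).
combined = noisy(0.9 * eye_pattern + 0.7 * mouth_pattern)   # simulated combined response
X = np.column_stack([eye_pattern, mouth_pattern])
weights, *_ = np.linalg.lstsq(X, combined, rcond=None)
fit_r = np.corrcoef(X @ weights, combined)[0, 1]
print(f"fitted weights: {weights.round(2)}, fit correlation: {fit_r:.2f}")
```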

As part of this work, Deen and Saxe took the important step of pre-registering their experimental design and analysis plan, before collecting any data, on the Open Science Framework. This step allows others to more easily reproduce their analysis, since all parameters (the task subjects carry out, the number of subjects needed and the rationale for that number, and the scripts used to analyze the data) are openly available.

“Preregistration had a big impact on our workflow for the study,” explained Deen. “More of the work was done up front, in coming up with all of the analysis details and agonizing over whether we were choosing the right strategy, before seeing any of the data. When you tie your hands by making these decisions up front, you start thinking much more carefully about them.”

Pre-registration also removes post-hoc researcher subjectivity from the analysis. For example, because Deen and Saxe predicted that people would be able to discriminate the face movements accurately, they decided ahead of the experiment to focus their analysis on reaction time, rather than looking at the collected data and deciding on that measure after the fact. This adds to the overall objectivity of the experiment and is increasingly seen as a robust way to conduct such studies.

MRI sensor images deep brain activity

Calcium is a critical signaling ion for most cells, and it is especially important in neurons. Imaging calcium in brain cells can reveal how neurons communicate with each other; however, current imaging techniques can only penetrate a few millimeters into the brain.

MIT researchers have now devised a new way to image calcium activity that is based on magnetic resonance imaging (MRI) and allows them to peer much deeper into the brain. Using this technique, they can track signaling processes inside the neurons of living animals, enabling them to link neural activity with specific behaviors.

“This paper describes the first MRI-based detection of intracellular calcium signaling, which is directly analogous to powerful optical approaches used widely in neuroscience but now enables such measurements to be performed in vivo in deep tissue,” says Alan Jasanoff, an MIT professor of biological engineering, brain and cognitive sciences, and nuclear science and engineering, and an associate member of MIT’s McGovern Institute for Brain Research.

Jasanoff is the senior author of the paper, which appears in the Feb. 22 issue of Nature Communications. MIT postdocs Ali Barandov and Benjamin Bartelle are the paper’s lead authors. MIT senior Catherine Williamson, recent MIT graduate Emily Loucks, and Arthur Amos Noyes Professor Emeritus of Chemistry Stephen Lippard are also authors of the study.

Getting into cells

In their resting state, neurons have very low calcium levels. However, when they fire an electrical impulse, calcium floods into the cell. Over the past several decades, scientists have devised ways to image this activity by labeling calcium with fluorescent molecules. This can be done in cells grown in a lab dish, or in the brains of living animals, but this kind of microscopy imaging can only penetrate a few tenths of a millimeter into the tissue, limiting most studies to the surface of the brain.

“There are amazing things being done with these tools, but we wanted something that would allow ourselves and others to look deeper at cellular-level signaling,” Jasanoff says.

To achieve that, the MIT team turned to MRI, a noninvasive technique that works by detecting magnetic interactions between an injected contrast agent and water molecules inside cells.

Many scientists have been working on MRI-based calcium sensors, but the major obstacle has been developing a contrast agent that can get inside brain cells. Last year, Jasanoff’s lab developed an MRI sensor that can measure extracellular calcium concentrations, but it was based on nanoparticles too large to enter cells.

To create their new intracellular calcium sensors, the researchers used building blocks that can pass through the cell membrane. The contrast agent contains manganese, a paramagnetic metal that interacts with magnetic fields, bound to an organic compound that can penetrate cell membranes. This complex also contains a calcium-binding arm called a chelator.

Once inside the cell, if calcium levels are low, the calcium chelator binds weakly to the manganese atom, shielding the manganese from MRI detection. When calcium flows into the cell, the chelator binds to the calcium and releases the manganese, which makes the contrast agent appear brighter in an MRI image.
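As a rough, hypothetical illustration of that switching behavior (not the published sensor chemistry), the toy model below treats the fraction of sensor molecules whose manganese is exposed as a saturable function of calcium concentration and maps it onto an MRI relaxation rate. The binding constant, relaxation rates, and concentrations are placeholder values chosen only to show the shape of the response.

```python
import numpy as np

def exposed_fraction(ca_uM, kd_uM=1.0, hill=1.0):
    """Toy saturable binding curve: fraction of sensors whose manganese is
    released (and therefore MRI-visible) at a given calcium concentration."""
    return ca_uM**hill / (kd_uM**hill + ca_uM**hill)

def relaxation_rate(ca_uM, r1_shielded=1.0, r1_exposed=5.0):
    """Map the exposed fraction onto a longitudinal relaxation rate R1 (1/s);
    a higher R1 means a brighter T1-weighted image. Values are hypothetical."""
    f = exposed_fraction(ca_uM)
    return r1_shielded + (r1_exposed - r1_shielded) * f

# Resting neurons sit near 0.1 micromolar free calcium; stimulation can drive a
# more than tenfold rise (per the quote below), e.g. to roughly 1-10 micromolar.
for ca in [0.1, 1.0, 10.0]:
    print(f"[Ca2+] = {ca:5.1f} uM -> R1 = {relaxation_rate(ca):.2f} 1/s")
```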

“When neurons, or other brain cells called glia, become stimulated, they often experience more than tenfold increases in calcium concentration. Our sensor can detect those changes,” Jasanoff says.

Precise measurements

The researchers tested their sensor in rats by injecting it into the striatum, a region deep within the brain that is involved in planning movement and learning new behaviors. They then used potassium ions to stimulate electrical activity in neurons of the striatum, and were able to measure the calcium response in those cells.

Jasanoff hopes to use this technique to identify small clusters of neurons that are involved in specific behaviors or actions. Because this method directly measures signaling within cells, it can offer much more precise information about the location and timing of neuron activity than traditional functional MRI (fMRI), which measures blood flow in the brain.

“This could be useful for figuring out how different structures in the brain work together to process stimuli or coordinate behavior,” he says.

In addition, this technique could be used to image calcium as it performs many other roles, such as facilitating the activation of immune cells. With further modification, it could also one day be used to perform diagnostic imaging of the brain or other organs whose functions rely on calcium, such as the heart.

The research was funded by the National Institutes of Health and the MIT Simons Center for the Social Brain.

Mapping the brain at high resolution

Researchers have developed a new way to image the brain with unprecedented resolution and speed. Using this approach, they can locate individual neurons, trace connections between them, and visualize organelles inside neurons, over large volumes of brain tissue.

The new technology combines a method for expanding brain tissue, making it possible to image at higher resolution, with a rapid 3-D microscopy technique known as lattice light-sheet microscopy. In a paper appearing in Science Jan. 17, the researchers showed that they could use these techniques to image the entire fruit fly brain, as well as large sections of the mouse brain, much faster than has previously been possible. The team includes researchers from MIT, the University of California at Berkeley, the Howard Hughes Medical Institute, and Harvard Medical School/Boston Children’s Hospital.

This technique allows researchers to map large-scale circuits within the brain while also offering unique insight into individual neurons’ functions, says Edward Boyden, the Y. Eva Tan Professor in Neurotechnology, an associate professor of biological engineering and of brain and cognitive sciences at MIT, and a member of MIT’s McGovern Institute for Brain Research, Media Lab, and Koch Institute for Integrative Cancer Research.

“A lot of problems in biology are multiscale,” Boyden says. “Using lattice light-sheet microscopy, along with the expansion microscopy process, we can now image at large scale without losing sight of the nanoscale configuration of biomolecules.”

Boyden is one of the study’s senior authors, along with Eric Betzig, a senior fellow at the Janelia Research Campus and a professor of physics and molecular and cell biology at UC Berkeley. The paper’s lead authors are MIT postdoc Ruixuan Gao, former MIT postdoc Shoh Asano, and Harvard Medical School Assistant Professor Srigokul Upadhyayula.

Large-scale imaging

In 2015, Boyden’s lab developed a way to generate very high-resolution images of brain tissue using an ordinary light microscope. Their technique relies on expanding tissue before imaging it, allowing them to image the tissue at a resolution of about 60 nanometers. Previously, this kind of imaging could be achieved only with very expensive high-resolution microscopes, known as super-resolution microscopes.
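The arithmetic behind that figure is simple: physically enlarging the tissue lets a diffraction-limited light microscope resolve proportionally finer native structure. Below is a back-of-the-envelope sketch, assuming a roughly 250-nanometer diffraction limit and a roughly fourfold linear expansion; both are typical ballpark values rather than numbers taken from the paper.

```python
# Back-of-the-envelope effective resolution of expansion microscopy.
# Both numbers below are assumed ballpark values, not figures from the paper.
diffraction_limit_nm = 250.0   # approximate resolution of a conventional light microscope
linear_expansion = 4.0         # approximate fold-expansion of the tissue in each dimension

effective_resolution_nm = diffraction_limit_nm / linear_expansion
print(f"effective resolution: ~{effective_resolution_nm:.0f} nm")  # ~62 nm, on the order of the ~60 nm quoted above
```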

In the new study, Boyden teamed up with Betzig and his colleagues at HHMI’s Janelia Research Campus to combine expansion microscopy with lattice light-sheet microscopy. This technology, which Betzig developed several years ago, has some key traits that make it ideal to pair with expansion microscopy: It can image large samples rapidly, and it induces much less photodamage than other fluorescent microscopy techniques.

“The marrying of the lattice light-sheet microscope with expansion microscopy is essential to achieve the sensitivity, resolution, and scalability of the imaging that we’re doing,” Gao says.

Imaging expanded tissue samples generates huge amounts of data — up to tens of terabytes per sample — so the researchers also had to devise highly parallelized computational image-processing techniques that could break down the data into smaller chunks, analyze it, and stitch it back together into a coherent whole.
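The actual pipeline described in the paper is far more sophisticated, but the underlying chunk-process-stitch pattern can be sketched briefly. The sketch below is only a schematic under assumed parameters: it splits a volume into non-overlapping blocks and processes them in parallel with a placeholder operation, whereas a real pipeline would use overlapping chunks, heavier per-block analyses, and careful blending at the seams.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def process_block(block):
    """Placeholder per-block analysis; a real pipeline would deconvolve,
    segment, or detect puncta here."""
    return block > block.mean()   # toy operation: per-block thresholding

def process_volume(volume, block_edge=64):
    """Split a 3-D volume into blocks, process them in parallel, and stitch
    the results back into a volume of the same shape."""
    starts = [range(0, s, block_edge) for s in volume.shape]
    coords, blocks = [], []
    for z in starts[0]:
        for y in starts[1]:
            for x in starts[2]:
                coords.append((z, y, x))
                blocks.append(volume[z:z + block_edge,
                                     y:y + block_edge,
                                     x:x + block_edge])
    out = np.zeros(volume.shape, dtype=bool)
    with ProcessPoolExecutor() as pool:
        for (z, y, x), result in zip(coords, pool.map(process_block, blocks)):
            out[z:z + result.shape[0],
                y:y + result.shape[1],
                x:x + result.shape[2]] = result
    return out

if __name__ == "__main__":
    volume = np.random.rand(128, 128, 128)   # stand-in for a much larger image volume
    mask = process_volume(volume)
    print(mask.shape, mask.mean())
```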

In the Science paper, the researchers demonstrated the power of their new technique by imaging layers of neurons in the somatosensory cortex of mice, after expanding the tissue fourfold. They focused on a type of neuron known as the pyramidal cell, one of the most common excitatory neurons in the nervous system. To locate synapses, or connections, between these neurons, they labeled proteins found in the presynaptic and postsynaptic regions of the cells. This also allowed them to compare the density of synapses in different parts of the cortex.

Using this technique, it is possible to analyze millions of synapses in just a few days.

“We counted clusters of postsynaptic markers across the cortex, and we saw differences in synaptic density in different layers of the cortex,” Gao says. “Using electron microscopy, this would have taken years to complete.”

The researchers also studied patterns of axon myelination in different neurons. Myelin is a fatty substance that insulates axons, and its disruption is a hallmark of multiple sclerosis. The researchers were able to compute the thickness of the myelin coating along different segments of axons, and they measured the gaps between stretches of myelin, known as nodes of Ranvier, which are important because they help conduct electrical signals. Previously, this kind of myelin tracing would have taken human annotators months to years to perform.
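As a simplified, hypothetical version of that kind of measurement (not the annotation pipeline used in the study), the sketch below takes a binary myelin mask sampled along a traced axon and reports the length of each myelinated stretch and each gap using run-length encoding. The mask and voxel spacing are invented for illustration.

```python
import numpy as np

def run_lengths(mask, voxel_size_nm):
    """Return (is_myelinated, length_nm) for each run of identical values in a
    1-D binary myelin mask sampled along an axon trace."""
    mask = np.asarray(mask, dtype=np.int8)
    # Indices where the value changes mark run boundaries.
    boundaries = np.flatnonzero(np.diff(mask)) + 1
    starts = np.concatenate(([0], boundaries))
    ends = np.concatenate((boundaries, [mask.size]))
    return [(bool(mask[s]), (e - s) * voxel_size_nm) for s, e in zip(starts, ends)]

# Toy example: 1 = myelinated voxel, 0 = gap, sampled every 100 nm along an axon.
axon_mask = [1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1]
for myelinated, length_nm in run_lengths(axon_mask, voxel_size_nm=100):
    label = "myelinated segment" if myelinated else "gap"
    print(f"{label}: {length_nm} nm")
```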

This technology can also be used to image tiny organelles inside neurons. In the new paper, the researchers identified mitochondria and lysosomes, and they also measured variations in the shapes of these organelles.

Circuit analysis

The researchers demonstrated that this technique could be used to analyze brain tissue from other organisms as well; they used it to image the entire brain of the fruit fly, which is the size of a poppy seed and contains about 100,000 neurons. In one set of experiments, they traced an olfactory circuit that extends across several brain regions, imaged all dopaminergic neurons, and counted all synapses across the brain. By comparing multiple animals, they also found differences in the numbers and arrangements of synaptic boutons within each animal’s olfactory circuit.

In future work, Boyden envisions that this technique could be used to trace circuits that control memory formation and recall, to study how sensory input leads to a specific behavior, or to analyze how emotions are coupled to decision-making.

“These are all questions at a scale that you can’t answer with classical technologies,” he says.

The system could also have applications beyond neuroscience, Boyden says. His lab is planning to work with other researchers to study how HIV evades the immune system, and the technology could also be adapted to study how cancer cells interact with surrounding cells, including immune cells.

The research was funded by John Doerr, K. Lisa Yang and Y. Eva Tan, the Open Philanthropy Project, the National Institutes of Health, the Howard Hughes Medical Institute, the HHMI-Simons Faculty Scholars Program, the U.S. Army Research Laboratory and Army Research Office, the US-Israel Binational Science Foundation, Biogen, and Ionis Pharmaceuticals.

Rebecca Saxe

Mind Reading

How do we think about the thoughts of other people? How are some thoughts universal and others specific to a culture or an individual?

Rebecca Saxe is tackling these and other thorny questions surrounding human thought in adults, children, and infants. Leveraging behavioral testing, brain imaging, and computational modeling, her lab is focusing on a diverse set of research questions including what people learn from punishment, the role of generosity in social relationships, and the navigation and language abilities in toddlers. The team is also using computational models to deconstruct complex thought processes, such as how humans predict the emotions of others. This research not only expands the junction of sociology and neuroscience, but also unravels—and gives clarity to—the social threads that form the fabric of society.

Virtual Tour of Saxe Lab

Alan Jasanoff

Next Generation Brain Imaging

One of the greatest challenges of modern neuroscience is to relate high-level operations of the brain and mind to well-defined biological processes that arise from molecules and cells. The Jasanoff lab is creating a suite of experimental approaches designed to achieve this by permitting brain-wide dynamics of neural signaling and plasticity to be imaged for the first time, with molecular specificity. These potentially transformative approaches use novel probes detectable by magnetic resonance imaging (MRI) and other noninvasive readouts. The probes afford qualitatively new ways to study healthy and pathological aspects of integrated brain function in mechanistically informative detail, in animals and possibly also people.

Nancy Kanwisher

Architecture of the Mind

What is the nature of the human mind? Philosophers have debated this question for centuries, but Nancy Kanwisher approaches it empirically, using brain imaging to look for components of the human mind that reside in particular regions of the brain. Her lab has identified cortical regions that are selectively engaged in the perception of faces, places, and bodies, as well as other regions specialized for uniquely human functions including music, language, and thinking about other people’s thoughts. More recently, her lab has begun using artificial neural networks to unpack these findings and examine why, from a computational standpoint, the brain exhibits functional specialization in the first place.

John Gabrieli

Images of Mind

John Gabrieli’s goal is to understand the organization of memory, thought, and emotion in the human brain. In collaboration with clinical colleagues, Gabrieli uses brain imaging to better understand, diagnose, and select treatments for neurological and psychiatric diseases.

A major focus of the Gabrieli lab is the neural basis of learning in children. His team found structural differences in the brains of young children who are at risk for reading difficulties, even before they start learning to read. By studying these differences in children, Gabrieli hopes to identify ways to improve learning in the classroom and inform effective educational policies and practices.

Gabrieli is also interested in using the tools of neuroscience to personalize medicine. His team showed that brain scans can identify children who are vulnerable to depression before symptoms even appear, opening the possibility of earlier interventions to prevent episodes of depression. Brain scans may also help predict which individuals with social anxiety disorder are most likely to benefit from a particular therapeutic intervention. Gabrieli’s team continues to explore the role of neuroimaging in other brain disorders, including schizophrenia, addiction, and bipolar disorder.

His team also studies a range of other research topics, including new strategies to cope with emotional stress, the benefits of mindfulness for academic performance and mental health, and the value of embracing neurodiversity to better understand autism.

Satrajit Ghosh

Personalized Medicine

A fundamental problem in psychiatry is that there are no biological markers for diagnosing mental illness or for indicating how best to treat it. Treatment decisions are based entirely on symptoms, and doctors and their patients will typically try one treatment, then if it does not work, try another, and perhaps another. Satrajit Ghosh hopes to change this picture, and his research suggests that individual brain scans and speaking patterns can hold valuable information for guiding psychiatrists and patients. His research group develops novel analytic platforms that use such information to create robust, predictive models around human health. Current areas include depression, suicide, anxiety disorders, autism, Parkinson’s disease, and brain tumors.

Robert Desimone

Paying Attention

Our brains are constantly bombarded with sensory information. The ability to distinguish relevant information from irrelevant distractions is a critical skill, one that is impaired in many brain disorders. By studying the visual system of humans and animals, Robert Desimone has shown that when we attend to something specific, neurons in certain brain regions fire in unison – like a chorus rising above the noise – allowing the relevant information to be “heard” more efficiently by other regions of the brain.

Brain activity pattern may be early sign of schizophrenia

Schizophrenia, a brain disorder that produces hallucinations, delusions, and cognitive impairments, usually strikes during adolescence or young adulthood. While some signs can suggest that a person is at high risk for developing the disorder, there is no way to definitively diagnose it until the first psychotic episode occurs.

MIT neuroscientists working with researchers at Beth Israel Deaconess Medical Center, Brigham and Women’s Hospital, and the Shanghai Mental Health Center have now identified a pattern of brain activity correlated with development of schizophrenia, which they say could be used as a marker to diagnose the disease earlier.

“You can consider this pattern to be a risk factor. If we use these types of brain measurements, then maybe we can predict a little bit better who will end up developing psychosis, and that may also help tailor interventions,” says Guusje Collin, a visiting scientist at MIT’s McGovern Institute for Brain Research and the lead author of the paper.

The study, which appeared in the journal Molecular Psychiatry on Nov. 8, was performed at the Shanghai Mental Health Center. Susan Whitfield-Gabrieli, a visiting scientist at the McGovern Institute and a professor of psychology at Northeastern University, is one of the principal investigators for the study, along with Jijun Wang of the Shanghai Mental Health Center, William Stone of Beth Israel Deaconess Medical Center, the late Larry Seidman of Beth Israel Deaconess Medical Center, and Martha Shenton of Brigham and Women’s Hospital.

Abnormal connections

Before they experience a psychotic episode, characterized by sudden changes in behavior and a loss of touch with reality, patients can experience milder symptoms such as disordered thinking. This kind of thinking can lead to behaviors such as jumping from topic to topic at random, or giving answers unrelated to the original question. Previous studies have shown that about 25 percent of people who experience these early symptoms go on to develop schizophrenia.

The research team performed the study at the Shanghai Mental Health Center because the huge volume of patients who visit the hospital annually gave them a large enough sample of people at high risk of developing schizophrenia.

The researchers followed 158 people between the ages of 13 and 34 who were identified as high-risk because they had experienced early symptoms. The team also included 93 control subjects, who did not have any risk factors. At the beginning of the study, the researchers used functional magnetic resonance imaging (fMRI) to measure a type of brain activity involving “resting state networks.” Resting state networks consist of brain regions that preferentially connect and communicate with each other when the brain is not performing any particular cognitive task.
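Functional connectivity in such resting state analyses is commonly quantified by correlating the fMRI time courses of different brain regions; regions whose spontaneous activity rises and falls together are grouped into a network. The minimal sketch below shows that core computation on simulated data; the region names, time series, and any apparent groupings are invented for illustration and do not reflect the study’s actual analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
regions = ["superior_temporal_gyrus", "limbic_A", "sensorimotor_A", "sensorimotor_B"]
n_timepoints = 200

# Simulated resting-state time courses: give three regions a shared slow
# fluctuation so they form a "network" in this toy example.
shared = rng.normal(size=n_timepoints)
timeseries = np.vstack([
    shared + 0.5 * rng.normal(size=n_timepoints),   # superior_temporal_gyrus
    rng.normal(size=n_timepoints),                  # limbic_A (independent here)
    shared + 0.5 * rng.normal(size=n_timepoints),   # sensorimotor_A
    shared + 0.5 * rng.normal(size=n_timepoints),   # sensorimotor_B
])

# Functional connectivity = pairwise correlation of regional time courses.
connectivity = np.corrcoef(timeseries)

for i, a in enumerate(regions):
    for j in range(i + 1, len(regions)):
        print(f"{a} <-> {regions[j]}: r = {connectivity[i, j]:+.2f}")
```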

“We were interested in looking at the intrinsic functional architecture of the brain to see if we could detect early aberrant brain connectivity or networks in individuals who are in the clinically high-risk phase of the disorder,” Whitfield-Gabrieli says.

One year after the initial scans, 23 of the high-risk patients had experienced a psychotic episode and were diagnosed with schizophrenia. In those patients’ scans, taken before their diagnosis, the researchers found a distinctive pattern of activity that was different from the healthy control subjects and the at-risk subjects who had not developed psychosis.

For example, in most people, a part of the brain known as the superior temporal gyrus, which is involved in auditory processing, is highly connected to brain regions involved in sensory perception and motor control. However, in patients who developed psychosis, the superior temporal gyrus became more connected to limbic regions, which are involved in processing emotions. This could help explain why patients with schizophrenia usually experience auditory hallucinations, the researchers say.

Meanwhile, the high-risk subjects who did not develop psychosis showed network connectivity nearly identical to that of the healthy subjects.

Early intervention

This type of distinctive brain activity could be useful as an early indicator of schizophrenia, especially since it is possible that it could be seen in even younger patients. The researchers are now performing similar studies with younger at-risk populations, including children with a family history of schizophrenia.

“That really gets at the heart of how we can translate this clinically, because we can get in earlier and earlier to identify aberrant networks in the hopes that we can do earlier interventions, and possibly even prevent psychiatric disorders,” Whitfield-Gabrieli says.

She and her colleagues are now testing early interventions that could help to combat the symptoms of schizophrenia, including cognitive behavioral therapy and neural feedback. The neural feedback approach involves training patients to use mindfulness meditation to reduce activity in the superior temporal gyrus, which tends to increase before and during auditory hallucinations.

The researchers also plan to continue following the patients in the current study, and they are now analyzing some additional data on the white matter connections in the brains of these patients, to see if those connections might yield additional differences that could also serve as early indicators of disease.

The research was funded by the National Institutes of Health, the Ministry of Science and Technology of China, and the Poitras Center for Psychiatric Disorders Research at MIT. Collin was supported by a Marie Curie Global Fellowship grant from the European Commission.