International Dyslexia Association recognizes John Gabrieli with highest honor

Cognitive neuroscientist John Gabrieli has been named the 2021 winner of the Samuel Torrey Orton Award, the International Dyslexia Association’s highest honor. The award recognizes achievements of leading researchers and practitioners in the dyslexia field, as well as those of individuals with dyslexia who exhibit leadership and serve as role models in their communities.

“I am grateful to the International Dyslexia Association for this recognition,” said Gabrieli, who is the Grover Hermann Professor of Health Sciences and Technology, a professor of brain and cognitive sciences, and a member of MIT’s McGovern Institute for Brain Research. “The association has been such an advocate for individuals and their families who struggle with dyslexia, and has also been such a champion for the relevant science. I am humbled to join the company of previous recipients of this award who have done so much to help us understand dyslexia and how individuals with dyslexia can be supported to flourish in their growth and development.”

Gabrieli, who is also the director of MIT’s Athinoula A. Martinos Imaging Center, uses neuroimaging and behavioral tests to understand how the human brain powers learning, thinking, and feeling.  For the last two decades, Gabrieli has sought to unravel the neuroscience behind learning and reading disabilities and, ultimately, convert that understanding into new and better education interventions—a sort of translational medicine for the classroom.

“We want to get every kid to be an adequate reader by the end of the third grade,” Gabrieli says. “That’s the ultimate goal: to help all children become learners.”

In March of 2018, Gabrieli and the MIT Integrated Learning Initiative—MITili, which he also directs—announced a $30 million grant from the Chan Zuckerberg Initiative for a collaboration between MIT, the Harvard Graduate School of Education, and Florida State University. This partnership, called “Reach Every Reader,” aims to make significant progress on the crisis in early literacy – including tools to identify children at risk for dyslexia and other learning disabilities before they even learn to read.

“John is especially deserving of this award,” says Hugh Catts, Gabrieli’s colleague at Reach Every Reader. Catts is a professor and director of the School of Communications Science and Disorders at Florida State University. “His work has been seminal to our understanding of the neural basis of learning and learning difficulties such as dyslexia. He has been a strong advocate for individuals with dyslexia and a mentor to leading experts in the field,” says Catts, who also received the Orton Award, in 2008.

“It’s a richly deserved honor,” says Sanjay Sarma, the Fred Fort Flowers (1941) and Daniel Fort Flowers (1941) Professor of Mechanical Engineering at MIT. “John’s research is a cornerstone of MIT’s efforts to make education more equitable and accessible for all. His contributions to learning science inform so much of what we do, and his advocacy continues to raise public awareness of dyslexia and helps us better reach the dyslexic community through literacy initiatives such as Reach Every Reader. We’re so pleased that his work has been recognized with the Samuel Torrey Orton Award,” says Sarma, who is also Vice President for Open Learning at MIT.

Gabrieli will deliver the Samuel Torrey Orton and Joan Lyday Orton Memorial Lecture this fall in North Carolina as part of the 2021 International Dyslexia Association’s Annual Reading, Literacy and Learning Conference.

Exploring the unknown

McGovern Investigator Ed Boyden says his lab’s vision is clear.

“We want to understand how our brains take our sensory inputs, generate emotions and memories and decisions, and ultimately result in motor outputs. We want to be able to see the building blocks of life, and how they go into disarray in brain diseases. We want to be able to control the signals of the brain, so we can repair it,” Boyden says.

To get there, he and his team are exploring the brain’s complexity at every scale, from the function and architecture of its neural networks to the molecules that work together to process information.

And when they don’t have the tools to take them where they want to go, they create them, opening new frontiers for neuroscientists everywhere.

Open to discovery

Boyden’s team is highly interdisciplinary and collaborative. Its specialty, Boyden says, is problem solving. Creativity, adaptability, and deep curiosity are essential, because while many of neuroscience’s challenges are clear, the best way to address them is not. In its search for answers, Boyden’s lab is betting that an important path to discovery begins with finding new ways to explore.

They’ve made that possible with an innovative imaging approach called expansion microscopy (ExM). ExM physically enlarges biological samples so that minute details become visible under a standard laboratory microscope, enabling researchers everywhere to peer into spaces that once went unseen (see video below).

To use the technique, researchers permeate a biological sample with an absorbent gel, then add water, causing the components of the gel to spread apart and the tissue to expand.

This year, postdoctoral researcher Ruixuan Gao and graduate student Chih-Chieh (Jay) Yu made the method more precise, with a new material that anchors a sample’s molecules within a crystal-like lattice, better preserving structure during expansion than the irregular mesh-like composition of the original gel. The advance is an important step toward being able to image expanded samples with single-molecule precision, Gao says.

A revealing look

By opening space within the brain, ExM has let Boyden’s team venture into those spaces in new ways.

Graduate student Oz Wassie examines expanded brain tissue. Photo: Justin Knight

In work led by Deblina Sarkar (who is now an assistant professor at MIT’s Media Lab), Jinyoung Kang, and Asmamaw (Oz) Wassie, they showed that they can pull apart proteins in densely packed regions like synapses so that it is easier to introduce fluorescent labels, illuminating proteins that were once too crowded to see. The process, called expansion revealing, has made it possible to visualize in intact brain tissue important structures such as ion channels that help transmit signals and fine-scale amyloid clusters in Alzheimer’s model mice.

Another reaction the lab has adapted to the expanded-brain context is RNA sequencing—an important tool for understanding cellular diversity. “Typically, the first thing you do in a sequencing project is you grind up the tissue, and you lose the spatial dimension,” explains Daniel Goodwin, a graduate student in Boyden’s lab. But when sequencing reactions are performed inside cells instead, new information is revealed.

Confocal image showing targeted ExSeq of a 34-panel gene set across a slice of mouse hippocampus. Green indicates YFP, magenta indicates reads identified with ExSeq, and white indicates reads localized within YFP-expressing cells. Image courtesy of the researchers.

Goodwin and fellow Boyden lab members Shahar Alon, Anubhav Sinha, Oz Wassie, and Fei Chen developed expansion sequencing (ExSeq), which copies RNA molecules, nucleotide by nucleotide, directly inside expanded tissue, using fluorescent labels that spell out the molecules’ codes just as they would in a sequencer.

The approach shows researchers which genes are turned on in which cells, as well as where those RNA molecules are—revealing, for example, which genes are active in the neuronal projections that carry out the brain’s communications. A next step, Sinha says, is to integrate expansion sequencing with other technologies to obtain even deeper insights.

That might include combining information revealed with ExSeq with a topographical map of the same cells’ genomes, using a method Boyden’s lab and collaborators Chen (who is now a core member of the Broad Institute) and Jason Buenrostro at Harvard have developed for DNA sequencing. That information is important because the shape of the genome varies across cells and circumstances, and that has consequences for how the genetic code is used.

Using similar techniques to those that make ExSeq possible, graduate students Andrew Payne, Zachary Chiang, and Paul Reginato figured out how to recreate the steps of commercial DNA sequencing within the genome’s natural environment.

By pinpointing the location of specific DNA sequences inside cells, the new method, called in situ genome sequencing (IGS), allows researchers to watch a genome reorganize itself in a developing embryo.

They haven’t yet performed this analysis inside expanded tissue, but Payne says integrating IGS with ExM should open up new opportunities to study genome structure.

Signaling clusters

Alongside these efforts, Boyden’s team is working to give researchers better tools to explore how molecules move, change, and interact, including a modular system that lets users assemble sets of sensors into clusters to simultaneously monitor multiple cellular activities.

Molecular sensors use fluorescence to report on certain changes inside cells, such as the calcium that surges into a neuron after it fires. But they come in a limited palette, so in most experiments only one or two things can be seen at once.

Graduate student Shannon Johnson and postdoctoral fellow Changyang Linghu solved this problem by putting different sensors at different points throughout a cell so they can report on different signals. Their technique, called spatial multiplexing, links sensors to molecular scaffolds designed to cling to their own kind. Sensors built on the same scaffold form islands inside cells, so when they light up their signals are distinct from those produced by other sensor islands.

Eventually, as new sensors and scaffolds become available, Johnson says the technique might be used to simultaneously follow dozens of molecular signals in living cells. The more precise information they can help people uncover, the better, Boyden says.

“The brain is so full of surprises, we don’t know where the next big discovery will come from,” he says. With new support from the recently established K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics in Neuroscience, the Boyden lab is positioned to make these big discoveries.

“My dream would be to image the signaling dynamics of the brain, and then perturb the dynamics, and then use expansion methods to make a map of the brain. If we can get those three data sets—the dynamics, the causality, and the molecular organization—I think stitching those together could potentially yield deep insights into how the brain works, and how we can repair it in disease states,” Boyden says.

Abnormal brain connectivity may precede schizophrenia onset

The cerebellum is named “little brain” for its distinctive structure. Long considered important only for maintaining the balance and timing of movements, the cerebellum has since proven important for balanced thoughts and emotions as well, a diversity of functions that its modest nickname belies.

In a new study published in Schizophrenia Bulletin, McGovern research affiliate and Northeastern University professor of psychology Susan Whitfield-Gabrieli shows for the first time that cerebellar dysfunction precedes the onset of psychosis in schizophrenia, a brain disorder characterized by severe thought and emotional imbalances.

“This study exemplifies the concept of ‘neuroprediction,’ the discovery of brain-based biomarkers that allow early detection and therefore early intervention for mental disorders,” says Whitfield-Gabrieli.

Cerebellar connectivity and schizophrenia

Early evidence that the cerebellum is involved in more than movement came from numerous reports that people with brain damage originating in the cerebellum can have severely disordered thought processes. Now cerebellar abnormalities have been identified in numerous neurodevelopmental and neuropsychiatric conditions including autism, attention-deficit hyperactivity disorder (ADHD), Alzheimer’s disease, and schizophrenia.

Whitfield-Gabrieli has focused on how symptoms in these disorders correlate with how well the cerebellum is connected to other brain regions, including regions of the cerebral cortex, the characteristically folded, outer part of the brain. Active connections in the brain of people who are resting or who are engaged in a mental task can be found by functional magnetic resonance imaging (fMRI), a brain scanning technique that detects when and where oxygen is being used by cells. If oxygen usage in two brain regions consistently peaks at the same time while someone is in the scanner, they are considered to be functionally connected.
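In practice, that idea is often captured as a correlation between the two regions’ signal time courses. The snippet below is a minimal sketch of that calculation in Python, using simulated signals and hypothetical region names; it is an illustration of the general idea, not the study’s actual analysis pipeline.

```python
# Simplified sketch: functional connectivity as the Pearson correlation of two
# regions' fMRI (BOLD) time series. Simulated data; not the study's pipeline.
import numpy as np

def functional_connectivity(ts_a: np.ndarray, ts_b: np.ndarray) -> float:
    """Pearson correlation between two regions' signal time courses."""
    a = (ts_a - ts_a.mean()) / ts_a.std()
    b = (ts_b - ts_b.mean()) / ts_b.std()
    return float(np.mean(a * b))

# Simulate 200 time points: two regions that share a common fluctuation
# (and therefore tend to peak at the same time) plus independent noise.
rng = np.random.default_rng(0)
shared = rng.standard_normal(200)
cerebellar_region = shared + 0.5 * rng.standard_normal(200)
cortical_region = shared + 0.5 * rng.standard_normal(200)

print(f"functional connectivity r = {functional_connectivity(cerebellar_region, cortical_region):.2f}")
```

Values near 1 indicate strongly coupled regions, values near 0 indicate little shared activity, and negative values indicate regions whose signals move in opposite directions.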

Connectivity differences prior to psychosis

In her new study, Whitfield-Gabrieli explored whether brain scans could reveal cerebellar abnormalities in people at risk for schizophrenia.

To do this, she and her colleagues compared cerebellar connectivity among at-risk adolescents and young adults who went on to develop psychosis within the following year versus those who remained stable or improved. The at-risk participants were identified through an international collaboration called the Shanghai At Risk for Psychosis (SHARP) program, which recruited people seeking help at China’s largest outpatient mental health center. Of the 144 adolescents and young adults at risk for schizophrenia at the outset of the study, 23 went on to develop the disorder. Notably, this group showed fMRI patterns of cerebellar dysfunction at the outset of the study, before they developed psychosis.

Abnormal brain architecture

All of the brain scans were evaluated to determine the degree to which three specific cerebellar regions were connected to the cerebral cortex, a brain region that does not finish development until young adulthood. The cerebellar regions of interest to Whitfield-Gabrieli are part of the “dentate nuclei,” so named because they look like a set of jagged teeth. Neurons in the dentate nuclei serve to integrate inputs from the rest of the cerebellum and send the compiled information out to the rest of the brain. Whitfield-Gabrieli and colleagues divided the dentate nuclei into three zones according to what parts of the cerebral cortex they are functionally connected to while people are relaxing, doing visual tasks, or engaging in a motor task or receiving some sort of stimulation.

The team found abnormal connectivity for all three zones of the dentate nuclei in the individuals who later went on to develop schizophrenia. Since the connectivity patterns varied across regions within the three zones, with some regions over-connected and others under-connected to the cerebral cortex in the group that developed psychosis, separate high-resolution analyses of the individual connections were key.

Previous work established that cerebellar abnormalities are associated with schizophrenia, but this study is the first to show that functional connections between the deep cerebellar nuclei and the cerebral cortex might precede disease onset. “Treatments for mental disorders are inherently reactive to suffering and incapacity. A proactive approach by which abnormal brain architecture is identified prior to clinical diagnosis has the potential to prevent suffering by helping people before they become ill, one of my ultimate goals,” said Whitfield-Gabrieli.

This study was supported by the Poitras Center for Psychiatric Disorders Research at MIT, the US National Institute of Mental Health (R21 MH 093294, R01 MH 101052, R01 MH 111448, and R01 MH 64023), the Ministry of Science and Technology of China (2016 YFC 1306803), the European Union’s Horizon 2020 research and innovation program under Marie Sklodowska-Curie grant agreement No. 749201, and a VA Merit Award.

Investigating the embattled brain

Omar Rutledge served as a US Army infantryman in the 1st Armored and 25th Infantry Divisions. He was deployed in support of Operation Iraqi Freedom from March 2003 to July 2004. Photo: Omar Rutledge

As an Iraq war veteran, Omar Rutledge is deeply familiar with post-traumatic stress – recurring thoughts and memories that persist long after a danger has passed – and he knows that a brain altered by trauma is not easily fixed. But as a graduate student in the Department of Brain and Cognitive Sciences, Rutledge is determined to change that. He wants to understand exactly how trauma alters the brain – and whether the tools of neuroscience can be used to help fellow veterans with post-traumatic stress disorder (PTSD) heal from their experiences.

“In the world of PTSD research, I look to my left and to my right, and I don’t see other veterans, certainly not former infantrymen,” says Rutledge, who served in the US Army and was deployed to Iraq from March 2003 to July 2004. “If there are so few of us in this space, I feel like I have an obligation to make a difference for all who suffer from the traumatic experiences of war.”

Rutledge is uniquely positioned to make such a difference in the lab of McGovern Investigator John Gabrieli, where researchers use technologies like magnetic resonance imaging (MRI), electroencephalography (EEG), and magnetoencephalography (MEG) to peer into the human brain and explore how it powers our thoughts, memories, and emotions. Rutledge is studying how PTSD weakens the connection between the amygdala, which is responsible for emotions like fear, and the prefrontal cortex, which regulates or controls these emotional responses. He hopes these studies will eventually lead to the development of wearable technologies that can retrain the brain to be less responsive to triggering events.

“I feel like it has been a mission of mine to do this kind of work.”

Though Covid-19 has unexpectedly paused some aspects of his research, Rutledge is pursuing another line of inquiry, inspired both by the mandatory social distancing protocols imposed during the lockdown and by his own experiences with social isolation. Does chronic social isolation cause physical or chemical changes in the brain similar to those seen in PTSD? And does loneliness exacerbate symptoms of PTSD?

“There’s this hypervigilance that occurs in loneliness, and there’s also something very similar that occurs in PTSD — a heightened awareness of potential threats,” says Rutledge, the recipient of the Michael Ferrara Graduate Fellowship, provided by the Poitras Center and made possible by the many friends and family of Michael Ferrara. “The combination of the two may lead to more adverse reactions in people with PTSD.”

In the future, Rutledge hopes to explore whether chronic loneliness impairs reasoning and logic skills and has a deeper impact on veterans who have PTSD.

Although his research tends to resurface painful memories of his own combat experiences, Rutledge says if it can help other veterans heal, it’s worth it.  “In the process, it makes me a little bit stronger as well,” he adds.

Method offers inexpensive imaging at the scale of virus particles

Using an ordinary light microscope, MIT engineers have devised a technique for imaging biological samples with accuracy at the scale of 10 nanometers — which should enable them to image viruses and potentially even single biomolecules, the researchers say.

The new technique builds on expansion microscopy, an approach that involves embedding biological samples in a hydrogel and then expanding them before imaging them with a microscope. For the latest version of the technique, the researchers developed a new type of hydrogel that maintains a more uniform configuration, allowing for greater accuracy in imaging tiny structures.

This degree of accuracy could open the door to studying the basic molecular interactions that make life possible, says Edward Boyden, the Y. Eva Tan Professor in Neurotechnology, a professor of biological engineering and brain and cognitive sciences at MIT, and a member of MIT’s McGovern Institute for Brain Research and Koch Institute for Integrative Cancer Research.

“If you could see individual molecules and identify what kind they are, with single-digit-nanometer accuracy, then you might be able to actually look at the structure of life.”

“And structure, as a century of modern biology has told us, governs function,” says Boyden, who is the senior author of the new study.

The lead authors of the paper, which appears today in Nature Nanotechnology, are MIT Research Scientist Ruixuan Gao and Chih-Chieh “Jay” Yu PhD ’20. Other authors include Linyi Gao PhD ’20; former MIT postdoc Kiryl Piatkevich; Rachael Neve, director of the Gene Technology Core at Massachusetts General Hospital; James Munro, an associate professor of microbiology and physiological systems at University of Massachusetts Medical School; and Srigokul Upadhyayula, a former assistant professor of pediatrics at Harvard Medical School and an assistant professor in residence of cell and developmental biology at the University of California at Berkeley.

Low cost, high resolution

Many labs around the world have begun using expansion microscopy since Boyden’s lab first introduced it in 2015. With this technique, researchers physically enlarge their samples about fourfold in linear dimension before imaging them, allowing them to generate high-resolution images without expensive equipment. Boyden’s lab has also developed methods for labeling proteins, RNA, and other molecules in a sample so that they can be imaged after expansion.

“Hundreds of groups are doing expansion microscopy. There’s clearly pent-up demand for an easy, inexpensive method of nanoimaging,” Boyden says. “Now the question is, how good can we get? Can we get down to single-molecule accuracy? Because in the end, you want to reach a resolution that gets down to the fundamental building blocks of life.”

Other techniques such as electron microscopy and super-resolution imaging offer high resolution, but the equipment required is expensive and not widely accessible. Expansion microscopy, however, enables high-resolution imaging with an ordinary light microscope.

In a 2017 paper, Boyden’s lab demonstrated resolution of around 20 nanometers, using a process in which samples were expanded twice before imaging. This approach, as well as the earlier versions of expansion microscopy, relies on an absorbent polymer made from sodium polyacrylate, assembled using a method called free radical synthesis. These gels swell when exposed to water; however, one limitation of these gels is that they are not completely uniform in structure or density. This irregularity leads to small distortions in the shape of the sample when it’s expanded, limiting the accuracy that can be achieved.

To overcome this, the researchers developed a new gel called tetra-gel, which forms a more predictable structure. By combining tetrahedral PEG molecules with tetrahedral sodium polyacrylates, the researchers were able to create a lattice-like structure that is much more uniform than the free-radical synthesized sodium polyacrylate hydrogels they previously used.

Three-dimensional (3D) rendered movie of envelope proteins of a herpes simplex virus type 1 (HSV-1) virion expanded by tetra-gel (TG)-based three-round iterative expansion. The deconvolved puncta (white), the overlay of the deconvolved puncta (white) and the fitted centroids (red), and the extracted centroids (red) are shown from left to right. Expansion factor, 38.3×. Scale bars, 100 nm.
Credit: Ruixuan Gao and Boyden Lab

The researchers demonstrated the accuracy of this approach by using it to expand particles of herpes simplex virus type 1 (HSV-1), which have a distinctive spherical shape. After expanding the virus particles, the researchers compared the shapes to the shapes obtained by electron microscopy and found that the distortion was lower than that seen with previous versions of expansion microscopy, allowing them to achieve an accuracy of about 10 nanometers.
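A rough back-of-the-envelope sketch helps show why expansion buys resolution: the effective resolution scales roughly as the optical diffraction limit divided by the physical expansion factor. The numbers below are approximations for illustration (the ~300-nanometer diffraction limit is an assumed value for an ordinary light microscope, and the paper’s 10-nanometer figure reflects measured distortion, not just this simple scaling).

```python
# Back-of-the-envelope sketch (assumed numbers, not figures from the paper):
# effective resolution ~ optical diffraction limit / physical expansion factor.
DIFFRACTION_LIMIT_NM = 300.0  # assumed resolution of an ordinary light microscope

examples = [
    ("single ~4x expansion", 4.0),
    ("two rounds of expansion (~16x)", 16.0),
    ("three-round tetra-gel expansion (38.3x)", 38.3),
]
for label, factor in examples:
    print(f"{label}: ~{DIFFRACTION_LIMIT_NM / factor:.0f} nm effective resolution")
```

The outputs (roughly 75, 19, and 8 nanometers) land near the ~20-nanometer and ~10-nanometer figures quoted above.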

“We can look at how the arrangements of these proteins change as they are expanded and evaluate how close they are to the spherical shape. That’s how we validated it and determined how faithfully we can preserve the nanostructure of the shapes and the relative spatial arrangements of these molecules,” Ruixuan Gao says.

Single molecules

The researchers also used their new hydrogel to expand cells, including human kidney cells and mouse brain cells. They are now working on ways to improve the accuracy to the point where they can image individual molecules within such cells. One limitation on this degree of accuracy is the size of the antibodies used to label molecules in the cell, which are about 10 to 20 nanometers long. To image individual molecules, the researchers would likely need to create smaller labels or to add the labels after expansion was complete.

Left, HeLa cell with two-color labeling of clathrin-coated pits/vesicles and microtubules, expanded by TG-based two-round iterative expansion. Expansion factor, 15.6×. Scale bar, 10 μm (156 μm). Right, magnified view of the boxed region for each color channel. Scale bars, 1 μm (15.6 μm). Image: Boyden Lab

They are also exploring whether other types of polymers, or modified versions of the tetra-gel polymer, could help them realize greater accuracy.

If they can achieve accuracy down to single molecules, many new frontiers could be explored, Boyden says. For example, scientists could glimpse how different molecules interact with each other, which could shed light on cell signaling pathways, immune response activation, synaptic communication, drug-target interactions, and many other biological phenomena.

“We’d love to look at regions of a cell, like the synapse between two neurons, or other molecules involved in cell-cell signaling, and to figure out how all the parts talk to each other,” he says. “How do they work together and how do they go wrong in diseases?”

The research was funded by Lisa Yang, John Doerr, Open Philanthropy, the National Institutes of Health, the Howard Hughes Medical Institute Simons Faculty Scholars Program, the Intelligence Advanced Research Projects Activity, the U.S. Army Research Laboratory, the US-Israel Binational Science Foundation, the National Science Foundation, the Friends of the McGovern Fellowship, and the Fellows program of the Image and Data Analysis Core at Harvard Medical School.

What’s happening in your brain when you’re spacing out?

This story is adapted from a News@Northeastern post.

We all do it. One second you’re fully focused on the task in front of you, a conversation with a friend, or a professor’s lecture, and the next second your mind is wandering to your dinner plans.

But how does that happen?

“We spend so much of our daily lives engaged in things that are completely unrelated to what’s in front of us,” says Aaron Kucyi, neuroscientist and principal research scientist in the department of psychology at Northeastern. “And we know very little about how it works in the brain.”

So Kucyi and colleagues at Massachusetts General Hospital, Boston University, and the McGovern Institute at MIT started scanning people’s brains using functional magnetic resonance imaging (fMRI) to get an inside look. Their results, published Friday in the journal Nature Communications, add complexity to our understanding of the wandering mind.

It turns out that spacing out might not deserve the bad reputation that it receives. Many more parts of the brain seem to be engaged in mind-wandering than previously thought, supporting the idea that it’s actually a quite dynamic and fundamental function of our psychology.

“Many of those things that we do when we’re spacing out are very adaptive and important to our lives,” says Kucyi, the paper’s first author. We might be drafting an email in our heads while in the shower, or trying to remember the host’s spouse’s name while getting dressed for a party. Moments when our minds wander can allow space for creativity and planning for the future, he says, so it makes sense that many parts of the brain would be engaged in that kind of thinking.

But mind wandering may also be detrimental, especially for those suffering from mental illness, explains the study’s senior author, Susan Whitfield-Gabrieli. “For many of us, mind wandering may be a healthy, positive and constructive experience, like reminiscing about the past, planning for the future, or engaging in creative thinking,” says Whitfield-Gabrieli, a professor of psychology at Northeastern University and a McGovern Institute research affiliate. “But for those suffering from mental illness such as depression, anxiety or psychosis, reminiscing about the past may transform into ruminating about the past, planning for the future may become obsessively worrying about the future and creative thinking may evolve into delusional thinking.”

Identifying the brain circuits associated with mind wandering, she says, may reveal new targets and better treatment options for people suffering from these disorders.

McGovern research affiliate Susan Whitfield-Gabrieli in the Martinos Imaging Center.

Inside the wandering mind

To study wandering minds, the researchers first had to set up a situation in which people were likely to lose focus. They recruited test subjects at the McGovern Institute’s Martinos Imaging Center to complete a simple, repetitive, and rather boring task. With an fMRI scanner mapping their brain activity, participants were instructed to press a button whenever an image of a city scene appeared on a screen in front of them and withhold a response when a mountain image appeared.

Throughout the experiment, the subjects were asked whether they were focused on the task at hand. If a subject said their mind was wandering, the researchers took a close look at their brain scans from right before they reported loss of focus. The data was then fed into a machine-learning algorithm to identify patterns in the neurological connections involved in mind-wandering (called “stimulus-independent, task-unrelated thought” by the scientists).
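The sketch below shows the general shape of that kind of analysis in Python, with simulated data and a generic linear classifier; it is illustrative only and is not the connectome-based predictive modeling pipeline the authors actually used. Connectivity values for each pair of brain regions become features, and a model is trained to predict whether mind-wandering was reported.

```python
# Illustrative sketch only: predicting self-reported mind-wandering from
# functional-connectivity features. Simulated data and a generic linear
# classifier; NOT the authors' connectome-based predictive modeling code.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_reports, n_regions = 120, 20
n_edges = n_regions * (n_regions - 1) // 2  # one feature per region pair

# Rows: connectivity estimated in the window before each thought probe.
# Labels: 1 = participant reported mind-wandering, 0 = on task.
X = rng.standard_normal((n_reports, n_edges))
y = rng.integers(0, 2, size=n_reports)

model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")  # ~0.5 for this random data
```

With real data, above-chance cross-validated accuracy is what indicates that connectivity patterns carry information about whether attention has drifted.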

Scientists previously identified a specialized system in the brain considered to be responsible for mind-wandering. Called the “default mode network,” these parts of the brain activated when someone’s thoughts were drifting away from their immediate surroundings and deactivated when they were focused. The other parts of the brain, that theory went, were quiet when the mind was wandering, says Kucyi.

The researchers used a technique called “connectome-based predictive modeling” to identify patterns in the brain connections involved in mind-wandering. Image courtesy of the researchers.

The “default mode network” did light up in Kucyi’s data. But parts of the brain associated with other functions also appeared to activate when his subjects reported that their minds had wandered.

For example, the “default mode network” and networks in the brain related to controlling or maintaining a train of thought also seemed to be communicating with one another, perhaps helping explain the ability to go down a rabbit hole in your mind when you’re distracted from a task. There was also a noticeable lack of communication between the “default mode network” and the systems associated with sensory input, which makes sense, as the mind is wandering away from the person’s immediate environment.

“It makes sense that virtually the whole brain is involved,” Kucyi says. “Mind-wandering is a very complex operation in the brain and involves drawing from our memory, making predictions about the future, dynamically switching between topics that we’re thinking about, fluctuations in our mood, and engaging in vivid visual imagery while ignoring immediate visual input,” just to name a few functions.

The “default mode network” still seems to be key, Kucyi says. Computational analysis suggests that if you took the regions of the brain in that network out of the equation, the other brain regions would not be able to pick up the slack in mind-wandering.

Kucyi, however, didn’t just want to identify regions of the brain that lit up when someone said their mind was wandering. He also wanted to be able to use that generalized pattern of brain activity to be able to predict whether or not a subject would say that their focus had drifted away from the task in front of them.

That’s where the machine-learning analysis of the data came in. The idea, Kucyi says, is that “you could bring a new person into the scanner and not even ask them whether they were mind-wandering or not, and have a good estimate from their brain data whether they were.”

The ADHD brain

To test the patterns identified through machine learning, the researchers brought in a new set of test subjects – people diagnosed with ADHD. When the fMRI scans lit up the parts of the brain Kucyi and his colleagues had identified as engaged in mind-wandering in the first part of the study, the new test subjects reported that their thoughts had drifted from the images of cities and mountains in front of them. It worked.

Kucyi doesn’t expect fMRI scans to become a new way to diagnose ADHD, however. That wasn’t the goal. Perhaps down the road it could be used to help develop treatments, he suggests. But this study was focused on “informing the biological mechanisms behind it.”

John Gabrieli, a co-author on the study and director of the imaging center at MIT’s McGovern Institute, adds that “there is recent evidence that ADHD patients with more mind-wandering have many more everyday practical and clinical difficulties than ADHD patients with less mind-wandering. This is the first evidence about the brain basis for that important difference, and points to what neural systems ought to be the targets of intervention to help ADHD patients who struggle the most.”

For Kucyi, the study of “mind-wandering” goes beyond ADHD. And the contents of those straying thoughts may be telling, he says.

“We just asked people whether they were focused on the task or away from the task, but we have no idea what they were thinking about,” he says. “What are people thinking about? For example, are those more positive thoughts or negative thoughts?” Such answers, which he hopes to explore in future research, could help scientists better understand other pathologies such as depression and anxiety, which often involve rumination on upsetting or worrisome thoughts.

Whitfield-Gabrieli and her team are already exploring whether behavioral interventions, such as mindfulness based real-time fMRI neurofeedback, can be used to help train people suffering from mental illness to modulate their own brain networks and reduce hallucinations, ruminations, and other troubling symptoms.

“We hope that our research will have clinical implications that extend far beyond the potential for identifying treatment targets for ADHD,” she says.

Individual neurons responsible for complex social reasoning in humans identified

This story is adapted from a January 27, 2021 press release from Massachusetts General Hospital.

The ability to understand others’ hidden thoughts and beliefs is an essential component of human social behavior. Now, neuroscientists have for the first time identified specific neurons critical for social reasoning, a cognitive process that requires individuals to acknowledge and predict others’ hidden beliefs and thoughts.

The findings, published in Nature, open new avenues of study into disorders that affect social behavior, according to the authors.

In the study, a team of Harvard Medical School investigators based at Massachusetts General Hospital and colleagues from MIT took a rare look at how individual neurons represent the beliefs of others. They did so by recording neuron activity in patients undergoing neurosurgery to alleviate symptoms of motor disorders such as Parkinson’s disease.

Theory of mind

The research team, which included McGovern scientists Ev Fedorenko and Rebecca Saxe, focused on a complex social cognitive process called “theory of mind.” To illustrate this, let’s say a friend appears to be sad on her birthday. One may infer she is sad because she didn’t get a present or she is upset at growing older.

“When we interact, we must be able to form predictions about another person’s unstated intentions and thoughts,” said senior author Ziv Williams, HMS associate professor of neurosurgery at Mass General. “This ability requires us to paint a mental picture of someone’s beliefs, which involves acknowledging that those beliefs may be different from our own and assessing whether they are true or false.”

This social reasoning process develops during early childhood and is fundamental to successful social behavior. Individuals with autism, schizophrenia, bipolar affective disorder, and traumatic brain injuries are believed to have a deficit of theory-of-mind ability.

For the study, 15 patients agreed to perform brief behavioral tasks before undergoing neurosurgery for placement of deep-brain stimulation for motor disorders. Microelectrodes inserted into the dorsomedial prefrontal cortex recorded the behavior of individual neurons as patients listened to short narratives and answered questions about them.

For example, participants were presented with the following scenario to evaluate how they considered another’s belief of reality: “You and Tom see a jar on the table. After Tom leaves, you move the jar to a cabinet. Where does Tom believe the jar to be?”

Social computation

The participants had to make inferences about another’s beliefs after hearing each story. The experiment did not change the planned surgical approach or alter clinical care.

“Our study provides evidence to support theory of mind by individual neurons,” said study first author Mohsen Jamali, HMS instructor in neurosurgery at Mass General. “Until now, it wasn’t clear whether or how neurons were able to perform these social cognitive computations.”

The investigators found that some neurons are specialized and respond only when assessing another’s belief as false, for example. Other neurons encode information to distinguish one person’s beliefs from another’s. Still other neurons create a representation of a specific item, such as a cup or food item, mentioned in the story. Some neurons may multitask and aren’t dedicated solely to social reasoning.

“Each neuron is encoding different bits of information,” Jamali said. “By combining the computations of all the neurons, you get a very detailed representation of the contents of another’s beliefs and an accurate prediction of whether they are true or false.”

Now that scientists understand the basic cellular mechanism that underlies human theory of mind, they have an operational framework to begin investigating disorders in which social behavior is affected, according to Williams.

“Understanding social reasoning is also important to many different fields, such as child development, economics, and sociology, and could help in the development of more effective treatments for conditions such as autism spectrum disorder,” Williams said.

Previous research on the cognitive processes that underlie theory of mind has involved functional MRI studies, where scientists watch which parts of the brain are active as volunteers perform cognitive tasks.

But the imaging studies capture the activity of many thousands of neurons all at once. In contrast, Williams and colleagues recorded the computations of individual neurons. This provided a detailed picture of how neurons encode social information.

“Individual neurons, even within a small area of the brain, are doing very different things, not all of which are involved in social reasoning,” Williams said. “Without delving into the computations of single cells, it’s very hard to build an understanding of the complex cognitive processes underlying human social behavior and how they go awry in mental disorders.”

Two MIT Brain and Cognitive Sciences faculty members earn funding from the G. Harold and Leila Y. Mathers Foundation

Two MIT neuroscientists have received grants from the G. Harold and Leila Y. Mathers Foundation to screen for genes that could help brain cells withstand Parkinson’s disease and to map how gene expression changes in the brain in response to drugs of abuse.

Myriam Heiman, an associate professor in MIT’s Department of Brain and Cognitive Sciences and a core member of the Picower Institute for Learning and Memory and the Broad Institute of MIT and Harvard, and Alan Jasanoff, a professor of biological engineering, brain and cognitive sciences, and nuclear science and engineering, and an associate investigator at the McGovern Institute for Brain Research, each received three-year awards that formally begin January 1, 2021.

Jasanoff, who also directs MIT’s Center for Neurobiological Engineering, is known for developing sensors that monitor molecular hallmarks of neural activity in the living brain, in real time, via noninvasive MRI brain scanning. One of the MRI-detectable sensors that he has developed is for dopamine, a neuromodulator that is key to learning what behaviors and contexts lead to reward. Addictive drugs artificially drive dopamine release, thereby hijacking the brain’s reward prediction system. Studies have shown that dopamine and drugs of abuse activate gene transcription in specific brain regions, and that this gene expression changes as animals are repeatedly exposed to drugs. Despite the important implications of these neuroplastic changes for the process of addiction, in which drug-seeking behaviors become compulsive, there are no effective tools available to measure gene expression across the brain in real time.

Cerebral vasculature in mouse brain. The Jasanoff lab hopes to develop a method for mapping gene expression in the brain with similar labeling characteristics.
Image: Alan Jasanoff

With the new Mathers funding, Jasanoff is developing new MRI-detectable sensors for gene expression. With these cutting-edge tools, Jasanoff proposes to make an activity atlas of how the brain responds to drugs of abuse, both upon initial exposure and over repeated doses that simulate the experiences of drug-addicted individuals.

“Our studies will relate drug-induced brain activity to longer term changes that reshape the brain in addiction,” says Jasanoff. “We hope these studies will suggest new biomarkers or treatments.”

Dopamine-producing neurons in a brain region called the substantia nigra are known to be especially vulnerable to dying in Parkinson’s disease, leading to the severe motor difficulties experienced during the progression of the incurable, chronic neurodegenerative disorder. The field knows little about what puts specific cells at such dire risk, or what molecular mechanisms might help them resist the disease. In her research on Huntington’s disease, another incurable neurodegenerative disorder in which a specific neuron population in the striatum is especially vulnerable, Heiman has been able to use an innovative method her lab pioneered to discover genes whose expression promotes neuron survival, yielding potential new drug targets. The technique involves conducting an unbiased screen in which her lab knocks out each of the 22,000 genes expressed in the mouse brain one by one in neurons in disease model mice and healthy controls. The technique allows her to determine which genes, when missing, contribute to neuron death amid disease and therefore which genes are particularly needed for survival. The products of those genes can then be evaluated as drug targets. With the new Mathers award, Heiman plans to apply the method to study Parkinson’s disease.
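To give a sense of the logic of such a screen, the toy sketch below compares how abundant each gene’s knockout is among surviving neurons in disease-model versus control animals; knockouts that are selectively depleted in the disease model point to genes those neurons need to survive. The gene names, counts, and counting scheme here are hypothetical illustrations, not the Heiman lab’s actual pipeline.

```python
# Toy sketch of the screen's analysis logic (hypothetical names and counts,
# not the lab's pipeline): rank genes by how strongly their knockouts are
# depleted among surviving neurons in disease-model vs. control mice.
import numpy as np

genes = ["GeneA", "GeneB", "GeneC"]                 # hypothetical genes
control_counts = np.array([1000.0, 950.0, 1020.0])  # knockout counts, healthy mice
disease_counts = np.array([980.0, 120.0, 1005.0])   # knockout counts, disease model

# log2 fold change of knockout abundance (disease vs. control); strongly
# negative values flag candidate survival-promoting genes (here, GeneB).
log2fc = np.log2((disease_counts + 1.0) / (control_counts + 1.0))
for gene, lfc in sorted(zip(genes, log2fc), key=lambda pair: pair[1]):
    print(f"{gene}: log2 fold change = {lfc:+.2f}")
```

In this mock example, GeneB’s knockout is nearly absent from the disease-model brains, marking it as the kind of survival-promoting candidate whose product could then be evaluated as a drug target.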

An immunofluorescence image taken in a brain region called the substantia nigra (SN) highlights tyrosine hydroxylase, a protein expressed by dopamine neurons. This type of neuron in the SN is especially vulnerable to neurodegeneration in Parkinson’s disease. Image: Preston Ge/Heiman Lab

“There is currently no molecular explanation for the brain cell loss seen in Parkinson’s disease or a cure for this devastating disease,” Heiman said. “This award will allow us to perform unbiased, genome-wide genetic screens in the brains of mouse models of Parkinson’s disease, probing for genes that allow brain cells to survive the effects of cellular perturbations associated with Parkinson’s disease. I’m extremely grateful for this generous support and recognition of our work from the Mathers Foundation, and hope that our study will elucidate new therapeutic targets for the treatment and even prevention of Parkinson’s disease.”

To the brain, reading computer code is not the same as reading language

In some ways, learning to program a computer is similar to learning a new language. It requires learning new symbols and terms, which must be organized correctly to instruct the computer what to do. The computer code must also be clear enough that other programmers can read and understand it.

In spite of those similarities, MIT neuroscientists have found that reading computer code does not activate the regions of the brain that are involved in language processing. Instead, it activates a distributed network called the multiple demand network, which is also recruited for complex cognitive tasks such as solving math problems or crossword puzzles.

However, although reading computer code activates the multiple demand network, it appears to rely more on different parts of the network than math or logic problems do, suggesting that coding does not precisely replicate the cognitive demands of mathematics either.

“Understanding computer code seems to be its own thing. It’s not the same as language, and it’s not the same as math and logic,” says Anna Ivanova, an MIT graduate student and the lead author of the study.

Evelina Fedorenko, the Frederick A. and Carole J. Middleton Career Development Associate Professor of Neuroscience and a member of the McGovern Institute for Brain Research, is the senior author of the paper, which appears today in eLife. Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory and Tufts University were also involved in the study.

Language and cognition

McGovern Investigator Ev Fedorenko in the Martinos Imaging Center at MIT. Photo: Caitlin Cunningham

A major focus of Fedorenko’s research is the relationship between language and other cognitive functions. In particular, she has been studying the question of whether other functions rely on the brain’s language network, which includes Broca’s area and other regions in the left hemisphere of the brain. In previous work, her lab has shown that music and math do not appear to activate this language network.

“Here, we were interested in exploring the relationship between language and computer programming, partially because computer programming is such a new invention that we know that there couldn’t be any hardwired mechanisms that make us good programmers,” Ivanova says.

There are two schools of thought regarding how the brain learns to code, she says. One holds that in order to be good at programming, you must be good at math. The other suggests that because of the parallels between coding and language, language skills might be more relevant. To shed light on this issue, the researchers set out to study whether brain activity patterns while reading computer code would overlap with language-related brain activity.

The two programming languages that the researchers focused on in this study are known for their readability — Python and ScratchJr, a visual programming language designed for children ages 5 and older. The subjects in the study were all young adults proficient in the language they were being tested on. While the programmers lay in a functional magnetic resonance imaging (fMRI) scanner, the researchers showed them snippets of code and asked them to predict what action the code would produce.

The researchers saw little to no response to code in the language regions of the brain. Instead, they found that the coding task mainly activated the so-called multiple demand network. This network, whose activity is spread throughout the frontal and parietal lobes of the brain, is typically recruited for tasks that require holding many pieces of information in mind at once, and is responsible for our ability to perform a wide variety of mental tasks.

“It does pretty much anything that’s cognitively challenging, that makes you think hard,” says Ivanova, who was also named one of the McGovern Institute’s rising stars in neuroscience.

Previous studies have shown that math and logic problems seem to rely mainly on the multiple demand regions in the left hemisphere, while tasks that involve spatial navigation activate the right hemisphere more than the left. The MIT team found that reading computer code appears to activate both the left and right sides of the multiple demand network, and ScratchJr activated the right side slightly more than the left. This finding goes against the hypothesis that math and coding rely on the same brain mechanisms.

Effects of experience

The researchers say that while they didn’t identify any regions that appear to be exclusively devoted to programming, such specialized brain activity might develop in people who have much more coding experience.

“It’s possible that if you take people who are professional programmers, who have spent 30 or 40 years coding in a particular language, you may start seeing some specialization, or some crystallization of parts of the multiple demand system,” Fedorenko says. “In people who are familiar with coding and can efficiently do these tasks, but have had relatively limited experience, it just doesn’t seem like you see any specialization yet.”

In a companion paper appearing in the same issue of eLife, a team of researchers from Johns Hopkins University also reported that solving code problems activates the multiple demand network rather than the language regions.

The findings suggest there isn’t a definitive answer to whether coding should be taught as a math-based skill or a language-based skill. In part, that’s because learning to program may draw on both language and multiple demand systems, even if — once learned — programming doesn’t rely on the language regions, the researchers say.

“There have been claims from both camps — it has to be together with math, it has to be together with language,” Ivanova says. “But it looks like computer science educators will have to develop their own approaches for teaching code most effectively.”

The research was funded by the National Science Foundation, the Department of Brain and Cognitive Sciences at MIT, and the McGovern Institute for Brain Research.

A hunger for social contact

Since the coronavirus pandemic began in the spring, many people have only seen their close friends and loved ones during video calls, if at all. A new study from MIT finds that the longings we feel during this kind of social isolation share a neural basis with the food cravings we feel when hungry.

The researchers found that after one day of total isolation, the sight of people having fun together activates the same brain region that lights up when someone who hasn’t eaten all day sees a picture of a plate of cheesy pasta.

“People who are forced to be isolated crave social interactions similarly to the way a hungry person craves food.”

“Our finding fits the intuitive idea that positive social interactions are a basic human need, and acute loneliness is an aversive state that motivates people to repair what is lacking, similar to hunger,” says Rebecca Saxe, the John W. Jarve Professor of Brain and Cognitive Sciences at MIT, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

The research team collected the data for this study in 2018 and 2019, long before the coronavirus pandemic and resulting lockdowns. Their new findings, described today in Nature Neuroscience, are part of a larger research program focusing on how social stress affects people’s behavior and motivation.

Former MIT postdoc Livia Tomova, who is now a research associate at Cambridge University, is the lead author of the paper. Other authors include Kimberly Wang, a McGovern Institute research associate; Todd Thompson, a McGovern Institute scientist; Atsushi Takahashi, assistant director of the Martinos Imaging Center; Gillian Matthews, a research scientist at the Salk Institute for Biological Studies; and Kay Tye, a professor at the Salk Institute.

Social craving

The new study was partly inspired by a recent paper from Tye, a former member of MIT’s Picower Institute for Learning and Memory. In that 2016 study, she and Matthews, then an MIT postdoc, identified a cluster of neurons in the brains of mice that represent feelings of loneliness and generate a drive for social interaction following isolation. Studies in humans have shown that being deprived of social contact can lead to emotional distress, but the neurological basis of these feelings is not well-known.

“We wanted to see if we could experimentally induce a certain kind of social stress, where we would have control over what the social stress was,” Saxe says. “It’s a stronger intervention of social isolation than anyone had tried before.”

To create that isolation environment, the researchers enlisted healthy volunteers, who were mainly college students, and confined them to a windowless room on MIT’s campus for 10 hours. They were not allowed to use their phones, but the room did have a computer that they could use to contact the researchers if necessary.

“There were a whole bunch of interventions we used to make sure that it would really feel strange and different and isolated,” Saxe says. “They had to let us know when they were going to the bathroom so we could make sure it was empty. We delivered food to the door and then texted them when it was there so they could go get it. They really were not allowed to see people.”

After the 10-hour isolation ended, each participant was scanned in an MRI machine. This posed additional challenges, as the researchers wanted to avoid any social contact during the scanning. Before the isolation period began, each subject was trained on how to get into the machine, so that they could do it by themselves, without any help from the researcher.

“Normally, getting somebody into an MRI machine is actually a really social process. We engage in all kinds of social interactions to make sure people understand what we’re asking them, that they feel safe, that they know we’re there,” Saxe says. “In this case, the subjects had to do it all by themselves, while the researcher, who was gowned and masked, just stood silently by and watched.”

Each of the 40 participants also underwent 10 hours of fasting, on a different day. After the 10-hour period of isolation or fasting, the participants were scanned while looking at images of food, images of people interacting, and neutral images such as flowers. The researchers focused on a part of the brain called the substantia nigra, a tiny structure located in the midbrain, which has previously been linked with hunger cravings and drug cravings. The substantia nigra is also believed to share evolutionary origins with a brain region in mice called the dorsal raphe nucleus, which is the area that Tye’s lab showed was active following social isolation in their 2016 study.

The researchers hypothesized that when socially isolated subjects saw photos of people enjoying social interactions, the “craving signal” in their substantia nigra would be similar to the signal produced when they saw pictures of food after fasting. This was indeed the case. Furthermore, the amount of activation in the substantia nigra was correlated with how strongly the participants rated their feelings of craving either food or social interaction.

Degrees of loneliness

The researchers also found that people’s responses to isolation varied depending on their normal levels of loneliness. People who reported feeling chronically isolated months before the study was done showed weaker cravings for social interaction after the 10-hour isolation period than people who reported a richer social life.

“For people who reported that their lives were really full of satisfying social interactions, this intervention had a bigger effect on their brains and on their self-reports,” Saxe says.

The researchers also looked at activation patterns in other parts of the brain, including the striatum and the cortex, and found that hunger and isolation each activated distinct areas of those regions. That suggests that those areas are more specialized to respond to different types of longings, while the substantia nigra produces a more general signal representing a variety of cravings.

Now that the researchers have established that they can observe the effects of social isolation on brain activity, Saxe says they can try to answer many additional questions. Those questions include how social isolation affects people’s behavior, whether virtual social contacts such as video calls help to alleviate cravings for social interaction, and how isolation affects different age groups.

The researchers also hope to study whether the brain responses that they saw in this study could be used to predict how the same participants responded to being isolated during the lockdowns imposed during the early stages of the coronavirus pandemic.

The research was funded by a SFARI Explorer Grant from the Simons Foundation, a MINT grant from the McGovern Institute, the National Institutes of Health, including an NIH Pioneer Award, a Max Kade Foundation Fellowship, and an Erwin Schroedinger Fellowship from the Austrian Science Fund.