New clues to brain changes in Huntington’s disease

Huntington’s disease is a fatal inherited disorder that strikes most often in middle age with mood disturbances, uncontrollable limb movements, and cognitive decline. Years before symptom onset, brain imaging shows degeneration of the striatum, a brain region important for the rapid selection of behavioral actions. As the striatal neurons degenerate, their “identity” proteins, the building blocks that give particular cell types their unique function, are gradually turned off.

A new study from the lab of Institute Professor Ann Graybiel has found a surprising exception to this rule. The researchers discovered that in mouse models of Huntington’s disease, the cell identity protein MOR1, the mu-type opioid receptor, actually becomes more abundant as the striatal neurons degenerate.

“This is one of the most striking immunohistochemical changes that I have ever seen in the literature on Huntington’s disease model animals,” says Ryoma Morigaki, a research scientist in the Graybiel laboratory and lead author of the report, who worked with Tomoko Yoshida and others in the Graybiel lab.

Immunohistochemical staining using an anti-mu-opioid receptor antibody: wild-type mouse striatum (left) and Q175 Huntington’s disease model mouse striatum (right) at 19 months of age. Image: Ryoma Morigaki

More opioid receptors

MOR1 is a receptor on the surface of neurons that binds to opioids produced by the body as well as those taken for pain relief, such as morphine. One of the brain’s natural opioids is a small peptide called enkephalin, which is normally produced by the same striatal neurons that degenerate in the earliest stages of Huntington’s disease.

The research team speculates that the striatum increases the quantity of MOR1 receptors in Huntington’s disease models to compensate for plummeting levels of enkephalin, but they also believe this upregulation may play a role in the perception of reward.

Previous work suggests that MOR1 has distinct signaling mechanisms related to its function in pain perception and its function in drug-seeking. These distinct mechanisms might be related to the fact that MOR1 is produced as multiple “isoforms,” slight variations of a protein that can be read out from the same gene. The MOR1 isoform that is found in the striatum is thought to be more important for drug-seeking behaviors than for pain perception. This in turn means that MOR1 might play a role in a key striatal function, which is to learn what actions are most likely to lead to reward.

“It is now recognized that mood disturbances can pre-date the overt motor abnormalities of Huntington’s patients by many years. These can even be the most disturbing symptoms for patients and their families. The finding that this receptor for opioids becomes so elevated in mood-related sites of the striatum, at least in a mouse model of the disorder, may give a hint to the underlying circuit dysfunction leading to these problems,” says Ann Graybiel.

Clues for treatment

MOR1 is used as a standard marker to identify subsets of neurons located within small clusters in the striatum that were previously discovered by Ann Graybiel and named striosomes.

“The most exciting point for me is the involvement of striatal compartments [striosomes] in the pathogenesis of Huntington’s disease,” says Morigaki, who has now moved to the University of Tokushima in Japan and is a practicing neurosurgeon who treats movement disorders.

MOR1-positive striosomal neurons are of high interest in part because they have direct connections to the same dopamine-producing neurons that are thought to degenerate in Parkinson’s disease. Whereas Parkinson’s disease is characterized by a loss of dopamine and loss of movement, Huntington’s disease is characterized by ups and downs in dopamine and excessive movements. In fact, the only drugs that are FDA-approved to treat Huntington’s disease are drugs that minimize dopamine release, thereby working to dampen the abnormal movements. But these treatments come with potentially severe side-effects such as depression and suicide.

This latest discovery might provide mechanistic clues to dopamine fluctuations in Huntington’s disease and provide avenues for more specific treatments.

This research was funded by the CHDI Foundation (A-5552), the Broderick Fund for Phytocannabinoid Research at MIT, NIH/NIMH R01 MH060379, the Saks Kavanaugh Foundation, JSPS KAKENHI Grants #16KK0182, 17K10899, and 20K17932, Dr. Tenley Albright, Kathleen Huber, and Dr. Stephan and Mrs. Anne Kott.

Storytelling brings MIT neuroscience community together

When the coronavirus pandemic shut down offices, labs, and classrooms across the MIT campus last spring, many members of the MIT community found it challenging to remain connected to one another in meaningful ways. Motivated by a desire to bring the neuroscience community back together, the McGovern Institute hosted a virtual storytelling competition featuring a selection of postdocs, grad students, and staff from across the institute.

“This has been an unprecedented year for us all,” says McGovern Institute Director Robert Desimone. “It has been twenty years since Pat and Lore McGovern founded the McGovern Institute, and despite the challenges this anniversary year has brought to our community, I have been inspired by the strength and perseverance demonstrated by our faculty, postdocs, students and staff. The resilience of this neuroscience community – and MIT as a whole – is indeed something to celebrate.”

The McGovern Institute had initially planned to hold a large 20th anniversary celebration in the atrium of Building 46 in the fall of 2020, but the pandemic made a gathering of this size impossible. The institute instead held a series of virtual events, including the November 12 story slam on the theme of resilience.

Neuroscientists find a way to improve object-recognition models

Computer vision models known as convolutional neural networks can be trained to recognize objects nearly as accurately as humans do. However, these models have one significant flaw: Very small changes to an image, which would be nearly imperceptible to a human viewer, can trick them into making egregious errors such as classifying a cat as a tree.

A team of neuroscientists from MIT, Harvard University, and IBM has developed a way to alleviate this vulnerability by adding to these models a new layer designed to mimic the earliest stage of the brain’s visual processing system. In a new study, they showed that this layer greatly improved the models’ robustness against this type of mistake.

A grid showing the visualization of many common image corruption types. First row, original image, followed by the noise corruptions; second row, blur corruptions; third row, weather corruptions; fourth row, digital corruptions.
Credits: Courtesy of the researchers.

“Just by making the models more similar to the brain’s primary visual cortex, in this single stage of processing, we see quite significant improvements in robustness across many different types of perturbations and corruptions,” says Tiago Marques, an MIT postdoc and one of the lead authors of the study.

Convolutional neural networks are often used in artificial intelligence applications such as self-driving cars, automated assembly lines, and medical diagnostics. Harvard graduate student Joel Dapello, who is also a lead author of the study, adds that “implementing our new approach could potentially make these systems less prone to error and more aligned with human vision.”

“Good scientific hypotheses of how the brain’s visual system works should, by definition, match the brain in both its internal neural patterns and its remarkable robustness. This study shows that achieving those scientific gains directly leads to engineering and application gains,” says James DiCarlo, the head of MIT’s Department of Brain and Cognitive Sciences, an investigator in the Center for Brains, Minds, and Machines and the McGovern Institute for Brain Research, and the senior author of the study.

The study, which is being presented at the NeurIPS conference this month, is also co-authored by MIT graduate student Martin Schrimpf, MIT visiting student Franziska Geiger, and MIT-IBM Watson AI Lab Director David Cox.

Mimicking the brain

Recognizing objects is one of the visual system’s primary functions. In just a small fraction of a second, visual information flows through the ventral visual stream to the brain’s inferior temporal cortex, where neurons contain information needed to classify objects. At each stage in the ventral stream, the brain performs different types of processing. The very first stage in the ventral stream, V1, is one of the most well-characterized parts of the brain and contains neurons that respond to simple visual features such as edges.

“It’s thought that V1 detects local edges or contours of objects, and textures, and does some type of segmentation of the images at a very small scale. Then that information is later used to identify the shape and texture of objects downstream,” Marques says. “The visual system is built in this hierarchical way, where in early stages neurons respond to local features such as small, elongated edges.”

For many years, researchers have been trying to build computer models that can identify objects as well as the human visual system. Today’s leading computer vision systems are already loosely guided by our current knowledge of the brain’s visual processing. However, neuroscientists still don’t know enough about how the entire ventral visual stream is connected to build a model that precisely mimics it, so they borrow techniques from the field of machine learning to train convolutional neural networks on a specific set of tasks. Using this process, a model can learn to identify objects after being trained on millions of images.

Many of these convolutional networks perform very well, but in most cases, researchers don’t know exactly how the network is solving the object-recognition task. In 2013, researchers from DiCarlo’s lab showed that some of these neural networks could not only accurately identify objects, but they could also predict how neurons in the primate brain would respond to the same objects much better than existing alternative models. However, these neural networks are still not able to perfectly predict responses along the ventral visual stream, particularly at the earliest stages of object recognition, such as V1.

These models are also vulnerable to so-called “adversarial attacks.” This means that small changes to an image, such as changing the colors of a few pixels, can lead the model to completely confuse an object for something different — a type of mistake that a human viewer would not make.

A comparison of adversarial images with different perturbation strengths.
Credits: Courtesy of the researchers.
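To make this kind of perturbation concrete, here is a minimal sketch of one standard attack of this type, the fast gradient sign method, assuming a pretrained PyTorch classifier. It illustrates the general class of attack the study defends against; it is not the specific attack code used in the paper, and the example image and label are placeholders.

```python
# Minimal sketch of an adversarial perturbation (fast gradient sign method),
# assuming a pretrained PyTorch image classifier. Illustrative only.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(pretrained=True).eval()

def fgsm_perturb(image, true_label, epsilon=0.01):
    """Nudge each pixel slightly in the direction that most increases the loss."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()   # tiny per-pixel change
    return adversarial.clamp(0, 1).detach()

# Placeholder input: a (1, 3, 224, 224) image tensor and its correct class index.
image = torch.rand(1, 3, 224, 224)
label = torch.tensor([281])   # ImageNet class 281 ("tabby cat")
perturbed = fgsm_perturb(image, label)
print(model(image).argmax(1), model(perturbed).argmax(1))   # labels may now differ
```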

As a first step in their study, the researchers analyzed the performance of 30 of these models and found that models whose internal responses better matched the brain’s V1 responses were also less vulnerable to adversarial attacks. That is, having a more brain-like V1 seemed to make the model more robust. To further test and take advantage of that idea, the researchers decided to create their own model of V1, based on existing neuroscientific models, and place it at the front of convolutional neural networks that had already been developed to perform object recognition.

When the researchers added their V1 layer, which is also implemented as a convolutional neural network, to three of these models, they found that these models became about four times more resistant to making mistakes on images perturbed by adversarial attacks. The models were also less vulnerable to misidentifying objects that were blurred or distorted due to other corruptions.
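The architectural idea, a fixed, biologically inspired stage placed in front of an existing recognition network, can be sketched roughly as follows. This is only a schematic under simplifying assumptions: the study’s actual V1 layer is built from established neuroscientific models of primary visual cortex (oriented filter banks, nonlinearities, and neuronal noise), and the class and parameter names below are illustrative.

```python
# Schematic sketch of prepending a fixed, V1-like filtering stage to a CNN.
# The study's real V1 layer is far more detailed; this only shows the structure.
import torch.nn as nn
from torchvision import models

class SimpleV1Front(nn.Module):
    def __init__(self, out_channels=64, kernel_size=15):
        super().__init__()
        # Fixed (untrained) oriented filters stand in for V1 simple cells.
        self.filters = nn.Conv2d(3, out_channels, kernel_size,
                                 padding=kernel_size // 2)
        for p in self.filters.parameters():
            p.requires_grad = False          # the front end itself is not trained
        self.nonlinearity = nn.ReLU()
        # Map back to 3 channels so a standard backbone can follow unchanged.
        self.readout = nn.Conv2d(out_channels, 3, kernel_size=1)

    def forward(self, x):
        return self.readout(self.nonlinearity(self.filters(x)))

backbone = models.resnet50(pretrained=True)       # an existing object recognizer
model = nn.Sequential(SimpleV1Front(), backbone)  # V1-like stage placed in front
# In practice the combined model would be retrained or fine-tuned after this change.
```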

“Adversarial attacks are a big, open problem for the practical deployment of deep neural networks. The fact that adding neuroscience-inspired elements can improve robustness substantially suggests that there is still a lot that AI can learn from neuroscience, and vice versa,” Cox says.

Better defense

Currently, the best defense against adversarial attacks is a computationally expensive process of training models to recognize the altered images. One advantage of the new V1-based model is that it doesn’t require any additional training. It is also better able to handle a wide range of distortions, beyond adversarial attacks.

The researchers are now trying to identify the key features of their V1 model that allow it to do a better job of resisting adversarial attacks, which could help them make future models even more robust. It could also help them learn more about how the human brain is able to recognize objects.

“One big advantage of the model is that we can map components of the model to particular neuronal populations in the brain,” Dapello says. “We can use this as a tool for novel neuroscientific discoveries, and also continue developing this model to improve its performance under this challenging task.”

The research was funded by the PhRMA Foundation Postdoctoral Fellowship in Informatics, the Semiconductor Research Corporation, DARPA, the MIT Shoemaker Fellowship, the U.S. Office of Naval Research, the Simons Foundation, and the MIT-IBM Watson AI Lab.

A large-scale tool to investigate the function of autism spectrum disorder genes

Scientists at Harvard University, the Broad Institute of MIT and Harvard, and MIT have developed a technology to investigate the function of many different genes in many different cell types at once, in a living organism. They applied the large-scale method to study dozens of genes that are associated with autism spectrum disorder, identifying how specific cell types in the developing mouse brain are impacted by mutations.

The “Perturb-Seq” method, published in the journal Science, is an efficient way to identify potential biological mechanisms underlying autism spectrum disorder, which is an important first step toward developing treatments for the complex disease. The method is also broadly applicable to other organs, enabling scientists to better understand a wide range of disease and normal processes.

“For many years, genetic studies have identified a multitude of risk genes that are associated with the development of autism spectrum disorder. The challenge in the field has been to make the connection between knowing what the genes are, to understanding how the genes actually affect cells and ultimately behavior,” said co-senior author Paola Arlotta, the Golub Family Professor of Stem Cell and Regenerative Biology at Harvard. “We applied the Perturb-Seq technology to an intact developing organism for the first time, showing the potential of measuring gene function at scale to better understand a complex disorder.”

The study was also led by co-senior authors Aviv Regev, who was a core member of the Broad Institute during the study and is currently Executive Vice President of Genentech Research and Early Development, and Feng Zhang, a core member of the Broad Institute and an investigator at MIT’s McGovern Institute.

To investigate gene function at a large scale, the researchers combined two powerful genomic technologies. They used CRISPR-Cas9 genome editing to make precise changes, or perturbations, in 35 different genes linked to autism spectrum disorder risk. Then, they analyzed changes in the developing mouse brain using single-cell RNA sequencing, which allowed them to see how gene expression changed in over 40,000 individual cells.

By looking at the level of individual cells, the researchers could compare how the risk genes affected different cell types in the cortex — the part of the brain responsible for complex functions including cognition and sensation. They analyzed networks of risk genes together to find common effects.
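In spirit, this per-cell-type comparison resembles the sketch below: each profiled cell carries a label for the perturbation it received and for its cell type, and expression in perturbed cells is compared against cells that received a non-targeting control. The file, column, and gene names here are hypothetical stand-ins, not the study’s actual data or pipeline.

```python
# Hedged sketch of a Perturb-Seq-style comparison: expression of a gene of
# interest in perturbed cells of one cell type vs. non-targeting controls.
# File, column, and gene names are hypothetical.
import pandas as pd
from scipy import stats

cells = pd.read_csv("perturb_seq_cells.csv")   # one row per cell (hypothetical file)
# expected columns: "perturbed_gene", "cell_type", plus per-gene expression columns

def compare_to_control(gene_of_interest, perturbed_gene, cell_type):
    subset = cells[cells["cell_type"] == cell_type]
    treated = subset.loc[subset["perturbed_gene"] == perturbed_gene, gene_of_interest]
    control = subset.loc[subset["perturbed_gene"] == "non-targeting", gene_of_interest]
    # Does perturbing the risk gene shift expression in this cell type?
    stat, pval = stats.mannwhitneyu(treated, control, alternative="two-sided")
    return treated.mean() - control.mean(), pval

effect, p = compare_to_control("Gfap", perturbed_gene="Chd8", cell_type="astrocyte")
print(f"expression shift: {effect:.2f} (p = {p:.3g})")
```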

“We found that both neurons and glia — the non-neuronal cells in the brain — are directly affected by different sets of these risk genes,” said Xin Jin, lead author of the study and a Junior Fellow of the Harvard Society of Fellows. “Genes and molecules don’t generate cognition per se — they need to impact specific cell types in the brain to do so. We are interested in understanding how these different cell types can contribute to the disorder.”

To get a sense of the model’s potential relevance to the disorder in humans, the researchers compared their results to data from post-mortem human brains. In general, they found that in the post-mortem human brains with autism spectrum disorder, some of the key genes with altered expression were also affected in the Perturb-seq data.

“We now have a really rich dataset that allows us to draw insights, and we’re still learning a lot about it every day,” Jin said. “As we move forward with studying disease mechanisms in more depth, we can focus on the cell types that may be really important.”

“The field has been limited by the sheer time and effort that it takes to make one model at a time to test the function of single genes. Now, we have shown the potential of studying gene function in a developing organism in a scalable way, which is an exciting first step to understanding the mechanisms that lead to autism spectrum disorder and other complex psychiatric conditions, and to eventually develop treatments for these devastating conditions,” said Arlotta, who is also an institute member of the Broad Institute and part of the Broad’s Stanley Center for Psychiatric Research. “Our work also paves the way for Perturb-Seq to be applied to organs beyond the brain, to enable scientists to better understand the development or function of different tissue types, as well as pathological conditions.”

“Through genome sequencing efforts, a very large number of genes have been identified that, when mutated, are associated with human diseases. Traditionally, understanding the role of these genes would involve in-depth studies of each gene individually. By developing Perturb-seq for in vivo applications, we can start to screen all of these genes in animal models in a much more efficient manner, enabling us to understand mechanistically how mutations in these genes can lead to disease,” said Zhang, who is also the James and Patricia Poitras Professor of Neuroscience at MIT and a professor of brain and cognitive sciences and biological engineering at MIT.

This study was funded by the Stanley Center for Psychiatric Research at the Broad Institute, the National Institutes of Health, the Brain and Behavior Research Foundation’s NARSAD Young Investigator Grant, Harvard University’s William F. Milton Fund, the Klarman Cell Observatory, the Howard Hughes Medical Institute, a Center for Cell Circuits grant from the National Human Genome Research Institute’s Centers of Excellence in Genomic Science, the New York Stem Cell Foundation, the Mathers Foundation, the Poitras Center for Psychiatric Disorders Research at MIT, the Hock E. Tan and K. Lisa Yang Center for Autism Research at MIT, and J. and P. Poitras.

How humans use objects in novel ways to solve problems

Human beings are naturally creative tool users. When we need to drive in a nail but don’t have a hammer, we easily realize that we can use a heavy, flat object like a rock in its place. When our table is shaky, we quickly find that we can put a stack of paper under the table leg to stabilize it. But while these actions seem so natural to us, they are believed to be a hallmark of great intelligence — only a few other species use objects in novel ways to solve their problems, and none can do so as flexibly as people. What provides us with these powerful capabilities for using objects in this way?

In a new paper published in the Proceedings of the National Academy of Sciences describing work conducted at MIT’s Center for Brains, Minds and Machines, researchers Kelsey Allen, Kevin Smith, and Joshua Tenenbaum study the cognitive components that underlie this sort of improvised tool use. They designed a novel task, the Virtual Tools game, that taps into tool-use abilities: People must select one object from a set of “tools” that they can place in a two-dimensional, computerized scene to accomplish a goal, such as getting a ball into a certain container. Solving the puzzles in this game requires reasoning about a number of physical principles, including launching, blocking, or supporting objects.

The team hypothesized that there are three capabilities that people rely on to solve these puzzles: a prior belief that guides people’s actions toward those that will make a difference in the scene, the ability to imagine the effect of their actions, and a mechanism to quickly update their beliefs about what actions are likely to provide a solution. They built a model that instantiated these principles, called the “Sample, Simulate, Update,” or “SSUP,” model, and had it play the same game as people. They found that SSUP solved each puzzle at similar rates and in similar ways as people did. On the other hand, a popular deep learning model that could play Atari games well but did not have the same object and physical structures was unable to generalize its knowledge to puzzles it was not directly trained on.
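A minimal sketch of the sample-simulate-update loop, under simplifying assumptions, looks something like the following; `simulate` and `score` are hypothetical stand-ins for a 2-D physics engine and a measure of how close an outcome comes to the goal, and the belief representation is deliberately reduced to a single running guess.

```python
# Toy sketch of the "Sample, Simulate, Update" idea; not the authors' code.
# `simulate` and `score` are hypothetical stand-ins supplied by the caller.
import random

def ssup_solve(simulate, score, n_iters=50, noise=0.1):
    belief = {"x": 0.5, "y": 0.5}                  # crude prior over where to place a tool
    best_action, best_score = None, float("-inf")
    for _ in range(n_iters):
        # Sample: propose an action near the current belief.
        action = {
            "x": belief["x"] + random.gauss(0, noise),
            "y": belief["y"] + random.gauss(0, noise),
            "tool": random.choice([0, 1, 2]),
        }
        # Simulate: imagine the physical consequences of that placement.
        outcome = simulate(action)
        s = score(outcome)                         # how close did it get to the goal?
        # Update: shift beliefs toward actions that looked promising.
        if s > best_score:
            best_action, best_score = action, s
            belief["x"], belief["y"] = action["x"], action["y"]
        if best_score >= 1.0:                      # imagined success: act for real
            break
    return best_action
```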

This research provides a new framework for studying and formalizing the cognition that supports human tool use. The team hopes to extend this framework to not just study tool use, but also how people can create innovative new tools for new problems, and how humans transmit this information to build from simple physical tools to complex objects like computers or airplanes that are now part of our daily lives.

Kelsey Allen, a PhD student in the Computational Cognitive Science Lab at MIT, is excited about how the Virtual Tools game might support other cognitive scientists interested in tool use: “There is just so much more to explore in this domain. We have already started collaborating with researchers across multiple different institutions on projects ranging from studying what it means for games to be fun, to studying how embodiment affects disembodied physical reasoning. I hope that others in the cognitive science community will use the game as a tool to better understand how physical models interact with decision-making and planning.”

Joshua Tenenbaum, professor of computational cognitive science at MIT, sees this work as a step toward understanding not only an important aspect of human cognition and culture, but also how to build more human-like forms of intelligence in machines. “Artificial Intelligence researchers have been very excited about the potential for reinforcement learning (RL) algorithms to learn from trial-and-error experience, as humans do, but the real trial-and-error learning that humans benefit from unfolds over just a handful of trials — not millions or billions of experiences, as in today’s RL systems,” Tenenbaum says. “The Virtual Tools game allows us to study this very rapid and much more natural form of trial-and-error learning in humans, and the fact that the SSUP model is able to capture the fast learning dynamics we see in humans suggests it may also point the way towards new AI approaches to RL that can learn from their successes, their failures, and their near misses as quickly and as flexibly as people do.”

A hunger for social contact

Since the coronavirus pandemic began in the spring, many people have only seen their close friends and loved ones during video calls, if at all. A new study from MIT finds that the longings we feel during this kind of social isolation share a neural basis with the food cravings we feel when hungry.

The researchers found that after one day of total isolation, the sight of people having fun together activates the same brain region that lights up when someone who hasn’t eaten all day sees a picture of a plate of cheesy pasta.

“Our finding fits the intuitive idea that positive social interactions are a basic human need, and acute loneliness is an aversive state that motivates people to repair what is lacking, similar to hunger,” says Rebecca Saxe, the John W. Jarve Professor of Brain and Cognitive Sciences at MIT, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

The research team collected the data for this study in 2018 and 2019, long before the coronavirus pandemic and resulting lockdowns. Their new findings, described today in Nature Neuroscience, are part of a larger research program focusing on how social stress affects people’s behavior and motivation.

Former MIT postdoc Livia Tomova, who is now a research associate at Cambridge University, is the lead author of the paper. Other authors include Kimberly Wang, a McGovern Institute research associate; Todd Thompson, a McGovern Institute scientist; Atsushi Takahashi, assistant director of the Martinos Imaging Center; Gillian Matthews, a research scientist at the Salk Institute for Biological Studies; and Kay Tye, a professor at the Salk Institute.

Social craving

The new study was partly inspired by a recent paper from Tye, a former member of MIT’s Picower Institute for Learning and Memory. In that 2016 study, she and Matthews, then an MIT postdoc, identified a cluster of neurons in the brains of mice that represent feelings of loneliness and generate a drive for social interaction following isolation. Studies in humans have shown that being deprived of social contact can lead to emotional distress, but the neurological basis of these feelings is not well-known.

“We wanted to see if we could experimentally induce a certain kind of social stress, where we would have control over what the social stress was,” Saxe says. “It’s a stronger intervention of social isolation than anyone had tried before.”

To create that isolation environment, the researchers enlisted healthy volunteers, who were mainly college students, and confined them to a windowless room on MIT’s campus for 10 hours. They were not allowed to use their phones, but the room did have a computer that they could use to contact the researchers if necessary.

“There were a whole bunch of interventions we used to make sure that it would really feel strange and different and isolated,” Saxe says. “They had to let us know when they were going to the bathroom so we could make sure it was empty. We delivered food to the door and then texted them when it was there so they could go get it. They really were not allowed to see people.”

After the 10-hour isolation ended, each participant was scanned in an MRI machine. This posed additional challenges, as the researchers wanted to avoid any social contact during the scanning. Before the isolation period began, each subject was trained on how to get into the machine, so that they could do it by themselves, without any help from the researcher.

“Normally, getting somebody into an MRI machine is actually a really social process. We engage in all kinds of social interactions to make sure people understand what we’re asking them, that they feel safe, that they know we’re there,” Saxe says. “In this case, the subjects had to do it all by themselves, while the researcher, who was gowned and masked, just stood silently by and watched.”

Each of the 40 participants also underwent 10 hours of fasting, on a different day. After the 10-hour period of isolation or fasting, the participants were scanned while looking at images of food, images of people interacting, and neutral images such as flowers. The researchers focused on a part of the brain called the substantia nigra, a tiny structure located in the midbrain, which has previously been linked with hunger cravings and drug cravings. The substantia nigra is also believed to share evolutionary origins with a brain region in mice called the dorsal raphe nucleus, which is the area that Tye’s lab showed was active following social isolation in their 2016 study.

The researchers hypothesized that when socially isolated subjects saw photos of people enjoying social interactions, the “craving signal” in their substantia nigra would be similar to the signal produced when they saw pictures of food after fasting. This was indeed the case. Furthermore, the amount of activation in the substantia nigra was correlated with how strongly the participants rated their feelings of craving either food or social interaction.
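In its simplest form, the reported relationship is a correlation across participants between one brain measure and one self-report measure, as in the toy sketch below; the numbers are placeholders, not data from the study.

```python
# Toy sketch of correlating brain activation with self-reported craving.
# Values are placeholders, one per participant; not the study's data.
from scipy import stats

activation = [0.12, 0.31, 0.05, 0.44, 0.27]   # hypothetical substantia nigra responses
craving = [2.0, 4.5, 1.5, 5.0, 3.5]           # hypothetical craving ratings

r, p = stats.pearsonr(activation, craving)
print(f"r = {r:.2f}, p = {p:.3f}")
```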

Degrees of loneliness

The researchers also found that people’s responses to isolation varied depending on their normal levels of loneliness. People who reported feeling chronically isolated months before the study was done showed weaker cravings for social interaction after the 10-hour isolation period than people who reported a richer social life.

“For people who reported that their lives were really full of satisfying social interactions, this intervention had a bigger effect on their brains and on their self-reports,” Saxe says.

The researchers also looked at activation patterns in other parts of the brain, including the striatum and the cortex, and found that hunger and isolation each activated distinct areas of those regions. That suggests that those areas are more specialized to respond to different types of longings, while the substantia nigra produces a more general signal representing a variety of cravings.

Now that the researchers have established that they can observe the effects of social isolation on brain activity, Saxe says they can now try to answer many additional questions. Those questions include how social isolation affects people’s behavior, whether virtual social contacts such as video calls help to alleviate cravings for social interaction, and how isolation affects different age groups.

The researchers also hope to study whether the brain responses that they saw in this study could be used to predict how the same participants responded to being isolated during the lockdowns imposed during the early stages of the coronavirus pandemic.

The research was funded by a SFARI Explorer Grant from the Simons Foundation, a MINT grant from the McGovern Institute, the National Institutes of Health, including an NIH Pioneer Award, a Max Kade Foundation Fellowship, and an Erwin Schroedinger Fellowship from the Austrian Science Fund.

Imaging method reveals a “symphony of cellular activities”

Within a single cell, thousands of molecules, such as proteins, ions, and other signaling molecules, work together to perform all kinds of functions — absorbing nutrients, storing memories, and differentiating into specific tissues, among many others.

Deciphering these molecules, and all of their interactions, is a monumental task. Over the past 20 years, scientists have developed fluorescent reporters they can use to read out the dynamics of individual molecules within cells. However, typically only one or two such signals can be observed at a time, because a microscope cannot distinguish between many fluorescent colors.

MIT researchers have now developed a way to image up to five different molecule types at a time, by measuring each signal from random, distinct locations throughout a cell.

This approach could allow scientists to learn much more about the complex signaling networks that control most cell functions, says Edward Boyden, the Y. Eva Tan Professor in Neurotechnology and a professor of biological engineering, media arts and sciences, and brain and cognitive sciences at MIT.

“There are thousands of molecules encoded by the genome, and they’re interacting in ways that we don’t understand. Only by watching them at the same time can we understand their relationships,” says Boyden, who is also a member of MIT’s McGovern Institute for Brain Research and Koch Institute for Integrative Cancer Research.

In a new study, Boyden and his colleagues used this technique to identify two populations of neurons that respond to calcium signals in different ways, which may influence how they encode long-term memories, the researchers say.

Boyden is the senior author of the study, which appears today in Cell. The paper’s lead authors are MIT postdoc Changyang Linghu and graduate student Shannon Johnson.

Fluorescent clusters

Shannon Johnson is a graduate fellow in the Yang-Tan Center for Molecular Therapeutics.

To make molecular activity visible within a cell, scientists typically create reporters by fusing a protein that senses a target molecule to a protein that glows. “This is similar to how a smoke detector will sense smoke and then flash a light,” says Johnson, who is also a fellow in the Yang-Tan Center for Molecular Therapeutics. The most commonly used glowing protein is green fluorescent protein (GFP), which is based on a molecule originally found in a fluorescent jellyfish.

“Typically a biologist can see one or two colors at the same time on a microscope, and many of the reporters out there are green, because they’re based on the green fluorescent protein,” Boyden says. “What has been lacking until now is the ability to see more than a couple of these signals at once.”

“Just like listening to the sound of a single instrument from an orchestra is far from enough to fully appreciate a symphony,” Linghu says, “by enabling observations of multiple cellular signals at the same time, our technology will help us understand the ‘symphony’ of cellular activities.”

To boost the number of signals they could see, the researchers set out to identify signals by location instead of by color. They modified existing reporters to cause them to accumulate in clusters at different locations within a cell. They did this by adding two small peptides to each reporter, which helped the reporters form distinct clusters within cells.

“It’s like having reporter X be tethered to a LEGO brick, and reporter Z tethered to a K’NEX piece — only LEGO bricks will snap to other LEGO bricks, causing only reporter X to be clustered with more of reporter X,” Johnson says.

Changyang Linghu is the J. Douglas Tan Postdoctoral Fellow in the Hock E. Tan and K. Lisa Yang Center for Autism Research.

With this technique, each cell ends up with hundreds of clusters of fluorescent reporters. After measuring the activity of each cluster under a microscope, based on the changing fluorescence, the researchers can identify which molecule was being measured in each cluster by preserving the cell and staining for peptide tags that are unique to each reporter.  The peptide tags are invisible in the live cell, but they can be stained and seen after the live imaging is done. This allows the researchers to distinguish signals for different molecules even though they may all be fluorescing the same color in the live cell.

Using this approach, the researchers showed that they could see five different molecular signals in a single cell. To demonstrate the potential usefulness of this strategy, they measured the activities of three molecules in parallel — calcium, cyclic AMP, and protein kinase A (PKA). These molecules form a signaling network that is involved with many different cellular functions throughout the body. In neurons, it plays an important role in translating a short-term input (from upstream neurons) into long-term changes such as strengthening the connections between neurons — a process that is necessary for learning and forming new memories.

Applying this imaging technique to pyramidal neurons in the hippocampus, the researchers identified two novel subpopulations with different calcium signaling dynamics. One population showed slow calcium responses. In the other population, neurons had faster calcium responses. The latter population had larger PKA responses. The researchers believe this heightened response may help sustain long-lasting changes in the neurons.

Imaging signaling networks

The researchers now plan to try this approach in living animals so they can study how signaling network activities relate to behavior, and also to expand it to other types of cells, such as immune cells. This technique could also be useful for comparing signaling network patterns between cells from healthy and diseased tissue.

In this paper, the researchers showed they could record five different molecular signals at once, and by modifying their existing strategy, they believe they could get up to 16. With additional work, that number could reach into the hundreds, they say.

“That really might help crack open some of these tough questions about how the parts of a cell work together,” Boyden says. “One might imagine an era when we can watch everything going on in a living cell, or at least the part involved with learning, or with disease, or with the treatment of a disease.”

The research was funded by the Friends of the McGovern Institute Fellowship; the J. Douglas Tan Fellowship; Lisa Yang; the Yang-Tan Center for Molecular Therapeutics; John Doerr; the Open Philanthropy Project; the HHMI-Simons Faculty Scholars Program; the Human Frontier Science Program; the U.S. Army Research Laboratory; the MIT Media Lab; the Picower Institute Innovation Fund; the National Institutes of Health, including an NIH Director’s Pioneer Award; and the National Science Foundation.

Controlling drug activity with light

Hormones and nutrients bind to receptors on cell surfaces by a lock-and-key mechanism that triggers intracellular events linked to that specific receptor. Drugs that mimic natural molecules are widely used to control these intracellular signaling mechanisms for therapy and in research.

In a new publication, a team led by McGovern Institute Associate Investigator Polina Anikeeva and Oregon Health & Science University Research Assistant Professor James Frank introduce a microfiber technology to deliver and activate a drug that can be induced to bind its receptor by exposure to light.

“A significant barrier in applying light-controllable drugs to modulate neural circuits in living animals is the lack of hardware which enables simultaneous delivery of both light and drugs to the target brain area,” says Frank, who was previously a postdoctoral associate in Anikeeva’s Bioelectronics group at MIT. “Our work offers an integrated approach for on-demand delivery of light and drugs through a single fiber.”

These devices were used to deliver a “photoswitchable” drug deep into the brain. So-called “photoswitches” are light-sensitive molecules that can be attached to drugs to switch their activity on or off with a flash of light; the use of these drugs is called photopharmacology. In the new study, photopharmacology is used to control neuronal activity and behavior in mice.

Creating miniaturized devices from macroscale templates

The lightweight device features two microfluidic channels and an optical waveguide, and can easily be carried by the animal during behavior.

To use light to control drug activity, light and drugs must be delivered simultaneously to the targeted cells. This is a major challenge when the target is deep in the body, but Anikeeva’s Bioelectronics group is uniquely equipped to deal with this challenge.  Marc-Joseph (MJ) Antonini, a PhD student in Anikeeva’s Bioelectronics lab and co-first author of the study, specializes in the fabrication of biocompatible multifunctional fibers that house microfluidic channels and waveguides to deliver liquids and transmit light.

The multifunctional fibers used in this study contain a fluidic channel and an optical waveguide, and are composed of many layers of different materials that are fused together to provide flexibility and strength. The original form of the fiber is constructed at a macroscale and then heated and pulled (a process called thermal drawing) to become longer but nearly 70 times smaller in diameter. By this method, hundreds of meters of miniaturized fiber can be created from the original template, at a cross-sectional scale of micrometers that minimizes tissue damage.

The device used in this study had an implantable fiber bundle of 480µm × 380µm and weighed only 0.8 g, small enough that a mouse can easily carry it on its head for many weeks.

Synthesis of a new photoswitchable drug

To demonstrate the effectiveness of their device for simultaneous delivery of liquids and light, the Anikeeva lab teamed up with Dirk Trauner (Frank’s former PhD advisor) and David Konrad, pharmacologists who synthesized photoswitchable drugs.

They had previously modified a photoswitchable analog of capsaicin, a molecule found in hot peppers that binds to the TRPV1 receptor on sensory neurons and controls the sensation of heat. This modification allowed the capsaicin analog to be activated by 560 nm (visible green) light, which, unlike the ultraviolet light required by the original version of the drug, is not damaging to tissue. By adding both the TRPV1 receptor and the new photoswitchable capsaicin analog to neurons, the neurons could be artificially activated with green light.

This new photopharmacology system had been shown by Frank, Konrad and their colleagues to work in cells cultured in a dish, but had never been shown to work in freely-moving animals.

Controlling behavior by photopharmacology

To test whether their system could activate neurons in the brain, Frank and Antonini tested it in mice. They asked whether adding the photoswitchable drug and its receptor to reward-mediating neurons in the mouse brain causes mice to prefer a chamber in which they receive light stimulation.

The multifunctional fiber-inspired neural implant was implanted into a phantom brain (left), and successfully delivered light and a blue dye (right).

The miniaturized multifunctional fiber developed by the team was implanted in the mouse brain’s ventral tegmental area, a deep region rich in dopamine neurons that controls reward-seeking behavior. Through the fluidic channel in the device, the researchers delivered a virus that drives expression of the TRPV1 receptor in the neurons under study.  Several weeks later, the device was then used to deliver both light and the photoswitchable capsaicin analog directly to the same neurons. To control for the specificity of their system, they also tested the effects of delivering a virus that does not express the TRPV1 receptor, and the effects of delivering a wavelength of light that does not switch on the drug.

They found that mice showed a preference only for the chamber where they had previously received all three components required for the photopharmacology to function: the receptor-expressing virus, the photoswitchable receptor ligand and the green light that activates the drug. These results demonstrate the efficacy of this system to control the time and place within the body that a drug is active.

“Using these fibers to enable photopharmacology in vivo is a great example of how our multifunctional platform can be leveraged to improve and expand how we can interact with the brain,” says Antonini. “This combination of technologies allows us to achieve the temporal and spatial resolution of light stimulation with the chemical specificity of drug injection in freely moving animals.”

Therapeutic drugs that are taken orally or by injection often cause unwanted side effects because they act continuously and throughout the whole body. Many unwanted side effects could be eliminated by targeting a drug to a specific body tissue and activating it only as needed. The new technology described by Anikeeva and colleagues is one step toward this ultimate goal.

“Our next goal is to use these neural implants to deliver other photoswitchable drugs to target receptors which are naturally expressed within these circuits,” says Frank, whose new lab in the Vollum Institute at OHSU is synthesizing new light-controllable molecules. “The hardware presented in this study will be widely applicable for controlling circuits throughout the brain, enabling neuroscientists to manipulate them with enhanced precision.”

Using machine learning to track the pandemic’s impact on mental health

Dealing with a global pandemic has taken a toll on the mental health of millions of people. A team of MIT and Harvard University researchers has shown that they can measure those effects by analyzing the language that people use to express their anxiety online.

Using machine learning to analyze the text of more than 800,000 Reddit posts, the researchers were able to identify changes in the tone and content of language that people used as the first wave of the Covid-19 pandemic progressed, from January to April of 2020. Their analysis revealed several key changes in conversations about mental health, including an overall increase in discussion about anxiety and suicide.

“We found that there were these natural clusters that emerged related to suicidality and loneliness, and the amount of posts in these clusters more than doubled during the pandemic as compared to the same months of the preceding year, which is a grave concern,” says Daniel Low, a graduate student in the Program in Speech and Hearing Bioscience and Technology at Harvard and MIT and the lead author of the study.

The analysis also revealed varying impacts on people who already suffer from different types of mental illness. The findings could help psychiatrists, or potentially moderators of the Reddit forums that were studied, to better identify and help people whose mental health is suffering, the researchers say.

“When the mental health needs of so many in our society are inadequately met, even at baseline, we wanted to bring attention to the ways that many people are suffering during this time, in order to amplify and inform the allocation of resources to support them,” says Laurie Rumker, a graduate student in the Bioinformatics and Integrative Genomics PhD Program at Harvard and one of the authors of the study.

Satrajit Ghosh, a principal research scientist at MIT’s McGovern Institute for Brain Research, is the senior author of the study, which appears in the Journal of Medical Internet Research. Other authors of the paper include Tanya Talkar, a graduate student in the Program in Speech and Hearing Bioscience and Technology at Harvard and MIT; John Torous, director of the digital psychiatry division at Beth Israel Deaconess Medical Center; and Guillermo Cecchi, a principal research staff member at the IBM Thomas J. Watson Research Center.

A wave of anxiety

The new study grew out of the MIT class 6.897/HST.956 (Machine Learning for Healthcare), in MIT’s Department of Electrical Engineering and Computer Science. Low, Rumker, and Talkar, who were all taking the course last spring, had done some previous research on using machine learning to detect mental health disorders based on how people speak and what they say. After the Covid-19 pandemic began, they decided to focus their class project on analyzing Reddit forums devoted to different types of mental illness.

“When Covid hit, we were all curious whether it was affecting certain communities more than others,” Low says. “Reddit gives us the opportunity to look at all these subreddits that are specialized support groups. It’s a really unique opportunity to see how these different communities were affected differently as the wave was happening, in real-time.”

The researchers analyzed posts from 15 subreddit groups devoted to a variety of mental illnesses, including schizophrenia, depression, and bipolar disorder. They also included a handful of groups devoted to topics not specifically related to mental health, such as personal finance, fitness, and parenting.

Using several types of natural language processing algorithms, the researchers measured the frequency of words associated with topics such as anxiety, death, isolation, and substance abuse, and grouped posts together based on similarities in the language used. These approaches allowed the researchers to identify similarities between each group’s posts after the onset of the pandemic, as well as distinctive differences between groups.
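The general flavor of this analysis, vectorizing posts, grouping them by similarity of language, and tracking topic-related word use, can be sketched as below. This is a generic illustration with made-up posts; it is not the study’s pipeline, and the specific algorithms shown (TF-IDF and k-means) are assumptions rather than the methods reported in the paper.

```python
# Generic sketch of clustering posts by language and tracking topic words.
# The example posts and the word list are made up; not the study's data or code.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

posts = [
    "can't sleep, worried about everything lately",
    "lost my job and the bills keep piling up",
    "feeling so alone since the lockdown started",
]

# Group posts by similarity of the language they use.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(posts)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Track how often words tied to a topic of interest appear.
anxiety_words = {"worried", "anxious", "panic", "afraid"}
anxiety_rate = sum(
    any(w in post.lower() for w in anxiety_words) for post in posts
) / len(posts)

print(clusters, anxiety_rate)
```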

The researchers found that while people in most of the support groups began posting about Covid-19 in March, the group devoted to health anxiety started much earlier, in January. However, as the pandemic progressed, the other mental health groups began to closely resemble the health anxiety group, in terms of the language that was most often used. At the same time, the group devoted to personal finance showed the most negative semantic change from January to April 2020, and significantly increased the use of words related to economic stress and negative sentiment.

They also discovered that the mental health groups most negatively affected early in the pandemic were those related to ADHD and eating disorders. The researchers hypothesize that without their usual social support systems in place, due to lockdowns, people suffering from those disorders found it much more difficult to manage their conditions. In those groups, the researchers found posts about hyperfocusing on the news and relapsing into anorexia-type behaviors since meals were not being monitored by others due to quarantine.

Using another algorithm, the researchers grouped posts into clusters such as loneliness or substance use, and then tracked how those groups changed as the pandemic progressed. Posts related to suicide more than doubled from pre-pandemic levels, and the groups that became significantly associated with the suicidality cluster during the pandemic were the support groups for borderline personality disorder and post-traumatic stress disorder.

The researchers also found the introduction of new topics specifically seeking mental health help or social interaction. “The topics within these subreddit support groups were shifting a bit, as people were trying to adapt to a new life and focus on how they can go about getting more help if needed,” Talkar says.

While the authors emphasize that they cannot implicate the pandemic as the sole cause of the observed linguistic changes, they note that there was much more significant change during the period from January to April in 2020 than in the same months in 2019 and 2018, indicating the changes cannot be explained by normal annual trends.

Mental health resources

This type of analysis could help mental health care providers identify segments of the population that are most vulnerable to declines in mental health caused by not only the Covid-19 pandemic but other mental health stressors such as controversial elections or natural disasters, the researchers say.

Additionally, if applied to Reddit or other social media posts in real-time, this analysis could be used to offer users additional resources, such as guidance to a different support group, information on how to find mental health treatment, or the number for a suicide hotline.

“Reddit is a very valuable source of support for a lot of people who are suffering from mental health challenges, many of whom may not have formal access to other kinds of mental health support, so there are implications of this work for ways that support within Reddit could be provided,” Rumker says.

The researchers now plan to apply this approach to study whether posts on Reddit and other social media sites can be used to detect mental health disorders. One current project involves screening posts in a social media site for veterans for suicide risk and post-traumatic stress disorder.

The research was funded by the National Institutes of Health and the McGovern Institute.