Imaging method reveals a “symphony of cellular activities”

Within a single cell, thousands of molecules, such as proteins, ions, and other signaling molecules, work together to perform all kinds of functions — absorbing nutrients, storing memories, and differentiating into specific tissues, among many others.

Deciphering these molecules, and all of their interactions, is a monumental task. Over the past 20 years, scientists have developed fluorescent reporters they can use to read out the dynamics of individual molecules within cells. However, typically only one or two such signals can be observed at a time, because a microscope cannot distinguish between many fluorescent colors.

MIT researchers have now developed a way to image up to five different molecule types at a time, by measuring each signal from random, distinct locations throughout a cell.

This approach could allow scientists to learn much more about the complex signaling networks that control most cell functions, says Edward Boyden, the Y. Eva Tan Professor in Neurotechnology and a professor of biological engineering, media arts and sciences, and brain and cognitive sciences at MIT.

“There are thousands of molecules encoded by the genome, and they’re interacting in ways that we don’t understand. Only by watching them at the same time can we understand their relationships,” says Boyden, who is also a member of MIT’s McGovern Institute for Brain Research and Koch Institute for Integrative Cancer Research.

In a new study, Boyden and his colleagues used this technique to identify two populations of neurons that respond to calcium signals in different ways, which may influence how they encode long-term memories, the researchers say.

Boyden is the senior author of the study, which appears today in Cell. The paper’s lead authors are MIT postdoc Changyang Linghu and graduate student Shannon Johnson.

Fluorescent clusters

Shannon Johnson is a graduate fellow in the Yang-Tan Center for Molecular Therapeutics.

To make molecular activity visible within a cell, scientists typically create reporters by fusing a protein that senses a target molecule to a protein that glows. “This is similar to how a smoke detector will sense smoke and then flash a light,” says Johnson, who is also a fellow in the Yang-Tan Center for Molecular Therapeutics. The most commonly used glowing protein is green fluorescent protein (GFP), which is based on a molecule originally found in a fluorescent jellyfish.

“Typically a biologist can see one or two colors at the same time on a microscope, and many of the reporters out there are green, because they’re based on the green fluorescent protein,” Boyden says. “What has been lacking until now is the ability to see more than a couple of these signals at once.”

“Just like listening to the sound of a single instrument from an orchestra is far from enough to fully appreciate a symphony,” Linghu says, “by enabling observations of multiple cellular signals at the same time, our technology will help us understand the ‘symphony’ of cellular activities.”

To boost the number of signals they could see, the researchers set out to identify signals by location instead of by color. They modified existing reporters to cause them to accumulate in clusters at different locations within a cell. They did this by adding two small peptides to each reporter, which helped the reporters form distinct clusters within cells.

“It’s like having reporter X be tethered to a LEGO brick, and reporter Z tethered to a K’NEX piece — only LEGO bricks will snap to other LEGO bricks, causing only reporter X to be clustered with more of reporter X,” Johnson says.

Changyang Linghu is the J. Douglas Tan Postdoctoral Fellow in the Hock E. Tan and K. Lisa Yang Center for Autism Research.

With this technique, each cell ends up with hundreds of clusters of fluorescent reporters. After measuring the activity of each cluster under a microscope, based on its changing fluorescence, the researchers can identify which molecule each cluster was measuring by preserving the cell and staining for peptide tags that are unique to each reporter. The peptide tags are invisible in the live cell, but they can be stained and seen after the live imaging is done. This allows the researchers to distinguish signals for different molecules even though they may all fluoresce the same color in the live cell.
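The bookkeeping behind this step can be sketched in code. In this illustrative example (the cluster locations, stain labels, and traces are invented, not data from the study), each cluster's live-imaging trace is matched to the reporter identity revealed by the post-hoc peptide-tag stain at the same location, so all traces for one molecule can be pooled:

```python
from collections import defaultdict

# Live imaging: one fluorescence time series per cluster, keyed by the
# cluster's (x, y) location in the cell (hypothetical values).
live_traces = {
    (12, 40): [1.0, 1.4, 2.1, 1.2],
    (55, 18): [0.9, 0.8, 0.9, 1.0],
    (30, 72): [1.1, 1.9, 2.5, 1.6],
}

# Post-hoc staining: the same locations, now labeled by the peptide tag
# that identifies which reporter each cluster carries.
stain_labels = {
    (12, 40): "calcium",
    (55, 18): "cAMP",
    (30, 72): "calcium",
}

# Pool the traces by molecule: each cluster's trace is assigned to the
# reporter identity that its stain revealed.
traces_by_molecule = defaultdict(list)
for location, trace in live_traces.items():
    traces_by_molecule[stain_labels[location]].append(trace)

for molecule, traces in sorted(traces_by_molecule.items()):
    print(molecule, len(traces), "cluster(s)")
```

The key point is that the live recording never needs to know the identities; the stain, applied afterward, supplies the lookup table.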

Using this approach, the researchers showed that they could see five different molecular signals in a single cell. To demonstrate the potential usefulness of this strategy, they measured the activities of three molecules in parallel — calcium, cyclic AMP, and protein kinase A (PKA). These molecules form a signaling network that is involved with many different cellular functions throughout the body. In neurons, it plays an important role in translating a short-term input (from upstream neurons) into long-term changes such as strengthening the connections between neurons — a process that is necessary for learning and forming new memories.

Applying this imaging technique to pyramidal neurons in the hippocampus, the researchers identified two novel subpopulations with different calcium signaling dynamics. One population showed slow calcium responses; the other showed faster calcium responses accompanied by larger PKA responses. The researchers believe this heightened response may help sustain long-lasting changes in the neurons.

Imaging signaling networks

The researchers now plan to try this approach in living animals so they can study how signaling network activities relate to behavior, and also to expand it to other types of cells, such as immune cells. This technique could also be useful for comparing signaling network patterns between cells from healthy and diseased tissue.

In this paper, the researchers showed they could record five different molecular signals at once, and by modifying their existing strategy, they believe they could get up to 16. With additional work, that number could reach into the hundreds, they say.

“That really might help crack open some of these tough questions about how the parts of a cell work together,” Boyden says. “One might imagine an era when we can watch everything going on in a living cell, or at least the part involved with learning, or with disease, or with the treatment of a disease.”

The research was funded by the Friends of the McGovern Institute Fellowship; the J. Douglas Tan Fellowship; Lisa Yang; the Yang-Tan Center for Molecular Therapeutics; John Doerr; the Open Philanthropy Project; the HHMI-Simons Faculty Scholars Program; the Human Frontier Science Program; the U.S. Army Research Laboratory; the MIT Media Lab; the Picower Institute Innovation Fund; the National Institutes of Health, including an NIH Director’s Pioneer Award; and the National Science Foundation.

A hunger for social contact

Since the coronavirus pandemic began in the spring, many people have only seen their close friends and loved ones during video calls, if at all. A new study from MIT finds that the longings we feel during this kind of social isolation share a neural basis with the food cravings we feel when hungry.

The researchers found that after one day of total isolation, the sight of people having fun together activates the same brain region that lights up when someone who hasn’t eaten all day sees a picture of a plate of cheesy pasta.

“People who are forced to be isolated crave social interactions similarly to the way a hungry person craves food.”

“Our finding fits the intuitive idea that positive social interactions are a basic human need, and acute loneliness is an aversive state that motivates people to repair what is lacking, similar to hunger,” says Rebecca Saxe, the John W. Jarve Professor of Brain and Cognitive Sciences at MIT, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

The research team collected the data for this study in 2018 and 2019, long before the coronavirus pandemic and resulting lockdowns. Their new findings, described today in Nature Neuroscience, are part of a larger research program focusing on how social stress affects people’s behavior and motivation.

Former MIT postdoc Livia Tomova, who is now a research associate at Cambridge University, is the lead author of the paper. Other authors include Kimberly Wang, a McGovern Institute research associate; Todd Thompson, a McGovern Institute scientist; Atsushi Takahashi, assistant director of the Martinos Imaging Center; Gillian Matthews, a research scientist at the Salk Institute for Biological Studies; and Kay Tye, a professor at the Salk Institute.

Social craving

The new study was partly inspired by a recent paper from Tye, a former member of MIT’s Picower Institute for Learning and Memory. In that 2016 study, she and Matthews, then an MIT postdoc, identified a cluster of neurons in the brains of mice that represent feelings of loneliness and generate a drive for social interaction following isolation. Studies in humans have shown that being deprived of social contact can lead to emotional distress, but the neurological basis of these feelings is not well understood.

“We wanted to see if we could experimentally induce a certain kind of social stress, where we would have control over what the social stress was,” Saxe says. “It’s a stronger intervention of social isolation than anyone had tried before.”

To create that isolation environment, the researchers enlisted healthy volunteers, who were mainly college students, and confined them to a windowless room on MIT’s campus for 10 hours. They were not allowed to use their phones, but the room did have a computer that they could use to contact the researchers if necessary.

“There were a whole bunch of interventions we used to make sure that it would really feel strange and different and isolated,” Saxe says. “They had to let us know when they were going to the bathroom so we could make sure it was empty. We delivered food to the door and then texted them when it was there so they could go get it. They really were not allowed to see people.”

After the 10-hour isolation ended, each participant was scanned in an MRI machine. This posed additional challenges, as the researchers wanted to avoid any social contact during the scanning. Before the isolation period began, each subject was trained on how to get into the machine, so that they could do it by themselves, without any help from the researcher.

“Normally, getting somebody into an MRI machine is actually a really social process. We engage in all kinds of social interactions to make sure people understand what we’re asking them, that they feel safe, that they know we’re there,” Saxe says. “In this case, the subjects had to do it all by themselves, while the researcher, who was gowned and masked, just stood silently by and watched.”

Each of the 40 participants also underwent 10 hours of fasting, on a different day. After the 10-hour period of isolation or fasting, the participants were scanned while looking at images of food, images of people interacting, and neutral images such as flowers. The researchers focused on a part of the brain called the substantia nigra, a tiny structure located in the midbrain, which has previously been linked with hunger cravings and drug cravings. The substantia nigra is also believed to share evolutionary origins with a brain region in mice called the dorsal raphe nucleus, which is the area that Tye’s lab showed was active following social isolation in their 2016 study.

The researchers hypothesized that when socially isolated subjects saw photos of people enjoying social interactions, the “craving signal” in their substantia nigra would be similar to the signal produced when they saw pictures of food after fasting. This was indeed the case. Furthermore, the amount of activation in the substantia nigra was correlated with how strongly the participants rated their feelings of craving either food or social interaction.
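The correlation reported here is a standard analysis. As an illustrative sketch (the per-participant activation values and craving ratings below are invented, not the study's data), a Pearson correlation between the two measures could be computed like this:

```python
from statistics import mean, stdev

# Hypothetical per-participant values (not data from the study).
activation = [0.2, 0.5, 0.9, 1.1, 1.4]   # substantia nigra signal change
craving    = [1.0, 2.0, 3.5, 4.0, 4.5]   # self-reported craving rating

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

print(round(pearson_r(activation, craving), 3))
```

A value near +1 would correspond to the reported pattern: the more strongly a participant craved, the larger the activation.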

Degrees of loneliness

The researchers also found that people’s responses to isolation varied depending on their normal levels of loneliness. People who reported feeling chronically isolated months before the study was done showed weaker cravings for social interaction after the 10-hour isolation period than people who reported a richer social life.

“For people who reported that their lives were really full of satisfying social interactions, this intervention had a bigger effect on their brains and on their self-reports,” Saxe says.

The researchers also looked at activation patterns in other parts of the brain, including the striatum and the cortex, and found that hunger and isolation each activated distinct areas of those regions. That suggests that those areas are more specialized to respond to different types of longings, while the substantia nigra produces a more general signal representing a variety of cravings.

Now that the researchers have established that they can observe the effects of social isolation on brain activity, Saxe says, they can try to answer many additional questions. Those questions include how social isolation affects people’s behavior, whether virtual social contact such as video calls helps to alleviate cravings for social interaction, and how isolation affects different age groups.

The researchers also hope to study whether the brain responses that they saw in this study could be used to predict how the same participants responded to being isolated during the lockdowns imposed during the early stages of the coronavirus pandemic.

The research was funded by a SFARI Explorer Grant from the Simons Foundation, a MINT grant from the McGovern Institute, the National Institutes of Health, including an NIH Pioneer Award, a Max Kade Foundation Fellowship, and an Erwin Schroedinger Fellowship from the Austrian Science Fund.

Controlling drug activity with light

Hormones and nutrients bind to receptors on cell surfaces by a lock-and-key mechanism that triggers intracellular events linked to that specific receptor. Drugs that mimic natural molecules are widely used to control these intracellular signaling mechanisms for therapy and in research.

In a new publication, a team led by McGovern Institute Associate Investigator Polina Anikeeva and Oregon Health & Science University Research Assistant Professor James Frank introduces a microfiber technology to deliver and activate a drug that can be induced to bind its receptor by exposure to light.

“A significant barrier in applying light-controllable drugs to modulate neural circuits in living animals is the lack of hardware which enables simultaneous delivery of both light and drugs to the target brain area,” says Frank, who was previously a postdoctoral associate in Anikeeva’s Bioelectronics group at MIT. “Our work offers an integrated approach for on-demand delivery of light and drugs through a single fiber.”

These devices were used to deliver a “photoswitchable” drug deep into the brain. So-called “photoswitches” are light-sensitive molecules that can be attached to drugs so that their activity can be switched on or off with a flash of light; the use of these drugs is called photopharmacology. In the new study, photopharmacology is used to control neuronal activity and behavior in mice.

Creating miniaturized devices from macroscale templates

The lightweight device features two microfluidic channels and an optical waveguide, and can easily be carried by the animal during behavior.

To use light to control drug activity, light and drugs must be delivered simultaneously to the targeted cells. This is a major challenge when the target is deep in the body, but Anikeeva’s Bioelectronics group is uniquely equipped to deal with it. Marc-Joseph (MJ) Antonini, a PhD student in Anikeeva’s Bioelectronics lab and co-first author of the study, specializes in the fabrication of biocompatible multifunctional fibers that house microfluidic channels and waveguides to deliver liquids and transmit light.

The multifunctional fibers used in this study contain a fluidic channel and an optical waveguide, and are composed of many layers of different materials fused together to provide flexibility and strength. The fiber is first constructed as a macroscale template, then heated and pulled (a process called thermal drawing) until it becomes much longer but nearly 70 times smaller in diameter. By this method, hundreds of meters of miniaturized fiber can be created from the original template, at a cross-sectional scale of micrometers that minimizes tissue damage.
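The jump from a short template to hundreds of meters of fiber follows from conservation of volume during drawing: if the diameter shrinks by a factor of about 70, the cross-sectional area shrinks by 70², so the length grows by roughly that same factor. A quick sketch (the 10 cm preform length here is an assumed, illustrative value, not a figure from the study):

```python
# Thermal drawing approximately conserves volume: area * length is constant.
# A ~70x reduction in diameter means a ~70**2 reduction in cross-sectional
# area, and hence a ~70**2 increase in length.
draw_ratio = 70            # diameter reduction factor (from the study)
preform_length_m = 0.10    # assumed 10 cm macroscale template

fiber_length_m = preform_length_m * draw_ratio**2
print(f"{fiber_length_m:.0f} m of fiber")  # hundreds of meters
```

Under these assumptions a hand-sized preform yields roughly 490 m of micrometer-scale fiber, consistent with the "hundreds of meters" the process produces.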

The device used in this study had an implantable fiber bundle measuring 480 µm × 380 µm and weighed only 0.8 g, small enough that a mouse can easily carry it on its head for many weeks.

Synthesis of a new photoswitchable drug

To demonstrate the effectiveness of their device for simultaneous delivery of liquids and light, the Anikeeva lab teamed up with Dirk Trauner (Frank’s former PhD advisor) and David Konrad, pharmacologists who synthesized photoswitchable drugs.

They had previously modified a photoswitchable analog of capsaicin, a molecule found in hot peppers that binds to the TRPV1 receptor on sensory neurons and controls the sensation of heat. The modification allowed the capsaicin analog to be activated by 560 nm (visible green) light, which, unlike the ultraviolet light required by the original version of the drug, does not damage tissue. By adding both the TRPV1 receptor and the new photoswitchable capsaicin analog to neurons, the researchers could artificially activate those neurons with green light.

This new photopharmacology system had been shown by Frank, Konrad, and their colleagues to work in cells cultured in a dish, but had never been tested in freely moving animals.

Controlling behavior by photopharmacology

To test whether their system could activate neurons in the brain, Frank and Antonini tested it in mice. They asked whether adding the photoswitchable drug and its receptor to reward-mediating neurons in the mouse brain causes mice to prefer a chamber in which they receive light stimulation.

The multifunctional fiber-inspired neural implant was implanted into a phantom brain (left), and successfully delivered light and a blue dye (right).

The miniaturized multifunctional fiber developed by the team was implanted in the mouse brain’s ventral tegmental area, a deep region rich in dopamine neurons that controls reward-seeking behavior. Through the fluidic channel in the device, the researchers delivered a virus that drives expression of the TRPV1 receptor in the neurons under study. Several weeks later, the device was used to deliver both light and the photoswitchable capsaicin analog directly to the same neurons. To control for the specificity of their system, they also tested the effects of delivering a virus that does not express the TRPV1 receptor, and the effects of delivering a wavelength of light that does not switch on the drug.

They found that mice showed a preference only for the chamber where they had previously received all three components required for the photopharmacology to function: the receptor-expressing virus, the photoswitchable receptor ligand and the green light that activates the drug. These results demonstrate the efficacy of this system to control the time and place within the body that a drug is active.

“Using these fibers to enable photopharmacology in vivo is a great example of how our multifunctional platform can be leveraged to improve and expand how we can interact with the brain,” says Antonini. “This combination of technologies allows us to achieve the temporal and spatial resolution of light stimulation with the chemical specificity of drug injection in freely moving animals.”

Therapeutic drugs that are taken orally or by injection often cause unwanted side effects because they act continuously and throughout the whole body. Many of these side effects could be eliminated by targeting a drug to a specific tissue and activating it only as needed. The new technology described by Anikeeva and colleagues is one step toward this goal.

“Our next goal is to use these neural implants to deliver other photoswitchable drugs to target receptors which are naturally expressed within these circuits,” says Frank, whose new lab in the Vollum Institute at OHSU is synthesizing new light-controllable molecules. “The hardware presented in this study will be widely applicable for controlling circuits throughout the brain, enabling neuroscientists to manipulate them with enhanced precision.”

Using machine learning to track the pandemic’s impact on mental health

Dealing with a global pandemic has taken a toll on the mental health of millions of people. A team of MIT and Harvard University researchers has shown that they can measure those effects by analyzing the language that people use to express their anxiety online.

Using machine learning to analyze the text of more than 800,000 Reddit posts, the researchers were able to identify changes in the tone and content of language that people used as the first wave of the Covid-19 pandemic progressed, from January to April of 2020. Their analysis revealed several key changes in conversations about mental health, including an overall increase in discussion about anxiety and suicide.

“We found that there were these natural clusters that emerged related to suicidality and loneliness, and the amount of posts in these clusters more than doubled during the pandemic as compared to the same months of the preceding year, which is a grave concern,” says Daniel Low, a graduate student in the Program in Speech and Hearing Bioscience and Technology at Harvard and MIT and the lead author of the study.

The analysis also revealed varying impacts on people who already suffer from different types of mental illness. The findings could help psychiatrists, or potentially moderators of the Reddit forums that were studied, to better identify and help people whose mental health is suffering, the researchers say.

“When the mental health needs of so many in our society are inadequately met, even at baseline, we wanted to bring attention to the ways that many people are suffering during this time, in order to amplify and inform the allocation of resources to support them,” says Laurie Rumker, a graduate student in the Bioinformatics and Integrative Genomics PhD Program at Harvard and one of the authors of the study.

Satrajit Ghosh, a principal research scientist at MIT’s McGovern Institute for Brain Research, is the senior author of the study, which appears in the Journal of Medical Internet Research. Other authors of the paper include Tanya Talkar, a graduate student in the Program in Speech and Hearing Bioscience and Technology at Harvard and MIT; John Torous, director of the digital psychiatry division at Beth Israel Deaconess Medical Center; and Guillermo Cecchi, a principal research staff member at the IBM Thomas J. Watson Research Center.

A wave of anxiety

The new study grew out of the MIT class 6.897/HST.956 (Machine Learning for Healthcare), in MIT’s Department of Electrical Engineering and Computer Science. Low, Rumker, and Talkar, who were all taking the course last spring, had done some previous research on using machine learning to detect mental health disorders based on how people speak and what they say. After the Covid-19 pandemic began, they decided to focus their class project on analyzing Reddit forums devoted to different types of mental illness.

“When Covid hit, we were all curious whether it was affecting certain communities more than others,” Low says. “Reddit gives us the opportunity to look at all these subreddits that are specialized support groups. It’s a really unique opportunity to see how these different communities were affected differently as the wave was happening, in real-time.”

The researchers analyzed posts from 15 subreddit groups devoted to a variety of mental illnesses, including schizophrenia, depression, and bipolar disorder. They also included a handful of groups devoted to topics not specifically related to mental health, such as personal finance, fitness, and parenting.

Using several types of natural language processing algorithms, the researchers measured the frequency of words associated with topics such as anxiety, death, isolation, and substance abuse, and grouped posts together based on similarities in the language used. These approaches allowed the researchers to identify similarities between each group’s posts after the onset of the pandemic, as well as distinctive differences between groups.
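As a toy illustration of the word-frequency step (the posts, the keyword lexicon, and the resulting numbers below are invented for demonstration; this is not the study's data or its actual algorithms), one can track how often anxiety-related words appear in each month's posts:

```python
import re
from collections import Counter

# Hypothetical posts keyed by month (not real Reddit data).
posts_by_month = {
    "2020-01": ["slept fine and went for a run", "budget is on track"],
    "2020-03": ["so worried about the virus", "anxious and alone at home"],
    "2020-04": ["panic about money and isolation", "worried I will relapse"],
}

# Small illustrative lexicon of anxiety-related words.
ANXIETY_WORDS = {"worried", "anxious", "panic", "alone", "isolation"}

def anxiety_rate(posts):
    """Fraction of word tokens that belong to the anxiety lexicon."""
    tokens = [w for p in posts for w in re.findall(r"[a-z']+", p.lower())]
    counts = Counter(tokens)
    hits = sum(counts[w] for w in ANXIETY_WORDS)
    return hits / len(tokens)

for month, posts in sorted(posts_by_month.items()):
    print(month, round(anxiety_rate(posts), 3))
```

A rising rate across months would mirror the kind of shift the study measured, though the actual analysis used richer language models and clustering rather than a fixed keyword list.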

The researchers found that while people in most of the support groups began posting about Covid-19 in March, the group devoted to health anxiety started much earlier, in January. However, as the pandemic progressed, the other mental health groups began to closely resemble the health anxiety group, in terms of the language that was most often used. At the same time, the group devoted to personal finance showed the most negative semantic change from January to April 2020, and significantly increased the use of words related to economic stress and negative sentiment.

They also discovered that the mental health groups affected most negatively early in the pandemic were those related to ADHD and eating disorders. The researchers hypothesize that without their usual social support systems in place, due to lockdowns, people suffering from those disorders found it much more difficult to manage their conditions. In those groups, the researchers found posts about hyperfocusing on the news, and about relapsing into anorexia-type behaviors because quarantine meant meals were no longer being monitored by others.

Using another algorithm, the researchers grouped posts into clusters such as loneliness or substance use, and then tracked how those groups changed as the pandemic progressed. Posts related to suicide more than doubled from pre-pandemic levels, and the groups that became significantly associated with the suicidality cluster during the pandemic were the support groups for borderline personality disorder and post-traumatic stress disorder.

The researchers also found that new topics emerged, focused specifically on seeking mental health help or social interaction. “The topics within these subreddit support groups were shifting a bit, as people were trying to adapt to a new life and focus on how they can go about getting more help if needed,” Talkar says.

While the authors emphasize that they cannot implicate the pandemic as the sole cause of the observed linguistic changes, they note that there was much more significant change during the period from January to April in 2020 than in the same months in 2019 and 2018, indicating the changes cannot be explained by normal annual trends.

Mental health resources

This type of analysis could help mental health care providers identify segments of the population that are most vulnerable to declines in mental health caused by not only the Covid-19 pandemic but other mental health stressors such as controversial elections or natural disasters, the researchers say.

Additionally, if applied to Reddit or other social media posts in real time, this analysis could be used to offer users additional resources, such as guidance to a different support group, information on how to find mental health treatment, or the number for a suicide hotline.

“Reddit is a very valuable source of support for a lot of people who are suffering from mental health challenges, many of whom may not have formal access to other kinds of mental health support, so there are implications of this work for ways that support within Reddit could be provided,” Rumker says.

The researchers now plan to apply this approach to study whether posts on Reddit and other social media sites can be used to detect mental health disorders. One current project involves screening posts in a social media site for veterans for suicide risk and post-traumatic stress disorder.

The research was funded by the National Institutes of Health and the McGovern Institute.

Identifying the structure and function of a brain hub

Our ability to pay attention, plan, and troubleshoot involves cognitive processing by the brain’s prefrontal cortex. The balance of activity between excitatory and inhibitory neurons in the cortex, shaped by local neural circuits and distant inputs, is key to these cognitive functions.

A recent study from the McGovern Institute shows that excitatory inputs from the thalamus activate a local inhibitory circuit in the prefrontal cortex, revealing new insights into how these cognitive circuits may be controlled.

“For the field, systematic identification of these circuits is crucial in understanding behavioral flexibility and interpreting psychiatric disorders in terms of dysfunction of specific microcircuits,” says postdoctoral associate Arghya Mukherjee, lead author on the report.

Hub of activity

The thalamus is located in the center of the brain and is considered a cerebral hub based on its inputs from a diverse array of brain regions and outputs to the striatum, hippocampus, and cerebral cortex. More than 60 thalamic nuclei (cellular regions) have been defined and are broadly divided into “sensory” or “higher-order” thalamic regions based on whether they relay primary sensory inputs or instead have inputs exclusively from the cerebrum.

Considering the fundamental distinction between the input connections of the sensory and higher-order thalamus, Mukherjee, a researcher in the lab of Michael Halassa, the Class of 1958 Career Development Professor in MIT’s Department of Brain and Cognitive Sciences, decided to explore whether there are similarly profound distinctions in their outputs to the cerebral cortex.

He addressed this question in mice by directly comparing the outputs of the medial geniculate body (MGB), a sensory thalamic region, and the mediodorsal thalamus (MD), a higher-order thalamic region. The researchers selected these two regions because the relatively accessible MGB nucleus relays auditory signals to cerebral cortical regions that process sound, and the MD interconnects regions of the prefrontal cortex.

Their study, now available as a preprint in eLife, describes key functional and anatomical differences between these two thalamic circuits. These findings build on Halassa’s previous work showing that outputs from higher-order thalamic nuclei play a central role in cognitive processing.

A side by side comparison of the two microcircuits: (Left) MD receives its primary inputs (black) from the frontal cortex and sends back inhibition dominant outputs to multiple layers of the prefrontal cortex. (Right) MGB receives its primary input (black) from the auditory midbrain and acts as a ‘relay’ by sending excitation dominant outputs specifically to layer 4 of the auditory cortex. Image: Arghya Mukherjee

Circuit analysis

Using cutting-edge stimulation and recording methods, the researchers found that neurons in the prefrontal and auditory cortices have dramatically different responses to activation of their respective MD and MGB inputs.

The researchers stimulated the MD-prefrontal and MGB-auditory cortex circuits using optogenetic technology and recorded the response to this stimulation with custom multi-electrode scaffolds that hold independently movable micro-drives for recording hundreds of neurons in the cortex. When MGB neurons were stimulated with light, there was strong activation of neurons in the auditory cortex. By contrast, MD stimulation caused a suppression of neuron firing in the prefrontal cortex and concurrent activation of local inhibitory interneurons. The separate activation of the two thalamocortical circuits had dramatically different impacts on cortical output, with the sensory thalamus seeming to promote feed-forward activity and the higher-order thalamus stimulating inhibitory microcircuits within the cortical target region.

“The textbook view of the thalamus is an excitatory cortical input, and the fact that turning on a thalamic circuit leads to a net cortical inhibition was quite striking and not something you would have expected based on reading the literature,” says Halassa, who is also an associate investigator at the McGovern Institute. “Arghya and his colleagues did an amazing job following that up with detailed anatomy to explain why this effect might be so.”

Anatomical differences

Using a system called mammalian GFP (green fluorescent protein) reconstitution across synaptic partners (mGRASP), the researchers demonstrated that MD and MGB projections target different types of cortical neurons, offering a possible explanation for their differing effects on cortical activity.

With mGRASP, the presynaptic terminal (in this case, in MD or MGB) expresses one part of the fluorescent protein and the postsynaptic neuron (in this case, in prefrontal or auditory cortex) expresses the other part; neither part fluoresces on its own. Only when there is a close synaptic connection do the two parts of GFP come together and fluoresce. These experiments showed that MD neurons synapse more frequently onto inhibitory interneurons in the prefrontal cortex, whereas MGB neurons synapse onto excitatory neurons with larger synapses, consistent with only MGB being a strong driver of cortical activity.

Using fluorescent viral vectors that can cross the synapses of interconnected neurons, a technology developed by McGovern principal research scientist Ian Wickersham, the researchers were also able to map the inputs to the MD and MGB thalamic regions. Viruses such as rabies are well suited to tracing neural connections because they have evolved to spread from neuron to neuron across synaptic junctions.

The inputs to the targeted higher-order and sensory thalamocortical neurons identified across the brain appeared to arise respectively from forebrain and midbrain sensory regions, as expected. The MGB inputs were consistent with a sensory relay function, arising primarily from the auditory input pathway. By contrast, MD inputs arose from a wide array of cerebral cortical regions and basal ganglia circuits, consistent with MD receiving contextual and motor command information.

Direct comparisons

By directly comparing these microcircuits, the Halassa lab has revealed important clues about the function and anatomy of these sensory and higher-order brain connections. It is only through a systematic understanding of these circuits that we can begin to interpret how their dysfunction may contribute to psychiatric disorders like schizophrenia.

It is this basic scientific inquiry that often fuels their research, says Halassa. “Excitement about science is part of the glue that holds us all together.”

Study helps explain why motivation to learn declines with age

As people age, they often lose their motivation to learn new things or engage in everyday activities. In a study of mice, MIT neuroscientists have now identified a brain circuit that is critical for maintaining this kind of motivation.

This circuit is particularly important for learning to make decisions that require evaluating the cost and reward that come with a particular action. The researchers showed that they could boost older mice’s motivation to engage in this type of learning by reactivating this circuit, and they could also decrease motivation by suppressing the circuit.

“As we age, it’s harder to have a get-up-and-go attitude toward things,” says Ann Graybiel, an Institute Professor at MIT and member of the McGovern Institute for Brain Research. “This get-up-and-go, or engagement, is important for our social well-being and for learning — it’s tough to learn if you aren’t attending and engaged.”

Graybiel is the senior author of the study, which appears today in Cell. The paper’s lead authors are Alexander Friedman, a former MIT research scientist who is now an assistant professor at the University of Texas at El Paso, and Emily Hueske, an MIT research scientist.

Evaluating cost and benefit

The striatum is part of the basal ganglia — a collection of brain centers linked to habit formation, control of voluntary movement, emotion, and addiction. For several decades, Graybiel’s lab has been studying clusters of cells called striosomes, which are distributed throughout the striatum. Graybiel discovered striosomes many years ago, but their function had remained mysterious, in part because they are so small and deep within the brain that it is difficult to image them with functional magnetic resonance imaging (fMRI).

In recent years, Friedman, Graybiel, and colleagues including MIT research fellow Ken-ichi Amemori have discovered that striosomes play an important role in a type of decision-making known as approach-avoidance conflict. These decisions involve choosing whether to take the good with the bad — or to avoid both — when given options that have both positive and negative elements. An example of this kind of decision is having to choose whether to take a job that pays more but forces a move away from family and friends. Such decisions often provoke great anxiety.

In a related study, Graybiel’s lab found that striosomes connect to cells of the substantia nigra, one of the brain’s major dopamine-producing centers. These studies led the researchers to hypothesize that striosomes may be acting as a gatekeeper that absorbs sensory and emotional information coming from the cortex and integrates it to produce a decision on how to act. These actions can then be invigorated by the dopamine-producing cells.

The researchers later discovered that chronic stress has a major impact on this circuit and on this kind of emotional decision-making. In a 2017 study performed in rats and mice, they showed that stressed animals were far more likely to choose high-risk, high-payoff options, but that they could block this effect by manipulating the circuit.

In the new Cell study, the researchers set out to investigate what happens in striosomes as mice learn how to make these kinds of decisions. To do that, they measured and analyzed the activity of striosomes as mice learned to choose between positive and negative outcomes.

During the experiments, the mice heard two different tones, one accompanied by a reward (sugar water) and the other paired with a mildly aversive stimulus (bright light). The mice gradually learned that if they licked a spout more when they heard the first tone, they would get more of the sugar water, and if they licked less during the second tone, the light would not be as bright.

Learning to perform this kind of task requires assigning value to each cost and each reward. The researchers found that as the mice learned the task, striosomes showed higher activity than other parts of the striatum, and that this activity correlated with the mice’s behavioral responses to both of the tones. This suggests that striosomes could be critical for assigning subjective value to a particular outcome.

“In order to survive, in order to do whatever you are doing, you constantly need to be able to learn. You need to learn what is good for you, and what is bad for you,” Friedman says.

“A person, or in this case a mouse, may value a reward so highly that the risk of experiencing a possible cost is overwhelmed, while another may wish to avoid the cost to the exclusion of all rewards. And these may result in reward-driven learning in some and cost-driven learning in others,” Hueske says.

The researchers found that inhibitory neurons that relay signals from the prefrontal cortex help striosomes to enhance their signal-to-noise ratio, which helps to generate the strong signals that are seen when the mice evaluate a high-cost or high-reward option.

Loss of motivation

Next, the researchers found that in older mice (between 13 and 21 months, roughly equivalent to people in their 60s and older), engagement in learning this type of cost-benefit analysis went down. At the same time, striosomal activity declined compared to that of younger mice. The researchers found a similar loss of motivation in a mouse model of Huntington’s disease, a neurodegenerative disorder that affects the striatum and its striosomes.

When the researchers used genetically targeted drugs to boost activity in the striosomes, they found that the mice became more engaged in performing the task. Conversely, suppressing striosomal activity led to disengagement.

In addition to normal age-related decline, many mental health disorders can skew the ability to evaluate the costs and rewards of an action, from anxiety and depression to conditions such as PTSD. For example, a depressed person may undervalue potentially rewarding experiences, while someone suffering from addiction may overvalue drugs but undervalue things like their job or their family.

The researchers are now working on possible drug treatments that could stimulate this circuit, and they suggest that training patients to enhance activity in this circuit through biofeedback could offer another potential way to improve their cost-benefit evaluations.

“If you could pinpoint a mechanism which is underlying the subjective evaluation of reward and cost, and use a modern technique that could manipulate it, either psychiatrically or with biofeedback, patients may be able to activate their circuits correctly,” Friedman says.

The research was funded by the CHDI Foundation, the Saks Kavanaugh Foundation, the National Institutes of Health, the Nancy Lurie Marks Family Foundation, the Bachmann-Strauss Dystonia and Parkinson’s Foundation, the William N. and Bernice E. Bumpus Foundation, the Simons Center for the Social Brain, the Kristin R. Pressman and Jessica J. Pourian ’13 Fund, Michael Stiefel, and Robert Buxton.

Robert Desimone to receive the Fred Kavli Distinguished Career Contributions Award

Robert Desimone, the Doris and Don Berkey Professor in Brain and Cognitive Sciences at MIT, has been recognized by the Cognitive Neuroscience Society as this year’s winner of the Fred Kavli Distinguished Career Contributions (DCC) award. Supported annually by the Kavli Foundation, the award honors senior cognitive neuroscientists for their distinguished career, leadership and mentoring in the field of cognitive neuroscience.

Desimone, who is also the director of the McGovern Institute for Brain Research, studies the brain mechanisms underlying attention, and most recently, has been studying animal models for brain disorders.

Desimone will deliver his prize lecture at the annual meeting of the Cognitive Neuroscience Society in March 2021.

RNA “ticker tape” records gene activity over time

As cells grow, divide, and respond to their environment, their gene expression changes; one gene may be transcribed into more RNA at one time point and less at another, when it’s no longer needed. Now, researchers at the McGovern Institute, Harvard, and the Broad Institute of MIT and Harvard have developed a way to determine when specific RNA molecules are produced in cells. The method, described today in Nature Biotechnology, allows scientists to more easily study how a cell’s gene expression fluctuates over time.

“Biology is very dynamic but most of the tools we use in biology are static; you get a fixed snapshot of what’s happening in a cell at a given moment,” said Fei Chen, a core institute member at the Broad, an assistant professor at Harvard University, and a co-senior author of the new work. “This will now allow us to record what’s happening over hours or days.”

To find out the level of RNA a cell is transcribing, researchers typically extract genetic material from the cell—destroying the cell in the process—and use RNA sequencing technology to determine which genes are being transcribed into RNA, and how much. Although researchers can sample cells at various times, they can’t easily measure gene expression at multiple time points.

To create a more precise timestamp, the team added strings of repetitive DNA bases to genes of interest in cultured human cells. These strings caused the cell to add repetitive regions of adenosine molecules (one of the four building blocks of RNA) to the ends of the RNA transcribed from these genes. The researchers also introduced an engineered version of an enzyme called adenosine deaminase acting on RNA (ADAR2cd), which slowly converted the adenosine molecules in the RNA to a related molecule, inosine, at a predictable rate. By measuring the ratio of inosines to adenosines in the timestamped section of any given RNA molecule, the researchers could infer when it was first produced, while keeping cells intact.
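In principle, this readout follows from simple first-order kinetics: if each adenosine in the timestamp is converted to inosine independently at a constant rate k, the edited fraction after t hours is f(t) = 1 − exp(−kt), which can be inverted to recover t from the measured I/A ratio. A minimal sketch of that inversion, assuming a hypothetical rate constant rather than the paper’s calibrated editing rate:

```python
import math

def rna_age_hours(n_inosine: int, n_adenosine: int, k_per_hour: float = 0.05) -> float:
    """Estimate the age of a timestamped RNA molecule from its inosine/adenosine counts.

    Assumes each adenosine in the timestamp converts to inosine independently
    at a constant rate k, so the edited fraction after t hours is
    f(t) = 1 - exp(-k * t), giving t = -ln(1 - f) / k.
    The default rate constant is a placeholder, not the calibrated value.
    """
    total = n_inosine + n_adenosine
    if total == 0:
        raise ValueError("timestamp contains no A/I positions")
    f = n_inosine / total  # observed edited fraction
    if f >= 1.0:
        raise ValueError("fully edited timestamp: age beyond dynamic range")
    return -math.log(1.0 - f) / k_per_hour
```

Under this model, counting more A/I positions tightens the estimate of the edited fraction, and hence of the molecule’s age.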

“It was pretty surprising to see how well this worked as a timestamp,” said Sam Rodriques, a co-first author of the new paper and former MIT graduate student who is now founding the Applied Biotechnology Laboratory at the Crick Institute in London. “And the more molecules you look at, the better your temporal resolution.”

Using their method, the researchers could estimate the age of a single timestamped RNA molecule to within 2.7 hours. But when they looked simultaneously at four RNA molecules, they could estimate the age of the molecules to within 1.5 hours. Looking at 200 molecules at once allowed the scientists to correctly sort RNA molecules into groups based on their age, or order them along a timeline with 86 percent accuracy.
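That improvement is roughly what averaging independent noisy estimates predicts: the standard error shrinks as 1/√n, and 2.7 hours / √4 ≈ 1.35 hours, close to the reported 1.5 hours. A toy Monte Carlo check of this scaling (the 2.7-hour per-molecule error is taken from the figures above; the Gaussian noise model is an assumption made here for illustration):

```python
import random
import statistics

def pooled_age_error(n_molecules: int, per_molecule_sd: float = 2.7,
                     true_age: float = 10.0, trials: int = 20000) -> float:
    """Monte Carlo estimate of the error after averaging n noisy age estimates.

    Each molecule's age estimate is modeled as the true age plus Gaussian
    noise with the given standard deviation; averaging n molecules should
    shrink the error by roughly a factor of sqrt(n).
    """
    rng = random.Random(0)  # fixed seed for reproducibility
    errors = []
    for _ in range(trials):
        estimate = statistics.fmean(
            rng.gauss(true_age, per_molecule_sd) for _ in range(n_molecules)
        )
        errors.append(estimate - true_age)
    return statistics.pstdev(errors)
```

With these assumptions, pooling four molecules brings the ~2.7-hour single-molecule error down to about 1.35 hours, consistent with the √n intuition.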

“Extremely interesting biology, such as immune responses and development, occurs over a timescale of hours,” said co-first author of the paper Linlin Chen of the Broad. “Now we have the opportunity to better probe what’s happening on this timescale.”

The researchers found that the approach, with some small tweaks, worked well on various cell types — neurons, fibroblasts, and embryonic kidney cells. They now plan to use the method to study how levels of gene activity related to learning and memory change in the hours after a neuron fires.

The current system allows researchers to record changes in gene expression over half a day. The team is now expanding the time range over which they can record gene activity, making the method more precise, and adding the ability to track several different genes at a time.

“Gene expression is constantly changing in response to the environment,” said co-senior author Edward Boyden of MIT, the McGovern Institute for Brain Research, and the Howard Hughes Medical Institute. “Tools like this will help us eavesdrop on how cells evolve over time, and help us pinpoint new targets for treating diseases.”

Support for the research was provided by the National Institutes of Health, the Schmidt Fellows Program at Broad Institute, the Burroughs Wellcome Fund, John Doerr, the Open Philanthropy Project, the HHMI-Simons Faculty Scholars Program, the U. S. Army Research Laboratory and the U. S. Army Research Office, the MIT Media Lab, Lisa Yang, the Hertz Graduate Fellowship and the National Science Foundation Graduate Research Fellowship Program.

Researchers ID crucial brain pathway involved in object recognition

MIT researchers have identified a brain pathway critical in enabling primates to effortlessly identify objects in their field of vision. The findings enrich existing models of the neural circuitry involved in visual perception and help to further unravel the computational code for solving object recognition in the primate brain.

Led by Kohitij Kar, a postdoctoral associate at the McGovern Institute for Brain Research and the Department of Brain and Cognitive Sciences, the study looked at an area called the ventrolateral prefrontal cortex (vlPFC), which sends feedback signals to the inferior temporal (IT) cortex via a network of neurons. The main goal of the study was to test whether the back-and-forth information processing of this circuitry (that is, this recurrent neural network) is essential to rapid object identification in primates.

The current study, published in Neuron and available today via open access, is a follow-up to prior work published by Kar and James DiCarlo, Peter de Florez Professor of Neuroscience, the head of MIT’s Department of Brain and Cognitive Sciences, and an investigator in the McGovern Institute for Brain Research and the Center for Brains, Minds, and Machines.

Monkey versus machine

In 2019, Kar, DiCarlo, and colleagues identified that primates must use some recurrent circuits during rapid object recognition. Monkey subjects in that study were able to identify objects more accurately than engineered “feedforward” computational models, called deep convolutional neural networks, that lacked recurrent circuitry.

Interestingly, the specific images for which the models performed poorly compared to monkeys in object identification also took longer to be solved in the monkeys’ brains, suggesting that the additional time might be due to recurrent processing in the brain. Based on the 2019 study, however, it remained unclear exactly which recurrent circuits were responsible for the delayed information boost in the IT cortex. That’s where the current study picks up.

“In this new study, we wanted to find out: Where are these recurrent signals in IT coming from?” Kar said. “Which areas, reciprocally connected to IT, are functionally the most critical part of this recurrent circuit?”

To determine this, researchers used a pharmacological agent to temporarily block the activity in parts of the vlPFC in macaques while they engaged in an object discrimination task. During these tasks, monkeys viewed images that contained an object, such as an apple, a car, or a dog; then, researchers used eye tracking to determine if the monkeys could correctly indicate what object they had previously viewed when given two object choices.

“We observed that if you use pharmacological agents to partially inactivate the vlPFC, then both the monkeys’ behavior and IT cortex activity deteriorate, but more so for certain specific images. These images were the same ones we identified in the previous study — ones that were poorly solved by ‘feedforward’ models and took longer to be solved in the monkey’s IT cortex,” said Kar.

MIT researchers used an object recognition task (e.g., recognizing that there is a “bird” and not an “elephant” in the shown image) to study the role of feedback from the primate ventrolateral prefrontal cortex (vlPFC) to the inferior temporal (IT) cortex via a network of neurons. In primate brains, temporarily blocking the vlPFC (green shaded area) disrupts the recurrent neural network comprising vlPFC and IT, inducing specific deficits and implicating its role in rapid object identification. Image: Kohitij Kar, brain image adapted from SciDraw

“These results provide evidence that this recurrently connected network is critical for rapid object recognition, the behavior we’re studying. Now, we have a better understanding of how the full circuit is laid out, and what are the key underlying neural components of this behavior.”

The full study, entitled “Fast recurrent processing via ventrolateral prefrontal cortex is needed by the primate ventral stream for robust core visual object recognition,” will run in print January 6, 2021.

“This study demonstrates the importance of prefrontal cortical circuits in automatically boosting object recognition performance in a very particular way,” DiCarlo said. “These results were obtained in nonhuman primates and thus are highly likely to also be relevant to human vision.”

The present study makes clear the integral role of the recurrent connections between the vlPFC and the primate ventral visual cortex during rapid object recognition. The results will be helpful to researchers designing future studies that aim to develop accurate models of the brain, and to researchers who seek to develop more human-like artificial intelligence.