Artificial networks learn to smell like the brain

Using machine learning, a computer model can teach itself to smell in just a few minutes. When it does, researchers have found, it builds a neural network that closely mimics the olfactory circuits that animal brains use to process odors.

Animals from fruit flies to humans all use essentially the same strategy to process olfactory information in the brain. But neuroscientists who trained an artificial neural network to take on a simple odor classification task were surprised to see it replicate biology’s strategy so faithfully.

“The algorithm we use has no resemblance to the actual process of evolution,” says Guangyu Robert Yang, an associate investigator at MIT’s McGovern Institute, who led the work as a postdoctoral fellow at Columbia University. The similarities between the artificial and biological systems suggest that the brain’s olfactory network is optimally suited to its task.

Yang and his collaborators, who reported their findings October 6, 2021, in the journal Neuron, say their artificial network will help researchers learn more about the brain’s olfactory circuits. The work also helps demonstrate artificial neural networks’ relevance to neuroscience. “By showing that we can match the architecture [of the biological system] very precisely, I think that gives more confidence that these neural networks can continue to be useful tools for modeling the brain,” says Yang, who is also an assistant professor in MIT’s Departments of Brain and Cognitive Sciences and Electrical Engineering and Computer Science and a member of the Center for Brains, Minds and Machines.

Mapping natural olfactory circuits

For fruit flies, the organism in which the brain’s olfactory circuitry has been best mapped, smell begins in the antennae. Sensory neurons there, each equipped with odor receptors specialized to detect specific scents, transform the binding of odor molecules into electrical activity. When an odor is detected, these neurons, which make up the first layer of the olfactory network, signal to the second layer: a set of neurons that reside in a part of the brain called the antennal lobe. In the antennal lobe, sensory neurons that share the same receptor converge onto the same second-layer neuron. “They’re very choosy,” Yang says. “They don’t receive any input from neurons expressing other receptors.” Because it has fewer neurons than the first layer, this part of the network is considered a compression layer. These second-layer neurons, in turn, signal to a larger set of neurons in the third layer. Puzzlingly, those connections appear to be random.

For Yang, a computational neuroscientist, and Columbia University graduate student Peter Yiliu Wang, this knowledge of the fly’s olfactory system represented a unique opportunity. Few parts of the brain have been mapped as comprehensively, and that has made it difficult to evaluate how well certain computational models represent the true architecture of neural circuits, they say.

Building an artificial smell network

Neural networks, in which artificial neurons rewire themselves to perform specific tasks, are computational tools inspired by the brain. They can be trained to pick out patterns within complex datasets, making them valuable for speech and image recognition and other forms of artificial intelligence. There are hints that the neural networks that do this best replicate the activity of the nervous system. But, says Wang, who is now a postdoctoral researcher at Stanford University, differently structured networks could generate similar results, and neuroscientists still need to know whether artificial neural networks reflect the actual structure of biological circuits. With comprehensive anatomical data about fruit fly olfactory circuits, he says: “We’re able to ask this question: Can artificial neural networks truly be used to study the brain?”

Collaborating closely with Columbia neuroscientists Richard Axel and Larry Abbott, Yang and Wang constructed a network of artificial neurons comprising an input layer, a compression layer, and an expansion layer—just like the fruit fly olfactory system. They gave it the same number of neurons as the fruit fly system, but no inherent structure: connections between neurons would be rewired as the model learned to classify odors.
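The setup described here can be pictured as a small three-layer feedforward network. The sketch below is purely illustrative: the layer sizes, random initialization, and forward pass are assumptions for demonstration, not the counts or code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative layer sizes for a minimal sketch of the three-layer
# circuit (input -> compression -> expansion). These counts are
# assumptions for demonstration, not the exact numbers from the paper.
N_INPUT = 500    # first-layer sensory neurons
N_COMP = 50      # compression layer (antennal-lobe analog)
N_EXP = 2500     # expansion layer
N_CLASS = 100    # odor categories to be learned

# Connections start unstructured; training would rewire these weights.
W_comp = rng.normal(size=(N_INPUT, N_COMP)) / np.sqrt(N_INPUT)
W_exp = rng.normal(size=(N_COMP, N_EXP)) / np.sqrt(N_COMP)
W_out = rng.normal(size=(N_EXP, N_CLASS)) / np.sqrt(N_EXP)

def forward(odor):
    """One forward pass: sensory input -> compression -> expansion -> scores."""
    comp = np.maximum(0.0, odor @ W_comp)   # compression layer (ReLU)
    exp = np.maximum(0.0, comp @ W_exp)     # expansion layer (ReLU)
    return exp @ W_out                      # class scores

odor = rng.random(N_INPUT)  # activity pattern evoked by one odor
scores = forward(odor)
print(scores.shape)         # (100,)
```

Training such a network on odor classification would adjust all three weight matrices; the question the researchers asked is what structure those matrices settle into.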

The scientists asked the network to assign data representing different odors to categories, and to correctly categorize not just single odors, but also mixtures of odors. This is something that the brain’s olfactory system is uniquely good at, Yang says. If you combine the scents of two different apples, he explains, the brain still smells apple. In contrast, if two photographs of cats are blended pixel by pixel, the brain no longer sees a cat. This ability is just one feature of the brain’s odor-processing circuits, but captures the essence of the system, Yang says.

It took the artificial network only minutes to organize itself. The structure that emerged was stunningly similar to that found in the fruit fly brain. Each neuron in the compression layer received inputs from a particular type of input neuron and connected, seemingly randomly, to multiple neurons in the expansion layer. What’s more, each neuron in the expansion layer received connections, on average, from six compression-layer neurons—exactly as occurs in the fruit fly brain.

“It could have been one, it could have been 50. It could have been anywhere in between,” Yang says. “Biology finds six, and our network finds about six as well.” Evolution found this organization through random mutation and natural selection; the artificial network found it through standard machine learning algorithms.
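One way to check a connectivity statistic like this in a trained model is to threshold the learned weight matrix and count how many inputs each expansion neuron retains. The matrix and cutoff below are stand-ins chosen so the sketch runs, not the paper’s trained weights or analysis code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a learned compression-to-expansion weight matrix
# (50 compression neurons x 2500 expansion neurons). Values are random
# here purely to demonstrate the bookkeeping, not trained weights.
W = rng.random((50, 2500))

# Treat weights below a cutoff as absent connections, then count how
# many compression-layer inputs each expansion neuron retains. The
# cutoff is arbitrary, chosen so each neuron keeps a handful of inputs.
threshold = 0.88
connected = W > threshold            # boolean connectivity matrix
in_degree = connected.sum(axis=0)    # surviving inputs per expansion neuron

print(in_degree.mean())
```

In the actual study, it is this kind of in-degree distribution over the trained network that landed near six inputs per expansion neuron.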

The surprising convergence provides strong support that the brain circuits that interpret olfactory information are optimally organized for their task, he says. Now, researchers can use the model to probe that structure further, examining how the network evolves under different conditions and manipulating the circuitry in ways that cannot be done experimentally.

Dealing with uncertainty

As we interact with the world, we are constantly presented with information that is unreliable or incomplete – from jumbled voices in a crowded room to solicitous strangers with unknown motivations. Fortunately, our brains are well equipped to evaluate the quality of the evidence we use to make decisions, usually allowing us to act deliberately, without jumping to conclusions.

Now, neuroscientists at MIT’s McGovern Institute have homed in on key brain circuits that help guide decision-making under conditions of uncertainty. By studying how mice interpret ambiguous sensory cues, they’ve found neurons that stop the brain from using unreliable information.

“One area cares about the content of the message—that’s the prefrontal cortex—and the thalamus seems to care about how certain the input is.” – Michael Halassa

The findings, published October 6, 2021, in the journal Nature, could help researchers develop treatments for schizophrenia and related conditions, whose symptoms may be at least partly due to affected individuals’ inability to effectively gauge uncertainty.

Decoding ambiguity

“A lot of cognition is really about handling different types of uncertainty,” says McGovern Associate Investigator Michael Halassa, explaining that we all must use ambiguous information to make inferences about what’s happening in the world. Part of dealing with this ambiguity involves recognizing how confident we can be in our conclusions. And when this process fails, it can dramatically skew our interpretation of the world around us.

“In my mind, schizophrenia spectrum disorders are really disorders of appropriately inferring the causes of events in the world and what other people think,” says Halassa, who is a practicing psychiatrist. Patients with these disorders often develop strong beliefs based on events or signals most people would dismiss as meaningless or irrelevant, he says. They may assume hidden messages are embedded in a garbled audio recording, or worry that laughing strangers are plotting against them. Such things are not impossible—but delusions arise when patients fail to recognize that they are highly unlikely.

Halassa and postdoctoral researcher Arghya Mukherjee wanted to know how healthy brains handle uncertainty, and recent research from other labs provided some clues. Functional brain imaging had shown that when people are asked to study a scene but they aren’t sure what to pay attention to, a part of the brain called the mediodorsal thalamus becomes active. The less guidance people are given for this task, the harder the mediodorsal thalamus works.

The thalamus is a sort of crossroads within the brain, made up of cells that connect distant brain regions to one another. Its mediodorsal region sends signals to the prefrontal cortex, where sensory information is integrated with our goals, desires, and knowledge to guide behavior. Previous work in the Halassa lab showed that the mediodorsal thalamus helps the prefrontal cortex tune in to the right signals during decision-making, adjusting signaling as needed when circumstances change. Intriguingly, this brain region has been found to be less active in people with schizophrenia than it is in others.

group photo of study authors
Study authors (from left to right) Michael Halassa, Arghya Mukherjee, Norman Lam and Ralf Wimmer.

Working with postdoctoral researcher Norman Lam and research scientist Ralf Wimmer, Halassa and Mukherjee designed a set of animal experiments to examine the mediodorsal thalamus’s role in handling uncertainty. Mice were trained to respond to sensory signals according to audio cues that told them whether to focus on light or sound. When the animals were given conflicting cues, it was up to the animal to figure out which cue was represented most prominently and act accordingly. The experimenters varied the uncertainty of this task by manipulating the number and ratio of the cues.

Division of labor

By manipulating and recording activity in the animals’ brains, the researchers found that the prefrontal cortex got involved every time mice completed this task, but the mediodorsal thalamus was only needed when the animals were given signals that left them uncertain how to behave. There was a simple division of labor within the brain, Halassa says. “One area cares about the content of the message—that’s the prefrontal cortex—and the thalamus seems to care about how certain the input is.”

Within the mediodorsal thalamus, Halassa and Mukherjee found a subset of cells that were especially active when the animals were presented with conflicting sound cues. These neurons, which connect directly to the prefrontal cortex, are inhibitory neurons, capable of dampening downstream signaling. So when they fire, Halassa says, they effectively stop the brain from acting on unreliable information. Cells of a different type were focused on the uncertainty that arises when signaling is sparse. “There’s a dedicated circuitry to integrate evidence across time to extract meaning out of this kind of assessment,” Mukherjee explains.
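The division of labor described here can be caricatured in a few lines: one signal carries the content of the cues, while a separate uncertainty signal can veto acting on them. This toy function is purely illustrative (the evidence values and threshold are invented), not a model from the paper.

```python
# Toy caricature of the division of labor described above: the response
# is chosen from cue content (the "prefrontal" job), while an
# uncertainty signal (the "thalamic" job) can veto acting on it.
# Evidence values and the conflict threshold are invented.
def respond(light_evidence, sound_evidence, conflict_threshold=0.2):
    """Return the favored modality, or None when the cues are too conflicting."""
    content = "light" if light_evidence > sound_evidence else "sound"
    uncertainty = 1.0 - abs(light_evidence - sound_evidence)
    if uncertainty > 1.0 - conflict_threshold:
        return None  # inhibitory veto: withhold the response
    return content

print(respond(0.9, 0.1))    # clear cue: "light"
print(respond(0.52, 0.48))  # conflicting cues: None (withhold)
```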

As Halassa and Mukherjee investigate these circuits more deeply, a priority will be determining whether they are disrupted in people with schizophrenia. To that end, they are now exploring the circuitry in animal models of the disorder. The hope, Mukherjee says, is to eventually target dysfunctional circuits in patients, using noninvasive, focused drug delivery methods currently under development. “We have the genetic identity of these circuits. We know they express specific types of receptors, so we can find drugs that target these receptors,” he says. “Then you can specifically release these drugs in the mediodorsal thalamus to modulate the circuits as a potential therapeutic strategy.”

This work was funded by grants from the National Institute of Mental Health (R01MH107680-05 and R01MH120118-02).

New bionics center established at MIT with $24 million gift

A deepening understanding of the brain has created unprecedented opportunities to alleviate the challenges posed by disability. Scientists and engineers are taking design cues from biology itself to create revolutionary technologies that restore the function of bodies affected by injury, aging, or disease – from prosthetic limbs that effortlessly navigate tricky terrain to digital nervous systems that move the body after a spinal cord injury.

With the establishment of the new K. Lisa Yang Center for Bionics, MIT is pushing forward the development and deployment of enabling technologies that communicate directly with the nervous system to mitigate a broad range of disabilities. The center’s scientists, clinicians, and engineers will work together to create, test, and disseminate bionic technologies that integrate with both the body and mind.

The center is funded by a $24 million gift to MIT’s McGovern Institute for Brain Research from philanthropist Lisa Yang, a former investment banker committed to advocacy for individuals with visible and invisible disabilities.

Portrait of philanthropist Lisa Yang.
Philanthropist Lisa Yang is committed to advocacy for individuals with visible and invisible disabilities. Photo: Caitlin Cunningham

Her previous gifts to MIT have also enabled the establishment of the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics in Neuroscience, Hock E. Tan and K. Lisa Yang Center for Autism Research, Y. Eva Tan Professorship in Neurotechnology, and the endowed K. Lisa Yang Post-Baccalaureate Program.

“The K. Lisa Yang Center for Bionics will provide a dynamic hub for scientists, engineers and designers across MIT to work together on revolutionary answers to the challenges of disability,” says MIT President L. Rafael Reif. “With this visionary gift, Lisa Yang is unleashing a powerful collaborative strategy that will have broad impact across a large spectrum of human conditions – and she is sending a bright signal to the world that the lives of individuals who experience disability matter deeply.”

An interdisciplinary approach

To develop prosthetic limbs that move as the brain commands or optical devices that bypass an injured spinal cord to stimulate muscles, bionic developers must integrate knowledge from a diverse array of fields—from robotics and artificial intelligence to surgery, biomechanics, and design. The K. Lisa Yang Center for Bionics will be deeply interdisciplinary, uniting experts from three MIT schools: Science, Engineering, and Architecture and Planning. With clinical and surgical collaborators at Harvard Medical School, the center will ensure that research advances are tested rapidly and reach people in need, including those in traditionally underserved communities.

To support ongoing efforts to move toward a future without disability, the center will also provide four endowed fellowships for MIT graduate students working in bionics or other research areas focused on improving the lives of individuals who experience disability.

“I am thrilled to support MIT on this major research effort to enable powerful new solutions that improve the quality of life for individuals who experience disability,” says Yang. “This new commitment extends my philanthropic investment into the realm of physical disabilities, and I look forward to the center’s positive impact on countless lives, here in the US and abroad.”

The center will be led by Hugh Herr, a professor of media arts and sciences at MIT’s Media Lab, and Ed Boyden, the Y. Eva Tan Professor of Neurotechnology at MIT, a professor of biological engineering, brain and cognitive sciences, and media arts and sciences, and an investigator at MIT’s McGovern Institute and the Howard Hughes Medical Institute.

A double amputee himself, Herr is a pioneer in the development of bionic limbs to improve mobility for those with physical disabilities. “The world profoundly needs relief from the disabilities imposed by today’s nonexistent or broken technologies. We must continually strive towards a technological future in which disability is no longer a common life experience,” says Herr. “I am thrilled that the Yang Center for Bionics will help to measurably improve the human experience for so many.”

Boyden, who is a renowned creator of tools to analyze and control the brain, will play a key role in merging bionics technologies with the nervous system. “The Yang Center for Bionics will be a research center unlike any other in the world,” he says. “A deep understanding of complex biological systems, coupled with rapid advances in human-machine bionic interfaces, means we will soon have the capability to offer entirely new strategies for individuals who experience disability. It is an honor to be part of the center’s founding team.”

Center priorities

In its first four years, the K. Lisa Yang Center for Bionics will focus on developing and testing three bionic technologies:

  • Digital nervous system: to eliminate movement disorders caused by spinal cord injuries, using computer-controlled muscle activations to control limb movements while simultaneously stimulating spinal cord repair
  • Brain-controlled limb exoskeletons: to assist weak muscles and enable natural movement for people affected by stroke or musculoskeletal disorders
  • Bionic limb reconstruction: to restore natural, brain-controlled movements as well as the sensation of touch and proprioception (awareness of position and movement) from bionic limbs

A fourth priority will be developing a mobile delivery system to ensure patients in medically underserved communities have access to prosthetic limb services. Investigators will field test a system that uses a mobile clinic to conduct the medical imaging needed to design personalized, comfortable prosthetic limbs and to fit the prostheses to patients where they live. Investigators plan to initially bring this mobile delivery system to Sierra Leone, where thousands of people suffered amputations during the country’s 11-year civil war. While the population of persons with amputation continues to increase each year in Sierra Leone, today fewer than 10% of those in need benefit from functional prostheses. Through the mobile delivery system, a key center objective is to scale up production of, and access to, functional limb prostheses for Sierra Leoneans in dire need.

Portrait of Lisa Yang, Hugh Herr, Julius Maada Bio, and David Moinina Sengeh (from left to right).
Philanthropist Lisa Yang (far left) and MIT bionics researcher Hugh Herr (second from left) met with Sierra Leone’s President Julius Maada Bio (second from right) and Chief Innovation Officer for the Directorate of Science, Technology and Innovation, David Moinina Sengeh, to discuss the mobile clinic component of the new K. Lisa Yang Center for Bionics at MIT. Photo: David Moinina Sengeh

“The mobile prosthetics service fueled by the K. Lisa Yang Center for Bionics at MIT is an innovative solution to a global problem,” said Julius Maada Bio, President of Sierra Leone. “I am proud that Sierra Leone will be the first site for deploying this state-of-the-art digital design and fabrication process. As leader of a government that promotes innovative technologies and prioritizes human capital development, I am overjoyed that this pilot project will give Sierra Leoneans (especially in rural areas) access to quality limb prostheses and thus improve their quality of life.”

Together, Herr and Boyden will launch research at the bionics center with three other MIT faculty: Assistant Professor of Media Arts and Sciences Canan Dagdeviren, Walter A. Rosenblith Professor of Cognitive Neuroscience Nancy Kanwisher, and David H. Koch (1962) Institute Professor Robert Langer. They will work closely with three clinical collaborators at Harvard Medical School: orthopedic surgeon Marco Ferrone, plastic surgeon Matthew Carty, and Nancy Oriol, Faculty Associate Dean for Community Engagement in Medical Education.

“Lisa Yang and I share a vision for a future in which each and every person in the world has the right to live without a debilitating disability if they so choose,” adds Herr. “The Yang Center will be a potent catalyst for true innovation and impact in the bionics space, and I am overjoyed to work with my colleagues at MIT, and our accomplished clinical partners at Harvard, to make important steps forward to help realize this vision.”

Tracking time in the brain

By studying how primates mentally measure time, scientists at MIT’s McGovern Institute have discovered that the brain runs an internal clock whose speed is set by prior experience. In new experiences, the brain closely tracks how elapsed time intervals differ from its preset expectation—indicating that for the brain, time is relative.

The findings, reported September 15, 2021, in the journal Neuron, help explain how the brain uses past experience to make predictions—a powerful strategy for navigating a complex and ever-changing world. The research was led by McGovern Investigator Mehrdad Jazayeri, who is working to understand how the brain forms internal models of the world.

Internal clock

Sensory information tells us a lot about our environment, but the brain needs more than data, Jazayeri says. Internal models are vital for understanding the relationships between things, making generalizations, and interpreting and acting on our perceptions. They help us focus on what’s most important and make predictions about our surroundings, as well as the consequences of our actions. “To be efficient in learning about the world and interacting with the world, we need those predictions,” Jazayeri says. When we enter a new grocery store, for example, we don’t have to check every aisle for the peanut butter, because we know it is likely to be near the jam. Likewise, an experienced racquetball player knows how the ball will move when her racquet hits it a certain way.

Jazayeri’s team was interested in how the brain might make predictions about time. Previously, his team showed how neurons in the frontal cortex—a part of the brain involved in planning—can tick off the passage of time like a metronome. By training monkeys to use an eye movement to indicate the duration of time that separated two flashes of light, they found that cells that track time during this task cooperate to form an adjustable internal clock. Those cells generate a pattern of activity that can be drawn out to measure long time intervals or compressed to track shorter ones. The changes in these signal dynamics reflect elapsed time so precisely that by monitoring the right neurons, Jazayeri’s team can determine exactly how fast a monkey’s internal clock is running.
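The adjustable clock can be pictured as a fixed activity trajectory that is replayed at different speeds. In this sketch (an invented illustration, not the lab’s model), resampling a template stretches or compresses the same pattern over intervals of different lengths.

```python
import numpy as np

# A fixed "activity template" for the timing cells; replaying it at
# different speeds models a faster or slower internal clock. The
# template shape and sizes here are invented for illustration.
template = np.sin(np.linspace(0.0, np.pi, 100))

def clock_pattern(duration_steps):
    """Resample the template so the same trajectory spans duration_steps."""
    t = np.linspace(0.0, 99.0, duration_steps)
    return np.interp(t, np.arange(100), template)

short = clock_pattern(50)    # compressed pattern: short interval
long_ = clock_pattern(200)   # stretched pattern: long interval
print(len(short), len(long_))
```

Reading out how far along the trajectory the activity has traveled, and how fast, is what lets an observer infer the clock’s current speed.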

Predictive processing

Nicolas Meirhaeghe, a graduate student in Mehrdad Jazayeri’s lab, studies how we plan and perform movements in the face of uncertainty. He is pictured here as part of the McGovern Institute 20th anniversary “Rising Stars” photo series. Photo: Michael Spencer

For the team’s most recent experiments, graduate student Nicolas Meirhaeghe designed a series of trials in which the delay between the two flashes of light changed as the monkeys repeated the task. Sometimes the flashes were separated by just a fraction of a second; sometimes the delay was a bit longer. He found that the time-keeping activity pattern in the frontal cortex occurred over different time scales as the monkeys came to expect delays of different durations. As the duration of the delay fluctuated, the brain appeared to take all prior experience into account, setting the clock to measure the average of those times in anticipation of the next interval.

The behavior of the neurons told the researchers that as a monkey waited for a new set of light cues, it already had an expectation about how long the delay would be. To make such a prediction, Meirhaeghe says, “the brain has no choice but to use all the different values that you perceive from your experience, average those out, and use this as the expectation.”

By analyzing neuronal behavior during their experiments, Jazayeri and Meirhaeghe determined that the brain’s signals were not encoding the full time elapsed between light cues, but instead how that time differed from the predicted time. Calculating this prediction error enabled the monkeys to report back how much time had elapsed.
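The arithmetic of this strategy is simple: the expectation is an average over previously experienced intervals, and only the deviation from it needs to be encoded. A toy example with invented interval values:

```python
# The expectation is an average over previously experienced intervals,
# and only the deviation from it needs to be encoded. Interval values
# here are invented for illustration (in seconds).
intervals = [0.6, 0.8, 0.7, 0.9, 0.75]

expectation = sum(intervals) / len(intervals)   # preset clock expectation
new_interval = 0.95
prediction_error = new_interval - expectation   # what gets encoded

# The full elapsed time is recoverable from the two pieces.
elapsed = expectation + prediction_error
print(round(expectation, 2), round(prediction_error, 2), round(elapsed, 2))
```

Encoding only the error is economical: when most intervals match the expectation, the signal that must be carried is small.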

Neuroscientists have suspected that this strategy, known as predictive processing, is widely used by the brain—although until now there has been little evidence of it outside early sensory areas. “You have a lot of stimuli that are coming from the environment, but lots of stimuli are actually predictable,” Meirhaeghe says. “The idea is that your brain is learning through experience patterns in the environment, and is subtracting your expectation from the incoming signal. What the brain actually processes in the end is the result of this subtraction.”

Finally, the researchers investigated the brain’s ability to update its expectations about time. After presenting monkeys with delays within a particular time range, they switched without warning to times that fluctuated within a new range. The brain responded quickly, updating its internal clock. “If you look inside the brain, after about 100 trials the monkeys have already figured out that these statistics have changed,” says Jazayeri.

It took longer, however—as many as 1,000 trials—for the monkeys to change their behavior in response to the change. “It seems like this prediction, and updating the internal model about the statistics of the world, is way faster than our muscles are able to implement,” Jazayeri says. “Our motor system is kind of lagging behind what our cognitive abilities tell us.” This makes sense, he says, because not every change in the environment merits a change in behavior. “You don’t want to be distracted by every small thing that deviates from your prediction. You want to pay attention to things that have a certain level of consistency.”

Single gene linked to repetitive behaviors, drug addiction

Making and breaking habits is a prime function of the striatum, a large forebrain region that underlies the cerebral cortex. McGovern researchers have identified a particular gene that controls striatal function as well as repetitive behaviors that are linked to drug addiction vulnerability.

To find genes involved specifically in striatal functions, MIT Institute Professor Ann Graybiel previously identified genes that are preferentially expressed in striatal neurons. One identified gene encodes CalDAG-GEFI (CDGI), a signaling molecule that effects changes inside of cells in response to extracellular signals that are received by receptors on the cell surface. In a paper to be published in the October issue of Neurobiology of Disease and now available online, Graybiel, along with former Research Scientist Jill Crittenden and collaborators James Surmeier and Shenyu Zhai at the Feinberg School of Medicine at Northwestern University, show that CDGI is key for controlling behavioral responses to drugs of abuse and underlying neuronal plasticity (cellular changes induced by experience) in the striatum.

“This paper represents years of intensive research, which paid off in the end by identifying a specific cellular signaling cascade for controlling repetitive behaviors and neuronal plasticity,” says Graybiel, who is also an investigator at the McGovern Institute and a professor of brain and cognitive sciences at MIT.

McGovern Investigator Ann Graybiel (right) with former Research Scientist Jill Crittenden. Photo: Justin Knight

Surprise discovery

To understand the essential roles of CDGI, Crittenden first engineered “knockout” mice that lack the gene encoding CDGI. Then the Graybiel team began looking for abnormalities in the CDGI knockout mice that could be tied to the loss of CDGI’s function.

Initially, they noticed that the rodent ear-tag IDs often fell off in the knockout mice, an observation that ultimately led to the surprise discovery by the Graybiel team and others that CDGI is expressed in blood platelets and is responsible for a bleeding disorder in humans, dogs, and other animals. The CDGI knockout mice were otherwise healthy and seemed just like their “wildtype” brothers and sisters, which did not carry the gene mutation. To figure out the role of CDGI in the brain, the Graybiel team would have to scrutinize the mice more closely.

Challenging the striatum

Both the CDGI knockout and wildtype mice were given an extensive set of behavioral and neurological tests, and the knockout mice showed deficits in two tests designed to challenge the striatum.

In one test, mice must find their way through a maze by relying on egocentric (i.e. self-referential) cues, such as turning right or left, rather than on competing allocentric (i.e. external) cues, such as heading toward a bright poster on the wall. Egocentric cues are thought to be processed by the striatum, whereas allocentric cues are thought to rely on the hippocampus.

In a second test of striatal function, mice learned various gait patterns to match different patterns of rungs on their running wheel, a task designed to test the mouse’s ability to learn and remember a motor sequence.

The CDGI mice learned both of these striatal tasks more slowly than their wildtype siblings, suggesting that the CDGI mice might perform normally in general tests of behavior because they are able to compensate for striatal deficits by using other brain regions such as the hippocampus to solve standard tasks.

The team then decided to give the mice a completely different type of test that relies on the striatum. Because the striatum is strongly activated by drugs of abuse, which elevate dopamine and drive motor habits, Crittenden and collaborator Morgane Thomsen (now at the University of Copenhagen) looked to see whether the CDGI knockout mice respond normally to amphetamine and cocaine.

Psychomotor stimulants like cocaine and amphetamine normally induce a mixture of hyperactive behaviors such as pacing and focused repetitive behaviors like skin-picking (also called stereotypy or punding in humans). The researchers found, however, that the drug-induced behaviors of the CDGI knockout mice were less varied than those of normal mice, consisting of abnormally prolonged stereotypy, as though the mice were unable to switch between behaviors. The researchers were able to map the abnormal behavior to CDGI function in the striatum by showing that the same vulnerability to drug-induced stereotypy was observed in mice engineered to delete CDGI in the striatum after birth (“conditional knockouts”) but to otherwise have normal CDGI throughout the body.

Controlling cravings

In addition to exhibiting prolonged, repetitive behaviors, the CDGI knockout mice were especially vulnerable to drug self-administration. Although previous research had shown that treatments that activate the M1 acetylcholine receptor can block cocaine self-administration, the team found that this therapy was ineffective in CDGI knockout mice. Knockouts continued to self-administer cocaine (suggesting increased craving for the drug) at the same rate before and after M1 receptor activation treatment, even though the treatment succeeded with their sibling control mice. The researchers concluded that CDGI is critically important for controlling repetitive behaviors and the ability to stop self-administration of addictive stimulants.

mouse brain images
Brain sections from control mice (left) and mice engineered for deletion of the CDGI gene after birth. The expression of CDGI in the striatum (arrows) grows stronger as mice grow from pups to adulthood in control mice, but is gradually lost in the CDGI engineered mice (“conditional knockouts”). Image courtesy of the researchers

To better understand how CDGI is linked to the M1 receptor at the cellular level, the team turned to slice physiologists, scientists who record the electrical activity of neurons in brain slices. Their recordings showed that striatal neurons from CDGI knockouts fail to undergo the normal, expected electrophysiological changes after receiving treatments that target the M1 receptor. In particular, the neurons of the striatum that function broadly to stop ongoing behaviors did not integrate cellular signals properly and failed to undergo “long-term potentiation,” a type of neuronal plasticity thought to underlie learning.

The new findings suggest that excessive repetitive movements are controlled by M1 receptor signaling through CDGI in indirect pathway neurons of the striatum, a neuronal subtype that degenerates in Huntington’s disease and is affected by dopamine loss and l-DOPA replacement therapy in Parkinson’s disease.

“The M1 acetylcholine receptor is a target for therapeutic drug development in treating cognitive and behavioral problems in multiple disorders, but progress has been severely hampered by off-target side-effects related to the widespread expression of the M1 receptor,” Graybiel explains. “Our findings suggest that CDGI offers the possibility for forebrain-specific targeting of M1 receptor signaling cascades that are of interest for blocking pathologically repetitive and unwanted behaviors that are common to numerous brain disorders including Huntington’s disease, drug addiction, autism, and schizophrenia as well as drug-induced dyskinesias. We hope that this work can help therapeutic development for these major health problems.”

This work was funded by the James W. (1963) and Patricia T. Poitras Fund, the William N. & Bernice E. Bumpus Foundation, the Saks Kavanaugh Foundation, the Simons Foundation, and the National Institutes of Health.

New programmable gene editing proteins found outside of CRISPR systems

Within the last decade, scientists have adapted CRISPR systems from microbes into gene editing technology, a precise and programmable system for modifying DNA. Now, scientists at MIT’s McGovern Institute and the Broad Institute of MIT and Harvard have discovered a new class of programmable DNA modifying systems called OMEGAs (Obligate Mobile Element Guided Activity), which may naturally be involved in shuffling small bits of DNA throughout bacterial genomes.

These ancient DNA-cutting enzymes are guided to their targets by small pieces of RNA. While they originated in bacteria, they have now been engineered to work in human cells, suggesting they could be useful in the development of gene editing therapies, particularly as they are small (~30% the size of Cas9), making them easier to deliver to cells than bulkier enzymes. The discovery, reported September 9, 2021, in the journal Science, provides evidence that natural RNA-guided enzymes are among the most abundant proteins on Earth, pointing toward a vast new area of biology that is poised to drive the next revolution in genome editing technology.

The research was led by McGovern Investigator Feng Zhang, who is the James and Patricia Poitras Professor of Neuroscience at MIT, a Howard Hughes Medical Institute investigator, and a Core Institute Member of the Broad Institute. Zhang’s team has been exploring natural diversity in search of new molecular systems that can be rationally programmed.

“We are super excited about the discovery of these widespread programmable enzymes, which have been hiding under our noses all along,” says Zhang. “These results suggest the tantalizing possibility that there are many more programmable systems that await discovery and development as useful technologies.”

Natural adaptation

Programmable enzymes, particularly those that use an RNA guide, can be rapidly adapted for different uses. For example, CRISPR enzymes naturally use an RNA guide to target viral invaders, but biologists can direct Cas9 to any target by generating their own RNA guide. “It’s so easy to just change a guide sequence and set a new target,” says graduate student and co-first author of the paper, Soumya Kannan. “So one of the broad questions that we’re interested in is trying to see if other natural systems use that same kind of mechanism.”

Zhang lab graduate student Han Altae-Tran, co-author of the Science paper with Soumya Kannan. Photo: Zhang lab

The first hints that OMEGA proteins might be directed by RNA came from the genes for proteins called IscBs. The IscBs are not involved in CRISPR immunity and were not known to associate with RNA, but they looked like small, DNA-cutting enzymes. The team discovered that each IscB gene had a small RNA encoded nearby that directed IscB enzymes to cut specific DNA sequences. They named these RNAs “ωRNAs.”

The team’s experiments showed that two other classes of small proteins, known as IsrBs and TnpBs (the latter among the most abundant genes in bacteria), also use ωRNAs that act as guides to direct the cleavage of DNA.

IscB, IsrB, and TnpB are found in mobile genetic elements called transposons. Graduate student Han Altae-Tran, co-first author on the paper, explains that each time these transposons move, they create a new guide RNA, allowing the enzyme they encode to cut somewhere else.

It’s not clear how bacteria benefit from this genomic shuffling—or whether they do at all. Transposons are often thought of as selfish bits of DNA, concerned only with their own mobility and preservation, Kannan says. But if hosts can “co-opt” these systems and repurpose them, hosts may gain new abilities, as with CRISPR systems, which confer adaptive immunity.

IscBs and TnpBs appear to be predecessors of Cas9 and Cas12 CRISPR systems. The team suspects they, along with IsrB, likely gave rise to other RNA-guided enzymes, too—and they are eager to find them. They are curious about the range of functions that might be carried out in nature by RNA-guided enzymes, Kannan says, and suspect evolution likely already took advantage of OMEGA enzymes like IscBs and TnpBs to solve problems that biologists are keen to tackle.

Comparison of Ω (OMEGA) systems with other known RNA-guided systems. In contrast to CRISPR systems, which capture spacer sequences and store them in the locus within the CRISPR array, Ω systems may transpose their loci (or trans-acting loci) into target sequences, converting targets into ωRNA guides. Image courtesy of the researchers.

“A lot of the things that we have been thinking about may already exist naturally in some capacity,” says Altae-Tran. “Natural versions of these types of systems might be a good starting point to adapt for that particular task.”

The team is also interested in tracing the evolution of RNA-guided systems further into the past. “Finding all these new systems sheds light on how RNA-guided systems have evolved, but we don’t know where RNA-guided activity itself comes from,” Altae-Tran says. Understanding those origins, he says, could pave the way to developing even more classes of programmable tools.

This work was made possible with support from the Simons Center for the Social Brain at MIT; National Institutes of Health Intramural Research Program; National Institutes of Health grants 1R01-HG009761 and 1DP1-HL141201; Howard Hughes Medical Institute; Open Philanthropy; G. Harold and Leila Y. Mathers Charitable Foundation; Edward Mallinckrodt, Jr. Foundation; Poitras Center for Psychiatric Disorders Research at MIT; Hock E. Tan and K. Lisa Yang Center for Autism Research at MIT; Yang-Tan Center for Molecular Therapeutics at MIT; Lisa Yang; Phillips family; R. Metcalfe; and J. and P. Poitras.

RNA-targeting enzyme expands the CRISPR toolkit

Researchers at MIT’s McGovern Institute have discovered a bacterial enzyme that they say could expand scientists’ CRISPR toolkit, making it easy to cut and edit RNA with the kind of precision that, until now, has only been available for DNA editing. The enzyme, called Cas7-11, modifies RNA targets without harming cells, suggesting that in addition to being a valuable research tool, it provides a fertile platform for therapeutic applications.

“This new enzyme is like the Cas9 of RNA,” says McGovern Fellow Omar Abudayyeh, referring to the DNA-cutting CRISPR enzyme that has revolutionized modern biology by making DNA editing fast, inexpensive, and exact. “It creates two precise cuts and doesn’t destroy the cell in the process like other enzymes,” he adds.

Until now, only one other family of RNA-targeting enzymes, Cas13, has been extensively developed for RNA-targeting applications. However, when Cas13 recognizes its target, it shreds any RNAs in the cell, destroying the cell along the way. Like Cas9, Cas7-11 is part of a programmable system; it can be directed at specific RNA targets using a CRISPR guide. Abudayyeh, McGovern Fellow Jonathan Gootenberg, and their colleagues discovered Cas7-11 through a deep exploration of the CRISPR systems found in the microbial world. Their findings are reported today in the journal Nature.

Exploring natural diversity

DNA tools in the CRISPR toolkit (red) are approaching capacity, but researchers are now beginning to find new tools to edit RNA (blue). Image: Steven Dixon

Like other CRISPR proteins, Cas7-11 is used by bacteria as a defense mechanism against viruses. After encountering a new virus, bacteria that employ the CRISPR system keep a record of the infection in the form of a small snippet of the pathogen’s genetic material. Should that virus reappear, the CRISPR system is activated, guided by a small piece of RNA to destroy the viral genome and eliminate the infection.

These ancient immune systems are widespread and diverse, with different bacteria deploying different proteins to counter their viral invaders.

“Some target DNA, some target RNA. Some are very efficient in cleaving the target but have some toxicity, and others do not. They introduce different types of cuts, they can differ in specificity—and so on,” says Eugene Koonin, an evolutionary biologist at the National Center for Biotechnology Information.

Abudayyeh, Gootenberg, and Koonin have been scouring genome sequences to learn about the natural diversity of CRISPR systems—and to mine them for potential tools. The idea, Abudayyeh says, is to take advantage of the work that evolution has already done in engineering protein machines.

“We don’t know what we’ll find,” Abudayyeh says, “but let’s just explore and see what’s out there.”

As the team pored over public databases to examine the components of different bacterial defense systems, a protein from a bacterium isolated from Tokyo Bay caught their attention. Its amino acid sequence indicated that it belonged to a class of CRISPR systems that use large, multiprotein machines to find and cleave their targets. But this protein appeared to have everything it needed to carry out the job on its own. Other known single-protein Cas enzymes, including the Cas9 protein that has been widely adopted for DNA editing, belong to a separate class of CRISPR systems—but Cas7-11 blurs the boundaries of the CRISPR classification system, Koonin says.

The enzyme, which the team eventually named Cas7-11, was attractive from an engineering perspective, because single proteins are easier to deliver to cells and make better tools than their complex counterparts. But its composition also signaled an unexpected evolutionary history. The team found evidence that through evolution, the components of a more complex Cas machine had fused together to make the Cas7-11 protein. Gootenberg equates this to discovering a bat when you had previously assumed that birds are the only animals that fly, thereby recognizing that there are multiple evolutionary paths to flight. “It totally changes the landscape of how these systems are thought about, both functionally and evolutionarily,” he says.

Precision editing

McGovern Fellows Jonathan Gootenberg and Omar Abudayyeh in their lab. Photo: Caitlin Cunningham

When Gootenberg and Abudayyeh produced the Cas7-11 protein in their lab and began experimenting with it, they realized this unusual enzyme offered a powerful means to manipulate and study RNA. When they introduced it into cells along with an RNA guide, it made remarkably precise cuts, snipping its targets while leaving other RNA undisturbed. This meant they could use Cas7-11 to change specific letters in the RNA code, correcting errors introduced by genetic mutations. They were also able to program Cas7-11 to either stabilize or destroy particular RNA molecules inside cells, which gave them the ability to adjust the levels of the proteins encoded by those RNAs.

Abudayyeh and Gootenberg also found that Cas7-11’s ability to cut RNA could be dampened by a protein that appeared likely to also be involved in triggering programmed cell death, suggesting a possible link between CRISPR defense and a more extreme response to infection.

The team showed that a gene therapy vector can deliver the complete Cas7-11 editing system to cells and that Cas7-11 does not compromise cells’ health. They hope that with further development, the enzyme might one day be used to edit disease-causing sequences out of a patient’s RNA so their cells can produce healthy proteins, or to dial down the level of a protein that is doing harm due to genetic disease.

“We think that the unique way that Cas7-11 cuts enables many interesting and diverse applications,” Gootenberg says, noting that no other CRISPR tool cuts RNA so precisely. “It’s yet another great example of how these basic-biology driven explorations can yield new tools for therapeutics and diagnostics,” he adds. “And we’re certainly still just scratching the surface of what’s out there in natural diversity.”

Mapping the cellular circuits behind spitting

For over a decade, researchers have known that the roundworm Caenorhabditis elegans can detect and avoid short-wavelength light, despite lacking eyes and the light-absorbing molecules required for sight. As a graduate student in the Horvitz lab, Nikhil Bhatla proposed an explanation for this ability. He observed that light exposure not only made the worms wriggle away, but also prompted them to stop eating. This clue led him to a series of studies that suggested that his squirming subjects weren’t seeing the light at all — they were detecting the noxious chemicals it produced, such as hydrogen peroxide. Soon after, the Horvitz lab realized that worms not only taste the nasty chemicals light generates, but also spit them out.

Now, in a study recently published in eLife, a team led by former graduate student Steve Sando reports the mechanism that underlies spitting in C. elegans. Individual muscle cells are generally regarded as the smallest units that neurons can independently control, but the researchers’ findings question this assumption. In the case of spitting, they determined that neurons can direct specialized subregions of a single muscle cell to generate multiple motions — expanding our understanding of how neurons control muscle cells to shape behavior.

“Steve made the remarkable discovery that the contraction of a small region of a particular muscle cell can be uncoupled from the contraction of the rest of the same cell,” says H. Robert Horvitz, the David H. Koch Professor of Biology at MIT, a member of the McGovern Institute for Brain Research and the Koch Institute for Integrative Cancer Research, Howard Hughes Medical Institute Investigator, and senior author of the study. “Furthermore, Steve found that such subcellular muscle compartments can be controlled by neurons to dramatically alter behavior.”

Roundworms are like vacuum cleaners that wiggle around hoovering up bacteria. The worm’s mouth, also known as the pharynx, is a muscular tube that traps the food, chews it, and then transfers it to the intestines through a series of “pumping” contractions.

Researchers have known for over a decade that worms flee from UV, violet, or blue light. But Bhatla discovered that this light also interrupts the constant pumping of the pharynx, because the taste produced by the light is so nasty that the worms pause feeding. As he looked closer, Bhatla noticed the worms’ response was actually quite nuanced. After an initial pause, the pharynx briefly starts pumping again in short bursts before fully stopping — almost like the worm was chewing for a bit even after tasting the unsavory light. Sometimes, a bubble would escape from the mouth, like a burp.

After he joined the project, Sando discovered that the worms were neither burping nor continuing to munch. Instead, the “burst pumps” were driving material in the opposite direction, out of the mouth into the local environment, rather than further back into the pharynx and intestine. In other words, the bad-tasting light caused worms to spit. Sando then spent years chasing his subjects around the microscope with a bright light and recording their actions in slow motion, in order to pinpoint the neural circuitry and muscle motions required for this behavior.

“The discovery that the worms were spitting was quite surprising to us, because the mouth seemed to be moving just like it does when it’s chewing,” Sando says. “It turns out that you really needed to zoom in and slow things down to see what’s going on, because the animals are so small and the behavior is happening so quickly.”

To analyze what’s happening in the pharynx to produce this spitting motion, the researchers used a tiny laser beam to surgically remove individual nerve and muscle cells from the mouth and discern how that affected the worm’s behavior. They also monitored the activity of the cells in the mouth by tagging them with specially-engineered fluorescent “reporter” proteins.

They saw that while the worm is eating, three muscle cells towards the front of the pharynx called pm3s contract and relax together in synchronous pulses. But as soon as the worm tastes light, the subregions of these individual cells closest to the front of the mouth become locked in a state of contraction, opening the front of the mouth and allowing material to be propelled out. This reverses the direction of the flow of the ingested material and converts feeding into spitting.

The team determined that this “uncoupling” phenomenon is controlled by a single neuron at the back of the worm’s mouth. Called M1, this nerve cell spurs a localized influx of calcium at the front end of the pm3 muscle likely responsible for triggering the sub-cellular contractions.

M1 relays important information like a switchboard. It receives incoming signals from many different neurons, and transmits that information to the muscles involved in spitting. Sando and his team suspect that the strength of the incoming signal can tune the worm’s behavior in response to tasting light. For instance, their findings suggest that a revolting taste elicits a vigorous rinsing of the mouth, while a mildly unpleasant sensation causes the worm to spit more gently, just enough to eject the contents.

In the future, Sando thinks the worm could be used as a model to study how neurons trigger subregions of muscle cells to constrict and shape behavior — a phenomenon they suspect occurs in other animals, possibly including humans.

“We’ve essentially found a new way for a neuron to move a muscle,” Sando says. “Neurons orchestrate the motions of muscles, and this could be a new tool that allows them to exert a sophisticated kind of control. That’s pretty exciting.”

Having more conversations to boost brain development

Engaging children in more conversation may be all it takes to strengthen language processing networks in their brains, according to a new study by MIT scientists.

Childhood experiences, including language exposure, have a profound impact on the brain’s development. Now, scientists led by McGovern Institute investigator John Gabrieli have shown that when families change their communication style to incorporate more back-and-forth exchanges between child and adult, key brain regions grow and children’s language abilities advance. Other parts of the brain may be impacted, as well.

In a study of preschool and kindergarten-aged children and their families, Gabrieli, Harvard postdoctoral researcher Rachel Romeo, and colleagues found that increasing conversation had a measurable impact on children’s brain structure and cognition within just a few months. “In just nine weeks, fluctuations in how often parents spoke with their kids appear to make a difference in brain development, language development, and executive function development,” Gabrieli says. The team’s findings are reported in the June issue of the journal Developmental Cognitive Neuroscience.

“We’re excited because this adds a little more evidence to the idea that [the brain] is malleable,” adds Romeo, who is now an assistant professor at the University of Maryland College Park.

“It suggests that in a relatively short period of time, the brain can change in positive ways,” says Romeo.

30 million word gap

In the 1990s, researchers determined that there are dramatic discrepancies in the language that children are exposed to early in life. They found that children from high-income families heard about 30 million more words during their first three years than children from lower-income families—and those exposed to more language tended to do better on tests of language development, vocabulary, and reading comprehension.

In 2018, Gabrieli and Romeo found that it was not the volume of language that made a difference, however, but instead the extent to which children were engaged in conversation. They measured this by counting the number of “conversational turns” that children experienced over a few days—that is, the frequency with which dialogue switched between child and adult. When they compared the brains of children who experienced significantly different levels of these conversational turns, they found structural and functional differences in regions known to be involved in language and speech.
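The turn-counting measure itself is simple to formalize: a "turn" is scored each time the dialogue switches speaker between child and adult. As a rough illustration only (the function name and data format here are hypothetical, not the study's actual day-long audio-recording analysis pipeline):

```python
# Illustrative sketch: count "conversational turns" as the number of
# times dialogue switches speaker in a time-ordered, labeled transcript.
def count_conversational_turns(utterances):
    """utterances: list of (speaker, text) tuples in time order,
    where speaker is e.g. 'child' or 'adult'."""
    turns = 0
    previous_speaker = None
    for speaker, _text in utterances:
        # A turn is scored only when the speaker changes.
        if previous_speaker is not None and speaker != previous_speaker:
            turns += 1
        previous_speaker = speaker
    return turns

transcript = [
    ("adult", "What did you build?"),
    ("child", "A rocket!"),
    ("adult", "Where will it fly?"),
    ("adult", "To the moon?"),   # same speaker twice: no new turn
    ("child", "To Mars."),
]
print(count_conversational_turns(transcript))  # → 3
```

Note that under this measure, an adult speaking at length without a child's response adds nothing — which is why sheer word volume and conversational turns can diverge so sharply.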

After observing these differences, the researchers wanted to know whether altering a child’s language environment would impact their brain’s future development. To find out, they enrolled the families of fifty-two children between the ages of four and seven in a study, and randomly assigned half of the families to participate in a nine-week parent training program. While the program did not focus exclusively on language, there was an emphasis on improving communication, and parents were encouraged to engage in meaningful dialogues with their children.

Romeo and colleagues sent families home with audio recording devices to capture all of the language children were exposed to over two full days, first at the outset of the program and again after the nine-week training was complete. When they analyzed the recordings, they found that in many families, conversation between children and their parents had increased—and children who experienced the greatest increase in conversational turns showed the greatest improvements in language skills as well as in executive functions—a set of skills that includes memory, attention, and self-control.

 

Clusters where changes in cortical thickness are significantly correlated with changes in children’s experienced conversational turns. Scatterplots represent the average change in cortical thickness as a function of the pre-to-post changes in conversational turns.

MRI scans showed that over the nine-week study, these children also experienced the most growth in two key brain areas: a sound processing center called the supramarginal gyrus and a region involved in language processing and speech production called Broca’s area. Intriguingly, these areas are very close to parts of the brain involved in executive function and social cognition.

“The brain networks for executive functioning, language, and social cognition are deeply intertwined and going through these really important periods of development during this preschool and transition-to-school period,” Romeo says. “Conversational turns seem to be going beyond just linguistic information. They seem to be about human communication and cognition at a deeper level. I think the brain results are suggestive of that, because there are so many language regions that could pop out, but these happen to be language regions that also are associated with other cognitive functions.”

Talk more

Gabrieli and Romeo say they are interested in exploring simple ways—such as web- or smartphone-based tools—to support parents in communicating with their children in ways that foster brain development. It’s particularly exciting, Gabrieli notes, that introducing more conversation can impact brain development at the age when children are preparing to begin school.

“Kids who arrive at school ready in language skills do better in school for years to come,” Gabrieli says. “So I think it’s really exciting to be able to see that school readiness is so flexible and dynamic in nine weeks of experience.”

“We know this is not a trivial ask of people,” he says. “There’s a lot of factors that go into people’s lives— their own prior experiences, the pressure of their circumstances. But it’s a doable thing. You don’t have to have an expensive tutor or some deluxe pre-K environment. You can just talk more with your kid.”

International Dyslexia Association recognizes John Gabrieli with highest honor

Cognitive neuroscientist John Gabrieli has been named the 2021 winner of the Samuel Torrey Orton Award, the International Dyslexia Association’s highest honor. The award recognizes achievements of leading researchers and practitioners in the dyslexia field, as well as those of individuals with dyslexia who exhibit leadership and serve as role models in their communities.

“I am grateful to the International Dyslexia Association for this recognition,” said Gabrieli, who is the Grover Hermann Professor of Health Sciences and Technology, a professor of brain and cognitive sciences, and a member of MIT’s McGovern Institute for Brain Research. “The association has been such an advocate for individuals and their families who struggle with dyslexia, and has also been such a champion for the relevant science. I am humbled to join the company of previous recipients of this award who have done so much to help us understand dyslexia and how individuals with dyslexia can be supported to flourish in their growth and development.”

Gabrieli, who is also the director of MIT’s Athinoula A. Martinos Imaging Center, uses neuroimaging and behavioral tests to understand how the human brain powers learning, thinking, and feeling.  For the last two decades, Gabrieli has sought to unravel the neuroscience behind learning and reading disabilities and, ultimately, convert that understanding into new and better education interventions—a sort of translational medicine for the classroom.

“We want to get every kid to be an adequate reader by the end of the third grade,” Gabrieli says. “That’s the ultimate goal: to help all children become learners.”

In March of 2018, Gabrieli and the MIT Integrated Learning Initiative—MITili, which he also directs—announced a $30 million grant from the Chan Zuckerberg Initiative for a collaboration between MIT, the Harvard Graduate School of Education, and Florida State University. This partnership, called “Reach Every Reader,” aims to make significant progress on the crisis in early literacy, including tools to identify children at risk for dyslexia and other learning disabilities before they even learn to read.

“John is especially deserving of this award,” says Hugh Catts, Gabrieli’s colleague at Reach Every Reader. Catts is a professor and director of the School of Communications Science and Disorders at Florida State University. “His work has been seminal to our understanding of the neural basis of learning and learning difficulties such as dyslexia. He has been a strong advocate for individuals with dyslexia and a mentor to leading experts in the field,” says Catts, who also received the Orton Award in 2008.

“It’s a richly deserved honor,” says Sanjay Sarma, the Fred Fort Flowers (1941) and Daniel Fort Flowers (1941) Professor of Mechanical Engineering at MIT. “John’s research is a cornerstone of MIT’s efforts to make education more equitable and accessible for all. His contributions to learning science inform so much of what we do, and his advocacy continues to raise public awareness of dyslexia and helps us better reach the dyslexic community through literacy initiatives such as Reach Every Reader. We’re so pleased that his work has been recognized with the Samuel Torrey Orton Award,” says Sarma, who is also Vice President for Open Learning at MIT.

Gabrieli will deliver the Samuel Torrey Orton and Joan Lyday Orton Memorial Lecture this fall in North Carolina as part of the 2021 International Dyslexia Association’s Annual Reading, Literacy and Learning Conference.