MRI sensor images deep brain activity

Calcium is a critical signaling molecule for most cells, and it is especially important in neurons. Imaging calcium in brain cells can reveal how neurons communicate with each other; however, current imaging techniques can only penetrate a few millimeters into the brain.

MIT researchers have now devised a new way to image calcium activity that is based on magnetic resonance imaging (MRI) and allows them to peer much deeper into the brain. Using this technique, they can track signaling processes inside the neurons of living animals, enabling them to link neural activity with specific behaviors.

“This paper describes the first MRI-based detection of intracellular calcium signaling, which is directly analogous to powerful optical approaches used widely in neuroscience but now enables such measurements to be performed in vivo in deep tissue,” says Alan Jasanoff, an MIT professor of biological engineering, brain and cognitive sciences, and nuclear science and engineering, and an associate member of MIT’s McGovern Institute for Brain Research.

Jasanoff is the senior author of the paper, which appears in the Feb. 22 issue of Nature Communications. MIT postdocs Ali Barandov and Benjamin Bartelle are the paper’s lead authors. MIT senior Catherine Williamson, recent MIT graduate Emily Loucks, and Arthur Amos Noyes Professor Emeritus of Chemistry Stephen Lippard are also authors of the study.

Getting into cells

In their resting state, neurons have very low calcium levels. However, when they fire an electrical impulse, calcium floods into the cell. Over the past several decades, scientists have devised ways to image this activity by labeling calcium with fluorescent molecules. This can be done in cells grown in a lab dish, or in the brains of living animals, but this kind of microscopy imaging can only penetrate a few tenths of a millimeter into the tissue, limiting most studies to the surface of the brain.

“There are amazing things being done with these tools, but we wanted something that would allow ourselves and others to look deeper at cellular-level signaling,” Jasanoff says.

To achieve that, the MIT team turned to MRI, a noninvasive technique that works by detecting magnetic interactions between an injected contrast agent and water molecules inside cells.

Many scientists have been working on MRI-based calcium sensors, but the major obstacle has been developing a contrast agent that can get inside brain cells. Last year, Jasanoff’s lab developed an MRI sensor that can measure extracellular calcium concentrations, but that sensor was based on nanoparticles too large to enter cells.

To create their new intracellular calcium sensors, the researchers used building blocks that can pass through the cell membrane. The contrast agent contains manganese, a metal that interacts weakly with magnetic fields, bound to an organic compound that can penetrate cell membranes. This complex also contains a calcium-binding arm called a chelator.

Once inside the cell, if calcium levels are low, the calcium chelator binds weakly to the manganese atom, shielding the manganese from MRI detection. When calcium flows into the cell, the chelator binds to the calcium and releases the manganese, which makes the contrast agent appear brighter in an MRI image.

“When neurons, or other brain cells called glia, become stimulated, they often experience more than tenfold increases in calcium concentration. Our sensor can detect those changes,” Jasanoff says.

Precise measurements

The researchers tested their sensor in rats by injecting it into the striatum, a region deep within the brain that is involved in planning movement and learning new behaviors. They then used potassium ions to stimulate electrical activity in neurons of the striatum, and were able to measure the calcium response in those cells.

Jasanoff hopes to use this technique to identify small clusters of neurons that are involved in specific behaviors or actions. Because this method directly measures signaling within cells, it can offer much more precise information about the location and timing of neuron activity than traditional functional MRI (fMRI), which measures blood flow in the brain.

“This could be useful for figuring out how different structures in the brain work together to process stimuli or coordinate behavior,” he says.

In addition, this technique could be used to image calcium as it performs many other roles, such as facilitating the activation of immune cells. With further modification, it could also one day be used to perform diagnostic imaging of the brain or other organs whose functions rely on calcium, such as the heart.

The research was funded by the National Institutes of Health and the MIT Simons Center for the Social Brain.

2019 Scolnick Prize Awarded to Richard Huganir

The McGovern Institute announced today that the winner of the 2019 Edward M. Scolnick Prize in Neuroscience is Rick Huganir, the Bloomberg Distinguished Professor of Neuroscience and Psychological and Brain Sciences at the Johns Hopkins University School of Medicine. Huganir is being recognized for his role in understanding the molecular and biochemical underpinnings of “synaptic plasticity,” changes at synapses that are key to learning and memory formation. The Scolnick Prize is awarded annually by the McGovern Institute to recognize outstanding advances in any field of neuroscience.

“Rick Huganir has made a huge impact on our understanding of how neurons communicate with one another, and the award honors him for this ground-breaking research,” says Robert Desimone, director of the McGovern Institute and the chair of the committee.

“He conducts basic research on the synapses between neurons but his work has important implications for our understanding of many brain disorders that impair synaptic function.”

As the past president of the Society for Neuroscience, the world’s largest organization of researchers that study the brain and nervous system, Huganir is well-known in the global neuroscience community. He also directs the Kavli Neuroscience Discovery Institute and serves as director of the Solomon H. Snyder Department of Neuroscience at Johns Hopkins University School of Medicine and co-director of the Johns Hopkins Brain Science Institute.

From the beginning of his research career, Huganir was interested in neurotransmitter receptors, key to signaling at the synapse. He conducted his thesis work in the laboratory of Efraim Racker at Cornell University, where he first reconstituted one of these receptors, the nicotinic acetylcholine receptor, allowing its biochemical characterization. He went on to become a postdoctoral fellow in Paul Greengard’s lab at The Rockefeller University in New York. During this time, he made the first functional demonstration that phosphorylation, a reversible chemical modification, affects neurotransmitter receptor activity. Phosphorylation was shown to regulate desensitization, the process by which neurotransmitter receptors stop reacting during prolonged exposure to the neurotransmitter.

Upon arriving at Johns Hopkins University, Huganir broadened this concept, finding that the properties and functions of other key receptors and channels, including the GABAA, AMPA, and kainate receptors, could be controlled through phosphorylation. By understanding the sites of phosphorylation and the effects of this modification, Huganir was laying the foundation for the next major steps from his lab: showing that these modifications affect the strength of synaptic connections and transmission, i.e. synaptic plasticity, and in turn, behavior and memory. Huganir also uncovered proteins that interact with neurotransmitter receptors and influence synaptic transmission and plasticity, thus uncovering another layer of molecular regulation. He went on to define how these accessory factors have such influence, showing that they impact the subcellular targeting and cycling of neurotransmitter receptors to and from the synaptic membrane. These mechanisms influence the formation of, for example, fear memory, as well as its erasure. Indeed, Huganir found that a specific type of AMPA receptor is added to synapses in the amygdala after a traumatic event, and that its specific removal results in fear erasure in a mouse model.

Among many awards and honors, Huganir received the Young Investigator Award and the Julius Axelrod Award of the Society for Neuroscience. He was also elected to the American Academy of Arts and Sciences, the US National Academy of Sciences, and the Institute of Medicine. He is also a fellow of the American Association for the Advancement of Science.

The Scolnick Prize was first awarded in 2004, and was established by Merck in honor of Edward M. Scolnick, who was President of Merck Research Laboratories for 17 years. Scolnick is currently a core investigator at the Broad Institute, and chief scientist emeritus of the Stanley Center for Psychiatric Research at the Broad Institute.

Huganir will deliver the Scolnick Prize lecture at the McGovern Institute on May 8, 2019 at 4:00pm in the Singleton Auditorium of MIT’s Brain and Cognitive Sciences Complex (Bldg 46-3002), 43 Vassar Street in Cambridge. The event is free and open to the public.

Ila Fiete joins the McGovern Institute

Ila Fiete, an associate professor in the Department of Brain and Cognitive Sciences at MIT, recently joined the McGovern Institute as an associate investigator. Fiete is working to understand the circuits that underlie short-term memory, integration, and inference in the brain.

Think about the simple act of visiting a new town and getting to know its layout as you explore it. What places are reachable from others? Where are landmarks relative to each other? Where are you relative to these landmarks? How do you get from here to where you want to go next?

The process that occurs as your brain tries to transform the few routes you follow into a coherent map of the world is just one of myriad examples of hard computations that the brain is constantly performing. Fiete’s goal is to understand how the brain is able to carry out such computations, and she is developing and using multiple tools to this end: pure theory to examine neural codes, numerical dynamical models of circuit operation, and techniques to extract information about the underlying circuit dynamics from neural data.

Spatial navigation is a particularly interesting nut to crack from a neural perspective: The mapping devices on your phone have access to global satellite data, previously constructed detailed maps of the town, various additional sensors, and excellent non-leaky memory. By contrast, the brain must build maps, plan routes, and determine goals all using noisy, local sensors, no externally provided maps, and noisy, forgetful, or leaky neurons. Fiete is particularly interested in elucidating how the brain deals with noisy and ambiguous cues from the world to arrive at robust estimates that resolve those ambiguities. She is also interested in how the networks that support memory and integration arise through plasticity, learning, and development in the brain.

Fiete earned a BS in mathematics and physics at the University of Michigan then obtained her PhD in 2004 at Harvard University in the Department of Physics. She held a postdoctoral appointment at the Kavli Institute for Theoretical Physics at the University of California, Santa Barbara from 2004 to 2006, while she was also a visiting member of the Center for Theoretical Biophysics at the University of California at San Diego. Fiete subsequently spent two years at Caltech as a Broad Fellow in brain circuitry, and in 2008 joined the faculty of the University of Texas at Austin. She is currently an HHMI faculty scholar.

Peering under the hood of fake-news detectors

New work from researchers at the McGovern Institute for Brain Research at MIT peers under the hood of an automated fake-news detection system, revealing how machine-learning models catch subtle but consistent differences in the language of factual and false stories. The research also underscores how fake-news detectors should undergo more rigorous testing to be effective for real-world applications.

Popularized as a concept in the United States during the 2016 presidential election, fake news is a form of propaganda created to mislead readers, in order to generate views on websites or steer public opinion.

Almost as quickly as the issue became mainstream, researchers began developing automated fake-news detectors — neural networks that “learn” from large volumes of data to recognize linguistic cues indicative of false articles. Given new articles to assess, these networks can, with fairly high accuracy, separate fact from fiction in controlled settings.

One issue, however, is the “black box” problem — meaning there’s no telling what linguistic patterns the networks analyze during training. They’re also trained and tested on the same topics, which may limit their potential to generalize to new topics, a necessity for analyzing news across the internet.

In a paper presented at the Conference on Neural Information Processing Systems (NeurIPS), the researchers tackle both of those issues. They developed a deep-learning model that learns to detect the language patterns of fake and real news. Part of their work “cracks open” the black box to find the words and phrases the model captures to make its predictions.

Additionally, they tested their model on a novel topic it didn’t see in training. This approach classifies individual articles based solely on language patterns, which more closely represents a real-world application for news readers. Traditional fake news detectors classify articles based on text combined with source information, such as a Wikipedia page or website.

“In our case, we wanted to understand what was the decision-process of the classifier based only on language, as this can provide insights on what is the language of fake news,” says co-author Xavier Boix, a postdoc in the lab of Eugene McDermott Professor Tomaso Poggio at the Center for Brains, Minds, and Machines (CBMM), a National Science Foundation-funded center housed within the McGovern Institute.

“A key issue with machine learning and artificial intelligence is that you get an answer and don’t know why you got that answer,” says graduate student and first author Nicole O’Brien ’17. “Showing these inner workings takes a first step toward understanding the reliability of deep-learning fake-news detectors.”

The model identifies sets of words that tend to appear more frequently in either real or fake news — some perhaps obvious, others much less so. The findings, the researchers say, point to subtle yet consistent differences between fake news — which favors exaggerations and superlatives — and real news, which leans more toward conservative word choices.

“Fake news is a threat for democracy,” Boix says. “In our lab, our objective isn’t just to push science forward, but also to use technologies to help society. … It would be powerful to have tools for users or companies that could provide an assessment of whether news is fake or not.”

The paper’s other co-authors are Sophia Latessa, an undergraduate student in CBMM; and Georgios Evangelopoulos, a researcher in CBMM, the McGovern Institute for Brain Research, and the Laboratory for Computational and Statistical Learning.

Limiting bias

The researchers’ model is a convolutional neural network that trains on a dataset of fake news and real news. For training and testing, the researchers used a popular fake-news research dataset hosted on Kaggle, which contains around 12,000 sample fake-news articles from 244 different websites. They also compiled a dataset of real news samples, using more than 2,000 articles from the New York Times and more than 9,000 from The Guardian.

In training, the model captures the language of an article as “word embeddings,” where words are represented as vectors — basically, arrays of numbers — with words of similar semantic meanings clustered closer together. In doing so, it captures triplets of words as patterns that provide some context — such as, say, a negative comment about a political party. Given a new article, the model scans the text for similar patterns and sends them over a series of layers. A final output layer determines the probability that the article is real or fake.
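
To make that architecture concrete, here is a minimal sketch of such a network, assuming PyTorch; the class name, layer sizes, and hyperparameters are illustrative, not the paper’s actual configuration:

```python
import torch
import torch.nn as nn

class FakeNewsCNN(nn.Module):
    """Toy text CNN in the spirit of the model described: word embeddings,
    width-3 convolutions (word triplets), and a real/fake output layer."""
    def __init__(self, vocab_size, embed_dim=100, n_filters=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # kernel_size=3 means each filter responds to a triplet of words
        self.conv = nn.Conv1d(embed_dim, n_filters, kernel_size=3)
        self.out = nn.Linear(n_filters, 2)   # logits: [real, fake]

    def forward(self, token_ids):             # (batch, seq_len)
        x = self.embed(token_ids)             # (batch, seq_len, embed_dim)
        x = x.transpose(1, 2)                 # (batch, embed_dim, seq_len)
        a = torch.relu(self.conv(x))          # triplet-pattern activations
        pooled = a.max(dim=2).values          # strongest match per filter
        return self.out(pooled)

model = FakeNewsCNN(vocab_size=20000)
logits = model(torch.randint(0, 20000, (4, 200)))  # 4 articles, 200 tokens
probs = torch.softmax(logits, dim=1)                # P(real), P(fake)
print(probs)
```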

The researchers first trained and tested the model in the traditional way, using the same topics. But they thought this might create an inherent bias in the model, since certain topics are more often the subject of fake or real news. For example, fake news stories are generally more likely to include the words “Trump” and “Clinton.”

“But that’s not what we wanted,” O’Brien says. “That just shows topics that are strongly weighting in fake and real news. … We wanted to find the actual patterns in language that are indicative of those.”

Next, the researchers trained the model on all topics without any mention of the word “Trump,” and tested the model only on samples that had been set aside from the training data and that did contain the word “Trump.” While the traditional approach reached 93-percent accuracy, the second approach reached 87-percent accuracy. This accuracy gap, the researchers say, highlights the importance of using topics held out from the training process, to ensure the model can generalize what it has learned to new topics.
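
A minimal sketch of that held-out-topic split, assuming a simple list-of-dicts data layout (the field names and keyword rule are invented for illustration): articles mentioning the held-out word go only into the test set.

```python
# Hypothetical topic-holdout split: train on articles that never mention
# the held-out topic word, test only on articles that do.
def topic_holdout(articles, topic="trump"):
    train, test = [], []
    for article in articles:
        bucket = test if topic in article["text"].lower() else train
        bucket.append(article)
    return train, test

articles = [
    {"text": "Trump announces policy...", "label": "fake"},
    {"text": "Markets rose on Tuesday...", "label": "real"},
]
train_set, test_set = topic_holdout(articles)
```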

More research needed

To open the black box, the researchers then retraced their steps. Each time the model makes a prediction about a word triplet, a certain part of the model activates, depending on whether the triplet is more likely to come from a real or a fake news story. The researchers designed a method to retrace each prediction back to its designated part and then find the exact words that made it activate.
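
Building on the toy classifier sketched above, one hedged way to implement such retracing is to record, for each convolutional filter, the word triplet that activates it most strongly; the id_to_word mapping is an assumed piece of preprocessing:

```python
import torch

def top_triplets(model, token_ids, id_to_word, k=5):
    """For a trained FakeNewsCNN (sketched earlier), find the word
    triplets that most strongly activate the convolutional filters."""
    x = model.embed(token_ids.unsqueeze(0)).transpose(1, 2)
    acts = torch.relu(model.conv(x)).squeeze(0)   # (n_filters, positions)
    strength, position = acts.max(dim=1)          # best position per filter
    best = strength.topk(k)
    for s, f in zip(best.values, best.indices):
        p = position[f].item()                    # triplet starts here
        words = [id_to_word[t.item()] for t in token_ids[p:p + 3]]
        print(f"filter {f.item():3d}  activation {s.item():.2f}  {' '.join(words)}")
```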

More research is needed to determine how useful this information is to readers, Boix says. In the future, the model could potentially be combined with, say, automated fact-checkers and other tools to give readers an edge in combating misinformation. After some refining, the model could also be the basis of a browser extension or app that alerts readers to potential fake news language.

“If I just give you an article, and highlight those patterns in the article as you’re reading, you could assess if the article is more or less fake,” he says. “It would be kind of like a warning to say, ‘Hey, maybe there is something strange here.’”

Joining the dots in large neural datasets

You might have played ‘join the dots’, a puzzle where numbers guide you to draw until a complete picture emerges. But imagine a complex underlying image with no numbers to guide the sequence of joining. This is a problem that challenges scientists who work with large amounts of neural data. Sometimes they can align data to a stereotyped behavior, and thus define a sequence of neuronal activity underlying navigation of a maze or singing of a song learned and repeated across generations of birds. But most natural behavior is not stereotyped, and when it comes to sleeping, imagining, and other higher order activities, there is not even a physical behavioral readout for alignment. Michale Fee and colleagues have now developed an algorithm, seqNMF, that can recognize relevant sequences of neural activity, even when there is no guide to align to, such as an overt sequence of behaviors or notes.

“This method allows you to extract structure from the internal life of the brain without being forced to make reference to inputs or output,” says Michale Fee, a neuroscientist at the McGovern Institute at MIT, Associate Department Head and Glen V. and Phyllis F. Dorflinger Professor of Neuroscience in the Department of Brain and Cognitive Sciences, and investigator with the Simons Collaboration on the Global Brain. Fee conducted the study in collaboration with Mark S. Goldman of the University of California, Davis.

In order to achieve this, the authors of the study, co-led by Emily L. Mackevicius and Andrew H. Bahle of the McGovern Institute, took a process called convolutional non-negative matrix factorization (NMF), a tool that allows extraction of sparse but important features from complex and noisy data, and developed it so that it could be used to extract sequences over time that are related to a learned behavior or song. The new algorithm also relies on repetition, but on tell-tale repetitions of neural activity rather than simplistic repetitions in the animal’s behavior. seqNMF can follow repeated sequences of firing over time that are not tied to a specific external reference time framework, and can extract relevant sequences of neural firing in an unsupervised fashion, without the researcher supplying prior information.
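
seqNMF itself is released as MATLAB code; purely as an intuition pump, here is a minimal numpy sketch of plain convolutive NMF, the model seqNMF builds on, in which a neurons-by-time matrix is approximated by a few spatiotemporal templates convolved with their onset times. seqNMF’s distinguishing regularization penalty, which favors a small, non-redundant set of sequences, is omitted here:

```python
import numpy as np

rng = np.random.default_rng(0)

def shift(M, l):
    """Shift the columns of M by l bins (right if l > 0, left if l < 0)."""
    out = np.zeros_like(M)
    if l >= 0:
        out[:, l:] = M[:, :M.shape[1] - l]
    else:
        out[:, :l] = M[:, -l:]
    return out

def reconstruct(W, H):
    """X_hat[n, t] = sum_k sum_l W[n, k, l] * H[k, t - l]."""
    return sum(W[:, :, l] @ shift(H, l) for l in range(W.shape[2]))

def conv_nmf(X, K=3, L=20, n_iter=200, eps=1e-9):
    """Plain convolutive NMF with multiplicative updates.
    X: (neurons, time); W: (neurons, K, L) templates; H: (K, time) onsets."""
    N, T = X.shape
    W, H = rng.random((N, K, L)), rng.random((K, T))
    for _ in range(n_iter):
        X_hat = reconstruct(W, H)
        num = sum(W[:, :, l].T @ shift(X, -l) for l in range(L))
        den = sum(W[:, :, l].T @ shift(X_hat, -l) for l in range(L)) + eps
        H *= num / den
        X_hat = reconstruct(W, H)
        for l in range(L):
            W[:, :, l] *= (X @ shift(H, l).T) / (X_hat @ shift(H, l).T + eps)
    return W, H
```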

In the current study, the authors initially applied and honed the system on synthetic datasets. These tests showed that the algorithm could “join the dots” without additional guiding input. When seqNMF performed well in these tests, they applied it to available open-source data from rats, finding that they could extract sequences of neural firing in the hippocampus that are relevant to finding a water reward in a maze.

Having passed these initial tests, the authors upped the ante and challenged seqNMF to find relevant neural activity sequences in a non-stereotyped behavior: improvised singing by zebra finches that have not learned the signature song of their species (untutored birds). The authors analyzed neural data from the HVC, a region of the bird brain previously linked to song learning. Since normal adult bird songs are stereotyped, the researchers could align neural activity with features of the song itself in well-tutored birds. Fee and colleagues then turned to untutored birds and found that these birds still produce repeated neural sequences related to their “improvised” songs, reminiscent of those in tutored birds but messier. Indeed, the brain of an untutored bird will even initiate two distinct neural signatures at the same time, but seqNMF is able to see past the resulting neural cacophony and decipher that multiple overlapping patterns are present. Finding this level of order in such neural datasets is nearly impossible with previous methods of analysis.

seqNMF can be applied, potentially, to any neural activity, and the researchers are now testing whether the algorithm can indeed be generalized to extract information from other types of neural data. In other words, now that it’s clear that seqNMF can find a relevant sequence of neural activity for a non-stereotypical behavior, scientists can examine whether the neural basis of behaviors in other organisms and even for activities such as sleep and imagination can be extracted. Indeed, seqNMF is available on GitHub for researchers to apply to their own questions of interest.

Scientists engineer new CRISPR platform for DNA targeting

A team that includes the scientist who first harnessed the revolutionary CRISPR-Cas9 and other systems for genome editing of eukaryotic organisms, including animals and plants, has engineered another CRISPR system, called Cas12b. The new system offers improved capabilities and options when compared to CRISPR-Cas9 systems.

In a study published today in Nature Communications, Feng Zhang and colleagues at the Broad Institute of MIT and Harvard and the McGovern Institute for Brain Research at MIT, with co-author Eugene Koonin at the National Institutes of Health, demonstrate that the new enzyme can be engineered to target and precisely nick or edit the genomes of human cells. The high target specificity and small size of Cas12b from Bacillus hisashii (BhCas12b), as compared with Cas9 (SpCas9), make this new system suitable for in vivo applications. The team is now making CRISPR-Cas12b widely available for research.

The team previously identified Cas12b (then known as C2c1) as one of three promising new CRISPR enzymes in 2015, but faced a hurdle: Because Cas12b comes from thermophilic bacteria — which live in hot environments such as geysers, hot springs, volcanoes, and deep sea hydrothermal vents — the enzyme naturally only works at temperatures higher than human body temperature.

“We searched for inspirations from nature,” Zhang said. “We wanted to create a version of Cas12b that could operate at lower temperatures, so we scanned thousands of bacterial genetic sequences, looking in bacteria that could thrive in the lower temperatures of mammalian environments.”

Through a combination of exploration of natural diversity and rational engineering of promising candidate enzymes, they generated a version of Cas12b capable of efficiently editing genomes in primary human T cells, an important initial step for therapeutics that target or leverage the immune system.

“This is further evidence that there are many useful CRISPR systems waiting to be discovered,” said Jonathan Strecker, a postdoctoral fellow in the Zhang Lab, a Human Frontiers Science program fellow, and the study’s first author.

The field is moving quickly: Since the Cas12b family of enzymes was first described in 2015 and demonstrated to be RNA-guided DNA endonucleases, several groups have been exploring this family of enzymes. In 2017, a team from Jennifer Doudna’s lab at UC Berkeley reported that Cas12b from Alicyclobacillus acidoterrestris can mediate non-specific collateral cleavage of DNA in vitro. More recently, a team from the Chinese Academy of Sciences in Beijing reported that another Cas12b, from Alicyclobacillus acidiphilus, was used to edit mammalian cells.

The Broad Institute and MIT are sharing the Cas12b system widely. As with earlier genome editing tools, these groups will make the technology freely available for academic research via the Zhang lab’s page on the plasmid-sharing website Addgene, through which the Zhang lab has already shared reagents more than 52,000 times with researchers at nearly 2,400 labs in 62 countries to accelerate research.

Zhang is a core institute member of the Broad Institute of MIT and Harvard, as well as an investigator at the McGovern Institute for Brain Research at MIT, the James and Patricia Poitras Professor of Neuroscience at MIT, and an associate professor at MIT, with joint appointments in the departments of Brain and Cognitive Sciences and Biological Engineering.

Support for this study was provided by the Poitras Center for Psychiatric Disorders Research, the Hock E. Tan and K. Lisa Yang Center for Autism Research, the National Human Genome Research Institute, the National Institute of Mental Health, the National Heart, Lung, and Blood Institute, and other sources. Feng Zhang is an Investigator with the Howard Hughes Medical Institute.

References:

Strecker J, et al. Engineering of CRISPR-Cas12b for human genome editing. Nature Communications. Online January 22, 2019. DOI: 10.1038/s41467-018-08224-4.

Mapping the brain at high resolution

Researchers have developed a new way to image the brain with unprecedented resolution and speed. Using this approach, they can locate individual neurons, trace connections between them, and visualize organelles inside neurons, over large volumes of brain tissue.

The new technology combines a method for expanding brain tissue, making it possible to image at higher resolution, with a rapid 3-D microscopy technique known as lattice light-sheet microscopy. In a paper appearing in Science Jan. 17, the researchers showed that they could use these techniques to image the entire fruit fly brain, as well as large sections of the mouse brain, much faster than has previously been possible. The team includes researchers from MIT, the University of California at Berkeley, the Howard Hughes Medical Institute, and Harvard Medical School/Boston Children’s Hospital.

This technique allows researchers to map large-scale circuits within the brain while also offering unique insight into individual neurons’ functions, says Edward Boyden, the Y. Eva Tan Professor in Neurotechnology, an associate professor of biological engineering and of brain and cognitive sciences at MIT, and a member of MIT’s McGovern Institute for Brain Research, Media Lab, and Koch Institute for Integrative Cancer Research.

“A lot of problems in biology are multiscale,” Boyden says. “Using lattice light-sheet microscopy, along with the expansion microscopy process, we can now image at large scale without losing sight of the nanoscale configuration of biomolecules.”

Boyden is one of the study’s senior authors, along with Eric Betzig, a senior fellow at the Janelia Research Campus and a professor of physics and molecular and cell biology at UC Berkeley. The paper’s lead authors are MIT postdoc Ruixuan Gao, former MIT postdoc Shoh Asano, and Harvard Medical School Assistant Professor Srigokul Upadhyayula.

Large-scale imaging

In 2015, Boyden’s lab developed a way to generate very high-resolution images of brain tissue using an ordinary light microscope. Their technique relies on expanding tissue before imaging it, allowing them to image the tissue at a resolution of about 60 nanometers. Previously, this kind of imaging could be achieved only with very expensive high-resolution microscopes, known as super-resolution microscopes.
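
The arithmetic behind that gain is simple: a conventional light microscope is diffraction-limited to roughly 300 nanometers, so physically enlarging a specimen about 4.5-fold in each dimension, the linear expansion reported for the original technique, gives an effective resolution of roughly 300 / 4.5 ≈ 65 nanometers, consistent with the figure quoted above.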

In the new study, Boyden teamed up with Betzig and his colleagues at HHMI’s Janelia Research Campus to combine expansion microscopy with lattice light-sheet microscopy. This technology, which Betzig developed several years ago, has some key traits that make it ideal to pair with expansion microscopy: It can image large samples rapidly, and it induces much less photodamage than other fluorescent microscopy techniques.

“The marrying of the lattice light-sheet microscope with expansion microscopy is essential to achieve the sensitivity, resolution, and scalability of the imaging that we’re doing,” Gao says.

Imaging expanded tissue samples generates huge amounts of data — up to tens of terabytes per sample — so the researchers also had to devise highly parallelized computational image-processing techniques that could break down the data into smaller chunks, analyze it, and stitch it back together into a coherent whole.
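
The paper’s actual pipeline is not reproduced here, but the chunk-process-stitch pattern it relies on can be sketched in a few lines of Python; the chunk and halo sizes are invented, and op stands in for an arbitrary shape-preserving processing step:

```python
import numpy as np

def process_in_chunks(volume, op, chunk=256, halo=16):
    """Sketch of the chunk-process-stitch pattern: cut a huge 3-D volume
    into blocks small enough to fit in memory, run a shape-preserving
    step `op` on each block (in practice, on many workers in parallel),
    and write only each block's interior back, so the overlapping 'halo'
    absorbs edge artifacts at chunk boundaries."""
    out = np.empty_like(volume)
    for z in range(0, volume.shape[0], chunk):
        for y in range(0, volume.shape[1], chunk):
            for x in range(0, volume.shape[2], chunk):
                z0, y0, x0 = max(z - halo, 0), max(y - halo, 0), max(x - halo, 0)
                block = volume[z0:z + chunk + halo,
                               y0:y + chunk + halo,
                               x0:x + chunk + halo]
                result = op(block)  # must preserve the block's shape
                out[z:z + chunk, y:y + chunk, x:x + chunk] = result[
                    z - z0:z - z0 + chunk,
                    y - y0:y - y0 + chunk,
                    x - x0:x - x0 + chunk]
    return out

# Illustrative use on a random test volume, with a no-op stand-in for op.
demo = np.random.rand(300, 300, 300).astype(np.float32)
smoothed = process_in_chunks(demo, op=lambda b: b)
```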

In the Science paper, the researchers demonstrated the power of their new technique by imaging layers of neurons in the somatosensory cortex of mice, after expanding the tissue volume fourfold. They focused on a type of neuron known as pyramidal cells, one of the most common excitatory neurons found in the nervous system. To locate synapses, or connections, between these neurons, they labeled proteins found in the presynaptic and postsynaptic regions of the cells. This also allowed them to compare the density of synapses in different parts of the cortex.

Using this technique, it is possible to analyze millions of synapses in just a few days.

“We counted clusters of postsynaptic markers across the cortex, and we saw differences in synaptic density in different layers of the cortex,” Gao says. “Using electron microscopy, this would have taken years to complete.”
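
As a rough illustration of what such counting can look like computationally, here is a toy thresholding approach in Python; this is not the study’s pipeline, which is far more involved:

```python
import numpy as np
from scipy import ndimage

def count_puncta(channel, threshold):
    """Count connected bright blobs (putative postsynaptic puncta) in a
    3-D image of a synaptic-marker channel: threshold, then label
    connected components."""
    labeled, n_puncta = ndimage.label(channel > threshold)
    return n_puncta

# Illustrative use: compare puncta density across two cortical layers.
layer2 = np.random.rand(64, 256, 256)  # stand-ins for real image volumes
layer5 = np.random.rand(64, 256, 256)
for name, vol in [("layer 2/3", layer2), ("layer 5", layer5)]:
    density = count_puncta(vol, threshold=0.999) / vol.size
    print(name, f"{density:.2e} puncta per voxel")
```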

The researchers also studied patterns of axon myelination in different neurons. Myelin is a fatty substance that insulates axons and whose disruption is a hallmark of multiple sclerosis. The researchers were able to compute the thickness of the myelin coating in different segments of axons, and they measured the gaps between stretches of myelin, which are important because they help conduct electrical signals. Previously, this kind of myelin tracing would have required months to years for human annotators to perform.
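
A minimal sketch of one way such measurements could be automated, assuming an axon has already been traced and converted to a boolean myelination mask sampled along its length; the voxel size is an invented parameter:

```python
import numpy as np

def myelin_runs(mask, voxel_nm=50):
    """Toy run-length analysis: given a 1-D boolean mask sampled along a
    traced axon (True = myelinated), return the lengths (in nm) of
    myelinated stretches and of the gaps between them."""
    runs, lengths = [], []
    edges = np.flatnonzero(np.diff(mask.astype(int))) + 1  # transitions
    for segment in np.split(mask, edges):
        runs.append(bool(segment[0]))
        lengths.append(len(segment) * voxel_nm)
    myelin = [l for r, l in zip(runs, lengths) if r]
    gaps = [l for r, l in zip(runs, lengths) if not r]
    return myelin, gaps

mask = np.array([1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1], dtype=bool)
print(myelin_runs(mask))  # ([150, 200, 100], [100, 50])
```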

This technology can also be used to image tiny organelles inside neurons. In the new paper, the researchers identified mitochondria and lysosomes, and they also measured variations in the shapes of these organelles.

Circuit analysis

The researchers demonstrated that this technique could be used to analyze brain tissue from other organisms as well; they used it to image the entire brain of the fruit fly, which is the size of a poppy seed and contains about 100,000 neurons. In one set of experiments, they traced an olfactory circuit that extends across several brain regions, imaged all dopaminergic neurons, and counted all synapses across the brain. By comparing multiple animals, they also found differences in the numbers and arrangements of synaptic boutons within each animal’s olfactory circuit.

In future work, Boyden envisions that this technique could be used to trace circuits that control memory formation and recall, to study how sensory input leads to a specific behavior, or to analyze how emotions are coupled to decision-making.

“These are all questions at a scale that you can’t answer with classical technologies,” he says.

The system could also have applications beyond neuroscience, Boyden says. His lab is planning to work with other researchers to study how HIV evades the immune system, and the technology could also be adapted to study how cancer cells interact with surrounding cells, including immune cells.

The research was funded by John Doerr, K. Lisa Yang and Y. Eva Tan, the Open Philanthropy Project, the National Institutes of Health, the Howard Hughes Medical Institute, the HHMI-Simons Faculty Scholars Program, the U.S. Army Research Laboratory and Army Research Office, the US-Israel Binational Science Foundation, Biogen, and Ionis Pharmaceuticals.

Welcoming the first McGovern Fellows

We are delighted to kick off the new year by welcoming Omar Abudayyeh and Jonathan Gootenberg as the first members of our new McGovern Institute Fellows Program. The fellows program is a recently launched initiative that supports talented postdocs who are ready to initiate their own research programs.

As McGovern Fellows, the pair will be given space, time, and support to help them follow scientific research directions of their own choosing. This provides an alternative to the traditional postdoctoral research route.

Abudayyeh and Gootenberg both defended their theses in the fall of 2018, and graduated from the lab of Feng Zhang, who is the James and Patricia Poitras Professor of Neuroscience at MIT, a McGovern investigator, and a core member of the Broad Institute. During their time in the Zhang lab, Abudayyeh and Gootenberg worked on projects that sought and found new tools based on enzymes mined from bacterial CRISPR systems. Cas9 is the original programmable single-effector DNA-editing enzyme, and the new McGovern Fellows worked on teams that actively looked for CRISPR enzymes with properties distinct from and complementary to Cas9. In the course of their thesis work, they helped to identify RNA-guided RNA-editing factors such as the Cas13 family. This work led to the development of the REPAIR system, which is capable of editing RNA, thus providing a CRISPR-based therapeutic avenue that is not based on permanent, heritable changes to the genome. In addition, they worked on a Cas13-based diagnostic system called SHERLOCK that can detect specific nucleic acid sequences. SHERLOCK is able to detect the presence of infectious agents such as Zika virus in an easily deployable lateral-flow format, similar to a pregnancy test.

We are excited to see the directions that the new McGovern Fellows take as they now arrive at the institute, and will keep you posted on scientific findings as they emerge from their labs.

Plugging into the brain

Driven by curiosity and therapeutic goals, Anikeeva leaves no scientific stone unturned in her quest to invent neurotechnology.

The audience sits utterly riveted as Polina Anikeeva highlights the gaps she sees in the landscape of neural tools. With a background in optoelectronics, she has a decidedly unique take on the brain.

“In neuroscience,” says Anikeeva, “we are currently applying silicon-based neural probes with the elastic properties of a knife to a delicate material with the consistency of chocolate pudding—the brain.”

A key problem, summarized by Anikeeva, is that these sharp probes damage tissue, making such interfaces unreliable and thwarting long-term brain studies of processes including development and aging. The state of the art is even grimmer in the clinic. An avid climber, Anikeeva recalls a friend sustaining a spinal cord injury. “She made a remarkable recovery,” explains Anikeeva, “but seeing the technology being used to help her was shocking. Not even the simplest electronic tools were used, it was basically lots of screws and physical therapy.” This crude approach, compared to the elegant optoelectronic tools familiar to Anikeeva, sparked a drive to bring advanced materials technology to biological systems.

Outside the box

As the group breaks up after the seminar, the chatter includes boxes, more precisely, thinking outside of them. An associate professor of materials science and engineering at MIT, Anikeeva recently gained a McGovern Institute appointment through her interest in neuroscience. She sees her journey to neurobiology as serendipitous, having earned her doctorate designing light-emitting devices at MIT.

“I wanted to work on tools that don’t exist, and neuroscience seemed like an obvious choice. Neurons communicate in part through membrane voltage changes and as an electronics designer, I felt that I should be able to use voltage.”

Comfort at the intersection of sciences requires, according to Anikeeva, clarity and focus, also important in her chief athletic pursuits, running and climbing. Through long-distance running, Anikeeva finds solitary time (“assuming that no one can chase me”) and the clarity to consider complicated technical questions. Climbing hones something different: absolute focus in the face of the often-tangled information that comes with working at scientific intersections.

“When climbing, you can only think about one thing, your next move. Only the most important thoughts float up.”

This became particularly important when, in Yosemite National Park, she made the decision to go up, instead of down, during an impending thunderstorm. Getting out depended on clear focus, despite imminent hypothermia and being exposed “on one of the tallest features in the area, holding large quantities of metal.” Polina and her climbing partner made it out, but her summary of events echoes her research philosophy: “What you learn and develop is a strong mindset where you don’t do the comfortable thing, the easy thing. Instead you always find, and execute, the most logical strategy.”

In this vein, Anikeeva’s research pursues two very novel, but exceptionally logical, paths to brain research and therapeutics: fiber development and magnetic nanomaterials.

Drawing new fibers

Walking into Anikeeva’s lab, the eye is immediately drawn to a robust metal frame containing, upon closer scrutiny, recognizable parts: a large drill bit, a motor, a heating element. This custom-built machine applies principles from telecommunications to draw multifunctional fibers using more “brain-friendly” materials.

“We start out with a macroscopic model, a preform, of the device that we ultimately want,” explains Anikeeva.

This “preform” is a transparent block of polymers, composites, and soft low-melting temperature metals with optical and electrical properties needed in the final fiber. “So, this could include electrodes for recording, optical channels for optogenetics, microfluidics for drug delivery, and one day even components that allow chemical or mechanical sensing.” After sitting in a vacuum to remove gases and impurities, the two-inch by one-inch preform arrives at the fiber-drawing tower.

“Then we heat it and pull it, and the macroscopic model becomes a kilometer-long fiber with a lateral dimension of microns, even nanometers,” explains Anikeeva. “Take one of your hairs, and imagine that inside there are electrodes for recording, there are microfluidic channels to infuse drugs, optical channels for stimulation. All of this is combined in a single miniature form factor, and it can be quite flexible and even stretchable.”

Construction crew

Anikeeva’s lab comprises an eclectic mix of 21 researchers from more than 13 countries, with expertise spanning materials science, chemistry, electrical and mechanical engineering, and neuroscience. In 2011, Andres Canales, a materials scientist from Mexico, was the second person to join Anikeeva’s lab.

“There was only an idea, a diagram,” explains Canales. “I didn’t want to work on biology when I arrived at MIT, but talking to Polina, seeing the pictures, thinking about what it would entail, I became very excited by the methods and the potential applications she was thinking of.”

Despite the lack of preliminary models, Anikeeva’s ideas were compelling. Elegant as the fibers are, the road involved painstaking, iterative refinement. From a materials perspective, drawing a fiber containing a continuous conductive element was challenging, as was validation of its properties. But the resulting fiber can deliver optogenetics vectors, monitor expression, and then stimulate neuronal activity in a single surgery, removing the spatial and temporal guesswork usually involved in such an experiment.

Seongjun Park, an electrical engineering graduate student in the lab, explains one biological challenge. “For long term recording in the spinal cord, there was even an additional challenge as the fiber needed to be stretchable to respond to the spine’s movement. For this we developed a drawing process compatible with an elastomer.”

The resulting fibers can be deployed chronically without the scar-tissue accumulation that usually prevents long-term optical manipulation and drug delivery, making them good candidates for the treatment of brain disorders. The lab’s current papers find that these implanted fibers remain useful for three months, and materials innovations make the team confident that longer time periods are possible.

Magnetic moments

Another wing of Anikeeva’s research aims to develop entirely non-invasive modalities that use magnetic nanoparticles to stimulate the brain and deliver therapeutics.

“Magnetic fields are probably the best modality for getting any kind of stimulus to deep tissues,” explains Anikeeva, “because biological systems, except for very specialized systems, do not perceive magnetic fields. They go through us unattenuated, and they don’t couple to our physiology.”

In other words, magnetic fields can safely reach deep tissues, including the brain. Upon reaching their tissue targets these fields can be used to stimulate magnetic nanoparticles, which might one day, for example, be used to deliver dopamine to the brains of Parkinson’s disease patients. The alternating magnetic fields being used in these experiments are tiny, 100-1000 times smaller than fields clinically approved for MRI-based brain imaging.

Tiny fields, but they can be used to powerful effect. By manipulating the magnetic moments of these nanoparticles, the magnetic field can cause the particles to dissipate heat, which in turn stimulates thermal receptors in the nervous system. These receptors naturally detect heat, chili peppers, and vanilla, but Anikeeva’s magnetic nanoparticles act as tiny heaters that activate the receptors and, in turn, local neurons. This principle has already been used to activate the brain’s reward center in freely moving mice.

Siyuan Rao, a postdoc who works on the magnetic nanoparticles in collaboration with McGovern Investigator Guoping Feng, is unhesitating when asked what most inspires her.

“As a materials scientist, it is really rewarding to see my materials at work. We can remotely modulate mouse behavior, even turn hopeless behavior into motivation.”

Pushing the boundaries

Such collaborations are valued by Anikeeva. Early on she worked with McGovern Investigator Emilio Bizzi to use the above fiber technology in the spinal cord. “It is important to us to not just make these devices,” explains Anikeeva, “but to use them and show ourselves, and our colleagues, the types of experiments that they can enable.”

Far from running an assembly line, the researchers in Anikeeva’s lab follow projects from ideation to deployment. “The student that designs a fiber performs their own behavioral experiments and data analysis,” says Anikeeva. “Biology is unforgiving. You can trivially design the most brilliant electrophysiological recording probe, but unless you are directly working in the system, it is easy to miss important design considerations.”

Inspired by this, Anikeeva’s students even started a project with Gloria Choi’s group on their own initiative. This collaborative, can-do ethos spreads beyond the walls of the lab, inspiring people around MIT.

“We often work with a teaching instructor, David Bono, who is an expert on electronics and magnetic instruments,” explains Alex Senko, a senior graduate student in the lab. “In his spare time, he helps those of us who work on electrical engineering flavored projects to hunt down components needed to build our devices.”

These components extend to whatever is needed. When a low-frequency source was required, the Anikeeva lab drafted a guitar amplifier into service.

Queried about difficulties that she faces having chosen to navigate such a broad swath of fields, Anikeeva is focused, as ever, on the unknown, the boundaries of knowledge.

“Honestly, I really, really enjoy it. It keeps me engaged and not bored. Even when thinking about complicated physics and chemistry, I always have eyes on the prize, that this will allow us to address really interesting neuroscience questions.”

With such thinking, and by relentlessly seeking the tools needed to accomplish scientific goals, Anikeeva and her lab continue to avoid the comfortable route, instead using logical routes toward new technologies.

What is CRISPR?

CRISPR (which stands for Clustered Regularly Interspaced Short Palindromic Repeats) is not actually a single entity, but shorthand for a set of bacterial systems that are found in a hallmark arrangement in the bacterial genome.

When CRISPR is mentioned, most people are likely thinking of CRISPR-Cas9, now widely known for its capacity to be re-deployed to target sequences of interest in eukaryotic cells, including human cells. Cas9 can be programmed to target specific stretches of DNA, but other enzymes have since been discovered that are able to edit DNA, including Cpf1 and Cas12b. Other CRISPR enzymes, Cas13 family members, can be programmed to target RNA and even edit and change its sequence.

The common theme that makes CRISPR enzymes so powerful is that scientists can supply them with a guide RNA for a chosen sequence. Since the guide RNA can pair very specifically with DNA (or, for Cas13 family members, RNA), researchers can basically provide a given CRISPR enzyme with a way of homing in on any sequence of interest. Once a CRISPR protein finds its target, it can be used to edit that sequence, perhaps removing a disease-associated mutation.
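
As a toy illustration of that homing-in step, here is a scan of an invented DNA sequence for a 20-nucleotide site matching a guide, followed by the NGG PAM that SpCas9 requires; a real search would also need to handle the reverse strand and tolerated mismatches:

```python
# Toy illustration of RNA-guided targeting: scan a DNA sequence for a
# 20-nt protospacer matching the guide ("spacer"), followed by an NGG
# PAM. Sequences are invented for the example.
def find_cas9_sites(genome, spacer):
    sites = []
    for i in range(len(genome) - len(spacer) - 2):
        protospacer = genome[i:i + len(spacer)]
        pam = genome[i + len(spacer):i + len(spacer) + 3]
        if protospacer == spacer and pam[1:] == "GG":  # NGG PAM
            sites.append(i)
    return sites

genome = "TTACGGATTCATCGGAGCATGCGATAAGGCTTTGGCAT"
spacer = "ATTCATCGGAGCATGCGATA"  # 20-nt guide sequence
print(find_cas9_sites(genome, spacer))  # [6]
```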

In addition, CRISPR proteins have been engineered to modulate gene expression and even signal the presence of particular sequences, as in the case of the Cas13-based diagnostic, SHERLOCK.
