Mapping the brain at high resolution

Researchers have developed a new way to image the brain with unprecedented resolution and speed. Using this approach, they can locate individual neurons, trace connections between them, and visualize organelles inside neurons, over large volumes of brain tissue.

The new technology combines a method for expanding brain tissue, making it possible to image at higher resolution, with a rapid 3-D microscopy technique known as lattice light-sheet microscopy. In a paper appearing in Science Jan. 17, the researchers showed that they could use these techniques to image the entire fruit fly brain, as well as large sections of the mouse brain, much faster than has previously been possible. The team includes researchers from MIT, the University of California at Berkeley, the Howard Hughes Medical Institute, and Harvard Medical School/Boston Children’s Hospital.

This technique allows researchers to map large-scale circuits within the brain while also offering unique insight into individual neurons’ functions, says Edward Boyden, the Y. Eva Tan Professor in Neurotechnology, an associate professor of biological engineering and of brain and cognitive sciences at MIT, and a member of MIT’s McGovern Institute for Brain Research, Media Lab, and Koch Institute for Integrative Cancer Research.

“A lot of problems in biology are multiscale,” Boyden says. “Using lattice light-sheet microscopy, along with the expansion microscopy process, we can now image at large scale without losing sight of the nanoscale configuration of biomolecules.”

Boyden is one of the study’s senior authors, along with Eric Betzig, a senior fellow at the Janelia Research Campus and a professor of physics and molecular and cell biology at UC Berkeley. The paper’s lead authors are MIT postdoc Ruixuan Gao, former MIT postdoc Shoh Asano, and Harvard Medical School Assistant Professor Srigokul Upadhyayula.

Large-scale imaging

In 2015, Boyden’s lab developed a way to generate very high-resolution images of brain tissue using an ordinary light microscope. Their technique relies on expanding tissue before imaging it, allowing them to image the tissue at a resolution of about 60 nanometers. Previously, this kind of imaging could be achieved only with very expensive high-resolution microscopes, known as super-resolution microscopes.
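As a back-of-the-envelope relation (not a formula from the paper), physically expanding the tissue improves the effective resolution by roughly the linear expansion factor, since the diffraction limit of the light microscope stays fixed while the sample grows:

```latex
% Rough estimate, assuming a ~250 nm diffraction-limited microscope
% and roughly fourfold linear expansion (the factor cited later in the article):
r_{\mathrm{effective}} \approx \frac{r_{\mathrm{diffraction}}}{M_{\mathrm{expansion}}}
\approx \frac{250\ \mathrm{nm}}{4} \approx 60\ \mathrm{nm}
```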

In the new study, Boyden teamed up with Betzig and his colleagues at HHMI’s Janelia Research Campus to combine expansion microscopy with lattice light-sheet microscopy. This technology, which Betzig developed several years ago, has some key traits that make it ideal to pair with expansion microscopy: It can image large samples rapidly, and it induces much less photodamage than other fluorescent microscopy techniques.

“The marrying of the lattice light-sheet microscope with expansion microscopy is essential to achieve the sensitivity, resolution, and scalability of the imaging that we’re doing,” Gao says.

Imaging expanded tissue samples generates huge amounts of data — up to tens of terabytes per sample — so the researchers also had to devise highly parallelized computational image-processing techniques that could break the data into smaller chunks, analyze each chunk, and stitch the results back together into a coherent whole.
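The article does not describe the team's actual software, but the general pattern it alludes to (tile a huge volume into chunks, process the chunks in parallel, then stitch the results back into place) can be sketched roughly as follows. The chunk size and the per-chunk analysis are placeholders, and a real terabyte-scale pipeline would stream data from disk rather than hold it in memory.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def process_chunk(chunk):
    """Placeholder per-chunk analysis (stand-in for real image processing)."""
    return chunk - chunk.mean()

def chunk_slices(shape, size):
    """Yield slice tuples that tile a 3-D volume into blocks of `size`."""
    zs, ys, xs = shape
    for z in range(0, zs, size):
        for y in range(0, ys, size):
            for x in range(0, xs, size):
                yield (slice(z, min(z + size, zs)),
                       slice(y, min(y + size, ys)),
                       slice(x, min(x + size, xs)))

def process_volume(volume, size=64):
    """Break a volume into chunks, process them in parallel, stitch results."""
    out = np.empty_like(volume, dtype=np.float32)
    slices = list(chunk_slices(volume.shape, size))
    with ProcessPoolExecutor() as pool:
        results = pool.map(process_chunk, [volume[s] for s in slices])
    for s, result in zip(slices, results):
        out[s] = result  # stitch each processed chunk back into place
    return out

if __name__ == "__main__":
    demo = np.random.rand(128, 128, 128).astype(np.float32)  # toy volume
    stitched = process_volume(demo)
    print(stitched.shape)
```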

In the Science paper, the researchers demonstrated the power of their new technique by imaging layers of neurons in the somatosensory cortex of mice, after expanding the tissue volume fourfold. They focused on a type of neuron known as pyramidal cells, one of the most common excitatory neurons found in the nervous system. To locate synapses, or connections, between these neurons, they labeled proteins found in the presynaptic and postsynaptic regions of the cells. This also allowed them to compare the density of synapses in different parts of the cortex.

Using this technique, it is possible to analyze millions of synapses in just a few days.

“We counted clusters of postsynaptic markers across the cortex, and we saw differences in synaptic density in different layers of the cortex,” Gao says. “Using electron microscopy, this would have taken years to complete.”

The researchers also studied patterns of axon myelination in different neurons. Myelin is a fatty substance that insulates axons and whose disruption is a hallmark of multiple sclerosis. The researchers were able to compute the thickness of the myelin coating in different segments of axons, and they measured the gaps between stretches of myelin, which are important because they help conduct electrical signals. Previously, this kind of myelin tracing would have required months to years for human annotators to perform.

This technology can also be used to image tiny organelles inside neurons. In the new paper, the researchers identified mitochondria and lysosomes, and they also measured variations in the shapes of these organelles.

Circuit analysis

The researchers demonstrated that this technique could be used to analyze brain tissue from other organisms as well; they used it to image the entire brain of the fruit fly, which is the size of a poppy seed and contains about 100,000 neurons. In one set of experiments, they traced an olfactory circuit that extends across several brain regions, imaged all dopaminergic neurons, and counted all synapses across the brain. By comparing multiple animals, they also found that the numbers and arrangements of synaptic boutons within the olfactory circuit varied from animal to animal.

In future work, Boyden envisions that this technique could be used to trace circuits that control memory formation and recall, to study how sensory input leads to a specific behavior, or to analyze how emotions are coupled to decision-making.

“These are all questions at a scale that you can’t answer with classical technologies,” he says.

The system could also have applications beyond neuroscience, Boyden says. His lab is planning to work with other researchers to study how HIV evades the immune system, and the technology could also be adapted to study how cancer cells interact with surrounding cells, including immune cells.

The research was funded by John Doerr, K. Lisa Yang and Y. Eva Tan, the Open Philanthropy Project, the National Institutes of Health, the Howard Hughes Medical Institute, the HHMI-Simons Faculty Scholars Program, the U.S. Army Research Laboratory and Army Research Office, the US-Israel Binational Science Foundation, Biogen, and Ionis Pharmaceuticals.

Welcoming the first McGovern Fellows

We are delighted to kick off the new year by welcoming Omar Abudayyeh and Jonathan Gootenberg as the first members of our new McGovern Institute Fellows Program. The fellows program is a recently launched initiative that supports a select group of highly talented postdocs who are ready to initiate their own research programs.

As McGovern Fellows, the pair will be given space, time, and support to help them follow scientific research directions of their own choosing. This provides an alternative to the traditional postdoctoral research route.

Abudayyeh and Gootenberg both defended their theses in the fall of 2018, graduating from the lab of Feng Zhang, who is the James and Patricia Poitras Professor of Neuroscience at MIT, a McGovern investigator, and a core member of the Broad Institute. During their time in the Zhang lab, Abudayyeh and Gootenberg worked on projects that sought out, and found, new tools based on enzymes mined from bacterial CRISPR systems. Cas9 is the original programmable single-effector DNA-editing enzyme, and the new McGovern Fellows worked on teams that actively looked for CRISPR enzymes with properties distinct from and complementary to Cas9. In the course of their thesis work, they helped to identify RNA-guided RNA-editing factors such as the Cas13 family. This work led to the development of the REPAIR system, which is capable of editing RNA, thus providing a CRISPR-based therapeutic avenue that does not rely on permanent, heritable changes to the genome. In addition, they worked on a Cas13-based diagnostic system called SHERLOCK that can detect specific nucleic acid sequences. SHERLOCK can detect the presence of infectious agents such as Zika virus in an easily deployable lateral-flow format, similar to a pregnancy test.

We are excited to see the directions that the new McGovern Fellows take as they now arrive at the institute, and will keep you posted on scientific findings as they emerge from their labs.

 

Plugging into the brain

Driven by curiosity and therapeutic goals, Anikeeva leaves no scientific stone unturned in her quest to invent new neurotechnology.

The audience sits utterly riveted as Polina Anikeeva highlights the gaps she sees in the landscape of neural tools. With a background in optoelectronics, she has a decidedly unique take on the brain.

“In neuroscience,” says Anikeeva, “we are currently applying silicon-based neural probes with the elastic properties of a knife to a delicate material with the consistency of chocolate pudding—the brain.”

A key problem, as Anikeeva summarizes it, is that these sharp probes damage tissue, making such interfaces unreliable and thwarting long-term brain studies of processes including development and aging. The state of the art is even grimmer in the clinic. An avid climber, Anikeeva recalls a friend sustaining a spinal cord injury. “She made a remarkable recovery,” explains Anikeeva, “but seeing the technology being used to help her was shocking. Not even the simplest electronic tools were used; it was basically lots of screws and physical therapy.” This crude approach, compared to the elegant optoelectronic tools familiar to Anikeeva, sparked a drive to bring advanced materials technology to biological systems.

Outside the box

As the group breaks up after the seminar, the chatter turns to boxes: more precisely, to thinking outside of them. An associate professor of materials science and engineering at MIT, Anikeeva recently gained an appointment at the McGovern Institute through her interest in neuroscience. She sees her journey to neurobiology as serendipitous, having earned her doctorate at MIT designing light-emitting devices.

“I wanted to work on tools that don’t exist, and neuroscience seemed like an obvious choice. Neurons communicate in part through membrane voltage changes and as an electronics designer, I felt that I should be able to use voltage.”

Comfort at the intersection of sciences requires, according to Anikeeva, clarity and focus, qualities also important in her chief athletic pursuits, running and climbing. Through long-distance running, Anikeeva finds solitary time (“assuming that no one can chase me”) and the clarity to consider complicated technical questions. Climbing hones something different: absolute focus in the face of the often-tangled information that comes with working at scientific intersections.

“When climbing, you can only think about one thing, your next move. Only the most important thoughts float up.”

This became particularly important when, in Yosemite National Park, she made the decision to go up, instead of down, during an impending thunderstorm. Getting out depended on clear focus, despite imminent hypothermia and being exposed “on one of the tallest features in the area, holding large quantities of metal.” Polina and her climbing partner made it out, but her summary of events echoes her research philosophy: “What you learn and develop is a strong mindset where you don’t do the comfortable thing, the easy thing. Instead you always find, and execute, the most logical strategy.”

In this vein, Anikeeva’s research pursues two very novel, but exceptionally logical, paths to brain research and therapeutics: fiber development and magnetic nanomaterials.

Drawing new fibers

Walking into Anikeeva’s lab, the eye is immediately drawn to a robust metal frame containing, upon closer scrutiny, recognizable parts: a large drill bit, a motor, a heating element. This custom-built machine applies principles from telecommunications to draw multifunctional fibers using more “brain-friendly” materials.

“We start out with a macroscopic model, a preform, of the device that we ultimately want,” explains Anikeeva.

This “preform” is a transparent block of polymers, composites, and soft low-melting-temperature metals with optical and electrical properties needed in the final fiber. “So, this could include electrodes for recording, optical channels for optogenetics, microfluidics for drug delivery, and one day even components that allow chemical or mechanical sensing.” After sitting in a vacuum to remove gases and impurities, the two-inch by one-inch preform arrives at the fiber-drawing tower.

“Then we heat it and pull it, and the macroscopic model becomes a kilometer-long fiber with a lateral dimension of microns, even nanometers,” explains Anikeeva. “Take one of your hairs, and imagine that inside there are electrodes for recording, there are microfluidic channels to infuse drugs, optical channels for stimulation. All of this is combined in a single miniature form factor, and it can be quite flexible and even stretchable.”
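The article does not give the draw parameters, but the scaling can be sketched under the standard assumption that the material's volume is conserved during the draw: the cross-sectional area shrinks by the same factor that the length grows, so every lateral feature shrinks as the square root of the elongation.

```latex
% Assuming conserved volume during the draw (an assumption, not a figure from the article):
A_{\mathrm{preform}} \, L_{\mathrm{preform}} = A_{\mathrm{fiber}} \, L_{\mathrm{fiber}}
\quad\Longrightarrow\quad
\frac{d_{\mathrm{fiber}}}{d_{\mathrm{preform}}} = \sqrt{\frac{L_{\mathrm{preform}}}{L_{\mathrm{fiber}}}}
```

For illustration, elongating a preform 10,000-fold would shrink every cross-sectional feature by a factor of 100, turning millimeter-scale channels in the preform into features tens of microns across in the fiber.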

Construction crew

Anikeeva’s lab comprises an eclectic mix of 21 researchers from over 13 different countries, with expertise spanning materials science, chemistry, electrical and mechanical engineering, and neuroscience. In 2011, Andres Canales, a materials scientist from Mexico, was the second person to join Anikeeva’s lab.

“There was only an idea, a diagram,” explains Canales. “I didn’t want to work on biology when I arrived at MIT, but talking to Polina, seeing the pictures, thinking about what it would entail, I became very excited by the methods and the potential applications she was thinking of.”

Despite the lack of preliminary models, Anikeeva’s ideas were compelling. Elegant as the fibers are, the road to them involved painstaking, iterative refinement. From a materials perspective, drawing a fiber containing a continuous conductive element was challenging, as was validation of its properties. But the resulting fiber can deliver optogenetic vectors, monitor expression, and then stimulate neuronal activity in a single surgery, removing the spatial and temporal guesswork usually involved in such an experiment.

Seongjun Park, an electrical engineering graduate student in the lab, explains one biological challenge. “For long term recording in the spinal cord, there was even an additional challenge as the fiber needed to be stretchable to respond to the spine’s movement. For this we developed a drawing process compatible with an elastomer.”

The resulting fibers can be deployed chronically without the scar tissue accumulation that usually prevents long-term optical manipulation and drug delivery, making them good candidates for the treatment of brain disorders. The lab’s current papers find that these implanted fibers are useful for three months, and material innovations make them confident that longer time periods are possible.

Magnetic moments

Another wing of Anikeeva’s research aims to develop entirely noninvasive modalities, using magnetic nanoparticles to stimulate the brain and deliver therapeutics.

“Magnetic fields are probably the best modality for getting any kind of stimulus to deep tissues,” explains Anikeeva, “because biological systems, except for very specialized systems, do not perceive magnetic fields. They go through us unattenuated, and they don’t couple to our physiology.”

In other words, magnetic fields can safely reach deep tissues, including the brain. Upon reaching their tissue targets these fields can be used to stimulate magnetic nanoparticles, which might one day, for example, be used to deliver dopamine to the brains of Parkinson’s disease patients. The alternating magnetic fields being used in these experiments are tiny, 100-1000 times smaller than fields clinically approved for MRI-based brain imaging.

Tiny fields, but they can be used to powerful effect. By manipulating magnetic moments in these nanoparticles, the magnetic field can cause heat dissipation by the particle that can stimulate thermal receptors in the nervous system. These receptors naturally detect heat, chili peppers and vanilla, but Anikeeva’s magnetic nanoparticles act as tiny heaters that activate these receptors, and, in turn, local neurons. This principle has already been used to activate the brain’s reward center in freely moving mice.
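The article does not spell out the underlying physics, but one standard way to express hysteretic heating of magnetic nanoparticles in an alternating field (an outside reference point, not a formula from the article) is that the power dissipated per unit volume equals the drive frequency times the area of the particle's magnetic hysteresis loop:

```latex
% Standard expression for hysteretic heating of magnetic nanoparticles
% in an alternating field (not taken from the article):
P = \mu_0 \, f \oint H \, \mathrm{d}M
```

Raising the drive frequency, or choosing particles with a larger hysteresis loop, therefore dissipates more heat for the same field amplitude.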

Siyuan Rao, a postdoc who works on the magnetic nanoparticles in collaboration with McGovern Investigator Guoping Feng, is unhesitating when asked what most inspires her.

“As a materials scientist, it is really rewarding to see my materials at work. We can remotely modulate mouse behavior, even turn hopeless behavior into motivation.”

Pushing the boundaries

Such collaborations are valued by Anikeeva. Early on she worked with McGovern Investigator Emilio Bizzi to use the above fiber technology in the spinal cord. “It is important to us to not just make these devices,” explains Anikeeva, “but to use them and show ourselves, and our colleagues, the types of experiments that they can enable.”

Far from an assembly line, the researchers in Anikeeva’s lab follow projects from ideation to deployment. “The student that designs a fiber performs their own behavioral experiments and data analysis,” says Anikeeva. “Biology is unforgiving. You can trivially design the most brilliant electrophysiological recording probe, but unless you are directly working in the system, it is easy to miss important design considerations.”

Inspired by this, Anikeeva’s students even started a project with Gloria Choi’s group on their own initiative. This collaborative, can-do ethos spreads beyond the walls of the lab, inspiring people around MIT.

“We often work with a teaching instructor, David Bono, who is an expert on electronics and magnetic instruments,” explains Alex Senko, a senior graduate student in the lab. “In his spare time, he helps those of us who work on electrical engineering flavored projects to hunt down components needed to build our devices.”

These components extend to whatever the work requires. When a low-frequency source was needed, the Anikeeva lab pressed a guitar amplifier into service.

Queried about difficulties that she faces having chosen to navigate such a broad swath of fields, Anikeeva is focused, as ever, on the unknown, the boundaries of knowledge.

“Honestly, I really, really enjoy it. It keeps me engaged and not bored. Even when thinking about complicated physics and chemistry, I always have eyes on the prize, that this will allow us to address really interesting neuroscience questions.”

With such thinking, and by relentlessly seeking the tools needed to accomplish scientific goals, Anikeeva and her lab continue to avoid the comfortable route, instead using logical routes toward new technologies.

What is CRISPR?

CRISPR (which stands for Clustered Regularly Interspaced Short Palindromic Repeats) is not actually a single entity, but shorthand for a set of bacterial systems that are found in a hallmark arrangement in the bacterial genome.

When CRISPR is mentioned, most people are likely thinking of CRISPR-Cas9, now widely known for its capacity to be re-deployed to target sequences of interest in eukaryotic cells, including human cells. Cas9 can be programmed to target specific stretches of DNA, but other enzymes have since been discovered that are able to edit DNA, including Cpf1 and Cas12b. Other CRISPR enzymes, Cas13 family members, can be programmed to target RNA and even edit and change its sequence.

The common theme that makes CRISPR enzymes so powerful is that scientists can supply them with a guide RNA for a chosen sequence. Since the guide RNA can pair very specifically with DNA (or, for Cas13 family members, with RNA), researchers can essentially provide a given CRISPR enzyme with a way of homing in on any sequence of interest. Once a CRISPR protein finds its target, it can be used to edit that sequence, perhaps removing a disease-associated mutation.
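As a loose illustration of that “homing in” idea, programming a CRISPR enzyme amounts to supplying a short guide sequence and letting the enzyme find the stretch of the genome that base-pairs with it. The toy sketch below is a plain string search over made-up sequences, not a model of the enzyme's actual scanning biochemistry; it ignores PAM requirements, mismatch tolerance, and strandedness details.

```python
# Toy illustration of guide-directed targeting: find where the sequence
# complementary to a guide RNA occurs in a longer DNA strand. Both sequences
# below are made up.

COMPLEMENT = {"A": "T", "C": "G", "G": "C", "U": "A"}

def guide_to_dna_target(guide_rna: str) -> str:
    """Reverse complement of the guide RNA, written in DNA bases."""
    return "".join(COMPLEMENT[base] for base in reversed(guide_rna))

def find_target_sites(genome: str, guide_rna: str) -> list[int]:
    """Return every position in `genome` that matches the guide's target."""
    target = guide_to_dna_target(guide_rna)
    return [i for i in range(len(genome) - len(target) + 1)
            if genome[i:i + len(target)] == target]

if __name__ == "__main__":
    genome = "TTACGGATCCGATTACAGGATCCGTT"   # made-up DNA sequence
    guide = "GGAUCCGU"                      # made-up guide RNA
    print(find_target_sites(guide_rna=guide, genome=genome))  # -> [2]
```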

In addition, CRISPR proteins have been engineered to modulate gene expression and even signal the presence of particular sequences, as in the case of the Cas13-based diagnostic, SHERLOCK.


Team invents method to shrink objects to the nanoscale

MIT researchers have invented a way to fabricate nanoscale 3-D objects of nearly any shape. They can also pattern the objects with a variety of useful materials, including metals, quantum dots, and DNA.

“It’s a way of putting nearly any kind of material into a 3-D pattern with nanoscale precision,” says Edward Boyden, the Y. Eva Tan Professor in Neurotechnology and an associate professor of biological engineering and of brain and cognitive sciences at MIT.

Using the new technique, the researchers can create any shape and structure they want by patterning a polymer scaffold with a laser. After attaching other useful materials to the scaffold, they shrink it, generating structures one thousandth the volume of the original.

These tiny structures could have applications in many fields, from optics to medicine to robotics, the researchers say. The technique uses equipment that many biology and materials science labs already have, making it widely accessible for researchers who want to try it.

Boyden, who is also a member of MIT’s Media Lab, McGovern Institute for Brain Research, and Koch Institute for Integrative Cancer Research, is one of the senior authors of the paper, which appears in the Dec. 13 issue of Science. The other senior author is Adam Marblestone, a Media Lab research affiliate, and the paper’s lead authors are graduate students Daniel Oran and Samuel Rodriques.

Implosion fabrication

Existing techniques for creating nanostructures are limited in what they can accomplish. Etching patterns onto a surface with light can produce 2-D nanostructures but doesn’t work for 3-D structures. It is possible to make 3-D nanostructures by gradually adding layers on top of each other, but this process is slow and challenging. And, while methods exist that can directly 3-D print nanoscale objects, they are restricted to specialized materials like polymers and plastics, which lack the functional properties necessary for many applications. Furthermore, they can only generate self-supporting structures. (The technique can yield a solid pyramid, for example, but not a linked chain or a hollow sphere.)

To overcome these limitations, Boyden and his students decided to adapt a technique that his lab developed a few years ago for high-resolution imaging of brain tissue. This technique, known as expansion microscopy, involves embedding tissue into a hydrogel and then expanding it, allowing for high resolution imaging with a regular microscope. Hundreds of research groups in biology and medicine are now using expansion microscopy, since it enables 3-D visualization of cells and tissues with ordinary hardware.

By reversing this process, the researchers found that they could create large-scale objects embedded in expanded hydrogels and then shrink them to the nanoscale, an approach that they call “implosion fabrication.”

As they did for expansion microscopy, the researchers used a very absorbent material made of polyacrylate, commonly found in diapers, as the scaffold for their nanofabrication process. The scaffold is bathed in a solution that contains molecules of fluorescein, which attach to the scaffold when they are activated by laser light.

Using two-photon microscopy, which allows for precise targeting of points deep within a structure, the researchers attach fluorescein molecules to specific locations within the gel. The fluorescein molecules act as anchors that can bind to other types of molecules that the researchers add.

“You attach the anchors where you want with light, and later you can attach whatever you want to the anchors,” Boyden says. “It could be a quantum dot, it could be a piece of DNA, it could be a gold nanoparticle.”

“It’s a bit like film photography — a latent image is formed by exposing a sensitive material in a gel to light. Then, you can develop that latent image into a real image by attaching another material, silver, afterwards. In this way implosion fabrication can create all sorts of structures, including gradients, unconnected structures, and multimaterial patterns,” Oran says.

Once the desired molecules are attached in the right locations, the researchers shrink the entire structure by adding an acid. The acid blocks the negative charges in the polyacrylate gel so that they no longer repel each other, causing the gel to contract. Using this technique, the researchers can shrink the objects 10-fold in each dimension (for an overall 1,000-fold reduction in volume). This ability to shrink not only allows for increased resolution, but also makes it possible to assemble materials in a low-density scaffold. This enables easy access for modification, and later the material becomes a dense solid when it is shrunk.
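The volume arithmetic follows directly from the linear shrinkage: shrinking every dimension by a factor s shrinks the volume by s cubed.

```latex
% 10-fold linear shrinkage in each of three dimensions:
\frac{V_{\mathrm{final}}}{V_{\mathrm{initial}}} = \frac{1}{s^{3}} = \frac{1}{10^{3}} = \frac{1}{1000}
```

The same factor applies to feature sizes, which is how patterning at a few hundred nanometers before shrinking can yield the roughly 50-nanometer resolution quoted below.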

“People have been trying to invent better equipment to make smaller nanomaterials for years, but we realized that if you just use existing systems and embed your materials in this gel, you can shrink them down to the nanoscale, without distorting the patterns,” Rodriques says.

Currently, the researchers can create objects that are around 1 cubic millimeter, patterned with a resolution of 50 nanometers. There is a tradeoff between size and resolution: If the researchers want to make larger objects, about 1 cubic centimeter, they can achieve a resolution of about 500 nanometers. However, that resolution could be improved with further refinement of the process, the researchers say.

Better optics

The MIT team is now exploring potential applications for this technology, and they anticipate that some of the earliest applications might be in optics — for example, making specialized lenses that could be used to study the fundamental properties of light. This technique might also allow for the fabrication of smaller, better lenses for applications such as cell phone cameras, microscopes, or endoscopes, the researchers say. Farther in the future, the researchers say that this approach could be used to build nanoscale electronics or robots.

“There are all kinds of things you can do with this,” Boyden says. “Democratizing nanofabrication could open up frontiers we can’t yet imagine.”

Many research labs are already stocked with the equipment required for this kind of fabrication. “With a laser you can already find in many biology labs, you can scan a pattern, then deposit metals, semiconductors, or DNA, and then shrink it down,” Boyden says.

The research was funded by the Kavli Dream Team Program, the HHMI-Simons Faculty Scholars Program, the Open Philanthropy Project, John Doerr, the Office of Naval Research, the National Institutes of Health, the New York Stem Cell Foundation-Robertson Award, the U.S. Army Research Office, K. Lisa Yang and Y. Eva Tan, and the MIT Media Lab.

SHERLOCK: A CRISPR tool to detect disease

This animation depicts how Cas13 — a CRISPR-associated protein — may be adapted to detect human disease. This new diagnostic tool, called SHERLOCK, targets RNA (rather than DNA), and has the potential to transform research and global public health.

 

How the brain switches between different sets of rules

Cognitive flexibility — the brain’s ability to switch between different rules or action plans depending on the context — is key to many of our everyday activities. For example, imagine you’re driving on a highway at 65 miles per hour. When you exit onto a local street, you realize that the situation has changed and you need to slow down.

When we move between different contexts like this, our brain holds multiple sets of rules in mind so that it can switch to the appropriate one when necessary. These neural representations of task rules are maintained in the prefrontal cortex, the part of the brain responsible for planning action.

A new study from MIT has found that a region of the thalamus is key to the process of switching between the rules required for different contexts. This region, called the mediodorsal thalamus, suppresses representations that are not currently needed. That suppression also protects the representations as a short-term memory that can be reactivated when needed.

“It seems like a way to toggle between irrelevant and relevant contexts, and one advantage is that it protects the currently irrelevant representations from being overwritten,” says Michael Halassa, an assistant professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research.

Halassa is the senior author of the paper, which appears in the Nov. 19 issue of Nature Neuroscience. The paper’s first author is former MIT graduate student Rajeev Rikhye, who is now a postdoc in Halassa’s lab. Aditya Gilra, a postdoc at the University of Bonn, is also an author.

Changing the rules

Previous studies have found that the prefrontal cortex is essential for cognitive flexibility, and that a part of the thalamus called the mediodorsal thalamus also contributes to this ability. In a 2017 study published in Nature, Halassa and his colleagues showed that the mediodorsal thalamus helps the prefrontal cortex to keep a thought in mind by temporarily strengthening the neuronal connections in the prefrontal cortex that encode that particular thought.

In the new study, Halassa wanted to further investigate the relationship between the mediodorsal thalamus and the prefrontal cortex. To do that, he created a task in which mice learn to switch back and forth between two different contexts — one in which they must follow visual instructions and one in which they must follow auditory instructions.

In each trial, the mice are given both a visual target (flash of light to the right or left) and an auditory target (a tone that sweeps from high to low pitch, or vice versa). These targets offer conflicting instructions. One tells the mouse to go to the right to get a reward; the other tells it to go left. Before each trial begins, the mice are given a cue that tells them whether to follow the visual or auditory target.

“The only way for the animal to solve the task is to keep the cue in mind over the entire delay, until the targets are given,” Halassa says.

The researchers found that thalamic input is necessary for the mice to successfully switch from one context to another. When they suppressed the mediodorsal thalamus during the cuing period of a series of trials in which the context did not change, there was no effect on performance. However, if they suppressed the mediodorsal thalamus during the switch to a different context, it took the mice much longer to switch.

By recording from neurons of the prefrontal cortex, the researchers found that when the mediodorsal thalamus was suppressed, the representation of the old context in the prefrontal cortex could not be turned off, making it much harder to switch to the new context.

In addition to helping the brain switch between contexts, this process also appears to help maintain the neural representation of the context that is not currently being used, so that it doesn’t get overwritten, Halassa says. This allows it to be activated again when needed. The mice could maintain these representations over hundreds of trials, but the next day, they had to relearn the rules associated with each context.

Sabine Kastner, a professor of psychology at the Princeton Neuroscience Institute, described the study as a major leap forward in the field of cognitive neuroscience.

“This is a tour-de-force from beginning to end, starting with a sophisticated behavioral design, state-of-the-art methods including causal manipulations, exciting empirical results that point to cell-type specific differences and interactions in functionality between thalamus and cortex, and a computational approach that links the neuroscience results to the field of artificial intelligence,” says Kastner, who was not involved in the research.

Multitasking AI

The findings could help guide the development of better artificial intelligence algorithms, Halassa says. The human brain is very good at learning many different kinds of tasks — singing, walking, talking, etc. However, neural networks (a type of artificial intelligence based on interconnected nodes similar to neurons) usually are good at learning only one thing. These networks are subject to a phenomenon called “catastrophic forgetting” — when they try to learn a new task, previous tasks become overwritten.
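As a toy numerical illustration of catastrophic forgetting (unrelated to the study itself), the sketch below trains a tiny logistic-regression classifier on one task and then, with no rehearsal, on a second task that demands the opposite responses to the same inputs; performance on the first task collapses because the weights that encoded it are overwritten.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_blobs():
    """Two clusters of 2-D points, 200 per cluster."""
    lo = rng.normal(-2.0, 1.0, size=(200, 2))
    hi = rng.normal(+2.0, 1.0, size=(200, 2))
    return np.vstack([lo, hi])

def train(w, b, X, y, epochs=200, lr=0.1):
    """Plain logistic-regression gradient descent, starting from (w, b)."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w = w - lr * X.T @ (p - y) / len(y)
        b = b - lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0) == y)

X = make_blobs()
y_task_a = np.array([0] * 200 + [1] * 200)   # task A: call the "high" cluster 1
y_task_b = 1 - y_task_a                      # task B: the opposite rule

w, b = np.zeros(2), 0.0
w, b = train(w, b, X, y_task_a)
print("after task A: acc on A =", accuracy(w, b, X, y_task_a))

w, b = train(w, b, X, y_task_b)              # keep training, on task B only
print("after task B: acc on A =", accuracy(w, b, X, y_task_a),
      "| acc on B =", accuracy(w, b, X, y_task_b))
```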

Halassa and his colleagues now hope to apply their findings to improve neural networks’ ability to store previously learned tasks while learning to perform new ones.

The research was funded by the National Institutes of Health, the Brain and Behavior Foundation, the Klingenstein Foundation, the Pew Foundation, the Simons Foundation, the Human Frontiers Science Program, and the German Ministry of Education.

Brain activity pattern may be early sign of schizophrenia

Schizophrenia, a brain disorder that produces hallucinations, delusions, and cognitive impairments, usually strikes during adolescence or young adulthood. While some signs can suggest that a person is at high risk for developing the disorder, there is no way to definitively diagnose it until the first psychotic episode occurs.

MIT neuroscientists working with researchers at Beth Israel Deaconess Medical Center, Brigham and Women’s Hospital, and the Shanghai Mental Health Center have now identified a pattern of brain activity correlated with development of schizophrenia, which they say could be used as a marker to diagnose the disease earlier.

“You can consider this pattern to be a risk factor. If we use these types of brain measurements, then maybe we can predict a little bit better who will end up developing psychosis, and that may also help tailor interventions,” says Guusje Collin, a visiting scientist at MIT’s McGovern Institute for Brain Research and the lead author of the paper.

The study, which appeared in the journal Molecular Psychiatry on Nov. 8, was performed at the Shanghai Mental Health Center. Susan Whitfield-Gabrieli, a visiting scientist at the McGovern Institute and a professor of psychology at Northeastern University, is one of the principal investigators for the study, along with Jijun Wang of the Shanghai Mental Health Center, William Stone of Beth Israel Deaconess Medical Center, the late Larry Seidman of Beth Israel Deaconess Medical Center, and Martha Shenton of Brigham and Women’s Hospital.

Abnormal connections

Before they experience a psychotic episode, characterized by sudden changes in behavior and a loss of touch with reality, patients can experience milder symptoms such as disordered thinking. This kind of thinking can lead to behaviors such as jumping from topic to topic at random, or giving answers unrelated to the original question. Previous studies have shown that about 25 percent of people who experience these early symptoms go on to develop schizophrenia.

The research team performed the study at the Shanghai Mental Health Center because the huge volume of patients who visit the hospital annually gave them a large enough sample of people at high risk of developing schizophrenia.

The researchers followed 158 people between the ages of 13 and 34 who were identified as high-risk because they had experienced early symptoms. The team also included 93 control subjects, who did not have any risk factors. At the beginning of the study, the researchers used functional magnetic resonance imaging (fMRI) to measure a type of brain activity involving “resting state networks.” Resting state networks consist of brain regions that preferentially connect with and communicate with each other when the brain is not performing any particular cognitive task.
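The study's analysis pipeline is not spelled out here, but resting-state functional connectivity is commonly summarized as the correlation between regional fMRI time series. The sketch below illustrates that idea on made-up data; the region labels merely echo those discussed later in the article and are not the study's actual regions of interest.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up "fMRI" time series (300 time points) for a handful of regions.
# A shared slow signal makes the first two regions co-fluctuate.
n_timepoints = 300
regions = ["STG", "limbic", "motor", "visual"]
shared = rng.normal(size=n_timepoints)
timeseries = np.column_stack([
    shared + 0.5 * rng.normal(size=n_timepoints),   # STG
    shared + 0.5 * rng.normal(size=n_timepoints),   # limbic (co-fluctuates with STG)
    rng.normal(size=n_timepoints),                  # motor (independent)
    rng.normal(size=n_timepoints),                  # visual (independent)
])

# Functional connectivity: Pearson correlation between every pair of regions.
connectivity = np.corrcoef(timeseries, rowvar=False)

for i, a in enumerate(regions):
    for j, b in enumerate(regions):
        if i < j:
            print(f"{a:7s}- {b:7s} r = {connectivity[i, j]:+.2f}")
```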

“We were interested in looking at the intrinsic functional architecture of the brain to see if we could detect early aberrant brain connectivity or networks in individuals who are in the clinically high-risk phase of the disorder,” Whitfield-Gabrieli says.

One year after the initial scans, 23 of the high-risk patients had experienced a psychotic episode and were diagnosed with schizophrenia. In those patients’ scans, taken before their diagnosis, the researchers found a distinctive pattern of activity that was different from the healthy control subjects and the at-risk subjects who had not developed psychosis.

For example, in most people, a part of the brain known as the superior temporal gyrus, which is involved in auditory processing, is highly connected to brain regions involved in sensory perception and motor control. However, in patients who developed psychosis, the superior temporal gyrus became more connected to limbic regions, which are involved in processing emotions. This could help explain why patients with schizophrenia usually experience auditory hallucinations, the researchers say.

Meanwhile, the high-risk subjects who did not develop psychosis showed network connectivity nearly identical to that of the healthy subjects.

Early intervention

This type of distinctive brain activity could be useful as an early indicator of schizophrenia, especially since it is possible that it could be seen in even younger patients. The researchers are now performing similar studies with younger at-risk populations, including children with a family history of schizophrenia.

“That really gets at the heart of how we can translate this clinically, because we can get in earlier and earlier to identify aberrant networks in the hopes that we can do earlier interventions, and possibly even prevent psychiatric disorders,” Whitfield-Gabrieli says.

She and her colleagues are now testing early interventions that could help to combat the symptoms of schizophrenia, including cognitive behavioral therapy and neural feedback. The neural feedback approach involves training patients to use mindfulness meditation to reduce activity in the superior temporal gyrus, which tends to increase before and during auditory hallucinations.

The researchers also plan to continue following the patients in the current study, and they are now analyzing some additional data on the white matter connections in the brains of these patients, to see if those connections might yield additional differences that could also serve as early indicators of disease.

The research was funded by the National Institutes of Health, the Ministry of Science and Technology of China, and the Poitras Center for Psychiatric Disorders Research at MIT. Collin was supported by a Marie Curie Global Fellowship grant from the European Commission.

Is it worth the risk?

During the Klondike Gold Rush, thousands of prospectors climbed Alaska’s dangerous Chilkoot Pass in search of riches. McGovern researchers are exploring how a once-overlooked part of the brain might be at the root of cost-benefit decisions like these, and how the brain balances risk and reward to make them.

Is it worth speeding up on the highway to save a few minutes’ time? How about accepting a job that pays more, but requires longer hours in the office?

Scientists call these types of real-life situations cost-benefit conflicts. Choosing well is an essential survival ability—consider the animal that must decide when to expose itself to predation to gather more food.

Now, McGovern researchers are discovering that this fundamental capacity to make decisions may originate in the basal ganglia—a brain region once considered unimportant to the human experience—and that circuits associated with this structure may play a critical role in determining our state of mind.

Anatomy of decision-making

A few years back, McGovern investigator Ann Graybiel noticed that in the brain imaging literature, a specific part of the cortex, called the pregenual anterior cingulate cortex or pACC, was implicated in certain psychiatric disorders as well as in tasks involving cost-benefit decisions. Thanks to her now classic neuroanatomical work defining the complex anatomy and function of the basal ganglia, Graybiel knew that the pACC projected back into the basal ganglia—including its largest cluster of neurons, the striatum.

The striatum sits beneath the cortex, with a mouse-like main body and curving tail. It seems to serve as a critical way station, communicating with both the brain’s sensory and motor areas above, and the limbic system (linked to emotion and memory) below. Running through the striatum are striosomes, column-like neurochemical compartments. They wire down to a small but important part of the brain called the substantia nigra, which houses the vast majority of the brain’s dopamine neurons. Dopamine is a key neurochemical heavily involved, much like the basal ganglia as a whole, in reward, learning, and movement. The pACC region related to mood control targets these striosomes, setting up a communication line from the neocortex to the dopamine neurons.

Graybiel discovered these striosomes early in her career, and understood them to have distinct wiring from other compartments in the striatum, but picking out these small, hard-to-find striosomes posed a technological challenge—so it was exciting to have this intriguing link to the pACC and mood disorders.

Working with Ken-ichi Amemori, then a research scientist in her lab, she adapted a common human cost-benefit conflict test for macaque monkeys. The monkeys could elect to receive a food treat, but the treat would always be accompanied by an annoying puff of air to the eyes. Before they decided, a visual cue told them exactly how much treat they could get, and exactly how strong the air puff would be, so they could choose if the treat was worth it.

Normal monkeys varied their choices in a fairly rational manner, rejecting the treat whenever it seemed like the air puff was too strong, or the treat too small to be worth it—and this corresponded with activity in the pACC neurons. Interestingly, they found that some pACC neurons respond more when animals approach the combined offers, while other pACC neurons fire more when the animals avoid the offers. “It is as though there are two opposing armies. And the one that wins, controls the state of the animal.” Moreover, when Graybiel’s team electrically stimulated these pACC neurons, the animals began to avoid the offers, even offers that they normally would approach. “It is as though when the stimulation is on, they think the future is worse than it really is,” Graybiel says.

Intriguingly, this effect only worked in situations where the animal had to weigh the value of a cost against a benefit. It had no effect on a decision between two negatives or two positives, like two different sizes of treats. The anxiety drug diazepam also reversed the stimulatory effect, but again, only on cost-benefit choices. “This particular kind of mood-influenced cost-benefit decision-making occurs not only under conflict conditions but in our regular day-to-day lives. For example: I know that if I eat too much chocolate, I might get fat, but I love it, I want it.”

Glass half empty

Over the next few years, Graybiel, with another research scientist in her lab, Alexander Friedman, unraveled the circuit behind the macaques’ choices. They adapted the test for rats and mice, so that they could more easily combine the cellular and molecular technologies needed to study striosomes, such as optogenetics and mouse engineering.

They found that the cortex (specifically, the prelimbic region of the prefrontal cortex in rodents) wires onto both striosomes and fast-acting interneurons that also target the striosomes. In a healthy circuit, these interneurons keep the striosomes in check by firing off fast inhibitory signals, hitting the brakes before the striosome can get started. But if the researchers broke that cortical-striatal connection with optogenetics or chronic stress, the animals became reckless, going for the high-risk, high-reward arm of the maze like a gambler throwing caution to the wind. If they amplified this inhibitory interneuron activity, they saw the opposite effect. With these techniques, they could block the effects of prior chronic stress.

This summer, Graybiel and Amemori published another paper furthering the story and returning to macaques. It was still too difficult to hit the striosomes directly, and the researchers could only stimulate the striatum more generally. Even so, they replicated the effects seen in their past studies.

Many electrodes had no effect, and a small number made the monkeys choose the reward more often. Nearly a quarter, though, made the monkeys more avoidant—and this effect correlated with a change in the macaques’ brainwaves in a manner reminiscent of patients with depression.

But the surprise came when the avoidance-producing stimulation was turned off: the effects lasted unexpectedly long, returning to normal only on the third day.

Graybiel was stunned. “This is very important, because changes in the brain can get set off and have a life of their own,” she says. “This is true for some individuals who have had a terrible experience, and then live with the aftermath, even to the point of suffering from post-traumatic stress disorder.”

She suspects that this persistent state may actually be a form of affect, or mood. “When we change this decision boundary, we’re changing the mood, such that the animal overestimates cost, relative to benefit,” she explains. “This might be like a proxy state for pessimistic decision-making experienced during anxiety and depression, but may also occur, in a milder form, in you and me.”

Graybiel theorizes that this may tie back into the dopamine neurons that the striosomes project to: if this avoidance behavior is akin to avoidance observed in rodents, then they are stimulating a circuit that ultimately projects to dopamine neurons of the substantia nigra. There, she believes, they could act to suppress these dopamine neurons, which in turn project to the rest of the brain, creating some sort of long-term change in their neural activity. Or, put more simply, stimulation of these circuits creates a depressive funk.

Bottom up

Three floors below the Graybiel lab, postdoc Will Menegas is in the early stages of his own work untangling the role of dopamine and the striatum in decision-making. He joined Guoping Feng’s lab this summer after exploring the understudied “tail of the striatum” at Harvard University.

While dopamine pathways influence many parts of the brain, examination of connections to the striatum has largely focused on the frontmost part of the striatum, which is associated with valuations.

But as Menegas showed while at Harvard, dopamine neurons that project to the rear of the striatum are different. Those neurons get their input from parts of the brain associated with general arousal and sensation—and instead of responding to rewards, they respond to novelty and intense stimuli, like air puffs and loud noises.

In a new study published in Nature Neuroscience, Menegas used a neurotoxin to disrupt the dopamine projection from the substantia nigra to the posterior striatum to see how this circuit influences behavior. Normal mice approach novel items cautiously and back away after sniffing at them, but the mice in Menegas’ study failed to back away. They stopped avoiding a port that gave an air puff to the face, and they didn’t behave like normal mice when Menegas dropped a strange or new object—say, a Lego—into their cage. Disrupting this nigral-posterior striatal projection seemed to turn off their avoidance habit.

“These neurons reinforce avoidance the same way that canonical dopamine neurons reinforce approach,” Menegas explains. It’s a new role for dopamine, suggesting that there may be two different and distinct systems of reinforcement, led by the same neuromodulator in different parts of the striatum.

This research, and Graybiel’s discoveries on cost-benefit decision circuits, share clear parallels, though the precise links between the two phenomena are yet to be fully determined. Menegas plans to extend this line of research into social behavior and related disorders like autism in marmoset monkeys.

“Will wants to learn the methods that we use in our lab to work on marmosets,” Graybiel says. “I think that working together, this could become a wonderful story, because it would involve social interactions.”

“This is a very new angle, and it could really change our views of how the reward system works,” Feng says. “And we have very little understanding of social circuits so far, especially in higher organisms, so I think this would be very exciting. Whatever we learn, it’s going to be new.”

Human choices

Based on their preexisting work, Graybiel’s and Menegas’ projects are well-developed—but they are far from the only McGovern-based explorations into ways this brain region taps into our behaviors. Maiya Geddes, a visiting scientist in John Gabrieli’s lab, has recently published a paper exploring the little-known ways that aging affects the dopamine-based nigral-striatum-hippocampus learning and memory systems.

In Rebecca Saxe’s lab, postdoc Livia Tomova just kicked off a new pilot project using brain imaging to uncover dopamine-striatal circuitry behind social craving in humans and the urge to rejoin peers. “Could there be a craving response similar to hunger?” Tomova wonders. “No one has looked yet at the neural mechanisms of this.”

Graybiel also hopes to translate her findings into humans, beginning with collaborations at the Pizzagalli lab at McLean Hospital in Belmont. They are using fMRI to study whether patients with anxiety and depression show some of the same dysfunctions in the cortico-striatal circuitry that she discovered in her macaques.

If she’s right about tapping into mood states and affect, it would be an expanded role for the striatum—and one with significant potential therapeutic benefits. “Affect state” colors many psychological functions and disorders, from memory and perception, to depression, chronic stress, obsessive-compulsive disorder, and PTSD.

For a region of the brain once dismissed as inconsequential, McGovern researchers have shown the basal ganglia to influence not only our choices but our state of mind—suggesting that this “primitive” brain region may actually be at the heart of the human experience.

 

 

Machines that learn language more like kids do

Children learn language by observing their environment, listening to the people around them, and connecting the dots between what they see and hear. Among other things, this helps children establish their language’s word order, such as where subjects and verbs fall in a sentence.

In computing, learning language is the task of syntactic and semantic parsers. These systems are trained on sentences annotated by humans that describe the structure and meaning behind words. Parsers are becoming increasingly important for web searches, natural-language database querying, and voice-recognition systems such as Alexa and Siri. Soon, they may also be used for home robotics.

But gathering the annotation data can be time-consuming and difficult for less common languages. Additionally, humans don’t always agree on the annotations, and the annotations themselves may not accurately reflect how people naturally speak.

In a paper being presented at this week’s Empirical Methods in Natural Language Processing conference, MIT researchers describe a parser that learns through observation to more closely mimic a child’s language-acquisition process, which could greatly extend the parser’s capabilities. To learn the structure of language, the parser observes captioned videos, with no other information, and associates the words with recorded objects and actions. Given a new sentence, the parser can then use what it’s learned about the structure of the language to accurately predict a sentence’s meaning, without the video.

This “weakly supervised” approach — meaning it requires limited training data — mimics how children can observe the world around them and learn language, without anyone providing direct context. The approach could expand the types of data and reduce the effort needed for training parsers, according to the researchers. A few directly annotated sentences, for instance, could be combined with many captioned videos, which are easier to come by, to improve performance.

In the future, the parser could be used to improve natural interaction between humans and personal robots. A robot equipped with the parser, for instance, could constantly observe its environment to reinforce its understanding of spoken commands, including when the spoken sentences aren’t fully grammatical or clear. “People talk to each other in partial sentences, run-on thoughts, and jumbled language. You want a robot in your home that will adapt to their particular way of speaking … and still figure out what they mean,” says co-author Andrei Barbu, a researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Center for Brains, Minds, and Machines (CBMM) within MIT’s McGovern Institute.

The parser could also help researchers better understand how young children learn language. “A child has access to redundant, complementary information from different modalities, including hearing parents and siblings talk about the world, as well as tactile information and visual information, [which help him or her] to understand the world,” says co-author Boris Katz, a principal research scientist and head of the InfoLab Group at CSAIL. “It’s an amazing puzzle, to process all this simultaneous sensory input. This work is part of a bigger piece to understand how this kind of learning happens in the world.”

Co-authors on the paper are: first author Candace Ross, a graduate student in the Department of Electrical Engineering and Computer Science and CSAIL, and a researcher in CBMM; Yevgeni Berzak PhD ’17, a postdoc in the Computational Psycholinguistics Group in the Department of Brain and Cognitive Sciences; and CSAIL graduate student Battushig Myanganbayar.

Visual learner

For their work, the researchers combined a semantic parser with a computer-vision component trained in object, human, and activity recognition in video. Semantic parsers are generally trained on sentences annotated with code that ascribes meaning to each word and the relationships between the words. Some have been trained on still images or computer simulations.

The new parser is the first to be trained using video, Ross says. In part, videos are more useful in reducing ambiguity. If the parser is unsure about, say, an action or object in a sentence, it can reference the video to clear things up. “There are temporal components — objects interacting with each other and with people — and high-level properties you wouldn’t see in a still image or just in language,” Ross says.

The researchers compiled a dataset of about 400 videos depicting people carrying out a number of actions, including picking up an object or putting it down, and walking toward an object. Participants on the crowdsourcing platform Mechanical Turk then provided 1,200 captions for those videos. They set aside 840 video-caption examples for training and tuning, and used 360 for testing. One advantage of using vision-based parsing is “you don’t need nearly as much data — although if you had [the data], you could scale up to huge datasets,” Barbu says.

In training, the researchers gave the parser the objective of determining whether a sentence accurately describes a given video. They fed the parser a video and matching caption. The parser extracts possible meanings of the caption as logical mathematical expressions. The sentence, “The woman is picking up an apple,” for instance, may be expressed as: λxy. woman x, pick_up x y, apple y.
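One way to picture such a logical form is as a set of predicates over variables that must all be satisfiable by entities the vision system finds in the video. The sketch below is a simplified illustration with made-up detections, not the paper's actual representation or scoring model.

```python
# Simplified illustration of a logical form as predicates over variables.
# The video "detections" below are made up.

from itertools import permutations

# λx y. woman(x) ∧ pick_up(x, y) ∧ apple(y)
logical_form = [("woman", ("x",)), ("pick_up", ("x", "y")), ("apple", ("y",))]

# Pretend output of a vision system: labels for entities/actions in one clip.
detections = {
    ("woman", ("e1",)): True,
    ("apple", ("e2",)): True,
    ("pick_up", ("e1", "e2")): True,
}

def satisfied(logical_form, detections, entities=("e1", "e2")):
    """True if some assignment of video entities to variables makes every
    predicate in the logical form hold according to the detections."""
    variables = sorted({v for _, args in logical_form for v in args})
    for assignment in permutations(entities, len(variables)):
        binding = dict(zip(variables, assignment))
        if all(detections.get((pred, tuple(binding[v] for v in args)), False)
               for pred, args in logical_form):
            return True
    return False

print(satisfied(logical_form, detections))   # True: the caption fits the "video"
```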

Those expressions and the video are inputted to the computer-vision algorithm, called “Sentence Tracker,” developed by Barbu and other researchers. The algorithm looks at each video frame to track how objects and people transform over time, to determine if actions are playing out as described. In this way, it determines if the meaning is possibly true of the video.

Connecting the dots

The expression with the most closely matching representations for objects, humans, and actions becomes the most likely meaning of the caption. The expression, initially, may refer to many different objects and actions in the video, but the set of possible meanings serves as a training signal that helps the parser continuously winnow down possibilities. “By assuming that all of the sentences must follow the same rules, that they all come from the same language, and seeing many captioned videos, you can narrow down the meanings further,” Barbu says.

In short, the parser learns through passive observation: To determine if a caption is true of a video, the parser by necessity must identify the highest probability meaning of the caption. “The only way to figure out if the sentence is true of a video [is] to go through this intermediate step of, ‘What does the sentence mean?’ Otherwise, you have no idea how to connect the two,” Barbu explains. “We don’t give the system the meaning for the sentence. We say, ‘There’s a sentence and a video. The sentence has to be true of the video. Figure out some intermediate representation that makes it true of the video.’”
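Putting the pieces together, the weakly supervised loop described above can be caricatured as: enumerate candidate meanings for the caption, score each against the video, and reinforce the best-scoring one. In the runnable toy below, which is not the authors' algorithm, the parser is reduced to a table of candidate meanings, the video scoring to a lookup table, and reinforcement to a counter; all names and numbers are made up.

```python
# Toy caricature of the weakly supervised training loop described above.
from collections import Counter

# Each caption is ambiguous between several candidate meanings (made up).
candidate_parses = {
    "the woman picks up an apple": ["pick_up(woman, apple)",
                                    "pick_up(apple, woman)",
                                    "put_down(woman, apple)"],
}

# Stand-in for scoring a meaning against a particular video's detections.
video_scores = {
    ("video_01", "pick_up(woman, apple)"): 0.9,
    ("video_01", "pick_up(apple, woman)"): 0.1,
    ("video_01", "put_down(woman, apple)"): 0.2,
}

preferred_meaning = Counter()   # stand-in for the parser's learned preferences

def train_step(video, caption):
    candidates = candidate_parses[caption]
    # The caption is assumed true of its video, so the best-scoring candidate
    # becomes the training signal that winnows down the possible meanings.
    best = max(candidates, key=lambda m: video_scores.get((video, m), 0.0))
    preferred_meaning[(caption, best)] += 1

train_step("video_01", "the woman picks up an apple")
print(preferred_meaning.most_common(1))
```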

The training produces a syntactic and semantic grammar for the words it’s learned. Given a new sentence, the parser no longer requires videos, but leverages its grammar and lexicon to determine sentence structure and meaning.

Ultimately, this process is learning “as if you’re a kid,” Barbu says. “You see the world around you and hear people speaking to learn meaning. One day, I can give you a sentence and ask what it means and, even without a visual, you know the meaning.”

“This research is exactly the right direction for natural language processing,” says Stefanie Tellex, a professor of computer science at Brown University who focuses on helping robots use natural language to communicate with humans. “To interpret grounded language, we need semantic representations, but it is not practicable to make them available at training time. Instead, this work captures representations of compositional structure using context from captioned videos. This is the paper I have been waiting for!”

In future work, the researchers are interested in modeling interactions, not just passive observations. “Children interact with the environment as they’re learning. Our idea is to have a model that would also use perception to learn,” Ross says.

This work was supported, in part, by the CBMM, the National Science Foundation, a Ford Foundation Graduate Research Fellowship, the Toyota Research Institute, and the MIT-IBM Brain-Inspired Multimedia Comprehension project.