Algorithms of intelligence

The following post is adapted from a story featured in a recent Brain Scan newsletter.

Machine vision systems are increasingly common in everyday life, from social media to self-driving cars, but training artificial neural networks to “see” the world as we do—distinguishing cyclists from signposts—remains challenging. Will artificial neural networks ever decode the world as exquisitely as humans? Can we refine these models and influence perception in a person’s brain just by activating individual, selected neurons? The DiCarlo lab, including CBMM postdocs Kohitij Kar and Pouya Bashivan, is finding that we are surprisingly close to answering “yes” to such questions, all in the context of accelerated insights into artificial intelligence at the McGovern Institute for Brain Research, CBMM, and the Quest for Intelligence at MIT.

Precision Modeling

Beyond light hitting the retina, the recognition process that unfolds in the visual cortex is key to truly “seeing” the surrounding world. Information is decoded through the ventral visual stream, a series of cortical brain regions that progressively build a more accurate, fine-grained, and accessible representation of the objects around us. Artificial neural networks have been modeled on these elegant cortical systems, and the most successful models, deep convolutional neural networks (DCNNs), can now decode objects at levels comparable to the primate brain. However, even leading DCNNs have problems with certain challenging images, presumably due to shadows, clutter, and other visual noise. While no simple feature unites all challenging images, the quest is on to tackle them and bring machine recognition to a level commensurate with human object recognition.

“One next step is to couple this new precision tool with our emerging understanding of how neural patterns underlie object perception. This might allow us to create arrangements of pixels that look nothing like, for example, a cat, but that can fool the brain into thinking it’s seeing a cat.”- James DiCarlo

In a recent push, Kar and DiCarlo demonstrated that adding feedback connections, currently missing in most DCNNs, allows the system to better recognize objects in challenging situations, even those where a human can’t articulate why recognition is an issue for feedforward DCNNs. They also found that this recurrent circuit seems critical to primate success rates in performing this task. This is incredibly important for systems like self-driving cars, where the stakes for artificial visual systems are high, and faithful recognition is a must.
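For readers curious what “adding feedback connections” means computationally, here is a minimal sketch in plain NumPy (not the lab’s actual model; all layer sizes and weights are invented for illustration): a small feedforward network whose output is fed back into a hidden layer over several time steps, letting later stages refine earlier representations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: an input layer, a "V4-like" hidden layer, an "IT-like" output.
n_input, n_hidden, n_output = 64, 32, 10

W_ff1 = rng.normal(scale=0.1, size=(n_hidden, n_input))   # feedforward: input -> hidden
W_ff2 = rng.normal(scale=0.1, size=(n_output, n_hidden))  # feedforward: hidden -> output
W_fb = rng.normal(scale=0.1, size=(n_hidden, n_output))   # feedback: output -> hidden

def relu(x):
    return np.maximum(x, 0.0)

def recurrent_forward(x, n_steps=5):
    """Run the network for several time steps, letting the output
    layer feed back into the hidden layer on each step."""
    h = relu(W_ff1 @ x)
    out = relu(W_ff2 @ h)
    for _ in range(n_steps - 1):
        # Feedback lets the hidden representation be revised over time,
        # which a purely feedforward pass cannot do.
        h = relu(W_ff1 @ x + W_fb @ out)
        out = relu(W_ff2 @ h)
    return out

x = rng.normal(size=n_input)
print(recurrent_forward(x).shape)  # (10,)
```

In a trained system the feedback weights would be learned; the point of the sketch is only the wiring: the same input is re-processed several times, each pass informed by the previous output.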

Now you see it

As artificial object recognition systems have become more precise in predicting neural activity, the DiCarlo lab wondered what such precision might allow: could they use their system to not only predict, but to control specific neuronal activity?

To demonstrate the power of their models, Bashivan, Kar, and colleagues zeroed in on targeted neurons in the brain. In a paper published in Science, they used an artificial neural network to generate a random-looking group of pixels that, when shown to an animal, activated the team’s target, a neuron they called the “one hot neuron.” In other words, they showed the brain a synthetic pattern, and the pixels in the pattern precisely activated targeted neurons while other neurons remained relatively silent.
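The general recipe behind such synthetic, neuron-targeted patterns is often called activation maximization: start from noise and nudge the pixels in the direction that increases a model neuron’s response. A toy sketch, using a stand-in linear-nonlinear “neuron” rather than the published network (the filter and parameters here are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in "model neuron": a fixed linear filter over a 16x16 image,
# passed through a saturating nonlinearity.
w = rng.normal(size=(16, 16))

def neuron_response(img):
    return np.tanh(np.sum(w * img))

def synthesize_stimulus(n_iters=200, lr=0.05):
    """Gradient ascent on the pixels to drive the target neuron's
    response as high as possible (activation maximization)."""
    img = rng.normal(scale=0.01, size=(16, 16))
    for _ in range(n_iters):
        drive = np.sum(w * img)
        grad = (1.0 - np.tanh(drive) ** 2) * w  # gradient of tanh(w . img) w.r.t. img
        img += lr * grad
        img = np.clip(img, -1.0, 1.0)  # keep pixels in a valid range
    return img

before = neuron_response(np.zeros((16, 16)))
after = neuron_response(synthesize_stimulus())
print(after > before)
```

In the real experiment the gradient comes from a deep network fit to neural data, and the optimized image is then shown to the animal; the sketch captures only the optimization idea.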

These findings show how the knowledge in today’s artificial neural network models might one day be used to noninvasively influence brain states with neural resolution. Such precise systems would be useful as we look to the future, toward visual prosthetics for the blind. A model of the ventral visual stream this precise would have been inconceivable not so long ago, and all eyes are on where McGovern researchers will take these technologies in the coming years.

Why is the brain shaped like it is?

The human brain has a very striking shape, and one feature stands out large and clear: the cerebral cortex with its stereotyped pattern of gyri (folds and convolutions) and sulci (fissures and depressions). This characteristic folded shape of the cortex is a major innovation in evolution that allowed an increase in the size and complexity of the human brain.

How the brain adopts these complex folds is surprisingly unclear, but the process probably involves both shape changes and movement of cells. Mechanical constraints within the tissue itself, and those imposed by surrounding tissues, also contribute to the ultimate shape: the brain has to fit into the skull, after all. McGovern postdoc Jonathan Wilde has a long-term interest in studying how the brain develops, and explained to us how the shape of the brain initially arises.

In the case of humans, our historical reliance upon intelligence has driven a massive expansion of the cerebral cortex.

“Believe it or not, all vertebrate brains begin as a flat sheet of epithelial cells that folds upon itself to form a tube,” explains Wilde. “This neural tube is made up of a single layer of neural stem cells that go through a rapid and highly orchestrated process of expansion and differentiation, giving rise to all of the neurons in the brain. Throughout the first steps of development, the brains of most vertebrates are indistinguishable from one another, but the final shape of the brain is highly dependent upon the organism and primarily reflects that organism’s lifestyle, environment, and cognitive demands.”

So essentially, the brain starts off as a similar shape for creatures with spinal cords. But why is the human brain such a distinct shape?

“In the case of humans,” explains Wilde, “our historical reliance upon intelligence has driven a massive expansion of the cerebral cortex, which is the primary brain structure responsible for critical thinking and higher cognitive abilities. Accordingly, the human cortex is strikingly large and covered in a labyrinth of folds that serve to increase its surface area and computational power.”

The anatomical shape of the human brain is striking, but it also helps researchers to map a hidden functional atlas: specific brain regions that selectively activate in fMRI when you see a face or a scene, hear music, or perform a variety of other tasks. I asked former McGovern graduate student, and current postdoc at Boston Children’s Hospital, Hilary Richardson, for her perspective on this more hidden structure in the brain and how it relates to brain shape.

Illustration of a person rappelling into the brain's Sylvian fissure.
The Sylvian fissure is a prominent groove on each side of the brain that separates the frontal and parietal lobes from the temporal lobe. McGovern researchers are studying a region near the right Sylvian fissure, called the rTPJ, which is involved in thinking about what another person is thinking. Image: Joe Laney

“One of the most fascinating aspects of brain shape is how similar it is across individuals, even very young infants and children,” explains Richardson. “Despite the dramatic cognitive changes that happen across childhood, the shape of the brain is remarkably consistent. Given this, one open question is what kinds of neural changes support cognitive development. For example, while the anatomical shape and size of the rTPJ seems to stay the same across childhood, its response becomes more specialized to information about mental states – beliefs, desires, and emotions – as children get older. One intriguing hypothesis is that this specialization helps support social development in childhood.”

We’ll end with an ode to a prominent feature of brain shape: the “Sylvian fissure,” a prominent groove on each side of the brain that separates the frontal and parietal lobes from the temporal lobe. Such landmarks in brain shape help orient researchers, and the Sylvian fissure was recently immortalized in this image, from a postcard by illustrator Joe Laney.


Neuroscientists reverse some behavioral symptoms of Williams Syndrome

Williams Syndrome, a rare neurodevelopmental disorder that affects about 1 in 10,000 babies born in the United States, produces a range of symptoms including cognitive impairments, cardiovascular problems, and extreme friendliness, or hypersociability.

In a study of mice, MIT neuroscientists have garnered new insight into the molecular mechanisms that underlie this hypersociability. They found that loss of one of the genes linked to Williams Syndrome leads to a thinning of the fatty layer that insulates neurons and helps them conduct electrical signals in the brain.

The researchers also showed that they could reverse the symptoms by boosting production of this coating, known as myelin. This is significant, because while Williams Syndrome is rare, many other neurodevelopmental disorders and neurological conditions have been linked to myelination deficits, says Guoping Feng, the James W. and Patricia Poitras Professor of Neuroscience and a member of MIT’s McGovern Institute for Brain Research.

“The importance is not only for Williams Syndrome,” says Feng, who is one of the senior authors of the study. “In other neurodevelopmental disorders, especially in some of the autism spectrum disorders, this could be potentially a new direction to look into, not only the pathology but also potential treatments.”

Zhigang He, a professor of neurology and ophthalmology at Harvard Medical School, is also a senior author of the paper, which appears in the April 22 issue of Nature Neuroscience. Former MIT postdoc Boaz Barak, currently a principal investigator at Tel Aviv University in Israel, is the lead author and a senior author of the paper.

Impaired myelination

Williams Syndrome, which is caused by the loss of one of the two copies of a segment of chromosome 7, can produce learning impairments, especially for tasks that require visual and motor skills, such as solving a jigsaw puzzle. Some people with the disorder also exhibit poor concentration and hyperactivity, and they are more likely to experience phobias.

In this study, the researchers decided to focus on one of the 25 genes in that segment, known as Gtf2i. Based on studies of patients with a smaller subset of the genes deleted, scientists have linked the Gtf2i gene to the hypersociability seen in Williams Syndrome.

Working with a mouse model, the researchers devised a way to knock out the gene specifically from excitatory neurons in the forebrain, which includes the cortex, the hippocampus, and the amygdala (a region important for processing emotions). They found that these mice did show increased levels of social behavior, measured by how much time they spent interacting with other mice. The mice also showed deficits in fine motor skills and increased anxiety in nonsocial contexts, which are also symptoms of Williams Syndrome.

Next, the researchers sequenced the messenger RNA from the cortex of the mice to see which genes were affected by loss of Gtf2i. Gtf2i encodes a transcription factor, so it controls the expression of many other genes. The researchers found that about 70 percent of the genes with significantly reduced expression levels were involved in the process of myelination.
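The arithmetic behind a claim like “about 70 percent of the downregulated genes were involved in myelination” is a simple set overlap between a differential-expression hit list and an annotation set. A toy sketch with invented gene sets (not the study’s data):

```python
# Toy illustration of the overlap calculation: what fraction of the
# significantly downregulated genes carry a myelination annotation?
# Gene names and counts here are made up for illustration only.

downregulated = {"Mbp", "Plp1", "Mog", "Mag", "Sox10", "GeneX", "GeneY"}
myelination_genes = {"Mbp", "Plp1", "Mog", "Mag", "Sox10", "Cnp"}

overlap = downregulated & myelination_genes
fraction = len(overlap) / len(downregulated)
print(f"{fraction:.0%} of downregulated genes are myelination-related")
```

A real analysis would also test whether that overlap is larger than expected by chance (e.g., a gene-set enrichment test), but the headline fraction is computed just like this.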

“Myelin is the insulation layer that wraps the axons that extend from the cell bodies of neurons,” Barak says. “When they don’t have the right properties, it will lead to faster or slower electrical signal transduction, which affects the synchronicity of brain activity.”

Further studies revealed that the mice had only about half the normal number of mature oligodendrocytes — the brain cells that produce myelin. However, the number of oligodendrocyte precursor cells was normal, so the researchers suspect that the maturation and differentiation processes of these cells are somehow impaired when Gtf2i is missing in the neurons.

This was surprising because Gtf2i was not knocked out in oligodendrocytes or their precursors. Thus, knocking out the gene in neurons may somehow influence the maturation process of oligodendrocytes, the researchers suggest. It is still unknown how this interaction might work.

“That’s a question we are interested in, but we don’t know whether it’s a secreted factor, or another kind of signal or activity,” Feng says.

In addition, the researchers found that the myelin surrounding axons of the forebrain was significantly thinner than in normal mice. Furthermore, in mice missing Gtf2i, electrical signals were smaller and took more time to cross the brain.

The study is an example of pioneering research into the contribution of glial cells, which include oligodendrocytes, to neuropsychiatric disorders, says Doug Fields, chief of the nervous system development and plasticity section of the Eunice Kennedy Shriver National Institute of Child Health and Human Development.

“Traditionally myelin was only considered in the context of diseases that destroy myelin, such as multiple sclerosis, which prevents transmission of neural impulses. More recently it has become apparent that more subtle defects in myelin can impair neural circuit function, by causing delays in communication between neurons,” says Fields, who was not involved in the research.

Symptom reversal

It remains to be discovered precisely how this reduction in myelination leads to hypersociability. The researchers suspect that the lack of myelin affects brain circuits that normally inhibit social behaviors, making the mice more eager to interact with others.

“That’s probably the explanation, but exactly which circuits and how does it work, we still don’t know,” Feng says.

The researchers also found that they could reverse the symptoms by treating the mice with drugs that improve myelination. One of these drugs, an FDA-approved antihistamine called clemastine fumarate, is now in clinical trials to treat multiple sclerosis, which affects myelination of neurons in the brain and spinal cord. The researchers believe it would be worthwhile to test these drugs in Williams Syndrome patients because they found thinner myelin and reduced numbers of mature oligodendrocytes in brain samples from human subjects who had Williams Syndrome, compared to typical human brain samples.

“Mice are not humans, but the pathology is similar in this case, which means this could be translatable,” Feng says. “It could be that in these patients, if you improve their myelination early on, it could at least improve some of the conditions. That’s our hope.”

Such drugs would likely help mainly the social and fine-motor issues caused by Williams Syndrome, not the symptoms that are produced by deletion of other genes, the researchers say. They may also help treat other disorders, such as autism spectrum disorders, in which myelination is impaired in some cases, Feng says.

“We think this can be expanded into autism and other neurodevelopmental disorders. For these conditions, improved myelination may be a major factor in treatment,” he says. “We are now checking other animal models of neurodevelopmental disorders to see whether they have myelination defects, and whether improved myelination can improve some of the pathology of the defects.”

The research was funded by the Simons Foundation, the Poitras Center for Affective Disorders Research at MIT, the Stanley Center for Psychiatric Research at the Broad Institute of MIT and Harvard, and the Simons Center for the Social Brain at MIT.

2019 Scolnick Prize Awarded to Richard Huganir

The McGovern Institute announced today that the winner of the 2019 Edward M. Scolnick Prize in Neuroscience is Rick Huganir, the Bloomberg Distinguished Professor of Neuroscience and Psychological and Brain Sciences at the Johns Hopkins University School of Medicine. Huganir is being recognized for his role in understanding the molecular and biochemical underpinnings of “synaptic plasticity,” changes at synapses that are key to learning and memory formation. The Scolnick Prize is awarded annually by the McGovern Institute to recognize outstanding advances in any field of neuroscience.

“Rick Huganir has made a huge impact on our understanding of how neurons communicate with one another, and the award honors him for this ground-breaking research,” says Robert Desimone, director of the McGovern Institute and the chair of the committee.

“He conducts basic research on the synapses between neurons but his work has important implications for our understanding of many brain disorders that impair synaptic function.”

As the past president of the Society for Neuroscience, the world’s largest organization of researchers that study the brain and nervous system, Huganir is well-known in the global neuroscience community. He also directs the Kavli Neuroscience Discovery Institute and serves as director of the Solomon H. Snyder Department of Neuroscience at Johns Hopkins University School of Medicine and co-director of the Johns Hopkins Brain Science Institute.

From the beginning of his research career, Huganir was interested in neurotransmitter receptors, key to signaling at the synapse. He conducted his thesis work in the laboratory of Efraim Racker at Cornell University, where he first reconstituted one of these receptors, the nicotinic acetylcholine receptor, allowing its biochemical characterization. He went on to become a postdoctoral fellow in Paul Greengard’s lab at The Rockefeller University in New York. During this time, he made the first functional demonstration that phosphorylation, a reversible chemical modification, affects neurotransmitter receptor activity. Phosphorylation was shown to regulate desensitization, the process by which neurotransmitter receptors stop reacting during prolonged exposure to the neurotransmitter.

Upon arriving at Johns Hopkins University, Huganir broadened this concept, finding that the properties and functions of other key receptors and channels, including the GABAA, AMPA, and kainate receptors, could be controlled through phosphorylation. By understanding the sites of phosphorylation and the effects of this modification, Huganir was laying the foundation for the next major steps from his lab: showing that these modifications affect the strength of synaptic connections and transmission, i.e., synaptic plasticity, and in turn, behavior and memory. Huganir also uncovered proteins that interact with neurotransmitter receptors and influence synaptic transmission and plasticity, thus uncovering another layer of molecular regulation. He went on to define how these accessory factors have such influence, showing that they impact the subcellular targeting and cycling of neurotransmitter receptors to and from the synaptic membrane. These mechanisms influence the formation of, for example, fear memory, as well as its erasure. Indeed, Huganir found that a specific type of AMPA receptor is added to synapses in the amygdala after a traumatic event, and that specific removal results in fear erasure in a mouse model.

Among many awards and honors, Huganir received the Young Investigator Award and the Julius Axelrod Award of the Society for Neuroscience. He was also elected to the American Academy of Arts and Sciences, the US National Academy of Sciences, and the Institute of Medicine. He is also a fellow of the American Association for the Advancement of Science.

The Scolnick Prize was first awarded in 2004, and was established by Merck in honor of Edward M. Scolnick, who was President of Merck Research Laboratories for 17 years. Scolnick is currently a core investigator at the Broad Institute, and chief scientist emeritus of the Stanley Center for Psychiatric Research at the Broad Institute.

Huganir will deliver the Scolnick Prize lecture at the McGovern Institute on May 8, 2019 at 4:00pm in the Singleton Auditorium of MIT’s Brain and Cognitive Sciences Complex (Bldg 46-3002), 43 Vassar Street in Cambridge. The event is free and open to the public.

Plugging into the brain

Driven by curiosity and therapeutic goals, Anikeeva leaves no scientific stone unturned in her drive to invent neurotechnology.

The audience sits utterly riveted as Polina Anikeeva highlights the gaps she sees in the landscape of neural tools. With a background in optoelectronics, she has a decidedly distinctive take on the brain.

“In neuroscience,” says Anikeeva, “we are currently applying silicon-based neural probes with the elastic properties of a knife to a delicate material with the consistency of chocolate pudding—the brain.”

A key problem, summarized by Anikeeva, is that these sharp probes damage tissue, making such interfaces unreliable and thwarting long term brain studies of processes including development and aging. The state of the art is even grimmer in the clinic. An avid climber, Anikeeva recalls a friend sustaining a spinal cord injury. “She made a remarkable recovery,” explains Anikeeva, “but seeing the technology being used to help her was shocking. Not even the simplest electronic tools were used, it was basically lots of screws and physical therapy.” This crude approach, compared to the elegant optoelectronic tools familiar to Anikeeva, sparked a drive to bring advanced materials technology to biological systems.

Outside the box

As the group breaks up after the seminar, the chatter includes boxes, more precisely, thinking outside of them. An associate professor in material sciences and engineering at MIT, Anikeeva’s interest in neuroscience recently led to a McGovern Institute appointment. She sees her journey to neurobiology as serendipitous, having earned her doctorate designing light-emitting devices at MIT.

“I wanted to work on tools that don’t exist, and neuroscience seemed like an obvious choice. Neurons communicate in part through membrane voltage changes and as an electronics designer, I felt that I should be able to use voltage.”

Comfort at the intersection of sciences requires, according to Anikeeva, clarity and focus, qualities also important in her chief athletic pursuits, running and climbing. Through long-distance running, Anikeeva finds solitary time (“assuming that no one can chase me”) and the clarity to consider complicated technical questions. Climbing hones something different: absolute focus in the face of the often-tangled information that comes with working at scientific intersections.

“When climbing, you can only think about one thing, your next move. Only the most important thoughts float up.”

This became particularly important when, in Yosemite National Park, she made the decision to go up, instead of down, during an impending thunderstorm. Getting out depended on clear focus, despite imminent hypothermia and being exposed “on one of the tallest features in the area, holding large quantities of metal.” Polina and her climbing partner made it out, but her summary of events echoes her research philosophy: “What you learn and develop is a strong mindset where you don’t do the comfortable thing, the easy thing. Instead you always find, and execute, the most logical strategy.”

In this vein, Anikeeva’s research pursues two very novel, but exceptionally logical, paths to brain research and therapeutics: fiber development and magnetic nanomaterials.

Drawing new fibers

Walking into Anikeeva’s lab, the eye is immediately drawn to a robust metal frame containing, upon closer scrutiny, recognizable parts: a large drill bit, a motor, a heating element. This custom-built machine applies principles from telecommunications to draw multifunctional fibers using more “brain-friendly” materials.

“We start out with a macroscopic model, a preform, of the device that we ultimately want,” explains Anikeeva.

This “preform” is a transparent block of polymers, composites, and soft low-melting-temperature metals with the optical and electrical properties needed in the final fiber. “So, this could include electrodes for recording, optical channels for optogenetics, microfluidics for drug delivery, and one day even components that allow chemical or mechanical sensing.” After sitting in a vacuum to remove gases and impurities, the two-inch by one-inch preform arrives at the fiber-drawing tower.

“Then we heat it and pull it, and the macroscopic model becomes a kilometer-long fiber with a lateral dimension of microns, even nanometers,” explains Anikeeva. “Take one of your hairs, and imagine that inside there are electrodes for recording, there are microfluidic channels to infuse drugs, optical channels for stimulation. All of this is combined in a single miniature form factor, and it can be quite flexible and even stretchable.”

Construction crew

Anikeeva’s lab comprises an eclectic mix of 21 researchers from over 13 different countries, with expertise spanning materials science, chemistry, electrical and mechanical engineering, and neuroscience. In 2011, Andres Canales, a materials scientist from Mexico, was the second person to join Anikeeva’s lab.

“There was only an idea, a diagram,” explains Canales. “I didn’t want to work on biology when I arrived at MIT, but talking to Polina, seeing the pictures, thinking about what it would entail, I became very excited by the methods and the potential applications she was thinking of.”

Despite the lack of preliminary models, Anikeeva’s ideas were compelling. Elegant as the fibers are, the road to them involved painstaking, iterative refinement. From a materials perspective, drawing a fiber containing a continuous conductive element was challenging, as was validating its properties. But the resulting fiber can deliver optogenetic vectors, monitor expression, and then stimulate neuronal activity in a single surgery, removing the spatial and temporal guesswork usually involved in such an experiment.

Seongjun Park, an electrical engineering graduate student in the lab, explains one biological challenge. “For long term recording in the spinal cord, there was even an additional challenge as the fiber needed to be stretchable to respond to the spine’s movement. For this we developed a drawing process compatible with an elastomer.”

The resulting fibers can be deployed chronically without the scar tissue accumulation that usually prevents long-term optical manipulation and drug delivery, making them good candidates for the treatment of brain disorders. The lab’s current papers find that these implanted fibers are useful for three months, and material innovations make them confident that longer time periods are possible.

Magnetic moments

Another wing of Anikeeva’s research aims to develop entirely non-invasive modalities, and use magnetic nanoparticles to stimulate the brain and deliver therapeutics.

“Magnetic fields are probably the best modality for getting any kind of stimulus to deep tissues,” explains Anikeeva, “because biological systems, except for very specialized systems, do not perceive magnetic fields. They go through us unattenuated, and they don’t couple to our physiology.”

In other words, magnetic fields can safely reach deep tissues, including the brain. Upon reaching their tissue targets these fields can be used to stimulate magnetic nanoparticles, which might one day, for example, be used to deliver dopamine to the brains of Parkinson’s disease patients. The alternating magnetic fields being used in these experiments are tiny, 100-1000 times smaller than fields clinically approved for MRI-based brain imaging.

Tiny fields, but they can be used to powerful effect. By manipulating magnetic moments in these nanoparticles, the magnetic field can cause the particles to dissipate heat, which can stimulate thermal receptors in the nervous system. These receptors naturally respond to heat, as well as to compounds found in chili peppers and vanilla, but Anikeeva’s magnetic nanoparticles act as tiny heaters that activate these receptors, and, in turn, local neurons. This principle has already been used to activate the brain’s reward center in freely moving mice.

Siyuan Rao, a postdoc who works on the magnetic nanoparticles in collaboration with McGovern Investigator Guoping Feng, is unhesitating when asked what most inspires her.

“As a materials scientist, it is really rewarding to see my materials at work. We can remotely modulate mouse behavior, even turn hopeless behavior into motivation.”

Pushing the boundaries

Such collaborations are valued by Anikeeva. Early on she worked with McGovern Investigator Emilio Bizzi to use the above fiber technology in the spinal cord. “It is important to us to not just make these devices,” explains Anikeeva, “but to use them and show ourselves, and our colleagues, the types of experiments that they can enable.”

Far from an assembly line, the researchers in Anikeeva’s lab follow projects from ideation to deployment. “The student that designs a fiber, performs their own behavioral experiments, and data analysis,” says Anikeeva. “Biology is unforgiving. You can trivially design the most brilliant electrophysiological recording probe, but unless you are directly working in the system, it is easy to miss important design considerations.”

Inspired by this, Anikeeva’s students even started a project with Gloria Choi’s group on their own initiative. This collaborative, can-do ethos spreads beyond the walls of the lab, inspiring people around MIT.

“We often work with a teaching instructor, David Bono, who is an expert on electronics and magnetic instruments,” explains Alex Senko, a senior graduate student in the lab. “In his spare time, he helps those of us who work on electrical engineering flavored projects to hunt down components needed to build our devices.”

These components extend to whatever is required. When a low-frequency source was needed, the Anikeeva lab drafted a guitar amplifier.

Queried about difficulties that she faces having chosen to navigate such a broad swath of fields, Anikeeva is focused, as ever, on the unknown, the boundaries of knowledge.

“Honestly, I really, really enjoy it. It keeps me engaged and not bored. Even when thinking about complicated physics and chemistry, I always have eyes on the prize, that this will allow us to address really interesting neuroscience questions.”

With such thinking, and by relentlessly seeking the tools needed to accomplish scientific goals, Anikeeva and her lab continue to avoid the comfortable route, instead using logical routes toward new technologies.

SHERLOCK: A CRISPR tool to detect disease

This animation depicts how Cas13 — a CRISPR-associated protein — may be adapted to detect human disease. This new diagnostic tool, called SHERLOCK, targets RNA (rather than DNA), and has the potential to transform research and global public health.

Is it worth the risk?

During the Klondike Gold Rush, thousands of prospectors climbed Alaska’s dangerous Chilkoot Pass in search of riches. McGovern scientists are exploring how a once-overlooked part of the brain might be at the root of cost-benefit decisions like these, and how the brain balances risk and reward to make them.

Is it worth speeding up on the highway to save a few minutes’ time? How about accepting a job that pays more, but requires longer hours in the office?

Scientists call these types of real-life situations cost-benefit conflicts. Choosing well is an essential survival ability—consider the animal that must decide when to expose itself to predation to gather more food.

Now, McGovern researchers are discovering that this fundamental capacity to make decisions may originate in the basal ganglia—a brain region once considered unimportant to the human experience—and that circuits associated with this structure may play a critical role in determining our state of mind.

Anatomy of decision-making

A few years back, McGovern investigator Ann Graybiel noticed that in the brain imaging literature, a specific part of the cortex, called the pregenual anterior cingulate cortex (pACC), was implicated in certain psychiatric disorders as well as in tasks involving cost-benefit decisions. Thanks to her now-classic neuroanatomical work defining the complex anatomy and function of the basal ganglia, Graybiel knew that the pACC projected back into the basal ganglia—including its largest cluster of neurons, the striatum.

The striatum sits beneath the cortex, with a mouse-like main body and curving tail. It seems to serve as a critical way-station, communicating with both the brain’s sensory and motor areas above, and the limbic system (linked to emotion and memory) below. Running through the striatum are striosomes, column-like neurochemical compartments. They wire down to a small but important part of the brain called the substantia nigra, which houses the vast majority of the brain’s dopamine neurons—a key neurochemical heavily involved, much like the basal ganglia as a whole, in reward, learning, and movement. The pACC region related to mood control targets these striosomes, setting up a communication line from the neocortex to the dopamine neurons.

Graybiel discovered these striosomes early in her career and understood them to have distinct wiring from other compartments in the striatum. But picking out these small, hard-to-find striosomes posed a technological challenge, so it was exciting to have this intriguing link to the pACC and mood disorders.

Working with Ken-ichi Amemori, then a research scientist in her lab, she adapted a common human cost-benefit conflict test for macaque monkeys. The monkeys could elect to receive a food treat, but the treat would always be accompanied by an annoying puff of air to the eyes. Before they decided, a visual cue told them exactly how much treat they could get, and exactly how strong the air puff would be, so they could choose if the treat was worth it.

Normal monkeys varied their choices in a fairly rational manner, rejecting the treat whenever the air puff seemed too strong or the treat too small to be worth it—and this corresponded with activity in the pACC neurons. Interestingly, the team found that some pACC neurons respond more when animals approach the combined offers, while other pACC neurons fire more when the animals avoid them. “It is as though there are two opposing armies. And the one that wins controls the state of the animal,” Graybiel says. Moreover, when her team electrically stimulated these pACC neurons, the animals began to avoid the offers, even offers that they normally would approach. “It is as though when the stimulation is on, they think the future is worse than it really is,” she says.

Intriguingly, this effect worked only in situations where the animal had to weigh the value of a cost against a benefit. It had no effect on a decision between two negatives or two positives, like two different sizes of treats. The anxiety drug diazepam also reversed the stimulatory effect, but again, only on cost-benefit choices. “This particular kind of mood-influenced cost-benefit decision-making occurs not only under conflict conditions but in our regular day-to-day lives,” Graybiel says. “For example: I know that if I eat too much chocolate, I might get fat, but I love it, I want it.”

Glass half empty

Over the next few years, Graybiel, working with another research scientist in her lab, Alexander Friedman, unraveled the circuit behind the macaques’ choices. They adapted the test for rats and mice so that they could more easily combine the cellular and molecular technologies needed to study striosomes, such as optogenetics and mouse engineering.

They found that the cortex (specifically, the prelimbic region of the prefrontal cortex in rodents) wires onto both striosomes and fast-acting interneurons that also target the striosomes. In a healthy circuit, these interneurons keep the striosomes in check by firing off fast inhibitory signals, hitting the brakes before the striosome can get started. But if the researchers broke that cortico-striatal connection with optogenetics or chronic stress, the animals became reckless, going for the high-risk, high-reward arm of the maze like a gambler throwing caution to the wind. If they amplified this inhibitory interneuron activity, they saw the opposite effect. With these techniques, they could block the effects of prior chronic stress.

This summer, Graybiel and Amemori published another paper furthering the story and returning to macaques. It was still too difficult to target the striosomes themselves, so the researchers could only stimulate the striatum more generally. Even so, they replicated the effects of the past studies.

Many electrode sites had no effect, and a small number made the monkeys choose the reward more often. Nearly a quarter, though, made the monkeys more avoidant—and this effect correlated with a change in the macaques’ brainwaves in a manner reminiscent of patients with depression.

But the surprise came when the avoidance-producing stimulation was turned off: the effects lasted unexpectedly long, returning to normal only on the third day.

Graybiel was stunned. “This is very important, because changes in the brain can get set off and have a life of their own,” she says. “This is true for some individuals who have had a terrible experience, and then live with the aftermath, even to the point of suffering from post-traumatic stress disorder.”

She suspects that this persistent state may actually be a form of affect, or mood. “When we change this decision boundary, we’re changing the mood, such that the animal overestimates cost, relative to benefit,” she explains. “This might be like a proxy state for pessimistic decision-making experienced during anxiety and depression, but may also occur, in a milder form, in you and me.”

Graybiel theorizes that this may tie back to the dopamine neurons that the striosomes project to: if this avoidance behavior is akin to the avoidance observed in rodents, then the team is stimulating a circuit that ultimately projects to dopamine neurons of the substantia nigra. There, she believes, the stimulation could act to suppress these dopamine neurons, which in turn project to the rest of the brain, creating some sort of long-term change in neural activity. Or, put more simply, stimulation of these circuits creates a depressive funk.

Bottom up

Three floors below the Graybiel lab, postdoc Will Menegas is in the early stages of his own work untangling the role of dopamine and the striatum in decision-making. He joined Guoping Feng’s lab this summer after exploring the understudied “tail of the striatum” at Harvard University.

While dopamine pathways influence many parts of the brain, examination of connections to the striatum has largely focused on the frontmost part of the striatum, which is associated with valuation.

But as Menegas showed while at Harvard, dopamine neurons that project to the rear of the striatum are different. Those neurons get their input from parts of the brain associated with general arousal and sensation—and instead of responding to rewards, they respond to novelty and intense stimuli, like air puffs and loud noises.

In a new study published in Nature Neuroscience, Menegas used a neurotoxin to disrupt the dopamine projection from the substantia nigra to the posterior striatum to see how this circuit influences behavior. Normal mice approach novel items cautiously and back away after sniffing at them, but the mice in Menegas’ study failed to back away. They stopped avoiding a port that gave an air puff to the face, and they didn’t behave like normal mice when Menegas dropped a strange or new object—say, a Lego brick—into their cage. Disrupting the nigral-posterior striatal pathway seemed to turn off their avoidance habit.

“These neurons reinforce avoidance the same way that canonical dopamine neurons reinforce approach,” Menegas explains. It’s a new role for dopamine, suggesting that there may be two distinct systems of reinforcement, led by the same neuromodulator in different parts of the striatum.

This research, and Graybiel’s discoveries on cost-benefit decision circuits, share clear parallels, though the precise links between the two phenomena are yet to be fully determined. Menegas plans to extend this line of research into social behavior and related disorders like autism in marmoset monkeys.

“Will wants to learn the methods that we use in our lab to work on marmosets,” Graybiel says. “I think that working together, this could become a wonderful story, because it would involve social interactions.”

“This is a very new angle, and it could really change our views of how the reward system works,” Feng says. “And we have very little understanding of social circuits so far, especially in higher organisms, so I think this would be very exciting. Whatever we learn, it’s going to be new.”

Human choices

Based on their preexisting work, Graybiel’s and Menegas’ projects are well-developed—but they are far from the only McGovern-based explorations of the ways this brain region taps into our behaviors. Maiya Geddes, a visiting scientist in John Gabrieli’s lab, has recently published a paper exploring the little-known ways that aging affects the dopamine-based nigral-striatum-hippocampus learning and memory systems.

In Rebecca Saxe’s lab, postdoc Livia Tomova has just kicked off a pilot project using brain imaging to uncover the dopamine-striatal circuitry behind social craving in humans—the urge to rejoin peers. “Could there be a craving response similar to hunger?” Tomova wonders. “No one has looked yet at the neural mechanisms of this.”

Graybiel also hopes to translate her findings to humans, beginning with collaborations with the Pizzagalli lab at McLean Hospital in Belmont. They are using fMRI to study whether patients with anxiety and depression show some of the same dysfunctions in the cortico-striatal circuitry that she discovered in her macaques.

If she’s right about tapping into mood states and affect, it would be an expanded role for the striatum—and one with significant potential therapeutic benefits. “Affect state” colors many psychological functions and disorders, from memory and perception to depression, chronic stress, obsessive-compulsive disorder, and PTSD.

Once dismissed as inconsequential, the basal ganglia have been shown by McGovern researchers to influence not only our choices but our state of mind—suggesting that this “primitive” brain region may actually be at the heart of the human experience.


Mark Harnett’s “Holy Grail” experiment

Neurons in the human brain receive electrical signals from thousands of other cells, and long neural extensions called dendrites play a critical role in incorporating all of that information so the cells can respond appropriately.

Using hard-to-obtain samples of human brain tissue, McGovern neuroscientist Mark Harnett has now discovered that human dendrites have different electrical properties from those of other species. His studies reveal that electrical signals weaken more as they flow along human dendrites, resulting in a higher degree of electrical compartmentalization: small sections of dendrites can behave independently from the rest of the neuron.

These differences may contribute to the enhanced computing power of the human brain, the researchers say.

Fujitsu Laboratories and MIT’s Center for Brains, Minds and Machines broaden partnership

Fujitsu Laboratories Ltd. and MIT’s Center for Brains, Minds and Machines (CBMM) have announced a multi-year philanthropic partnership focused on advancing the science and engineering of intelligence while supporting the next generation of researchers in this emerging field. The new commitment follows several years of collaborative research among scientists at the two organizations.

Founded in 1968, Fujitsu Laboratories has conducted a wide range of basic and applied research in the areas of next-generation services, computer servers, networks, electronic devices, and advanced materials. CBMM, a multi-institutional, National Science Foundation-funded science and technology center focused on the interdisciplinary study of intelligence, was established in 2013 and is headquartered at MIT’s McGovern Institute for Brain Research. CBMM is also the foundation of “The Core” of the MIT Quest for Intelligence launched earlier this year. The partnership between the two organizations started in March 2017, when Fujitsu Laboratories sent a visiting scientist to CBMM.

“A fundamental understanding of how humans think, feel, and make decisions is critical to developing revolutionary technologies that will have a real impact on societal problems,” said Shigeru Sasaki, CEO of Fujitsu Laboratories. “The partnership between MIT’s Center for Brains, Minds and Machines and Fujitsu Laboratories will help advance critical R&D efforts in both human intelligence and the creation of next-generation technologies that will shape our lives,” he added.

The new Fujitsu Laboratories Co-Creation Research Fund, established with a philanthropic gift from Fujitsu Laboratories, will fuel new, innovative, and challenging projects in areas of interest to both Fujitsu and CBMM, including the basic study of computations underlying visual recognition and language processing, the creation of new machine learning methods, and the development of the theory of deep learning. Alongside funding for research projects, Fujitsu Laboratories will also fund fellowships, beginning in 2019, for graduate students attending CBMM’s summer course, contributing to the future of research and society on a long-term basis. The intensive three-week course gives advanced students from universities worldwide a “deep end” introduction to the problem of intelligence. These students will later have the opportunity to travel to Fujitsu Laboratories in Japan or its overseas locations in the U.S., Canada, U.K., Spain, and China to meet with Fujitsu researchers.

“CBMM faculty, students, and fellows are excited for the opportunity to work alongside scientists from Fujitsu to make advances in complex problems of intelligence, both real and artificial,” said CBMM’s director Tomaso Poggio, who is also an investigator at the McGovern Institute and the Eugene McDermott Professor in MIT’s Department of Brain and Cognitive Sciences. “Both Fujitsu Laboratories and MIT are committed to creating revolutionary tools and systems that will transform many industries, and to do that we are first looking to the extraordinary computations made by the human mind in everyday life.”

As part of the partnership, Poggio will be a featured keynote speaker at the Fujitsu Laboratories Advanced Technology Symposium on Oct. 9. In addition, Tomotake Sasaki, a former visiting scientist and current research affiliate in the Poggio lab, will continue to collaborate with CBMM scientists and engineers on reinforcement learning and deep learning research projects. Moyuru Yamada, a visiting scientist in the lab of Professor Josh Tenenbaum, is also studying computational models of human cognition and exploring their industrial applications. Moreover, Fujitsu Laboratories is planning to invite CBMM researchers to its Japan or overseas offices and arrange internships for interested students.

Can the brain recover after paralysis?

Why is it that motor skills can be regained after paralysis but vision cannot recover in similar ways? – Ajay Puppala

Thank you so much for this very important question, Ajay. To answer, I asked two local experts in the field, Pawan Sinha who runs the vision research lab at MIT, and Xavier Guell, a postdoc in John Gabrieli’s lab at the McGovern Institute who also works in the ataxia unit at Massachusetts General Hospital.

“Simply stated, the prospects of improvement, whether in movement or in vision, depend on the cause of the impairment,” explains Sinha. “Often, the cause of paralysis is stroke, a reduction in blood supply to a localized part of the brain, resulting in tissue damage. Fortunately, the brain has some ability to rewire itself, allowing regions near the damaged one to take on some of the lost functionality. This rewiring manifests itself as improvements in movement abilities after an initial period of paralysis. However, if the paralysis is due to spinal-cord transection (as was the case following Christopher Reeve’s tragic injury in 1995), then prospects for improvement are diminished.”

“Turning to the domain of sight,” continues Sinha, “stroke can indeed cause vision loss. As with movement control, these losses can dissipate over time as the cortex reorganizes via rewiring. However, if the blindness is due to optic nerve transection, then the condition is likely to be permanent. It is also worth noting that many cases of blindness are due to problems in the eye itself. These include corneal opacities, cataracts and retinal damage. Some of these conditions (corneal opacities and cataracts) are eminently treatable while others (typically those associated with the retina and optic nerve) still pose challenges to medical science.”

You might be wondering what makes lesions in the eye and spinal cord hard to overcome. Some systems (the blood, skin, and intestine are good examples) contain a continuously active stem cell population in adults. These cells can divide and replenish lost cells in damaged regions. While “adult-born” neurons can arise, elements of a degenerating or damaged retina, optic nerve, or spinal cord cannot be replaced as easily as lost skin cells can. There is currently a very active effort in the stem cell community to understand how we might be able to replace neurons in all cases of neuronal degeneration and injury using stem cell technologies. To further explore lesions that specifically affect the brain, and how these might lead to a different outcome in the two systems, I turned to Xavier Guell.

“It might be true that visual deficits in the population are less likely to recover when compared to motor deficits in the population. However, the scientific literature seems to indicate that our body has a similar capacity to recover from both motor and visual injuries,” explains Guell. “The reason for this apparent contradiction is that visual lesions are usually not in the cerebral cortex (but instead in other places such as the retina or the lens), while motor lesions in the cerebral cortex are more common. In fact, a large proportion of people who suffer a stroke will have damage in the motor aspects of the cerebral cortex, but no damage in the visual aspects of the cerebral cortex. Crucially, recovery of neurological functions is usually seen when lesions are in the cerebral cortex or in other parts of the cerebrum or cerebellum. In this way, while our body has a similar capacity to recover from both motor and visual injuries, motor injuries are more frequently located in the parts of our body that have a better capacity to regain function (specifically, the cerebral cortex).”

In short, some cells cannot be replaced in either system, but stem cell research provides hope there. That said, there is remarkable plasticity in the brain, so when the lesion is located there, we can see recovery with training.

Do you have a question for The Brain? Ask it here.