McGovern Institute postcard collection

The McGovern Institute postcard collection, 2023.

The McGovern Institute may be best known for its scientific breakthroughs, but a captivating series of brain-themed postcards developed by McGovern researchers and staff now reveals the institute’s artistic side.

What began in 2017 with a series of brain anatomy postcards inspired by the U.S. Works Progress Administration’s iconic national parks posters has grown into a collection of twelve different prints, each featuring a unique fusion of neuroscience and art.

More information about each series in the McGovern Institute postcard collection, including the color-your-own mindfulness postcards, can be found below.

Mindfulness Postcard Series, 2023

In winter 2023, the institute released its mindfulness postcard series, a collection of four different neuroscience-themed illustrations that can be colored in with pencils, markers, or paint. The postcard series was inspired by research conducted in John Gabrieli’s lab, which found that practicing mindfulness reduced children’s stress levels and negative emotions during the pandemic. These findings contribute to a growing body of evidence that practicing mindfulness — focusing awareness on the present, typically through meditation, but also through coloring — can change patterns of brain activity associated with emotions and mental health.

Download and color your own postcards.

Genes

The McGovern Institute is at the cutting edge of applications based on CRISPR, a genome editing tool pioneered by McGovern Investigator Feng Zhang. Hidden within this DNA-themed postcard are a clam, a virus, a bacteriophage, a snail, and the word CRISPR. Click the links to learn how these hidden elements relate to genetic engineering research at the McGovern Institute.

The McGovern Institute’s “mindfulness” postcard series includes this DNA-themed illustration containing five hidden design elements related to McGovern research. Image: Joseph Laney

Neurons

McGovern researchers probe the nanoscale and cellular processes that are critical to brain function, from the complex computations conducted in neurons to the synapses and neurotransmitters that facilitate messaging between cells. Find the mouse, worm, and microscope — three hidden elements related to cellular and molecular neuroscience research at the McGovern Institute — in the postcard below.

The McGovern Institute’s “mindfulness” postcard series includes this neuron-themed illustration containing three hidden design elements related to McGovern research. Image: Joseph Laney

Human Brain

Cognitive neuroscientists at the McGovern Institute examine the brain processes that come together to inform our thoughts and understanding of the world. Find the musical note, speech bubbles, and human face in this postcard and click on the links to learn more about how these hidden elements relate to brain research at the McGovern Institute.

The McGovern Institute’s “mindfulness” postcard series includes this brain-themed illustration containing three hidden design elements related to McGovern research. Image: Joseph Laney

Artificial Intelligence

McGovern researchers develop machine learning systems that mimic human processing of visual and auditory cues and construct algorithms to help us understand the complex computations made by the brain. Find the speech bubbles, DNA, and cochlea (spiral) in this postcard and click on the links to learn more about how these hidden elements relate to computational neuroscience research at the McGovern Institute.

The McGovern Institute’s “mindfulness” postcard series includes this AI-themed illustration containing three hidden design elements related to McGovern research. Image: Joseph Laney

Neuron Postcard Series, 2019

In 2019, the McGovern Institute released a second series of postcards based on the anatomy of a neuron. Each postcard includes text on the back side that describes McGovern research related to that specific part of the neuron. The descriptive text for each postcard is shown below.

Synapse

Snow melting off the branch of a bush at the water's edge creates a ripple effect in the pool of water below. Words at the bottom of the image say: "It All Begins at the SYNAPSE."

Signals flow through the nervous system from one neuron to the next across synapses.

Synapses are exquisitely organized molecular machines that control the transmission of information.

McGovern researchers are studying how disruptions in synapse function can lead to brain disorders like autism.

Image: Joseph Laney

Axon

Illustration of three bears hunting for fish in a flowing river, with the words: "Axon: Where Action Finds Potential."

The axon is the long, thin neural cable that carries electrical impulses called action potentials from the soma to synaptic terminals that contact downstream neurons.

Researchers at the McGovern Institute are developing and using tracers that label axons to reveal the elaborate circuit architecture of the brain.

Image: Joseph Laney

Soma

An elk stands on a rocky outcropping overlooking a large lake with an island in the center. Words at the top read: "Collect Your Thoughts at the Soma."

The soma, or cell body, is the control center of the neuron, where the nucleus is located.

It connects the dendrites to the axon, which sends information to other neurons.

At the McGovern Institute, neuroscientists are targeting the soma with proteins that can activate single neurons and map connections in the brain.

Image: Joseph Laney

Dendrites

A mountain lake at sunset with colorful fish and snow from a distant mountaintop melting into the lake. Words say: "DENDRITIC ARBOR."

Long branching neuronal processes called dendrites receive synaptic inputs from thousands of other neurons and carry those signals to the cell body.

McGovern neuroscientists have discovered that human dendrites have different electrical properties from those of other species, which may contribute to the enhanced computing power of the human brain.

Image: Joseph Laney

Brain Anatomy Postcard Series, 2017

The original brain anatomy-themed postcard series, developed in 2017, was inspired by the U.S. Works Progress Administration’s iconic national parks posters created in the 1930s and 1940s. Each postcard includes text on the back side that describes McGovern research related to that specific brain structure. The descriptive text for each postcard is shown below.

Sylvian Fissure

Illustration of an explorer in a cave, with the temporal and parietal lobes labeled.
The Sylvian fissure is a prominent groove on the side of the brain that separates the frontal and parietal lobes from the temporal lobe. McGovern researchers are studying a region near the right Sylvian fissure, called the rTPJ, which is involved in thinking about what another person is thinking.

Hippocampus

The hippocampus, named after its resemblance to the seahorse, plays an important role in memory. McGovern researchers are studying how changes in the strength of synapses (connections between neurons) in the hippocampus contribute to the formation and retention of memories.

Basal Ganglia

The basal ganglia are a group of deep brain structures best known for their control of movement. McGovern researchers are studying how the connections between the cerebral cortex and a part of the basal ganglia known as the striatum play a role in emotional decision making and motivation.

Arcuate Fasciculus

The arcuate fasciculus is a bundle of axons in the brain that connects Broca’s area, involved in speech production, and Wernicke’s area, involved in understanding language. McGovern researchers have found a correlation between the size of this structure and the risk of dyslexia in children.

Order and Share

To order your own McGovern brain postcards, contact our colleagues at the MIT Museum, where proceeds will support current and future exhibitions at the growing museum.

Please share a photo of yourself in your own lab (or natural habitat) with one of our cards on social media. Tell us what you’re studying and don’t forget to tag us @mcgovernmit using the hashtag #McGovernPostcards.

New gene-editing system precisely inserts large DNA sequences into cellular DNA

A team led by researchers from Broad Institute of MIT and Harvard, and the McGovern Institute for Brain Research at MIT, has characterized and engineered a new gene-editing system that can precisely and efficiently insert large DNA sequences into a genome. The system, harnessed from cyanobacteria and called CRISPR-associated transposase (CAST), allows efficient introduction of DNA while reducing the potential error-prone steps in the process — adding key capabilities to gene-editing technology and addressing a long-sought goal for precision gene editing.

Precise insertion of DNA has the potential to treat a large swath of genetic diseases by integrating new DNA into the genome while disabling the disease-related sequence. To accomplish this in cells, researchers have typically used CRISPR enzymes to cut the genome at the site of the deleterious sequence, and then relied on the cell’s own repair machinery to stitch the old and new DNA elements together. However, this approach has many limitations.

Using Escherichia coli bacteria, the researchers have now demonstrated that CAST can be programmed to efficiently insert new DNA at a designated site, with minimal editing errors and without relying on the cell’s own repair machinery. The system holds potential for much more efficient gene insertion compared to previous technologies, according to the team.

The researchers are working to apply this editing platform in eukaryotic organisms, including plant and animal cells, for precision research and therapeutic applications.

The team molecularly characterized and harnessed CAST from two cyanobacteria, Scytonema hofmanni and Anabaena cylindrica, and additionally revealed a new way that some CRISPR systems perform in nature: not to protect bacteria from viruses, but to facilitate the spread of transposon DNA.

The work, appearing in Science, was led by first author Jonathan Strecker, a postdoctoral fellow at the Broad Institute; graduate student Alim Ladha at MIT; and senior author Feng Zhang, a core institute member at the Broad Institute, investigator at the McGovern Institute for Brain Research at MIT, the James and Patricia Poitras Professor of Neuroscience at MIT, and an associate professor at MIT, with joint appointments in the departments of Brain and Cognitive Sciences and Biological Engineering. Collaborators include Eugene Koonin at the National Institutes of Health.

A New Role for a CRISPR-Associated System

“One of the long-sought-after applications for molecular biology is the ability to introduce new DNA into the genome precisely, efficiently, and safely,” explains Zhang. “We have worked on many bacterial proteins in the past to harness them for editing in human cells, and we’re excited to further develop CAST and open up these new capabilities for manipulating the genome.”

To expand the gene-editing toolbox, the team turned to transposons. Transposons (sometimes called “jumping genes”) are DNA sequences with associated proteins — transposases — that allow the DNA to be cut-and-pasted into other places.

Most transposons appear to jump randomly throughout the cellular genome and out to viruses or plasmids that may also be inhabiting a cell. However, some transposon subtypes in cyanobacteria have been computationally associated with CRISPR systems, suggesting that these transposons may naturally be guided towards more-specific genetic targets. This theorized function would be a new role for CRISPR systems; most known CRISPR elements are instead part of a bacterial immune system, in which Cas enzymes and their guide RNA will target and destroy viruses or plasmids.

In this paper, the research team identified the mechanisms at work and determined that some CRISPR-associated transposases have hijacked an enzyme called Cas12k and its guide to insert DNA at specific targets, rather than just cutting the target for defensive purposes.

“We dove deeply into this system in cyanobacteria, began taking CAST apart to understand all of its components, and discovered this novel biological function,” says Strecker, a postdoctoral fellow in Zhang’s lab at the Broad Institute. “CRISPR-based tools are often DNA-cutting tools, and they’re very efficient at disrupting genes. In contrast, CAST is naturally set up to integrate genes. To our knowledge, it’s the first system of this kind that has been characterized and manipulated.”

Harnessing CAST for Genome Editing

Once all the elements and molecular requirements of the CAST system were laid bare, the team focused on programming CAST to insert DNA at desired sites in E. coli.

“We reconstituted the system in E. coli and co-opted this mechanism in a way that was useful,” says Strecker. “We reprogrammed the system to introduce new DNA, up to 10 kilobase pairs long, into specific locations in the genome.”
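
To make the idea concrete, here is a toy Python sketch of guide-directed insertion. It is purely illustrative: the guide sequence, cargo, and fixed downstream offset are made-up stand-ins, not the actual CAST biochemistry, which involves Cas12k, a guide RNA, and transposase subunits acting on genomic DNA.

```python
# Toy model of guide-directed DNA insertion (illustrative only).
# The fixed downstream offset is an assumption for this sketch, not a
# measured property of the CAST system.

def cast_insert(genome: str, guide: str, cargo: str, offset: int = 60) -> str:
    """Insert `cargo` a fixed distance downstream of the guide-matched site."""
    site = genome.find(guide)
    if site == -1:
        raise ValueError("guide target not found; no insertion occurs")
    insert_at = site + len(guide) + offset
    if insert_at > len(genome):
        raise ValueError("insertion point falls outside the genome")
    return genome[:insert_at] + cargo + genome[insert_at:]

genome = "A" * 100 + "GGTCACGTACGATCGTACGATCAA" + "T" * 100
edited = cast_insert(genome, guide="GGTCACGTACGATCGTACGATCAA", cargo="CARGO")
print(len(edited) - len(genome))  # equals the cargo length
```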

The team envisions basic research, agricultural, or therapeutic applications based on this platform, such as introducing new genes to replace DNA that has mutated in a harmful way — for example, in sickle cell disease. Systems developed with CAST could potentially be used to integrate a healthy version of a gene into a cell’s genome, disabling or overriding the DNA causing problems.

Alternatively, rather than inserting DNA with the purpose of fixing a deleterious version of a gene, CAST may be used to augment healthy cells with elements that are therapeutically beneficial, according to the team. For example, in immunotherapy, a researcher may want to introduce a “chimeric antigen receptor” (CAR) into a specific spot in the genome of a T cell — enabling the T cell to recognize and destroy cancer cells.

“For any situation where people want to insert DNA, CAST could be a much more attractive approach,” says Zhang. “This just underscores how diverse nature can be and how many unexpected features we have yet to find.”

Support for this study was provided in part by the Human Frontier Science Program, New York Stem Cell Foundation, Mathers Foundation, NIH (1R01-HG009761, 1R01-MH110049, and 1DP1-HL141201), Howard Hughes Medical Institute, Poitras Center for Psychiatric Disorders Research, J. and P. Poitras, and Hock E. Tan and K. Lisa Yang Center for Autism Research.

J.S. and F.Z. are co-inventors on US provisional patent application no. 62/780,658 filed by the Broad Institute, relating to CRISPR-associated transposases.

Expression plasmids are available from Addgene.

Our brains appear uniquely tuned for musical pitch

In the eternal search for understanding what makes us human, scientists found that our brains are more sensitive to pitch, the harmonic sounds we hear when listening to music, than our evolutionary relative the macaque monkey. The study, funded in part by the National Institutes of Health, highlights the promise of Sound Health, a joint project between the NIH and the John F. Kennedy Center for the Performing Arts, in association with the National Endowment for the Arts, that aims to understand the role of music in health.

“We found that a certain region of our brains has a stronger preference for sounds with pitch than macaque monkey brains,” said Bevil Conway, Ph.D., investigator in the NIH’s Intramural Research Program and a senior author of the study published in Nature Neuroscience. “The results raise the possibility that these sounds, which are embedded in speech and music, may have shaped the basic organization of the human brain.”

The study started with a friendly bet between Dr. Conway and Sam Norman-Haignere, Ph.D., a post-doctoral fellow at Columbia University’s Zuckerman Institute for Mind, Brain, and Behavior and the first author of the paper.

At the time, both were working at the Massachusetts Institute of Technology (MIT). Dr. Conway’s team had been searching for differences between how human and monkey brains control vision only to discover that there are very few. Their brain mapping studies suggested that humans and monkeys see the world in very similar ways. But then, Dr. Conway heard about some studies on hearing being done by Dr. Norman-Haignere, who, at the time, was a post-doctoral fellow in the laboratory of Josh H. McDermott, Ph.D., associate professor at MIT.

“I told Bevil that we had a method for reliably identifying a region in the human brain that selectively responds to sounds with pitch,” said Dr. Norman-Haignere. That is when they got the idea to compare humans with monkeys. Based on his studies, Dr. Conway bet that they would see no differences.

To test this, the researchers played a series of harmonic sounds, or tones, to healthy volunteers and monkeys. Meanwhile, functional magnetic resonance imaging (fMRI) was used to monitor brain activity in response to the sounds. The researchers also monitored brain activity in response to sounds of toneless noises that were designed to match the frequency levels of each tone played.

At first glance, the scans looked similar and confirmed previous studies. Maps of the auditory cortex of human and monkey brains had similar hot spots of activity regardless of whether the sounds contained tones.

However, when the researchers looked more closely at the data, they found evidence suggesting the human brain was highly sensitive to tones. The human auditory cortex was much more responsive than the monkey cortex when they looked at the relative activity between tones and equivalent noisy sounds.
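
That kind of “relative activity” comparison can be summarized with a simple contrast index between tone-evoked and noise-evoked responses. The sketch below uses made-up per-voxel response values; it is not the study’s actual analysis pipeline.

```python
import numpy as np

# Hypothetical mean responses for four voxels (arbitrary units).
tone_resp = np.array([1.8, 2.1, 0.9, 1.5])    # responses to harmonic tones
noise_resp = np.array([1.0, 1.1, 0.8, 1.4])   # responses to spectrally matched noise

# Contrast index in [-1, 1]; positive values indicate a preference for tones.
selectivity = (tone_resp - noise_resp) / (tone_resp + noise_resp)
print(selectivity.round(2))
```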

“We found that human and monkey brains had very similar responses to sounds in any given frequency range. It’s when we added tonal structure to the sounds that some of these same regions of the human brain became more responsive,” said Dr. Conway. “These results suggest the macaque monkey may experience music and other sounds differently. In contrast, the macaque’s experience of the visual world is probably very similar to our own. It makes one wonder what kind of sounds our evolutionary ancestors experienced.”

Further experiments supported these results. Slightly raising the volume of the tonal sounds had little effect on the tone sensitivity observed in the brains of two monkeys.

Finally, the researchers saw similar results when they used sounds that contained more natural harmonies for monkeys by playing recordings of macaque calls. Brain scans showed that the human auditory cortex was much more responsive than the monkey cortex when they compared relative activity between the calls and toneless, noisy versions of the calls.

“This finding suggests that speech and music may have fundamentally changed the way our brain processes pitch,” said Dr. Conway. “It may also help explain why it has been so hard for scientists to train monkeys to perform auditory tasks that humans find relatively effortless.”

Earlier this year, other scientists from around the U.S. applied for the first round of NIH Sound Health research grants. Some of these grants may eventually support scientists who plan to explore how music turns on the circuitry of the auditory cortex that makes our brains sensitive to musical pitch.

This study was supported by the NINDS, NEI, NIMH, and NIA Intramural Research Programs and grants from the NIH (EY13455, EY023322, EB015896, RR021110), the National Science Foundation (grants 1353571 and CCF-1231216), the McDonnell Foundation, and the Howard Hughes Medical Institute.

Antenna-like inputs unexpectedly active in neural computation

Most neurons have many branching extensions called dendrites that receive input from thousands of other neurons. Dendrites aren’t just passive information-carriers, however. According to a new study from MIT, they appear to play a surprisingly large role in neurons’ ability to translate incoming signals into electrical activity.

Neuroscientists had previously suspected that dendrites might be active only rarely, under specific circumstances, but the MIT team found that dendrites are nearly always active when the main cell body of the neuron is active.

“It seems like dendritic spikes are an intrinsic feature of how neurons in our brain can compute information. They’re not a rare event,” says Lou Beaulieu-Laroche, an MIT graduate student and the lead author of the study. “All the neurons that we looked at had these dendritic spikes, and they had dendritic spikes very frequently.”

The findings suggest that the role of dendrites in the brain’s computational ability is much larger than had previously been thought, says Mark Harnett, who is the Fred and Carole Middleton Career Development Assistant Professor of Brain and Cognitive Sciences, a member of the McGovern Institute for Brain Research, and the senior author of the paper.

“It’s really quite different than how the field had been thinking about this,” he says. “This is evidence that dendrites are actively engaged in producing and shaping the outputs of neurons.”

Graduate student Enrique Toloza and technical associate Norma Brown are also authors of the paper, which appears in Neuron on June 6.

“A far-flung antenna”

Dendrites receive input from many other neurons and carry those signals to the cell body, also called the soma. If stimulated enough, a neuron fires an action potential — an electrical impulse that spreads to other neurons. Large networks of these neurons communicate with each other to perform complex cognitive tasks such as producing speech.

Through imaging and electrical recording, neuroscientists have learned a great deal about the anatomical and functional differences between different types of neurons in the brain’s cortex, but little is known about how they incorporate dendritic inputs and decide whether to fire an action potential. Dendrites give neurons their characteristic branching tree shape, and the size of the “dendritic arbor” far exceeds the size of the soma.

“It’s an enormous, far-flung antenna that’s listening to thousands of synaptic inputs distributed in space along that branching structure from all the other neurons in the network,” Harnett says.

Some neuroscientists have hypothesized that dendrites are active only rarely, while others thought it possible that dendrites play a more central role in neurons’ overall activity. Until now, it has been difficult to test which of these ideas is more accurate, Harnett says.

To explore dendrites’ role in neural computation, the MIT team used calcium imaging to simultaneously measure activity in both the soma and dendrites of individual neurons in the visual cortex of the brain. Calcium flows into neurons when they are electrically active, so this measurement allowed the researchers to compare the activity of dendrites and soma of the same neuron. The imaging was done while mice performed simple tasks such as running on a treadmill or watching a movie.

Unexpectedly, the researchers found that activity in the soma was highly correlated with dendrite activity. That is, when the soma of a particular neuron was active, the dendrites of that neuron were also active most of the time. This was particularly surprising because the animals weren’t performing any kind of cognitively demanding task, Harnett says.
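
A minimal way to quantify that relationship is the Pearson correlation between the somatic and dendritic calcium traces. The sketch below uses synthetic ΔF/F traces as stand-ins for the imaging data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calcium traces for one neuron (stand-ins for real ΔF/F data).
soma = rng.poisson(0.2, size=1000).astype(float)    # somatic activity events
dendrite = soma + rng.normal(0.0, 0.3, size=1000)   # dendritic trace tracking the soma

r = np.corrcoef(soma, dendrite)[0, 1]               # Pearson correlation
print(f"soma-dendrite correlation: {r:.2f}")
```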

“They weren’t engaged in a task where they had to really perform and call upon cognitive processes or memory. This is pretty simple, low-level processing, and already we have evidence for active dendritic processing in almost all the neurons,” he says. “We were really surprised to see that.”

Evolving patterns

The researchers don’t yet know precisely how dendritic input contributes to neurons’ overall activity, or what exactly the neurons they studied are doing.

“We know that some of those neurons respond to some visual stimuli, but we don’t necessarily know what those individual neurons are representing. All we can say is that whatever the neuron is representing, the dendrites are actively participating in that,” Beaulieu-Laroche says.

While more work remains to determine exactly how the activity in the dendrites and the soma are linked, “it is these tour-de-force in vivo measurements that are critical for explicitly testing hypotheses regarding electrical signaling in neurons,” says Marla Feller, a professor of neurobiology at the University of California at Berkeley, who was not involved in the research.

The MIT team now plans to investigate how dendritic activity contributes to overall neuronal function by manipulating dendrite activity and then measuring how it affects the activity of the cell body, Harnett says. They also plan to study whether the activity patterns they observed evolve as animals learn a new task.

“One hypothesis is that dendritic activity will actually sharpen up for representing features of a task you taught the animals, and all the other dendritic activity, and all the other somatic activity, is going to get dampened down in the rest of the cortical cells that are not involved,” Harnett says.

The research was funded by the Natural Sciences and Engineering Research Council of Canada and the U.S. National Institutes of Health.

How we make complex decisions

When making a complex decision, we often break the problem down into a series of smaller decisions. For example, when deciding how to treat a patient, a doctor may go through a hierarchy of steps — choosing a diagnostic test, interpreting the results, and then prescribing a medication.

Making hierarchical decisions is straightforward when the sequence of choices leads to the desired outcome. But when the result is unfavorable, it can be tough to decipher what went wrong. For example, if a patient doesn’t improve after treatment, there are many possible reasons why: Maybe the diagnostic test is accurate only 75 percent of the time, or perhaps the medication works for only 50 percent of patients. To decide what to do next, the doctor must take these probabilities into account.

In a new study, MIT neuroscientists explored how the brain reasons about probable causes of failure after a hierarchy of decisions. They discovered that the brain performs two computations using a distributed network of areas in the frontal cortex. First, the brain computes confidence over the outcome of each decision to figure out the most likely cause of a failure, and second, when it is not easy to discern the cause, the brain makes additional attempts to gain more confidence.

“Creating a hierarchy in one’s mind and navigating that hierarchy while reasoning about outcomes is one of the exciting frontiers of cognitive neuroscience,” says Mehrdad Jazayeri, the Robert A. Swanson Career Development Professor of Life Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

MIT graduate student Morteza Sarafyzad is the lead author of the paper, which appears in Science on May 16.

Hierarchical reasoning

Previous studies of decision-making in animal models have focused on relatively simple tasks. One line of research has focused on how the brain makes rapid decisions by evaluating momentary evidence. For example, a large body of work has characterized the neural substrates and mechanisms that allow animals to categorize unreliable stimuli on a trial-by-trial basis. Other research has focused on how the brain chooses among multiple options by relying on previous outcomes across multiple trials.

“These have been very fruitful lines of work,” Jazayeri says. “However, they really are the tip of the iceberg of what humans do when they make decisions. As soon as you put yourself in any real decision-making situation, be it choosing a partner, choosing a car, deciding whether to take this drug or not, these become really complicated decisions. Oftentimes there are many factors that influence the decision, and those factors can operate at different timescales.”

The MIT team devised a behavioral task that allowed them to study how the brain processes information at multiple timescales to make decisions. The basic design was that animals would make one of two eye movements depending on whether the time interval between two flashes of light was shorter or longer than 850 milliseconds.

A twist required the animals to solve the task through hierarchical reasoning: The rule that determined which of the two eye movements had to be made switched covertly after 10 to 28 trials. Therefore, to receive reward, the animals had to choose the correct rule, and then make the correct eye movement depending on the rule and interval. However, because the animals were not instructed about the rule switches, they could not straightforwardly determine whether an error was caused because they chose the wrong rule or because they misjudged the interval.
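
A minimal simulation of this task structure, under the parameters stated above (an 850-millisecond boundary and covert rule switches every 10 to 28 trials), might look like the sketch below. The perceptual noise level and trial count are arbitrary choices, and the simulated subject is assumed to know the current rule, so its errors come only from misjudged intervals; inferring the rule is precisely the part the animals had to solve.

```python
import random

random.seed(1)
BOUNDARY_MS = 850

def run_session(n_trials: int = 200, noise_ms: float = 80.0):
    """Yield (interval, response, correct) for trials with covert rule switches."""
    rule = 0
    trials_until_switch = random.randint(10, 28)
    for _ in range(n_trials):
        interval = random.uniform(600, 1100)             # true flash interval
        percept = interval + random.gauss(0, noise_ms)   # noisy interval estimate
        judged_long = percept > BOUNDARY_MS
        response = int(judged_long) ^ rule               # the rule flips the mapping
        correct = int(interval > BOUNDARY_MS) ^ rule
        yield interval, response, response == correct
        trials_until_switch -= 1
        if trials_until_switch == 0:                     # covert rule switch
            rule ^= 1
            trials_until_switch = random.randint(10, 28)

results = [ok for _, _, ok in run_session()]
print(f"accuracy: {sum(results) / len(results):.2f}")
```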

The researchers used this experimental design to probe the computational principles and neural mechanisms that support hierarchical reasoning. Theory and behavioral experiments in humans suggest that reasoning about the potential causes of errors depends in large part on the brain’s ability to measure the degree of confidence in each step of the process. “One of the things that is thought to be critical for hierarchical reasoning is to have some level of confidence about how likely it is that different nodes [of a hierarchy] could have led to the negative outcome,” Jazayeri says.

The researchers were able to study the effect of confidence by adjusting the difficulty of the task. In some trials, the interval between the two flashes was much shorter or longer than 850 milliseconds. These trials were relatively easy and afforded a high degree of confidence. In other trials, the animals were less confident in their judgments because the interval was closer to the boundary and difficult to discriminate.

As they had hypothesized, the researchers found that the animals’ behavior was influenced by their confidence in their performance. When the interval was easy to judge, the animals were much quicker to switch to the other rule when they found out they were wrong. When the interval was harder to judge, the animals were less confident in their performance and applied the same rule a few more times before switching.
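
That confidence-dependent behavior can be caricatured as a simple policy: after an error, blame the rule (and switch) only when the interval was far from the boundary. The threshold below is a hypothetical choice for illustration, not a parameter fitted to the study’s data.

```python
def should_switch_rule(interval_ms: float, made_error: bool,
                       boundary_ms: float = 850.0,
                       high_conf_margin_ms: float = 150.0) -> bool:
    """Toy policy: attribute an error to a rule switch only under high confidence."""
    if not made_error:
        return False
    high_confidence = abs(interval_ms - boundary_ms) > high_conf_margin_ms
    return high_confidence  # low confidence -> blame the interval judgment instead

print(should_switch_rule(1100, made_error=True))  # easy trial + error -> switch
print(should_switch_rule(870, made_error=True))   # hard trial + error -> keep rule
```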

“They know that they’re not confident, and they know that if they’re not confident, it’s not necessarily the case that the rule has changed. They know they might have made a mistake [in their interval judgment],” Jazayeri says.

Decision-making circuit

By recording neural activity in the frontal cortex just after each trial was finished, the researchers were able to identify two regions that are key to hierarchical decision-making. They found that both of these regions, known as the anterior cingulate cortex (ACC) and dorsomedial frontal cortex (DMFC), became active after the animals were informed about an incorrect response. When the researchers analyzed the neural activity in relation to the animals’ behavior, it became clear that neurons in both areas signaled the animals’ belief about a possible rule switch. Notably, the activity related to animals’ belief was “louder” when animals made a mistake after an easy trial, and after consecutive mistakes.

The researchers also found that while these areas showed similar patterns of activity, it was activity in the ACC in particular that predicted when the animal would switch rules, suggesting that ACC plays a central role in switching decision strategies. Indeed, the researchers found that direct manipulation of neural activity in ACC was sufficient to interfere with the animals’ rational behavior.

“There exists a distributed circuit in the frontal cortex involving these two areas, and they seem to be hierarchically organized, just like the task would demand,” Jazayeri says.

Daeyeol Lee, a professor of neuroscience, psychology, and psychiatry at Yale School of Medicine, says the study overcomes what has been a major obstacle in studying this kind of decision-making, namely, a lack of animal models to study the dynamics of brain activity at single-neuron resolution.

“Sarafyazd and Jazayeri have developed an elegant decision-making task that required animals to evaluate multiple types of evidence, and identified how the two separate regions in the medial frontal cortex are critically involved in handling different sources of errors in decision making,” says Lee, who was not involved in the research. “This study is a tour de force in both rigor and creativity, and peels off another layer of mystery about the prefrontal cortex.”

Putting vision models to the test

MIT neuroscientists have performed the most rigorous testing yet of computational models that mimic the brain’s visual cortex.

Using their current best model of the brain’s visual neural network, the researchers designed a new way to precisely control individual neurons and populations of neurons in the middle of that network. In an animal study, the team then showed that the information gained from the computational model enabled them to create images that strongly activated specific brain neurons of their choosing.

The findings suggest that the current versions of these models are similar enough to the brain that they could be used to control brain states in animals. The study also helps to establish the usefulness of these vision models, which have generated vigorous debate over whether they accurately mimic how the visual cortex works, says James DiCarlo, the head of MIT’s Department of Brain and Cognitive Sciences, an investigator in the McGovern Institute for Brain Research and the Center for Brains, Minds, and Machines, and the senior author of the study.

“People have questioned whether these models provide understanding of the visual system,” he says. “Rather than debate that in an academic sense, we showed that these models are already powerful enough to enable an important new application. Whether you understand how the model works or not, it’s already useful in that sense.”

MIT postdocs Pouya Bashivan and Kohitij Kar are the lead authors of the paper, which appears in the May 2 online edition of Science.

Neural control

Over the past several years, DiCarlo and others have developed models of the visual system based on artificial neural networks. Each network starts out with an arbitrary architecture consisting of model neurons, or nodes, that can be connected to each other with different strengths, also called weights.

The researchers then train the models on a library of more than 1 million images. As the researchers show the model each image, along with a label for the most prominent object in the image, such as an airplane or a chair, the model learns to recognize objects by changing the strengths of its connections.
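
In modern terms, this is standard supervised training of a convolutional network. The sketch below is a deliberately tiny PyTorch stand-in: a toy architecture and random tensors in place of the million-image library, showing only the shape of the procedure.

```python
import torch
from torch import nn

# Toy stand-in for an ImageNet-scale DCNN; real models are far deeper.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 10),                       # 10 object categories in this toy example
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(32, 3, 64, 64)          # random stand-ins for labeled photos
labels = torch.randint(0, 10, (32,))         # stand-ins for object labels

for step in range(5):                        # real training runs for many epochs
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)    # compare predictions with labels
    loss.backward()                          # gradients adjust connection strengths
    optimizer.step()
print(float(loss))
```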

It’s difficult to determine exactly how the model achieves this kind of recognition, but DiCarlo and his colleagues have previously shown that the “neurons” within these models produce activity patterns very similar to those seen in the animal visual cortex in response to the same images.

In the new study, the researchers wanted to test whether their models could perform some tasks that previously have not been demonstrated. In particular, they wanted to see if the models could be used to control neural activity in the visual cortex of animals.

“So far, what has been done with these models is predicting what the neural responses would be to other stimuli that they have not seen before,” Bashivan says. “The main difference here is that we are going one step further and using the models to drive the neurons into desired states.”

To achieve this, the researchers first created a one-to-one map of neurons in the brain’s visual area V4 to nodes in the computational model. They did this by showing images to animals and to the models, and comparing their responses to the same images. There are millions of neurons in area V4, but for this study, the researchers created maps for subpopulations of five to 40 neurons at a time.

“Once each neuron has an assignment, the model allows you to make predictions about that neuron,” DiCarlo says.
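
One simple version of such a map assigns each recorded neuron the model unit whose responses correlate best across a shared image set. The sketch below uses synthetic response matrices; the paper’s actual mapping procedure was more sophisticated than this single-best-unit assignment.

```python
import numpy as np

rng = np.random.default_rng(0)
n_images, n_neurons, n_units = 200, 8, 50

neuron_resp = rng.normal(size=(n_images, n_neurons))  # recorded V4 responses (synthetic)
unit_resp = rng.normal(size=(n_images, n_units))      # model-node responses (synthetic)

# Correlate every neuron with every model unit across images,
# then give each neuron its best-matching unit.
z_neurons = (neuron_resp - neuron_resp.mean(0)) / neuron_resp.std(0)
z_units = (unit_resp - unit_resp.mean(0)) / unit_resp.std(0)
corr = z_neurons.T @ z_units / n_images               # (n_neurons, n_units)
assignment = corr.argmax(axis=1)
print(assignment)
```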

The researchers then set out to see if they could use those predictions to control the activity of individual neurons in the visual cortex. The first type of control, which they called “stretching,” involves showing an image that will drive the activity of a specific neuron far beyond the activity usually elicited by “natural” images similar to those used to train the neural networks.

The researchers found that when they showed animals these “synthetic” images, which are created by the models and do not resemble natural objects, the target neurons did respond as expected. On average, the neurons showed about 40 percent more activity in response to these images than when they were shown natural images like those used to train the model. This kind of control has never been reported before.
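
“Stretching” is, in optimization terms, activation maximization: start from noise and adjust the pixels by gradient ascent on the mapped unit’s response. The sketch below does this on a toy network; the actual study synthesized images through the trained deep model, with additional constraints.

```python
import torch
from torch import nn

torch.manual_seed(0)

# Toy network standing in for the mapped vision model.
model = nn.Sequential(
    nn.Conv2d(3, 8, 5), nn.ReLU(), nn.Flatten(),
    nn.Linear(8 * 60 * 60, 100),
)
target_unit = 7                                   # unit mapped to the target neuron

image = torch.randn(1, 3, 64, 64, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for _ in range(100):
    optimizer.zero_grad()
    activation = model(image)[0, target_unit]
    (-activation).backward()                      # ascend the unit's activation
    optimizer.step()
    with torch.no_grad():
        image.clamp_(-2, 2)                       # keep pixel values bounded

print(float(model(image)[0, target_unit]))        # activation after synthesis
```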

“That they succeeded in doing this is really amazing. It’s as if, for that neuron at least, its ideal image suddenly leaped into focus. The neuron was suddenly presented with the stimulus it had always been searching for,” says Aaron Batista, an associate professor of bioengineering at the University of Pittsburgh, who was not involved in the study. “This is a remarkable idea, and to pull it off is quite a feat. It is perhaps the strongest validation so far of the use of artificial neural networks to understand real neural networks.”

In a similar set of experiments, the researchers attempted to generate images that would drive one neuron maximally while also keeping the activity in nearby neurons very low, a more difficult task. For most of the neurons they tested, the researchers were able to enhance the activity of the target neuron with little increase in the surrounding neurons.
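
The “one hot neuron” setting changes only the objective: reward the target unit while penalizing its neighbors. A hedged sketch of such a loss follows; the penalty weight is a hypothetical knob, not a value from the paper.

```python
import torch

def one_hot_loss(activations: torch.Tensor, target: int,
                 penalty: float = 1.0) -> torch.Tensor:
    """Loss that maximizes one unit while suppressing all others.
    `activations` is a 1-D tensor of model-unit responses to one image."""
    others = torch.cat([activations[:target], activations[target + 1:]])
    return -activations[target] + penalty * others.abs().mean()

acts = torch.tensor([0.2, 1.5, 0.1], requires_grad=True)
loss = one_hot_loss(acts, target=1)
loss.backward()  # in image synthesis, this gradient flows back to the pixels
print(float(loss))
```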

“A common trend in neuroscience is that experimental data collection and computational modeling are executed somewhat independently, resulting in very little model validation, and thus no measurable progress. Our efforts bring back to life this ‘closed loop’ approach, engaging model predictions and neural measurements that are critical to the success of building and testing models that will most resemble the brain,” Kar says.

Measuring accuracy

The researchers also showed that they could use the model to predict how neurons of area V4 would respond to synthetic images. Most previous tests of these models have used the same type of naturalistic images that were used to train the model. The MIT team found that the models were about 54 percent accurate at predicting how the brain would respond to the synthetic images, compared to nearly 90 percent accuracy when the natural images are used.

“In a sense, we’re quantifying how accurate these models are at making predictions outside the domain where they were trained,” Bashivan says. “Ideally the model should be able to predict accurately no matter what the input is.”

The researchers now hope to improve the models’ accuracy by allowing them to incorporate the new information they learn from seeing the synthetic images, which was not done in this study.

This kind of control could be useful for neuroscientists who want to study how different neurons interact with each other, and how they might be connected, the researchers say. Farther in the future, this approach could potentially be useful for treating mood disorders such as depression. The researchers are now working on extending their model to the inferotemporal cortex, which feeds into the amygdala, which is involved in processing emotions.

“If we had a good model of the neurons that are engaged in experiencing emotions or causing various kinds of disorders, then we could use that model to drive the neurons in a way that would help to ameliorate those disorders,” Bashivan says.

The research was funded by the Intelligence Advanced Research Projects Agency, the MIT-IBM Watson AI Lab, the National Eye Institute, and the Office of Naval Research.

Can we think without language?

As part of our Ask the Brain series, Anna Ivanova, a graduate student who studies how the brain processes language in the labs of Nancy Kanwisher and Evelina Fedorenko, answers the question, “Can we think without language?”

Graduate student Anna Ivanova studies language processing in the brain.

_____

Imagine a woman – let’s call her Sue. One day Sue suffers a stroke that destroys large areas of brain tissue within her left hemisphere. As a result, she develops a condition known as global aphasia, meaning she can no longer produce or understand phrases and sentences. The question is: to what extent are Sue’s thinking abilities preserved?

Many writers and philosophers have drawn a strong connection between language and thought. Oscar Wilde called language “the parent, and not the child, of thought.” Ludwig Wittgenstein claimed that “the limits of my language mean the limits of my world.” And Bertrand Russell stated that the role of language is “to make possible thoughts which could not exist without it.” Given this view, Sue should have irreparable damage to her cognitive abilities when she loses access to language. Do neuroscientists agree? Not quite.

Neuroimaging evidence has revealed a specialized set of regions within the human brain that respond strongly and selectively to language.

This language system seems to be distinct from regions that are linked to our ability to plan, remember, think about the past and future, reason in social situations, experience empathy, make moral decisions, and construct our self-image. Thus, vast portions of our everyday cognitive experiences appear to be unrelated to language per se.

But what about Sue? Can she really think the way we do?

While we cannot directly measure what it’s like to think like a neurotypical adult, we can probe Sue’s cognitive abilities by asking her to perform a variety of different tasks. It turns out that patients with global aphasia can solve arithmetic problems, reason about the intentions of others, and engage in complex causal reasoning tasks. They can tell whether a drawing depicts a real-life event and laugh when it doesn’t. Some of them play chess in their spare time. Some even engage in creative tasks – the composer Vissarion Shebalin continued to write music even after a stroke that left him severely aphasic.

Some readers might find these results surprising, given that their own thoughts seem to be tied to language so closely. If you find yourself in that category, I have a surprise for you – research has established that not everybody has inner speech experiences. A bilingual friend of mine sometimes gets asked if she thinks in English or Polish, but she doesn’t quite get the question (“how can you think in a language?”). Another friend of mine claims that he “thinks in landscapes,” a sentiment that conveys the pictorial nature of some people’s thoughts. Therefore, even inner speech does not appear to be necessary for thought.

Have we solved the mystery then? Can we claim that language and thought are completely independent and Bertrand Russell was wrong? Only to some extent. We have shown that damage to the language system within an adult human brain leaves most other cognitive functions intact. However, when it comes to the language-thought link across the entire lifespan, the picture is far less clear. While available evidence is scarce, it does indicate that some of the cognitive functions discussed above are, at least to some extent, acquired through language.

Perhaps the clearest case is numbers. There are certain tribes around the world whose languages do not have number words – some might only have words for one through five (Munduruku), and some won’t even have those (Pirahã). Speakers of Pirahã have been shown to make mistakes on one-to-one matching tasks (“get as many sticks as there are balls”), suggesting that language plays an important role in bootstrapping exact number manipulations.

Another way to examine the influence of language on cognition over time is by studying cases when language access is delayed. Deaf children born into hearing families often do not get exposure to sign languages for the first few months or even years of life; such language deprivation has been shown to impair their ability to engage in social interactions and reason about the intentions of others. Thus, while the language system may not be directly involved in the process of thinking, it is crucial for acquiring enough information to properly set up various cognitive domains.

Even after her stroke, our patient Sue will have access to a wide range of cognitive abilities. She will be able to think by drawing on neural systems underlying many non-linguistic skills, such as numerical cognition, planning, and social reasoning. It is worth bearing in mind, however, that at least some of those systems might have relied on language back when Sue was a child. While the static view of the human mind suggests that language and thought are largely disconnected, the dynamic view hints at a rich nature of language-thought interactions across development.

_____

Do you have a question for The Brain? Ask it here.

Ed Boyden elected to National Academy of Sciences

Ed Boyden has been elected to join the National Academy of Sciences (NAS). The organization, established by an act of Congress during the height of the Civil War, was founded to provide independent and objective advice on scientific matters to the nation, and is actively engaged in furthering science in the United States. Each year NAS members recognize fellow scientists through election to the academy based on their distinguished and continuing achievements in original research.

“I’m very honored and grateful to have been elected to the NAS,” says Boyden. “This is a testament to the work of many graduate students, postdoctoral scholars, research scientists, and staff at MIT who have worked with me over the years, and many collaborators and friends at MIT and around the world who have helped our group on this mission to advance neuroscience through new tools and ways of thinking.”

Boyden’s research creates and applies technologies that aim to expand our understanding of the brain. He notably co-invented optogenetics, a game-changing technology that has revolutionized neurobiology, in an independent side collaboration conducted in parallel with his PhD studies. This technology uses targeted expression of light-sensitive channels and pumps to activate or suppress neuronal activity in vivo using light. Optogenetics quickly swept the field of neurobiology and has been leveraged to understand how specific neurons and brain regions contribute to behavior and to disease.

His research since has an overarching focus on understanding the brain. To this end, he and his lab have the ambitious goal of developing technologies that can map, record, and manipulate the brain. This has led, as selected examples, to the invention of expansion microscopy, a super-resolution imaging technology that can capture neurons’ microstructures and reveal their complex connections, even across large-scale neural circuits; voltage-sensitive fluorescent reporters that allow neural activity to be monitored in vivo; and temporal interference stimulation, a non-invasive brain stimulation technique that allows selective activation of subcortical brain regions.

“We are all incredibly happy to see Ed being elected to the academy,” says Robert Desimone, director of the McGovern Institute for Brain Research at MIT. “He has been consistently innovative, inventing new ways of manipulating and observing neurons that are revolutionizing the field of neuroscience.”

This year the NAS, an organization that includes over 500 Nobel Laureates, elected 100 new members and 25 foreign associates. Three MIT professors were elected this year, with Paula T. Hammond (David H. Koch (1962) Professor of Engineering and Department Head, Chemical Engineering) and Aviv Regev (HHMI Investigator and Professor in the Department of Biology) being elected alongside Boyden. Boyden becomes the seventh member of the McGovern Institute faculty to join the National Academy of Sciences.

The formal induction ceremony for new NAS members, during which they sign the ledger whose first signatory is Abraham Lincoln, will be held at the Academy’s annual meeting in Washington D.C. next spring.

Alumnus gives MIT $4.5 million to study effects of cannabis on the brain

The following news is adapted from a press release issued in conjunction with Harvard Medical School.

Charles R. Broderick, an alumnus of MIT and Harvard University, has made gifts to both alma maters to support fundamental research into the effects of cannabis on the brain and behavior.

The gifts, totaling $9 million, represent the largest donation to date to support independent research on the science of cannabinoids. The donation will allow experts in the fields of neuroscience and biomedicine at MIT and Harvard Medical School to conduct research that may ultimately help unravel the biology of cannabinoids, illuminate their effects on the human brain, catalyze treatments, and inform evidence-based clinical guidelines, societal policies, and regulation of cannabis.

Lagging behind legislation

With the increasing use of cannabis both for medicinal and recreational purposes, there is a growing concern about critical gaps in knowledge.

In 2017, the National Academies of Sciences, Engineering, and Medicine issued a report calling upon philanthropic organizations, private companies, public agencies and others to develop a “comprehensive evidence base” on the short- and long-term health effects — both beneficial and harmful — of cannabis use.

“Our desire is to fill the research void that currently exists in the science of cannabis,” says Broderick, who was an early investor in Canada’s medical marijuana market.

Broderick is the founder of Uji Capital LLC, a family office focused on quantitative opportunities in global equity capital markets. Identifying the growth of the Canadian legal cannabis market as a strategic investment opportunity, Broderick took equity positions in Tweed Marijuana Inc. and Aphria Inc., which have since grown into two of North America’s most successful cannabis companies. Subsequently, Broderick made a private investment in and served as a board member for Tokyo Smoke, a cannabis brand portfolio, which merged in 2017 to create Hiku Brands, where he served as chairman. Hiku Brands was acquired by Canopy Growth Corp. in 2018.

The Broderick gifts, made to Harvard Medical School and to MIT’s School of Science through the Picower Institute for Learning and Memory and the McGovern Institute for Brain Research, will support independent studies of the neurobiology of cannabis; its effects on brain development, various organ systems, and overall health, including treatment and therapeutic contexts; and its cognitive, behavioral, and social ramifications.

“I want to destigmatize the conversation around cannabis — and, in part, that means providing facts to the medical community, as well as the general public,” says Broderick, who argues that independent research needs to form the basis for policy discussions, regardless of whether it is good for business. “Then we’re all working from the same information. We need to replace rhetoric with research.”

MIT: Focused on brain health and function

The gift to MIT from Broderick will provide $4.5 million over three years to support independent research for four scientists at the McGovern and Picower institutes.

Two of these researchers — John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology, a professor of brain and cognitive sciences, and a member of MIT’s McGovern Institute for Brain Research; and Myriam Heiman, the Latham Family Associate Professor of Neuroscience at the Picower Institute — will separately explore the relationship between cannabis and schizophrenia.

Gabrieli, who directs the Martinos Imaging Center at MIT, will monitor any potential therapeutic value of cannabis for adults with schizophrenia using fMRI scans and behavioral studies.

“The ultimate goal is to improve brain health and wellbeing,” says Gabrieli. “And we have to make informed decisions on the way to this goal, wherever the science leads us. We need more data.”

Heiman, who is a molecular neuroscientist, will study how chronic exposure to the phytocannabinoid molecules THC and CBD may alter the developmental molecular trajectories of cell types implicated in schizophrenia.

“Our lab’s research may provide insight into why several emerging lines of evidence suggest that adolescent cannabis use can be associated with adverse outcomes not seen in adults,” says Heiman.

In addition to these studies, Gabrieli also hopes to investigate whether cannabis can have therapeutic value for autism spectrum disorders, and Heiman plans to look at whether cannabis can have therapeutic value for Huntington’s disease.

MIT Institute Professor Ann Graybiel has proposed to study the cannabinoid 1 (CB1) receptor, which mediates many of the effects of cannabinoids. Her team recently found that CB1 receptors are tightly linked to dopamine — a neurotransmitter that affects both mood and motivation. Graybiel, who is also a member of the McGovern Institute, will examine how CB1 receptors in the striatum, a deep brain structure implicated in learning and habit formation, may influence dopamine release in the brain. These findings will be important for understanding the effects of cannabis on casual users, as well as its relationship to addictive states and neuropsychiatric disorders.

Earl Miller, Picower Professor of Neuroscience at the Picower Institute, will study effects of cannabinoids on both attention and working memory. His lab has recently formulated a model of working memory and unlocked how anesthetics reduce consciousness, showing in both cases a key role in the brain’s frontal cortex for brain rhythms, or the synchronous firing of neurons. He will observe how these rhythms may be affected by cannabis use — findings that may be able to shed light on tasks like driving where maintenance of attention is especially crucial.

Harvard Medical School: Mobilizing basic scientists and clinicians to solve an acute biomedical challenge 

The Broderick gift provides $4.5 million to establish the Charles R. Broderick Phytocannabinoid Research Initiative at Harvard Medical School, funding basic, translational and clinical research across the HMS community to generate fundamental insights about the effects of cannabinoids on brain function, various organ systems, and overall health.

The research initiative will span basic science and clinical disciplines, ranging from neurobiology and immunology to psychiatry and neurology, taking advantage of the combined expertise of some 30 basic scientists and clinicians across the school and its affiliated hospitals.

The epicenter of these research efforts will be the Department of Neurobiology under the leadership of Bruce Bean and Wade Regehr.

“I am excited by Bob’s commitment to cannabinoid science,” says Regehr, professor of neurobiology in the Blavatnik Institute at Harvard Medical School. “The research efforts enabled by Bob’s vision set the stage for unraveling some of the most confounding mysteries of cannabinoids and their effects on the brain and various organ systems.”

Bean, Regehr, and fellow neurobiologists Rachel Wilson and Bernardo Sabatini, for example, focus on understanding the basic biology of the cannabinoid system, which includes hundreds of plant and synthetic compounds as well as naturally occurring cannabinoids made in the brain.

Cannabinoid compounds activate a variety of brain receptors, and the downstream biological effects of this activation are astoundingly complex, varying by age and sex, and complicated by a person’s physiologic condition and overall health. This complexity and high degree of variability in individual biology has hampered scientific understanding of the positive and negative effects of cannabis on the human body. Bean, Regehr, and colleagues have already made critical insights showing how cannabinoids influence cell-to-cell communication in the brain.

“Even though cannabis products are now widely available, and some used clinically, we still understand remarkably little about how they influence brain function and neuronal circuits in the brain,” says Bean, the Robert Winthrop Professor of Neurobiology in the Blavatnik Institute at HMS. “This gift will allow us to conduct critical research into the neurobiology of cannabinoids, which may ultimately inform new approaches for the treatment of pain, epilepsy, sleep and mood disorders, and more.”

To propel research findings from lab to clinic, basic scientists from HMS will partner with clinicians from Harvard-affiliated hospitals, bringing together clinicians and scientists from disciplines including cardiology, vascular medicine, neurology, and immunology in an effort to glean a deeper and more nuanced understanding of cannabinoids’ effects on various organ systems and the body as a whole, rather than just on isolated organs.

For example, Bean and colleague Gary Yellen, who are studying the mechanisms of action of antiepileptic drugs, have become interested in the effects of cannabinoids on epilepsy, an interest they share with Elizabeth Thiele, director of the pediatric epilepsy program at Massachusetts General Hospital. Thiele is a pioneer in the use of cannabidiol for the treatment of drug-resistant forms of epilepsy. Despite proven clinical efficacy and recent FDA approval for rare childhood epilepsies, researchers still do not know exactly how cannabidiol quiets the misfiring brain cells of patients with the seizure disorder. Understanding its mechanism of action could help in developing new agents for treating other forms of epilepsy and other neurologic disorders.

Algorithms of intelligence

The following post is adapted from a story featured in a recent Brain Scan newsletter.

Machine vision systems are more and more common in everyday life, from social media to self-driving cars, but training artificial neural networks to “see” the world as we do—distinguishing cyclists from signposts—remains challenging. Will artificial neural networks ever decode the world as exquisitely as humans? Can we refine these models and influence perception in a person’s brain just by activating individual, selected neurons? The DiCarlo lab, including CBMM postdocs Kohitij Kar and Pouya Bashivan, is finding that we are surprisingly close to answering “yes” to such questions, all in the context of accelerated insights into artificial intelligence at the McGovern Institute for Brain Research, CBMM, and the Quest for Intelligence at MIT.

Precision Modeling

Beyond light hitting the retina, the recognition process that unfolds in the visual cortex is key to truly “seeing” the surrounding world. Information is decoded through the ventral visual stream, cortical brain regions that progressively build a more accurate, fine-grained, and accessible representation of the objects around us. Artificial neural networks have been modeled on these elegant cortical systems, and the most successful models, deep convolutional neural networks (DCNNs), can now decode objects at levels comparable to the primate brain. However, even leading DCNNs have problems with certain challenging images, presumably due to shadows, clutter, and other visual noise. While there’s no simple feature that unites all challenging images, the quest is on to tackle such images to attain precise recognition at a level commensurate with human object recognition.

“One next step is to couple this new precision tool with our emerging understanding of how neural patterns underlie object perception. This might allow us to create arrangements of pixels that look nothing like, for example, a cat, but that can fool the brain into thinking it’s seeing a cat.”- James DiCarlo

In a recent push, Kar and DiCarlo demonstrated that adding feedback connections, currently missing in most DCNNs, allows the system to better recognize objects in challenging situations, even those where a human can’t articulate why recognition is an issue for feedforward DCNNs. They also found that this recurrent circuit seems critical to primate success rates in performing this task. This is incredibly important for systems like self-driving cars, where the stakes for artificial visual systems are high, and faithful recognition is a must.
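
In architectural terms, feedback means a stage’s activity at one time step depends on its own (or a later stage’s) output from the previous step. The sketch below is a toy recurrent convolutional block illustrating the idea; it is not the architecture used in the study.

```python
import torch
from torch import nn

class RecurrentBlock(nn.Module):
    """Toy convolutional stage with a feedback connection from its own output."""
    def __init__(self, channels: int = 8, steps: int = 3):
        super().__init__()
        self.feedforward = nn.Conv2d(3, channels, 3, padding=1)
        self.feedback = nn.Conv2d(channels, channels, 3, padding=1)
        self.steps = steps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        state = torch.relu(self.feedforward(x))
        for _ in range(self.steps - 1):
            # Feedback combines later activity with the feedforward drive,
            # in effect giving the block extra processing time on hard images.
            state = torch.relu(self.feedforward(x) + self.feedback(state))
        return state

out = RecurrentBlock()(torch.randn(1, 3, 32, 32))
print(out.shape)  # torch.Size([1, 8, 32, 32])
```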

Now you see it

As artificial object recognition systems have become more precise in predicting neural activity, the DiCarlo lab wondered what such precision might allow: could they use their system to not only predict, but to control specific neuronal activity?

To demonstrate the power of their models, Bashivan, Kar, and colleagues zeroed in on targeted neurons in the brain. In a paper published in Science, they used an artificial neural network to generate a random-looking group of pixels that, when shown to an animal, activated the team’s target, a target they called “one hot neuron.” In other words, they showed the brain a synthetic pattern, and the pixels in the pattern precisely activated targeted neurons while other neurons remained relatively silent.

These findings show how the knowledge in today’s artificial neural network models might one day be used to noninvasively influence brain states with neural resolution. Such precise systems would be useful as we look to the future, toward visual prosthetics for the blind. A model of the ventral visual stream this precise would have been inconceivable not so long ago, and all eyes are on where McGovern researchers will take these technologies in the coming years.