McGovern Institute postcard collection

A collection of 13 postcards arranged in columns.
The McGovern Institute postcard collection, 2023.

The McGovern Institute may be best known for its scientific breakthroughs, but a captivating series of brain-themed postcards developed by McGovern researchers and staff now reveals the institute’s artistic side.

What began in 2017 with a series of brain anatomy postcards inspired by the U.S. Works Progress Administration’s iconic national parks posters has grown into a collection of twelve different prints, each featuring a unique fusion of neuroscience and art.

More information about each series in the McGovern Institute postcard collection, including the color-your-own mindfulness postcards, can be found below.

Mindfulness Postcard Series, 2023

In winter 2023, the institute released its mindfulness postcard series, a collection of four different neuroscience-themed illustrations that can be colored in with pencils, markers, or paint. The postcard series was inspired by research conducted in John Gabrieli’s lab, which found that practicing mindfulness reduced children’s stress levels and negative emotions during the pandemic. These findings contribute to a growing body of evidence that practicing mindfulness — focusing awareness on the present, typically through meditation, but also through coloring — can change patterns of brain activity associated with emotions and mental health.

Download and color your own postcards.

Genes

The McGovern Institute is at the cutting edge of applications based on CRISPR, a genome editing tool pioneered by McGovern Investigator Feng Zhang. Hidden within this DNA-themed postcard are a clam, a virus, a bacteriophage, a snail, and the word CRISPR. Click the links to learn how these hidden elements relate to genetic engineering research at the McGovern Institute.


Line art showing strands of DNA and the McGovern Institute logo.
The McGovern Institute’s “mindfulness” postcard series includes this DNA-themed illustration containing five hidden design elements related to McGovern research. Image: Joseph Laney

Neurons

McGovern researchers probe the nanoscale and cellular processes that are critical to brain function, from the complex computations conducted in neurons to the synapses and neurotransmitters that facilitate messaging between cells. Find the mouse, worm, and microscope — three critical elements related to cellular and molecular neuroscience research at the McGovern Institute — in the postcard below.


Line art showing multiple neurons and the McGovern Institute logo.
The McGovern Institute’s “mindfulness” postcard series includes this neuron-themed illustration containing three hidden design elements related to McGovern research. Image: Joseph Laney

Human Brain

Cognitive neuroscientists at the McGovern Institute examine the brain processes that come together to inform our thoughts and understanding of the world. Find the musical note, speech bubbles, and human face in this postcard and click on the links to learn more about how these hidden elements relate to brain research at the McGovern Institute.


Line art of a human brain and the McGovern Institute logo.
The McGovern Institute’s “mindfulness” postcard series includes this brain-themed illustration containing three hidden design elements related to McGovern research. Image: Joseph Laney

Artificial Intelligence

McGovern researchers develop machine learning systems that mimic human processing of visual and auditory cues and construct algorithms to help us understand the complex computations made by the brain. Find the speech bubbles, DNA, and cochlea (spiral) in this postcard and click on the links to learn more about how these hidden elements relate to computational neuroscience research at the McGovern Institute.

Line art showing an artificial neural network in the shape of the human brain and the McGovern Institute logo.
The McGovern Institute’s “mindfulness” postcard series includes this AI-themed illustration containing three hidden design elements related to McGovern research. Image: Joseph Laney

Neuron Postcard Series, 2019

In 2019, the McGovern Institute released a second series of postcards based on the anatomy of a neuron. Each postcard includes text on the back side that describes McGovern research related to that specific part of the neuron. The descriptive text for each postcard is shown below.

Synapse

Snow melting off the branch of a bush at the water's edge creates a ripple effect in the pool of water below. Words at the bottom of the image say "It All Begins at the SYNAPSE"

Signals flow through the nervous system from one neuron to the next across synapses.

Synapses are exquisitely organized molecular machines that control the transmission of information.

McGovern researchers are studying how disruptions in synapse function can lead to brain disorders like autism.

Image: Joseph Laney

Axon

Illustration of three bears hunting for fish in a flowing river with the words: "Axon: Where Action Finds Potential"

The axon is the long, thin neural cable that carries electrical impulses called action potentials from the soma to synaptic terminals at downstream neurons.

Researchers at the McGovern Institute are developing and using tracers that label axons to reveal the elaborate circuit architecture of the brain.

Image: Joseph Laney

Soma

An elk stands on a rocky outcropping overlooking a large lake with an island in the center. Words at the top read: "Collect Your Thoughts at the Soma"

The soma, or cell body, is the control center of the neuron, where the nucleus is located.

It connects the dendrites to the axon, which sends information to other neurons.

At the McGovern Institute, neuroscientists are targeting the soma with proteins that can activate single neurons and map connections in the brain.

Image: Joseph Laney

Dendrites

A mountain lake at sunset with colorful fish and snow from a distant mountaintop melting into the lake. Words say "DENDRITIC ARBOR"

Long branching neuronal processes called dendrites receive synaptic inputs from thousands of other neurons and carry those signals to the cell body.

McGovern neuroscientists have discovered that human dendrites have different electrical properties from those of other species, which may contribute to the enhanced computing power of the human brain.

Image: Joseph Laney

Brain Anatomy Postcard Series, 2017

The original brain anatomy-themed postcard series, developed in 2017, was inspired by the U.S. Works Progress Administration’s iconic national parks posters created in the 1930s and 1940s. Each postcard includes text on the back side that describes McGovern research related to that specific part of the brain. The descriptive text for each postcard is shown below.

Sylvian Fissure

Illustration of explorer in cave labeled with temporal and parietal letters
The Sylvian fissure is a prominent groove on each side of the brain that separates the frontal and parietal lobes from the temporal lobe. McGovern researchers are studying a region near the right Sylvian fissure, called the rTPJ, which is involved in thinking about what another person is thinking.

Hippocampus

The hippocampus, named after its resemblance to the seahorse, plays an important role in memory. McGovern researchers are studying how changes in the strength of synapses (connections between neurons) in the hippocampus contribute to the formation and retention of memories.

Basal Ganglia

The basal ganglia are a group of deep brain structures best known for their control of movement. McGovern researchers are studying how the connections between the cerebral cortex and a part of the basal ganglia known as the striatum play a role in emotional decision making and motivation.

Arcuate Fasciculus

The arcuate fasciculus is a bundle of axons in the brain that connects Broca’s area, involved in speech production, and Wernicke’s area, involved in understanding language. McGovern researchers have found a correlation between the size of this structure and the risk of dyslexia in children.


Order and Share

To order your own McGovern brain postcards, contact our colleagues at the MIT Museum, where proceeds will support current and future exhibitions at the growing museum.

Please share a photo of yourself in your own lab (or natural habitat) with one of our cards on social media. Tell us what you’re studying and don’t forget to tag us @mcgovernmit using the hashtag #McGovernPostcards.

New gene-editing system precisely inserts large DNA sequences into cellular DNA

A team led by researchers from Broad Institute of MIT and Harvard, and the McGovern Institute for Brain Research at MIT, has characterized and engineered a new gene-editing system that can precisely and efficiently insert large DNA sequences into a genome. The system, harnessed from cyanobacteria and called CRISPR-associated transposase (CAST), allows efficient introduction of DNA while reducing the potential error-prone steps in the process — adding key capabilities to gene-editing technology and addressing a long-sought goal for precision gene editing.

Precise insertion of DNA has the potential to treat a large swath of genetic diseases by integrating new DNA into the genome while disabling the disease-related sequence. To accomplish this in cells, researchers have typically used CRISPR enzymes to cut the genome at the site of the deleterious sequence, and then relied on the cell’s own repair machinery to stitch the old and new DNA elements together. However, this approach has many limitations.

Using Escherichia coli bacteria, the researchers have now demonstrated that CAST can be programmed to efficiently insert new DNA at a designated site, with minimal editing errors and without relying on the cell’s own repair machinery. The system holds potential for much more efficient gene insertion compared to previous technologies, according to the team.
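
For readers who think in code, here is a minimal, purely illustrative Python sketch of the targeting logic described above: a guide sequence marks a landing site and a donor cargo is inserted a fixed distance downstream. The genome, guide, cargo, and offset below are invented for illustration and are not the actual CAST protospacer, donor, or insertion chemistry.

```python
# Illustrative sketch of guide-directed DNA insertion (not the real CAST biochemistry).
# The genome, guide sequence, cargo, and insertion offset are hypothetical examples.

GENOME = "ATGCCGTAAGGCTTACGGATCCGATCGATTACGCTAGCTAGGCTAACCGGTT"
GUIDE = "GGATCCGATCGA"        # hypothetical target-matching guide
CARGO = "aaaGFPcargoaaa"       # stand-in for a donor payload (e.g., a gene)
OFFSET = 10                    # insert a fixed distance downstream of the match


def insert_downstream(genome: str, guide: str, cargo: str, offset: int) -> str:
    """Return a new genome string with the cargo inserted `offset` bases
    downstream of the guide-matched site; raise if no match is found."""
    site = genome.find(guide)
    if site == -1:
        raise ValueError("guide does not match the genome")
    insert_at = site + len(guide) + offset
    return genome[:insert_at] + cargo + genome[insert_at:]


if __name__ == "__main__":
    edited = insert_downstream(GENOME, GUIDE, CARGO, OFFSET)
    print(edited)
```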

The researchers are working to apply this editing platform in eukaryotic organisms, including plant and animal cells, for precision research and therapeutic applications.

The team molecularly characterized and harnessed CAST from two cyanobacteria, Scytonema hofmanni and Anabaena cylindrica, and additionally revealed a new role that some CRISPR systems play in nature: not to protect bacteria from viruses, but to facilitate the spread of transposon DNA.

The work, appearing in Science, was led by first author Jonathan Strecker, a postdoctoral fellow at the Broad Institute; graduate student Alim Ladha at MIT; and senior author Feng Zhang, a core institute member at the Broad Institute, investigator at the McGovern Institute for Brain Research at MIT, the James and Patricia Poitras Professor of Neuroscience at MIT, and an associate professor at MIT, with joint appointments in the departments of Brain and Cognitive Sciences and Biological Engineering. Collaborators include Eugene Koonin at the National Institutes of Health.

A New Role for a CRISPR-Associated System

“One of the long-sought-after applications for molecular biology is the ability to introduce new DNA into the genome precisely, efficiently, and safely,” explains Zhang. “We have worked on many bacterial proteins in the past to harness them for editing in human cells, and we’re excited to further develop CAST and open up these new capabilities for manipulating the genome.”

To expand the gene-editing toolbox, the team turned to transposons. Transposons (sometimes called “jumping genes”) are DNA sequences with associated proteins — transposases — that allow the DNA to be cut-and-pasted into other places.

Most transposons appear to jump randomly throughout the cellular genome and out to viruses or plasmids that may also be inhabiting a cell. However, some transposon subtypes in cyanobacteria have been computationally associated with CRISPR systems, suggesting that these transposons may naturally be guided towards more-specific genetic targets. This theorized function would be a new role for CRISPR systems; most known CRISPR elements are instead part of a bacterial immune system, in which Cas enzymes and their guide RNA will target and destroy viruses or plasmids.

In this paper, the research team identified the mechanisms at work and determined that some CRISPR-associated transposases have hijacked an enzyme called Cas12k and its guide to insert DNA at specific targets, rather than just cutting the target for defensive purposes.

“We dove deeply into this system in cyanobacteria, began taking CAST apart to understand all of its components, and discovered this novel biological function,” says Strecker, a postdoctoral fellow in Zhang’s lab at the Broad Institute. “CRISPR-based tools are often DNA-cutting tools, and they’re very efficient at disrupting genes. In contrast, CAST is naturally set up to integrate genes. To our knowledge, it’s the first system of this kind that has been characterized and manipulated.”

Harnessing CAST for Genome Editing

Once all the elements and molecular requirements of the CAST system were laid bare, the team focused on programming CAST to insert DNA at desired sites in E. coli.

“We reconstituted the system in E. coli and co-opted this mechanism in a way that was useful,” says Strecker. “We reprogrammed the system to introduce new DNA, up to 10 kilobase pairs long, into specific locations in the genome.”

The team envisions basic research, agricultural, or therapeutic applications based on this platform, such as introducing new genes to replace DNA that has mutated in a harmful way — for example, in sickle cell disease. Systems developed with CAST could potentially be used to integrate a healthy version of a gene into a cell’s genome, disabling or overriding the DNA causing problems.

Alternatively, rather than inserting DNA with the purpose of fixing a deleterious version of a gene, CAST may be used to augment healthy cells with elements that are therapeutically beneficial, according to the team. For example, in immunotherapy, a researcher may want to introduce a “chimeric antigen receptor” (CAR) into a specific spot in the genome of a T cell — enabling the T cell to recognize and destroy cancer cells.

“For any situation where people want to insert DNA, CAST could be a much more attractive approach,” says Zhang. “This just underscores how diverse nature can be and how many unexpected features we have yet to find.”

Support for this study was provided in part by the Human Frontier Science Program, New York Stem Cell Foundation, Mathers Foundation, NIH (1R01-HG009761, 1R01-MH110049, and 1DP1-HL141201), Howard Hughes Medical Institute, Poitras Center for Psychiatric Disorders Research, J. and P. Poitras, and Hock E. Tan and K. Lisa Yang Center for Autism Research.

J.S. and F.Z. are co-inventors on US provisional patent application no. 62/780,658 filed by the Broad Institute, relating to CRISPR-associated transposases.

Expression plasmids are available from Addgene.

Our brains appear uniquely tuned for musical pitch

In the eternal search for understanding what makes us human, scientists found that our brains are more sensitive to pitch, the harmonic sounds we hear when listening to music, than the brains of our evolutionary relative, the macaque monkey. The study, funded in part by the National Institutes of Health, highlights the promise of Sound Health, a joint project between the NIH and the John F. Kennedy Center for the Performing Arts, in association with the National Endowment for the Arts, that aims to understand the role of music in health.

“We found that a certain region of our brains has a stronger preference for sounds with pitch than macaque monkey brains,” said Bevil Conway, Ph.D., investigator in the NIH’s Intramural Research Program and a senior author of the study published in Nature Neuroscience. “The results raise the possibility that these sounds, which are embedded in speech and music, may have shaped the basic organization of the human brain.”

The study started with a friendly bet between Dr. Conway and Sam Norman-Haignere, Ph.D., a post-doctoral fellow at Columbia University’s Zuckerman Institute for Mind, Brain, and Behavior and the first author of the paper.

At the time, both were working at the Massachusetts Institute of Technology (MIT). Dr. Conway’s team had been searching for differences between how human and monkey brains control vision only to discover that there are very few. Their brain mapping studies suggested that humans and monkeys see the world in very similar ways. But then, Dr. Conway heard about some studies on hearing being done by Dr. Norman-Haignere, who, at the time, was a post-doctoral fellow in the laboratory of Josh H. McDermott, Ph.D., associate professor at MIT.

“I told Bevil that we had a method for reliably identifying a region in the human brain that selectively responds to sounds with pitch,” said Dr. Norman-Haignere. That is when they got the idea to compare humans with monkeys. Based on his studies, Dr. Conway bet that they would see no differences.

To test this, the researchers played a series of harmonic sounds, or tones, to healthy volunteers and monkeys. Meanwhile, functional magnetic resonance imaging (fMRI) was used to monitor brain activity in response to the sounds. The researchers also monitored brain activity in response to toneless noises that were designed to match the frequency levels of each tone played.

At first glance, the scans looked similar and confirmed previous studies. Maps of the auditory cortex of human and monkey brains had similar hot spots of activity regardless of whether the sounds contained tones.

However, when the researchers looked more closely at the data, they found evidence suggesting the human brain was highly sensitive to tones. The human auditory cortex was much more responsive than the monkey cortex when they looked at the relative activity between tones and equivalent noisy sounds.
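
As a rough illustration of what “relative activity between tones and equivalent noisy sounds” can mean, the sketch below computes a simple tone-selectivity contrast per voxel from hypothetical fMRI response amplitudes; the formula and the numbers are illustrative assumptions, not the analysis used in the published study.

```python
import numpy as np

# Hypothetical mean response amplitudes (percent signal change) for a few
# auditory-cortex voxels; values are made up for illustration only.
tone_resp = np.array([1.8, 2.1, 1.5, 0.9])    # responses to harmonic tones
noise_resp = np.array([1.0, 1.1, 1.2, 0.8])   # responses to frequency-matched noise

# A simple selectivity contrast: positive values mean stronger responses
# to tones than to matched noise.
selectivity = (tone_resp - noise_resp) / (tone_resp + noise_resp)

print(np.round(selectivity, 2))  # e.g. [0.29 0.31 0.11 0.06]
```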

“We found that human and monkey brains had very similar responses to sounds in any given frequency range. It’s when we added tonal structure to the sounds that some of these same regions of the human brain became more responsive,” said Dr. Conway. “These results suggest the macaque monkey may experience music and other sounds differently. In contrast, the macaque’s experience of the visual world is probably very similar to our own. It makes one wonder what kind of sounds our evolutionary ancestors experienced.”

Further experiments supported these results. Slightly raising the volume of the tonal sounds had little effect on the tone sensitivity observed in the brains of two monkeys.

Finally, the researchers saw similar results when they used sounds that contained more natural harmonies for monkeys by playing recordings of macaque calls. Brain scans showed that the human auditory cortex was much more responsive than the monkey cortex when they compared relative activity between the calls and toneless, noisy versions of the calls.

“This finding suggests that speech and music may have fundamentally changed the way our brain processes pitch,” said Dr. Conway. “It may also help explain why it has been so hard for scientists to train monkeys to perform auditory tasks that humans find relatively effortless.”

Earlier this year, other scientists from around the U.S. applied for the first round of NIH Sound Health research grants. Some of these grants may eventually support scientists who plan to explore how music turns on the circuitry of the auditory cortex that makes our brains sensitive to musical pitch.

This study was supported by the NINDS, NEI, NIMH, and NIA Intramural Research Programs and grants from the NIH (EY13455; EY023322; EB015896; RR021110), the National Science Foundation (Grant 1353571; CCF-1231216), the McDonnell Foundation, and the Howard Hughes Medical Institute.

Can we think without language?

As part of our Ask the Brain series, Anna Ivanova, a graduate student who studies how the brain processes language in the labs of Nancy Kanwisher and Evelina Fedorenko, answers the question, “Can we think without language?”

Anna Ivanova headshot
Graduate student Anna Ivanova studies language processing in the brain.

_____

Imagine a woman – let’s call her Sue. One day Sue suffers a stroke that destroys large areas of brain tissue within her left hemisphere. As a result, she develops a condition known as global aphasia, meaning she can no longer produce or understand phrases and sentences. The question is: to what extent are Sue’s thinking abilities preserved?

Many writers and philosophers have drawn a strong connection between language and thought. Oscar Wilde called language “the parent, and not the child, of thought.” Ludwig Wittgenstein claimed that “the limits of my language mean the limits of my world.” And Bertrand Russell stated that the role of language is “to make possible thoughts which could not exist without it.” Given this view, Sue should have irreparable damage to her cognitive abilities when she loses access to language. Do neuroscientists agree? Not quite.

Neuroimaging evidence has revealed a specialized set of regions within the human brain that respond strongly and selectively to language.

This language system seems to be distinct from regions that are linked to our ability to plan, remember, reminisce about the past and future, reason in social situations, experience empathy, make moral decisions, and construct our self-image. Thus, vast portions of our everyday cognitive experiences appear to be unrelated to language per se.

But what about Sue? Can she really think the way we do?

While we cannot directly measure what it’s like to think like a neurotypical adult, we can probe Sue’s cognitive abilities by asking her to perform a variety of different tasks. It turns out that patients with global aphasia can solve arithmetic problems, reason about the intentions of others, and engage in complex causal reasoning tasks. They can tell whether a drawing depicts a real-life event and laugh when it doesn’t. Some of them play chess in their spare time. Some even engage in creative tasks – the composer Vissarion Shebalin continued to write music even after a stroke that left him severely aphasic.

Some readers might find these results surprising, given that their own thoughts seem to be tied to language so closely. If you find yourself in that category, I have a surprise for you – research has established that not everybody has inner speech experiences. A bilingual friend of mine sometimes gets asked if she thinks in English or Polish, but she doesn’t quite get the question (“how can you think in a language?”). Another friend of mine claims that he “thinks in landscapes,” a sentiment that conveys the pictorial nature of some people’s thoughts. Therefore, even inner speech does not appear to be necessary for thought.

Have we solved the mystery then? Can we claim that language and thought are completely independent and Bertrand Russell was wrong? Only to some extent. We have shown that damage to the language system within an adult human brain leaves most other cognitive functions intact. However, when it comes to the language-thought link across the entire lifespan, the picture is far less clear. While available evidence is scarce, it does indicate that some of the cognitive functions discussed above are, at least to some extent, acquired through language.

Perhaps the clearest case is numbers. There are certain tribes around the world whose languages do not have number words – some might only have words for one through five (Munduruku), and some won’t even have those (Pirahã). Speakers of Pirahã have been shown to make mistakes on one-to-one matching tasks (“get as many sticks as there are balls”), suggesting that language plays an important role in bootstrapping exact number manipulations.

Another way to examine the influence of language on cognition over time is by studying cases when language access is delayed. Deaf children born into hearing families often do not get exposure to sign languages for the first few months or even years of life; such language deprivation has been shown to impair their ability to engage in social interactions and reason about the intentions of others. Thus, while the language system may not be directly involved in the process of thinking, it is crucial for acquiring enough information to properly set up various cognitive domains.

Even after her stroke, our patient Sue will have access to a wide range of cognitive abilities. She will be able to think by drawing on neural systems underlying many non-linguistic skills, such as numerical cognition, planning, and social reasoning. It is worth bearing in mind, however, that at least some of those systems might have relied on language back when Sue was a child. While the static view of the human mind suggests that language and thought are largely disconnected, the dynamic view hints at a rich nature of language-thought interactions across development.

_____

Do you have a question for The Brain? Ask it here.

Ed Boyden elected to National Academy of Sciences

Ed Boyden has been elected to join the National Academy of Sciences (NAS). The organization, established by an act of Congress during the height of the Civil War, was founded to provide independent and objective advice on scientific matters to the nation, and is actively engaged in furthering science in the United States. Each year NAS members recognize fellow scientists through election to the academy based on their distinguished and continuing achievements in original research.

“I’m very honored and grateful to have been elected to the NAS,” says Boyden. “This is a testament to the work of many graduate students, postdoctoral scholars, research scientists, and staff at MIT who have worked with me over the years, and many collaborators and friends at MIT and around the world who have helped our group on this mission to advance neuroscience through new tools and ways of thinking.”

Boyden’s research creates and applies technologies that aim to expand our understanding of the brain. He notably co-invented optogenetics, a game-changing technology that has revolutionized neurobiology, as an independent side collaboration conducted in parallel to his PhD studies. This technology uses targeted expression of light-sensitive channels and pumps to activate or suppress neuronal activity in vivo using light. Optogenetics quickly swept the field of neurobiology and has been leveraged to understand how specific neurons and brain regions contribute to behavior and to disease.

His research since then has had an overarching focus on understanding the brain. To this end, he and his lab have the ambitious goal of developing technologies that can map, record, and manipulate the brain. This has led, as selected examples, to the invention of expansion microscopy, a super-resolution imaging technology that can capture neurons’ microstructures and reveal their complex connections, even across large-scale neural circuits; voltage-sensitive fluorescent reporters that allow neural activity to be monitored in vivo; and temporal interference stimulation, a non-invasive brain stimulation technique that allows selective activation of subcortical brain regions.
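
To give a sense of how temporal interference works, the short numerical sketch below sums two high-frequency fields and shows that the combined signal is modulated at their difference frequency; the carrier frequencies, amplitudes, and sampling rate are illustrative choices, not parameters from Boyden’s published work.

```python
import numpy as np

# Two high-frequency carriers; neurons cannot follow either one alone,
# but their sum has an envelope at the difference frequency (10 Hz here).
fs = 100_000                        # sampling rate, Hz (illustrative)
t = np.arange(0, 0.5, 1 / fs)       # half a second of signal
f1, f2 = 2000.0, 2010.0             # carrier frequencies, Hz (illustrative)

field = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Envelope of the summed field: |2*cos(pi*(f2-f1)*t)|, which beats at f2 - f1 = 10 Hz.
envelope = np.abs(2 * np.cos(np.pi * (f2 - f1) * t))

print(field.shape, envelope.max())  # (50000,) 2.0
```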

“We are all incredibly happy to see Ed being elected to the academy,” says Robert Desimone, director of the McGovern Institute for Brain Research at MIT. “He has been consistently innovative, inventing new ways of manipulating and observing neurons that are revolutionizing the field of neuroscience.”

This year the NAS, an organization that includes over 500 Nobel Laureates, elected 100 new members and 25 foreign associates. Three MIT professors were elected this year, with Paula T. Hammond (David H. Koch (1962) Professor of Engineering and Department Head, Chemical Engineering) and Aviv Regev (HHMI Investigator and Professor in the Department of Biology) being elected alongside Boyden. Boyden becomes the seventh member of the McGovern Institute faculty to join the National Academy of Sciences.

The formal induction ceremony for new NAS members, during which they sign the ledger whose first signatory is Abraham Lincoln, will be held at the Academy’s annual meeting in Washington D.C. next spring.


Algorithms of intelligence

The following post is adapted from a story featured in a recent Brain Scan newsletter.

Machine vision systems are more and more common in everyday life, from social media to self-driving cars, but training artificial neural networks to “see” the world as we do—distinguishing cyclists from signposts—remains challenging. Will artificial neural networks ever decode the world as exquisitely as humans? Can we refine these models and influence perception in a person’s brain just by activating individual, selected neurons? The DiCarlo lab, including CBMM postdocs Kohitij Kar and Pouya Bashivan, is finding that we are surprisingly close to answering “yes” to such questions, all in the context of accelerated insights into artificial intelligence at the McGovern Institute for Brain Research, CBMM, and the Quest for Intelligence at MIT.

Precision Modeling

Beyond light hitting the retina, the recognition process that unfolds in the visual cortex is key to truly “seeing” the surrounding world. Information is decoded through the ventral visual stream, cortical brain regions that progressively build a more accurate, fine-grained, and accessible representation of the objects around us. Artificial neural networks have been modeled on these elegant cortical systems, and the most successful models, deep convolutional neural networks (DCNNs), can now decode objects at levels comparable to the primate brain. However, even leading DCNNs have problems with certain challenging images, presumably due to shadows, clutter, and other visual noise. While there’s no simple feature that unites all challenging images, the quest is on to tackle such images to attain precise recognition at a level commensurate with human object recognition.

“One next step is to couple this new precision tool with our emerging understanding of how neural patterns underlie object perception. This might allow us to create arrangements of pixels that look nothing like, for example, a cat, but that can fool the brain into thinking it’s seeing a cat.”- James DiCarlo

In a recent push, Kar and DiCarlo demonstrated that adding feedback connections, currently missing in most DCNNs, allows the system to better recognize objects in challenging situations, even those where a human can’t articulate why recognition is an issue for feedforward DCNNs. They also found that this recurrent circuit seems critical to primate success rates in performing this task. This is incredibly important for systems like self-driving cars, where the stakes for artificial visual systems are high, and faithful recognition is a must.

Now you see it

As artificial object recognition systems have become more precise in predicting neural activity, the DiCarlo lab wondered what such precision might allow: could they use their system to not only predict, but to control specific neuronal activity?

To demonstrate the power of their models, Bashivan, Kar, and colleagues zeroed in on targeted neurons in the brain. In a paper published in Science, they used an artificial neural network to generate a random-looking group of pixels that, when shown to an animal, activated the team’s target, a target they called “one hot neuron.” In other words, they showed the brain a synthetic pattern, and the pixels in the pattern precisely activated targeted neurons while other neurons remained relatively silent.
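
One generic way to build such neuron-driving images in silico is gradient ascent on the pixels of an input so that a chosen model unit’s activation grows; the sketch below shows that recipe with a toy convolutional network standing in for the models and neural sites used in the study. The architecture, target unit, and optimization settings are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Toy stand-in model; the real work used networks fit to primate visual cortex data.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 64),
)
model.eval()

target_unit = 7                                         # hypothetical target-unit index
image = torch.randn(1, 3, 64, 64, requires_grad=True)   # start from noise pixels
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    activation = model(image)[0, target_unit]
    loss = -activation                       # gradient ascent on the unit's activation
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        image.clamp_(-2.5, 2.5)              # keep pixel values in a plausible range

print(float(model(image)[0, target_unit]))   # activation after optimization
```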

These findings show how the knowledge in today’s artificial neural network models might one day be used to noninvasively influence brain states with neural resolution. Such precise systems would be useful as we look to the future, toward visual prosthetics for the blind. Such a precise model of the ventral visual stream would have been inconceivable not so long ago, and all eyes are on where McGovern researchers will take these technologies in the coming years.

Recurrent architecture enhances object recognition in brain and AI

Your ability to recognize objects is remarkable. If you see a cup under unusual lighting or from unexpected directions, there’s a good chance that your brain will still compute that it is a cup. Such precise object recognition is one holy grail for AI developers, such as those improving self-driving car navigation. While modeling primate object recognition in the visual cortex has revolutionized artificial visual recognition systems, current deep learning systems are simplified, and fail to recognize some objects that are child’s play for primates such as humans. In findings published in Nature Neuroscience, McGovern Investigator James DiCarlo and colleagues have found evidence that feedback improves recognition of hard-to-recognize objects in the primate brain, and that adding feedback circuitry also improves the performance of artificial neural network systems used for vision applications.

Deep convolutional neural networks (DCNNs) are currently the most successful models for accurately recognizing objects on a fast timescale (<100 ms) and have a general architecture inspired by the primate ventral visual stream, cortical regions that progressively build an accessible and refined representation of viewed objects. Most DCNNs, however, are simple in comparison to the primate ventral stream.

“For a long period of time, we were far from a model-based understanding. Thus, our field got started on this quest by modeling visual recognition as a feedforward process,” explains senior author DiCarlo, who is also the head of MIT’s Department of Brain and Cognitive Sciences and Research Co-Leader in the Center for Brains, Minds, and Machines (CBMM). “However, we know there are recurrent anatomical connections in brain regions linked to object recognition.”

Think of feedforward DCNNs, and the portion of the visual system that first attempts to capture objects, as a subway line that runs forward through a series of stations. The extra, recurrent brain networks are instead like the streets above: interconnected, and not unidirectional. Because it only takes about 200 ms for the brain to recognize an object quite accurately, it was unclear whether these recurrent interconnections in the brain had any role at all in core object recognition. Perhaps those recurrent connections are only in place to keep the visual system in tune over long periods of time; the return gutters of the streets, for example, help slowly clear them of water and trash but are not strictly needed to quickly move people from one end of town to the other. DiCarlo, along with lead author and CBMM postdoc Kohitij Kar, set out to test whether a subtle role of recurrent operations in rapid visual object recognition was being overlooked.
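
As a minimal sketch of what adding recurrence to a feedforward network can look like (the layer sizes, number of time steps, and feedback wiring below are illustrative assumptions, not the specific models used in this work), a convolutional stage can be unrolled for a few time steps with its own output fed back in on each pass:

```python
import torch
import torch.nn as nn

class RecurrentConvBlock(nn.Module):
    """A feedforward conv stage augmented with a simple recurrent (feedback) loop,
    unrolled for a fixed number of time steps. Purely illustrative."""

    def __init__(self, in_ch: int, out_ch: int, steps: int = 4):
        super().__init__()
        self.feedforward = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.feedback = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)
        self.relu = nn.ReLU()
        self.steps = steps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        state = self.relu(self.feedforward(x))          # first, purely feedforward pass
        for _ in range(self.steps - 1):
            # later passes combine the feedforward drive with fed-back activity
            state = self.relu(self.feedforward(x) + self.feedback(state))
        return state

if __name__ == "__main__":
    block = RecurrentConvBlock(in_ch=3, out_ch=8, steps=4)
    out = block(torch.randn(1, 3, 32, 32))
    print(out.shape)  # torch.Size([1, 8, 32, 32])
```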

Challenging recognition

The authors first needed to identify objects that are trivially decoded by the primate brain, but are challenging for artificial systems. Rather than trying to guess why deep learning was having problems recognizing an object (is it due to clutter in the image? a misleading shadow?), the authors took an unbiased approach that turned out to be critical.

Kar explained further that “we realized that AI-models actually don’t have problems with every image where an object is occluded or in clutter. Humans trying to guess why AI models were challenged turned out to be holding us back.”

Instead, the authors presented the deep learning system, as well as monkeys and humans, with images, homing in on “challenge images” in which the primates could easily recognize the objects but a feedforward DCNN ran into problems. When they, and others, added appropriate recurrent processing to these DCNNs, object recognition in challenge images suddenly became a breeze.

Processing times

Kar used neural recording methods with very high spatial and temporal precision to test whether these images were really so trivial for primates. Remarkably, they found that though challenge images had initially appeared to be child’s play to the human brain, they actually involve extra neural processing time (about 30 additional milliseconds), suggesting that recurrent loops operate in our brain too.

 “What the computer vision community has recently achieved by stacking more and more layers onto artificial neural networks, evolution has achieved through a brain architecture with recurrent connections.” — Kohitij Kar

Diane Beck, Professor of Psychology and Co-chair of the Intelligent Systems Theme at the Beckman Institute and not an author on the study, explained further. “Since entirely feed forward deep convolutional nets are now remarkably good at predicting primate brain activity, it raised questions about the role of feedback connections in the primate brain. This study shows that, yes, feedback connections are very likely playing a role in object recognition after all.”

What does this mean for a self-driving car? It shows that deep learning architectures involved in object recognition need recurrent components if they are to match the primate brain, and also indicates how to operationalize this procedure for the next generation of intelligent machines.

“Recurrent models offer predictions of neural activity and behavior over time,” says Kar. “We may now be able to model more involved tasks. Perhaps one day, the systems will not only recognize an object, such as a person, but also perform cognitive tasks that the human brain so easily manages, such as understanding the emotions of other people.”

This work was supported by Office of Naval Research grant MURI-114407 (J.J.D.) and by the Center for Brains, Minds, and Machines (CBMM), funded by NSF STC award CCF-1231216 (K.K.).

Why is the brain shaped like it is?

The human brain has a very striking shape, and one feature stands out large and clear: the cerebral cortex with its stereotyped pattern of gyri (folds and convolutions) and sulci (fissures and depressions). This characteristic folded shape of the cortex is a major innovation in evolution that allowed an increase in the size and complexity of the human brain.

How the brain adopts these complex folds is surprisingly unclear, but probably involves both shape changes and movement of cells. Mechanical constraints within the overall tissue, and those imposed by surrounding tissues, also contribute to the ultimate shape: the brain has to fit into the skull, after all. McGovern postdoc Jonathan Wilde has a long-term interest in studying how the brain develops, and explained to us how the shape of the brain initially arises.

In the case of humans, our historical reliance upon intelligence has driven a massive expansion of the cerebral cortex.

“Believe it or not, all vertebrate brains begin as a flat sheet of epithelial cells that folds upon itself to form a tube,” explains Wilde. “This neural tube is made up of a single layer of neural stem cells that go through a rapid and highly orchestrated process of expansion and differentiation, giving rise to all of the neurons in the brain. Throughout the first steps of development, the brains of most vertebrates are indistinguishable from one another, but the final shape of the brain is highly dependent upon the organism and primarily reflects that organism’s lifestyle, environment, and cognitive demands.”

So essentially, the brain starts off as a similar shape for creatures with spinal cords. But why is the human brain such a distinct shape?

“In the case of humans,” explains Wilde, “our historical reliance upon intelligence has driven a massive expansion of the cerebral cortex, which is the primary brain structure responsible for critical thinking and higher cognitive abilities. Accordingly, the human cortex is strikingly large and covered in a labyrinth of folds that serve to increase its surface area and computational power.”

The anatomical shape of the human brain is striking, but it also helps researchers to map a hidden functional atlas: specific brain regions that selectively activate in fMRI when you see a face or a scene, hear music, or perform a variety of other tasks. I asked former McGovern graduate student, and current postdoc at Boston Children’s Hospital, Hilary Richardson for her perspective on this more hidden structure in the brain and how it relates to brain shape.

Illustration of person rappelling into the brain's sylvian fissure.
The Sylvian fissure is a prominent groove on each side of the brain that separates the frontal and parietal lobes from the temporal lobe. McGovern researchers are studying a region near the right Sylvian fissure, called the rTPJ, which is involved in thinking about what another person is thinking. Image: Joe Laney

“One of the most fascinating aspects of brain shape is how similar it is across individuals, even very young infants and children,” explains Richardson. “Despite the dramatic cognitive changes that happen across childhood, the shape of the brain is remarkably consistent. Given this, one open question is what kinds of neural changes support cognitive development. For example, while the anatomical shape and size of the rTPJ seems to stay the same across childhood, its response becomes more specialized to information about mental states – beliefs, desires, and emotions – as children get older. One intriguing hypothesis is that this specialization helps support social development in childhood.”

We’ll end with an ode to a prominent feature of brain shape: the “Sylvian fissure,” a prominent groove on each side of the brain that separates the frontal and parietal lobes from the temporal lobe. Such landmarks in brain shape help orient researchers, and the Sylvian fissure was recently immortalized in this image, from a postcard by illustrator Joe Laney.

______

Do you have a question for The Brain? Ask it here.


Neuroscientists reverse some behavioral symptoms of Williams Syndrome

Williams Syndrome, a rare neurodevelopmental disorder that affects about 1 in 10,000 babies born in the United States, produces a range of symptoms including cognitive impairments, cardiovascular problems, and extreme friendliness, or hypersociability.

In a study of mice, MIT neuroscientists have garnered new insight into the molecular mechanisms that underlie this hypersociability. They found that loss of one of the genes linked to Williams Syndrome leads to a thinning of the fatty layer that insulates neurons and helps them conduct electrical signals in the brain.

The researchers also showed that they could reverse the symptoms by boosting production of this coating, known as myelin. This is significant, because while Williams Syndrome is rare, many other neurodevelopmental disorders and neurological conditions have been linked to myelination deficits, says Guoping Feng, the James W. and Patricia Poitras Professor of Neuroscience and a member of MIT’s McGovern Institute for Brain Research.

“The importance is not only for Williams Syndrome,” says Feng, who is one of the senior authors of the study. “In other neurodevelopmental disorders, especially in some of the autism spectrum disorders, this could be potentially a new direction to look into, not only the pathology but also potential treatments.”

Zhigang He, a professor of neurology and ophthalmology at Harvard Medical School, is also a senior author of the paper, which appears in the April 22 issue of Nature Neuroscience. Former MIT postdoc Boaz Barak, currently a principal investigator at Tel Aviv University in Israel, is the lead author and a senior author of the paper.

Impaired myelination

Williams Syndrome, which is caused by the loss of one of the two copies of a segment of chromosome 7, can produce learning impairments, especially for tasks that require visual and motor skills, such as solving a jigsaw puzzle. Some people with the disorder also exhibit poor concentration and hyperactivity, and they are more likely to experience phobias.

In this study, the researchers decided to focus on one of the 25 genes in that segment, known as Gtf2i. Based on studies of patients with a smaller subset of the genes deleted, scientists have linked the Gtf2i gene to the hypersociability seen in Williams Syndrome.

Working with a mouse model, the researchers devised a way to knock out the gene specifically from excitatory neurons in the forebrain, which includes the cortex, the hippocampus, and the amygdala (a region important for processing emotions). They found that these mice did show increased levels of social behavior, measured by how much time they spent interacting with other mice. The mice also showed deficits in fine motor skills and increased nonsocial-related anxiety, which are also symptoms of Williams Syndrome.

Next, the researchers sequenced the messenger RNA from the cortex of the mice to see which genes were affected by loss of Gtf2i. Gtf2i encodes a transcription factor, so it controls the expression of many other genes. The researchers found that about 70 percent of the genes with significantly reduced expression levels were involved in the process of myelination.
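
To make the 70 percent figure concrete, the sketch below shows one generic way such a fraction could be computed from a differential-expression table; the gene names, thresholds, and myelination annotation set are hypothetical stand-ins rather than the pipeline used in the study.

```python
import pandas as pd

# Hypothetical differential-expression results (one row per gene).
de = pd.DataFrame({
    "gene":        ["Mbp", "Plp1", "Mog", "Actb", "Gapdh"],
    "log2_fc":     [-1.4, -1.1, -0.9, 0.1, -0.05],
    "adj_p_value": [0.001, 0.004, 0.02, 0.80, 0.65],
})

# Hypothetical annotation set: genes involved in myelination.
myelination_genes = {"Mbp", "Plp1", "Mog", "Mag", "Cnp"}

# Significantly down-regulated genes (illustrative thresholds).
down = de[(de["adj_p_value"] < 0.05) & (de["log2_fc"] < 0)]

fraction = down["gene"].isin(myelination_genes).mean()
print(f"{fraction:.0%} of significantly reduced genes are myelination-related")
# -> 100% in this toy table; the study reported about 70 percent.
```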

“Myelin is the insulation layer that wraps the axons that extend from the cell bodies of neurons,” Barak says. “When they don’t have the right properties, it will lead to faster or slower electrical signal transduction, which affects the synchronicity of brain activity.”

Further studies revealed that the mice had only about half the normal number of mature oligodendrocytes — the brain cells that produce myelin. However, the number of oligodendrocyte precursor cells was normal, so the researchers suspect that the maturation and differentiation processes of these cells are somehow impaired when Gtf2i is missing in the neurons.

This was surprising because Gtf2i was not knocked out in oligodendrocytes or their precursors. Thus, knocking out the gene in neurons may somehow influence the maturation process of oligodendrocytes, the researchers suggest. It is still unknown how this interaction might work.

“That’s a question we are interested in, but we don’t know whether it’s a secreted factor, or another kind of signal or activity,” Feng says.

In addition, the researchers found that the myelin surrounding axons of the forebrain was significantly thinner than in normal mice. Furthermore, in mice missing Gtf2i, electrical signals were smaller and took more time to cross the brain.

The study is an example of pioneering research into the contribution of glial cells, which include oligodendrocytes, to neuropsychiatric disorders, says Doug Fields, chief of the nervous system development and plasticity section of the Eunice Kennedy Shriver National Institute of Child Health and Human Development.

“Traditionally myelin was only considered in the context of diseases that destroy myelin, such as multiple sclerosis, which prevents transmission of neural impulses. More recently it has become apparent that more subtle defects in myelin can impair neural circuit function, by causing delays in communication between neurons,” says Fields, who was not involved in the research.

Symptom reversal

It remains to be discovered precisely how this reduction in myelination leads to hypersociability. The researchers suspect that the lack of myelin affects brain circuits that normally inhibit social behaviors, making the mice more eager to interact with others.

“That’s probably the explanation, but exactly which circuits and how does it work, we still don’t know,” Feng says.

The researchers also found that they could reverse the symptoms by treating the mice with drugs that improve myelination. One of these drugs, an FDA-approved antihistamine called clemastine fumarate, is now in clinical trials to treat multiple sclerosis, which affects myelination of neurons in the brain and spinal cord. The researchers believe it would be worthwhile to test these drugs in Williams Syndrome patients because they found thinner myelin and reduced numbers of mature oligodendrocytes in brain samples from human subjects who had Williams Syndrome, compared to typical human brain samples.

“Mice are not humans, but the pathology is similar in this case, which means this could be translatable,” Feng says. “It could be that in these patients, if you improve their myelination early on, it could at least improve some of the conditions. That’s our hope.”

Such drugs would likely help mainly the social and fine-motor issues caused by Williams Syndrome, not the symptoms that are produced by deletion of other genes, the researchers say. They may also help treat other disorders, such as autism spectrum disorders, in which myelination is impaired in some cases, Feng says.

“We think this can be expanded into autism and other neurodevelopmental disorders. For these conditions, improved myelination may be a major factor in treatment,” he says. “We are now checking other animal models of neurodevelopmental disorders to see whether they have myelination defects, and whether improved myelination can improve some of the pathology of the defects.”

The research was funded by the Simons Foundation, the Poitras Center for Affective Disorders Research at MIT, the Stanley Center for Psychiatric Research at the Broad Institute of MIT and Harvard, and the Simons Center for the Social Brain at MIT.

How our gray matter tackles gray areas

When Katie O’Nell’s high school biology teacher showed a NOVA video on epigenetics after the AP exam, he was mostly trying to fill time. But for O’Nell, the video sparked a whole new area of curiosity.

She was fascinated by the idea that certain genes could be turned on and off, controlling what traits or processes were expressed without actually editing the genetic code itself. She was further excited about what this process could mean for the human mind.

But upon starting at MIT, she realized that she was less interested in the cellular level of neuroscience and more fascinated by bigger questions, such as, what makes certain people generous toward certain others? What’s the neuroscience behind morality?

“College is a time you can learn about anything you want, and what I want to know is why humans are really, really wacky,” she says. “We’re dumb, we make super irrational decisions, it makes no sense. Sometimes it’s beautiful, sometimes it’s awful.”

O’Nell, a senior majoring in brain and cognitive sciences, is one of five MIT students to have received a Marshall Scholarship this year. Her quest to understand the intricacies of the wacky human brain will not be limited to any one continent. She will be using the funding to earn her master’s in experimental psychology at Oxford University.

Chocolate milk and the mouse brain

O’Nell’s first neuroscience-related research experience at MIT took place during her sophomore and junior years, in the lab of Institute Professor Ann Graybiel at the McGovern Institute.

The research studied the neurological components of risk-vs-reward decision making, using a key ingredient: chocolate milk. In the experiments, mice were given two options — they could go toward the richer, sweeter chocolate milk, but they would also have to endure a brighter light. Or, they could go toward a more watered-down chocolate milk, with the benefit of a softer light. All the while, a fluorescence microscope tracked when certain cell types were being activated.

“I think that’s probably the closest thing I’ve ever had to a spiritual experience … watching this mouse in this maze deciding what to do, and watching the cells light up on the screen. You can see single-cell evidence of cognition going on. That’s just the coolest thing.”

In her junior spring, O’Nell delved even deeper into questions of morality in the lab of Professor Rebecca Saxe. Her research there centers on how the human brain parses people’s identities and emotional states from their faces alone, and how those computations are related to each other. Part of what interests O’Nell is the fact that we are constantly making decisions, about ourselves and others, with limited information.

“We’re always solving under uncertainty,” she says. “And our brain does it so well, in so many ways.”

International intrigue

Outside of class, O’Nell has no shortage of things to do. For starters, she has been serving as an associate advisor for a first-year seminar since the fall of her sophomore year.

“Basically it’s my job to sit in on a seminar and bully them into not taking seven classes at a time, and reminding them that yes, your first 8.01 exam is tomorrow,” she says with a laugh.

She has also continued an activity she was passionate about in high school — Model United Nations. One of the most fun parts for her is serving on the Historical Crisis Committee, in which delegates must try to figure out a way to solve a real historical problem, like the Cuban Missile Crisis or the French and Indian War.

“This year they failed and the world was a nuclear wasteland,” she says. “Last year, I don’t entirely know how this happened, but France decided that they wanted to abandon the North American theater entirely and just took over all of Britain’s holdings in India.”

She’s also part of an MIT program called the Addir Interfaith Fellowship, in which a small group of people meet each week and discuss a topic related to religion and spirituality. Before joining, she didn’t think it was something she’d be interested in — but after being placed in a first-year class about science and spirituality, she has found discussing religion to be really stimulating. She’s been a part of the group ever since.

O’Nell has also been heavily involved in writing and producing a Mystery Dinner Theater for Campus Preview Weekend, on behalf of her living group J Entry, in MacGregor House. The plot, generally, is MIT-themed — a physics professor might get killed by a swarm of CRISPR nanobots, for instance. When she’s not cooking up murder mysteries, she might be running SAT classes for high school students, playing piano, reading, or spending time with friends. Or, when she needs to go grocery shopping, she’ll be stopping by the Trader Joe’s on Boylston Street, as an excuse to visit the Boston Public Library across the street.

Quite excited for the future

O’Nell is excited that the Marshall Scholarship will enable her to live in the country that produced so many of the books she cherished as a kid, like “The Hobbit.” She’s also thrilled to further her research there. However, she jokes that she still needs to get some of the lingo down.

“I need to learn how to use the word ‘quite’ correctly. Because I overuse it in the American way,” she says.

Her master’s research will largely expand on the principles she’s been examining in the Saxe lab. Questions of morality, processing, and social interaction are where she aims to focus her attention.

“My master’s project is going to be basically taking a look at whether how difficult it is for you to determine someone else’s facial expression changes how generous you are with people,” she explains.

After that, she hopes to follow the standard research track of earning a PhD, doing postdoctoral research, and then entering academia as a professor and researcher. Teaching and researching, she says, are two of her favorite things — she’s excited to have the chance to do both at the same time. But that’s a few years ahead. Right now, she hopes to use her time in England to learn all she can about the deeper functions of the brain, with or without chocolate milk.