Speaking many languages

Ev Fedorenko studies the cognitive processes and brain regions underlying language, a signature cognitive skill that is uniquely and universally human. She investigates both people with linguistic impairments and those with exceptional language skills: hyperpolyglots, people who are fluent in over a dozen languages. Indeed, she was recently interviewed for a BBC documentary about superlinguists, as well as by the New Yorker for an article covering people with exceptional language skills.

When Fedorenko, an associate investigator at the McGovern Institute and assistant professor in the Department of Brain and Cognitive Sciences at MIT, came to the field, neuroscientists were still debating whether high-level cognitive skills such as language are processed by multi-functional or dedicated brain regions. Using fMRI, Fedorenko and colleagues compared the engagement of brain regions when individuals performed linguistic tasks versus other high-level cognitive tasks, such as arithmetic or music. Their data revealed a clear distinction between language and other cognitive processes, showing that our brains have dedicated language regions.

Here is my basic question. How do I get a thought from my mind into yours?

In the years since this key study, Fedorenko has continued to unpack language in the brain. How does the brain process the overarching rules and structure of language (syntax), as opposed to the meanings of words? How do we construct complex meanings? What might underlie communicative difficulties in individuals diagnosed with autism? How does the aphasic brain recover language? Intriguingly, in contrast to individuals with linguistic difficulties, there are also individuals who stand out for their ability to master many languages, so-called hyperpolyglots.

In 2013, she came across a young adult who had mastered over 30 languages, a true language prodigy. To facilitate her analysis of the processing of different languages, Fedorenko has collected dozens of translations of Alice in Wonderland for her ‘Alice in the language localizer Wonderland’ project. She has already found that hyperpolyglots tend to show less activity in linguistic processing regions when reading in, or listening to, their native language, compared to carefully matched controls, perhaps indexing more efficient processing mechanisms. Fedorenko continues to study hyperpolyglots, along with other exciting new avenues of research. Stay tuned for upcoming advances in our understanding of the brain and language.

Mark Harnett receives a 2019 McKnight Scholar Award

McGovern Institute investigator Mark Harnett is one of six young researchers selected to receive a prestigious 2019 McKnight Scholar Award. The award supports his research “studying how dendrites, the antenna-like input structures of neurons, contribute to computation in neural networks.”

Harnett examines the biophysical properties of single neurons, ultimately aiming to understand how these relate to the complex computations that underlie behavior. His lab was the first to examine the biophysical properties of human dendrites. The Harnett lab found that human neurons have distinct properties, including increased dendritic compartmentalization that could allow more complex computations within single neurons. His lab recently discovered that such dendritic computations are not rare, or confined to specific behaviors, but are a widespread and general feature of neuronal activity.

“As a young investigator, it is hard to prioritize so many exciting directions and ideas,” explains Harnett. “I really want to thank the McKnight Foundation, both for the support, but also for the hard work the award committee puts into carefully thinking about and giving feedback on proposals. It means a lot to get this type of endorsement from a seriously committed and distinguished committee, and their support gives even stronger impetus to pursue this research direction.”

The McKnight Foundation has supported neuroscience research since 1977 and provides three prominent awards, with the Scholar Award aimed at supporting young scientists and drawing applications from the strongest young neuroscience faculty across the US. William L. McKnight (1887-1979) was an early leader of the 3M Company and had a personal interest in memory and brain diseases. The McKnight Foundation was established with this focus in mind, and the Scholar Award provides $75,000 per year for three years to support cutting-edge neuroscience research.

 

A chemical approach to imaging cells from the inside

A team of researchers at the McGovern Institute and the Broad Institute of MIT and Harvard has developed a new technique for mapping cells. The approach, called DNA microscopy, shows how biomolecules such as DNA and RNA are organized in cells and tissues, revealing spatial and molecular information that is not easily accessible through other microscopy methods. DNA microscopy also does not require specialized equipment, enabling large numbers of samples to be processed simultaneously.

“DNA microscopy is an entirely new way of visualizing cells that captures both spatial and genetic information simultaneously from a single specimen,” says first author Joshua Weinstein, a postdoctoral associate at the Broad Institute. “It will allow us to see how genetically unique cells — those comprising the immune system, cancer, or the gut, for instance — interact with one another and give rise to complex multicellular life.”

The new technique is described in Cell. Aviv Regev, core institute member and director of the Klarman Cell Observatory at the Broad Institute and professor of biology at MIT, and Feng Zhang, core institute member of the Broad Institute, investigator at the McGovern Institute for Brain Research at MIT, and the James and Patricia Poitras Professor of Neuroscience at MIT, are co-authors. Regev and Zhang are also Howard Hughes Medical Institute Investigators.

The evolution of biological imaging

In recent decades, researchers have developed tools to collect molecular information from tissue samples, data that cannot be captured by either light or electron microscopes. However, attempts to couple this molecular information with spatial data — to see how it is naturally arranged in a sample — are often machinery-intensive, with limited scalability.

DNA microscopy takes a new approach to combining molecular information with spatial data, using DNA itself as a tool.

To visualize a tissue sample, researchers first add small synthetic DNA tags, which latch on to molecules of genetic material inside cells. The tags are then replicated, diffusing in “clouds” across cells and chemically reacting with each other, further combining and creating more unique DNA labels. The labeled biomolecules are collected, sequenced, and computationally decoded to reconstruct their relative positions and a physical image of the sample.

The interactions between these DNA tags enable researchers to calculate the locations of the different molecules — somewhat analogous to cell phone towers triangulating the locations of different cell phones in their vicinity. Because the process only requires standard lab tools, it is efficient and scalable.
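To make the triangulation analogy concrete, here is a minimal sketch in Python of the decoding idea only, not the published DNA microscopy algorithm: molecules that sit close together accumulate more shared tag interactions, and those pairwise counts can be converted into estimated distances and then into relative positions using classical multidimensional scaling. The count model, names, and numbers below are illustrative assumptions.

```python
# Toy illustration of the decoding idea behind DNA microscopy (not the
# published algorithm): invert simulated pairwise tag-interaction counts
# into relative molecule positions via classical multidimensional scaling.
import numpy as np

rng = np.random.default_rng(0)
n_molecules = 60
true_xy = rng.uniform(0, 10, size=(n_molecules, 2))      # hidden ground truth

# Pairwise distances between molecules.
diff = true_xy[:, None, :] - true_xy[None, :, :]
dist = np.linalg.norm(diff, axis=-1)

# Assumed count model: nearby molecules share more concatenated tags.
diffusion_scale = 2.0
counts = np.exp(-(dist / diffusion_scale) ** 2)

# Invert the count model into estimated distances, then run classical MDS.
est_dist = diffusion_scale * np.sqrt(-np.log(np.clip(counts, 1e-12, None)))
D2 = est_dist ** 2
J = np.eye(n_molecules) - np.ones((n_molecules, n_molecules)) / n_molecules
B = -0.5 * J @ D2 @ J                        # double-centered Gram matrix
eigvals, eigvecs = np.linalg.eigh(B)
top2 = np.argsort(eigvals)[::-1][:2]
recovered_xy = eigvecs[:, top2] * np.sqrt(eigvals[top2])

# The layout is recovered only up to rotation, reflection, and translation,
# which is all that relative-position reconstruction requires.
print("recovered position of molecule 0:", recovered_xy[0])
```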

In this study, the authors demonstrate the ability to molecularly map the locations of individual human cancer cells in a sample by tagging RNA molecules. DNA microscopy could be used to map any group of molecules that will interact with the synthetic DNA tags, including cellular genomes, RNA, or proteins with DNA-labeled antibodies, according to the team.

“DNA microscopy gives us microscopic information without a microscope-defined coordinate system,” says Weinstein. “We’ve used DNA in a way that’s mathematically similar to photons in light microscopy. This allows us to visualize biology as cells see it and not as the human eye does. We’re excited to use this tool in expanding our understanding of genetic and molecular complexity.”

Funding for this study was provided by the Simons Foundation, Klarman Cell Observatory, NIH (R01HG009276, 1R01-HG009761, 1R01-MH110049, and 1DP1-HL141201), New York Stem Cell Foundation, Paul G. Allen Family Foundation, Vallee Foundation, the Poitras Center for Affective Disorders Research at MIT, the Hock E. Tan and K. Lisa Yang Center for Autism Research at MIT, J. and P. Poitras, and R. Metcalfe.

The authors have applied for a patent on this technology.

McGovern Institute postcard collection

A collection of 13 postcards arranged in columns.
The McGovern Institute postcard collection, 2023.

The McGovern Institute may be best known for its scientific breakthroughs, but a captivating series of brain-themed postcards developed by McGovern researchers and staff now reveals the institute’s artistic side.

What began in 2017 with a series of brain anatomy postcards inspired by the U.S. Works Progress Administration’s iconic national parks posters has grown into a collection of twelve different prints, each featuring a unique fusion of neuroscience and art.

More information about each series in the McGovern Institute postcard collection, including the color-your-own mindfulness postcards, can be found below.

Mindfulness Postcard Series, 2023

In winter 2023, the institute released its mindfulness postcard series, a collection of four different neuroscience-themed illustrations that can be colored in with pencils, markers, or paint. The postcard series was inspired by research conducted in John Gabrieli’s lab, which found that practicing mindfulness reduced children’s stress levels and negative emotions during the pandemic. These findings contribute to a growing body of evidence that practicing mindfulness — focusing awareness on the present, typically through meditation, but also through coloring — can change patterns of brain activity associated with emotions and mental health.

Download and color your own postcards.

Genes

The McGovern Institute is at the cutting edge of applications based on CRISPR, a genome editing tool pioneered by McGovern Investigator Feng Zhang. Hidden within this DNA-themed postcard are a clam, a virus, a bacteriophage, a snail, and the word CRISPR. Click the links to learn how these hidden elements relate to genetic engineering research at the McGovern Institute.

 

Line art showing strands of DNA and the McGovern Institute logo.
The McGovern Institute’s “mindfulness” postcard series includes this DNA-themed illustration containing five hidden design elements related to McGovern research. Image: Joseph Laney

Neurons

McGovern researchers probe the nanoscale and cellular processes that are critical to brain function, from the complex computations conducted within neurons to the synapses and neurotransmitters that facilitate messaging between cells. Find the mouse, worm, and microscope — three critical elements related to cellular and molecular neuroscience research at the McGovern Institute — in the postcard below.

 

Line art showing multiple neurons and the McGovern Institute logo.
The McGovern Institute’s “mindfulness” postcard series includes this neuron-themed illustration containing three hidden design elements related to McGovern research. Image: Joseph Laney

Human Brain

Cognitive neuroscientists at the McGovern Institute examine the brain processes that come together to inform our thoughts and understanding of the world. Find the musical note, speech bubbles, and human face in this postcard and click on the links to learn more about how these hidden elements relate to brain research at the McGovern Institute.

 

Line art of a human brain and the McGovern Institute logo.
The McGovern Institute’s “mindfulness” postcard series includes this brain-themed illustration containing three hidden design elements related to McGovern research. Image: Joseph Laney

Artificial Intelligence

McGovern researchers develop machine learning systems that mimic human processing of visual and auditory cues and construct algorithms to help us understand the complex computations made by the brain. Find the speech bubbles, DNA, and cochlea (spiral) in this postcard and click on the links to learn more about how these hidden elements relate to computational neuroscience research at the McGovern Institute.

Line art showing an artificial neural network in the shape of the human brain and the McGovern Institute logo.
The McGovern Institute’s “mindfulness” postcard series includes this AI-themed illustration containing three hidden design elements related to McGovern research. Image: Joseph Laney

Neuron Postcard Series, 2019

In 2019, the McGovern Institute released a second series of postcards based on the anatomy of a neuron. Each postcard includes text on the back side that describes McGovern research related to that specific part of the neuron. The descriptive text for each postcard is shown below.

Synapse

Snow melting off the branch of a bush at the water's edge creates a ripple effect in the pool of water below. Words at the bottom of the image say "It All Begins at the SYNAPSE."

Signals flow through the nervous system from one neuron to the next across synapses.

Synapses are exquisitely organized molecular machines that control the transmission of information.

McGovern researchers are studying how disruptions in synapse function can lead to brain disorders like autism.

Image: Joseph Laney

Axon

Illustration of three bears hunting for fish in a flowing river with the words: "Axon: Where Action Finds Potential"

The axon is the long, thin neural cable that carries electrical impulses called action potentials from the soma to synaptic terminals at downstream neurons.

Researchers at the McGovern Institute are developing and using tracers that label axons to reveal the elaborate circuit architecture of the brain.

Image: Joseph Laney

Soma

An elk stands on a rocky outcropping overlooking a large lake with an island in the center. Words at the top read: "Collect Your Thoughts at the Soma"

The soma, or cell body, is the control center of the neuron, where the nucleus is located.

It connects the dendrites to the axon, which sends information to other neurons.

At the McGovern Institute, neuroscientists are targeting the soma with proteins that can activate single neurons and map connections in the brain.

Image: Joseph Laney

Dendrites

A mountain lake at sunset with colorful fish and snow from a distant mountaintop melting into the lake. Words say "DENDRITIC ARBOR"

Long branching neuronal processes called dendrites receive synaptic inputs from thousands of other neurons and carry those signals to the cell body.

McGovern neuroscientists have discovered that human dendrites have different electrical properties from those of other species, which may contribute to the enhanced computing power of the human brain.

Image: Joseph Laney

Brain Anatomy Postcard Series, 2017

The original brain anatomy-themed postcard series, developed in 2017, was inspired by the U.S. Works Progress Administration’s iconic national parks posters created in the 1930s and 1940s. Each postcard includes text on the back side that describes McGovern research related to that specific brain region. The descriptive text for each postcard is shown below.

Sylvian Fissure

Illustration of explorer in cave labeled with temporal and parietal letters
The Sylvian fissure is a prominent groove on each side of the brain that separates the frontal and parietal lobes from the temporal lobe. McGovern researchers are studying a region near the right Sylvian fissure, called the rTPJ, which is involved in thinking about what another person is thinking.

Hippocampus

The hippocampus, named after its resemblance to the seahorse, plays an important role in memory. McGovern researchers are studying how changes in the strength of synapses (connections between neurons) in the hippocampus contribute to the formation and retention of memories.

Basal Ganglia

The basal ganglia are a group of deep brain structures best known for their control of movement. McGovern researchers are studying how the connections between the cerebral cortex and a part of the basal ganglia known as the striatum play a role in emotional decision making and motivation.

Arcuate Fasciculus

The arcuate fasciculus is a bundle of axons in the brain that connects Broca’s area, involved in speech production, and Wernicke’s area, involved in understanding language. McGovern researchers have found a correlation between the size of this structure and the risk of dyslexia in children.


Order and Share

To order your own McGovern brain postcards, contact our colleagues at the MIT Museum, where proceeds will support current and future exhibitions at the growing museum.

Please share a photo of yourself in your own lab (or natural habitat) with one of our cards on social media. Tell us what you’re studying and don’t forget to tag us @mcgovernmit using the hashtag #McGovernPostcards.

New gene-editing system precisely inserts large DNA sequences into cellular DNA

A team led by researchers from Broad Institute of MIT and Harvard, and the McGovern Institute for Brain Research at MIT, has characterized and engineered a new gene-editing system that can precisely and efficiently insert large DNA sequences into a genome. The system, harnessed from cyanobacteria and called CRISPR-associated transposase (CAST), allows efficient introduction of DNA while reducing the potential error-prone steps in the process — adding key capabilities to gene-editing technology and addressing a long-sought goal for precision gene editing.

Precise insertion of DNA has the potential to treat a large swath of genetic diseases by integrating new DNA into the genome while disabling the disease-related sequence. To accomplish this in cells, researchers have typically used CRISPR enzymes to cut the genome at the site of the deleterious sequence, and then relied on the cell’s own repair machinery to stitch the old and new DNA elements together. However, this approach has many limitations.

Using Escherichia coli bacteria, the researchers have now demonstrated that CAST can be programmed to efficiently insert new DNA at a designated site, with minimal editing errors and without relying on the cell’s own repair machinery. The system holds potential for much more efficient gene insertion compared to previous technologies, according to the team.

The researchers are working to apply this editing platform in eukaryotic organisms, including plant and animal cells, for precision research and therapeutic applications.

The team molecularly characterized and harnessed CAST from two cyanobacteria, Scytonema hofmanni and Anabaena cylindrica, and additionally revealed a new way that some CRISPR systems perform in nature: not to protect bacteria from viruses, but to facilitate the spread of transposon DNA.

The work, appearing in Science, was led by first author Jonathan Strecker, a postdoctoral fellow at the Broad Institute; graduate student Alim Ladha at MIT; and senior author Feng Zhang, a core institute member at the Broad Institute, investigator at the McGovern Institute for Brain Research at MIT, the James and Patricia Poitras Professor of Neuroscience at MIT, and an associate professor at MIT, with joint appointments in the departments of Brain and Cognitive Sciences and Biological Engineering. Collaborators include Eugene Koonin at the National Institutes of Health.

A New Role for a CRISPR-Associated System

“One of the long-sought-after applications for molecular biology is the ability to introduce new DNA into the genome precisely, efficiently, and safely,” explains Zhang. “We have worked on many bacterial proteins in the past to harness them for editing in human cells, and we’re excited to further develop CAST and open up these new capabilities for manipulating the genome.”

To expand the gene-editing toolbox, the team turned to transposons. Transposons (sometimes called “jumping genes”) are DNA sequences with associated proteins — transposases — that allow the DNA to be cut-and-pasted into other places.

Most transposons appear to jump randomly throughout the cellular genome and out to viruses or plasmids that may also be inhabiting a cell. However, some transposon subtypes in cyanobacteria have been computationally associated with CRISPR systems, suggesting that these transposons may naturally be guided towards more-specific genetic targets. This theorized function would be a new role for CRISPR systems; most known CRISPR elements are instead part of a bacterial immune system, in which Cas enzymes and their guide RNA will target and destroy viruses or plasmids.

In this paper, the research team identified the mechanisms at work and determined that some CRISPR-associated transposases have hijacked an enzyme called Cas12k and its guide to insert DNA at specific targets, rather than just cutting the target for defensive purposes.

“We dove deeply into this system in cyanobacteria, began taking CAST apart to understand all of its components, and discovered this novel biological function,” says Strecker, a postdoctoral fellow in Zhang’s lab at the Broad Institute. “CRISPR-based tools are often DNA-cutting tools, and they’re very efficient at disrupting genes. In contrast, CAST is naturally set up to integrate genes. To our knowledge, it’s the first system of this kind that has been characterized and manipulated.”

Harnessing CAST for Genome Editing

Once all the elements and molecular requirements of the CAST system were laid bare, the team focused on programming CAST to insert DNA at desired sites in E. coli.

“We reconstituted the system in E. coli and co-opted this mechanism in a way that was useful,” says Strecker. “We reprogrammed the system to introduce new DNA, up to 10 kilobase pairs long, into specific locations in the genome.”

The team envisions basic research, agricultural, or therapeutic applications based on this platform, such as introducing new genes to replace DNA that has mutated in a harmful way — for example, in sickle cell disease. Systems developed with CAST could potentially be used to integrate a healthy version of a gene into a cell’s genome, disabling or overriding the DNA causing problems.

Alternatively, rather than inserting DNA with the purpose of fixing a deleterious version of a gene, CAST may be used to augment healthy cells with elements that are therapeutically beneficial, according to the team. For example, in immunotherapy, a researcher may want to introduce a “chimeric antigen receptor” (CAR) into a specific spot in the genome of a T cell — enabling the T cell to recognize and destroy cancer cells.

“For any situation where people want to insert DNA, CAST could be a much more attractive approach,” says Zhang. “This just underscores how diverse nature can be and how many unexpected features we have yet to find.”

Support for this study was provided in part by the Human Frontier Science Program, New York Stem Cell Foundation, Mathers Foundation, NIH (1R01-HG009761, 1R01-MH110049, and 1DP1-HL141201), Howard Hughes Medical Institute, Poitras Center for Psychiatric Disorders Research, J. and P. Poitras, and Hock E. Tan and K. Lisa Yang Center for Autism Research.

J.S. and F.Z. are co-inventors on US provisional patent application no. 62/780,658 filed by the Broad Institute, relating to CRISPR-associated transposases.

Expression plasmids are available from Addgene.

Our brains appear uniquely tuned for musical pitch

In the eternal search for understanding what makes us human, scientists found that our brains are more sensitive to pitch, the harmonic tones we hear when listening to music, than those of our evolutionary relative, the macaque monkey. The study, funded in part by the National Institutes of Health, highlights the promise of Sound Health, a joint project between the NIH and the John F. Kennedy Center for the Performing Arts, in association with the National Endowment for the Arts, that aims to understand the role of music in health.

“We found that a certain region of our brains has a stronger preference for sounds with pitch than macaque monkey brains,” said Bevil Conway, Ph.D., investigator in the NIH’s Intramural Research Program and a senior author of the study published in Nature Neuroscience. “The results raise the possibility that these sounds, which are embedded in speech and music, may have shaped the basic organization of the human brain.”

The study started with a friendly bet between Dr. Conway and Sam Norman-Haignere, Ph.D., a post-doctoral fellow at Columbia University’s Zuckerman Institute for Mind, Brain, and Behavior and the first author of the paper.

At the time, both were working at the Massachusetts Institute of Technology (MIT). Dr. Conway’s team had been searching for differences between how human and monkey brains control vision, only to discover that there are very few. Their brain mapping studies suggested that humans and monkeys see the world in very similar ways. But then, Dr. Conway heard about some studies on hearing being done by Dr. Norman-Haignere, who, at the time, was a post-doctoral fellow in the laboratory of Josh H. McDermott, Ph.D., associate professor at MIT.

“I told Bevil that we had a method for reliably identifying a region in the human brain that selectively responds to sounds with pitch,” said Dr. Norman-Haignere. That is when they got the idea to compare humans with monkeys. Based on his studies, Dr. Conway bet that they would see no differences.

To test this, the researchers played a series of harmonic sounds, or tones, to healthy volunteers and monkeys while using functional magnetic resonance imaging (fMRI) to monitor brain activity in response to the sounds. The researchers also monitored brain activity in response to toneless noises that were designed to match the frequency levels of each tone played.

At first glance, the scans looked similar and confirmed previous studies. Maps of the auditory cortex of human and monkey brains had similar hot spots of activity regardless of whether the sounds contained tones.

However, when the researchers looked more closely at the data, they found evidence suggesting the human brain was highly sensitive to tones. The human auditory cortex was much more responsive than the monkey cortex when they looked at the relative activity between tones and equivalent noisy sounds.
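As a rough illustration of what “relative activity” between tones and matched noise can mean, the short Python sketch below computes a simple tone-versus-noise selectivity index on made-up response values. It is a generic contrast measure with hypothetical numbers, for illustration only, not the metric or data reported in the study.

```python
# Illustrative only: a generic tone-vs-noise contrast on hypothetical values,
# not the analysis or data from the study.
import numpy as np

def tone_selectivity(tone_response, noise_response):
    """Positive values indicate stronger responses to harmonic tones than to
    frequency-matched noise; values near zero indicate no preference."""
    tone = np.asarray(tone_response, dtype=float)
    noise = np.asarray(noise_response, dtype=float)
    return (tone - noise) / (tone + noise)

# Hypothetical average fMRI responses (arbitrary units) for one auditory region.
human_index = tone_selectivity(tone_response=[1.8, 2.1, 1.9],
                               noise_response=[1.0, 1.1, 0.9])
monkey_index = tone_selectivity(tone_response=[1.1, 1.0, 1.2],
                                noise_response=[1.0, 0.9, 1.1])
print("human tone-selectivity index:", round(float(human_index.mean()), 2))
print("monkey tone-selectivity index:", round(float(monkey_index.mean()), 2))
```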

“We found that human and monkey brains had very similar responses to sounds in any given frequency range. It’s when we added tonal structure to the sounds that some of these same regions of the human brain became more responsive,” said Dr. Conway. “These results suggest the macaque monkey may experience music and other sounds differently. In contrast, the macaque’s experience of the visual world is probably very similar to our own. It makes one wonder what kind of sounds our evolutionary ancestors experienced.”

Further experiments supported these results. Slightly raising the volume of the tonal sounds had little effect on the tone sensitivity observed in the brains of two monkeys.

Finally, the researchers saw similar results when they used sounds that contained more natural harmonies for monkeys by playing recordings of macaque calls. Brain scans showed that the human auditory cortex was much more responsive than the monkey cortex when they compared relative activity between the calls and toneless, noisy versions of the calls.

“This finding suggests that speech and music may have fundamentally changed the way our brain processes pitch,” said Dr. Conway. “It may also help explain why it has been so hard for scientists to train monkeys to perform auditory tasks that humans find relatively effortless.”

Earlier this year, other scientists from around the U.S. applied for the first round of NIH Sound Health research grants. Some of these grants may eventually support scientists who plan to explore how music turns on the circuitry of the auditory cortex that makes our brains sensitive to musical pitch.

This study was supported by the NINDS, NEI, NIMH, and NIA Intramural Research Programs and by grants from the NIH (EY13455, EY023322, EB015896, RR021110), the National Science Foundation (1353571, CCF-1231216), the McDonnell Foundation, and the Howard Hughes Medical Institute.

Can we think without language?

As part of our Ask the Brain series, Anna Ivanova, a graduate student who studies how the brain processes language in the labs of Nancy Kanwisher and Evelina Fedorenko, answers the question, “Can we think without language?”

Graduate student Anna Ivanova studies language processing in the brain.

_____

Imagine a woman – let’s call her Sue. One day Sue suffers a stroke that destroys large areas of brain tissue within her left hemisphere. As a result, she develops a condition known as global aphasia, meaning she can no longer produce or understand phrases and sentences. The question is: to what extent are Sue’s thinking abilities preserved?

Many writers and philosophers have drawn a strong connection between language and thought. Oscar Wilde called language “the parent, and not the child, of thought.” Ludwig Wittgenstein claimed that “the limits of my language mean the limits of my world.” And Bertrand Russell stated that the role of language is “to make possible thoughts which could not exist without it.” Given this view, Sue should have irreparable damage to her cognitive abilities when she loses access to language. Do neuroscientists agree? Not quite.

Neuroimaging evidence has revealed a specialized set of regions within the human brain that respond strongly and selectively to language.

This language system seems to be distinct from regions that are linked to our ability to plan, remember, reflect on the past and future, reason in social situations, experience empathy, make moral decisions, and construct a self-image. Thus, vast portions of our everyday cognitive experiences appear to be unrelated to language per se.

But what about Sue? Can she really think the way we do?

While we cannot directly measure what it’s like to think like a neurotypical adult, we can probe Sue’s cognitive abilities by asking her to perform a variety of different tasks. It turns out that patients with global aphasia can solve arithmetic problems, reason about the intentions of others, and engage in complex causal reasoning tasks. They can tell whether a drawing depicts a real-life event and laugh when it doesn’t. Some of them play chess in their spare time. Some even engage in creative tasks – the composer Vissarion Shebalin continued to write music even after a stroke that left him severely aphasic.

Some readers might find these results surprising, given that their own thoughts seem to be tied to language so closely. If you find yourself in that category, I have a surprise for you – research has established that not everybody has inner speech experiences. A bilingual friend of mine sometimes gets asked if she thinks in English or Polish, but she doesn’t quite get the question (“how can you think in a language?”). Another friend of mine claims that he “thinks in landscapes,” a sentiment that conveys the pictorial nature of some people’s thoughts. Therefore, even inner speech does not appear to be necessary for thought.

Have we solved the mystery then? Can we claim that language and thought are completely independent and Bertrand Russell was wrong? Only to some extent. We have shown that damage to the language system within an adult human brain leaves most other cognitive functions intact. However, when it comes to the language-thought link across the entire lifespan, the picture is far less clear. While available evidence is scarce, it does indicate that some of the cognitive functions discussed above are, at least to some extent, acquired through language.

Perhaps the clearest case is numbers. There are certain tribes around the world whose languages do not have number words – some might only have words for one through five (Munduruku), and some do not even have those (Pirahã). Speakers of Pirahã have been shown to make mistakes on one-to-one matching tasks (“get as many sticks as there are balls”), suggesting that language plays an important role in bootstrapping exact number manipulations.

Another way to examine the influence of language on cognition over time is by studying cases when language access is delayed. Deaf children born into hearing families often do not get exposure to sign languages for the first few months or even years of life; such language deprivation has been shown to impair their ability to engage in social interactions and reason about the intentions of others. Thus, while the language system may not be directly involved in the process of thinking, it is crucial for acquiring enough information to properly set up various cognitive domains.

Even after her stroke, our patient Sue will have access to a wide range of cognitive abilities. She will be able to think by drawing on neural systems underlying many non-linguistic skills, such as numerical cognition, planning, and social reasoning. It is worth bearing in mind, however, that at least some of those systems might have relied on language back when Sue was a child. While the static view of the human mind suggests that language and thought are largely disconnected, the dynamic view hints at a rich nature of language-thought interactions across development.

_____

Do you have a question for The Brain? Ask it here.

Ed Boyden elected to National Academy of Sciences

Ed Boyden has been elected to join the National Academy of Sciences (NAS). The organization, established by an act of Congress during the height of the Civil War, was founded to provide independent and objective advice on scientific matters to the nation, and is actively engaged in furthering science in the United States. Each year NAS members recognize fellow scientists through election to the academy based on their distinguished and continuing achievements in original research.

“I’m very honored and grateful to have been elected to the NAS,” says Boyden. “This is a testament to the work of many graduate students, postdoctoral scholars, research scientists, and staff at MIT who have worked with me over the years, and many collaborators and friends at MIT and around the world who have helped our group on this mission to advance neuroscience through new tools and ways of thinking.”

Boyden’s research creates and applies technologies that aim to expand our understanding of the brain. He notably co-invented optogenetics, a game-changing technology that has revolutionized neurobiology, in an independent side collaboration conducted in parallel with his PhD studies. This technology uses targeted expression of light-sensitive channels and pumps to activate or suppress neuronal activity in vivo using light. Optogenetics quickly swept the field of neurobiology and has been leveraged to understand how specific neurons and brain regions contribute to behavior and to disease.

His research since has had an overarching focus on understanding the brain. To this end, he and his lab have the ambitious goal of developing technologies that can map, record, and manipulate the brain. Selected examples include the invention of expansion microscopy, a super-resolution imaging technology that can capture neurons’ microstructures and reveal their complex connections, even across large-scale neural circuits; voltage-sensitive fluorescent reporters that allow neural activity to be monitored in vivo; and temporal interference stimulation, a non-invasive brain stimulation technique that allows selective activation of subcortical brain regions.

“We are all incredibly happy to see Ed being elected to the academy,” says Robert Desimone, director of the McGovern Institute for Brain Research at MIT. “He has been consistently innovative, inventing new ways of manipulating and observing neurons that are revolutionizing the field of neuroscience.”

This year the NAS, an organization that includes over 500 Nobel Laureates, elected 100 new members and 25 foreign associates. Three MIT professors were elected this year, with Paula T. Hammond (David H. Koch (1962) Professor of Engineering and Department Head, Chemical Engineering) and Aviv Regev (HHMI Investigator and Professor in the Department of Biology) being elected alongside Boyden. Boyden becomes the seventh member of the McGovern Institute faculty to join the National Academy of Sciences.

The formal induction ceremony for new NAS members, during which they sign the ledger whose first signatory is Abraham Lincoln, will be held at the Academy’s annual meeting in Washington D.C. next spring.


Algorithms of intelligence

The following post is adapted from a story featured in a recent Brain Scan newsletter.

Machine vision systems are increasingly common in everyday life, from social media to self-driving cars, but training artificial neural networks to “see” the world as we do—distinguishing cyclists from signposts—remains challenging. Will artificial neural networks ever decode the world as exquisitely as humans? Can we refine these models and influence perception in a person’s brain just by activating individual, selected neurons? The DiCarlo lab, including CBMM postdocs Kohitij Kar and Pouya Bashivan, is finding that we are surprisingly close to answering “yes” to such questions, all in the context of accelerated insights into artificial intelligence at the McGovern Institute for Brain Research, CBMM, and the Quest for Intelligence at MIT.

Precision Modeling

Seeing involves more than light hitting the retina: the recognition process that unfolds in the visual cortex is key to truly “seeing” the surrounding world. Information is decoded through the ventral visual stream, a set of cortical brain regions that progressively build a more accurate, fine-grained, and accessible representation of the objects around us. Artificial neural networks have been modeled on these elegant cortical systems, and the most successful models, deep convolutional neural networks (DCNNs), can now decode objects at levels comparable to the primate brain. However, even leading DCNNs have problems with certain challenging images, presumably due to shadows, clutter, and other visual noise. While there’s no simple feature that unites all challenging images, the quest is on to tackle such images and attain precise recognition at a level commensurate with human object recognition.

“One next step is to couple this new precision tool with our emerging understanding of how neural patterns underlie object perception. This might allow us to create arrangements of pixels that look nothing like, for example, a cat, but that can fool the brain into thinking it’s seeing a cat.” — James DiCarlo

In a recent push, Kar and DiCarlo demonstrated that adding feedback connections, currently missing in most DCNNs, allows the system to better recognize objects in challenging situations, even those where a human can’t articulate why recognition is an issue for feedforward DCNNs. They also found that this recurrent circuit seems critical to primate success rates in performing this task. This is incredibly important for systems like self-driving cars, where the stakes for artificial visual systems are high, and faithful recognition is a must.

Now you see it

As artificial object recognition systems have become more precise in predicting neural activity, the DiCarlo lab wondered what such precision might allow: could they use their system to not only predict, but to control specific neuronal activity?

To demonstrate the power of their models, Bashivan, Kar, and colleagues zeroed in on targeted neurons in the brain. In a paper published in Science, they used an artificial neural network to generate a random-looking group of pixels that, when shown to an animal, activated the team’s target, a neuron they called the “one hot neuron.” In other words, they showed the brain a synthetic pattern, and the pixels in the pattern precisely activated the targeted neuron while other neurons remained relatively silent.
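For readers curious what it looks like, computationally, to synthesize a pattern that drives one unit while keeping its neighbors quiet, here is a minimal toy sketch of the general gradient-ascent idea. It uses a small random linear model rather than the deep network models of visual cortex used in the study, and every name and number below is an illustrative assumption.

```python
# Toy sketch of synthesis-by-gradient-ascent (not the authors' models or
# stimuli): adjust an input so one model unit responds strongly while the
# other units stay relatively quiet.
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_units, target = 256, 10, 3
W = rng.normal(size=(n_units, n_pixels))        # stand-in for a trained model

def responses(x):
    return W @ x                                # toy "neural" responses

x = rng.normal(scale=0.01, size=n_pixels)       # start from a near-blank input
lr, penalty = 0.01, 0.1
others = np.delete(np.arange(n_units), target)
for _ in range(200):
    r = responses(x)
    # Gradient of the objective: r[target] - penalty * sum(r[others] ** 2)
    grad = W[target] - 2 * penalty * (W[others].T @ r[others])
    x += lr * grad
    x /= max(np.linalg.norm(x) / 10.0, 1.0)     # keep the "image" bounded

r = responses(x)
print("target unit response:", round(float(r[target]), 2))
print("mean |other unit| response:", round(float(np.abs(r[others]).mean()), 2))
```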

These findings show how the knowledge in today’s artificial neural network models might one day be used to noninvasively influence brain states with neural resolution. Such precise systems would be useful as we look to the future, toward visual prosthetics for the blind. Such a precise model of the ventral visual stream would have been inconceivable not so long ago, and all eyes are on where McGovern researchers will take these technologies in the coming years.

Recurrent architecture enhances object recognition in brain and AI

Your ability to recognize objects is remarkable. If you see a cup under unusual lighting or from unexpected directions, there’s a good chance that your brain will still compute that it is a cup. Such precise object recognition is one holy grail for AI developers, such as those improving self-driving car navigation. While modeling primate object recognition in the visual cortex has revolutionized artificial visual recognition systems, current deep learning systems are simplified, and fail to recognize some objects that are child’s play for primates such as humans. In findings published in Nature Neuroscience, McGovern Investigator James DiCarlo and colleagues have found evidence that feedback improves recognition of hard-to-recognize objects in the primate brain, and that adding feedback circuitry also improves the performance of artificial neural network systems used for vision applications.

Deep convolutional neural networks (DCNNs) are currently the most successful models for accurately recognizing objects on a fast timescale (<100 ms) and have a general architecture inspired by the primate ventral visual stream, the cortical regions that progressively build an accessible and refined representation of viewed objects. Most DCNNs, however, are simple in comparison to the primate ventral stream.

“For a long period of time, we were far from a model-based understanding. Thus our field got started on this quest by modeling visual recognition as a feedforward process,” explains senior author DiCarlo, who is also the head of MIT’s Department of Brain and Cognitive Sciences and Research Co-Leader in the Center for Brains, Minds, and Machines (CBMM). “However, we know there are recurrent anatomical connections in brain regions linked to object recognition.”

Think of feedforward DCNNs, and the portion of the visual system that first attempts to capture objects, as a subway line that runs forward through a series of stations. The extra, recurrent brain networks are instead like the streets above: interconnected and not unidirectional. Because it only takes about 200 ms for the brain to recognize an object quite accurately, it was unclear whether these recurrent interconnections in the brain had any role at all in core object recognition. Perhaps those recurrent connections are only in place to keep the visual system in tune over long periods of time; the streets’ return gutters, for example, help slowly clear them of water and trash, but are not strictly needed to quickly move people from one end of town to the other. DiCarlo, along with lead author and CBMM postdoc Kohitij Kar, set out to test whether a subtle role of recurrent operations in rapid visual object recognition was being overlooked.

Challenging recognition

The authors first needed to identify objects that are trivially decoded by the primate brain, but are challenging for artificial systems. Rather than trying to guess why deep learning was having problems recognizing an object (is it due to clutter in the image? a misleading shadow?), the authors took an unbiased approach that turned out to be critical.

Kar explained further that “we realized that AI models actually don’t have problems with every image where an object is occluded or in clutter. Humans trying to guess why AI models were challenged turned out to be holding us back.”

Instead, the authors presented the deep learning system, as well as monkeys and humans, with images, homing in on “challenge images” in which the primates could easily recognize the objects but a feedforward DCNN ran into problems. When they, and others, added appropriate recurrent processing to these DCNNs, object recognition in challenge images suddenly became a breeze.
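For a concrete sense of what “adding recurrent processing” to a feedforward network can mean, here is a minimal PyTorch sketch of a convolutional stage with a lateral recurrent connection that refines its output over a few timesteps. The architecture, names, and parameters are illustrative assumptions, not the specific models evaluated in the study.

```python
# Illustrative sketch (not the study's models): a convolutional stage with a
# lateral recurrent connection, so the layer refines its response over several
# timesteps instead of computing it in a single feedforward pass.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecurrentConvBlock(nn.Module):
    def __init__(self, in_channels, out_channels, timesteps=4):
        super().__init__()
        self.timesteps = timesteps
        self.feedforward = nn.Conv2d(in_channels, out_channels, 3, padding=1)
        self.lateral = nn.Conv2d(out_channels, out_channels, 3, padding=1)

    def forward(self, x):
        drive = self.feedforward(x)      # feedforward drive, computed once
        state = F.relu(drive)
        for _ in range(self.timesteps - 1):
            # Recurrent update: combine the feedforward drive with the
            # block's own previous output.
            state = F.relu(drive + self.lateral(state))
        return state

# Example: one recurrent stage applied to a batch of two 3-channel images.
block = RecurrentConvBlock(in_channels=3, out_channels=16, timesteps=4)
images = torch.randn(2, 3, 64, 64)
features = block(images)
print(features.shape)                    # torch.Size([2, 16, 64, 64])
```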

Processing times

Kar used neural recording methods with very high spatial and temporal precision to test whether these images were really so trivial for primates. Remarkably, they found that though challenge images had initially appeared to be child’s play to the human brain, they actually involve extra neural processing time (about 30 additional milliseconds), suggesting that recurrent loops operate in our brain too.

 “What the computer vision community has recently achieved by stacking more and more layers onto artificial neural networks, evolution has achieved through a brain architecture with recurrent connections.” — Kohitij Kar

Diane Beck, Professor of Psychology and Co-chair of the Intelligent Systems Theme at the Beckman Institute and not an author on the study, explained further. “Since entirely feed forward deep convolutional nets are now remarkably good at predicting primate brain activity, it raised questions about the role of feedback connections in the primate brain. This study shows that, yes, feedback connections are very likely playing a role in object recognition after all.”

What does this mean for a self-driving car? It shows that deep learning architectures involved in object recognition need recurrent components if they are to match the primate brain, and also indicates how to operationalize this procedure for the next generation of intelligent machines.

“Recurrent models offer predictions of neural activity and behavior over time,” says Kar. “We may now be able to model more involved tasks. Perhaps one day, the systems will not only recognize an object, such as a person, but also perform cognitive tasks that the human brain so easily manages, such as understanding the emotions of other people.”

This work was supported by Office of Naval Research grant MURI-114407 (J.J.D.) and by the Center for Brains, Minds, and Machines (CBMM), funded by NSF STC award CCF-1231216 (K.K.).