Single neurons can encode distinct landmarks

The organization of many neurons wired together in a complex circuit gives the brain its ability to perform powerful calculations. Work from the Harnett lab recently showed that even single neurons can process more information than previously thought, representing distinct variables at the subcellular level during behavior.

McGovern Investigator Mark Harnett and postdoc Jakob Voigts conducted an extremely delicate and intricate imaging experiment on different parts of the same neuron in the mouse retrosplenial cortex during 2-D navigation. Their setup allowed two-photon imaging of neuronal subcompartments during free 2-D navigation with head rotation, the latter being important for following neural activity during naturalistic, complex behavior.

Recording computation by subcompartments in neurons.

 

In the work, published recently in Neuron, the authors used Ca2+ imaging to show that the soma of a single neuron was consistently active when mice were at particular landmarks as they navigated an arena. The dendrites (tree-like antennas that receive input from other neurons) of the very same neuron were robustly active, independently of the soma, at distinct positions and orientations in the arena. This strongly suggests that dendrites encode information distinct from that of their parent soma, in this case spatial variables during navigation, laying the foundation for studying subcellular processes during complex behaviors.

 

Shrinking CRISPR tools

Before CRISPR gene-editing tools can be used to treat brain disorders, scientists must find safe ways to deliver the tools to the brain. One promising method involves harnessing benign viruses and replacing their non-essential genetic cargo with therapeutic CRISPR tools. But there is limited room for additional tools in a vector already stuffed with essential gear.

Squeezing all the tools that are needed to edit the genome into a single delivery vector is a challenge. Soumya Kannan is addressing this capacity problem in Feng Zhang’s lab with fellow graduate student Han Altae-Tran, by developing smaller CRISPR tools that can be more easily packaged into viral vectors for delivery. She is focused on RNA editors, members of the Cas13 family that can fix small mutations in RNA without making changes to the genome itself.

“The limitation is that RNA editors are large. At this point though, we know that editing works, we understand the mechanism by which it works, and there’s feasible packaging in AAV. We’re now trying to shrink systems such as RESCUE and REPAIR so that they fit into the packaging for delivery.”

One of the many avenues the Zhang lab has taken to find new tools is to explore biodiversity, an approach that intrigues Soumya.

“Metagenomics projects are literally sequencing life from Antarctic ice cores to hot sea vents. It fascinates me to think about the CRISPR tools of ancient organisms and those that live in extreme conditions.”

Researchers continue to search these troves of sequencing data for new tools.

 

Two CRISPR scientists on the future of gene editing

As part of our Ask the Brain series, Martin Wienisch and Jonathan Wilde of the Feng lab look into the crystal ball to predict the future of CRISPR tech.

_____

Where will CRISPR be in five years?

Jonathan: We’ll definitely have more efficient, more precise, and safer editing tools. An immediate impact on human health may be closer than we think, through more nutritious and resilient crops. Also, I think we will have more viable tools available for repairing disease-causing mutations in the brain, which is something that the field is really lacking right now.

Martin: And we can use these technologies with new disease models to help us understand brain disorders such as Huntington’s disease.

Jonathan: There are also incredible tools being discovered in nature: exotic CRISPR systems from newly discovered bacteria and viruses. We could use these to attack disease-causing bacteria.

Martin: We would then be using CRISPR systems for the reason they evolved. Also, improved gene drives, CRISPR systems that can wipe out disease-carrying organisms such as mosquitoes, could impact human health in that time frame.

What will move gene therapy forward?

Martin: A breakthrough on delivery. That’s when therapy will exponentially move forward. Therapy will be tailored to different diseases and disorders, depending on relevant cell types or the location of mutations for example.

Jonathan: Also panning biodiversity even faster: we’ve only looked at one small part of the tree of life for tools. Sequencing and computational advances can help: a future where we collect and analyze genomes in the wild using portable sequencers and laptops can only quicken the pace of new discoveries.

_____

Do you have a question for The Brain? Ask it here.

CRISPR: From toolkit to therapy

Think of the human body as a community of cells with specialized roles. Each cell carries the same blueprint, an array of genes comprising the genome, but different cell types have unique functions — immune cells fight invading bacteria, while neurons transmit information.

But when something goes awry, the specialization of these cells becomes a challenge for treatment. For example, neurons lack active cell repair systems required for promising gene editing techniques like CRISPR.

Can current gene editing tools be modified to work in neurons? Can we reach neurons without impacting healthy cells nearby? McGovern Institute researchers are trying to answer these questions by developing gene editing tools and delivery systems that can target — and repair — faulty brain cells.

Expanding the toolkit

McGovern Investigator Feng Zhang in his lab.

Natural CRISPR systems help bacteria fend off would-be attackers. Our first glimpse of the impact of such systems was the use of CRISPR-Cas9 to edit human cells.

“Harnessing Cas9 was a major game-changer in the life sciences,” explains Feng Zhang, an investigator at the McGovern Institute and the James and Patricia Poitras Professor of Neuroscience at MIT. “But Cas9 is just one flavor of one kind of bacterial defense system — there is a treasure trove of natural systems that may have enormous potential, just waiting to be unlocked.”

By finding and optimizing new molecular tools, the Zhang lab and others have developed CRISPR tools that can now potentially target neurons and fix diverse mutation types, bringing gene therapy within reach.

Precise in space and time

A single-letter change to a gene can be devastating. Some genes function only briefly during development, so a temporary “fix” during this window could be beneficial. For such cases, the Zhang lab and others have engineered tools that target short-lived RNAs. These molecules act as messengers, carrying information from DNA to be converted into functional factors in the cell.

“RNA editing is powerful from an ethical and safety standpoint,” explains Soumya Kannan, a graduate student in the Zhang lab working on these tools. “By targeting RNA molecules, which are only present for a short time, we can avoid permanent changes to the genetic material, and we can make these changes in any type of cell.”

Graduate student Soumya Kannan is developing smaller CRISPR tools that can be more easily packaged into viral vectors for delivery. Photo: Caitlin Cunningham

Zhang’s team has developed twin RNA-editing tools, REPAIR and RESCUE, which can fix single RNA bases by bringing together a base editor with the CRISPR protein Cas13. These RNA-editing tools can be used in neurons because they do not rely on cellular machinery to make the targeted changes. They also have the potential to tackle a wide array of diseases in other tissue types.

CAST addition

If a gene is severely disrupted, more radical help may be needed: insertion of a normal gene. For this situation, Zhang’s lab recently identified CRISPR-associated transposases (CASTs) from cyanobacteria. CASTs combine Cas12k, which is targeted by a guide RNA to a precise genome location, with an enzyme that can insert gene-sized pieces of DNA.

“With traditional CRISPR you can make simple changes, similar to changing a few letters or words in a Word document. The new system can ‘copy and paste’ entire genes.” – Alim Ladha

Transposases were originally identified as enzymes that help rogue genes “jump” from one place to another in the genome. CAST uses a similar activity to insert entire genes self-sufficiently, without help from the target cell, so, like REPAIR and RESCUE, it can potentially be used in neurons.

“Our initial work was to fully characterize how this new system works, and test whether it can actually insert genes,” explains Alim Ladha, a graduate fellow in the Tan-Yang Center for Autism Research, who worked on CAST with Jonathan Strecker, a postdoctoral fellow in the Zhang lab.

The goal is now to use CAST to precisely target neurons and other specific cell types affected by disease.

Toward delivery

As the gene-editing toolbox expands, McGovern labs are working on precise delivery systems. Adeno-associated virus (AAV) is an FDA-approved virus for delivering genes, but has limited room to carry the necessary cargo — CRISPR machinery plus templates — to fix genes.

To tackle this problem, McGovern Investigators Guoping Feng and Feng Zhang are working on reducing the cargo needed for therapy. In addition, the Zhang, Gootenberg and Abudayyeh labs are working on methods to precisely deliver the therapeutic packages to neurons, such as new tissue-specific viruses that can carry bigger payloads. Finally, entirely new modalities for delivery are being explored in the effort to develop gene therapy to a point where it can be safely delivered to patients.

“Cas9 has been a very useful tool for the life sciences,” says Zhang. “And it’ll be exciting to see continued progress with the broadening toolkit and delivery systems, as we make further progress toward safe gene therapies.”

McGovern scientists named STAT Wunderkinds

McGovern researchers Sam Rodriques and Jonathan Strecker have been named to the class of 2019 STAT Wunderkinds. This group of 22 researchers was selected from a national pool of hundreds of nominees; the award aims to recognize trail-blazing scientists who are on the cusp of launching their careers but are not yet fully independent.

“We were thrilled to receive this news,” said Robert Desimone, director of the McGovern Institute. “It’s great to see the remarkable progress being made by young scientists in McGovern labs be recognized in this way.”

Finding context

Sam Rodriques works in Ed Boyden’s lab at the McGovern Institute, where he develops new technologies that enable researchers to understand the behaviors of cells within their native spatial and temporal context.

“Psychiatric disease is a huge problem, but only a handful of first-in-class drugs for psychiatric diseases have been approved since the 1960s,” explains Rodriques, also affiliated with the MIT Media Lab and Broad Institute. “Coming up with novel cures is going to require new ways to generate hypotheses about the biological processes that underpin disease.”

Rodriques also works on several technologies within the Boyden lab, including preserving spatial information in molecular mapping technologies, finding ways of following neural connectivity in the brain, and Implosion Fabrication, or “Imp Fab.” This nanofabrication technology allows objects to be evenly shrunk to the nanoscale and has a wide range of potential applications, including building new miniature devices for examining neural function.

“I was very surprised, not expecting it at all!” explains Rodriques when asked about becoming a STAT Wunderkind. “I’m sure that all of the hundreds of applicants are very accomplished scientists, and so to be chosen like this is really an honor.”

New tools for gene editing

Jonathan Strecker is currently a postdoc in Feng Zhang’s lab, associated with both the McGovern Institute and the Broad Institute. While CRISPR-Cas9 continues to have a profound effect and huge potential for research, biomedical, and agricultural applications, the ability to move entire genes into specific target locations has remained out of reach.

“Genome editing with CRISPR-Cas enzymes typically involves cutting and disrupting genes, or making certain base edits,” explains Strecker. “However, inserting large pieces of DNA is still hard to accomplish.”

As a postdoctoral researcher in the lab of CRISPR pioneer Feng Zhang, Strecker led research that showed how large sequences could be inserted into a genome at a given location.

“Nature often has interesting solutions to these problems and we were fortunate to identify and characterize a remarkable CRISPR system from cyanobacteria that functions as a programmable transposase.”

Importantly, the system he discovered, called CAST, doesn’t require cellular machinery to insert DNA. This means that CAST could work in many cell types, including those that have stopped dividing, such as neurons, a possibility the lab is now pursuing.

By finding new sources of inspiration, be it nature or art, both Rodriques and Strecker join a stellar lineup of young investigators recognized for creativity and innovation.

 

Word Play

Ev Fedorenko uses the widely translated book “Alice in Wonderland” to test brain responses to different languages.

Language is a uniquely human ability that allows us to build vibrant pictures of non-existent places (think Wonderland or Westeros). How does the brain build mental worlds from words? Can machines do the same? Can we recover this ability after brain injury? These questions require an understanding of how the brain processes language, a fascination for Ev Fedorenko.

“I’ve always been interested in language. Early on, I wanted to found a company that teaches kids languages that share structure — Spanish, French, Italian — in one go,” says Fedorenko, an associate investigator at the McGovern Institute and an assistant professor in brain and cognitive sciences at MIT.

Her road to understanding how thoughts, ideas, emotions, and meaning can be delivered through sound and words became clear when she realized that language was accessible through cognitive neuroscience.

Early on, Fedorenko made a seminal finding that undermined dominant theories of the time. Scientists believed a single network was extracting meaning from all we experience: language, music, math, etc. Evolving separate networks for these functions seemed unlikely, as these capabilities arose recently in human evolution.

Ev Fedorenko has found that language regions of the brain (shown in teal) are sensitive to both word meaning and sentence structure. Image: Ev Fedorenko

But when Fedorenko examined brain activity in subjects while they read or heard sentences in the MRI, she found a network of brain regions that is indeed specialized for language.

“A lot of brain areas, like motor and social systems, were already in place when language emerged during human evolution,” explains Fedorenko. “In some sense, the brain seemed fully occupied. But rather than co-opt these existing systems, the evolution of language in humans involved language carving out specific brain regions.”

Different aspects of language recruit brain regions across the left hemisphere, including Broca’s area and portions of the temporal lobe. Many believe that certain regions are involved in processing word meaning while others unpack the rules of language. Fedorenko and colleagues have however shown that the entire language network is selectively engaged in linguistic tasks, processing both the rules (syntax) and meaning (semantics) of language in the same brain areas.

Semantic Argument

Fedorenko’s lab even challenges the prevailing view that syntax is core to language processing. By gradually degrading sentence structure through local word swaps (see figure below), they found that language regions still respond strongly to these degraded sentences, deciphering meaning from them even as syntax, or combinatorial rules, disappears.

The Fedorenko lab has shown that the brain finds meaning in a sentence, even when “local” words are swapped (2, 3). But when clusters of neighboring words are scrambled (4), the brain struggles to find its meaning.
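To make the manipulation concrete, here is a toy sketch in Python, with an invented example sentence, of how locally swapped versus cluster-scrambled sentences might be generated. It illustrates the general idea only, not the lab's actual stimuli or code.

```python
# Toy sketch: two ways of degrading a sentence, loosely in the spirit of the
# figure above (not the lab's actual stimuli). The example sentence is invented.
import random

sentence = "the curious child quietly opened the old wooden box".split()

def swap_local_pairs(words):
    """Swap adjacent words: structure is degraded, but meaning survives."""
    out = list(words)
    for i in range(0, len(out) - 1, 2):
        out[i], out[i + 1] = out[i + 1], out[i]
    return " ".join(out)

def scramble_within_clusters(words, cluster_size=4, seed=0):
    """Scramble words inside clusters of neighbors: meaning is much harder to recover."""
    rng = random.Random(seed)
    out = []
    for i in range(0, len(words), cluster_size):
        chunk = list(words[i:i + cluster_size])
        rng.shuffle(chunk)
        out.extend(chunk)
    return " ".join(out)

print(swap_local_pairs(sentence))          # locally swapped, still interpretable
print(scramble_within_clusters(sentence))  # cluster-scrambled, much harder to read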

“A lot of focus in language research has been on structure-building, or building a type of hierarchical graph of the words in a sentence. But actually the language system seems optimized and driven to find rich, representational meaning in a string of words processed together,” explains Fedorenko.

Computing Language

When asked about emerging areas of research, Fedorenko points to the data structures and algorithms underlying linguistic processing. Modern computational models can perform sophisticated tasks, including translation, ever more effectively. Consider Google Translate. A decade ago, the system translated one word at a time, with laughable results. Now, by treating words as context for one another, the latest artificial translation systems perform far more accurately. Understanding how they resolve meaning could be very revealing.

“Maybe we can link these models to human neural data to both get insights about linguistic computations in the human brain, and maybe help improve artificial systems by making them more human-like,” says Fedorenko.

She is also trying to understand how the system breaks down, how it over-performs, and even more philosophical questions. Can a person who loses language abilities (with aphasia, for example) recover them? This is a very relevant question given that the language-processing network occupies such specific brain regions. How are some unique people able to understand 10, 15, or even more languages? Do we need words to have thoughts?

Using a battery of approaches, Fedorenko seems poised to answer some of these questions.

New method visualizes groups of neurons as they compute

Using a fluorescent probe that lights up when brain cells are electrically active, MIT and Boston University researchers have shown that they can image the activity of many neurons at once, in the brains of mice.

McGovern Investigator Ed Boyden has developed a technology that allows neuroscientists to visualize the activity of circuits within the brain and link them to specific behaviors.

This technique, which can be performed using a simple light microscope, could allow neuroscientists to visualize the activity of circuits within the brain and link them to specific behaviors, says Edward Boyden, the Y. Eva Tan Professor in Neurotechnology and a professor of biological engineering and of brain and cognitive sciences at MIT.

“If you want to study a behavior, or a disease, you need to image the activity of populations of neurons because they work together in a network,” says Boyden, who is also a member of MIT’s McGovern Institute for Brain Research, Media Lab, and Koch Institute for Integrative Cancer Research.

Using this voltage-sensing molecule, the researchers showed that they could record electrical activity from many more neurons than has been possible with any existing, fully genetically encoded, fluorescent voltage probe.

Boyden and Xue Han, an associate professor of biomedical engineering at Boston University, are the senior authors of the study, which appears in the Oct. 9 online edition of Nature. The lead authors of the paper are MIT postdoc Kiryl Piatkevich, BU graduate student Seth Bensussen, and BU research scientist Hua-an Tseng.

Seeing connections

Neurons compute using rapid electrical impulses, which underlie our thoughts, behavior, and perception of the world. Traditional methods for measuring this electrical activity require inserting an electrode into the brain, a process that is labor-intensive and usually allows researchers to record from only one neuron at a time. Multielectrode arrays allow the monitoring of electrical activity from many neurons at once, but they don’t sample densely enough to get all the neurons within a given volume.  Calcium imaging does allow such dense sampling, but it measures calcium, an indirect and slow measure of neural electrical activity.

In 2018, MIT researchers developed a light-sensitive protein that can be embedded into neuron membranes, where it emits a fluorescent signal that indicates how much voltage a particular cell is experiencing. Image courtesy of the researchers

In 2018, Boyden’s team developed an alternative way to monitor electrical activity by labeling neurons with a fluorescent probe. Using a technique known as directed protein evolution, his group engineered a molecule called Archon1 that can be genetically inserted into neurons, where it becomes embedded in the cell membrane. When a neuron’s electrical activity increases, the molecule becomes brighter, and this fluorescence can be seen with a standard light microscope.

In the 2018 paper, Boyden and his colleagues showed that they could use the molecule to image electrical activity in the brains of transparent worms and zebrafish embryos, and also in mouse brain slices. In the new study, they wanted to try to use it in living, awake mice as they engaged in a specific behavior.

To do that, the researchers had to modify the probe so that it would go to a subregion of the neuron membrane. They found that when the molecule inserts itself throughout the entire cell membrane, the resulting images are blurry because the axons and dendrites that extend from neurons also fluoresce. To overcome that, the researchers attached a small peptide that guides the probe specifically to membranes of the cell bodies of neurons. They called this modified protein SomArchon.

“With SomArchon, you can see each cell as a distinct sphere,” Boyden says. “Rather than having one cell’s light blurring all its neighbors, each cell can speak by itself loudly and clearly, uncontaminated by its neighbors.”

The researchers used this probe to image activity in a part of the brain called the striatum, which is involved in planning movement, as mice ran on a ball. They were able to monitor activity in several neurons simultaneously and correlate each one’s activity with the mice’s movement. Some neurons’ activity went up when the mice were running, some went down, and others showed no significant change.

“Over the years, my lab has tried many different versions of voltage sensors, and none of them have worked in living mammalian brains until this one,” Han says.

Using this fluorescent probe, the researchers were able to obtain measurements similar to those recorded by an electrical probe, which can pick up activity on a very rapid timescale. This makes the measurements more informative than existing techniques such as imaging calcium, which neuroscientists often use as a proxy for electrical activity.

“We want to record electrical activity on a millisecond timescale,” Han says. “The timescale and activity patterns that we get from calcium imaging are very different. We really don’t know exactly how these calcium changes are related to electrical dynamics.”

With the new voltage sensor, it is also possible to measure very small fluctuations in activity that occur even when a neuron is not firing a spike. This could help neuroscientists study how small fluctuations impact a neuron’s overall behavior, which has previously been very difficult in living brains, Han says.

Mapping circuits

The researchers also showed that this imaging technique can be combined with optogenetics — a technique developed by the Boyden lab and collaborators that allows researchers to turn neurons on and off with light by engineering them to express light-sensitive proteins. In this case, the researchers activated certain neurons with light and then measured the resulting electrical activity in these neurons.

This imaging technology could also be combined with expansion microscopy, a technique that Boyden’s lab developed to expand brain tissue before imaging it, making it easier to see the anatomical connections between neurons in high resolution.

“One of my dream experiments is to image all the activity in a brain, and then use expansion microscopy to find the wiring between those neurons,” Boyden says. “Then can we predict how neural computations emerge from the wiring?”

Such wiring diagrams could allow researchers to pinpoint circuit abnormalities that underlie brain disorders, and may also help researchers to design artificial intelligence that more closely mimics the human brain, Boyden says.

The MIT portion of the research was funded by Edward and Kay Poitras, the National Institutes of Health, including a Director’s Pioneer Award, Charles Hieken, John Doerr, the National Science Foundation, the HHMI-Simons Faculty Scholars Program, the Human Frontier Science Program, and the U.S. Army Research Office.

Controlling our internal world

Olympic skaters can launch, perform multiple aerial turns, and land gracefully, anticipating imperfections and reacting quickly to correct course. To make such elegant movements, the brain must have an internal model of the body to control, predict, and make almost instantaneous adjustments to motor commands. So-called “internal models” are a fundamental concept in engineering and have long been suggested to underlie control of movement by the brain, but what about processes that occur in the absence of movement, such as contemplation, anticipation, planning?

Using a novel combination of task design, data analysis, and modeling, MIT neuroscientist Mehrdad Jazayeri and colleagues now provide compelling evidence that the core elements of an internal model also control purely mental processes in a study published in Nature Neuroscience.

“During my thesis I realized that I’m interested, not so much in how our senses react to sensory inputs, but instead in how my internal model of the world helps me make sense of those inputs,” says Jazayeri, the Robert A. Swanson Career Development Professor of Life Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Indeed, understanding the building blocks exerting control of such mental processes could help to paint a better picture of disruptions in mental disorders, such as schizophrenia.

Internal models for mental processes

Scientists working on the motor system have long theorized that the brain overcomes noisy and slow signals using an accurate internal model of the body. This internal model serves three critical functions: it provides motor commands to control movement, simulates upcoming movement to overcome delays, and uses feedback to make real-time adjustments.

“The framework that we currently use to think about how the brain controls our actions is one that we have borrowed from robotics: we use controllers, simulators, and sensory measurements to control machines and train operators,” explains Reza Shadmehr, a professor at the Johns Hopkins School of Medicine who was not involved with the study. “That framework has largely influenced how we imagine our brain controlling our movements.”
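To make the three roles concrete, here is a minimal sketch in Python of how a controller, a forward-model simulator, and delayed feedback could fit together. The dynamics, gains, and delay are invented for illustration; this is a sketch of the general framework, not the model used in the study.

```python
# Minimal sketch of the internal-model framework: a controller, a forward-model
# "simulator," and delayed sensory feedback. All dynamics, gains, and the delay
# are invented for illustration.
import random

DT = 0.01          # time step (s)
DELAY_STEPS = 10   # sensory feedback arrives 100 ms late
GAIN_CTRL = 2.0    # controller gain
GAIN_FB = 0.3      # how strongly delayed feedback corrects the prediction

goal = 1.0                          # desired state
true_state = 0.0                    # the "body" or world
predicted_state = 0.0               # the simulator's running estimate
delayed_obs = [0.0] * DELAY_STEPS   # queue of late measurements

for _ in range(500):
    # Controller: acts on the *predicted* state, since real feedback is late
    command = GAIN_CTRL * (goal - predicted_state)

    # World: integrates the command, plus a little noise
    true_state += command * DT + random.gauss(0.0, 0.001)

    # Simulator (forward model): predicts the consequence of the same command
    predicted_state += command * DT

    # Feedback: a delayed measurement arrives and nudges the prediction
    # (a fuller model would compare it against an equally delayed prediction)
    delayed_obs.append(true_state)
    measurement = delayed_obs.pop(0)
    predicted_state += GAIN_FB * (measurement - predicted_state)

print(f"true state {true_state:.3f}, predicted state {predicted_state:.3f}")
```

The design point the sketch captures is that the controller acts on the simulator's prediction rather than on raw, delayed measurements, which is what lets the loop compensate for sensory lag.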

Jazayeri and colleagues wondered whether the same framework might explain the control principles governing mental states in the absence of any movement.

“When we’re simply sitting, thoughts and images run through our heads and, fundamental to intellect, we can control them,” explains lead author Seth Egger, a former postdoctoral associate in the Jazayeri lab and now at Duke University.

“We wanted to find out what’s happening between our ears when we are engaged in thinking,” says Egger.

Imagine, for example, a sign language interpreter keeping up with a fast speaker. To track speech accurately, the interpreter continuously anticipates where the speech is going, rapidly adjusting when the actual words deviate from the prediction. The interpreter could be using an internal model to anticipate upcoming words, and using feedback to make adjustments on the fly.

1-2-3…Go

Hypothesizing about how the components of an internal model function in scenarios such as translation is one thing. Cleanly measuring and proving the existence of these elements is much more complicated as the activity of the controller, simulator, and feedback are intertwined. To tackle this problem, Jazayeri and colleagues devised a clever task with primate models in which the controller, simulator, and feedback act at distinct times.

In this task, called “1-2-3-Go,” the animal sees three consecutive flashes (1, 2, and 3) that form a regular beat, and learns to make an eye movement (Go) at the moment it anticipates the fourth flash would occur. During the task, researchers measured neural activity in a region of the frontal cortex they had previously linked to the timing of movement.
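As a toy illustration of the computation the task demands, not the study's analysis code, the anticipated time of the fourth flash can be read off the interval of the first three flashes; the flash times below are invented example values.

```python
# Toy illustration of the 1-2-3-Go computation: estimate the beat from the
# first three flashes and anticipate when the fourth would occur.
flash_times = [0.00, 0.75, 1.50]  # flashes 1, 2, 3 form a regular beat (seconds)

# Estimate the beat as the mean interval between consecutive flashes
intervals = [later - earlier for earlier, later in zip(flash_times, flash_times[1:])]
beat = sum(intervals) / len(intervals)

# The "Go" eye movement is timed to when the fourth flash is anticipated
anticipated_fourth = flash_times[-1] + beat
print(f"anticipated time of the 4th flash: {anticipated_fourth:.2f} s")
```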

Jazayeri and colleagues had clear predictions about when the controller would act (between the third flash and “Go”) and when feedback would be engaged (with each flash of light). The key surprise came when the researchers saw evidence of the simulator anticipating the third flash. This unexpected neural activity had dynamics resembling those of the controller, but was not associated with a response. In other words, the researchers uncovered a covert plan that functions as the simulator, thus revealing all three elements of an internal model for a mental process: the planning and anticipation of “Go” in the “1-2-3-Go” sequence.

“Jazayeri’s work is important because it demonstrates how to study mental simulation in animals,” explains Shadmehr, “and where in the brain that simulation is taking place.”

Having found how and where to measure an internal model in action, Jazayeri and colleagues now plan to ask whether these control strategies can explain how primates effortlessly generalize their knowledge from one behavioral context to another. For example, how does an interpreter rapidly adjust when someone with widely different speech habits takes the podium? This line of investigation promises to shed light on high-level mental capacities of the primate brain that simpler animals seem to lack, that go awry in mental disorders, and that designers of artificial intelligence systems so fondly seek.

What is the social brain?

As part of our Ask the Brain series, Anila D’Mello, a postdoctoral fellow in John Gabrieli’s lab, answers the question, “What is the social brain?”

_____

Anila D’Mello is the Simons Center for the Social Brain Postdoctoral Fellow in John Gabrieli’s lab at the McGovern Institute.

“Knock Knock.”
“Who’s there?”
“The Social Brain.”
“The Social Brain, who?”

Call and response jokes, like the “Knock Knock” joke above, leverage our common understanding of how a social interaction typically proceeds. Joke telling allows us to interact socially with others based on our shared experiences and understanding of the world. But where do these abilities “live” in the brain and how does the social brain develop?

Neuroimaging and lesion studies have identified a network of brain regions that support social interaction, including the ability to understand and partake in jokes – we refer to this as the “social brain.” This social brain network is made up of multiple regions throughout the brain that together support complex social interactions. Within this network, each region likely contributes to a specific type of social processing. The right temporo-parietal junction, for instance, is important for thinking about another person’s mental state, whereas the amygdala is important for the interpretation of emotional facial expressions and fear processing. Damage to these brain regions can have striking effects on social behaviors. One recent study even found that individuals with bigger amygdala volumes had larger and more complex social networks!

Though social interaction is such a fundamental human trait, we aren’t born with a prewired social brain.

Much of our social ability is grown and honed over time through repeated social interactions. Brain networks that support social interaction continue to specialize into adulthood. Neuroimaging work suggests that though newborn infants may have all the right brain parts to support social interaction, these regions may not yet be specialized or connected in the right way. This means that early experiences and environments can have large influences on the social brain. For instance, social neglect, especially very early in development, can have negative impacts on social behaviors and on how the social brain is wired. One prominent example is that of children raised in orphanages or institutions, who are sometimes faced with limited adult interaction or access to language. Children raised in these conditions are more likely to have social challenges including difficulties forming attachments. Prolonged lack of social stimulation also alters the social brain in these children resulting in changes in amygdala size and connections between social brain regions.

The social brain is not just a result of our environment. Genetics and biology also contribute to the social brain in ways we don’t yet fully understand. For example, individuals with autism may experience difficulties with social interaction and communication. This may include challenges with things like understanding the punchline of a joke. These challenges in autism have led to the hypothesis that there may be differences in the social brain network in autism. However, despite documented behavioral differences in social tasks, there is conflicting brain imaging evidence for whether differences exist between people with and without autism in the social brain network.

Examples such as that of autism imply that the reality of the social brain is probably much more complex than the story painted here. It is likely that social interaction calls upon many different parts of the brain, even beyond those that we have termed the “social brain,” that must work in concert to support this highly complex set of behaviors. These include regions of the brain important for listening, seeing, speaking, and moving. In addition, it’s important to remember that the social brain and regions that make it up do not stand alone. Regions of the social brain also play an intimate role in language, humor, and other cognitive processes.

“Knock Knock”
“Who’s there?”
“The Social Brain”
“The Social Brain, who?”
“I just told you…didn’t you read what I wrote?”

Anila D’Mello earned her bachelor’s degree in psychology from Georgetown University in 2012, and went on to receive her PhD in Behavior, Cognition, and Neuroscience from American University in 2017. She joined the Gabrieli lab as a postdoc in 2017 and studies the neural correlates of social communication in autism.

_____

Do you have a question for The Brain? Ask it here.

Can I rewire my brain?

As part of our Ask the Brain series, Halie Olson, a graduate student in the labs of John Gabrieli and Rebecca Saxe, pens her answer to the question, “Can I rewire my brain?”

_____

Yes, kind of, sometimes – it all depends on what you mean by “rewiring” the brain.

Halie Olson, a graduate student in the Gabrieli and Saxe labs.

If you’re asking whether you can remove all memories of your ex from your head, then no. (That’s probably for the best – just watch Eternal Sunshine of the Spotless Mind.) However, if you’re asking whether you can teach a dog new tricks – tricks that have a physical implementation in the brain – then yes.

To embrace the analogy that “rewiring” alludes to, let’s imagine you live in an old house with outlets in less-than-optimal locations. You really want your brand-new TV to be plugged in on the far side of the living room, but there is no outlet to be found. So you call up your electrician, she pops over, and moves some wires around in the living room wall to give you a new outlet. No sweat!

Local changes in neural connectivity happen throughout the lifespan. With over 100 billion neurons and 100 trillion connections – or synapses – between these neurons in the adult human brain, it is unsurprising that some pathways end up being more important than others. When we learn something new, the connections between relevant neurons communicating with each other are strengthened. To paraphrase Donald Hebb, one of the most influential psychologists of the twentieth century, “neurons that fire together, wire together” – by forming new synapses or more efficiently connecting the ones that are already there. This ability to rewire neural connections at a local level is a key feature of the brain, enabling us to tailor our neural infrastructure to our needs.
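For readers who like to see the idea in miniature, here is a toy sketch of a Hebbian update in Python. The activity values and learning rate are made up for illustration, and real synaptic plasticity is far richer than this simple rule.

```python
# Toy sketch of a Hebbian update: the connection between two neurons is
# strengthened whenever they are active at the same time.
learning_rate = 0.1
weight = 0.2  # strength of the synapse from neuron A to neuron B

# Activity over a few moments in time (1 = firing, 0 = silent)
pre_activity  = [1, 0, 1, 1, 0]   # neuron A
post_activity = [1, 0, 1, 0, 0]   # neuron B

for pre, post in zip(pre_activity, post_activity):
    # "Neurons that fire together, wire together"
    weight += learning_rate * pre * post

print(f"synaptic weight after co-activity: {weight:.2f}")
```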

Plasticity in our brain allows us to learn, adjust, and thrive in our environments.

We can also see this plasticity in the brain at a larger scale. My favorite example of “rewiring” in the brain is when children learn to read. Our brains did not evolve to enable us to read – there is no built-in “reading region” that magically comes online when a child enters school. However, if you stick a proficient reader in an MRI scanner, you will see a region in the left lateral occipitotemporal sulcus (that is, the back bottom left of your cortex) that is particularly active when you read written text. Before children learn to read, this region – known as the visual word form area – is not exceptionally interested in words, but as children get acquainted with written language and start connecting letters with sounds, it becomes selective for familiar written language – no matter the font, CaPItaLIZation, or size.

Now, let’s say that you wake up in the middle of the night with a desire to move your oven and stovetop from the kitchen into your swanky new living room with the TV. You call up your electrician – she tells you this is impossible, and to stop calling her in the middle of the night.

Similarly, your brain comes with a particular infrastructure – a floorplan, let’s call it – that cannot be easily adjusted when you are an adult. Large lesions tend to have large consequences. For instance, an adult who suffers a serious stroke in their left hemisphere will likely struggle with language, a condition called aphasia. Young children’s brains, on the other hand, can sometimes rewire in profound ways. An entire half of the brain can be damaged early on with minimal functional consequences. So if you’re going for a remodel? Better do it really early.

Plasticity in our brain allows us to learn, adjust, and thrive in our environments. It also gives neuroscientists like me something to study – since clearly I would fail as an electrician.

Halie Olson earned her bachelor’s degree in neurobiology from Harvard College in 2017. She is currently a graduate student in MIT’s Department of Brain and Cognitive Sciences working with John Gabrieli and Rebecca Saxe. She studies how early life experiences and environments impact brain development, particularly in the context of reading and language, and what this means for children’s educational outcomes.

_____

Do you have a question for The Brain? Ask it here.