Word Play

Ev Fedorenko uses the widely translated book “Alice in Wonderland” to test brain responses to different languages.

Language is a uniquely human ability that allows us to build vibrant pictures of non-existent places (think Wonderland or Westeros). How does the brain build mental worlds from words? Can machines do the same? Can we recover this ability after brain injury? These questions require an understanding of how the brain processes language, a fascination for Ev Fedorenko.

“I’ve always been interested in language. Early on, I wanted to found a company that teaches kids languages that share structure — Spanish, French, Italian — in one go,” says Fedorenko, an associate investigator at the McGovern Institute and an assistant professor in brain and cognitive sciences at MIT.

Her road to understanding how thoughts, ideas, emotions, and meaning can be delivered through sound and words became clear when she realized that language was accessible through cognitive neuroscience.

Early on, Fedorenko made a seminal finding that undermined dominant theories of the time. Scientists believed a single network was extracting meaning from all we experience: language, music, math, etc. Evolving separate networks for these functions seemed unlikely, as these capabilities arose recently in human evolution.

Language Regions
Ev Fedorenko has found that language regions of the brain (shown in teal) are sensitive to both word meaning and sentence structure. Image: Ev Fedorenko

But when Fedorenko examined brain activity in subjects while they read or heard sentences in the MRI, she found a network of brain regions that is indeed specialized for language.

“A lot of brain areas, like motor and social systems, were already in place when language emerged during human evolution,” explains Fedorenko. “In some sense, the brain seemed fully occupied. But rather than co-opt these existing systems, the evolution of language in humans involved language carving out specific brain regions.”

Different aspects of language recruit brain regions across the left hemisphere, including Broca’s area and portions of the temporal lobe. Many believe that certain regions are involved in processing word meaning while others unpack the rules of language. Fedorenko and colleagues have, however, shown that the entire language network is selectively engaged in linguistic tasks, processing both the rules (syntax) and meaning (semantics) of language in the same brain areas.

Semantic Argument

Fedorenko’s lab even challenges the prevailing view that syntax is core to language processing. By gradually degrading sentence structure through local word swaps (see figure), they found that language regions still respond strongly to these degraded sentences, deciphering meaning from them even as syntax, or combinatorial rules, disappears.

The Fedorenko lab has shown that the brain finds meaning in a sentence, even when “local” words are swapped (2, 3). But when clusters of neighboring words are scrambled (4), the brain struggles to find its meaning.
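As a rough illustration of the manipulation described above, local word swaps can be simulated by exchanging adjacent words, which gradually destroys global syntax while leaving most local word combinations intact. This is only a sketch: the function name and swap procedure below are illustrative assumptions, not the lab’s actual stimulus-generation code.

```python
import random

def degrade_sentence(words, n_swaps, seed=0):
    """Degrade syntax by swapping adjacent ("local") word pairs.

    Each swap exchanges a randomly chosen word with its right-hand
    neighbor, so nearby words still co-occur even as the global
    sentence structure falls apart.
    """
    rng = random.Random(seed)
    words = list(words)  # copy so the input list is untouched
    for _ in range(n_swaps):
        i = rng.randrange(len(words) - 1)
        words[i], words[i + 1] = words[i + 1], words[i]
    return words

sentence = "the dog chased the ball across the yard".split()
print(" ".join(degrade_sentence(sentence, n_swaps=3)))
```

With a small number of swaps the result stays locally interpretable; scrambling whole clusters of neighboring words (condition 4 in the figure) is what finally defeats the language network.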

“A lot of focus in language research has been on structure-building, or building a type of hierarchical graph of the words in a sentence. But actually the language system seems optimized and driven to find rich, representational meaning in a string of words processed together,” explains Fedorenko.

Computing Language

When asked about emerging areas of research, Fedorenko points to the data structures and algorithms underlying linguistic processing. Modern computational models can perform sophisticated tasks, including translation, ever more effectively. Consider Google Translate. A decade ago, the system translated one word at a time, with laughable results. Now, by treating words as context for one another, the latest artificial translation systems perform far more accurately. Understanding how they resolve meaning could be very revealing.

“Maybe we can link these models to human neural data to both get insights about linguistic computations in the human brain, and maybe help improve artificial systems by making them more human-like,” says Fedorenko.

She is also trying to understand how the system breaks down, how it can over-perform, and even more philosophical questions. Can a person who loses language abilities (with aphasia, for example) recover them? This is an especially relevant question given that the language-processing network occupies such specific brain regions. How are some unique people able to understand 10, 15, or even more languages? Do we need words to have thoughts?

Using a battery of approaches, Fedorenko seems poised to answer some of these questions.

New method visualizes groups of neurons as they compute

Using a fluorescent probe that lights up when brain cells are electrically active, MIT and Boston University researchers have shown that they can image the activity of many neurons at once in the brains of mice.


This technique, which can be performed using a simple light microscope, could allow neuroscientists to visualize the activity of circuits within the brain and link them to specific behaviors, says Edward Boyden, the Y. Eva Tan Professor in Neurotechnology and a professor of biological engineering and of brain and cognitive sciences at MIT.

“If you want to study a behavior, or a disease, you need to image the activity of populations of neurons because they work together in a network,” says Boyden, who is also a member of MIT’s McGovern Institute for Brain Research, Media Lab, and Koch Institute for Integrative Cancer Research.

Using this voltage-sensing molecule, the researchers showed that they could record electrical activity from many more neurons than has been possible with any existing, fully genetically encoded, fluorescent voltage probe.

Boyden and Xue Han, an associate professor of biomedical engineering at Boston University, are the senior authors of the study, which appears in the Oct. 9 online edition of Nature. The lead authors of the paper are MIT postdoc Kiryl Piatkevich, BU graduate student Seth Bensussen, and BU research scientist Hua-an Tseng.

Seeing connections

Neurons compute using rapid electrical impulses, which underlie our thoughts, behavior, and perception of the world. Traditional methods for measuring this electrical activity require inserting an electrode into the brain, a process that is labor-intensive and usually allows researchers to record from only one neuron at a time. Multielectrode arrays allow the monitoring of electrical activity from many neurons at once, but they don’t sample densely enough to get all the neurons within a given volume.  Calcium imaging does allow such dense sampling, but it measures calcium, an indirect and slow measure of neural electrical activity.
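The gap between a fast, direct measure and a slow, indirect one can be sketched with a toy simulation. The time constants below are illustrative assumptions, not measurements from the study: the same spike train, convolved with a slow calcium-like decay, blurs rapid events that a fast voltage-like decay keeps distinct.

```python
import numpy as np

dt = 0.001                            # 1 ms resolution
t = np.arange(0, 1, dt)
spikes = np.zeros_like(t)
spikes[[100, 120, 140, 600]] = 1.0    # three rapid spikes, then one later

def indicator_response(spikes, tau):
    """Convolve a spike train with an exponential decay of time constant tau."""
    kernel = np.exp(-np.arange(0, 5 * tau, dt) / tau)
    return np.convolve(spikes, kernel)[: len(spikes)]

voltage = indicator_response(spikes, tau=0.002)  # ~2 ms: spikes stay distinct
calcium = indicator_response(spikes, tau=0.500)  # ~500 ms: spikes blur together

# The fast trace returns to near baseline between spikes; the slow one does not.
print(voltage[110] < 0.5, calcium[110] > 0.5)
```

Ten milliseconds after a spike, the fast trace has already decayed to near zero while the slow trace has barely moved, which is why calcium signals cannot resolve millisecond-scale electrical dynamics.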

In 2018, MIT researchers developed a light-sensitive protein that can be embedded into neuron membranes, where it emits a fluorescent signal that indicates how much voltage a particular cell is experiencing. Image courtesy of the researchers

In 2018, Boyden’s team developed an alternative way to monitor electrical activity by labeling neurons with a fluorescent probe. Using a technique known as directed protein evolution, his group engineered a molecule called Archon1 that can be genetically inserted into neurons, where it becomes embedded in the cell membrane. When a neuron’s electrical activity increases, the molecule becomes brighter, and this fluorescence can be seen with a standard light microscope.

In the 2018 paper, Boyden and his colleagues showed that they could use the molecule to image electrical activity in the brains of transparent worms and zebrafish embryos, and also in mouse brain slices. In the new study, they wanted to try to use it in living, awake mice as they engaged in a specific behavior.

To do that, the researchers had to modify the probe so that it would go to a subregion of the neuron membrane. They found that when the molecule inserts itself throughout the entire cell membrane, the resulting images are blurry because the axons and dendrites that extend from neurons also fluoresce. To overcome that, the researchers attached a small peptide that guides the probe specifically to membranes of the cell bodies of neurons. They called this modified protein SomArchon.

“With SomArchon, you can see each cell as a distinct sphere,” Boyden says. “Rather than having one cell’s light blurring all its neighbors, each cell can speak by itself loudly and clearly, uncontaminated by its neighbors.”

The researchers used this probe to image activity in a part of the brain called the striatum, which is involved in planning movement, as mice ran on a ball. They were able to monitor activity in several neurons simultaneously and correlate each one’s activity with the mice’s movement. Some neurons’ activity went up when the mice were running, some went down, and others showed no significant change.

“Over the years, my lab has tried many different versions of voltage sensors, and none of them have worked in living mammalian brains until this one,” Han says.

Using this fluorescent probe, the researchers were able to obtain measurements similar to those recorded by an electrical probe, which can pick up activity on a very rapid timescale. This makes the measurements more informative than existing techniques such as imaging calcium, which neuroscientists often use as a proxy for electrical activity.

“We want to record electrical activity on a millisecond timescale,” Han says. “The timescale and activity patterns that we get from calcium imaging are very different. We really don’t know exactly how these calcium changes are related to electrical dynamics.”

With the new voltage sensor, it is also possible to measure very small fluctuations in activity that occur even when a neuron is not firing a spike. This could help neuroscientists study how small fluctuations impact a neuron’s overall behavior, which has previously been very difficult in living brains, Han says.

Mapping circuits

The researchers also showed that this imaging technique can be combined with optogenetics — a technique developed by the Boyden lab and collaborators that allows researchers to turn neurons on and off with light by engineering them to express light-sensitive proteins. In this case, the researchers activated certain neurons with light and then measured the resulting electrical activity in these neurons.

This imaging technology could also be combined with expansion microscopy, a technique that Boyden’s lab developed to expand brain tissue before imaging it, making it easier to see the anatomical connections between neurons in high resolution.

“One of my dream experiments is to image all the activity in a brain, and then use expansion microscopy to find the wiring between those neurons,” Boyden says. “Then can we predict how neural computations emerge from the wiring?”

Such wiring diagrams could allow researchers to pinpoint circuit abnormalities that underlie brain disorders, and may also help researchers to design artificial intelligence that more closely mimics the human brain, Boyden says.

The MIT portion of the research was funded by Edward and Kay Poitras, the National Institutes of Health, including a Director’s Pioneer Award, Charles Hieken, John Doerr, the National Science Foundation, the HHMI-Simons Faculty Scholars Program, the Human Frontier Science Program, and the U.S. Army Research Office.

Controlling our internal world

Olympic skaters can launch, perform multiple aerial turns, and land gracefully, anticipating imperfections and reacting quickly to correct course. To make such elegant movements, the brain must have an internal model of the body to control, predict, and make almost instantaneous adjustments to motor commands. So-called “internal models” are a fundamental concept in engineering and have long been suggested to underlie control of movement by the brain, but what about processes that occur in the absence of movement, such as contemplation, anticipation, planning?

Using a novel combination of task design, data analysis, and modeling, MIT neuroscientist Mehrdad Jazayeri and colleagues now provide compelling evidence that the core elements of an internal model also control purely mental processes in a study published in Nature Neuroscience.

“During my thesis I realized that I’m interested, not so much in how our senses react to sensory inputs, but instead in how my internal model of the world helps me make sense of those inputs,” says Jazayeri, the Robert A. Swanson Career Development Professor of Life Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Indeed, understanding the building blocks exerting control of such mental processes could help to paint a better picture of disruptions in mental disorders, such as schizophrenia.

Internal models for mental processes

Scientists working on the motor system have long theorized that the brain overcomes noisy and slow signals using an accurate internal model of the body. This internal model serves three critical functions: it provides motor commands to control movement, simulates upcoming movement to overcome delays, and uses feedback to make real-time adjustments.

“The framework that we currently use to think about how the brain controls our actions is one that we have borrowed from robotics: we use controllers, simulators, and sensory measurements to control machines and train operators,” explains Reza Shadmehr, a professor at the Johns Hopkins School of Medicine who was not involved with the study. “That framework has largely influenced how we imagine our brain controlling our movements.”

Jazayeri and colleagues wondered whether the same framework might explain the control principles governing mental states in the absence of any movement.

“When we’re simply sitting, thoughts and images run through our heads and, fundamental to intellect, we can control them,” explains lead author Seth Egger, a former postdoctoral associate in the Jazayeri lab and now at Duke University.

“We wanted to find out what’s happening between our ears when we are engaged in thinking,” says Egger.

Imagine, for example, a sign language interpreter keeping up with a fast speaker. To track speech accurately, the interpreter continuously anticipates where the speech is going, rapidly adjusting when the actual words deviate from the prediction. The interpreter could be using an internal model to anticipate upcoming words, and using feedback to make adjustments on the fly.

1-2-3…Go

Hypothesizing about how the components of an internal model function in scenarios such as translation is one thing. Cleanly measuring and proving the existence of these elements is much more complicated as the activity of the controller, simulator, and feedback are intertwined. To tackle this problem, Jazayeri and colleagues devised a clever task with primate models in which the controller, simulator, and feedback act at distinct times.

In this task, called “1-2-3-Go,” the animal sees three consecutive flashes (1, 2, and 3) that form a regular beat, and learns to make an eye movement (Go) when it anticipates the fourth flash should occur. During the task, researchers measured neural activity in a region of the frontal cortex they had previously linked to the timing of movement.
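The timing computation the task demands can be sketched in a few lines. This is an illustrative model, not the analysis used in the study: estimate the beat from the measured inter-flash intervals, then extrapolate one beat past the last flash.

```python
def predict_go_time(flash_times):
    """Estimate the beat from the inter-flash intervals and predict
    when the fourth event ('Go') should occur."""
    intervals = [t1 - t0 for t0, t1 in zip(flash_times, flash_times[1:])]
    beat = sum(intervals) / len(intervals)  # average inter-flash interval
    return flash_times[-1] + beat           # one beat past the last flash

# Flashes at 0, 0.5, and 1.0 seconds imply "Go" at 1.5 seconds.
print(predict_go_time([0.0, 0.5, 1.0]))  # -> 1.5
```

In the framework of the paper, this prediction step is the simulator’s job, while the controller triggers the eye movement and each actual flash supplies feedback to correct the estimate.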

Jazayeri and colleagues had clear predictions about when the controller would act (between the third flash and “Go”) and when feedback would be engaged (with each flash of light). The key surprise came when the researchers saw evidence of the simulator anticipating the third flash. This unexpected neural activity had dynamics resembling the controller’s, but was not associated with a response. In other words, the researchers uncovered a covert plan that functions as the simulator, thus revealing all three elements of an internal model for a mental process: the planning and anticipation of “Go” in the “1-2-3-Go” sequence.

“Jazayeri’s work is important because it demonstrates how to study mental simulation in animals,” explains Shadmehr, “and where in the brain that simulation is taking place.”

Having found how and where to measure an internal model in action, Jazayeri and colleagues now plan to ask whether these control strategies can explain how primates effortlessly generalize their knowledge from one behavioral context to another. For example, how does an interpreter rapidly adjust when someone with widely different speech habits takes the podium? This line of investigation promises to shed light on high-level mental capacities of the primate brain that simpler animals seem to lack, that go awry in mental disorders, and that designers of artificial intelligence systems so fondly seek.

What is the social brain?

As part of our Ask the Brain series, Anila D’Mello, a postdoctoral fellow in John Gabrieli’s lab, answers the question, “What is the social brain?”

_____

Anila D’Mello is the Simons Center for the Social Brain Postdoctoral Fellow in John Gabrieli’s lab at the McGovern Institute.

“Knock Knock.”
“Who’s there?”
“The Social Brain.”
“The Social Brain, who?”

Call and response jokes, like the “Knock Knock” joke above, leverage our common understanding of how a social interaction typically proceeds. Joke telling allows us to interact socially with others based on our shared experiences and understanding of the world. But where do these abilities “live” in the brain and how does the social brain develop?

Neuroimaging and lesion studies have identified a network of brain regions that support social interaction, including the ability to understand and partake in jokes – we refer to this as the “social brain.” This social brain network is made up of multiple regions throughout the brain that together support complex social interactions. Within this network, each region likely contributes to a specific type of social processing. The right temporo-parietal junction, for instance, is important for thinking about another person’s mental state, whereas the amygdala is important for the interpretation of emotional facial expressions and fear processing. Damage to these brain regions can have striking effects on social behaviors. One recent study even found that individuals with bigger amygdala volumes had larger and more complex social networks!

Though social interaction is such a fundamental human trait, we aren’t born with a prewired social brain.

Much of our social ability is grown and honed over time through repeated social interactions. Brain networks that support social interaction continue to specialize into adulthood. Neuroimaging work suggests that though newborn infants may have all the right brain parts to support social interaction, these regions may not yet be specialized or connected in the right way. This means that early experiences and environments can have large influences on the social brain. For instance, social neglect, especially very early in development, can have negative impacts on social behaviors and on how the social brain is wired. One prominent example is that of children raised in orphanages or institutions, who are sometimes faced with limited adult interaction or access to language. Children raised in these conditions are more likely to have social challenges including difficulties forming attachments. Prolonged lack of social stimulation also alters the social brain in these children resulting in changes in amygdala size and connections between social brain regions.

The social brain is not just a result of our environment. Genetics and biology also contribute to the social brain in ways we don’t yet fully understand. For example, individuals with autism / autistic individuals may experience difficulties with social interaction and communication. This may include challenges with things like understanding the punchline of a joke. These challenges in autism have led to the hypothesis that there may be differences in the social brain network in autism. However, despite documented behavioral differences in social tasks, there is conflicting brain imaging evidence for whether differences exist between people with and without autism in the social brain network.

Examples such as that of autism imply that the reality of the social brain is probably much more complex than the story painted here. It is likely that social interaction calls upon many different parts of the brain, even beyond those that we have termed the “social brain,” that must work in concert to support this highly complex set of behaviors. These include regions of the brain important for listening, seeing, speaking, and moving. In addition, it’s important to remember that the social brain and regions that make it up do not stand alone. Regions of the social brain also play an intimate role in language, humor, and other cognitive processes.

“Knock Knock”
“Who’s there?”
“The Social Brain”
“The Social Brain, who?”
“I just told you…didn’t you read what I wrote?”

Anila D’Mello earned her bachelor’s degree in psychology from Georgetown University in 2012, and went on to receive her PhD in Behavior, Cognition, and Neuroscience from American University in 2017. She joined the Gabrieli lab as a postdoc in 2017 and studies the neural correlates of social communication in autism.

_____

Do you have a question for The Brain? Ask it here.

Can I rewire my brain?

As part of our Ask the Brain series, Halie Olson, a graduate student in the labs of John Gabrieli and Rebecca Saxe, pens her answer to the question, “Can I rewire my brain?”

_____

Yes, kind of, sometimes – it all depends on what you mean by “rewiring” the brain.

Halie Olson, a graduate student in the Gabrieli and Saxe labs.

If you’re asking whether you can remove all memories of your ex from your head, then no. (That’s probably for the best – just watch Eternal Sunshine of the Spotless Mind.) However, if you’re asking whether you can teach a dog new tricks – that have a physical implementation in the brain – then yes.

To embrace the analogy that “rewiring” alludes to, let’s imagine you live in an old house with outlets in less-than-optimal locations. You really want your brand-new TV to be plugged in on the far side of the living room, but there is no outlet to be found. So you call up your electrician, she pops over, and moves some wires around in the living room wall to give you a new outlet. No sweat!

Local changes in neural connectivity happen throughout the lifespan. With over 100 billion neurons and 100 trillion connections – or synapses – between these neurons in the adult human brain, it is unsurprising that some pathways end up being more important than others. When we learn something new, the connections between relevant neurons communicating with each other are strengthened. To paraphrase Donald Hebb, one of the most influential psychologists of the twentieth century, “neurons that fire together, wire together” – by forming new synapses or more efficiently connecting the ones that are already there. This ability to rewire neural connections at a local level is a key feature of the brain, enabling us to tailor our neural infrastructure to our needs.
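Hebb’s rule can be written as a one-line weight update. The sketch below is a minimal, textbook form of Hebbian learning; the learning rate, array shapes, and function name are illustrative assumptions, not a model of any specific brain circuit.

```python
import numpy as np

def hebbian_update(weights, pre, post, lr=0.01):
    """'Neurons that fire together, wire together': strengthen
    connection w[i, j] in proportion to the joint activity of
    presynaptic neuron j and postsynaptic neuron i."""
    return weights + lr * np.outer(post, pre)

w = np.zeros((3, 3))                 # synaptic weights, initially absent
pre = np.array([1.0, 0.0, 1.0])      # presynaptic firing pattern
post = np.array([1.0, 1.0, 0.0])     # postsynaptic firing pattern
w = hebbian_update(w, pre, post)

# Only synapses whose pre- and postsynaptic partners both fired grow.
print(w)
```

After one update, weights grow only where both partners were active, which is how repeated co-activation carves preferred pathways into the network.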

Plasticity in our brain allows us to learn, adjust, and thrive in our environments.

We can also see this plasticity in the brain at a larger scale. My favorite example of “rewiring” in the brain is when children learn to read. Our brains did not evolve to enable us to read – there is no built-in “reading region” that magically comes online when a child enters school. However, if you stick a proficient reader in an MRI scanner, you will see a region in the left lateral occipitotemporal sulcus (that is, the back bottom left of your cortex) that is particularly active when you read written text. Before children learn to read, this region – known as the visual word form area – is not exceptionally interested in words, but as children get acquainted with written language and start connecting letters with sounds, it becomes selective for familiar written language – no matter the font, CaPItaLIZation, or size.

Now, let’s say that you wake up in the middle of the night with a desire to move your oven and stovetop from the kitchen into your swanky new living room with the TV. You call up your electrician – she tells you this is impossible, and to stop calling her in the middle of the night.

Similarly, your brain comes with a particular infrastructure – a floorplan, let’s call it – that cannot be easily adjusted when you are an adult. Large lesions tend to have large consequences. For instance, an adult who suffers a serious stroke in their left hemisphere will likely struggle with language, a condition called aphasia. Young children’s brains, on the other hand, can sometimes rewire in profound ways. An entire half of the brain can be damaged early on with minimal functional consequences. So if you’re going for a remodel? Better do it really early.

Plasticity in our brain allows us to learn, adjust, and thrive in our environments. It also gives neuroscientists like me something to study – since clearly I would fail as an electrician.

Halie Olson earned her bachelor’s degree in neurobiology from Harvard College in 2017. She is currently a graduate student in MIT’s Department of Brain and Cognitive Sciences working with John Gabrieli and Rebecca Saxe. She studies how early life experiences and environments impact brain development, particularly in the context of reading and language, and what this means for children’s educational outcomes.

_____

Do you have a question for The Brain? Ask it here.

Hearing through the clatter

In a busy coffee shop, our eardrums are inundated with sound waves – people chatting, the clatter of cups, music playing – yet our brains somehow manage to untangle relevant sounds, like a barista announcing that our “coffee is ready,” from insignificant noise. A new McGovern Institute study sheds light on how the brain accomplishes the task of extracting meaningful sounds from background noise – findings that could one day help to build artificial hearing systems and aid development of targeted hearing prosthetics.

“These findings reveal a neural correlate of our ability to listen in noise, and at the same time demonstrate functional differentiation between different stages of auditory processing in the cortex,” explains Josh McDermott, an associate professor of brain and cognitive sciences at MIT, a member of the McGovern Institute and the Center for Brains, Minds and Machines, and the senior author of the study.

The auditory cortex, a part of the brain that responds to sound, has long been known to have distinct anatomical subregions, but the role these areas play in auditory processing has remained a mystery. In their study published today in Nature Communications, McDermott and former graduate student Alex Kell discovered that these subregions respond differently to the presence of background noise, suggesting that auditory processing occurs in steps that progressively home in on and isolate a sound of interest.

Background check

Previous studies have shown that the primary and non-primary subregions of the auditory cortex respond to sound with different dynamics, but these studies were largely based on brain activity in response to speech or simple synthetic sounds (such as tones and clicks). Little was known about how these regions might work to subserve everyday auditory behavior.

To test these subregions under more realistic conditions, McDermott and Kell, who is now a postdoctoral researcher at Columbia University, assessed changes in human brain activity while subjects listened to natural sounds with and without background noise.

While lying in an MRI scanner, subjects listened to 30 different natural sounds, ranging from meowing cats to ringing phones, that were presented alone or embedded in real-world background noise such as heavy rain.

“When I started studying audition,” explains Kell, “I started just sitting around in my day-to-day life, just listening, and was astonished at the constant background noise that seemed to usually be filtered out by default. Most of these noises tended to be pretty stable over time, suggesting we could experimentally separate them. The project flowed from there.”

To their surprise, Kell and McDermott found that the primary and non-primary regions of the auditory cortex responded differently to natural sound depending upon whether background noise was present.

brain regions responding to sound
Primary auditory cortex (outlined in white) responses change (blue) when background noise is present, whereas non-primary activity is robust to background noise (yellow). Image: Alex Kell

They found that activity of the primary auditory cortex is altered when background noise is present, suggesting that this region has not yet differentiated meaningful sounds from background noise. Non-primary regions, however, respond similarly to natural sounds irrespective of whether noise is present, suggesting that the cortical signals generated by sound are transformed or “cleaned up” to remove background noise by the time they reach the non-primary auditory cortex.

“We were surprised by how big the difference was between primary and non-primary areas,” explained Kell, “so we ran a bunch more subjects but kept seeing the same thing. We had a ton of questions about what might be responsible for this difference, and that’s why we ended up running all these follow-up experiments.”

A general principle

Kell and McDermott went on to test whether these responses were specific to particular sounds, and discovered that the above effect remained stable no matter the source or type of sound activity. Music, speech, or a squeaky toy, all activated the non-primary cortex region similarly, whether or not background noise was present.

The authors also tested whether attention is relevant. Even when the researchers sneakily distracted subjects with a visual task in the scanner, the cortical subregions responded to meaningful sound and background noise in the same way, showing that attention is not driving this aspect of sound processing. In other words, even when we are focused on reading a book, our brain is diligently sorting the sound of our meowing cat from the patter of heavy rain outside.

Future directions

The McDermott lab is now building computational models of the so-called “noise robustness” found in the Nature Communications study and Kell is pursuing a finer-grained understanding of sound processing in his postdoctoral work at Columbia, by exploring the neural circuit mechanisms underlying this phenomenon.

By gaining a deeper understanding of how the brain processes sound, the researchers hope their work will contribute to improved diagnosis and treatment of hearing dysfunction. Such research could help to reveal the origins of listening difficulties that accompany developmental disorders or age-related hearing loss. For instance, if hearing loss results from dysfunction in sensory processing, this could manifest as abnormal noise robustness in the auditory cortex. Normal noise robustness might instead suggest that there are impairments elsewhere in the brain, for example a breakdown in higher executive function.

“In the future,” McDermott says, “we hope these noninvasive measures of auditory function may become valuable tools for clinical assessment.”

Call for Nominations: 2020 Scolnick Prize in Neuroscience

The McGovern Institute is now accepting nominations for the 2020 Scolnick Prize in Neuroscience, which recognizes an outstanding discovery or significant advance in any field of neuroscience. The nomination deadline is December 15, 2019.

About the Scolnick Prize

The prize is named in honor of Edward M. Scolnick, who stepped down as president of Merck Research Laboratories in December 2002 after holding Merck’s top research post for 17 years. The prize, which is endowed through a gift from Merck to the McGovern Institute, consists of a $150,000 award, plus an inscribed gift. The recipient presents a public lecture at MIT, hosted by the McGovern Institute and followed by a dinner in Spring 2020.

Nomination Process

Candidates for the award must be nominated by individuals affiliated with universities, hospitals, medical schools, or research institutes, with a background in neuroscience. Self-nomination is not permitted. Each nomination should include a biosketch or CV of the nominee and a letter of nomination with a summary and analysis of the nominee’s major contributions to the field of neuroscience. Up to two representative reprints will be accepted. The winner, selected by a committee appointed by the director of the McGovern Institute, will be announced in January 2020.

More information about the Scolnick Prize, including details about the nomination process, selection committee, and past Scolnick Prize recipients, can be found on our website.


Finding the brain’s compass

The world is constantly bombarding our senses with information, but the ways in which our brain extracts meaning from this information remain elusive. How do neurons transform raw visual input into a mental representation of an object – like a chair or a dog?

In work published today in Nature Neuroscience, MIT neuroscientists have identified a brain circuit in mice that distills “high-dimensional” complex information about the environment into a simple abstract object in the brain.

“There are no degree markings in the external world; our current head direction has to be extracted, computed, and estimated by the brain,” explains Ila Fiete, an associate member of the McGovern Institute and senior author of the paper. “The approaches we used allowed us to demonstrate the emergence of a low-dimensional concept, essentially an abstract compass in the brain.”

This abstract compass, according to the researchers, is a one-dimensional ring that represents the current direction of the head relative to the external world.

Schooling fish

Trying to show that a data cloud has a simple shape, like a ring, is a bit like watching a school of fish. By tracking one or two sardines, you might not see a pattern. But if you could map all of the sardines, transforming the noisy dataset into points that represent the positions of the whole school over time – where each fish is relative to its neighbors – a pattern would emerge: a ring, a simple shape formed by the activity of hundreds of individual fish.

Fiete, who is also an associate professor in MIT’s Department of Brain and Cognitive Sciences, used a similar approach, called topological modeling, to transform the activity of large populations of noisy neurons into a data cloud the shape of a ring.
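To make the idea concrete, here is a minimal sketch of how a one-dimensional ring can be read out of high-dimensional population activity. This is not the study's actual analysis (which applied topological modeling to recorded ADN data); it simulates cosine-tuned head-direction neurons and uses a simple population-vector readout, with all tuning parameters invented for illustration:

```python
import math
import random

random.seed(0)

N = 100  # simulated head-direction neurons
preferred = [2 * math.pi * i / N for i in range(N)]  # preferred directions

def population_response(theta, noise=0.1):
    """Noisy, rectified cosine-tuned firing rates for head direction theta."""
    return [max(0.0, math.cos(theta - p) + random.gauss(0, noise))
            for p in preferred]

def decode(rates):
    """Project the N-dimensional activity onto a 2-D plane.

    As theta varies, the projected points trace out a ring, and
    atan2 reads the head direction back off that ring.
    """
    x = sum(r * math.cos(p) for r, p in zip(rates, preferred))
    y = sum(r * math.sin(p) for r, p in zip(rates, preferred))
    return math.atan2(y, x)

errors = []
for k in range(36):
    theta = 2 * math.pi * k / 36 - math.pi  # sweep the full circle
    est = decode(population_response(theta))
    # wrap the angular error into [-pi, pi]
    err = math.atan2(math.sin(est - theta), math.cos(est - theta))
    errors.append(abs(err))

print(max(errors))  # small: the low-dimensional ring encodes head direction
```

Even though each simulated neuron is noisy, the cloud of 100-dimensional activity vectors lies near a one-dimensional ring, and the head direction can be decoded from position along it.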

Simple and persistent ring

Previous work in fly brains revealed a physical ellipsoid ring of neurons representing changes in the direction of the fly’s head, and researchers suspected that such a system might also exist in mammals.

In this new mouse study, Fiete and her colleagues measured hours of neural activity from scores of neurons in the anterodorsal thalamic nucleus (ADN) – a region believed to play a role in spatial navigation – as the animals moved freely around their environment. They mapped how the neurons in the ADN circuit fired as the animal’s head changed direction.

Together these data points formed a cloud in the shape of a simple and persistent ring.

“In the absence of this ring,” Fiete explains, “we would be lost in the world.”

“This tells us a lot about how neural networks are organized in the brain,” explains Edvard Moser, director of the Kavli Institute for Systems Neuroscience in Norway, who was not involved in the study. “Past data have indirectly pointed towards such a ring-like organization, but only now has it been possible, with the right cell numbers and methods, to demonstrate it convincingly.”

Their method for characterizing the shape of the data cloud allowed Fiete and colleagues to determine which variable the circuit was devoted to representing, and to decode this variable over time, using only the neural responses.

“The animal’s doing really complicated stuff,” explains Fiete, “but this circuit is devoted to integrating the animal’s speed along a one-dimensional compass that encodes head direction. Without a manifold approach, which captures the whole state space, you wouldn’t know that this circuit of thousands of neurons is encoding only this one aspect of the complex behavior, and not encoding any other variables at the same time.”

Even during sleep, when the circuit is not being bombarded with external information, this circuit robustly traces out the same one-dimensional ring, as if dreaming of past head direction trajectories.

Further analysis revealed that the ring acts as an attractor: if neurons stray off the trajectory, they are drawn back to it, quickly correcting the system. This attractor property of the ring means that the representation of head direction in abstract space is reliably stable over time – a key requirement if we are to maintain a stable sense of where our head is relative to the world around us.
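The attractor property can likewise be illustrated with a toy model. The sketch below is a classic ring-attractor rate network in the spirit of Ben-Yishai-style models – not code from this study – with all weights and constants chosen purely for illustration. Activity settles into a bump on the ring, and a perturbation off the ring is pulled back to a nearby ring state:

```python
import math
import random

random.seed(1)

N = 60
pref = [2 * math.pi * i / N for i in range(N)]

# Recurrent weights: tuned excitation plus uniform inhibition
# (J0, J1 are illustrative constants, not fitted to data).
J0, J1 = -1.0, 3.0
W = [[(J0 + J1 * math.cos(pref[i] - pref[j])) / N for j in range(N)]
     for i in range(N)]

def step(r, dt=0.1):
    """One Euler step of the rate dynamics dr/dt = -r + relu(W r + bias)."""
    new_r = []
    for i in range(N):
        drive = sum(W[i][j] * r[j] for j in range(N)) + 1.0  # constant bias
        new_r.append(r[i] + dt * (-r[i] + max(0.0, drive)))
    return new_r

def bump_angle(r):
    """Read the bump's position on the ring via a population vector."""
    x = sum(ri * math.cos(p) for ri, p in zip(r, pref))
    y = sum(ri * math.sin(p) for ri, p in zip(r, pref))
    return math.atan2(y, x)

# From random activity, the network settles into a single bump: the "compass".
r = [random.random() for _ in range(N)]
for _ in range(300):
    r = step(r)
theta0 = bump_angle(r)

# Knock the activity off the ring; the dynamics pull it back.
perturbed = [ri + random.gauss(0, 0.3) for ri in r]
for _ in range(300):
    perturbed = step(perturbed)

drift = math.atan2(math.sin(bump_angle(perturbed) - theta0),
                   math.cos(bump_angle(perturbed) - theta0))
print(abs(drift))
```

After the perturbation relaxes, the decoded angle typically ends up within a fraction of a radian of where it started: the network has corrected the error while preserving its place on the ring, which is the hallmark of attractor stability.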

Shaping the future

Fiete’s work provides a first glimpse into how complex sensory information is distilled into a simple concept in the mind, and how that representation autonomously corrects errors, making it exquisitely stable.

But the implications of this study go beyond coding of head direction.

“Similar organization is probably present for other cognitive functions so the paper is likely to inspire numerous new studies,” says Moser.

Fiete sees these analyses and related studies carried out by colleagues at the Norwegian University of Science and Technology, Princeton University, the Weizmann Institute, and elsewhere as fundamental to the future of neural decoding studies.

With this approach, she explains, it is possible to extract abstract representations of the mind from the brain, potentially even thoughts and dreams.

“We’ve found that the brain deconstructs and represents complex things in the world with simple shapes,” explains Fiete. “Manifold-level analysis can help us to find those shapes, and they almost certainly exist beyond head direction circuits.”

Do thoughts have mass?

As part of our Ask the Brain series, we received the question, “Do thoughts have mass?” The following is a guest blog post by Michal De-Medonsa, technical associate and manager of the Jazayeri lab, who tapped into her background in philosophy to answer this intriguing question.

_____

Portrait of Michal De-Medonsa
Jazayeri lab manager (and philosopher) Michal De-Medonsa.

To answer the question, “Do thoughts have mass?” we must, like any good philosopher, define something that already has a definition – “thoughts.”

Logically, we can assert that thoughts are either metaphysical or physical (beyond that, we run out of options). If our definition of thought is metaphysical, it is safe to say that metaphysical thoughts do not have mass, since they are by definition not physical, and mass is a property of physical things. However, if we define a thought as a physical thing, it becomes a little trickier to determine whether or not it has mass.

A physical definition of thoughts falls into (at least) two subgroups – physical processes and physical parts. Take driving a car, for example. A parts definition describes the doors, motor, etc., and has mass. A process definition – the car being driven, the wheel turning, the car moving from point A to point B – does not have mass. The process of driving is a physical process that involves moving physical matter, but we wouldn’t say that the act of driving has mass. The car itself, however, is physical matter, and as any cyclist in the city of Boston is well aware – cars have mass. It’s clear that if we define a thought as a process, it does not have mass, and if we define a thought as physical parts, it does have mass – so, which one is it? In order to resolve our issue, we have to be incredibly precise with our definition. Is a thought a process or parts? That is, is a thought more like driving or more like a car?

In order to resolve our issue, we have to be incredibly precise with our definition of the word thought.

Both physical definitions (process and parts) have merit. For a parts definition, we can look at what is required for a thought – neurons, electrical signals, neurochemicals, etc. This type of definition becomes quite imprecise and limiting. It doesn’t seem too problematic to say that the neurons, neurochemicals, etc. are themselves the thought, but this style of definition starts to fall apart when we try to include all the parts involved (e.g. blood flow, connective tissue, outside stimuli). When we look at a face, the stimuli received by the visual cortex are part of the thought – is the face part of a thought? When we look at our phone, is the phone itself part of a thought? A parts definition either needs an arbitrary limit, or we end up having to include all possible parts involved in the thought, ending up with an incredibly convoluted and effectively useless definition.

A process definition is more versatile and precise, and it allows us to include all the physical parts in a more elegant way. We can now say that all the moving parts are included in the process without saying that they themselves are the thought. That is, we can say blood flow is included in the process without saying that blood flow itself is part of the thought. It doesn’t sound ridiculous to say that a phone is part of the thought process. If we subscribe to the parts definition, however, we’re forced to say that part of the mass of a thought comes from the mass of a phone. A process definition allows us to be precise without being convoluted, and allows us to include outside influences without committing to absurd definitions.

Typical of a philosophical endeavor, we’re left with more questions and no simple answer. However, we can walk away with three conclusions.

  1. A process definition of “thought” allows for elegance and the involvement of factors outside the “vacuum” of our physical body; however, we lose out on some function by not describing a thought in terms of its physical parts.
  2. The colloquial definition of “thought” breaks down once we invite a philosopher over to break it down, but this is to be expected – when we try to break something down, sometimes, it will break down. What we should be aware of is that if we want to use the word in a rigorous scientific framework, we need a rigorous scientific definition.
  3. Most importantly, it’s clear that we need to put a lot of work into defining exactly what we mean by “thought” – a job well suited to a scientifically-informed philosopher.

Michal De-Medonsa earned her bachelor’s degree in neuroscience and philosophy from Johns Hopkins University in 2012 and went on to receive her master’s degree in history and philosophy of science at the University of Pittsburgh in 2015. She joined the Jazayeri lab in 2018 as a lab manager/technician and spends most of her free time rock climbing, doing standup comedy, and woodworking at the MIT Hobby Shop. 

_____

Do you have a question for The Brain? Ask it here.

Ed Boyden wins premier Royal Society honor

Edward S. Boyden, the Y. Eva Tan Professor in Neurotechnology at MIT, has been awarded the 2019 Croonian Medal and Lecture by the Royal Society. Twenty-four medals and awards are announced by the Royal Society each year, honoring exceptional researchers who are making outstanding contributions to science.

“The Royal Society gives an array of medals and awards to scientists who have done exceptional, ground-breaking work,” explained Sir Venki Ramakrishnan, President of the Royal Society. “This year, it is again a pleasure to see these awards bestowed on scientists who have made such distinguished and far-reaching contributions in their fields. I congratulate and thank them for their efforts.”

Boyden wins the medal and lecture in recognition of his research that is expanding our understanding of the brain. This includes his critical role in the development of optogenetics, a technique for controlling brain activity with light, and his invention of expansion microscopy. Croonian Medal laureates include notable luminaries of science and neurobiology.

“It is a great honor to be selected to receive this medal, especially since it was also given to people such as Santiago Ramon y Cajal, the founder of modern neuroscience,” says Boyden. “This award reflects the great work of many fantastic students, postdocs, and collaborators who I’ve had the privilege to work with over the years.”

The award includes an invitation to deliver the premier British lecture in the biological sciences, given annually at the Royal Society in London. At the lecture, the winner is awarded a medal and a gift of £10,000. This announcement comes shortly after Boyden was co-awarded the Warren Alpert Prize for his role in developing optogenetics.

History of the Croonian Medal and Lecture

William Croone, pictured, envisioned the annual lectureship that became the Royal Society’s premier medal and lecture in the biological sciences.
William Croone, FRS Photo credit: Royal College of Physicians, London

The lectureship was conceived by William Croone FRS, one of the original Fellows of the Society based in London. Among the papers left on his death in 1684 were plans to endow two lectureships, one at the Royal Society and the other at the Royal College of Physicians. His widow later bequeathed the means to carry out the scheme. The lecture series began in 1738.


Ed Boyden holds the titles of Investigator, McGovern Institute; Y. Eva Tan Professor in Neurotechnology at MIT; Leader, Synthetic Neurobiology Group, MIT Media Lab; Professor, Biological Engineering, Brain and Cognitive Sciences, MIT Media Lab; Co-Director, MIT Center for Neurobiological Engineering; Member, MIT Center for Environmental Health Sciences, Computational and Systems Biology Initiative, and Koch Institute.