Tidying up deep neural networks

Visual art has found many ways of representing objects, from the ornate Baroque period to modernist simplicity. Artificial visual systems are somewhat analogous: from relatively simple beginnings inspired by key regions in the visual cortex, recent advances in performance have seen increasing complexity.

“Our overall goal has been to build an accurate, engineering-level model of the visual system, to ‘reverse engineer’ visual intelligence,” explains James DiCarlo, the head of MIT’s Department of Brain and Cognitive Sciences, an investigator in the McGovern Institute for Brain Research and the Center for Brains, Minds, and Machines (CBMM). “But very high-performing ANNs have started to drift away from brain architecture, with complex branching architectures that have no clear parallel in the brain.”

A new model from the DiCarlo lab has re-imposed a brain-like architecture on an object recognition network. The result is a shallow-network architecture with surprisingly high performance, indicating that we can simplify deeper, more baroque networks yet retain high performance in artificial learning systems.

“We’ve made two major advances,” explains graduate student Martin Schrimpf, who led the work with Jonas Kubilius at CBMM. “We’ve found a way of checking how well models match the brain, called Brain-Score, and developed a model, CORnet, that moves artificial object recognition, as well as machine learning architectures, forward.”

DiCarlo lab graduate student Martin Schrimpf in the lab. Photo: Kris Brewer

Back to the brain

Deep convolutional artificial neural networks were initially inspired by brain anatomy, and are the leading models in artificial object recognition. Training these feedforward systems to recognize objects in ImageNet, a large database of images, has vastly improved the performance of ANNs, but at the same time the networks have literally branched out, becoming increasingly complex with hundreds of layers. In contrast, the visual ventral stream, the series of cortical brain regions that unpacks object identity, contains just four key regions. In addition, ANNs are entirely feedforward, while the primate cortical visual system has densely interconnected wiring, in other words, recurrent connectivity. While primate-like object recognition capabilities can be captured by feedforward-only networks, recurrent wiring in the brain has long been suspected to be important, and was recently shown to be so in two DiCarlo lab papers led by Kar and Tang, respectively.

DiCarlo and colleagues have now developed CORnet-S, inspired by very complex, state-of-the-art neural networks. CORnet-S has four computational areas, analogous to cortical visual areas (V1, V2, V4, and IT). In addition, CORnet-S contains repeated, or recurrent, connections.

“We really pre-defined the layers in the ANN, defining V1, V2, and so on, and introduced feedback and repeated connections,” explains Schrimpf. “As a result, we ended up with fewer layers, and less ‘dead space’ that cannot be mapped to the brain. In short, a simpler network.”
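
The architecture Schrimpf describes can be sketched in code. Below is a loose, illustrative PyTorch sketch of a CORnet-style network, not the published CORnet-S: the mapping of blocks to V1, V2, V4, and IT follows the description above, but the layer sizes, normalization, and number of recurrent steps are placeholder assumptions.

```python
# Illustrative CORnet-style network (a sketch, not the published CORnet-S).
# Four areas mirror the ventral stream (V1, V2, V4, IT); each area is a small
# convolutional block whose output is fed back through the same weights for a
# few time steps, so depth-in-time substitutes for hundreds of distinct layers.
import torch
import torch.nn as nn

class RecurrentArea(nn.Module):
    """One cortical 'area': a conv block unrolled over time."""
    def __init__(self, in_ch, out_ch, steps=2):
        super().__init__()
        self.input_conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.recurrent_conv = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)
        self.norm = nn.BatchNorm2d(out_ch)
        self.steps = steps

    def forward(self, x):
        drive = self.input_conv(x)                 # feedforward input to the area
        state = torch.relu(self.norm(drive))
        for _ in range(self.steps):                # recurrence: reuse the same weights
            state = torch.relu(self.norm(self.recurrent_conv(state) + drive))
        return nn.functional.max_pool2d(state, 2)

class CORnetLike(nn.Module):
    def __init__(self, n_classes=1000):
        super().__init__()
        self.v1 = RecurrentArea(3, 64)
        self.v2 = RecurrentArea(64, 128)
        self.v4 = RecurrentArea(128, 256)
        self.it = RecurrentArea(256, 512)
        self.decoder = nn.Linear(512, n_classes)   # behavioral readout

    def forward(self, x):
        x = self.it(self.v4(self.v2(self.v1(x))))
        return self.decoder(x.mean(dim=(2, 3)))    # global average pool -> logits
```

The design point is visible in RecurrentArea.forward: the network gains effective depth by unrolling the same weights over time rather than by stacking more layers, which is what keeps the architecture shallow and mappable to the brain.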

Keeping score

To optimize the system, the researchers incorporated quantitative assessment through a new system, Brain-Score.

“Until now, we’ve needed to qualitatively eyeball model performance relative to the brain,” says Schrimpf. “Brain-Score allows us to actually quantitatively evaluate and benchmark models.”
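
The full benchmark aggregates many neural and behavioral tests, but the gist of one neural test can be shown with a toy example: ask how well a model's activations linearly predict recorded neural responses on held-out images. Everything below, from the data shapes to the ridge regression and the median-correlation summary, is a hedged approximation with synthetic data, not the actual Brain-Score pipeline.

```python
# Toy version of a Brain-Score-style neural benchmark: how well do model
# activations linearly predict neural recordings? Synthetic data stand in
# for real model features and real neural responses.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_images, n_features, n_neurons = 200, 512, 50

model_activations = rng.standard_normal((n_images, n_features))
# Pretend the neural data are a noisy linear readout of a few model features.
mask = rng.random((n_features, 1)) < 0.05
weights = rng.standard_normal((n_features, n_neurons)) * mask
neural_responses = model_activations @ weights \
    + 0.5 * rng.standard_normal((n_images, n_neurons))

# Cross-validated ridge regression from features to each neuron, scored by the
# median correlation between predicted and held-out responses.
predicted = cross_val_predict(Ridge(alpha=1.0), model_activations,
                              neural_responses, cv=5)
per_neuron_r = [np.corrcoef(predicted[:, i], neural_responses[:, i])[0, 1]
                for i in range(n_neurons)]
print(f"median neural predictivity: {np.median(per_neuron_r):.2f}")
```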

They found that CORnet-S ranks highly on Brain-Score, and is the best performer of all shallow ANNs. Indeed, the system, shallow as it is, rivals the complex, ultra-deep ANNs that currently perform at the highest level.

CORnet was also benchmarked against human performance. To test, for example, whether the system can predict human behavior, 1,472 people were shown images for 100 ms each and asked to identify the objects in them. CORnet-S was able to predict how accurately humans judged what they had briefly glimpsed (bear vs. dog, etc.). Indeed, CORnet-S can predict the behavior, as well as the neural dynamics, of the visual ventral stream, indicating that it models primate-like behavior.

“We thought we’d lose performance by going to a wide, shallow network, but with recurrence, we hardly lost any,” says Schrimpf. “The message for machine learning more broadly is that you can get away without really deep networks.”

Such models of brain processing benefit both neuroscience and artificial systems, helping us to understand the elements of image processing by the brain. Neuroscience in turn informs us that features such as recurrence can be used to improve performance in shallow networks, an important message for artificial intelligence systems more broadly.

“There are clear advantages to the high-performing, complex deep networks,” explains DiCarlo, “but it’s possible to rein the network in, using the elegance of the primate brain as a model, and we think this will ultimately lead to other kinds of advantages.”

Explaining repetitive behavior linked to amphetamine use

Repetitive movements such as nail-biting and pacing are very often seen in humans and animals under the influence of habit-forming drugs. Studies at the McGovern Institute have found that these repetitive behaviors may be due to a breakdown in communication between neurons in the striatum – a deep brain region linked to habit and movement, among other functions.

The Graybiel lab has a long-standing interest in habit formation and in the effects of addiction on brain circuits related to the striatum, a key part of the basal ganglia. The lab previously found remarkably strong correlations between gene expression levels in specific parts of the striatum and exposure to psychomotor stimulants such as amphetamine and cocaine: the longer the exposure to the stimulant, the more repetitive the behavior, and the more the brain circuits changed. These findings held across animal models.

The lab has found that if they train animals to develop habits, they can completely block these repetitive behaviors using targeted inhibition or excitation of the circuits. They could even block repetitive movement patterns in a mouse model of obsessive-compulsive disorder (OCD). These experiments mimicked situations in humans in which drugs or anxiety-inducing experiences can lead to habits and repetitive movement patterns—from nail-biting to much more dangerous habitual actions.

Ann Graybiel (right) at work in the lab with research scientist Jill Crittenden. Photo: Justin Knight

Why would these circuits exist in the brain if they so often produce “bad” habits and destructive behaviors, as seen in compulsive use of drugs such as opioids or even marijuana? One answer is that we have to be flexible and ready to switch our behavior if something dangerous occurs in the environment. Habits and addictions are, in a way, the extreme pushing of this flexible system in the other direction, toward the rigid and repetitive.

“One important clue is that for many of these habits and repetitive and addictive behaviors, the person isn’t even aware that they are doing the same thing again and again. And if they are not aware, they can’t control themselves and stop,” explains Ann Graybiel, an Institute Professor at MIT. “It is as though the ‘rational brain’ has great difficulty in controlling the ‘habit circuits’ of the brain.” Understanding this breakdown in communication is a central theme of much of the Graybiel lab’s work.

Graybiel, who is also a founding member of the McGovern Institute, is now trying to understand the underlying circuits at the cellular level. The lab is examining the individual components of the striatal circuits linked to selecting actions and motivating movement, circuits that seem to be directly controlled by drugs of abuse.

In groundbreaking early work, Graybiel discovered that the striatum has distinct compartments, striosomes and matrix. These regions are spatially and functionally distinct and connect separately, through striatal projection neurons (SPNs), to motor-control centers or to neurons that release dopamine, a neurotransmitter linked to all drugs of abuse. It is in these compartments that Graybiel and colleagues have more recently found strong effects of drugs. Indeed, the finding of opposite changes in gene expression in striosome SPNs versus matrix SPNs raises the possibility that an imbalance in gene regulation leads to the abnormally inflexible behaviors caused by drug use.

“It was known that cholinergic interneurons tend to reside along the borders of the two striatal compartments, but whether this cell type mediates communication between the compartments was unknown,” explains first author Jill Crittenden, a research scientist in the Graybiel lab. “We wanted to know whether cholinergic signaling to the two compartments is disrupted by drugs that induce abnormally repetitive behaviors.”

Amphetamine drives gene transcription in striosomes. The top panel shows that striosomes (red) are distinct from matrix (green). Amphetamine treatment leads to markers of activation (the immediate early gene c-Fos, red in the two lower panels) in drug-treated animals (bottom panel), but not in controls (middle panel). Image: Jill Crittenden

It was known that cholinergic interneurons are activated by important environmental cues and promote flexible rather than repetitive behavior, but how this relates to their interactions with SPNs in the striatum was unclear. “Using high-resolution microscopy,” explains Crittenden, “we could see for the first time that cholinergic interneurons send many connections to both striosome and matrix SPNs, well placed to coordinate signaling directly across the two striatal compartments that appear otherwise isolated.”

Using a technique known as optogenetics, the Graybiel group stimulated mouse cholinergic interneurons and monitored the effects on striatal SPNs in brain tissue. They found that stimulating the interneurons inhibited the ongoing signaling activity induced by current injection in both striosomal and matrix SPNs. However, in the brains of animals that were on high doses of amphetamine and displaying repetitive behavior, stimulating the relevant interneurons failed to interrupt evoked activity in SPNs.

Using an inhibitor, the authors were able to show that these neural pathways depend on the nicotinic acetylcholine receptor: inhibiting this cell-surface signaling receptor had an effect on intercommunication among striatal neurons similar to that of drug intoxication. Since a breakdown of cholinergic interneuron signaling across the striosome and matrix compartments under drug intoxication may reduce behavioral flexibility and cue responsiveness, the work suggests one mechanism by which drugs of abuse hijack the brain’s action-selection systems and drive pathological habit formation.

The Graybiel lab is excited that they can now manipulate these behaviors by manipulating very particular components of the habit circuits. Most recently, they have discovered that they can even fully block the effects of stress by manipulating cellular components of these circuits. They now hope to dive deep into these circuits to discover how to control them.

“We hope that by pinpointing these circuit elements—which seem to have overlapping effects on habit formation, addiction, and stress—we can help to guide the development of better therapies for addiction,” explains Graybiel. “We hope to learn about what the use of drugs does to brain circuits with both short-term and long-term use. This is an urgent need.”

CRISPR makes several Discovery of the Decade lists

As we reach milestones in time, it’s common to look back and review what we have learned. A number of media outlets, including National Geographic, NPR, The Hill, Popular Mechanics, Smithsonian Magazine, Nature, Mental Floss, CNBC, and others, recognized the profound impact of genome editing by adding CRISPR to their “discovery of the decade” lists.

“In 2013, [CRISPR] was used for genome editing in a eukaryotic cell, forever altering the course of biotechnology and, ultimately, our relationship with our DNA.”
— Popular Mechanics

It’s rare for a molecular system to become a household name, but in less than a decade, CRISPR has done just that. McGovern Investigator Feng Zhang played a key role in turning CRISPR, an immune system originally found in prokaryotic (bacterial and archaeal) cells, into a broadly customizable toolbox for genomic manipulation in eukaryotic (animal and plant) cells. CRISPR allows scientists to make changes to genomes easily and quickly; it has revolutionized the biomedical sciences and has major implications for the control of infectious disease, agriculture, and the treatment of genetic disorders.

Brain biomarkers predict mood and attention symptoms

Mood and attentional disorders among teens are an increasing concern for parents, society, and peers. A recent Pew Research Center survey found conditions such as depression and anxiety to be the number one concern that young students had about their friends, ranking above drugs or bullying.

“We’re seeing an epidemic in teen anxiety and depression,” explains McGovern Research Affiliate Susan Whitfield-Gabrieli.

“Scientists are finding a huge increase in suicide ideation and attempts, something that hit home for me as a mother of teens. Emergency rooms in hospitals now have guards posted outside the doors of teenagers who have attempted suicide—this is a pressing issue,” explains Whitfield-Gabrieli, who is also director of the Northeastern University Biomedical Imaging Center and a member of the Poitras Center for Psychiatric Disorders Research.

Finding new methods for discovering early biomarkers of risk for psychiatric disorders would allow early interventions that avoid reaching points of crisis, such as suicide ideation or attempts. In research published recently in JAMA Psychiatry, Whitfield-Gabrieli and colleagues found that signatures predicting the future development of depression and attentional symptoms can be detected in children as young as seven years old.

Long-term view

While previous work had suggested that there may be biomarkers that predict development of mood and attentional disorders, identifying early biomarkers prior to an onset of illness requires following a cohort of pre-teens from a young age, and monitoring them across years. This effort to have a proactive, rather than reactive, approach to the development of symptoms associated with mental disorders is exactly the route Whitfield-Gabrieli and colleagues took.

“One of the exciting aspects of this study is that the cohort is not pre-selected for already having symptoms of psychiatric disorders themselves or even in their family,” explained Whitfield-Gabrieli. “It’s an unbiased cohort that we followed over time.”

McGovern research affiliate Susan Whitfield-Gabrieli has discovered early brain biomarkers linked to psychiatric disorders.

In some past studies, children were pre-selected, for example for a diagnosis of major depressive disorder in the parents, but Whitfield-Gabrieli and her colleagues Silvia Bunge from Berkeley and Laurie Cutting from Vanderbilt recruited a range of children without preconditions, and examined them at age 7, then again 4 years later. The researchers examined resting state functional connectivity and compared this to scores on the Child Behavior Checklist (CBCL), allowing them to relate differences in the brain to a standardized analysis of behavior that can be linked to psychiatric disorders. The CBCL is used both in research and in the clinic and is highly predictive of disorders including ADHD, so changes in the brain could be related to changes in a widely used clinical scoring system.

“Over the four years, some people got worse, some got better, and some stayed the same according to the CBCL. We could relate this directly to differences in brain networks, and could identify at age 7 who would get worse,” explained Whitfield-Gabrieli.

Brain network changes

The authors analyzed differences in resting state network connectivity, the degree to which regions across the brain rise and fall in activity level together, as visualized using fMRI. Reduced connectivity between these regions may reflect reduced “top-down” control of neural circuits. The dorsolateral prefrontal region is linked to executive function, external attention, and emotional control. Increased connectivity between this region and the medial prefrontal cortex is known to be present in attention deficit hyperactivity disorder (ADHD), while reduced connectivity to a different brain region, the subgenual anterior cingulate cortex (sgACC), is seen in major depressive disorder. The question remained whether these changes could be seen prior to the onset of diagnosable attentional or mood disorders.
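
At its core, the resting state measure used here is a correlation between the fMRI time courses of different regions. The sketch below shows that basic computation on synthetic time series; the region names are illustrative, and real analyses add preprocessing, motion correction, and group statistics.

```python
# Minimal sketch of resting state functional connectivity: the correlation
# between the fMRI time courses of two regions of interest (ROIs).
# Synthetic signals stand in for real, preprocessed scans.
import numpy as np

rng = np.random.default_rng(1)
n_timepoints = 300                    # e.g., ~10 minutes of fMRI at TR = 2 s

shared = rng.standard_normal(n_timepoints)                  # common fluctuation
dlpfc = shared + 0.8 * rng.standard_normal(n_timepoints)    # dorsolateral PFC ROI
mpfc = shared + 0.8 * rng.standard_normal(n_timepoints)     # medial PFC ROI

connectivity = np.corrcoef(dlpfc, mpfc)[0, 1]
print(f"dlPFC-mPFC functional connectivity: r = {connectivity:.2f}")
```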

Whitfield-Gabrieli and colleagues found that these resting state networks differed in the brains of children who would later develop anxiety/depression and ADHD symptoms. Weaker connectivity between the dorsolateral and medial prefrontal cortical regions tended to be seen in children whose attention scores went on to improve. Analysis of these resting state networks could differentiate those who would show typical attentional behavior by age 11 from those who went on to develop ADHD.

Whitfield-Gabrieli has replicated this finding in an independent sample of children and she is continuing to expand the analysis and check the results, as well as follow this cohort into the future. Should changes in resting state networks be a consistent biomarker, the next step is to initiate interventions prior to the point of crisis.

“We’ve recently been able to use mindfulness interventions, and show these reduce self-perceived stress and amygdala activation in response to fear, and we are also testing the effect of exercise interventions,” explained Whitfield-Gabrieli. “The hope is that by using predictive biomarkers we can augment children’s lifestyles with healthy interventions that can prevent risk converting to a psychiatric disorder.”

Can fMRI reveal insights into addiction and treatments?

Many debilitating conditions like depression and addiction have biological signatures hidden in the brain well before symptoms appear. What if brain scans could be used to detect these hidden signatures and determine the optimal treatment for each individual? McGovern Investigator John Gabrieli is interested in this question and wrote about the use of imaging technologies as a predictive tool for brain disorders in a recent issue of Scientific American.

McGovern Investigator John Gabrieli pens a story for Scientific American about the potential for brain imaging to predict the onset of mental illness.

“Brain scans show promise in predicting who will benefit from a given therapy,” says Gabrieli, who is also the Grover Hermann Professor in Brain and Cognitive Sciences at MIT. “Differences in neural activity may one day tell clinicians which depression treatment will be most effective for an individual or which abstinent alcoholics will relapse.”

Gabrieli cites research showing that half of patients treated for alcohol abuse return to drinking within a year of treatment, with similar relapse rates for stimulants such as cocaine. Failed treatments may be a source of further anxiety and stress, Gabrieli notes, so any information we can glean from the brain to pinpoint treatments or doses that would help would be highly informative.

Current treatments rely on little scientific evidence to support the length of time needed in a rehabilitation facility, he says, but “a number suggest that brain measures might foresee who will succeed in abstaining after treatment has ended.”

Further data are needed to support this idea, but Gabrieli’s Scientific American piece makes the case that such technology may be promising for a range of addiction treatments, including those for abuse of alcohol, nicotine, and illicit drugs.

Gabrieli also believes brain imaging has the potential to reshape education. For example, educational interventions targeting dyslexia might be more effective if personalized to specific differences in the brain that point to the source of the learning gap.

But for the prediction sciences to move forward in mental health and education, he concludes, the research community must design further rigorous studies to examine these important questions.

Single neurons can encode distinct landmarks

The organization of many neurons wired together in a complex circuit gives the brain its ability to perform powerful calculations. Work from the Harnett lab recently showed that even single neurons can process more information than previously thought, representing distinct variables at the subcellular level during behavior.

McGovern Investigator Mark Harnett and postdoc Jakob Voigts conducted an extremely delicate and intricate imaging experiment on different parts of the same neuron in the mouse retrosplenial cortex during 2-D navigation. Their setup allowed two-photon imaging of neuronal sub-compartments during free 2-D navigation with head rotation, the latter being important for following neural activity during naturalistic, complex behavior.

Recording computation by subcompartments in neurons.
In the work, published recently in Neuron, the authors used Ca2+ imaging to show that the soma of a single neuron was consistently active when mice were at particular landmarks as they navigated an arena. The dendrites (tree-like antennas that receive input from other neurons) of the very same neuron were robustly active, independently of the soma, at distinct positions and orientations in the arena. This strongly suggests that the dendrites encode distinct information from their parent soma, in this case spatial variables during navigation, laying the foundation for studying sub-cellular processes during complex behaviors.
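
One way to make the soma-versus-dendrite comparison concrete is to build an occupancy-normalized spatial activity map for each subcellular region of interest and then correlate the maps. The sketch below is a simplified, hypothetical version of such an analysis on synthetic data; the study's actual methods are more involved.

```python
# Sketch: spatial tuning maps for a somatic ROI and a dendritic ROI of the
# same neuron. A low map-to-map correlation would indicate the dendrites are
# tuned to different spatial variables than their parent soma.
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_bins = 5000, 10

x = rng.random(n_samples)             # animal's position in the arena (0-1)
y = rng.random(n_samples)
soma_f = rng.random(n_samples)        # Ca2+ fluorescence, somatic ROI
dend_f = rng.random(n_samples)        # Ca2+ fluorescence, dendritic ROI

def rate_map(signal, x, y, n_bins):
    """Mean signal per spatial bin (summed activity divided by occupancy)."""
    total, xe, ye = np.histogram2d(x, y, bins=n_bins, weights=signal)
    occupancy, _, _ = np.histogram2d(x, y, bins=[xe, ye])
    return total / np.maximum(occupancy, 1)

soma_map = rate_map(soma_f, x, y, n_bins)
dend_map = rate_map(dend_f, x, y, n_bins)

r = np.corrcoef(soma_map.ravel(), dend_map.ravel())[0, 1]
print(f"soma-dendrite spatial tuning correlation: r = {r:.2f}")
```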
McGovern scientists named STAT Wunderkinds

McGovern researchers Sam Rodriques and Jonathan Strecker have been named to the class of 2019 STAT Wunderkinds. This group of 22 researchers was selected from a national pool of hundreds of nominees; the list aims to recognize trail-blazing scientists who are on the cusp of launching their careers but are not yet fully independent.

“We were thrilled to receive this news,” said Robert Desimone, director of the McGovern Institute. “It’s great to see the remarkable progress being made by young scientists in McGovern labs be recognized in this way.”

Finding context

Sam Rodriques works in Ed Boyden’s lab at the McGovern Institute, where he develops new technologies that enable researchers to understand the behaviors of cells within their native spatial and temporal context.

“Psychiatric disease is a huge problem, but only a handful of first-in-class drugs for psychiatric diseases have been approved since the 1960s,” explains Rodriques, who is also affiliated with the MIT Media Lab and the Broad Institute. “Coming up with novel cures is going to require new ways to generate hypotheses about the biological processes that underpin disease.”

Rodriques also works on several technologies within the Boyden lab, including preserving spatial information in molecular mapping technologies, finding ways of following neural connectivity in the brain, and Implosion Fabrication, or “Imp Fab.” This nanofabrication technology allows objects to be evenly shrunk to the nanoscale and has a wide range of potential applications, including building new miniature devices for examining neural function.

“I was very surprised, not expecting it at all!” explains Rodriques when asked about becoming a STAT Wunderkind. “I’m sure that all of the hundreds of applicants are very accomplished scientists, and so to be chosen like this is really an honor.”

New tools for gene editing

Jonathan Strecker is currently a postdoc in Feng Zhang’s lab, associated with both the McGovern Institute and the Broad Institute. While CRISPR-Cas9 continues to have a profound effect and huge potential in research, biomedical, and agricultural applications, the ability to move entire genes into specific target locations remained out of reach.

“Genome editing with CRISPR-Cas enzymes typically involves cutting and disrupting genes, or making certain base edits,” explains Strecker. “However, inserting large pieces of DNA is still hard to accomplish.”

As a postdoctoral researcher in the lab of CRISPR pioneer Feng Zhang, Strecker led research that showed how large sequences could be inserted into a genome at a given location.

“Nature often has interesting solutions to these problems and we were fortunate to identify and characterize a remarkable CRISPR system from cyanobacteria that functions as a programmable transposase.”

Importantly, the system he discovered, called CAST, doesn’t require cellular machinery to insert DNA. This means that CAST could work in many cell types, including those that have stopped dividing, such as neurons, a possibility that is now being pursued.

By finding new sources of inspiration, be it nature or art, both Rodriques and Strecker join a stellar line up of young investigators being recognized for creativity and innovation.
Controlling our internal world

Olympic skaters can launch, perform multiple aerial turns, and land gracefully, anticipating imperfections and reacting quickly to correct course. To make such elegant movements, the brain must have an internal model of the body to control, predict, and make almost instantaneous adjustments to motor commands. So-called “internal models” are a fundamental concept in engineering and have long been suggested to underlie the brain’s control of movement, but what about processes that occur in the absence of movement, such as contemplation, anticipation, and planning?

Using a novel combination of task design, data analysis, and modeling, MIT neuroscientist Mehrdad Jazayeri and colleagues now provide compelling evidence that the core elements of an internal model also control purely mental processes in a study published in Nature Neuroscience.

“During my thesis I realized that I’m interested, not so much in how our senses react to sensory inputs, but instead in how my internal model of the world helps me make sense of those inputs,” says Jazayeri, the Robert A. Swanson Career Development Professor of Life Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Indeed, understanding the building blocks exerting control of such mental processes could help to paint a better picture of disruptions in mental disorders, such as schizophrenia.

Internal models for mental processes

Scientists working on the motor system have long theorized that the brain overcomes noisy and slow signals using an accurate internal model of the body. This internal model serves three critical functions: it provides motor commands to control movement, simulates upcoming movement to overcome delays, and uses feedback to make real-time adjustments.

“The framework that we currently use to think about how the brain controls our actions is one that we have borrowed from robotics: we use controllers, simulators, and sensory measurements to control machines and train operators,” explains Reza Shadmehr, a professor at the Johns Hopkins School of Medicine who was not involved with the study. “That framework has largely influenced how we imagine our brain controlling our movements.”

Jazayeri and colleagues wondered whether the same framework might explain the control principles governing mental states in the absence of any movement.

“When we’re simply sitting, thoughts and images run through our heads and, fundamental to intellect, we can control them,” explains lead author Seth Egger, a former postdoctoral associate in the Jazayeri lab and now at Duke University.

“We wanted to find out what’s happening between our ears when we are engaged in thinking,” says Egger.

Imagine, for example, a sign language interpreter keeping up with a fast speaker. To track speech accurately, the interpreter continuously anticipates where the speech is going, rapidly adjusting when the actual words deviate from the prediction. The interpreter could be using an internal model to anticipate upcoming words, and using feedback to make adjustments on the fly.

1-2-3…Go

Hypothesizing about how the components of an internal model function in scenarios such as translation is one thing. Cleanly measuring and proving the existence of these elements is much more complicated as the activity of the controller, simulator, and feedback are intertwined. To tackle this problem, Jazayeri and colleagues devised a clever task with primate models in which the controller, simulator, and feedback act at distinct times.

In this task, called “1-2-3-Go,” the animal sees three consecutive flashes (1, 2, and 3) that form a regular beat, and learns to make an eye movement (Go) when it anticipates the fourth flash should occur. During the task, researchers measured neural activity in a region of the frontal cortex they had previously linked to the timing of movement.

Jazayeri and colleagues had clear predictions about when the controller would act (between the third flash and “Go”) and when feedback would be engaged (with each flash of light). The key surprise came when the researchers saw evidence of the simulator anticipating the third flash. This unexpected neural activity had dynamics resembling those of the controller, but was not associated with a response. In other words, the researchers uncovered a covert plan that functions as the simulator, thus revealing all three elements of an internal model for a mental process: the planning and anticipation of “Go” in the “1-2-3-Go” sequence.
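
The task logic, and the division of labor among the three internal-model elements, can be captured in a toy simulation: feedback supplies noisy measurements of the flash intervals, the simulator extrapolates the beat forward, and the controller issues "Go" at the predicted time. The noise level and the simple averaging rule below are illustrative assumptions, not the paper's fitted model.

```python
# Toy "1-2-3-Go": estimate the beat from noisy interval measurements and plan
# an eye movement at the predicted time of the fourth flash.
import numpy as np

rng = np.random.default_rng(3)
true_interval = 0.75                          # seconds between flashes
flash_times = np.array([0, 1, 2]) * true_interval

# Feedback stage: each flash yields a noisy measurement of the elapsed interval.
measured = np.diff(flash_times) + rng.normal(0, 0.05, size=2)

# Simulator stage: combine the measurements and extrapolate the beat.
estimated_interval = measured.mean()
predicted_fourth = flash_times[-1] + estimated_interval

# Controller stage: trigger the movement at the predicted time.
print(f"estimated interval: {estimated_interval * 1000:.0f} ms")
print(f"plan 'Go' at t = {predicted_fourth:.3f} s "
      f"(true fourth flash at {3 * true_interval:.3f} s)")
```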

“Jazayeri’s work is important because it demonstrates how to study mental simulation in animals,” explains Shadmehr, “and where in the brain that simulation is taking place.”

Having found how and where to measure an internal model in action, Jazayeri and colleagues now plan to ask whether these control strategies can explain how primates effortlessly generalize their knowledge from one behavioral context to another. For example, how does an interpreter rapidly adjust when someone with widely different speech habits takes the podium? This line of investigation promises to shed light on high-level mental capacities of the primate brain that simpler animals seem to lack, that go awry in mental disorders, and that designers of artificial intelligence systems so fondly seek.

Hearing through the clatter

In a busy coffee shop, our eardrums are inundated with sound waves – people chatting, the clatter of cups, music playing – yet our brains somehow manage to untangle relevant sounds, like a barista announcing that our “coffee is ready,” from insignificant noise. A new McGovern Institute study sheds light on how the brain accomplishes the task of extracting meaningful sounds from background noise – findings that could one day help to build artificial hearing systems and aid development of targeted hearing prosthetics.

“These findings reveal a neural correlate of our ability to listen in noise, and at the same time demonstrate functional differentiation between different stages of auditory processing in the cortex,” explains Josh McDermott, an associate professor of brain and cognitive sciences at MIT, a member of the McGovern Institute and the Center for Brains, Minds and Machines, and the senior author of the study.

The auditory cortex, a part of the brain that responds to sound, has long been known to have distinct anatomical subregions, but the role these areas play in auditory processing has remained a mystery. In their study, published today in Nature Communications, McDermott and former graduate student Alex Kell discovered that these subregions respond differently to the presence of background noise, suggesting that auditory processing occurs in steps that progressively hone in on and isolate a sound of interest.

Background check

Previous studies have shown that the primary and non-primary subregions of the auditory cortex respond to sound with different dynamics, but these studies were largely based on brain activity in response to speech or simple synthetic sounds (such as tones and clicks). Little was known about how these regions might work to subserve everyday auditory behavior.

To test these subregions under more realistic conditions, McDermott and Kell, who is now a postdoctoral researcher at Columbia University, assessed changes in human brain activity while subjects listened to natural sounds with and without background noise.

While lying in an MRI scanner, subjects listened to 30 different natural sounds, ranging from meowing cats to ringing phones, that were presented alone or embedded in real-world background noise such as heavy rain.

“When I started studying audition,” explains Kell, “I started just sitting around in my day-to-day life, just listening, and was astonished at the constant background noise that seemed to usually be filtered out by default. Most of these noises tended to be pretty stable over time, suggesting we could experimentally separate them. The project flowed from there.”

To their surprise, Kell and McDermott found that the primary and non-primary regions of the auditory cortex responded differently to natural sound depending upon whether background noise was present.

Primary auditory cortex (outlined in white) responses change (blue) when background noise is present, whereas non-primary activity is robust to background noise (yellow). Image: Alex Kell

They found that the activity of the primary auditory cortex was altered when background noise was present, suggesting that this region had not yet differentiated meaningful sounds from background noise. Non-primary regions, however, responded similarly to natural sounds irrespective of whether noise was present, suggesting that cortical signals generated by sound are transformed or “cleaned up” to remove background noise by the time they reach the non-primary auditory cortex.
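
A simple way to quantify this "cleaning up" is to correlate a region's responses to the same set of sounds heard in quiet versus in background noise: a noise-robust region should yield a high correlation. The sketch below applies that measure to synthetic response vectors; it is an assumption-laden stand-in for the study's actual fMRI analysis.

```python
# Sketch of a noise-robustness index: correlate each region's responses to
# 30 natural sounds presented in quiet versus embedded in background noise.
# The response vectors here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(4)
n_sounds = 30

quiet_primary = rng.random(n_sounds)
noisy_primary = quiet_primary + 2.0 * rng.random(n_sounds)        # strongly altered
quiet_nonprimary = rng.random(n_sounds)
noisy_nonprimary = quiet_nonprimary + 0.1 * rng.random(n_sounds)  # nearly unchanged

def noise_robustness(quiet, noisy):
    """Correlation of responses with and without background noise."""
    return np.corrcoef(quiet, noisy)[0, 1]

print(f"primary:     r = {noise_robustness(quiet_primary, noisy_primary):.2f}")
print(f"non-primary: r = {noise_robustness(quiet_nonprimary, noisy_nonprimary):.2f}")
```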

“We were surprised by how big the difference was between primary and non-primary areas,” explained Kell, “so we ran a bunch more subjects but kept seeing the same thing. We had a ton of questions about what might be responsible for this difference, and that’s why we ended up running all these follow-up experiments.”

A general principle

Kell and McDermott went on to test whether these responses were specific to particular sounds, and discovered that the effect held no matter the source or type of sound. Music, speech, or a squeaky toy all activated the non-primary regions similarly, whether or not background noise was present.

The authors also tested whether attention is relevant. Even when the researchers sneakily distracted subjects with a visual task in the scanner, the cortical subregions responded to meaningful sound and background noise in the same way, showing that attention is not driving this aspect of sound processing. In other words, even when we are focused on reading a book, our brain is diligently sorting the sound of our meowing cat from the patter of heavy rain outside.

Future directions

The McDermott lab is now building computational models of the so-called “noise robustness” found in the Nature Communications study and Kell is pursuing a finer-grained understanding of sound processing in his postdoctoral work at Columbia, by exploring the neural circuit mechanisms underlying this phenomenon.

By gaining a deeper understanding of how the brain processes sound, the researchers hope their work will contribute to improved diagnosis and treatment of hearing dysfunction. Such research could help to reveal the origins of the listening difficulties that accompany developmental disorders or age-related hearing loss. For instance, if listening difficulties stem from dysfunctional sensory processing, this might appear as abnormal noise robustness in the auditory cortex. Normal noise robustness might instead suggest impairments elsewhere in the brain, for example a breakdown in higher executive function.

“In the future,” McDermott says, “we hope these noninvasive measures of auditory function may become valuable tools for clinical assessment.”

Ed Boyden wins premier Royal Society honor

Edward S. Boyden, the Y. Eva Tan Professor in Neurotechnology at MIT, has been awarded the 2019 Croonian Medal and Lecture by the Royal Society. Twenty-four medals and awards are announced by the Royal Society each year, honoring exceptional researchers who are making outstanding contributions to science.

“The Royal Society gives an array of medals and awards to scientists who have done exceptional, ground-breaking work,” explained Sir Venki Ramakrishnan, President of the Royal Society. “This year, it is again a pleasure to see these awards bestowed on scientists who have made such distinguished and far-reaching contributions in their fields. I congratulate and thank them for their efforts.”

Boyden wins the medal and lecture in recognition of his research that is expanding our understanding of the brain. This includes his critical role in the development of optogenetics, a technique for controlling brain activity with light, and his invention of expansion microscopy. Croonian Medal laureates include notable luminaries of science and neurobiology.

“It is a great honor to be selected to receive this medal, especially since it was also given to people such as Santiago Ramon y Cajal, the founder of modern neuroscience,” says Boyden. “This award reflects the great work of many fantastic students, postdocs, and collaborators who I’ve had the privilege to work with over the years.”

The award includes an invitation to deliver the premier British lecture in the biological sciences, given annually at the Royal Society in London. At the lecture, the winner is awarded a medal and a gift of £10,000. This announcement comes shortly after Boyden was co-awarded the Warren Alpert Prize for his role in developing optogenetics.

History of the Croonian Medal and Lecture

William Croone, FRS. Photo: Royal College of Physicians, London

The lectureship was conceived by William Croone FRS, one of the original Fellows of the Society based in London. Among the papers left on his death in 1684 were plans to endow two lectureships, one at the Royal Society and the other at the Royal College of Physicians. His widow later bequeathed the means to carry out the scheme. The lecture series began in 1738.
Ed Boyden holds the titles of Investigator, McGovern Institute; Y. Eva Tan Professor in Neurotechnology at MIT; Leader, Synthetic Neurobiology Group, MIT Media Lab; Professor, Biological Engineering, Brain and Cognitive Sciences, MIT Media Lab; Co-Director, MIT Center for Neurobiological Engineering; Member, MIT Center for Environmental Health Sciences, Computational and Systems Biology Initiative, and Koch Institute.