Joining the dots in large neural datasets

You might have played ‘join the dots’, a puzzle in which numbers guide you to draw until a complete picture emerges. But imagine a complex underlying image with no numbers to guide the sequence of joining. This is the problem that challenges scientists who work with large amounts of neural data. Sometimes they can align data to a stereotyped behavior, and thus define a sequence of neuronal activity underlying navigation of a maze or singing of a song learned and repeated across generations of birds. But most natural behavior is not stereotyped, and when it comes to sleeping, imagining, and other higher-order activities, there is not even a physical behavioral readout for alignment. Michale Fee and colleagues have now developed an algorithm, seqNMF, that can recognize relevant sequences of neural activity even when there is no guide, such as an overt sequence of behaviors or notes, to align to.

“This method allows you to extract structure from the internal life of the brain without being forced to make reference to inputs or output,” says Michale Fee, a neuroscientist at the McGovern Institute at MIT, Associate Department Head and Glen V. and Phyllis F. Dorflinger Professor of Neuroscience in the Department of Brain and Cognitive Sciences, and investigator with the Simons Collaboration on the Global Brain. Fee conducted the study in collaboration with Mark S. Goldman of the University of California, Davis.

To achieve this, the authors of the study, co-led by Emily L. Mackevicius and Andrew H. Bahle of the McGovern Institute, took convolutional non-negative matrix factorization (NMF), a tool that allows extraction of sparse but important features from complex and noisy data, and extended it so that it can extract sequences over time that are related to a learned behavior or song. The new algorithm still relies on repetition, but on tell-tale repetitions of neural activity rather than simplistic repetitions in the animal’s behavior. seqNMF can follow repeated sequences of firing over time that are not tied to an external time reference, and can extract relevant sequences of neural firing in an unsupervised fashion, without the researcher supplying prior information.
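To make the factorization concrete, the sketch below implements plain convolutional NMF in Python with NumPy: a neurons-by-time matrix X is approximated as the sum, over short time lags, of K temporal patterns W convolved with their activation time courses H. This is a minimal illustrative reimplementation using standard multiplicative updates, not the authors’ seqNMF code; seqNMF additionally penalizes redundancy across factors, and all function names and parameters here are this sketch’s own assumptions.

```python
import numpy as np

def shift(M, l):
    """Shift the columns of M by l time bins (right if l > 0), zero-padding."""
    out = np.zeros_like(M)
    if l > 0:
        out[:, l:] = M[:, :-l]
    elif l < 0:
        out[:, :l] = M[:, -l:]
    else:
        out[:] = M
    return out

def reconstruct(W, H):
    """X_hat[n, t] = sum over factors k and lags l of W[n, k, l] * H[k, t - l]."""
    L = W.shape[2]
    return sum(W[:, :, l] @ shift(H, l) for l in range(L))

def conv_nmf(X, K=3, L=20, n_iter=200, eps=1e-10, seed=0):
    """Minimal convolutional NMF via multiplicative updates (Frobenius loss).

    X : (neurons, time) non-negative data matrix.
    Returns W (neurons, K, L) temporal patterns and H (K, time) activations.
    """
    rng = np.random.default_rng(seed)
    N, T = X.shape
    W = rng.random((N, K, L))
    H = rng.random((K, T))
    for _ in range(n_iter):
        # Update H: accumulate numerator and denominator over every time lag.
        X_hat = reconstruct(W, H)
        num = sum(W[:, :, l].T @ shift(X, -l) for l in range(L))
        den = sum(W[:, :, l].T @ shift(X_hat, -l) for l in range(L))
        H *= num / (den + eps)
        # Update W one lag at a time against the refreshed reconstruction.
        X_hat = reconstruct(W, H)
        for l in range(L):
            Hl = shift(H, l)
            W[:, :, l] *= (X @ Hl.T) / (X_hat @ Hl.T + eps)
    return W, H
```

Because every quantity stays non-negative, each learned pattern W[:, k, :] reads directly as a sequence of neurons firing in order, and H[k, :] marks when that sequence recurs, with no external event needed for alignment.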

In the current study, the authors first applied and honed the algorithm on synthetic datasets. These tests showed that the algorithm could “join the dots” without additional informational input. Once seqNMF performed well on these tests, they applied it to openly available data from rats, finding that they could extract sequences of neural firing in the hippocampus that are relevant to finding a water reward in a maze.
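To see this “join the dots” behavior for yourself, one can run the sketch above on a toy synthetic dataset with a planted firing sequence; the construction below is hypothetical and far simpler than the paper’s synthetic benchmarks.

```python
# Toy dataset: 50 neurons, one sequence of 10 neurons firing in order,
# replayed at random times on top of low background noise.
rng = np.random.default_rng(1)
N, T, L = 50, 1000, 10
X = 0.05 * rng.random((N, T))
for start in rng.choice(T - L, size=30, replace=False):
    for i in range(L):                  # neuron i fires i bins after onset
        X[i, start + i] += 1.0

W, H = conv_nmf(X, K=2, L=L, n_iter=100)
# The planted sequence should concentrate in one factor: W[:, k, :] recovers
# the ordered firing pattern, and peaks in H[k, :] mark each replay time.
```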

Having passed these initial tests, the authors upped the ante and challenged seqNMF to find relevant neural activity sequences underlying a non-stereotyped behavior: improvised singing by zebra finches that have not learned the signature song of their species (untutored birds). The authors analyzed neural data from HVC, a region of the bird brain previously linked to song learning. Since normal adult bird song is stereotyped, the researchers could align neural activity with features of the song itself in well-tutored birds. Fee and colleagues then turned to untutored birds and found that these birds still showed repeated neural sequences related to their “improvised” songs, reminiscent of those in tutored birds but messier. Indeed, the brain of an untutored bird will even initiate two distinct neural signatures at the same time, but seqNMF can see past the resulting neural cacophony and decipher that multiple patterns are present but overlapping. Finding this level of order in such neural datasets was nearly impossible with previous methods of analysis.

seqNMF can potentially be applied to any neural activity, and the researchers are now testing whether the algorithm can indeed be generalized to extract information from other types of neural data. In other words, now that it is clear that seqNMF can find a relevant sequence of neural activity for a non-stereotyped behavior, scientists can ask whether the neural basis of behaviors in other organisms, and even of activities such as sleep and imagination, can be extracted. Indeed, seqNMF is available on GitHub for researchers to apply to their own questions of interest.

Ila Fiete

Neural Coding and Dynamics

Ila Fiete builds tools and mathematical models to expand our knowledge of the brain’s computations. Specifically, her lab focuses on how the brain develops and reshapes its neural connections to perform high-level computations, like those involved in memory and learning. The Fiete lab applies cutting-edge theoretical and quantitative methods, wielding computational models informed by mathematics, machine learning, and physics, to dig deeper into how the brain represents and manipulates information. Through these strategies, Fiete hopes to shed new light on the neural ensembles behind learning, integration of new information, inference-making, and spatial navigation.

Her lab’s findings are pushing the frontiers of neuroscience, advancing the utility of computational tools in the field, and building a more robust understanding of complex brain processes.

Josh McDermott

The Science of Hearing

Hearing enables us to make sense of our whereabouts, understand the emotional state of others, and enjoy musical experiences. Acoustic information relays vital cues about the world—yet much of the sophisticated brain system that decodes this information is poorly understood.

Josh McDermott’s research seeks the foundational principles of sound perception. Groundbreaking discoveries from the McDermott lab have clarified how people hear and process sounds. His research informs new treatments for those with hearing loss and paves the way for machine systems that emulate the human ability to recognize and interpret sound. McDermott’s lab has also pioneered new approaches for understanding music perception: the lab deconstructs the neural ensembles that allow us to appreciate music, while also studying the often striking variation that can occur across cultures.

Rebecca Saxe

Mind Reading

How do we think about the thoughts of other people? How are some thoughts universal and others specific to a culture or an individual?

Rebecca Saxe is tackling these and other thorny questions surrounding human thought in adults, children, and infants. Leveraging behavioral testing, brain imaging, and computational modeling, her lab focuses on a diverse set of research questions, including what people learn from punishment, the role of generosity in social relationships, and the navigation and language abilities of toddlers. The team is also using computational models to deconstruct complex thought processes, such as how humans predict the emotions of others. This research not only deepens the connection between sociology and neuroscience, but also brings clarity to the social threads that form the fabric of society.

Mehrdad Jazayeri

Neurobiology of Mental Computations

How does the brain give rise to the mind? How do neurons, circuits, and synapses in the brain encode knowledge about objects, events, and other structural and causal relationships in the environment? Research in Mehrdad Jazayeri’s lab brings together ideas from cognitive science, neuroscience, and machine learning with experimental data in humans, animals, and computer models to develop a computational understanding of how the brain creates internal representations, or models, of the external world.

Nancy Kanwisher

Architecture of the Mind

What is the nature of the human mind? Philosophers have debated this question for centuries, but Nancy Kanwisher approaches it empirically, using brain imaging to look for components of the human mind that reside in particular regions of the brain. Her lab has identified cortical regions that are selectively engaged in the perception of faces, places, and bodies, and other regions specialized for uniquely human functions including music, language, and thinking about other people’s thoughts. More recently, her lab has begun using artificial neural networks to unpack these findings and examine why, from a computational standpoint, the brain exhibits functional specialization in the first place.

Tomaso Poggio

Engineering Intelligence

Tomaso Poggio is one of the founders of computational neuroscience. He pioneered models of the fly’s visual system and of human stereovision. His research has always been interdisciplinary, bridging brains and computers, and is now focused on the mathematics of deep learning and on the computational neuroscience of the visual cortex. Poggio also introduced regularization theory to computational vision, made key contributions to the biophysics of computation and to learning theory, and developed an influential model of recognition in the visual cortex. Research in the Poggio lab is guided by the belief that understanding learning is at the heart of understanding both biological and artificial intelligence. Learning is therefore the route both to understanding how the human brain works and to making intelligent machines.

Mark Harnett

Listening to Neurons

Mark Harnett studies how the biophysical features of individual neurons, including ion channels, receptors, and membrane electrical properties, endow neural circuits with the ability to process information and perform the complex computations that underlie behavior. As part of this work, the Harnett lab was the first to describe the physiological properties of human dendrites, the elaborate tree-like structures through which neurons receive the vast majority of their synaptic inputs. Harnett also examines how computations are instantiated in neural circuits to produce complex behaviors such as spatial navigation.

Satrajit Ghosh

Personalized Medicine

A fundamental problem in psychiatry is that there are no biological markers for diagnosing mental illness or for indicating how best to treat it. Treatment decisions are based entirely on symptoms: doctors and their patients typically try one treatment, then, if it does not work, try another, and perhaps another. Satrajit Ghosh hopes to change this picture, and his research suggests that individual brain scans and speaking patterns hold valuable information for guiding psychiatrists and patients. His research group develops novel analytic platforms that use such information to build robust predictive models of human health. Current research areas include depression, suicide, anxiety disorders, autism, Parkinson’s disease, and brain tumors.

James DiCarlo

Rapid Recognition

DiCarlo’s research goal is to reverse engineer the brain mechanisms that underlie human visual intelligence. He and his collaborators have revealed how population image transformations, carried out by a deep stack of interconnected neocortical brain areas called the primate ventral visual stream, effortlessly extract object identity from visual images. His team uses a combination of large-scale neurophysiology, brain imaging, direct neural perturbation methods, and machine learning to build and test neurally mechanistic computational models of the ventral visual stream and its support of cognition and behavior. Such an engineering-based understanding is likely to lead to new artificial vision and artificial intelligence approaches, new brain-machine interfaces to restore or augment lost senses, and a new foundation for ameliorating disorders of the mind.