3Q: The interface between art and neuroscience

CBMM postdoc Sarah Schwettmann

Computational neuroscientist Sarah Schwettmann, who works in the Center for Brains, Minds, and Machines at the McGovern Institute, is one of three instructors behind the cross-disciplinary course 9.S52/9.S916 (Vision in Art and Neuroscience), which introduces students to core concepts in visual perception through the lenses of art and neuroscience.

Supported for the past two years by a faculty grant from the Center for Art, Science and Technology at MIT (CAST), the class is led by Pawan Sinha, a professor of vision and computational neuroscience in the Department of Brain and Cognitive Sciences. Sinha and Schwettmann are joined in the course by Seth Riskin SM ’89, a light artist and the manager of the MIT Museum Studio and Compton Gallery, where the course is taught. Schwettmann discussed the combination of art and science in an educational setting.

Q: How have the three of you approached this cross-disciplinary class in art and neuroscience?

A: Discussions around this intersection often consider what each field has to offer the other. We take a different approach, one I refer to as occupying the gap, or positioning ourselves between the two fields and asking what essential questions underlie them both. One question addresses the nature of the human relationship to the world. The course suggests one answer: This relationship is fundamentally creative, from the brain’s interpretation of incoming sensory data in perception, to the explicit construction of experiential worlds in art.

Neuroscience and art, therefore, each provide a set of tools for investigating different levels of the constructive process. Through neuroscience, we develop a specific understanding of the models of the world that the brain uses to make sense of incoming visual data. With articulation of those models, we can engineer types of inputs that interact with visual processing architecture in particularly exquisite ways, and do so reliably, giving artists a toolkit for remixing and modulating experience. In the studio component of the course, we experiment with this toolkit and collectively move it forward.

While designing the course, Pawan, Seth, and I found that we were each addressing a similar set of questions, the same that motivate the class, through our own research and practice. In parallel to computational vision research, Professor Sinha leads a humanitarian initiative called Project Prakash, which provides treatment to blind children in India and explores the development of vision following the restoration of sight. Where does structure in perception originate? As an artist in the MIT Museum Studio, Seth works with articulated light to sculpt structured visual worlds out of darkness. I also live on this interface where the brain meets the world — my research in the Department of Brain and Cognitive Sciences examines the neural basis of mental models for simulating physics. Linking our work in the course is an experiment in synthesis.

Q: What current research in vision, neuroscience, and art is being explored at MIT, and how does the class connect it to hands-on practice?

A: Our brains build a rich world of experience and expectation from limited and noisy sensory data with infinite potential interpretations. In perception research, we seek to discover how the brain finds more meaning in incoming data than is explained by the signal alone. Work being done at MIT around generative models addresses this, for instance in the labs of Josh Tenenbaum and Josh McDermott in the Department of Brain and Cognitive Sciences. Researchers present an ambiguous visual or auditory stimulus, and by probing someone’s perceptual interpretation, they get a handle on the structures that the mind generates to interpret incoming data, and they can begin to build computational models of the process.

In Vision in Art and Neuroscience, we focus on the experiential as well as the experimental, probing the perceiver’s experience of the structure-generating process: perceiving perception itself.

As instructors, we face the pedagogical question: What exercises, in the studio, can evoke so striking an experience of students’ own perception that cutting-edge research takes on new meaning, understood in the immediacy of seeing? Later in the semester, students face a similar question as artists: How can one create visual environments where viewers experience their own perceptual processing at work? Done well, this experience becomes the artwork itself. Early in the course, students explore the Ganzfeld effect, popularized by artist James Turrell, in which the viewer is exposed to an unstructured visual field of uniform illumination. In this experience, one feels the mind struggling to fit models of the world to unstructured input, and attempting this over and over again, an interpretation process that usually goes unnoticed when input structure is expected by the visual processing architecture. The progression of the course modules follows the hierarchy of visual processing in the brain, which builds increasingly complex interpretations of visual inputs, from brightness and edges to depth, color, and recognizable form.

MIT students first encounter those concepts in the seminar component of the course at the beginning of each week. Later in the week, students translate findings into experimental approaches in the studio. We work with light directly, from introducing a single pinpoint of light into an otherwise completely dark room, to building intricate environments using programmable electronics. Students begin to take this work into their own hands, in small groups and individually, culminating in final projects for exhibition. These exhibitions are truly a highlight of the course. They’re often one of the first times that students have built and shown artworks. That’s been a gift to share with the broader MIT community, and a great learning experience for students and instructors alike.

Q: How has that approach been received by the MIT community?

A: What we’re doing has resonated across disciplines: In addition to neuroscience, we have students and researchers joining us from computer science, mechanical engineering, mathematics, the Media Lab, and ACT (the Program in Art, Culture, and Technology). The course is growing into something larger, a community of practice interested in applying the scientific methodology we develop to study the world, to probe experience, and to articulate models for its generation and replication.

With a mix of undergraduates, graduates, faculty, and artists, we’ve put together installations and symposia — including three on campus so far. The first of these, “Perceiving Perception,” also led to a weekly open studio night where students and collaborators convene for project work. Our second exhibition, “Dessert of the Real,” is on display this spring in the Compton Gallery. This April we’re organizing a symposium in the studio featuring neuroscientists, computer scientists, artists and researchers from MIT and Harvard. We’re reaching beyond campus as well, through off-site installations, collaborations with museums — including the Metropolitan Museum of Art and the Peabody Essex Museum — and a partnership with the ZERO Group in Germany.

We’re eager to involve a broad network of collaborators. It’s an exciting moment in the fields of neuroscience and computing; there is great energy to build technologies that perceive the world like humans do. We stress on the first day of class that perception is a fundamentally creative act. We see the potential for models of perception to themselves be tools for scaling and translating creativity across domains, and for building a deeply creative relationship to our environment.

Halassa named Max Planck Fellow

Michael Halassa has been appointed one of the newest Max Planck Fellows. His appointment comes through the Max Planck Florida Institute for Neuroscience (MPFI), which aims to forge collaborations between exceptional neuroscientists from around the world to answer fundamental questions about brain development and function. The Max Planck Society selects cutting-edge, active researchers from other institutions for five-year fellow positions to promote interactions and synergies. While the program is a longstanding feature of the Max Planck Society, Halassa and fellow appointee Yi Zuo of the University of California, Santa Cruz, are the first fellows based at U.S. institutions.

Michael Halassa is an associate investigator at the McGovern Institute and an assistant professor in the Department of Brain and Cognitive Sciences at MIT. Halassa’s research focuses on the neural architectures that underlie complex cognitive processes. He is particularly interested in goal-directed attention, our ability to rapidly switch attentional focus based on high-level objectives. For example, when you are in a roomful of colleagues, the mention of your name in a distant conversation can quickly trigger your ‘mind’s ear’ to eavesdrop on that conversation. This contrasts with hearing a name that sounds like yours on television, which does not usually grab your attention in the same way. In certain mental disorders such as schizophrenia, the ability to generate such high-level objectives while also accounting for context is perturbed. Recent evidence strongly suggests that the function of the prefrontal cortex, and its interactions with a region of the brain called the thalamus, is altered in such disorders. It is this thalamocortical network that Halassa has been studying in mice, where his group has uncovered how the thalamus supports the ability of the prefrontal cortex to generate context-appropriate attentional signals.

The fellowship will support extending Halassa’s work into the tree shrew (Tupaia belangeri), which has been shown to have advanced cognitive abilities compared to mice while also offering many of the circuit-interrogation tools that make the mouse an attractive experimental model.

The Max Planck Florida Institute for Neuroscience (MPFI), a not-for-profit research organization, is part of the world-renowned Max Planck Society, Germany’s most successful research organization. The Max Planck Society traces its origins to 1911 and today comprises 84 institutes and research facilities. While primarily located in Germany, the society includes four institutes and one research facility located abroad, among them the Florida institute with which Halassa will collaborate. The fellow positions were created to increase interactions between the Max Planck Society and its institutes and faculty engaged in active research at other universities and institutions, which, with this appointment, now include MIT.

How the brain decodes familiar faces

Our brains are incredibly good at processing faces, and even have specific regions specialized for this function. But what face dimensions are we observing? Do we observe general properties first, then look at the details? Or are dimensions such as gender or other identity details decoded interdependently? In a study published today in Nature Communications, the Kanwisher lab measured the response of the brain to faces in real time, and found that the brain first decodes properties such as gender and age before drilling down to the specific identity of the face itself.

While functional magnetic resonance imaging (fMRI) has revealed an incredible level of detail about which regions of the brain respond to faces, the technology is less effective at telling us when these brain regions become activated. This is because fMRI measures brain activity by detecting changes in blood flow; when neurons become active, local blood flow to those brain regions increases. However, fMRI works too slowly to keep up with the brain’s millisecond-by-millisecond dynamics. Enter magnetoencephalography (MEG), a technique developed by MIT physicist David Cohen that detects the minuscule fluctuations in magnetic field that occur with the electrical activity of neurons. This allows better temporal resolution of neural activity.

McGovern Investigator Nancy Kanwisher and postdoc Katharina Dobs, along with their co-authors Leyla Isik and Dimitrios Pantazis, selected this temporally precise approach to measure the time it takes for the brain to respond to different dimensional features of faces.

“From a brief glimpse of a face, we quickly extract all this rich multidimensional information about a person, such as their sex, age, and identity,” explains Dobs. “I wanted to understand how the brain accomplishes this impressive feat, and what the neural mechanisms are that underlie this effect, but no one had measured the time scales of responses to these features in the same study.”

Previous studies have shown that people with prosopagnosia, a condition characterized by the inability to identify familiar faces, have no trouble determining gender, suggesting these features may be independent. “But examining when the brain recognizes gender and identity, and whether these are interdependent features is less clear,” explains Dobs.

By recording the brain activity of subjects in the MEG, Dobs and her co-authors found that the brain responds to coarse features, such as the gender of a face, much faster than the identity of the face itself. Their data showed that, in as little as 60-70 milliseconds, the brain begins to decode the age and gender of a person. Roughly 30 milliseconds later — at around 90 milliseconds — the brain begins processing the identity of the face.
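
Analyses of this kind are often implemented by training a classifier separately at each timepoint of the MEG recording. Below is a minimal, hypothetical sketch of that logic in Python, with synthetic data standing in for real recordings; the array shapes and labels are illustrative assumptions, not values from the study.

    # Sketch of time-resolved decoding: train/test a classifier at each
    # timepoint and ask when a face dimension first becomes decodable.
    # All data here are synthetic; shapes are illustrative assumptions.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    n_trials, n_sensors, n_times = 200, 306, 120   # hypothetical MEG dimensions
    data = rng.standard_normal((n_trials, n_sensors, n_times))
    labels = rng.integers(0, 2, n_trials)          # e.g., 0 = male, 1 = female

    # Cross-validated decoding accuracy, computed independently per timepoint.
    accuracy = np.array([
        cross_val_score(LinearSVC(dual=False), data[:, :, t], labels, cv=5).mean()
        for t in range(n_times)
    ])
    peak = int(accuracy.argmax())
    print(f"peak decoding accuracy {accuracy[peak]:.2f} at timepoint {peak}")

On real recordings, the timepoint at which accuracy first rises above chance estimates when that dimension becomes available in the neural signal.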

After establishing a paradigm for measuring responses to these face dimensions, the authors then decided to test the effect of familiarity. It’s generally understood that the brain processes information about “familiar faces” more robustly than unfamiliar faces. For example, our brains are adept at recognizing actress Scarlett Johansson across multiple photographs, even if her hairstyle is different in each picture. Our brains have a much harder time, however, recognizing two images of the same person if the face is unfamiliar.

“Actually, for unfamiliar faces the brain is easily fooled,” Dobs explains. “Variations in images, shadows, changes in hair color or style quickly lead us to think we are looking at a different person. Conversely, we have no problem if a familiar face is in shadow, or a friend changes their hairstyle. But we didn’t know why familiar face perception is much more robust, whether this is due to better feed-forward processing, or based on later memory retrieval.”

Image: Perception of a familiar face (Scarlett Johansson) is more robust than perception of an unfamiliar face (in this study, German celebrity Karoline Herfurth). Images: Wikimedia Commons.

To test the effect of familiarity, the authors measured brain responses while the subjects viewed familiar faces (American celebrities) and unfamiliar faces (German celebrities) in the MEG. Surprisingly, they found that subjects recognize gender more quickly in familiar faces than in unfamiliar faces. For example, our brains decode that the actress Scarlett Johansson is female before we even realize she is Scarlett Johansson. For the less familiar German actress Karoline Herfurth, our brains extract the same information less readily.

Dobs and co-authors argue that the better gender and identity recognition for familiar faces is not “top-down,” meaning that the improved responses to familiar faces are not about retrieval of information from memory but rather reflect a feed-forward mechanism. They found that the brain responds to facial familiarity at a much slower time scale (400 milliseconds) than it responds to gender, suggesting that the brain may be remembering associations related to the face (Johansson = Lost in Translation movie) in that longer timeframe.

This is good news for artificial intelligence. “We are interested in whether feed-forward deep learning systems can learn faces using similar mechanisms,” explains Dobs, “and help us to understand how the brain can process faces it has seen before in the absence of pulling on memory.”

When it comes to immediate next steps, Dobs would like to explore where in the brain these facial dimensions are extracted, how prior experience affects the general processing of objects, and whether computational models of face processing can capture these complex human characteristics.


How does the brain focus?

This is a very interesting question, and one that researchers at the McGovern Institute for Brain Research are actively pursuing. It’s also important for understanding what happens in conditions such as ADHD. There are constant distractions in the world, a cacophony of noise and visual stimulation. How and where we focus our attention, and what the brain attends to versus treats as background information, is a big question in neuroscience. Thanks to work from researchers, including Robert Desimone, we understand quite a bit about how this works in the visual system in particular. What his lab has found is that when we pay attention to something specific, neurons in the visual cortex responding to the object we’re focusing on fire in synchrony, whereas those responding to irrelevant information are suppressed. It’s almost as if this synchrony “increases the volume” so that the responding neurons rise above the general noise.

Synchronized activity of neurons occurs as they oscillate together at a particular frequency, but the frequency of oscillation really matters when it comes to attention and focus vs. inattention and distraction. To find out more about this, I asked a postdoc in the Desimone lab, Yasaman Bagherzadeh, about the role of different “brainwaves,” or oscillations at different frequencies, in attention.

“Studies in humans have shown that enhanced synchrony between neurons in the alpha range (8–12 Hz) is actually associated with inattention and distracting information,” explains Bagherzadeh, “whereas enhanced gamma synchrony (about 30–150 Hz) is associated with attention and focus on a target. For example, when a stimulus (through the ears or eyes) or its location (left vs. right) is intentionally ignored, this is preceded by a relative increase in alpha power, while a stimulus you’re attending to is linked to an increase in gamma power.”
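
As a rough illustration of what “power in a frequency band” means, the sketch below estimates alpha and gamma power from a synthetic signal using SciPy’s Welch method; the band edges follow the ranges Bagherzadeh quotes, and everything else is an illustrative assumption.

    # Estimate alpha (8-12 Hz) and gamma (30-150 Hz) band power in a signal.
    # The signal is synthetic: one alpha component, one gamma component, noise.
    import numpy as np
    from scipy.signal import welch

    fs = 1000.0                                   # sampling rate, Hz
    t = np.arange(0, 10, 1 / fs)
    sig = (np.sin(2 * np.pi * 10 * t)             # 10 Hz "alpha" component
           + 0.5 * np.sin(2 * np.pi * 60 * t)     # 60 Hz "gamma" component
           + np.random.randn(t.size))             # broadband noise

    freqs, psd = welch(sig, fs=fs, nperseg=2048)  # power spectral density

    def band_power(lo, hi):
        mask = (freqs >= lo) & (freqs <= hi)
        return np.trapz(psd[mask], freqs[mask])   # integrate PSD over the band

    print("alpha power:", band_power(8, 12))
    print("gamma power:", band_power(30, 150))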

Attention in the Desimone lab (no pun intended) has also recently been focused on covert attention. This type of spatial attention was traditionally thought to occur through a mental shift without a glance, but the Desimone lab recently found that even during these mental shifts, animals sneakily glance at the objects attention becomes focused on. Think now of something you know is nearby (a cup of coffee, for example) but not in the center of your field of vision. Chances are that you just sneakily glanced at that object.

Previously, these sneaky glances, or small eye movements, called microsaccades (MS for short), were considered involuntary movements without any functional role. However, the recent Desimone lab study found that an MS significantly modulates neural activity during the attention period. This means that when you glance at something, even sneakily, the glance is intimately linked to attention. In other words, when it comes to spatial attention, eye movements seem to play a significant role.

Various questions about the mechanisms of spatial attention arise as a result of this study, as outlined by Karthik Srinivasan, a postdoctoral associate in the Desimone lab.

“How are eye movement signals and attentional processing coordinated? What’s the role of the different frequencies of oscillation for such coordination? Is there a role for them or are they just the frequency domain representation (i.e., an epiphenomenon) of a temporal/dynamical process? Is attention a sustained process or rhythmic or something more dynamic?” Srinivasan lists some of the questions that come out of his study and goes on to explain the implications of the study further. “It is hard to believe that covert attention is a sustained process (the so-called ‘spotlight theory of attention’), given that neural activity during the attention period can be modulated by covert glances. A few recent studies have supported the idea that attention is a rhythmic process that can be uncoupled from eye movements. While this is an idea made attractive by its simplicity, it’s clear that small glances can affect neural activity related to attention, and MS are not rhythmic. More work is thus needed to get to a more unified theory that accounts for all of the data out there related to eye movements and their close link to attention.”

Answering some of the questions that Bagherzadeh, Srinivasan, and others are pursuing in the Desimone lab, both experimentally and theoretically, will clear up some of the issues above, and improve our understanding of how the brain focuses attention.

Do you have a question for The Brain? Ask it here.


How motion conveys emotion in the face

While a static emoji can stand in for emotion, in real life we are constantly reading the feelings of others through subtle facial movements. The lift of an eyebrow, the flicker around the lips as a smile emerges, a subtle change around the eyes (or the sudden rolling of the eyes) all feed into our ability to understand the emotional state, and the attitude, of others towards us. Ben Deen and Rebecca Saxe have now monitored changes in brain activity as subjects followed face movements in movies of avatars. Their findings argue that we can generalize across individual face-part movements in other people, but that a particular cortical region, the face-responsive superior temporal sulcus (fSTS), also responds to isolated movements of individual face parts. Indeed, the fSTS seems to be tied to kinematics, the movement of individual face parts, more than to the implied emotional cause of that movement.

We know that the brain responds to dynamic changes in facial expression, and that these are associated with activity in the fSTS, but how do calculations of these movements play out in the brain?

Do we understand emotional changes by adding up individual features (lifting of eyebrows + rounding of mouth = surprise), or are we assessing the entire face in a more holistic way that results in more generalized representations? McGovern Investigator Rebecca Saxe and her graduate student Ben Deen set out to answer this question using behavioral analysis and brain imaging, specifically fMRI.

“We had a good sense of what stimuli the fSTS responds strongly to,” explains Ben Deen, “but didn’t really have any sense of how those inputs are processed in the region – what sorts of features are represented, whether the representation is more abstract or more tied to visual features, etc. The hope was to use multivoxel pattern analysis, which has proven to be a remarkably useful method for characterizing representational content, to address these questions and get a better sense of what the region is actually doing.”

Facial movements were conveyed to subjects using animated “avatars.” By presenting avatars that made isolated eye and eyebrow movements (brow raise, eye closing, eye roll, scowl) or mouth movements (smile, frown, mouth opening, snarl), as well as composites of these movements, the researchers were able to assess whether our interpretation of the latter is distinct from the sum of its parts. To do this, Deen and Saxe first took a behavioral approach in which people reported which combinations of eye and mouth movements they perceived, either in a whole avatar face or in one where the top and bottom parts of the face were misaligned. What they found was that movement in the mouth region can influence perception of movement in the eye region, arguably due to some level of holistic processing. The authors then asked whether there were cortical differences upon viewing isolated versus combined face-part movements. They found that changes in the fSTS, but not in other brain regions, showed patterns of activity that discriminated between different facial movements. Indeed, they could decode which part of the avatar’s face was being perceived as moving from fSTS activity. The researchers could even model the fSTS response to combined features linearly based on the responses to individual face parts. In short, though the behavioral data indicate that there is holistic processing of complex facial movement, it is also clear that isolated parts-based representations are present as well, a sort of intermediate state.
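
The linear-modeling step can be pictured with a toy calculation: given multivoxel response patterns to isolated eye and mouth movements, ask how well a weighted sum of the two predicts the pattern evoked by the combined movement. This is only a schematic stand-in for the authors’ analysis, run here on synthetic patterns.

    # Toy test of a linear parts-based account of fSTS response patterns.
    # eye, mouth, combined stand in for measured multivoxel patterns.
    import numpy as np

    rng = np.random.default_rng(1)
    n_voxels = 500
    eye = rng.standard_normal(n_voxels)
    mouth = rng.standard_normal(n_voxels)
    # Synthetic "combined" pattern: mostly a weighted sum of the parts.
    combined = 0.6 * eye + 0.4 * mouth + 0.1 * rng.standard_normal(n_voxels)

    # Fit combined ~ a*eye + b*mouth by least squares and score the fit.
    X = np.column_stack([eye, mouth])
    coef, *_ = np.linalg.lstsq(X, combined, rcond=None)
    fit = X @ coef
    r = np.corrcoef(fit, combined)[0, 1]
    print(f"weights {coef.round(2)}, fit correlation r = {r:.2f}")

A high fit correlation for real data would indicate that the region’s response to a whole-face movement is well approximated by a linear combination of its responses to the parts.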

As part of this work, Deen and Saxe took the important step of pre-registering their experimental parameters, before collecting any data, at the Open Science Framework. This step allows others to more easily reproduce the analysis they conducted, since all parameters (the task that subjects are carrying out, the number of subjects needed, the rationale for this number, and the scripts used to analyze data) are openly available.

“Preregistration had a big impact on our workflow for the study,” explained Deen. “More of the work was done up front, in coming up with all of the analysis details and agonizing over whether we were choosing the right strategy, before seeing any of the data. When you tie your hands by making these decisions up front, you start thinking much more carefully about them.”

Pre-registration removes post-hoc researcher subjectivity from the analysis. As an example, because Deen and Saxe predicted that people would be able to discriminate accurately between the faces themselves, they decided ahead of the experiment to focus on analyzing reaction time, rather than looking at the collected data and deciding to focus on this measure after the fact. This adds to the overall objectivity of the experiment and is increasingly seen as a robust way to conduct such experiments.

How do neurons communicate (so quickly)?

Neurons are the most fundamental unit of the nervous system, and yet, researchers are just beginning to understand how they perform the complex computations that underlie our behavior. We asked Boaz Barak, previously a postdoc in Guoping Feng’s lab at the McGovern Institute and now Senior Lecturer at the School of Psychological Sciences and Sagol School of Neuroscience at Tel Aviv University, to unpack the basics of neuron communication for us.

“Neurons communicate with each other through electrical and chemical signals,” explains Barak. “The electrical signal, or action potential, runs from the cell body area to the axon terminals through a thin fiber called the axon. Some of these axons can be very long, though most of them are very short. The electrical signal that runs along the axon is based on ion movement. The speed of the signal transmission is influenced by an insulating layer called myelin,” he explains.

Myelin is a fatty layer formed, in the vertebrate central nervous system, by the concentric wrapping of oligodendrocyte cell processes around axons. The term “myelin” was coined in 1854 by Virchow (whose penchant for Greek and for naming new structures also led to the terms amyloid, leukemia, and neuroglia). In modern images, the myelin sheath is beautifully visible as concentric spirals surrounding the “tube” of the axon itself. Neurons in the peripheral nervous system are also myelinated, but there the cells responsible for myelination are Schwann cells rather than oligodendrocytes.


“Myelin’s main purpose is to insulate the neuron’s axon,” Barak says. “It speeds up conductivity and the transmission of electrical impulses. Myelin promotes fast transmission of electrical signals mainly by affecting two factors: 1) increasing electrical resistance, or reducing leakage of the electrical signal and ions along the axon, “trapping” them inside the axon and 2) decreasing membrane capacitance by increasing the distance between conducting materials inside the axon (intracellular fluids) and outside of it (extracellular fluids).”
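
In textbook cable-theory terms (a standard simplification, not drawn from the article itself), the two factors Barak lists can be written as:

    \[
      \lambda = \sqrt{\frac{r_m}{r_i}}, \qquad
      \tau = r_m c_m, \qquad
      v \;\propto\; \frac{\lambda}{\tau} \;=\; \frac{1}{c_m \sqrt{r_m r_i}}
    \]

Here \(\lambda\) is the passive length constant, \(\tau\) the membrane time constant, \(r_m\) the membrane resistance, \(r_i\) the axial resistance, and \(c_m\) the membrane capacitance, each per unit length of axon. Myelin raises \(r_m\) (less leak) and sharply lowers \(c_m\) (the wrapped sheath behaves like many capacitors in series); in this simplified picture, the large drop in \(c_m\) dominates, so the propagation speed \(v\) increases.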

Adjacent sections of axon in a given neuron are each surrounded by a distinct myelin sheath. The unmyelinated gaps between adjacent ensheathed regions of the axon are called nodes of Ranvier, and they are critical to the fast transmission of action potentials, in what is termed “saltatory conduction.” A useful analogy: if the axon itself is like an electrical wire, myelin is like the insulation that surrounds it, speeding up impulse propagation and overcoming the decrease in action potential size that would otherwise occur during transmission along a naked axon due to electrical signal leakage. This is how the myelin sheath supports the fast transmission that allows neurons to convey information over long distances in a timely fashion in the vertebrate nervous system.

Myelin seems to be critical to healthy functioning of the nervous system; in fact, disruptions in the myelin sheath have been linked to a variety of disorders.

Former McGovern postdoc Boaz Barak. Photo: Justin Knight

“Abnormal myelination can arise from abnormal development caused by genetic alterations,” Barak explains further. “Demyelination can even occur due to an autoimmune response, trauma, or other causes. In neurological conditions in which myelin properties are abnormal, as in the case of lesions or plaques, signal transmission can be affected. For example, defects in myelin can lead to a lack of neuronal communication, as there may be a delay or reduction in transmission of electrical and chemical signals. Also, in cases of abnormal myelination, it is possible that the synchronicity of brain region activity might be affected, for example, leading to improper actions and behaviors.”

Researchers are still working to fully understand the role of myelin in disorders. Myelin has a long history of being elusive, though, with its origins in the central nervous system remaining unclear for many years. For a period of time, the origin of myelin was thought to be the axon itself, and it was only after initial discovery (by Robertson, 1899), re-discovery (Del Rio-Hortega, 1919), and skepticism followed by eventual confirmation that the role of oligodendrocytes in forming myelin became clear. With modern imaging and genetic tools, we should be able to increasingly understand its role in the healthy, as well as the compromised, nervous system.

Do you have a question for The Brain? Ask it here.

2019 Scolnick Prize Awarded to Richard Huganir

The McGovern Institute announced today that the winner of the 2019 Edward M. Scolnick Prize in Neuroscience is Rick Huganir, the Bloomberg Distinguished Professor of Neuroscience and Psychological and Brain Sciences at the Johns Hopkins University School of Medicine. Huganir is being recognized for his role in understanding the molecular and biochemical underpinnings of “synaptic plasticity,” changes at synapses that are key to learning and memory formation. The Scolnick Prize is awarded annually by the McGovern Institute to recognize outstanding advances in any field of neuroscience.

“Rick Huganir has made a huge impact on our understanding of how neurons communicate with one another, and the award honors him for this ground-breaking research,” says Robert Desimone, director of the McGovern Institute and the chair of the committee.

“He conducts basic research on the synapses between neurons but his work has important implications for our understanding of many brain disorders that impair synaptic function.”

As the past president of the Society for Neuroscience, the world’s largest organization of researchers that study the brain and nervous system, Huganir is well-known in the global neuroscience community. He also directs the Kavli Neuroscience Discovery Institute and serves as director of the Solomon H. Snyder Department of Neuroscience at Johns Hopkins University School of Medicine and co-director of the Johns Hopkins Brain Science Institute.

From the beginning of his research career, Huganir was interested in neurotransmitter receptors, key to signaling at the synapse. He conducted his thesis work in the laboratory of Efraim Racker at Cornell University, where he first reconstituted one of these receptors, the nicotinic acetylcholine receptor, allowing its biochemical characterization. He went on to become a postdoctoral fellow in Paul Greengard’s lab at The Rockefeller University in New York. During this time, he made the first functional demonstration that phosphorylation, a reversible chemical modification, affects neurotransmitter receptor activity. Phosphorylation was shown to regulate desensitization, the process by which neurotransmitter receptors stop reacting during prolonged exposure to the neurotransmitter.

Upon arriving at Johns Hopkins University, Huganir broadened this concept, finding that the properties and functions of other key receptors and channels, including the GABAA, AMPA, and kainate receptors, could be controlled through phosphorylation. By understanding the sites of phosphorylation and the effects of this modification, Huganir was laying the foundation for the next major steps from his lab: showing that these modifications affect the strength of synaptic connections and transmission, i.e. synaptic plasticity, and in turn behavior and memory. Huganir also uncovered proteins that interact with neurotransmitter receptors and influence synaptic transmission and plasticity, thus revealing another layer of molecular regulation. He went on to define how these accessory factors exert such influence, showing that they affect the subcellular targeting and cycling of neurotransmitter receptors to and from the synaptic membrane. These mechanisms influence the formation of fear memory, for example, as well as its erasure. Indeed, Huganir found that a specific type of AMPA receptor is added to synapses in the amygdala after a traumatic event, and that its specific removal results in erasure of the fear memory in a mouse model.

Among many awards and honors, Huganir received the Young Investigator Award and the Julius Axelrod Award of the Society for Neuroscience. He was also elected to the American Academy of Arts and Sciences, the US National Academy of Sciences, and the Institute of Medicine. He is also a fellow of the American Association for the Advancement of Science.

The Scolnick Prize was first awarded in 2004 and was established by Merck in honor of Edward M. Scolnick, who was president of Merck Research Laboratories for 17 years. Scolnick is currently a core investigator at the Broad Institute and chief scientist emeritus of the institute’s Stanley Center for Psychiatric Research.

Huganir will deliver the Scolnick Prize lecture at the McGovern Institute on May 8, 2019 at 4:00pm in the Singleton Auditorium of MIT’s Brain and Cognitive Sciences Complex (Bldg 46-3002), 43 Vassar Street in Cambridge. The event is free and open to the public.


Ila Fiete joins the McGovern Institute

Ila Fiete, an associate professor in the Department of Brain and Cognitive Sciences at MIT, recently joined the McGovern Institute as an associate investigator. Fiete is working to understand the circuits that underlie short-term memory, integration, and inference in the brain.

Think about the simple act of visiting a new town and getting to know its layout as you explore it. What places are reachable from others? Where are landmarks relative to each other? Where are you relative to these landmarks? How do you get from here to where you want to go next?

The process that occurs as your brain tries to transform the few routes you follow into a coherent map of the world is just one of myriad examples of hard computations that the brain is constantly performing. Fiete’s goal is to understand how the brain carries out such computations, and she is developing and using multiple tools to this end. These include purely theoretical analyses of neural codes, numerical dynamical models of circuit operation, and techniques for extracting information about the underlying circuit dynamics from neural data.

Spatial navigation is a particularly interesting nut to crack from a neural perspective: The mapping devices on your phone have access to global satellite data, previously constructed detailed maps of the town, various additional sensors, and excellent non-leaky memory. By contrast, the brain must build maps, plan routes, and determine goals using only noisy, local sensors, no externally provided maps, and noisy, forgetful, or leaky neurons. Fiete is particularly interested in elucidating how the brain deals with noisy and ambiguous cues from the world to arrive at robust estimates that resolve ambiguity. She is also interested in how the networks that are important for memory and integration arise through plasticity, learning, and development in the brain.

Fiete earned a BS in mathematics and physics at the University of Michigan, then obtained her PhD in 2004 at Harvard University in the Department of Physics. She held a postdoctoral appointment at the Kavli Institute for Theoretical Physics at the University of California, Santa Barbara from 2004 to 2006, while she was also a visiting member of the Center for Theoretical Biophysics at the University of California, San Diego. Fiete subsequently spent two years at Caltech as a Broad Fellow in brain circuitry, and in 2008 joined the faculty of the University of Texas at Austin. She is currently an HHMI faculty scholar.

Peering under the hood of fake-news detectors

New work from researchers at the McGovern Institute for Brain Research at MIT peers under the hood of an automated fake-news detection system, revealing how machine-learning models catch subtle but consistent differences in the language of factual and false stories. The research also underscores how fake-news detectors should undergo more rigorous testing to be effective for real-world applications.

Popularized as a concept in the United States during the 2016 presidential election, fake news is a form of propaganda created to mislead readers, in order to generate views on websites or steer public opinion.

Almost as quickly as the issue became mainstream, researchers began developing automated fake-news detectors: neural networks that “learn” from scores of data to recognize linguistic cues indicative of false articles. Given new articles to assess, these networks can, with fairly high accuracy, separate fact from fiction in controlled settings.

One issue, however, is the “black box” problem — meaning there’s no telling what linguistic patterns the networks analyze during training. They’re also trained and tested on the same topics, which may limit their potential to generalize to new topics, a necessity for analyzing news across the internet.

In a paper presented at the Conference on Neural Information Processing Systems (NeurIPS), the researchers tackle both of those issues. They developed a deep-learning model that learns to detect language patterns of fake and real news. Part of their work “cracks open” the black box to find the words and phrases the model captures to make its predictions.

Additionally, they tested their model on a novel topic it didn’t see in training. This approach classifies individual articles based solely on language patterns, which more closely represents a real-world application for news readers. Traditional fake news detectors classify articles based on text combined with source information, such as a Wikipedia page or website.

“In our case, we wanted to understand what was the decision-process of the classifier based only on language, as this can provide insights on what is the language of fake news,” says co-author Xavier Boix, a postdoc in the lab of Eugene McDermott Professor Tomaso Poggio at the Center for Brains, Minds, and Machines (CBMM), a National Science Foundation-funded center housed within the McGovern Institute.

“A key issue with machine learning and artificial intelligence is that you get an answer and don’t know why you got that answer,” says graduate student and first author Nicole O’Brien ’17. “Showing these inner workings takes a first step toward understanding the reliability of deep-learning fake-news detectors.”

The model identifies sets of words that tend to appear more frequently in either real or fake news, some perhaps obvious, others much less so. The findings, the researchers say, point to subtle yet consistent differences between fake news, which favors exaggerations and superlatives, and real news, which leans more toward conservative word choices.

“Fake news is a threat for democracy,” Boix says. “In our lab, our objective isn’t just to push science forward, but also to use technologies to help society. … It would be powerful to have tools for users or companies that could provide an assessment of whether news is fake or not.”

The paper’s other co-authors are Sophia Latessa, an undergraduate student in CBMM, and Georgios Evangelopoulos, a researcher in CBMM, the McGovern Institute for Brain Research, and the Laboratory for Computational and Statistical Learning.

Limiting bias

The researchers’ model is a convolutional neural network that trains on a dataset of fake news and real news. For training and testing, the researchers used a popular fake-news research dataset hosted on Kaggle, which contains around 12,000 fake-news sample articles from 244 different websites. They also compiled a dataset of real news samples, using more than 2,000 articles from the New York Times and more than 9,000 from The Guardian.

In training, the model captures the language of an article as “word embeddings,” where words are represented as vectors — basically, arrays of numbers — with words of similar semantic meanings clustered closer together. In doing so, it captures triplets of words as patterns that provide some context — such as, say, a negative comment about a political party. Given a new article, the model scans the text for similar patterns and sends them over a series of layers. A final output layer determines the probability of each pattern: real or fake.
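
A model of this general kind can be sketched in a few lines of Keras. The layer sizes, vocabulary, and sequence length below are illustrative assumptions rather than the authors’ exact architecture; the width-3 convolution is what makes the model respond to triplets of adjacent words.

    # Sketch of a word-embedding convolutional classifier (illustrative only).
    import tensorflow as tf
    from tensorflow.keras import layers

    vocab_size, embed_dim, max_len = 20000, 100, 500   # assumed hyperparameters

    model = tf.keras.Sequential([
        layers.Input(shape=(max_len,), dtype="int32"),  # article as word indices
        layers.Embedding(vocab_size, embed_dim),        # words -> vectors
        layers.Conv1D(128, kernel_size=3,               # width-3 filters scan
                      activation="relu"),               #   triplets of words
        layers.GlobalMaxPooling1D(),                    # strongest match per filter
        layers.Dense(1, activation="sigmoid"),          # probability: real vs. fake
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])

Training would then call model.fit on tokenized articles labeled real or fake.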

The researchers first trained and tested the model in the traditional way, using the same topics. But they thought this might create an inherent bias in the model, since certain topics are more often the subject of fake or real news. For example, fake news stories are generally more likely to include the words “Trump” and “Clinton.”

“But that’s not what we wanted,” O’Brien says. “That just shows topics that are strongly weighting in fake and real news. … We wanted to find the actual patterns in language that are indicative of those.”

Next, the researchers trained the model on all topics without any mention of the word “Trump,” and tested the model only on samples that had been set aside from the training data and that did contain the word “Trump.” While the traditional approach reached 93-percent accuracy, the second approach reached 87-percent accuracy. This accuracy gap, the researchers say, highlights the importance of using topics held out from the training process, to ensure the model can generalize what it has learned to new topics.
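
The held-out-topic evaluation amounts to a simple filtering step at split time, sketched here with hypothetical variable names:

    # Hold out a topic: train only on articles that never mention the keyword,
    # test only on the set-aside articles that do (illustrative sketch).
    def topic_split(articles, labels, keyword="Trump"):
        train, test = [], []
        for text, label in zip(articles, labels):
            bucket = test if keyword.lower() in text.lower() else train
            bucket.append((text, label))
        return train, test

    # train_set, test_set = topic_split(all_articles, all_labels)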

More research needed

To open the black box, the researchers then retraced their steps. Each time the model makes a prediction about a word triplet, a certain part of the model activates, depending on whether the triplet is more likely to come from a real or a fake news story. The researchers designed a method to retrace each prediction back to its designated part and then find the exact words that made it activate.
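
Under the same illustrative width-3 architecture sketched earlier, retracing reduces to asking which input position drove a given filter hardest and reading off the three words there; this is a simple stand-in for the paper’s procedure, not the authors’ code.

    # For one convolutional filter, recover the trigram that activated it most.
    import numpy as np

    def top_trigram(tokens, activations, filter_idx):
        """tokens: list of words in the article;
        activations: (n_positions, n_filters) Conv1D output for the article."""
        pos = int(np.argmax(activations[:, filter_idx]))
        return tokens[pos:pos + 3]   # the three words under a width-3 filter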

More research is needed to determine how useful this information is to readers, Boix says. In the future, the model could potentially be combined with, say, automated fact-checkers and other tools to give readers an edge in combating misinformation. After some refining, the model could also be the basis of a browser extension or app that alerts readers to potential fake news language.

“If I just give you an article, and highlight those patterns in the article as you’re reading, you could assess if the article is more or less fake,” he says. “It would be kind of like a warning to say, ‘Hey, maybe there is something strange here.’”

Joining the dots in large neural datasets

You might have played “join the dots,” a puzzle where numbers guide you to draw until a complete picture emerges. But imagine a complex underlying image with no numbers to guide the sequence of joining. This is a problem that challenges scientists who work with large amounts of neural data. Sometimes they can align data to a stereotyped behavior, and thus define a sequence of neuronal activity underlying navigation of a maze or singing of a song learned and repeated across generations of birds. But most natural behavior is not stereotyped, and when it comes to sleeping, imagining, and other higher-order activities, there is not even a physical behavioral readout for alignment. Michale Fee and colleagues have now developed an algorithm, seqNMF, that can recognize relevant sequences of neural activity even when there is no guide to align to, such as an overt sequence of behaviors or notes.

“This method allows you to extract structure from the internal life of the brain without being forced to make reference to inputs or output,” says Michale Fee, a neuroscientist at the McGovern Institute at MIT, Associate Department Head and Glen V. and Phyllis F. Dorflinger Professor of Neuroscience in the Department of Brain and Cognitive Sciences, and investigator with the Simons Collaboration on the Global Brain. Fee conducted the study in collaboration with Mark S. Goldman of the University of California, Davis.

In order to achieve this task, the authors of the study, co-led by Emily L. Mackevicius and Andrew H. Bahle of the McGovern Institute, took a process called convolutional non-negative matrix factorization (convNMF), a tool that allows extraction of sparse but important features from complex and noisy data, and developed it so that it can be used to extract sequences over time that are related to a learned behavior or song. The new algorithm also relies on repetition, but on tell-tale repetitions of neural activity rather than simplistic repetitions in the animal’s behavior. seqNMF can follow repeated sequences of firing over time that are not tied to an external reference time frame, and can extract relevant sequences of neural firing in an unsupervised fashion, without the researcher supplying prior information.
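
Concretely, convolutional NMF approximates a neurons-by-time data matrix X as K sequence templates W, each spanning L time lags, convolved with occurrence signals H that indicate when each sequence happens. A minimal numpy sketch of that reconstruction follows (illustrative; not the released seqNMF code):

    # Reconstruct a neurons-x-time matrix from convolutional NMF factors.
    # W: (n_neurons, K, L) non-negative sequence templates
    # H: (K, T) non-negative occurrence signals for each template
    import numpy as np

    def reconstruct(W, H):
        n_neurons, K, L = W.shape
        T = H.shape[1]
        X_hat = np.zeros((n_neurons, T))
        for k in range(K):
            for lag in range(L):
                # Add template slice `lag` steps after each occurrence of pattern k.
                X_hat[:, lag:] += np.outer(W[:, k, lag], H[k, :T - lag])
        return X_hat

Fitting then amounts to minimizing the difference between X and this reconstruction under non-negativity, with seqNMF adding a penalty that discourages redundant, overlapping patterns.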

In the current study, the authors initially applied and honed the system on synthetic datasets. These datasets started to show them that the algorithm could “join the dots” without additional informational input. When seqNMF performed well in these tests, they applied it to available open source data from rats, finding that they could extract sequences of neural firing in the hippocampus that are relevant to finding a water reward in a maze.

Having passed these initial tests, the authors upped the ante and challenged seqNMF to find relevant neural activity sequences in a non-stereotyped behavior: improvised singing by zebra finches that have not learned the signature songs of their species (untutored birds). The authors analyzed neural data from HVC, a region of the bird brain previously linked to song learning. Since normal adult bird songs are stereotyped, the researchers could align neural activity with features of the song itself for well-tutored birds. Fee and colleagues then turned to untutored birds and found that they still had repeated neural sequences related to the “improvised” song, reminiscent of those in tutored birds but messier. Indeed, the brain of an untutored bird will even initiate two distinct neural signatures at the same time, but seqNMF is able to see past the resulting neural cacophony and decipher that multiple overlapping patterns are present. Finding this level of order in such neural datasets is near impossible with previous methods of analysis.

seqNMF can be applied, potentially, to any neural activity, and the researchers are now testing whether the algorithm can indeed be generalized to extract information from other types of neural data. In other words, now that it’s clear that seqNMF can find a relevant sequence of neural activity for a non-stereotypical behavior, scientists can examine whether the neural basis of behaviors in other organisms and even for activities such as sleep and imagination can be extracted. Indeed, seqNMF is available on GitHub for researchers to apply to their own questions of interest.