Ed Boyden receives 2019 Warren Alpert Prize

The 2019 Warren Alpert Foundation Prize has been awarded to four scientists, including Ed Boyden, for pioneering work that launched the field of optogenetics, a technique that uses light-sensitive channels and pumps to control the activity of neurons in the brain with a flick of a switch. He receives the prize alongside Karl Deisseroth, Peter Hegemann, and Gero Miesenböck, as outlined by The Warren Alpert Foundation in their announcement.

Harnessing light and genetics, the approach illuminates and modulates the activity of neurons, enables the study of brain function and behavior, and helps reveal activity patterns that could inform treatments for brain diseases.

Boyden’s work was key to envisioning and developing optogenetics, now a core method in neuroscience. The method allows brain circuits linked to complex behavioral processes, such as those involved in decision-making, feeding, and sleep, to be unraveled in genetic models. It is also helping to elucidate the mechanisms underlying neuropsychiatric disorders, and has the potential to inspire new strategies to overcome brain disorders.

“It is truly an honor to be included among the extremely distinguished list of winners of the Alpert Award,” says Boyden, the Y. Eva Tan Professor in Neurotechnology at the McGovern Institute, MIT. “To me personally, it is exciting to see the relatively new field of neurotechnology recognized. The brain implements our thoughts and feelings. It makes us who we are. These mysteries and challenges require new technologies to make the brain understandable and repairable. It is a great honor that our technology of optogenetics is being thus recognized.”

While they were students, Boyden and fellow awardee Karl Deisseroth brainstormed about how microbial opsins could be used to mediate optical control of neural activity. In mid-2004, the pair collaborated to show that microbial opsins can indeed be used to optically control neurons. Upon launching his lab at MIT, Boyden’s team developed the first optogenetic silencing tool, achieved the first effective optogenetic silencing in live mammals, and went on to demonstrate noninvasive optogenetic silencing and single-cell optogenetic control.

“The discoveries made by this year’s four honorees have fundamentally changed the landscape of neuroscience,” said George Q. Daley, dean of Harvard Medical School. “Their work has enabled scientists to see, understand and manipulate neurons, providing the foundation for understanding the ultimate enigma—the human brain.”

Beyond optogenetics, Boyden has pioneered transformative technologies that image, record, and manipulate complex systems, including expansion microscopy, robotic patch clamping, and even shrinking objects to the nanoscale. He was elected this year to the ranks of the National Academy of Sciences, and selected as an HHMI Investigator. Boyden has received numerous awards for this work, including the 2018 Gairdner International Prize and the 2016 Breakthrough Prize in Life Sciences.

The Warren Alpert Foundation, in association with Harvard Medical School, honors scientists whose work has improved the understanding, prevention, treatment or cure of human disease. Prize recipients are selected by the foundation’s scientific advisory board, which is composed of distinguished biomedical scientists and chaired by the dean of Harvard Medical School. The honorees will share a $500,000 prize and will be recognized at a daylong symposium on Oct. 3 at Harvard Medical School.

Ed Boyden holds the titles of Investigator, McGovern Institute; Y. Eva Tan Professor in Neurotechnology at MIT; Leader, Synthetic Neurobiology Group, Media Lab; Associate Professor, Biological Engineering, Brain and Cognitive Sciences, Media Lab; Co-Director, MIT Center for Neurobiological Engineering; Member, MIT Center for Environmental Health Sciences, Computational and Systems Biology Initiative, and Koch Institute.

Speaking many languages

Ev Fedorenko studies the cognitive processes and brain regions underlying language, a signature cognitive skill that is uniquely and universally human. She investigates both people with linguistic impairments and those with exceptional language skills: hyperpolyglots, people who are fluent in over a dozen languages. Indeed, she was recently interviewed for a BBC documentary about superlinguists, as well as by The New Yorker for an article covering people with exceptional language skills.

When Fedorenko, an associate investigator at the McGovern Institute and assistant professor in the Department of Brain and Cognitive Sciences at MIT, came to the field, neuroscientists were still debating whether high-level cognitive skills, such as language, are processed by multifunctional or dedicated brain regions. Using fMRI, Fedorenko and colleagues compared the engagement of brain regions when individuals performed linguistic tasks versus other high-level cognitive tasks, such as arithmetic or music. Their data revealed a clear distinction between language and other cognitive processes, showing that our brains have dedicated language regions.

Here is my basic question. How do I get a thought from my mind into yours?

In the time since this key study, Fedorenko has continued to unpack language in the brain. How does the brain process the overarching rules and structure of language (syntax), as opposed to the meanings of words? How do we construct complex meanings? What might underlie communicative difficulties in individuals diagnosed with autism? How does the aphasic brain recover language? Intriguingly, in contrast to individuals with linguistic difficulties, there are also individuals who stand out for their ability to master many languages, the so-called hyperpolyglots.

In 2013, she came across a young adult who had mastered over 30 languages, a language prodigy. To facilitate her analysis of the processing of different languages, Fedorenko has collected dozens of translations of Alice in Wonderland for her ‘Alice in the language localizer Wonderland’ project. She has already found that hyperpolyglots tend to show less activity in linguistic processing regions when reading in, or listening to, their native language compared to carefully matched controls, perhaps indexing more efficient processing mechanisms. Fedorenko continues to study hyperpolyglots, along with other exciting new avenues of research. Stay tuned for upcoming advances in our understanding of the brain and language.

Mark Harnett receives a 2019 McKnight Scholar Award

McGovern Institute investigator Mark Harnett is one of six young researchers selected to receive a prestigious 2019 McKnight Scholar Award. The award supports his research “studying how dendrites, the antenna-like input structures of neurons, contribute to computation in neural networks.”

Harnett examines the biophysical properties of single neurons, ultimately aiming to understand how these relate to the complex computations that underlie behavior. His lab was the first to examine the biophysical properties of human dendrites. The Harnett lab found that human neurons have distinct properties, including increased dendritic compartmentalization that could allow more complex computations within single neurons. His lab recently discovered that such dendritic computations are not rare, or confined to specific behaviors, but are a widespread and general feature of neuronal activity.

“As a young investigator, it is hard to prioritize so many exciting directions and ideas,” explains Harnett. “I really want to thank the McKnight Foundation, both for the support, but also for the hard work the award committee puts into carefully thinking about and giving feedback on proposals. It means a lot to get this type of endorsement from a seriously committed and distinguished committee, and their support gives even stronger impetus to pursue this research direction.”

The McKnight Foundation has supported neuroscience research since 1977 and provides three prominent awards, with the Scholar Award aimed at supporting young scientists and drawing applications from the strongest young neuroscience faculty across the US. William L. McKnight (1887-1979) was an early leader of the 3M Company and had a personal interest in memory and brain diseases. The McKnight Foundation was established with this focus in mind, and the Scholar Award provides $75,000 per year for three years to support cutting-edge neuroscience research.


Ed Boyden elected to National Academy of Sciences

Ed Boyden has been elected to join the National Academy of Sciences (NAS). The organization, established by an act of Congress during the height of the Civil War, was founded to provide independent and objective advice on scientific matters to the nation, and is actively engaged in furthering science in the United States. Each year NAS members recognize fellow scientists through election to the academy based on their distinguished and continuing achievements in original research.

“I’m very honored and grateful to have been elected to the NAS,” says Boyden. “This is a testament to the work of many graduate students, postdoctoral scholars, research scientists, and staff at MIT who have worked with me over the years, and many collaborators and friends at MIT and around the world who have helped our group on this mission to advance neuroscience through new tools and ways of thinking.”

Boyden’s research creates and applies technologies that aim to expand our understanding of the brain. He notably co-invented optogenetics in an independent collaboration conducted in parallel with his PhD studies, a game-changing technology that has revolutionized neurobiology. This technology uses targeted expression of light-sensitive channels and pumps to activate or suppress neuronal activity in vivo using light. Optogenetics quickly swept the field of neurobiology and has been leveraged to understand how specific neurons and brain regions contribute to behavior and to disease.

His research since then has maintained an overarching focus on understanding the brain. To this end, he and his lab have the ambitious goal of developing technologies that can map, record, and manipulate the brain. This has led, as selected examples, to the invention of expansion microscopy, a super-resolution imaging technology that can capture neurons’ microstructures and reveal their complex connections, even across large-scale neural circuits; voltage-sensitive fluorescent reporters that allow neural activity to be monitored in vivo; and temporal interference stimulation, a non-invasive brain stimulation technique that allows selective activation of subcortical brain regions.

“We are all incredibly happy to see Ed being elected to the academy,” says Robert Desimone, director of the McGovern Institute for Brain Research at MIT. “He has been consistently innovative, inventing new ways of manipulating and observing neurons that are revolutionizing the field of neuroscience.”

This year the NAS, an organization that includes over 500 Nobel Laureates, elected 100 new members and 25 foreign associates. Three MIT professors were elected this year, with Paula T. Hammond (David H. Koch (1962) Professor of Engineering and Department Head, Chemical Engineering) and Aviv Regev (HHMI Investigator and Professor in the Department of Biology) being elected alongside Boyden. Boyden becomes the seventh member of the McGovern Institute faculty to join the National Academy of Sciences.

The formal induction ceremony for new NAS members, during which they sign the ledger whose first signatory is Abraham Lincoln, will be held at the Academy’s annual meeting in Washington D.C. next spring.

Recurrent architecture enhances object recognition in brain and AI

Your ability to recognize objects is remarkable. If you see a cup under unusual lighting or from unexpected directions, there’s a good chance that your brain will still compute that it is a cup. Such precise object recognition is a holy grail for AI developers, such as those improving self-driving car navigation. While modeling primate object recognition in the visual cortex has revolutionized artificial visual recognition systems, current deep learning systems are simplified and fail to recognize some objects that are child’s play for primates such as humans. In findings published in Nature Neuroscience, McGovern Investigator James DiCarlo and colleagues have found evidence that feedback improves recognition of hard-to-recognize objects in the primate brain, and that adding feedback circuitry also improves the performance of artificial neural network systems used for vision applications.

Deep convolutional neural networks (DCNNs) are currently the most successful models for accurately recognizing objects on a fast timescale (<100 ms), and their general architecture is inspired by the primate ventral visual stream, a series of cortical regions that progressively builds an accessible and refined representation of viewed objects. Most DCNNs, however, are simple in comparison to the primate ventral stream.

“For a long period of time, we were far from a model-based understanding. Thus our field got started on this quest by modeling visual recognition as a feedforward process,” explains senior author DiCarlo, who is also the head of MIT’s Department of Brain and Cognitive Sciences and Research Co-Leader in the Center for Brains, Minds, and Machines (CBMM). “However, we know there are recurrent anatomical connections in brain regions linked to object recognition.”

Think of feedforward DCNNs and the portion of the visual system that first attempts to capture objects as a subway line that runs forward through a series of stations. The extra, recurrent brain networks are instead like the streets above, interconnected and not unidirectional. Because it only takes about 200 ms for the brain to recognize an object quite accurately, it was unclear if these recurrent interconnections in the brain had any role at all in core object recognition. Perhaps those recurrent connections are only in place to keep the visual system in tune over long periods of time; in the analogy, the return gutters of the streets help slowly clear them of water and trash but are not strictly needed to quickly move people from one end of town to the other. DiCarlo, along with lead author and CBMM postdoc Kohitij Kar, set out to test whether a subtle role of recurrent operations in rapid visual object recognition was being overlooked.
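The feedforward-versus-recurrent distinction can be made concrete with a toy sketch. This is purely illustrative (the function names and numbers are hypothetical, not the study's actual model): a feedforward pass visits each stage exactly once, like the subway line, while a recurrent variant mixes the output back into the input and re-runs the same stages, spending extra processing time to refine the representation.

```python
def layer(x, w):
    """One linear-threshold stage, standing in for a conv layer."""
    return [max(0.0, w * v) for v in x]

def feedforward(x, weights):
    # Subway line: the signal visits each station exactly once.
    for w in weights:
        x = layer(x, w)
    return x

def recurrent(x, weights, feedback=0.5, steps=3):
    # Streets above the subway: the last stage's output is mixed
    # back into the input and the whole stack is run again,
    # so each extra step costs additional processing time.
    out = feedforward(x, weights)
    for _ in range(steps - 1):
        x = [xi + feedback * oi for xi, oi in zip(x, out)]
        out = feedforward(x, weights)
    return out

signal = [1.0, -0.5, 2.0]   # toy "image" features
ws = [0.8, 1.2]             # toy stage weights
print(feedforward(signal, ws))  # single rapid pass
print(recurrent(signal, ws))    # extra passes refine the result
```

The point of the sketch is structural, not quantitative: recurrence reuses the same machinery over time rather than adding more stages, which is one hypothesis for how the brain handles "challenge images" without a deeper anatomy.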

Challenging recognition

The authors first needed to identify objects that are trivially decoded by the primate brain, but are challenging for artificial systems. Rather than trying to guess why deep learning was having problems recognizing an object (is it due to clutter in the image? a misleading shadow?), the authors took an unbiased approach that turned out to be critical.

Kar explained further that “we realized that AI models actually don’t have problems with every image where an object is occluded or in clutter. Humans trying to guess why AI models were challenged turned out to be holding us back.”

Instead, the authors presented the deep learning system, as well as monkeys and humans, with images, homing in on “challenge images” where the primates could easily recognize the objects but a feedforward DCNN ran into problems. When they, and others, added appropriate recurrent processing to these DCNNs, object recognition in challenge images suddenly became a breeze.

Processing times

Kar used neural recording methods with very high spatial and temporal precision to test whether these images were really so trivial for primates. Remarkably, they found that though challenge images had initially appeared to be child’s play to the human brain, they actually require extra neural processing time (about 30 additional milliseconds), suggesting that recurrent loops operate in our brains too.

“What the computer vision community has recently achieved by stacking more and more layers onto artificial neural networks, evolution has achieved through a brain architecture with recurrent connections.” — Kohitij Kar

Diane Beck, Professor of Psychology and Co-chair of the Intelligent Systems Theme at the Beckman Institute and not an author on the study, explained further. “Since entirely feed forward deep convolutional nets are now remarkably good at predicting primate brain activity, it raised questions about the role of feedback connections in the primate brain. This study shows that, yes, feedback connections are very likely playing a role in object recognition after all.”

What does this mean for a self-driving car? It shows that deep learning architectures involved in object recognition need recurrent components if they are to match the primate brain, and also indicates how to operationalize this procedure for the next generation of intelligent machines.

“Recurrent models offer predictions of neural activity and behavior over time,” says Kar. “We may now be able to model more involved tasks. Perhaps one day, the systems will not only recognize an object, such as a person, but also perform cognitive tasks that the human brain so easily manages, such as understanding the emotions of other people.”

This work was supported by the Office of Naval Research grant MURI-114407 (J.J.D.). Center for Brains, Minds, and Machines (CBMM) funded by NSF STC award CCF-1231216 (K.K.).

How our gray matter tackles gray areas

When Katie O’Nell’s high school biology teacher showed a NOVA video on epigenetics after the AP exam, he was mostly trying to fill time. But for O’Nell, the video sparked a whole new area of curiosity.

She was fascinated by the idea that certain genes could be turned on and off, controlling what traits or processes were expressed without actually editing the genetic code itself. She was further excited about what this process could mean for the human mind.

But upon starting at MIT, she realized that she was less interested in the cellular level of neuroscience and more fascinated by bigger questions, such as, what makes certain people generous toward certain others? What’s the neuroscience behind morality?

“College is a time you can learn about anything you want, and what I want to know is why humans are really, really wacky,” she says. “We’re dumb, we make super irrational decisions, it makes no sense. Sometimes it’s beautiful, sometimes it’s awful.”

O’Nell, a senior majoring in brain and cognitive sciences, is one of five MIT students to have received a Marshall Scholarship this year. Her quest to understand the intricacies of the wacky human brain will not be limited to any one continent. She will be using the funding to earn her master’s in experimental psychology at Oxford University.

Chocolate milk and the mouse brain

O’Nell’s first neuroscience-related research experience at MIT took place during her sophomore and junior year, in the lab of Institute Professor Ann Graybiel at the McGovern Institute.

The research studied the neurological components of risk-vs-reward decision making, using a key ingredient: chocolate milk. In the experiments, mice were given two options — they could go toward the richer, sweeter chocolate milk, but they would also have to endure a brighter light. Or, they could go toward a more watered-down chocolate milk, with the benefit of a softer light. All the while, a fluorescence microscope tracked when certain cell types were being activated.

“I think that’s probably the closest thing I’ve ever had to a spiritual experience … watching this mouse in this maze deciding what to do, and watching the cells light up on the screen. You can see single-cell evidence of cognition going on. That’s just the coolest thing.”

In her junior spring, O’Nell delved even deeper into questions of morality in the lab of Professor Rebecca Saxe. Her research there centers on how the human brain parses people’s identities and emotional states from their faces alone, and how those computations are related to each other. Part of what interests O’Nell is the fact that we are constantly making decisions, about ourselves and others, with limited information.

“We’re always solving under uncertainty,” she says. “And our brain does it so well, in so many ways.”

International intrigue

Outside of class, O’Nell has no shortage of things to do. For starters, she has been serving as an associate advisor for a first-year seminar since the fall of her sophomore year.

“Basically it’s my job to sit in on a seminar and bully them into not taking seven classes at a time, and reminding them that yes, your first 8.01 exam is tomorrow,” she says with a laugh.

She has also continued an activity she was passionate about in high school — Model United Nations. One of the most fun parts for her is serving on the Historical Crisis Committee, in which delegates must try to figure out a way to solve a real historical problem, like the Cuban Missile Crisis or the French and Indian War.

“This year they failed and the world was a nuclear wasteland,” she says. “Last year, I don’t entirely know how this happened, but France decided that they wanted to abandon the North American theater entirely and just took over all of Britain’s holdings in India.”

She’s also part of an MIT program called the Addir Interfaith Fellowship, in which a small group of people meet each week and discuss a topic related to religion and spirituality. Before joining, she didn’t think it was something she’d be interested in — but after being placed in a first-year class about science and spirituality, she has found discussing religion to be really stimulating. She’s been a part of the group ever since.

O’Nell has also been heavily involved in writing and producing a Mystery Dinner Theater for Campus Preview Weekend, on behalf of her living group J Entry, in MacGregor House. The plot, generally, is MIT-themed — a physics professor might get killed by a swarm of CRISPR nanobots, for instance. When she’s not cooking up murder mysteries, she might be running SAT classes for high school students, playing piano, reading, or spending time with friends. Or, when she needs to go grocery shopping, she’ll be stopping by the Trader Joe’s on Boylston Street, as an excuse to visit the Boston Public Library across the street.

Quite excited for the future

O’Nell is excited that the Marshall Scholarship will enable her to live in the country that produced so many of the books she cherished as a kid, like “The Hobbit.” She’s also thrilled to further her research there. However, she jokes that she still needs to get some of the lingo down.

“I need to learn how to use the word ‘quite’ correctly. Because I overuse it in the American way,” she says.

Her master’s research will largely expand on the principles she’s been examining in the Saxe lab. Questions of morality, processing, and social interaction are where she aims to focus her attention.

“My master’s project is going to be basically taking a look at whether how difficult it is for you to determine someone else’s facial expression changes how generous you are with people,” she explains.

After that, she hopes to follow the standard research track of earning a PhD, doing postdoctoral research, and then entering academia as a professor and researcher. Teaching and researching, she says, are two of her favorite things — she’s excited to have the chance to do both at the same time. But that’s a few years ahead. Right now, she hopes to use her time in England to learn all she can about the deeper functions of the brain, with or without chocolate milk.

3Q: The interface between art and neuroscience

CBMM postdoc Sarah Schwettman

Computational neuroscientist Sarah Schwettmann, who works in the Center for Brains, Minds, and Machines at the McGovern Institute, is one of three instructors behind the cross-disciplinary course 9.S52/9.S916 (Vision in Art and Neuroscience), which introduces students to core concepts in visual perception through the lenses of art and neuroscience.

Supported by a faculty grant from the Center for Art, Science and Technology at MIT (CAST) for the past two years, the class is led by Pawan Sinha, a professor of vision and computational neuroscience in the Department of Brain and Cognitive Sciences. They are joined in the course by Seth Riskin SM ’89, a light artist and the manager of the MIT Museum Studio and Compton Gallery, where the course is taught. Schwettman discussed the combination of art and science in an educational setting.

Q: How have the three of you approached this cross-disciplinary class in art and neuroscience?

A: Discussions around this intersection often consider what each field has to offer the other. We take a different approach, one I refer to as occupying the gap, or positioning ourselves between the two fields and asking what essential questions underlie them both. One question addresses the nature of the human relationship to the world. The course suggests one answer: This relationship is fundamentally creative, from the brain’s interpretation of incoming sensory data in perception, to the explicit construction of experiential worlds in art.

Neuroscience and art, therefore, each provide a set of tools for investigating different levels of the constructive process. Through neuroscience, we develop a specific understanding of the models of the world that the brain uses to make sense of incoming visual data. With articulation of those models, we can engineer types of inputs that interact with visual processing architecture in particularly exquisite ways, and do so reliably, giving artists a toolkit for remixing and modulating experience. In the studio component of the course, we experiment with this toolkit and collectively move it forward.

While designing the course, Pawan, Seth, and I found that we were each addressing a similar set of questions, the same that motivate the class, through our own research and practice. In parallel to computational vision research, Professor Sinha leads a humanitarian initiative called Project Prakash, which provides treatment to blind children in India and explores the development of vision following the restoration of sight. Where does structure in perception originate? As an artist in the MIT Museum Studio, Seth works with articulated light to sculpt structured visual worlds out of darkness. I also live on this interface where the brain meets the world — my research in the Department of Brain and Cognitive Sciences examines the neural basis of mental models for simulating physics. Linking our work in the course is an experiment in synthesis.

Q: What current research in vision, neuroscience, and art is being explored at MIT, and how does the class connect it to hands-on practice?

A: Our brains build a rich world of experience and expectation from limited and noisy sensory data with infinite potential interpretations. In perception research, we seek to discover how the brain finds more meaning in incoming data than is explained by the signal alone. Work being done at MIT around generative models addresses this, for instance in the labs of Josh Tenenbaum and Josh McDermott in the Department of Brain and Cognitive Sciences. Researchers present an ambiguous visual or auditory stimulus and by probing someone’s perceptual interpretation, they get a handle on the structures that the mind generates to interpret incoming data, and they can begin to build computational models of the process.

In Vision in Art and Neuroscience, we focus on the experiential as well as the experimental, probing the perceiver’s experience of structure-generating process—perceiving perception itself.

As instructors, we face the pedagogical question: what exercises, in the studio, can evoke so striking an experience of students’ own perception that cutting edge research takes on new meaning, understood in the immediacy of seeing? Later in the semester, students face a similar question as artists: How can one create visual environments where viewers experience their own perceptual processing at work? Done well, this experience becomes the artwork itself. Early in the course, students explore the Ganzfeld effect, popularized by artist James Turrell, where the viewer is exposed to an unstructured visual field of uniform illumination. In this experience, one feels the mind struggling to fit models of the world to unstructured input, and attempting this over and over again — an interpretation process which often goes unnoticed when input structure is expected by visual processing architecture. The progression of the course modules follows the hierarchy of visual processing in the brain, which builds increasingly complex interpretations of visual inputs, from brightness and edges to depth, color, and recognizable form.

MIT students first encounter those concepts in the seminar component of the course at the beginning of each week. Later in the week, students translate findings into experimental approaches in the studio. We work with light directly, from introducing a single pinpoint of light into an otherwise completely dark room, to building intricate environments using programmable electronics. Students begin to take this work into their own hands, in small groups and individually, culminating in final projects for exhibition. These exhibitions are truly a highlight of the course. They’re often one of the first times that students have built and shown artworks. That’s been a gift to share with the broader MIT community, and a great learning experience for students and instructors alike.

Q: How has that approach been received by the MIT community?

A: What we’re doing has resonated across disciplines: In addition to neuroscience, we have students and researchers joining us from computer science, mechanical engineering, mathematics, the Media Lab, and ACT (the Program in Art, Culture, and Technology). The course is growing into something larger, a community of practice interested in applying the scientific methodology we develop to study the world, to probe experience, and to articulate models for its generation and replication.

With a mix of undergraduates, graduates, faculty, and artists, we’ve put together installations and symposia — including three on campus so far. The first of these, “Perceiving Perception,” also led to a weekly open studio night where students and collaborators convene for project work. Our second exhibition, “Dessert of the Real,” is on display this spring in the Compton Gallery. This April we’re organizing a symposium in the studio featuring neuroscientists, computer scientists, artists and researchers from MIT and Harvard. We’re reaching beyond campus as well, through off-site installations, collaborations with museums — including the Metropolitan Museum of Art and the Peabody Essex Museum — and a partnership with the ZERO Group in Germany.

We’re eager to involve a broad network of collaborators. It’s an exciting moment in the fields of neuroscience and computing; there is great energy to build technologies that perceive the world like humans do. We stress on the first day of class that perception is a fundamentally creative act. We see the potential for models of perception to themselves be tools for scaling and translating creativity across domains, and for building a deeply creative relationship to our environment.

Halassa named Max Planck Fellow

Michael Halassa was just appointed as one of the newest Max Planck Fellows. His appointment comes through the Max Planck Florida Institute for Neuroscience (MPFI), which aims to forge collaborations between exceptional neuroscientists from around the world to answer fundamental questions about brain development and function. The Max Planck Society selects cutting-edge, active researchers from other institutions for fellow positions for a five-year period to promote interactions and synergies. While the program is a longstanding feature of the Max Planck Society, Halassa and fellow appointee Yi Guo of the University of California, Santa Cruz, are the first fellows selected who are based at U.S. institutions.

Michael Halassa is an associate investigator at the McGovern Institute and an assistant professor in the Department of Brain and Cognitive Sciences at MIT. His research focuses on the neural architectures that underlie complex cognitive processes. He is particularly interested in goal-directed attention: our ability to rapidly switch attentional focus based on high-level objectives. For example, when you are in a roomful of colleagues, the mention of your name in a distant conversation can quickly trigger your ‘mind’s ear’ to eavesdrop on that conversation. This contrasts with hearing a name that sounds like yours on television, which does not usually grab your attention in the same way. In certain mental disorders such as schizophrenia, the ability to generate such high-level objectives, while also accounting for context, is perturbed. Recent evidence strongly suggests that the function of the prefrontal cortex, and its interactions with a region of the brain called the thalamus, is altered in such disorders. It is this thalamocortical network that Halassa has been studying in mice, where his group has uncovered how the thalamus supports the ability of the prefrontal cortex to generate context-appropriate attentional signals.

The fellowship will support extending Halassa’s work into the tree shrew (Tupaia belangeri), which has been shown to have advanced cognitive abilities compared to mice while also offering many of the circuit-interrogation tools that make the mouse an attractive experimental model.

The Max Planck Florida Institute for Neuroscience (MPFI), a not-for-profit research organization, is part of the world-renowned Max Planck Society, Germany’s most successful research organization. The Max Planck Society, founded in 1911 as the Kaiser Wilhelm Society, comprises 84 institutes and research facilities. While it is primarily based in Germany, there are four institutes and one research facility located abroad, including the Florida institute with which Halassa will collaborate. The fellow positions were created to increase interactions between the Max Planck Society and its institutes and faculty engaged in active research at other universities and institutions, which, with this appointment, now include MIT.

How the brain decodes familiar faces

Our brains are incredibly good at processing faces, and even have specific regions specialized for this function. But which dimensions of a face do we decode, and in what order? Do we register general properties first and then fill in the details? Or are dimensions such as gender decoded independently of identity? In a study published today in Nature Communications, the Kanwisher lab measured the brain’s response to faces in real time, and found that the brain decodes properties such as gender and age before drilling down to the specific identity of the face itself.

While functional magnetic resonance imaging (fMRI) has revealed an incredible level of detail about which regions of the brain respond to faces, the technology is less effective at telling us when these brain regions become activated. This is because fMRI measures brain activity by detecting changes in blood flow; when neurons become active, local blood flow to those brain regions increases. However, fMRI works too slowly to keep up with the brain’s millisecond-by-millisecond dynamics. Enter magnetoencephalography (MEG), a technique developed by MIT physicist David Cohen that detects the minuscule fluctuations in magnetic field that occur with the electrical activity of neurons. This allows better temporal resolution of neural activity.

McGovern Investigator Nancy Kanwisher and postdoc Katharina Dobs, along with their co-authors Leyla Isik and Dimitrios Pantazis, selected this temporally precise approach to measure the time it takes for the brain to respond to different dimensional features of faces.

“From a brief glimpse of a face, we quickly extract all this rich multidimensional information about a person, such as their sex, age, and identity,” explains Dobs. “I wanted to understand how the brain accomplishes this impressive feat, and what the neural mechanisms are that underlie this effect, but no one had measured the time scales of responses to these features in the same study.”

Previous studies have shown that people with prosopagnosia, a condition characterized by the inability to identify familiar faces, have no trouble determining gender, suggesting these features may be independent. “But when the brain extracts gender and identity, and whether these features are interdependent, was less clear,” explains Dobs.

By recording the brain activity of subjects in the MEG, Dobs and her co-authors found that the brain responds to coarse features, such as the gender of a face, much faster than the identity of the face itself. Their data showed that, in as little as 60-70 milliseconds, the brain begins to decode the age and gender of a person. Roughly 30 milliseconds later — at around 90 milliseconds — the brain begins processing the identity of the face.
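Findings like these typically come from time-resolved decoding: at each time point, a classifier is trained to predict a face attribute from the pattern of MEG sensor readings, and the moment accuracy rises above chance estimates when that attribute becomes available in the brain. Below is a minimal sketch of the idea on simulated data; the array shapes, signal strength, and onset time are illustrative assumptions, not the study’s data or code.

```python
# Hypothetical sketch of time-resolved MEG decoding (not the authors' code).
# A classifier is trained at each time point to decode a binary face
# attribute (e.g., gender) from the sensor pattern; the time at which
# accuracy exceeds chance estimates when that attribute is decoded.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 80, 30, 50     # toy dimensions
X = rng.standard_normal((n_trials, n_sensors, n_times))
y = rng.integers(0, 2, n_trials)              # e.g., 0 = male, 1 = female

# Inject a class-dependent signal from time point 20 onward, so the
# attribute only becomes decodable after that "latency"
X[y == 1, :, 20:] += 0.8

# Cross-validated decoding accuracy at every time point
accuracy = np.array([
    cross_val_score(LogisticRegression(max_iter=1000), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])

onset = int(np.argmax(accuracy > 0.6))        # first clearly above-chance point
print("first above-chance time point:", onset)
# With this simulated data, decoding should become possible around t = 20
```

In the study’s terms, the earlier onset for gender and age than for identity corresponds to the gender/age decoding curves crossing chance tens of milliseconds before the identity curve does.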

After establishing a paradigm for measuring responses to these face dimensions, the authors then decided to test the effect of familiarity. It’s generally understood that the brain processes information about “familiar faces” more robustly than unfamiliar faces. For example, our brains are adept at recognizing actress Scarlett Johansson across multiple photographs, even if her hairstyle is different in each picture. Our brains have a much harder time, however, recognizing two images of the same person if the face is unfamiliar.

“Actually, for unfamiliar faces the brain is easily fooled,” Dobs explains. “Variations in images, shadows, or changes in hair color or style quickly lead us to think we are looking at a different person. Conversely, we have no problem if a familiar face is in shadow, or a friend changes their hairstyle. But we didn’t know why familiar face perception is so much more robust: whether it is due to better feed-forward processing, or based on later memory retrieval.”

Perception of a familiar face (Scarlett Johansson) is more robust than perception of an unfamiliar one (in this study, German celebrity Karoline Herfurth). Images: Wikimedia Commons.

To test the effect of familiarity, the authors measured brain responses while subjects viewed familiar faces (American celebrities) and unfamiliar faces (German celebrities) in the MEG. Surprisingly, they found that subjects decode gender more quickly from familiar faces than from unfamiliar ones. For example, our brains decode that actress Scarlett Johansson is female before we even realize she is Scarlett Johansson, whereas for the less familiar German actress Karoline Herfurth, our brains extract the same information less well.

Dobs and co-authors argue that the better gender and identity recognition for familiar faces is not “top-down”: the improved responses do not reflect retrieval of information from memory, but rather a feed-forward mechanism. They found that the brain responds to facial familiarity on a much slower time scale (400 milliseconds) than it responds to gender, suggesting that the brain may be retrieving associations related to the face (Johansson = the movie Lost in Translation) in that longer time frame.

This is good news for artificial intelligence. “We are interested in whether feed-forward deep learning systems can learn faces using similar mechanisms,” explains Dobs, “and help us to understand how the brain can process faces it has seen before without drawing on memory.”

When it comes to immediate next steps, Dobs would like to explore where in the brain these facial dimensions are extracted, how prior experience affects the general processing of objects, and whether computational models of face processing can capture these complex human characteristics.


How does the brain focus?

This is a very interesting question, and one that researchers at the McGovern Institute for Brain Research are actively pursuing. It is also important for understanding what happens in conditions such as ADHD. The world presents constant distractions, a cacophony of noise and visual stimulation. How and where we focus our attention, and what the brain attends to versus treats as background information, is a big question in neuroscience. Thanks to work from researchers including Robert Desimone, we understand quite a bit about how this works in the visual system in particular. What his lab has found is that when we pay attention to something specific, neurons in the visual cortex that respond to the object we are focusing on fire in synchrony, whereas those responding to irrelevant information are suppressed. It is almost as if this synchrony “increases the volume” so that the responding neurons rise above the general noise.

Synchronized activity of neurons occurs as they oscillate together at a particular frequency, but the frequency of oscillation really matters when it comes to attention and focus vs. inattention and distraction. To find out more about this, I asked a postdoc in the Desimone lab, Yasaman Bagherzadeh about the role of different “brainwaves,” or oscillations at different frequencies, in attention.

“Studies in humans have shown that enhanced synchrony between neurons in the alpha range (8-12 Hz) is actually associated with inattention and distracting information,” explains Bagherzadeh, “whereas enhanced gamma synchrony (about 30-150 Hz) is associated with attention and focus on a target. For example, when a stimulus (through the ears or eyes) or its location (left vs. right) is intentionally ignored, this is preceded by a relative increase in alpha power, while a stimulus you’re attending to is linked to an increase in gamma power.”
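To make these frequency bands concrete, here is a small illustrative sketch (not from any study described here) that estimates alpha and gamma band power from a simulated signal using Welch’s method in SciPy. The sampling rate, the simulated 10 Hz rhythm, and the band edges are assumptions chosen for the demo.

```python
# Illustrative sketch: comparing alpha (8-12 Hz) and gamma (30-150 Hz)
# band power in a simulated "inattentive" signal dominated by an alpha
# rhythm. A real analysis would use recorded MEG/EEG data instead.
import numpy as np
from scipy.signal import welch

fs = 500.0                      # sampling rate in Hz (assumed for the demo)
t = np.arange(0, 10, 1 / fs)    # 10 seconds of signal
rng = np.random.default_rng(1)

# Strong 10 Hz (alpha) oscillation plus broadband noise
signal = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)

# Welch's method: average the spectrum over overlapping windows
freqs, psd = welch(signal, fs=fs, nperseg=1024)

def band_power(freqs, psd, lo, hi):
    # Sum PSD bins within the band; bins are equally spaced, so this is
    # proportional to the integrated power in that band
    mask = (freqs >= lo) & (freqs <= hi)
    return np.sum(psd[mask])

alpha = band_power(freqs, psd, 8, 12)
gamma = band_power(freqs, psd, 30, 150)
print(alpha > gamma)  # True: the simulated signal is alpha-dominated
```

In an attention experiment, the same band-power computation would be compared across conditions, e.g., attended versus ignored stimuli, rather than against a simulated rhythm.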

Attention in the Desimone lab (no pun intended) has also recently been focused on covert attention. This type of spatial attention was traditionally thought to occur through a mental shift without a glance, but the Desimone lab recently found that even during these mental shifts, animals sneakily glance at the objects that attention becomes focused on. Think now of something you know is nearby (a cup of coffee, for example) but not in the center of your field of vision. Chances are that you just sneakily glanced at that object.

Previously these sneaky glances, small eye movements called microsaccades (MS for short), were considered involuntary movements without any functional role. However, the recent Desimone lab study found that microsaccades significantly modulate neural activity during the attention period. This means that when you glance at something, even sneakily, the glance is intimately linked to attention. In other words, when it comes to spatial attention, eye movements seem to play a significant role.

This study raises various questions about the mechanisms of spatial attention, as outlined by Karthik Srinivasan, a postdoctoral associate in the Desimone lab.

“How are eye movement signals and attentional processing coordinated? What’s the role of the different frequencies of oscillation for such coordination? Is there a role for them, or are they just the frequency-domain representation (i.e., an epiphenomenon) of a temporal/dynamical process? Is attention a sustained process, a rhythmic one, or something more dynamic?” Srinivasan lists some of the questions that arise from the study and goes on to explain its implications further. “It is hard to believe that covert attention is a sustained process (the so-called ‘spotlight theory of attention’), given that neural activity during the attention period can be modulated by covert glances. A few recent studies have supported the idea that attention is a rhythmic process that can be uncoupled from eye movements. While this is an idea made attractive by its simplicity, it’s clear that small glances can affect neural activity related to attention, and MS are not rhythmic. More work is thus needed to get to a more unified theory that accounts for all of the data out there related to eye movements and their close link to attention.”

Answering some of the questions that Bagherzadeh, Srinivasan, and others are pursuing in the Desimone lab, both experimentally and theoretically, will clear up some of the issues above, and improve our understanding of how the brain focuses attention.

Do you have a question for The Brain? Ask it here.