Neuroscientists reverse some behavioral symptoms of Williams Syndrome

Williams Syndrome, a rare neurodevelopmental disorder that affects about 1 in 10,000 babies born in the United States, produces a range of symptoms including cognitive impairments, cardiovascular problems, and extreme friendliness, or hypersociability.

In a study of mice, MIT neuroscientists have garnered new insight into the molecular mechanisms that underlie this hypersociability. They found that loss of one of the genes linked to Williams Syndrome leads to a thinning of the fatty layer that insulates neurons and helps them conduct electrical signals in the brain.

The researchers also showed that they could reverse the symptoms by boosting production of this coating, known as myelin. This is significant, because while Williams Syndrome is rare, many other neurodevelopmental disorders and neurological conditions have been linked to myelination deficits, says Guoping Feng, the James W. and Patricia Poitras Professor of Neuroscience and a member of MIT’s McGovern Institute for Brain Research.

“The importance is not only for Williams Syndrome,” says Feng, who is one of the senior authors of the study. “In other neurodevelopmental disorders, especially in some of the autism spectrum disorders, this could be potentially a new direction to look into, not only the pathology but also potential treatments.”

Zhigang He, a professor of neurology and ophthalmology at Harvard Medical School, is also a senior author of the paper, which appears in the April 22 issue of Nature Neuroscience. Former MIT postdoc Boaz Barak, currently a principal investigator at Tel Aviv University in Israel, is the lead author and a senior author of the paper.

Impaired myelination

Williams Syndrome, which is caused by the loss of one of the two copies of a segment of chromosome 7, can produce learning impairments, especially for tasks that require visual and motor skills, such as solving a jigsaw puzzle. Some people with the disorder also exhibit poor concentration and hyperactivity, and they are more likely to experience phobias.

In this study, the researchers decided to focus on one of the 25 genes in that segment, known as Gtf2i. Based on studies of patients with a smaller subset of the genes deleted, scientists have linked the Gtf2i gene to the hypersociability seen in Williams Syndrome.

Working with a mouse model, the researchers devised a way to knock out the gene specifically from excitatory neurons in the forebrain, which includes the cortex, the hippocampus, and the amygdala (a region important for processing emotions). They found that these mice did show increased levels of social behavior, measured by how much time they spent interacting with other mice. The mice also showed deficits in fine motor skills and increased anxiety in nonsocial contexts, which are also symptoms of Williams Syndrome.

Next, the researchers sequenced the messenger RNA from the cortex of the mice to see which genes were affected by loss of Gtf2i. Gtf2i encodes a transcription factor, so it controls the expression of many other genes. The researchers found that about 70 percent of the genes with significantly reduced expression levels were involved in the process of myelination.
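
For readers curious what such a screen looks like in practice, here is a minimal sketch of a differential-expression comparison. Everything in it is hypothetical (the file name, column labels, and thresholds), and real RNA-seq pipelines use dedicated statistical tools such as DESeq2 or edgeR; this only illustrates the logic of comparing knockout and wild-type expression levels.

```python
# Minimal, illustrative differential-expression screen. The input file and
# its column layout are hypothetical; no multiple-comparison correction is
# applied here, purely for brevity.
import pandas as pd
from scipy import stats

counts = pd.read_csv("cortex_expression.csv", index_col="gene")  # hypothetical file
ko_cols = [c for c in counts.columns if c.startswith("gtf2i_ko")]
wt_cols = [c for c in counts.columns if c.startswith("wildtype")]

results = []
for gene, row in counts.iterrows():
    t, p = stats.ttest_ind(row[ko_cols], row[wt_cols])
    fold = row[ko_cols].mean() / row[wt_cols].mean()
    results.append((gene, fold, p))

df = pd.DataFrame(results, columns=["gene", "fold_change", "p_value"])
# Genes with significantly reduced expression in the knockout.
down = df[(df.fold_change < 0.5) & (df.p_value < 0.05)]
print(down.sort_values("p_value").head())
```

A gene list like `down` would then be annotated against known pathways, which is how a statement like "70 percent of the reduced genes are involved in myelination" could be reached.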

“Myelin is the insulation layer that wraps the axons that extend from the cell bodies of neurons,” Barak says. “When they don’t have the right properties, it will lead to faster or slower electrical signal transduction, which affects the synchronicity of brain activity.”

Further studies revealed that the mice had only about half the normal number of mature oligodendrocytes — the brain cells that produce myelin. However, the number of oligodendrocyte precursor cells was normal, so the researchers suspect that the maturation and differentiation processes of these cells are somehow impaired when Gtf2i is missing in the neurons.

This was surprising because Gtf2i was not knocked out in oligodendrocytes or their precursors. Thus, knocking out the gene in neurons may somehow influence the maturation process of oligodendrocytes, the researchers suggest. It is still unknown how this interaction might work.

“That’s a question we are interested in, but we don’t know whether it’s a secreted factor, or another kind of signal or activity,” Feng says.

In addition, the researchers found that the myelin surrounding axons of the forebrain was significantly thinner than in normal mice. Furthermore, in mice lacking Gtf2i, electrical signals were smaller and took longer to travel across the brain.

The study is an example of pioneering research into the contribution of glial cells, which include oligodendrocytes, to neuropsychiatric disorders, says Doug Fields, chief of the nervous system development and plasticity section of the Eunice Kennedy Shriver National Institute of Child Health and Human Development.

“Traditionally myelin was only considered in the context of diseases that destroy myelin, such as multiple sclerosis, which prevents transmission of neural impulses. More recently it has become apparent that more subtle defects in myelin can impair neural circuit function, by causing delays in communication between neurons,” says Fields, who was not involved in the research.

Symptom reversal

It remains to be discovered precisely how this reduction in myelination leads to hypersociability. The researchers suspect that the lack of myelin affects brain circuits that normally inhibit social behaviors, making the mice more eager to interact with others.

“That’s probably the explanation, but exactly which circuits and how does it work, we still don’t know,” Feng says.

The researchers also found that they could reverse the symptoms by treating the mice with drugs that improve myelination. One of these drugs, an FDA-approved antihistamine called clemastine fumarate, is now in clinical trials to treat multiple sclerosis, which affects myelination of neurons in the brain and spinal cord. The researchers believe it would be worthwhile to test these drugs in Williams Syndrome patients because they found thinner myelin and reduced numbers of mature oligodendrocytes in brain samples from human subjects who had Williams Syndrome, compared to typical human brain samples.

“Mice are not humans, but the pathology is similar in this case, which means this could be translatable,” Feng says. “It could be that in these patients, if you improve their myelination early on, it could at least improve some of the conditions. That’s our hope.”

Such drugs would likely help mainly the social and fine-motor issues caused by Williams Syndrome, not the symptoms that are produced by deletion of other genes, the researchers say. They may also help treat other disorders, such as autism spectrum disorders, in which myelination is impaired in some cases, Feng says.

“We think this can be expanded into autism and other neurodevelopmental disorders. For these conditions, improved myelination may be a major factor in treatment,” he says. “We are now checking other animal models of neurodevelopmental disorders to see whether they have myelination defects, and whether improved myelination can improve some of the pathology of the defects.”

The research was funded by the Simons Foundation, the Poitras Center for Affective Disorders Research at MIT, the Stanley Center for Psychiatric Research at the Broad Institute of MIT and Harvard, and the Simons Center for the Social Brain at MIT.

How our gray matter tackles gray areas

When Katie O’Nell’s high school biology teacher showed a NOVA video on epigenetics after the AP exam, he was mostly trying to fill time. But for O’Nell, the video sparked a whole new area of curiosity.

She was fascinated by the idea that certain genes could be turned on and off, controlling what traits or processes were expressed without actually editing the genetic code itself. She was further excited about what this process could mean for the human mind.

But upon starting at MIT, she realized that she was less interested in the cellular level of neuroscience and more fascinated by bigger questions, such as, what makes certain people generous toward certain others? What’s the neuroscience behind morality?

“College is a time you can learn about anything you want, and what I want to know is why humans are really, really wacky,” she says. “We’re dumb, we make super irrational decisions, it makes no sense. Sometimes it’s beautiful, sometimes it’s awful.”

O’Nell, a senior majoring in brain and cognitive sciences, is one of five MIT students to have received a Marshall Scholarship this year. Her quest to understand the intricacies of the wacky human brain will not be limited to any one continent. She will be using the funding to earn her master’s in experimental psychology at Oxford University.

Chocolate milk and the mouse brain

O’Nell’s first neuroscience-related research experience at MIT took place during her sophomore and junior years, in the lab of Institute Professor Ann Graybiel at the McGovern Institute.

The research studied the neurological components of risk-vs-reward decision making, using a key ingredient: chocolate milk. In the experiments, mice were given two options — they could go toward the richer, sweeter chocolate milk, but they would also have to endure a brighter light. Or, they could go toward a more watered-down chocolate milk, with the benefit of a softer light. All the while, a fluorescence microscope tracked when certain cell types were being activated.

“I think that’s probably the closest thing I’ve ever had to a spiritual experience … watching this mouse in this maze deciding what to do, and watching the cells light up on the screen. You can see single-cell evidence of cognition going on. That’s just the coolest thing.”

In her junior spring, O’Nell delved even deeper into questions of morality in the lab of Professor Rebecca Saxe. Her research there centers on how the human brain parses people’s identities and emotional states from their faces alone, and how those computations are related to each other. Part of what interests O’Nell is the fact that we are constantly making decisions, about ourselves and others, with limited information.

“We’re always solving under uncertainty,” she says. “And our brain does it so well, in so many ways.”

International intrigue

Outside of class, O’Nell has no shortage of things to do. For starters, she has been serving as an associate advisor for a first-year seminar since the fall of her sophomore year.

“Basically it’s my job to sit in on a seminar and bully them into not taking seven classes at a time, and reminding them that yes, your first 8.01 exam is tomorrow,” she says with a laugh.

She has also continued an activity she was passionate about in high school — Model United Nations. One of the most fun parts for her is serving on the Historical Crisis Committee, in which delegates must try to figure out a way to solve a real historical problem, like the Cuban Missile Crisis or the French and Indian War.

“This year they failed and the world was a nuclear wasteland,” she says. “Last year, I don’t entirely know how this happened, but France decided that they wanted to abandon the North American theater entirely and just took over all of Britain’s holdings in India.”

She’s also part of an MIT program called the Addir Interfaith Fellowship, in which a small group of people meet each week and discuss a topic related to religion and spirituality. Before joining, she didn’t think it was something she’d be interested in — but after being placed in a first-year class about science and spirituality, she has found discussing religion to be really stimulating. She’s been a part of the group ever since.

O’Nell has also been heavily involved in writing and producing a Mystery Dinner Theater for Campus Preview Weekend, on behalf of her living group J Entry, in MacGregor House. The plot, generally, is MIT-themed — a physics professor might get killed by a swarm of CRISPR nanobots, for instance. When she’s not cooking up murder mysteries, she might be running SAT classes for high school students, playing piano, reading, or spending time with friends. Or, when she needs to go grocery shopping, she’ll be stopping by the Trader Joe’s on Boylston Street, as an excuse to visit the Boston Public Library across the street.

Quite excited for the future

O’Nell is excited that the Marshall Scholarship will enable her to live in the country that produced so many of the books she cherished as a kid, like “The Hobbit.” She’s also thrilled to further her research there. However, she jokes that she still needs to get some of the lingo down.

“I need to learn how to use the word ‘quite’ correctly. Because I overuse it in the American way,” she says.

Her master’s research will largely expand on the principles she’s been examining in the Saxe lab. Questions of morality, processing, and social interaction are where she aims to focus her attention.

“My master’s project is going to be basically taking a look at whether how difficult it is for you to determine someone else’s facial expression changes how generous you are with people,” she explains.

After that, she hopes to follow the standard research track of earning a PhD, doing postdoctoral research, and then entering academia as a professor and researcher. Teaching and researching, she says, are two of her favorite things — she’s excited to have the chance to do both at the same time. But that’s a few years ahead. Right now, she hopes to use her time in England to learn all she can about the deeper functions of the brain, with or without chocolate milk.

3Q: The interface between art and neuroscience


Computational neuroscientist Sarah Schwettmann, who works in the Center for Brains, Minds, and Machines at the McGovern Institute, is one of three instructors behind the cross-disciplinary course 9.S52/9.S916 (Vision in Art and Neuroscience), which introduces students to core concepts in visual perception through the lenses of art and neuroscience.

Supported by a faculty grant from the Center for Art, Science and Technology at MIT (CAST) for the past two years, the class is led by Pawan Sinha, a professor of vision and computational neuroscience in the Department of Brain and Cognitive Sciences. Sinha and Schwettmann are joined in the course by Seth Riskin SM ’89, a light artist and the manager of the MIT Museum Studio and Compton Gallery, where the course is taught. Schwettmann discussed the combination of art and science in an educational setting.

Q: How have the three of you approached this cross-disciplinary class in art and neuroscience?

A: Discussions around this intersection often consider what each field has to offer the other. We take a different approach, one I refer to as occupying the gap, or positioning ourselves between the two fields and asking what essential questions underlie them both. One question addresses the nature of the human relationship to the world. The course suggests one answer: This relationship is fundamentally creative, from the brain’s interpretation of incoming sensory data in perception, to the explicit construction of experiential worlds in art.

Neuroscience and art, therefore, each provide a set of tools for investigating different levels of the constructive process. Through neuroscience, we develop a specific understanding of the models of the world that the brain uses to make sense of incoming visual data. With articulation of those models, we can engineer types of inputs that interact with visual processing architecture in particularly exquisite ways, and do so reliably, giving artists a toolkit for remixing and modulating experience. In the studio component of the course, we experiment with this toolkit and collectively move it forward.

While designing the course, Pawan, Seth, and I found that we were each addressing a similar set of questions, the same that motivate the class, through our own research and practice. In parallel to computational vision research, Professor Sinha leads a humanitarian initiative called Project Prakash, which provides treatment to blind children in India and explores the development of vision following the restoration of sight. Where does structure in perception originate? As an artist in the MIT Museum Studio, Seth works with articulated light to sculpt structured visual worlds out of darkness. I also live on this interface where the brain meets the world — my research in the Department of Brain and Cognitive Sciences examines the neural basis of mental models for simulating physics. Linking our work in the course is an experiment in synthesis.

Q: What current research in vision, neuroscience, and art is being explored at MIT, and how does the class connect it to hands-on practice?

A: Our brains build a rich world of experience and expectation from limited and noisy sensory data with infinite potential interpretations. In perception research, we seek to discover how the brain finds more meaning in incoming data than is explained by the signal alone. Work being done at MIT around generative models addresses this, for instance in the labs of Josh Tenenbaum and Josh McDermott in the Department of Brain and Cognitive Sciences. Researchers present an ambiguous visual or auditory stimulus and by probing someone’s perceptual interpretation, they get a handle on the structures that the mind generates to interpret incoming data, and they can begin to build computational models of the process.

In Vision in Art and Neuroscience, we focus on the experiential as well as the experimental, probing the perceiver’s experience of the structure-generating process—perceiving perception itself.

As instructors, we face the pedagogical question: what exercises, in the studio, can evoke so striking an experience of students’ own perception that cutting edge research takes on new meaning, understood in the immediacy of seeing? Later in the semester, students face a similar question as artists: How can one create visual environments where viewers experience their own perceptual processing at work? Done well, this experience becomes the artwork itself. Early in the course, students explore the Ganzfeld effect, popularized by artist James Turrell, where the viewer is exposed to an unstructured visual field of uniform illumination. In this experience, one feels the mind struggling to fit models of the world to unstructured input, and attempting this over and over again — an interpretation process which often goes unnoticed when input structure is expected by visual processing architecture. The progression of the course modules follows the hierarchy of visual processing in the brain, which builds increasingly complex interpretations of visual inputs, from brightness and edges to depth, color, and recognizable form.

MIT students first encounter those concepts in the seminar component of the course at the beginning of each week. Later in the week, students translate findings into experimental approaches in the studio. We work with light directly, from introducing a single pinpoint of light into an otherwise completely dark room, to building intricate environments using programmable electronics. Students begin to take this work into their own hands, in small groups and individually, culminating in final projects for exhibition. These exhibitions are truly a highlight of the course. They’re often one of the first times that students have built and shown artworks. That’s been a gift to share with the broader MIT community, and a great learning experience for students and instructors alike.

Q: How has that approach been received by the MIT community?

A: What we’re doing has resonated across disciplines: In addition to neuroscience, we have students and researchers joining us from computer science, mechanical engineering, mathematics, the Media Lab, and ACT (the Program in Art, Culture, and Technology). The course is growing into something larger, a community of practice interested in applying the scientific methodology we develop to study the world, to probe experience, and to articulate models for its generation and replication.

With a mix of undergraduates, graduates, faculty, and artists, we’ve put together installations and symposia — including three on campus so far. The first of these, “Perceiving Perception,” also led to a weekly open studio night where students and collaborators convene for project work. Our second exhibition, “Dessert of the Real,” is on display this spring in the Compton Gallery. This April we’re organizing a symposium in the studio featuring neuroscientists, computer scientists, artists and researchers from MIT and Harvard. We’re reaching beyond campus as well, through off-site installations, collaborations with museums — including the Metropolitan Museum of Art and the Peabody Essex Museum — and a partnership with the ZERO Group in Germany.

We’re eager to involve a broad network of collaborators. It’s an exciting moment in the fields of neuroscience and computing; there is great energy to build technologies that perceive the world like humans do. We stress on the first day of class that perception is a fundamentally creative act. We see the potential for models of perception to themselves be tools for scaling and translating creativity across domains, and for building a deeply creative relationship to our environment.

Guoping Feng elected to American Academy of Arts and Sciences

Four MIT faculty members are among more than 200 leaders from academia, business, public affairs, the humanities, and the arts elected to the American Academy of Arts and Sciences, the academy announced today.

One of the nation’s most prestigious honorary societies, the academy is also a leading center for independent policy research. Members contribute to academy publications, as well as studies of science and technology policy, energy and global security, social policy and American institutions, the humanities and culture, and education.

Those elected from MIT this year are:

  • Dimitri A. Antoniadis, Ray and Maria Stata Professor of Electrical Engineering;
  • Anantha P. Chandrakasan, dean of the School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science;
  • Guoping Feng, the James W. (1963) and Patricia T. Poitras Professor of Brain and Cognitive Sciences; and
  • David R. Karger, professor of electrical engineering.

“We are pleased to recognize the excellence of our new members, celebrate their compelling accomplishments, and invite them to join the academy and contribute to its work,” said David W. Oxtoby, president of the American Academy of Arts and Sciences. “With the election of these members, the academy upholds the ideals of research and scholarship, creativity and imagination, intellectual exchange and civil discourse, and the relentless pursuit of knowledge in all its forms.”

The new class will be inducted at a ceremony in October in Cambridge, Massachusetts.

Since its founding in 1780, the academy has elected leading “thinkers and doers” from each generation, including George Washington and Benjamin Franklin in the 18th century, Maria Mitchell and Daniel Webster in the 19th century, and Toni Morrison and Albert Einstein in the 20th century. The current membership includes more than 200 Nobel laureates and 100 Pulitzer Prize winners.

Halassa named Max Planck Fellow

Michael Halassa was just appointed as one of the newest Max Planck Fellows. His appointment comes through the Max Planck Florida Institute for Neuroscience (MPFI), which aims to forge collaborations between exceptional neuroscientists from around the world to answer fundamental questions about brain development and function. The Max Planck Society appoints cutting-edge, active researchers from other institutions to fellow positions for a five-year period to promote interactions and synergies. While the program is a longstanding feature of the Max Planck Society, Halassa and fellow appointee Yi Guo, of the University of California at Santa Cruz, are the first fellows selected who are based at U.S. institutions.

Michael Halassa is an associate investigator at the McGovern Institute and an assistant professor in the Department of Brain and Cognitive Sciences at MIT. Halassa’s research focuses on the neural architectures that underlie complex cognitive processes. He is particularly interested in goal-directed attention, our ability to rapidly switch attentional focus based on high-level objectives. For example, when you are in a roomful of colleagues, the mention of your name in a distant conversation can quickly trigger your ‘mind’s ear’ to eavesdrop on that conversation. This contrasts with hearing a name that sounds like yours on television, which does not usually grab your attention in the same way. In certain mental disorders such as schizophrenia, the ability to generate such high-level objectives, while also accounting for context, is perturbed. Recent evidence strongly suggests that the function of the prefrontal cortex, and its interactions with a region of the brain called the thalamus, may be altered in such disorders. It is this thalamocortical network that Halassa has been studying in mice, where his group has uncovered how the thalamus supports the ability of the prefrontal cortex to generate context-appropriate attentional signals.

The fellowship will support extending Halassa’s work into the tree shrew (Tupaia belangeri), which has been shown to have advanced cognitive abilities compared to mice while also offering many of the circuit-interrogation tools that make the mouse an attractive experimental model.

The Max Planck Florida Institute for Neuroscience (MPFI), a not-for-profit research organization, is part of the world-renowned Max Planck Society, Germany’s most successful research organization. The Max Planck Society, whose roots date to 1911, comprises 84 institutes and research facilities. While the society is primarily located in Germany, four institutes and one research facility are located abroad, including the Florida institute that Halassa will collaborate with. The fellow positions were created with the goal of increasing interactions between the Max Planck Society’s institutes and faculty engaged in active research at other universities and institutions, which, with this appointment, now include MIT.

How the brain decodes familiar faces

Our brains are incredibly good at processing faces, and even have specific regions specialized for this function. But what face dimensions are we observing? Do we observe general properties first, then look at the details? Or are dimensions such as gender or other identity details decoded interdependently? In a study published today in Nature Communications, the Kanwisher lab measured the response of the brain to faces in real time, and found that the brain first decodes properties such as gender and age before drilling down to the specific identity of the face itself.

While functional magnetic resonance imaging (fMRI) has revealed an incredible level of detail about which regions of the brain respond to faces, the technology is less effective at telling us when these brain regions become activated. This is because fMRI measures brain activity by detecting changes in blood flow; when neurons become active, local blood flow to those brain regions increases. However, fMRI works too slowly to keep up with the brain’s millisecond-by-millisecond dynamics. Enter magnetoencephalography (MEG), a technique developed by MIT physicist David Cohen that detects the minuscule fluctuations in magnetic field that occur with the electrical activity of neurons. This allows better temporal resolution of neural activity.

McGovern Investigator Nancy Kanwisher and postdoc Katharina Dobs, along with their co-authors Leyla Isik and Dimitrios Pantazis, selected this temporally precise approach to measure the time it takes for the brain to respond to different dimensional features of faces.

“From a brief glimpse of a face, we quickly extract all this rich multidimensional information about a person, such as their sex, age, and identity,” explains Dobs. “I wanted to understand how the brain accomplishes this impressive feat, and what the neural mechanisms are that underlie this effect, but no one had measured the time scales of responses to these features in the same study.”

Previous studies have shown that people with prosopagnosia, a condition characterized by the inability to identify familiar faces, have no trouble determining gender, suggesting these features may be independent. “But examining when the brain recognizes gender and identity, and whether these are interdependent features, is less clear,” explains Dobs.

By recording the brain activity of subjects in the MEG, Dobs and her co-authors found that the brain responds to coarse features, such as the gender of a face, much faster than the identity of the face itself. Their data showed that, in as little as 60-70 milliseconds, the brain begins to decode the age and gender of a person. Roughly 30 milliseconds later — at around 90 milliseconds — the brain begins processing the identity of the face.
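
A common way to obtain such millisecond-resolved estimates is time-resolved decoding: train a classifier at each time point and note when it first performs above chance. The sketch below simulates that logic with scikit-learn; the data shapes, the injected effect, and the accuracy threshold are stand-ins for illustration, not the study’s actual pipeline or parameters.

```python
# Time-resolved ("sliding") decoding on simulated sensor data: at each time
# bin, cross-validate a classifier and ask when a property first becomes
# readable from the signal. All values here are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 200, 64, 120
X = rng.standard_normal((n_trials, n_sensors, n_times))  # trials x sensors x time
y = rng.integers(0, 2, n_trials)                         # e.g., male vs. female face

# Inject a weak class difference on some sensors from time bin 40 onward,
# mimicking information that appears at a fixed latency after stimulus onset.
X[y == 1, :10, 40:] += 0.3

accuracy = np.empty(n_times)
for t in range(n_times):
    clf = LogisticRegression(max_iter=1000)
    accuracy[t] = cross_val_score(clf, X[:, :, t], y, cv=5).mean()

# First time bin at which decoding clears an (arbitrary) threshold.
onset = int(np.argmax(accuracy > 0.6))
print(f"property becomes decodable around time bin {onset}")
```

Running the same analysis with different labels (age, gender, identity) and comparing the onset latencies is the kind of comparison that yields statements like “gender at 60-70 milliseconds, identity at around 90.”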

After establishing a paradigm for measuring responses to these face dimensions, the authors then decided to test the effect of familiarity. It’s generally understood that the brain processes information about “familiar faces” more robustly than unfamiliar faces. For example, our brains are adept at recognizing actress Scarlett Johansson across multiple photographs, even if her hairstyle is different in each picture. Our brains have a much harder time, however, recognizing two images of the same person if the face is unfamiliar.

“Actually, for unfamiliar faces the brain is easily fooled,” Dobs explains. “Variations in images, shadows, changes in hair color or style quickly lead us to think we are looking at a different person. Conversely, we have no problem if a familiar face is in shadow, or a friend changes their hairstyle. But we didn’t know why familiar face perception is much more robust, whether this is due to better feed-forward processing, or based on later memory retrieval.”

Image: Perception of a familiar face, Scarlett Johansson, is more robust than perception of an unfamiliar one, in this study German celebrity Karoline Herfurth (images: Wikimedia Commons).

To test the effect of familiarity, the authors measured brain responses while the subjects viewed familiar faces (American celebrities) and unfamiliar faces (German celebrities) in the MEG. Surprisingly, they found that subjects recognize gender more quickly in familiar faces than in unfamiliar faces. For example, our brains decode that actress Scarlett Johansson is female before we even realize she is Scarlett Johansson. And for the less familiar German actress Karoline Herfurth, our brains unpack the same information less well.

Dobs and co-authors argue that better gender and identity recognition is not “top-down” for familiar faces, meaning that improved responses to familiar faces is not about retrieval of information from memory, but rather, a feed-forward mechanism. They found that the brain responds to facial familiarity at a much slower time scale (400 milliseconds) than it responds to gender, suggesting that the brain may be remembering associations related to the face (Johansson = Lost in Translation movie) in that longer timeframe.

This is good news for artificial intelligence. “We are interested in whether feed-forward deep learning systems can learn faces using similar mechanisms,” explains Dobs, “and help us to understand how the brain can process faces it has seen before in the absence of pulling on memory.”

When it comes to immediate next steps, Dobs would like to explore where in the brain these facial dimensions are extracted, how prior experience affects the general processing of objects, and whether computational models of face processing can capture these complex human characteristics.


How does the brain focus?

This is a very interesting question, and one that researchers at the McGovern Institute for Brain Research are actively pursuing. It’s also important for understanding what happens in conditions such as ADHD. There are constant distractions in the world, a cacophony of noise and visual stimulation. How and where we focus our attention, and what the brain attends to versus treats as background information, is a big question in neuroscience. Thanks to work from researchers, including Robert Desimone, we understand quite a bit about how this works in the visual system in particular. What his lab has found is that when we pay attention to something specific, neurons in the visual cortex that respond to the object we’re focusing on fire in synchrony, whereas those responding to irrelevant information become suppressed. It’s almost as if this synchrony “increases the volume” so that the responding neurons rise above general noise.

Synchronized activity of neurons occurs as they oscillate together at a particular frequency, but the frequency of oscillation really matters when it comes to attention and focus vs. inattention and distraction. To find out more about this, I asked a postdoc in the Desimone lab, Yasaman Bagherzadeh about the role of different “brainwaves,” or oscillations at different frequencies, in attention.

“Studies in humans have shown that enhanced synchrony between neurons in the alpha range (8–12 Hz) is actually associated with inattention and distracting information,” explains Bagherzadeh, “whereas enhanced gamma synchrony (about 30–150 Hz) is associated with attention and focus on a target. For example, when a stimulus (through the ears or eyes) or its location (left vs. right) is intentionally ignored, this is preceded by a relative increase in alpha power, while a stimulus you’re attending to is linked to an increase in gamma power.”
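
To make the band terminology concrete, here is a short, self-contained sketch of how band-limited power is typically estimated from a recorded trace: compute a power spectral density and integrate it over the alpha and gamma ranges. The simulated signal and the 1 kHz sampling rate are illustrative choices, not parameters from the lab.

```python
# Estimate alpha- and gamma-band power from a signal using Welch's method.
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

fs = 1000                                   # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)                # 10 s of simulated "recording"
# Simulated trace: a 10 Hz (alpha) and a 60 Hz (gamma) component plus noise.
x = (np.sin(2 * np.pi * 10 * t)
     + 0.5 * np.sin(2 * np.pi * 60 * t)
     + np.random.randn(t.size))

f, pxx = welch(x, fs=fs, nperseg=2048)      # power spectral density

def band_power(f, pxx, lo, hi):
    """Integrate the PSD between lo and hi Hz."""
    mask = (f >= lo) & (f <= hi)
    return trapezoid(pxx[mask], f[mask])

print("alpha power (8-12 Hz):  ", band_power(f, pxx, 8, 12))
print("gamma power (30-150 Hz):", band_power(f, pxx, 30, 150))
```

Comparing these two numbers across attended and ignored conditions is, in essence, how the alpha/gamma findings Bagherzadeh describes are quantified.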

Attention in the Desimone lab (no pun intended) has also recently been focused on covert attention. This type of spatial attention was traditionally thought to occur through a mental shift without a glance, but the Desimone lab recently found that even during these mental shifts, animals sneakily glance at the objects that attention becomes focused on. Think now of something you know is nearby (a cup of coffee, for example) but not in the center of your field of vision. Chances are that you just sneakily glanced at that object.

Previously these sneaky glances, or small eye movements, called microsaccades (MS for short), were considered to be involuntary movements without any functional role. However, the recent Desimone lab study found that an MS significantly modulates neural activity during the attention period. This means that when you glance at something, even sneakily, the glance is intimately linked to attention. In other words, when it comes to spatial attention, eye movements seem to play a significant role.

Various questions arise about the mechanisms of spatial attention as a result of this study, as outlined by Karthik Srinivasan, a postdoctoral associate in the Desimone lab.

“How are eye movement signals and attentional processing coordinated? What’s the role of the different frequencies of oscillation for such coordination? Is there a role for them or are they just the frequency domain representation (i.e., an epiphenomenon) of a temporal/dynamical process? Is attention a sustained process or rhythmic or something more dynamic?” Srinivasan lists some of the questions that come out of his study and goes on to explain the implications of the study further. “It is hard to believe that covert attention is a sustained process (the so-called ‘spotlight theory of attention’), given that neural activity during the attention period can be modulated by covert glances. A few recent studies have supported the idea that attention is a rhythmic process that can be uncoupled from eye movements. While this is an idea made attractive by its simplicity, it’s clear that small glances can affect neural activity related to attention, and MS are not rhythmic. More work is thus needed to get to a more unified theory that accounts for all of the data out there related to eye movements and their close link to attention.”

Answering some of the questions that Bagherzadeh, Srinivasan, and others are pursuing in the Desimone lab, both experimentally and theoretically, will clear up some of the issues above, and improve our understanding of how the brain focuses attention.



Elephant or chair? How the brain IDs objects

As visual information flows into the brain through the retina, the visual cortex transforms the sensory input into coherent perceptions. Neuroscientists have long hypothesized that a part of the visual cortex called the inferotemporal (IT) cortex is necessary for the key task of recognizing individual objects, but the evidence has been inconclusive.

In a new study, MIT neuroscientists have found clear evidence that the IT cortex is indeed required for object recognition; they also found that subsets of this region are responsible for distinguishing different objects.

In addition, the researchers have developed computational models that describe how these neurons transform visual input into a mental representation of an object. They hope such models will eventually help guide the development of brain-machine interfaces (BMIs) that could be used for applications such as generating images in the mind of a blind person.

“We don’t know if that will be possible yet, but this is a step on the pathway toward those kinds of applications that we’re thinking about,” says James DiCarlo, the head of MIT’s Department of Brain and Cognitive Sciences, a member of the McGovern Institute for Brain Research, and the senior author of the new study.

Rishi Rajalingham, a postdoc at the McGovern Institute, is the lead author of the paper, which appears in the March 13 issue of Neuron.

Distinguishing objects

In addition to its hypothesized role in object recognition, the IT cortex also contains “patches” of neurons that respond preferentially to faces. Beginning in the 1960s, neuroscientists discovered that damage to the IT cortex could produce impairments in recognizing non-face objects, but it has been difficult to determine precisely how important the IT cortex is for this task.

The MIT team set out to find more definitive evidence for the IT cortex’s role in object recognition, by selectively shutting off neural activity in very small areas of the cortex and then measuring how the disruption affected an object discrimination task. In animals that had been trained to distinguish between objects such as elephants, bears, and chairs, they used a drug called muscimol to temporarily turn off subregions about 2 millimeters in diameter. Each of these subregions represents about 5 percent of the entire IT cortex.

These experiments, which represent the first time that researchers have been able to silence such small regions of IT cortex while measuring behavior over many object discriminations, revealed that the IT cortex is not only necessary for distinguishing between objects, but it is also divided into areas that handle different elements of object recognition.

The researchers found that silencing each of these tiny patches produced distinctive impairments in the animals’ ability to distinguish between certain objects. For example, one subregion might be involved in distinguishing chairs from cars, but not chairs from dogs. Each region was involved in 25 to 30 percent of the tasks that the researchers tested, and regions that were closer to each other tended to have more overlap between their functions, while regions far away from each other had little overlap.
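
One way to picture the spatial pattern described here is as a comparison between each silenced site’s “deficit profile” across tasks and the physical distance between sites. The toy sketch below simulates that comparison; every number in it is made up, and it illustrates only the logic of the analysis, not the study’s actual methods.

```python
# Toy analysis: do nearby silenced sites produce similar deficit profiles?
# Each site gets a vector of task deficits; we correlate profile similarity
# with pairwise distance. All values are simulated for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_sites, n_tasks = 12, 40
positions = rng.uniform(0, 8, size=(n_sites, 2))   # site coordinates, mm

# Make nearby sites share structure: deficits vary smoothly with position.
basis = rng.standard_normal((2, n_tasks))
deficits = positions @ basis + 0.5 * rng.standard_normal((n_sites, n_tasks))

dists, sims = [], []
for i in range(n_sites):
    for j in range(i + 1, n_sites):
        dists.append(np.linalg.norm(positions[i] - positions[j]))
        sims.append(np.corrcoef(deficits[i], deficits[j])[0, 1])

# Expect a negative correlation: farther apart -> less similar deficits.
print("distance vs. similarity r =", np.corrcoef(dists, sims)[0, 1])
```

A negative correlation in real data would correspond to exactly the observation above: neighboring patches overlap in function, while distant ones do not.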

“We might have thought of it as a sea of neurons that are completely mixed together, except for these islands of ‘face patches.’ But what we’re finding, which many other studies had pointed to, is that there is large-scale organization over the entire region,” Rajalingham says.

The features that each of these regions responds to are difficult to classify, the researchers say. The regions are not specific to objects such as dogs, nor to easy-to-describe visual features such as curved lines.

“It would be incorrect to say that because we observed a deficit in distinguishing cars when a certain neuron was inhibited, this is a ‘car neuron,’” Rajalingham says. “Instead, the cell is responding to a feature that we can’t explain that is useful for car discriminations. There has been work in this lab and others that suggests that the neurons are responding to complicated nonlinear features of the input image. You can’t say it’s a curve, or a straight line, or a face, but it’s a visual feature that is especially helpful in supporting that particular task.”

Bevil Conway, a principal investigator at the National Eye Institute, says the new study makes significant progress toward answering the critical question of how neural activity in the IT cortex produces behavior.

“The paper makes a major step in advancing our understanding of this connection, by showing that blocking activity in different small local regions of IT has a different selective deficit on visual discrimination. This work advances our knowledge not only of the causal link between neural activity and behavior but also of the functional organization of IT: How this bit of brain is laid out,” says Conway, who was not involved in the research.

Brain-machine interface

The experimental results were consistent with computational models that DiCarlo, Rajalingham, and others in their lab have created to try to explain how IT cortex neuron activity produces specific behaviors.

“That is interesting not only because it says the models are good, but because it implies that we could intervene with these neurons and turn them on and off,” DiCarlo says. “With better tools, we could have very large perceptual effects and do real BMI in this space.”

The researchers plan to continue refining their models, incorporating new experimental data from even smaller populations of neurons, in hopes of developing ways to generate visual perception in a person’s brain by activating a specific sequence of neuronal activity. Technology to deliver this kind of input to a person’s brain could lead to new strategies to help blind people see certain objects.

“This is a step in that direction,” DiCarlo says. “It’s still a dream, but that dream someday will be supported by the models that are built up by this kind of work.”

The research was funded by the National Eye Institute, the Office of Naval Research, and the Simons Foundation.

How motion conveys emotion in the face

While a static emoji can stand in for emotion, in real life we are constantly reading the feelings of others through subtle facial movements. The lift of an eyebrow, the flicker around the lips as a smile emerges, a subtle change around the eyes (or the sudden rolling of the eyes), are all changes that feed into our ability to understand the emotional state, and the attitude, of others towards us. Ben Deen and Rebecca Saxe have now monitored changes in brain activity as subjects followed face movements in movies of avatars. Their findings argue that we can generalize across individual face part movements in other people, but that a particular cortical region, the face-responsive superior temporal sulcus (fSTS), also responds to isolated movements of individual face parts. Indeed, the fSTS seems to be tied to kinematics, the movement of individual face parts, more than to the implied emotional cause of that movement.

We know that the brain responds to dynamic changes in facial expression, and that these are associated with activity in the fSTS, but how do calculations of these movements play out in the brain?

Do we understand emotional changes by adding up individual features (lifting of eyebrows + rounding of mouth = surprise), or are we assessing the entire face in a more holistic way that results in more generalized representations? McGovern Investigator Rebecca Saxe and her graduate student Ben Deen set out to answer this question using behavioral analysis and brain imaging, specifically fMRI.

“We had a good sense of what stimuli the fSTS responds strongly to,” explains Ben Deen, “but didn’t really have any sense of how those inputs are processed in the region – what sorts of features are represented, whether the representation is more abstract or more tied to visual features, etc. The hope was to use multivoxel pattern analysis, which has proven to be a remarkably useful method for characterizing representational content, to address these questions and get a better sense of what the region is actually doing.”

Facial movements were conveyed to subjects using animated “avatars.” By presenting avatars that made isolated eye and eyebrow movements (brow raise, eye closing, eye roll, scowl) or mouth movements (smile, frown, mouth opening, snarl), as well as composites of these movements, the researchers were able to assess whether our interpretation of the latter is distinct from the sum of its parts. To do this, Deen and Saxe first took a behavioral approach in which people reported on combinations of eye and mouth movements, either in a whole avatar face or in one where the top and bottom parts of the face were misaligned. What they found was that movement in the mouth region can influence perception of movement in the eye region, arguably due to some level of holistic processing. The authors then asked whether there were cortical differences upon viewing isolated vs. combined face part movements. They found that the fSTS, but not other brain regions, showed patterns of activity that discriminated between different facial movements. Indeed, they could decode from fSTS activity which part of the avatar’s face was being perceived as moving. The researchers could even model the fSTS response to combined features linearly, based on the responses to the individual face parts. In short, though the behavioral data indicate that there is holistic processing of complex facial movement, it is also clear that isolated parts-based representations are present, a sort of intermediate state.
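
The linear-modeling step described above can be pictured as asking whether the response pattern to a combined movement is a weighted sum of the patterns evoked by each part alone. Here is a minimal simulated sketch of that test; the voxel patterns and weights are fabricated for illustration and bear no relation to the study’s data.

```python
# Can the pattern for a combined movement (eyes + mouth) be predicted as a
# weighted sum of the patterns for each part alone? Simulated voxel vectors.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n_voxels = 500
eye_pattern = rng.standard_normal(n_voxels)     # response to eye movement alone
mouth_pattern = rng.standard_normal(n_voxels)   # response to mouth movement alone

# Simulate a combined-movement pattern that really is (nearly) linear.
combined = (0.6 * eye_pattern + 0.4 * mouth_pattern
            + 0.1 * rng.standard_normal(n_voxels))

X = np.column_stack([eye_pattern, mouth_pattern])
model = LinearRegression().fit(X, combined)
print(f"weights: {model.coef_}, variance explained: {model.score(X, combined):.2f}")
# A high R^2 supports a parts-based (linear) account of the combined response;
# a low R^2 would point to nonlinear, more holistic encoding.
```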

As part of this work, Deen and Saxe took the important step of pre-registering their experimental parameters, before collecting any data, at the Open Science Framework. This step allows others to more easily reproduce the analysis they conducted, since all parameters (the task that subjects are carrying out, the number of subjects needed, the rationale for this number, and the scripts used to analyze data) are openly available.

“Preregistration had a big impact on our workflow for the study,” explained Deen. “More of the work was done up front, in coming up with all of the analysis details and agonizing over whether we were choosing the right strategy, before seeing any of the data. When you tie your hands by making these decisions up front, you start thinking much more carefully about them.”

Pre-registration does remove post-hoc researcher subjectivity from the analysis. As an example, because Deen and Saxe predicted that people would be able to discriminate accurately between faces per se, they decided ahead of the experiment to focus on analyzing reaction time, rather than looking at the collected data and deciding to focus on this measure after the fact. This adds to the overall objectivity of the experiment and is increasingly seen as a robust way to conduct such experiments.

How do neurons communicate (so quickly)?

Neurons are the most fundamental unit of the nervous system, and yet, researchers are just beginning to understand how they perform the complex computations that underlie our behavior. We asked Boaz Barak, previously a postdoc in Guoping Feng’s lab at the McGovern Institute and now Senior Lecturer at the School of Psychological Sciences and Sagol School of Neuroscience at Tel Aviv University, to unpack the basics of neuron communication for us.

“Neurons communicate with each other through electrical and chemical signals,” explains Barak. “The electrical signal, or action potential, runs from the cell body area to the axon terminals, through a thin fiber called an axon. Some of these axons can be very long and most of them are very short. The electrical signal that runs along the axon is based on ion movement. The speed of the signal transmission is influenced by an insulating layer called myelin,” he explains.

Myelin is a fatty layer formed, in the vertebrate central nervous system, by concentric wrapping of oligodendrocyte cell processes around axons. The term “myelin” was coined in 1854 by Virchow (whose penchant for Greek and for naming new structures also led to the terms amyloid, leukemia, and chromatin). In more modern images, the myelin sheath is beautifully visible as concentric spirals surrounding the “tube” of the axon itself. Neurons in the peripheral nervous system are also myelinated, but the cells responsible for myelination are Schwann cells, rather than oligodendrocytes.


“Myelin’s main purpose is to insulate the neuron’s axon,” Barak says. “It speeds up conductivity and the transmission of electrical impulses. Myelin promotes fast transmission of electrical signals mainly by affecting two factors: 1) increasing electrical resistance, or reducing leakage of the electrical signal and ions along the axon, “trapping” them inside the axon and 2) decreasing membrane capacitance by increasing the distance between conducting materials inside the axon (intracellular fluids) and outside of it (extracellular fluids).”

Adjacent sections of axon in a given neuron are each surrounded by a distinct myelin sheath. Unmyelinated gaps between adjacent ensheathed regions of the axon are called nodes of Ranvier, and are critical to fast transmission of action potentials, in what is termed “saltatory conduction.” A useful analogy: if the axon itself is like an electrical wire, myelin is like the insulation that surrounds it, speeding up impulse propagation and overcoming the decrease in action potential size that would otherwise occur during transmission along a naked axon due to electrical signal leakage. This is how the myelin sheath promotes the fast transmission that allows neurons to convey information over long distances in a timely fashion in the vertebrate nervous system.
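
The two factors Barak lists map directly onto the constants of passive cable theory: the membrane time constant tau = r_m * c_m and the length constant lambda = sqrt(r_m / r_i). The sketch below plugs in illustrative, textbook-scale numbers (not measured values) to show how multiplying membrane resistance and dividing capacitance, as myelin effectively does, stretches how far a signal spreads before decaying.

```python
# Passive cable constants for a bare vs. "myelinated" fiber segment.
# All parameter values are illustrative only.
import math

def cable_constants(r_m, c_m, r_i):
    """r_m: membrane resistance x unit length (ohm*cm)
    c_m: membrane capacitance per unit length (F/cm)
    r_i: axial resistance per unit length (ohm/cm)"""
    tau = r_m * c_m              # membrane time constant, seconds
    lam = math.sqrt(r_m / r_i)   # length constant, cm
    return tau, lam

# Bare axon, then a segment where myelin multiplies resistance by 100
# and divides capacitance by 100 (a nominal factor for illustration).
tau0, lam0 = cable_constants(r_m=1e3, c_m=1e-6, r_i=1e5)
tau1, lam1 = cable_constants(r_m=1e3 * 100, c_m=1e-6 / 100, r_i=1e5)

print(f"bare axon:  tau = {tau0 * 1e3:.2f} ms, lambda = {lam0 * 10:.2f} mm")
print(f"myelinated: tau = {tau1 * 1e3:.2f} ms, lambda = {lam1 * 10:.2f} mm")
# In this toy, tau is unchanged (the two 100x factors cancel) while lambda
# grows 10x: the signal spreads much farther before decaying, which is what
# makes fast, saltatory conduction between nodes of Ranvier possible.
```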

Myelin seems to be critical to healthy functioning of the nervous system; in fact, disruptions in the myelin sheath have been linked to a variety of disorders.


“Abnormal myelination can arise from abnormal development caused by genetic alterations,” Barak explains further. “Demyelination can even occur, due to an autoimmune response, trauma, and other causes. In neurological conditions in which myelin properties are abnormal, as in the case of lesions or plaques, signal transmission can be affected. For example, defects in myelin can lead to lack of neuronal communication, as there may be a delay or reduction in transmission of electrical and chemical signals. Also, in cases of abnormal myelination, it is possible that the synchronicity of brain region activity might be affected, for example, leading to improper actions and behaviors.”

Researchers are still working to fully understand the role of myelin in disorders. Myelin has a long history of being elusive, though, with its origins in the central nervous system remaining unclear for many years. For a period of time, the origin of myelin was thought to be the axon itself, and it was only after initial discovery (by Robertson, 1899), re-discovery (Del Rio-Hortega, 1919), and skepticism followed by eventual confirmation that the role of oligodendrocytes in forming myelin became clear. With modern imaging and genetic tools, we should be able to increasingly understand its role in the healthy, as well as the compromised, nervous system.
