Nancy Kanwisher to receive George A. Miller Prize in Cognitive Neuroscience

Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience at MIT, has been named this year’s winner of the George A. Miller Prize in Cognitive Neuroscience. The award, given annually by the Cognitive Neuroscience Society (CNS), recognizes individuals “whose distinguished research is at the cutting-edge of their discipline with realized or future potential, to revolutionize cognitive neuroscience.”

Kanwisher studies the functional organization of the human mind and, over the last 20 years, her lab has played a central role in the identification of several dozen regions of the cortex in humans that are engaged in particular components of perception and cognition. She is perhaps best known for identifying brain regions specialized for recognizing faces.

Kanwisher will deliver her prize lecture, “Functional imaging of the human brain: A window into the architecture of the mind,” at the 2020 CNS annual meeting in Boston this March.

Brain biomarkers predict mood and attention symptoms

Mood and attentional disorders among teens are an increasing concern for parents, society, and peers. A recent Pew Research Center survey found conditions such as depression and anxiety to be the number one concern that young students had about their friends, ranking above drugs or bullying.

“We’re seeing an epidemic in teen anxiety and depression,” explains McGovern Research Affiliate Susan Whitfield-Gabrieli.

“Scientists are finding a huge increase in suicide ideation and attempts, something that hit home for me as a mother of teens. Emergency rooms in hospitals now have guards posted outside doors of these teenagers that attempted suicide—this is a pressing issue,” explains Whitfield-Gabrieli who is also director of the Northeastern University Biomedical Imaging Center and a member of the Poitras Center for Psychiatric Disorders Research.

New methods for discovering early biomarkers of psychiatric disorder risk would allow early interventions, avoiding points of crisis such as suicidal ideation or attempts. In research published recently in JAMA Psychiatry, Whitfield-Gabrieli and colleagues found that signatures predicting the future development of depression and attentional symptoms can be detected in children as young as seven years old.

Long-term view

While previous work had suggested that there may be biomarkers that predict the development of mood and attentional disorders, identifying early biomarkers prior to the onset of illness requires following a cohort of pre-teens from a young age and monitoring them across years. This proactive, rather than reactive, approach to the development of symptoms associated with mental disorders is exactly the route Whitfield-Gabrieli and colleagues took.

“One of the exciting aspects of this study is that the cohort is not pre-selected for already having symptoms of psychiatric disorders themselves or even in their family,” explained Whitfield-Gabrieli. “It’s an unbiased cohort that we followed over time.”

McGovern research affiliate Susan Whitfield-Gabrieli has discovered early brain biomarkers linked to psychiatric disorders.

In some past studies, children were pre-selected based on, for example, a diagnosis of major depressive disorder in a parent. Whitfield-Gabrieli and her colleagues Silvia Bunge from Berkeley and Laurie Cutting from Vanderbilt instead recruited a range of children without preconditions, examining them at age 7 and again 4 years later. The researchers measured resting state functional connectivity and compared it to scores on the Child Behavior Checklist (CBCL), allowing them to relate differences in the brain to a standardized analysis of behavior that can be linked to psychiatric disorders. The CBCL is used both in research and in the clinic and is highly predictive of disorders including ADHD, so changes in the brain could be related to changes in a widely used clinical scoring system.
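To make the method concrete: resting state functional connectivity is commonly quantified as the correlation between the average fMRI time courses of two brain regions, which can then be related to behavioral scores across children. The Python sketch below illustrates that logic on simulated data; the ROI names, array sizes, and simple Pearson-correlation pipeline are assumptions for illustration, not the published analysis code.

```python
# Minimal sketch (not the authors' pipeline): seed-based resting state
# connectivity between two ROIs, related to CBCL symptom change.
# All data here are simulated; shapes and names are illustrative.
import numpy as np
from scipy import stats

def roi_connectivity(ts_a: np.ndarray, ts_b: np.ndarray) -> float:
    """Pearson correlation between two ROI mean time courses (one fMRI run)."""
    return stats.pearsonr(ts_a, ts_b)[0]

rng = np.random.default_rng(0)
n_children, n_timepoints = 50, 200
dlpfc = rng.standard_normal((n_children, n_timepoints))  # dorsolateral PFC time courses
mpfc = rng.standard_normal((n_children, n_timepoints))   # medial PFC time courses
cbcl_change = rng.standard_normal(n_children)            # CBCL change, age 7 to 11

# Fisher z-transform stabilizes the variance of correlation coefficients.
connectivity = np.array(
    [np.arctanh(roi_connectivity(dlpfc[i], mpfc[i])) for i in range(n_children)]
)

# Does age-7 connectivity predict the later change in symptom scores?
r, p = stats.pearsonr(connectivity, cbcl_change)
print(f"brain-behavior correlation: r = {r:.2f}, p = {p:.3f}")
```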

“Over the four years, some people got worse, some got better, and some stayed the same according to the CBCL. We could relate this directly to differences in brain networks, and could identify at age 7 who would get worse,” explained Whitfield-Gabrieli.

Brain network changes

The authors analyzed differences in resting state network connectivity, the degree to which regions across the brain rise and fall in activity level together, as visualized using fMRI. Reduced connectivity between such regions may reflect reduced “top-down” control of neural circuits. The dorsolateral prefrontal cortex is linked to executive function, external attention, and emotional control. An increased connection between this region and the medial prefrontal cortex is known to be present in attention deficit hyperactivity disorder (ADHD), while a reduced connection to a different brain region, the subgenual anterior cingulate cortex (sgACC), is seen in major depressive disorder. The question remained whether these changes can be seen prior to the onset of diagnosable attentional or mood disorders.

Whitfield-Gabrieli and colleagues found that these resting state networks differed in the brains of children who would later develop anxiety/depression or ADHD symptoms. Weaker connectivity between the dorsolateral and medial prefrontal cortical regions tended to be seen in children whose attention scores went on to improve, and analysis of these resting state networks could differentiate children who would show typical attentional behavior by age 11 from those who went on to develop ADHD.

Whitfield-Gabrieli has replicated this finding in an independent sample of children and she is continuing to expand the analysis and check the results, as well as follow this cohort into the future. Should changes in resting state networks be a consistent biomarker, the next step is to initiate interventions prior to the point of crisis.

“We’ve recently been able to use mindfulness interventions, and show these reduce self-perceived stress and amygdala activation in response to fear, and we are also testing the effect of exercise interventions,” explained Whitfield-Gabrieli. “The hope is that by using predictive biomarkers we can augment children’s lifestyles with healthy interventions that can prevent risk from converting into a psychiatric disorder.”

Can fMRI reveal insights into addiction and treatments?

Many debilitating conditions like depression and addiction have biological signatures hidden in the brain well before symptoms appear. What if brain scans could be used to detect these hidden signatures and determine the optimal treatment for each individual? McGovern Investigator John Gabrieli is interested in this question and wrote about the use of imaging technologies as a predictive tool for brain disorders in a recent issue of Scientific American.

McGovern Investigator John Gabrieli pens a story for Scientific American about the potential for brain imaging to predict the onset of mental illness.

“Brain scans show promise in predicting who will benefit from a given therapy,” says Gabrieli, who is also the Grover Hermann Professor in Brain and Cognitive Sciences at MIT. “Differences in neural activity may one day tell clinicians which depression treatment will be most effective for an individual or which abstinent alcoholics will relapse.”

Gabrieli cites research showing that half of patients treated for alcohol abuse go back to drinking within a year of treatment, with similar relapse rates for stimulants such as cocaine. Failed treatments may be a source of further anxiety and stress, Gabrieli notes, so any information gleaned from the brain that pinpoints the treatments or doses most likely to help would be highly valuable.

Current treatments rely on little scientific evidence to support the length of time needed in a rehabilitation facility, he says, but “a number [of studies] suggest that brain measures might foresee who will succeed in abstaining after treatment has ended.”

Further data are needed to support this idea, but Gabrieli’s Scientific American piece makes the case that such technology may be promising across addiction treatments, including those for alcohol, nicotine, and illicit drug abuse.

Gabrieli also believes brain imaging has the potential to reshape education. For example, educational interventions targeting dyslexia might be more effective if personalized to specific differences in the brain that point to the source of the learning gap.

But for the prediction sciences to move forward in mental health and education, he concludes, the research community must design further rigorous studies to examine these important questions.

Differences between deep neural networks and human perception

When your mother calls your name, you know it’s her voice — no matter the volume, even over a poor cell phone connection. And when you see her face, you know it’s hers — if she is far away, if the lighting is poor, or if you are on a bad FaceTime call. This robustness to variation is a hallmark of human perception. On the other hand, we are susceptible to illusions: We might fail to distinguish between sounds or images that are, in fact, different. Scientists have explained many of these illusions, but we lack a full understanding of the invariances in our auditory and visual systems.

Deep neural networks have also performed speech recognition and image classification tasks with impressive robustness to variations in the auditory or visual stimuli. But are the invariances learned by these models similar to the invariances learned by human perceptual systems? A group of MIT researchers has discovered that they are different. They presented their findings at the 2019 Conference on Neural Information Processing Systems.

The researchers made a novel generalization of a classical concept: “metamers” — physically distinct stimuli that generate the same perceptual effect. The most famous examples of metamer stimuli arise because most people have three different types of cones in their retinae, which are responsible for color vision. The perceived color of any single wavelength of light can be matched exactly by a particular combination of three lights of different colors — for example, red, green, and blue lights. Nineteenth-century scientists inferred from this observation that humans have three different types of bright-light detectors in our eyes. This is the basis for electronic color displays on all of the screens we stare at every day.

Another example in the visual system is that when we fix our gaze on an object, we may perceive surrounding visual scenes that differ at the periphery as identical. Something analogous occurs in the auditory domain: the “textural” sound of two swarms of insects might be indistinguishable, despite differing in the acoustic details that compose them, because they have similar aggregate statistical properties. In each case, the metamers provide insight into the mechanisms of perception and constrain models of the human visual or auditory systems.
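For the quantitatively minded, the three-cone story reduces to a few lines of linear algebra: two lights are metamers whenever they produce the same three cone excitations. The toy Python sketch below illustrates the matching logic only; the cone sensitivity curves and primary spectra are random stand-ins, not real measurements.

```python
# Toy illustration of color metamers. Two spectra are metameric if they
# yield identical cone excitations: find primary weights p with C @ P @ p = C @ t.
# The matrices here are made up, not real cone fundamentals or primaries.
import numpy as np

rng = np.random.default_rng(1)
n_wavelengths = 31                   # e.g., 400-700 nm sampled every 10 nm
C = rng.random((3, n_wavelengths))   # hypothetical sensitivities of 3 cone types
P = rng.random((n_wavelengths, 3))   # spectra of 3 display primaries (R, G, B)
target = rng.random(n_wavelengths)   # an arbitrary test light

# Solve for primary intensities that reproduce the target's cone excitations.
weights = np.linalg.solve(C @ P, C @ target)
match = P @ weights

assert not np.allclose(match, target)       # physically different spectra...
assert np.allclose(C @ match, C @ target)   # ...identical to the cones: a metamer
```

(In real colorimetry some solved weights come out negative, which is why classic color-matching experiments sometimes had to add a primary to the test light instead.)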

In the current work, the researchers randomly chose natural images and sound clips of spoken words from standard databases, and then synthesized sounds and images that deep neural networks would sort into the same classes as their natural counterparts. That is, they generated physically distinct stimuli that are classified identically by models rather than by humans — a new way to think about metamers, swapping in computer models for human perceivers. They therefore called these synthesized stimuli “model metamers” of the paired natural stimuli. The researchers then tested whether humans could identify the words and images.
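In broad strokes, such stimuli can be synthesized by gradient descent: start from noise and adjust the input until the model’s responses at a chosen stage match those evoked by the natural stimulus. The PyTorch sketch below conveys that general idea; it is a minimal reading of the approach rather than the authors’ code, and `model_layer` is a hypothetical callable standing in for a trained network truncated at some layer.

```python
# Sketch of model-metamer synthesis by activation matching (an illustrative
# reconstruction of the general technique, not the paper's implementation).
import torch

def synthesize_metamer(model_layer, natural_input, steps=2000, lr=0.01):
    """model_layer: callable mapping an input tensor to a layer's activations."""
    with torch.no_grad():
        target = model_layer(natural_input)      # activations to reproduce
    metamer = torch.randn_like(natural_input, requires_grad=True)
    opt = torch.optim.Adam([metamer], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(model_layer(metamer), target)
        loss.backward()
        opt.step()
    # The result is physically unlike the natural input yet nearly identical
    # to the model at this stage; the test is whether humans still recognize it.
    return metamer.detach()
```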

“Participants heard a short segment of speech and had to identify from a list of words which word was in the middle of the clip. For the natural audio this task is easy, but for many of the model metamers humans had a hard time recognizing the sound,” explains first author Jenelle Feather, a graduate student in the MIT Department of Brain and Cognitive Sciences (BCS) and a member of the Center for Brains, Minds, and Machines (CBMM). That is, humans would not put the synthetic stimuli in the same class as the spoken word “bird” or the image of a bird. In fact, model metamers generated to match the responses of the deepest layers of the model were generally unrecognizable as words or images by human subjects.

Josh McDermott, associate professor in BCS and investigator in CBMM, makes the following case: “The basic logic is that if we have a good model of human perception, say of speech recognition, then if we pick two sounds that the model says are the same and present these two sounds to a human listener, that human should also say that the two sounds are the same. If the human listener instead perceives the stimuli to be different, this is a clear indication that the representations in our model do not match those of human perception.”

Joining Feather and McDermott on the paper are Alex Durango, a post-baccalaureate student, and Ray Gonzalez, a research assistant, both in BCS.

There is another type of failure of deep networks that has received a lot of attention in the media: adversarial examples (see, for example, “Why did my classifier just mistake a turtle for a rifle?”). These are stimuli that appear similar to humans but are misclassified by a model network (by design — they are constructed to be misclassified). They are complementary to the stimuli generated by Feather’s group, which sound or appear different to humans but are designed to be co-classified by the model network. The vulnerabilities of model networks exposed by adversarial attacks are well known: face-recognition software might mistake identities, and automated vehicles might not recognize pedestrians.

The importance of this work lies in improving models of perception beyond deep networks. Although the standard adversarial examples indicate differences between deep networks and human perceptual systems, the new stimuli generated by the McDermott group arguably represent a more fundamental model failure — they show that generic examples of stimuli classified as the same by a deep network produce wildly different percepts for humans.

The team also figured out ways to modify the model networks to yield metamers that were more plausible sounds and images to humans. As McDermott says, “This gives us hope that we may be able to eventually develop models that pass the metamer test and better capture human invariances.”

“Model metamers demonstrate a significant failure of present-day neural networks to match the invariances in the human visual and auditory systems,” says Feather. “We hope that this work will provide a useful behavioral measuring stick to improve model representations and create better models of human sensory systems.”

Brain science in the Bolivian rainforest

Graduate student Malinda McPherson. Photo: Caitlin Cunningham

Malinda McPherson is a graduate student in Josh McDermott‘s lab, studying how people hear pitch (how high or low a sound is) in both speech and music.

To test the extent to which human audition varies across cultures, McPherson travels with the McDermott lab to Bolivia to study the Tsimane’ — a native Amazonian society with minimal exposure to Western culture.

Their most recent study, published in the journal Current Biology, found a striking variation in perception of musical pitch across cultures.

In this Q&A, we ask McPherson what motivates her research and to describe some of the challenges she has experienced working in the Bolivian rainforest. 

What are you working on now?

Right now, I’m particularly excited about a project that involves working with children; we are trying to better understand how the ability to hear pitch develops with age and experience. Difficulty hearing pitch is one of the first issues that most people with impaired or corrected hearing find discouraging, so in addition to simply being an interesting basic component of audition, understanding how pitch perception develops may be useful in engineering assistive hearing devices.

How has your personal background inspired your research?

I’ve been an avid violist for over twenty years and still perform with the Chamber Music Society at MIT. When I was an undergraduate and deciding between a career as a professional musician and a career in science, I found a way to merge the two by working as a research assistant in a lab studying musical creativity. I worked in that lab for three years and was completely hooked. My musical training has definitely helped me design a few experiments!

What was your most challenging experience in Bolivia?  Most rewarding?

The most challenging aspect of our fieldwork in Bolivia is sustaining our intensity over a period of 4-5 weeks. Every moment is precious, and the pace of work is both exhilarating and exhausting. Despite the long hours of work and travel (by canoe or by truck over very bumpy roads), it is an incredible privilege to meet with and to learn from the Tsimane’. I’ve been picking up some Tsimane’ phrases from the translators with whom we work, and can now have basic conversations with participants and make kids laugh, so that’s a lot of fun. A few children I met my first year greeted me by name when we went back this past year. That was a very special moment!

Translator Manuel Roca Moye (left) with Malinda McPherson and Josh McDermott in a fully loaded canoe. Photo: McDermott lab

What single scientific question do you hope to answer?

I’d be curious to figure out the overlaps and distinctions between how we perceive music versus speech, but I think one of the best aspects of science is that many of the important future questions haven’t been thought of yet!

Single neurons can encode distinct landmarks

The organization of many neurons wired together in a complex circuit gives the brain its ability to perform powerful calculations. Work from the Harnett lab recently showed that even single neurons can process more information than previously thought, representing distinct variables at the subcellular level during behavior.

McGovern Investigator Mark Harnett and postdoc Jakob Voigts conducted an extremely delicate and intricate imaging experiment on different parts of the same neuron in the mouse retrosplenial cortex during 2-D navigation. Their setup allowed 2-photon imaging of neuronal subcompartments during free 2-D navigation with head rotation, the latter being important for following neural activity during naturalistic, complex behavior.

Recording computation by subcompartments in neurons.


In the work, published recently in Neuron, the authors used Ca2+ imaging to show that the soma of a single neuron was consistently active when mice were at particular landmarks as they navigated an arena. The dendrites (tree-like antennas that receive input from other neurons) of the very same neuron were robustly active, independently of the soma, at distinct positions and orientations in the arena. This strongly suggests that the dendrites encode information distinct from that of their parent soma, in this case spatial variables during navigation, laying the foundation for studying subcellular processes during complex behaviors.
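One way to picture the analysis: treat the somatic and dendritic fluorescence traces as separate signals, build an occupancy-normalized spatial activity map for each, and ask how similar the maps are. The Python sketch below illustrates that logic on simulated data; the binning scheme, array shapes, and variable names are illustrative assumptions, not the paper’s pipeline.

```python
# Illustrative sketch: spatial tuning maps for a soma ROI and a dendrite ROI
# of the same neuron (simulated data, not the published analysis).
import numpy as np

def spatial_map(positions, activity, arena_size=1.0, n_bins=10):
    """Mean Ca2+ signal per spatial bin, normalized by time spent in the bin."""
    bins = (positions / arena_size * n_bins).astype(int).clip(0, n_bins - 1)
    total = np.zeros((n_bins, n_bins))
    occupancy = np.zeros((n_bins, n_bins))
    for (x, y), a in zip(bins, activity):
        total[x, y] += a
        occupancy[x, y] += 1
    return np.divide(total, occupancy,
                     out=np.full_like(total, np.nan), where=occupancy > 0)

rng = np.random.default_rng(2)
positions = rng.random((5000, 2))   # (x, y) trajectory during free navigation
soma = rng.random(5000)             # somatic fluorescence trace
dendrite = rng.random(5000)         # dendritic trace from the same neuron

soma_map = spatial_map(positions, soma)
dend_map = spatial_map(positions, dendrite)
# A low correlation between the two maps would suggest the dendrites are
# tuned to arena locations distinct from those of their parent soma.
```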


Shrinking CRISPR tools

Before CRISPR gene-editing tools can be used to treat brain disorders, scientists must find safe ways to deliver the tools to the brain. One promising method involves harnessing viruses that are benign, and replacing non-essential genetic cargo with therapeutic CRISPR tools. But there is limited room for additional tools in a vector already stuffed with essential gear.

Squeezing all the tools needed to edit the genome into a single delivery vector is a challenge. Soumya Kannan is addressing this capacity problem in Feng Zhang’s lab with fellow graduate student Han Altae-Tran by developing smaller CRISPR tools that can be more easily packaged into viral vectors for delivery. She is focused on RNA editors, members of the Cas13 family that can fix small mutations in RNA without making changes to the genome itself.

“The limitation is that RNA editors are large. At this point though, we know that editing works, we understand the mechanism by which it works, and there’s feasible packaging in AAV. We’re now trying to shrink systems such as RESCUE and REPAIR so that they fit into the packaging for delivery.”
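Some rough numbers convey the squeeze. A single AAV can package roughly 4.7 kb, and an RNA editor is a large fusion protein. The back-of-envelope Python sketch below uses approximate, assumed sizes (actual values vary by Cas13 ortholog and construct design) to show why the full-size editor plus its regulatory elements overflows one vector.

```python
# Back-of-envelope AAV payload math. All sizes are approximate assumptions;
# actual values vary by Cas13 ortholog, promoter, and construct design.
AAV_CAPACITY_BP = 4700        # single AAV packaging limit, roughly 4.7 kb
PROMOTER_BP = 600             # a compact promoter (assumed)
POLYA_BP = 200                # polyadenylation signal (assumed)
CAS13_AA = 1100               # a typical Cas13 ortholog, ~1,100 amino acids
ADAR_DOMAIN_AA = 385          # ADAR deaminase domain fused in REPAIR (approx.)

editor_orf_bp = (CAS13_AA + ADAR_DOMAIN_AA) * 3   # 3 nucleotides per codon
headroom = AAV_CAPACITY_BP - (PROMOTER_BP + POLYA_BP + editor_orf_bp)
print(f"editor ORF: {editor_orf_bp} bp, AAV headroom: {headroom} bp")
# Negative headroom means the full-size editor does not fit in one AAV,
# which is why shrinking the editors matters for delivery.
```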

One avenue the Zhang lab has often taken to find new tools is to explore biodiversity for naturally occurring variants, and this is an approach that intrigues Soumya.

“Metagenomics projects are literally sequencing life from the Antarctic ice cores to hot sea vents. It fascinates me to think about the CRISPR tools of ancient organisms and those that live in extreme conditions.”

Researchers continue to search these troves of sequencing data for new tools.


Two CRISPR scientists on the future of gene editing

As part of our Ask the Brain series, Martin Wienisch and Jonathan Wilde of the Feng lab look into the crystal ball to predict the future of CRISPR tech.

_____

Where will CRISPR be in five years?

Jonathan: We’ll definitely have more efficient, more precise, and safer editing tools. An immediate impact on human health may be closer than we think through more nutritious and resilient crops. Also, I think we will have more viable tools available for repairing disease-causing mutations in the brain, which is something that the field is really lacking right now.

Martin: And we can use these technologies with new disease models to help us understand brain disorders such as Huntington’s disease.

Jonathan: There are also incredible tools being discovered in nature: exotic CRISPR systems from newly discovered bacteria and viruses. We could use these to attack disease-causing bacteria.

Martin: We would then be using CRISPR systems for the reason they evolved. Also, improved gene drives, CRISPR systems that can wipe out disease-carrying organisms such as mosquitoes, could impact human health in that time frame.

What will move gene therapy forward?

Martin: A breakthrough on delivery. That’s when therapy will exponentially move forward. Therapy will be tailored to different diseases and disorders, depending on relevant cell types or the location of mutations for example.

Jonathan: Also panning biodiversity even faster: we’ve only looked at one small part of the tree of life for tools. Sequencing and computational advances can help: a future where we collect and analyze genomes in the wild using portable sequencers and laptops can only quicken the pace of new discoveries.

_____


CRISPR: From toolkit to therapy

Think of the human body as a community of cells with specialized roles. Each cell carries the same blueprint, an array of genes comprising the genome, but different cell types have unique functions — immune cells fight invading bacteria, while neurons transmit information.

But when something goes awry, the specialization of these cells becomes a challenge for treatment. For example, neurons lack the active cell-repair systems that promising gene editing techniques like CRISPR rely on.

Can current gene editing tools be modified to work in neurons? Can we reach neurons without impacting healthy cells nearby? McGovern Institute researchers are trying to answer these questions by developing gene editing tools and delivery systems that can target — and repair — faulty brain cells.

Expanding the toolkit

McGovern Investigator Feng Zhang in his lab.

Natural CRISPR systems help bacteria fend off would-be attackers. Our first glimpse of the impact of such systems was the use of CRISPR-Cas9 to edit human cells.

“Harnessing Cas9 was a major game-changer in the life sciences,” explains Feng Zhang, an investigator at the McGovern Institute and the James and Patricia Poitras Professor of Neuroscience at MIT. “But Cas9 is just one flavor of one kind of bacterial defense system — there is a treasure trove of natural systems that may have enormous potential, just waiting to be unlocked.”

By finding and optimizing new molecular tools, the Zhang lab and others have developed CRISPR tools that can now potentially target neurons and fix diverse mutation types, bringing gene therapy within reach.

Precise in space and time

A single letter change to a gene can be devastating. Some genes function only briefly during development, so a temporary “fix” during this window could be beneficial. For such cases, the Zhang lab and others have engineered tools that target short-lived RNAs, the messenger molecules that carry information from DNA to be converted into functional factors in the cell.

“RNA editing is powerful from an ethical and safety standpoint,” explains Soumya Kannan, a graduate student in the Zhang lab working on these tools. “By targeting RNA molecules, which are only present for a short time, we can avoid permanent changes to the genetic material, and we can make these changes in any type of cell.”

Graduate student Soumya Kannan is developing smaller CRISPR tools that can be more easily packaged into viral vectors for delivery. Photo: Caitlin Cunningham

Zhang’s team has developed twin RNA-editing tools, REPAIR and RESCUE, which can fix single RNA bases by bringing together a base editor with the CRISPR protein Cas13. These RNA-editing tools can be used in neurons because they do not rely on cellular machinery to make the targeted changes. They also have the potential to tackle a wide array of diseases in other tissue types.

CAST addition

If a gene is severely disrupted, more radical help may be needed: insertion of a normal gene. For this situation, Zhang’s lab recently identified CRISPR-associated transposases (CASTs) from cyanobacteria. CASTs combine Cas12k, which is targeted by a guide RNA to a precise genome location, with an enzyme that can insert gene-sized pieces of DNA.

“With traditional CRISPR you can make simple changes, similar to changing a few letters or words in a Word document. The new system can ‘copy and paste’ entire genes.” – Alim Ladha

Transposases were originally identified as enzymes that help rogue genes “jump” from one place to another in the genome. CAST uses a similar activity to insert entire genes self-sufficiently, without help from the target cell, so, like REPAIR and RESCUE, it can potentially be used in neurons.

“Our initial work was to fully characterize how this new system works, and test whether it can actually insert genes,” explains Alim Ladha, a graduate fellow in the Tan-Yang Center for Autism Research, who worked on CAST with Jonathan Strecker, a postdoctoral fellow in the Zhang lab.

The goal is now to use CAST to precisely target neurons and other specific cell types affected by disease.

Toward delivery

As the gene-editing toolbox expands, McGovern labs are working on precise delivery systems. Adeno-associated virus (AAV) is an FDA-approved virus for delivering genes, but it has limited room to carry the necessary cargo — CRISPR machinery plus templates — to fix genes.

To tackle this problem, McGovern Investigators Guoping Feng and Feng Zhang are working on reducing the cargo needed for therapy. In addition, the Zhang, Gootenberg and Abudayyeh labs are working on methods to precisely deliver the therapeutic packages to neurons, such as new tissue-specific viruses that can carry bigger payloads. Finally, entirely new modalities for delivery are being explored in the effort to develop gene therapy to a point where it can be safely delivered to patients.

“Cas9 has been a very useful tool for the life sciences,” says Zhang. “And it’ll be exciting to see continued progress with the broadening toolkit and delivery systems, as we make further progress toward safe gene therapies.”

Controlling attention with brain waves

Having trouble paying attention? MIT neuroscientists may have a solution for you: Turn down your alpha brain waves. In a new study, the researchers found that people can enhance their attention by controlling their own alpha brain waves based on neurofeedback they receive as they perform a particular task.

The study found that when subjects learned to suppress alpha waves in one hemisphere of their parietal cortex, they were able to pay better attention to objects that appeared on the opposite side of their visual field. This is the first time that this cause-and-effect relationship has been seen, and it suggests that it may be possible for people to learn to improve their attention through neurofeedback.

Desimone lab study shows that people can boost attention by manipulating their own alpha brain waves with neurofeedback training.

“There’s a lot of interest in using neurofeedback to try to help people with various brain disorders and behavioral problems,” says Robert Desimone, director of MIT’s McGovern Institute for Brain Research. “It’s a completely noninvasive way of controlling and testing the role of different types of brain activity.”

It’s unknown how long these effects might last and whether this kind of control could be achieved with other types of brain waves, such as beta waves, which are linked to Parkinson’s disease. The researchers are now planning additional studies of whether this type of neurofeedback training might help people suffering from attentional or other neurological disorders.

Desimone is the senior author of the paper, which appears in Neuron on Dec. 4. McGovern Institute postdoc Yasaman Bagherzadeh is the lead author of the study. Daniel Baldauf, a former McGovern Institute research scientist, and Dimitrios Pantazis, a McGovern Institute principal research scientist, are also authors of the paper.

Alpha and attention

There are billions of neurons in the brain, and their combined electrical signals generate oscillations known as brain waves. Alpha waves, which oscillate at frequencies of 8 to 12 hertz, are believed to play a role in filtering out distracting sensory information.

Previous studies have shown a strong correlation between attention and alpha brain waves, particularly in the parietal cortex. In humans and in animal studies, a decrease in alpha waves has been linked to enhanced attention. However, it was unclear if alpha waves control attention or are just a byproduct of some other process that governs attention, Desimone says.

To test whether alpha waves actually regulate attention, the researchers designed an experiment in which people were given real-time feedback on their alpha waves as they performed a task. Subjects were asked to look at a grating pattern in the center of a screen, and told to use mental effort to increase the contrast of the pattern as they looked at it, making it more visible.

During the task, subjects were scanned using magnetoencephalography (MEG), which reveals brain activity with millisecond precision. The researchers measured alpha levels in both the left and right hemispheres of the parietal cortex and calculated the degree of asymmetry between the two levels. As the asymmetry between the two hemispheres grew, the grating pattern became more visible, offering the participants real-time feedback.
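In signal-processing terms, a feedback signal of this kind can be built by band-pass filtering each hemisphere’s sensors to the alpha range, computing power, and folding the two values into a normalized asymmetry index. The Python sketch below shows one plausible version; the filter choice, sensor counts, and the mapping from asymmetry to on-screen contrast are assumptions for illustration, not the study’s actual pipeline.

```python
# Minimal sketch of an alpha-asymmetry neurofeedback signal
# (illustrative assumptions throughout, not the study's code).
import numpy as np
from scipy.signal import butter, filtfilt

def alpha_power(meg: np.ndarray, fs: float) -> float:
    """Mean 8-12 Hz power across sensors for one window (sensors x samples)."""
    b, a = butter(4, [8, 12], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, meg, axis=-1)
    return float(np.mean(filtered ** 2))

def asymmetry_index(left: np.ndarray, right: np.ndarray, fs: float) -> float:
    """Signed left/right imbalance in parietal alpha power, in [-1, 1]."""
    pl, pr = alpha_power(left, fs), alpha_power(right, fs)
    return (pl - pr) / (pl + pr)

# Each feedback update: larger asymmetry drives higher grating contrast.
rng = np.random.default_rng(3)
left = rng.standard_normal((20, 600))    # 20 left-parietal sensors, 1 s at 600 Hz
right = rng.standard_normal((20, 600))   # 20 right-parietal sensors
contrast = float(np.clip(0.5 + asymmetry_index(left, right, fs=600.0), 0.0, 1.0))
```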

McGovern postdoc Yasaman Bagherzadeh sits in a magnetoencephalography (MEG) scanner. Photo: Justin Knight

Although subjects were not told anything about what was happening, after about 20 trials (which took about 10 minutes), they were able to increase the contrast of the pattern. The MEG results indicated they had done so by controlling the asymmetry of their alpha waves.

“After the experiment, the subjects said they knew that they were controlling the contrast, but they didn’t know how they did it,” Bagherzadeh says. “We think the basis is conditional learning — whenever you do a behavior and you receive a reward, you’re reinforcing that behavior. People usually don’t have any feedback on their brain activity, but when we provide it to them and reward them, they learn by practicing.”

Although the subjects were not consciously aware of how they were manipulating their brain waves, they were able to do it, and this success translated into enhanced attention on the opposite side of the visual field. As the subjects looked at the pattern in the center of the screen, the researchers flashed dots of light on either side of the screen. The participants had been told to ignore these flashes, but the researchers measured how their visual cortex responded to them.

One group of participants was trained to suppress alpha waves in the left side of the brain, while the other was trained to suppress the right side. In those who had reduced alpha on the left side, their visual cortex showed a larger response to flashes of light on the right side of the screen, while those with reduced alpha on the right side responded more to flashes seen on the left side.

“Alpha manipulation really was controlling people’s attention, even though they didn’t have any clear understanding of how they were doing it,” Desimone says.

Persistent effect

After the neurofeedback training session ended, the researchers asked subjects to perform two additional tasks involving attention, and found that the enhanced attention persisted. In one experiment, subjects were asked to watch for the appearance of a grating pattern similar to the one they had seen during the neurofeedback task. In some of the trials, they were told in advance to pay attention to one side of the visual field, but in others they were not given any direction.

When the subjects were told to pay attention to one side, that instruction was the dominant factor in where they looked. But if they were not given any cue in advance, they tended to pay more attention to the side that had been favored during their neurofeedback training.

In another task, participants were asked to look at an image such as a natural outdoor scene, urban scene, or computer-generated fractal shape. By tracking subjects’ eye movements, the researchers found that people spent more time looking at the side that their alpha waves had trained them to pay attention to.

“It is promising that the effects did seem to persist afterwards,” says Desimone, though more study is needed to determine how long these effects might last.

The research was funded by the McGovern Institute.