New center for autism research established at the McGovern Institute

The McGovern Institute is pleased to announce the establishment of a new center dedicated to autism research. The center is made possible by a kick-off commitment of $20 million, made by Lisa Yang and MIT alumnus Hock Tan ’75.

The Hock E. Tan and K. Lisa Yang Center for Autism Research will support research on the genetic, biological and neural bases of autism spectrum disorder, a developmental disability estimated to affect 1 in 68 individuals in the United States. Tan and Yang hope their initial investment will stimulate additional support and help foster collaborative research efforts to erase the devastating effects of this disorder on individuals, their families and the broader autism community.

“With the Tan-Yang Center for Autism Research, we can imagine a world in which medical science understands and supports those with autism — and we can focus MIT’s distinctive strengths on making that dream a reality. Lisa and Hock’s gift reminds us of the impact we envision for the MIT Campaign for a Better World. I am grateful for their leadership and generosity, and inspired by the possibilities ahead,” says MIT President L. Rafael Reif.

“I am thrilled to be investing in an institution that values a multidisciplinary collaborative approach to solving complex problems such as autism,” says Hock Tan, who graduated from MIT in 1975 with a bachelor’s degree and master’s degree in mechanical engineering. “We expect that successful research originating from our Center will have a significant impact on the autism community.”

Originally from Penang, Malaysia, Tan has held several high-level finance and executive positions since leaving MIT. He is currently CEO of the chipmaker Broadcom Ltd.

Research at the Tan-Yang Center will focus on four major lines of investigation: genetics, neural circuits, novel autism models and the translation of basic research to the clinical setting. By focusing research efforts on the origins of autism in our genes, in the womb and in the first years of life, the Tan-Yang Center aims to develop methods to better detect, and potentially prevent, autism spectrum disorders. To help meet this challenge, the Center will support collaborations across multiple disciplines—from genes to neural circuits—both within and beyond MIT.

“MIT has some of the world’s leading scientists studying autism,” says McGovern Institute director Robert Desimone. “Support from the Tan-Yang Center will enable us to pursue exciting new directions that could not be funded by traditional sources. We will exploit revolutionary new tools, such as CRISPR and optogenetics, that are transforming research in neuroscience. We hope to not only identify new targets for medicines, but also develop novel treatments that are not based on standard pharmacological approaches. By supporting cutting-edge autism research here at MIT as well as our collaborative institutions, the Center holds great promise to accelerate our basic understanding of this complex disorder.”

“Millions of families have been impacted by autism,” says Yang, a longtime advocate for the rights of individuals with disabilities and learning differences. “I am profoundly hopeful that the discoveries made at the Tan-Yang Center will have a long-term impact on the field of autism research and will provide fresh answers and potential new treatments for individuals affected by this disorder.”

Sensor traces dopamine released by single cells

MIT chemical engineers have developed an extremely sensitive detector that can track single cells’ secretion of dopamine, a brain chemical responsible for carrying messages involved in reward-motivated behavior, learning, and memory.

Using arrays of up to 20,000 tiny sensors, the researchers can monitor dopamine secretion of single neurons, allowing them to explore critical questions about dopamine dynamics. Until now, that has been very difficult to do.

“Now, in real-time, and with good spatial resolution, we can see exactly where dopamine is being released,” says Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering and the senior author of a paper describing the research, which appears in the Proceedings of the National Academy of Sciences the week of Feb. 6.

Strano and his colleagues have already demonstrated that, in a type of neural progenitor cell, dopamine release occurs differently than scientists expected, helping to shed light on how dopamine may exert its effects in the brain.

The paper’s lead author is Sebastian Kruss, a former MIT postdoc who is now at Göttingen University, in Germany. Other authors are Daniel Salem and Barbara Lima, both MIT graduate students; Edward Boyden, an associate professor of biological engineering and brain and cognitive sciences, as well as a member of the MIT Media Lab and the McGovern Institute for Brain Research; Lela Vukovic, an assistant professor of chemistry at the University of Texas at El Paso; and Emma Vander Ende, a graduate student at Northwestern University.

“A global effect”

Dopamine is a neurotransmitter that plays important roles in learning, memory, and feelings of reward, which reinforce positive experiences.

Neurotransmitters allow neurons to relay messages to nearby neurons through connections known as synapses. However, unlike most other neurotransmitters, dopamine can exert its effects beyond the synapse: Not all dopamine released into a synapse is taken up by the target cell, allowing some of the chemical to diffuse away and affect other nearby cells.

“It has a local effect, which controls the signaling through the neurons, but also it has a global effect,” Strano says. “If dopamine is in the region, it influences all the neurons nearby.”

Tracking this dopamine diffusion in the brain has proven difficult. Neuroscientists have tried using electrodes that are specialized to detect dopamine, but even using the smallest electrodes available, they can place only about 20 near any given cell.

“We’re at the infancy of really understanding how these packets of chemicals move and their directionality,” says Strano, who decided to take a different approach.

Strano’s lab has previously developed sensors made from arrays of carbon nanotubes — hollow, nanometer-thick cylinders made of carbon, which naturally fluoresce when exposed to laser light. By wrapping these tubes in different proteins or DNA strands, scientists can customize them to bind to different types of molecules.

The carbon nanotube sensors used in this study are coated with a DNA sequence that makes the sensors interact with dopamine. When dopamine binds to the carbon nanotubes, they fluoresce more brightly, allowing the researchers to see exactly where the dopamine is released. The researchers deposited more than 20,000 of these nanotubes on a glass slide, creating an array that detects any dopamine secreted by a cell placed on the slide.

Dopamine diffusion

In the new PNAS study, the researchers used these dopamine sensors to explore a longstanding question about dopamine release in the brain: From which part of the cell is dopamine secreted?

To help answer that question, the researchers placed individual neural progenitor cells known as PC-12 cells onto the sensor arrays. PC-12 cells, which develop into neuron-like cells under the right conditions, have a starfish-like shape with several protrusions that resemble axons, which form synapses with other cells.

After stimulating the cells to release dopamine, the researchers found that certain dopamine sensors near the cells lit up immediately, while those farther away turned on later as the dopamine diffused away. Tracking those patterns over many seconds allowed the researchers to trace how dopamine spreads away from the cells.
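
As a rough illustration of what this kind of tracking involves, the sketch below is not the authors’ analysis code; the array dimensions, thresholds, and the synthetic release event are all assumptions. It finds, for each sensor in a fluorescence movie, the moment its signal first brightens, and then relates that onset time to the sensor’s distance from the apparent release site.

```python
# Illustrative sketch (not the study's code): localize a dopamine release event
# on a nanosensor array and check that onset time grows with distance.
import numpy as np

def release_onset_map(frames, baseline_frames=10, threshold=0.2):
    """frames: (time, rows, cols) fluorescence intensities from the sensor array."""
    baseline = frames[:baseline_frames].mean(axis=0)
    dff = (frames - baseline) / (baseline + 1e-9)      # relative brightening (dF/F)
    responding = dff.max(axis=0) > threshold           # sensors that ever lit up
    onset = np.argmax(dff > threshold, axis=0).astype(float)
    onset[~responding] = np.nan                        # never crossed threshold
    return onset

def diffusion_profile(onset, frame_interval_s=0.2):
    """Relate each sensor's onset time to its distance from the earliest sensor."""
    origin = np.unravel_index(np.nanargmin(onset), onset.shape)
    rows, cols = np.indices(onset.shape)
    distance = np.hypot(rows - origin[0], cols - origin[1])
    mask = ~np.isnan(onset)
    return distance[mask], onset[mask] * frame_interval_s

# Synthetic example: a release that starts at the array center after the
# baseline period and spreads outward at a fixed speed.
rng = np.random.default_rng(0)
t, n = 50, 64
rows, cols = np.indices((n, n))
d = np.hypot(rows - n // 2, cols - n // 2)
spread = np.maximum(np.arange(t)[:, None, None] - 12, 0) * 1.5 > d
frames = 1.0 + 0.5 * spread + 0.02 * rng.standard_normal((t, n, n))

dist, onset_s = diffusion_profile(release_onset_map(frames))
print(f"onset delay grows with distance from the source: r = {np.corrcoef(dist, onset_s)[0, 1]:.2f}")
```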

Strano says one might expect most of the dopamine to be released from the tips of the arms extending out from the cells. However, the researchers found that in fact more dopamine came from the sides of the arms.

“We have falsified the notion that dopamine should only be released at these regions that will eventually become the synapses,” Strano says. “This observation is counterintuitive, and it’s a new piece of information you can only obtain with a nanosensor array like this one.”

The team also showed that most of the dopamine traveled away from the cell, through protrusions extending in opposite directions. “Even though dopamine is not necessarily being released only at the tip of these protrusions, the direction of release is associated with them,” Salem says.

Other questions that could be explored using these sensors include how dopamine release is affected by the direction of input to the cell, and how the presence of nearby cells influences each cell’s dopamine release.

The research was funded by the National Science Foundation, the National Institutes of Health, a University of Illinois Center for the Physics of Living Cells Postdoctoral Fellowship, the German Research Foundation, and a Liebig Fellowship.

Rethinking mental illness treatment

McGovern researchers are finding neural markers that could help improve treatment for psychiatric patients.

Ten years ago, Jim and Pat Poitras committed $20 million to the McGovern Institute to establish the Poitras Center for Affective Disorders Research. The Poitras family had been longtime supporters of MIT, and because they had seen mental illness in their own family, they decided to support an ambitious new program at the McGovern Institute, with the goal of understanding the fundamental biological basis of depression, bipolar disorder, schizophrenia and other major psychiatric disorders.

The gift came at an opportune time, as the field was entering a new phase of discovery, with rapid advances in psychiatric genomics and brain imaging, and with the emergence of new technologies for genome editing and for the development of animal models. Over the past ten years, the Poitras Center has supported work in each of these areas, including Feng Zhang’s work on CRISPR-based genome editing, and Guoping Feng’s work on animal models for autism, schizophrenia and other psychiatric disorders.

This reflects a long-term strategy, says Robert Desimone, director of the McGovern Institute who oversees the Poitras Center. “But we must not lose sight of the overall goal, which is to benefit human patients. Insights from animal models and genomic medicine have the potential to transform the treatments of the future, but we are also interested in the nearer term, and in what we can do right now.”

One area where technology can have a near-term impact is human brain imaging, and in collaboration with clinical researchers at McLean Hospital, Massachusetts General Hospital and other institutions, the Poitras Center has supported an ambitious program to bring human neuroimaging closer to the clinic.

Discovering psychiatry’s crystal ball

A fundamental problem in psychiatry is that there are no biological markers for diagnosing mental illness or for indicating how best to treat it. Treatment decisions are based entirely on symptoms, and doctors and their patients will typically try one treatment, then if it does not work, try another, and perhaps another. The success rates for the first treatments are often less than 50%, and finding what works for an individual patient often means a long and painful process of trial and error.

McGovern research scientist Susan Whitfield-Gabrieli and her colleagues are hoping to change this picture, with the help of brain imaging. Their findings suggest that brain scans can hold valuable information for psychiatrists and their patients. “We need a paradigm shift in how we use imaging. It can be used for more than research,” says Whitfield-Gabrieli, who is a member of McGovern Investigator John Gabrieli’s lab. “It would be a really big boost to be able to use it to personalize psychiatric medicine.”

One of Whitfield-Gabrieli’s goals is to find markers that can predict which treatments will work for which patients. Another is to find markers that can predict the likely risk of disease in the future, allowing doctors to intervene before symptoms first develop. All of these markers need further validation before they are ready for the clinic, but they have the potential to meet a dire need to improve treatment for psychiatric disease.

A brain at rest

For Whitfield-Gabrieli, who both collaborates with and is married to Gabrieli, that paradigm shift began when she started to study the resting brain using functional magnetic resonance imaging (fMRI). Most brain imaging studies require the subject to perform a mental task in the scanner, but these are time-consuming and often hard to replicate in a clinical setting. In contrast, resting state imaging requires no task. The subject simply lies in the scanner and lets the mind wander. The patterns of activity can reveal functional connections within the brain, and are reliably consistent from study to study.
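
For readers curious about what a resting-state “functional connection” means in practice, here is a minimal sketch of the standard approach, using entirely synthetic data and hypothetical region labels: correlate the spontaneous activity time series of each pair of brain regions.

```python
# Minimal sketch of resting-state functional connectivity (illustrative only,
# not the lab's pipeline). Region names and time series are made up.
import numpy as np

rng = np.random.default_rng(1)
n_timepoints, n_regions = 150, 4                  # e.g., a ~5-minute scan, TR = 2 s
regions = ["mPFC", "PCC", "dlPFC", "visual"]      # hypothetical region labels

# Synthetic time series: the first two "default mode" regions share a common signal.
shared = rng.standard_normal(n_timepoints)
ts = rng.standard_normal((n_timepoints, n_regions))
ts[:, 0] += shared
ts[:, 1] += shared

connectivity = np.corrcoef(ts, rowvar=False)      # region-by-region correlation matrix
for i in range(n_regions):
    for j in range(i + 1, n_regions):
        print(f"{regions[i]:>6} - {regions[j]:<6} r = {connectivity[i, j]:+.2f}")
```

Strongly correlated regions, such as the two simulated default mode regions above, are said to be functionally connected; it is disturbances in such patterns that the studies described below look for.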

Whitfield-Gabrieli thought resting state scanning had the potential to help patients because it is simple and easy to perform.

“Even a 5-minute scan can contain useful information that could help people,” says Satrajit Ghosh, a principal research scientist in the Gabrieli lab who works closely with Whitfield-Gabrieli.

Whitfield-Gabrieli and her clinical collaborator Larry Seidman at Harvard Medical School decided to study resting state activity in patients with schizophrenia. They found a pattern of activity strikingly different from that of typical brains. The patients showed unusually strong activity in a set of interconnected brain regions known as the default mode network, which is typically activated during introspection. It is normally suppressed when a person attends to the outside world, but schizophrenia patients failed to show this suppression.

“The patient isn’t able to toggle between internal processing and external processing the way a typical individual can,” says Whitfield-Gabrieli, whose work is supported by the Poitras Center for Affective Disorders Research.

Since then, the team has observed similar disturbances in the default network in other disorders, including depression, anxiety, bipolar disorder, and ADHD. “We knew we were onto something interesting,” says Whitfield-Gabrieli. “But we kept coming back to the question: how can brain imaging help patients?”

fMRI on patients

Many imaging studies aim to understand the biological basis of disease and ultimately to guide the development of new drugs or other treatments. But this is a long-term goal, and Whitfield-Gabrieli wanted to find ways that brain imaging could have a more immediate impact. So she and Ghosh decided to use fMRI to look at differences among individual patients, and to focus on differences in how they responded to treatment.

“It gave us something objective to measure,” explains Ghosh. “Someone goes through a treatment, and they either get better or they don’t.” The project also had appeal for Ghosh because it was an opportunity for him to use his expertise in machine learning and other computational tools to build systems-level models of the brain.

For the first study, the team decided to focus on social anxiety disorder (SAD), which is typically treated with either prescription drugs or cognitive behavioral therapy (CBT). Both are moderately effective, but many patients do not respond to the first treatment they try.

The team began with a small study to test whether scans performed before the onset of treatment could predict who would respond best to the treatment. Working with Stefan Hofmann, a clinical psychologist at Boston University, they scanned 38 SAD patients before they began a 12-week course of CBT. At the end of their treatment, the patients were evaluated for clinical improvement, and the researchers examined the scans for patterns of activity that correlated with the improvement. The results were very encouraging; it turned out that predictions based on scan data were 5-fold better than the existing methods based on severity of symptoms at the time of diagnosis.
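
The sketch below gives a feel for this kind of analysis; it is not the study’s actual model, and the patients, connectivity features, and improvement scores are all synthetic. It fits a simple regression from pre-treatment features to clinical improvement and reports held-out prediction accuracy via cross-validation.

```python
# Illustrative sketch (assumed setup, synthetic data): predict symptom improvement
# after CBT from pre-treatment scan features, scored with cross-validation.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_patients, n_features = 38, 20          # e.g., 38 patients, 20 connectivity features
X = rng.standard_normal((n_patients, n_features))

true_weights = np.zeros(n_features)
true_weights[:3] = [0.8, -0.6, 0.5]      # a few features carry signal in this toy example
improvement = X @ true_weights + 0.5 * rng.standard_normal(n_patients)

# Cross-validated R^2: how well do scan features predict improvement in held-out patients?
model = Ridge(alpha=1.0)
scores = cross_val_score(model, X, improvement, cv=5)
print(f"cross-validated R^2 from scan features: {scores.mean():.2f}")
```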

The researchers then turned to another condition, ADHD, which presents a similar clinical challenge, in that commonly used drugs—such as Adderall or Ritalin—work well, but not for everyone. So the McGovern team began a collaboration with psychiatrist Joseph Biederman, Chief of Clinical and Research Programs in Pediatric Psychopharmacology and Adult ADHD at Massachusetts General Hospital, on a similar study, looking for markers of treatment response.

The study is still ongoing, and it will be some time before results emerge, but the researchers are optimistic. “If we could predict who would respond to which treatment and avoid months of trial and error, it would be totally transformative for ADHD,” says Biederman.

Another goal is to predict in advance who is likely to develop a given disease in the future. The researchers have scanned children who have close relatives with schizophrenia or depression, and who are therefore at increased risk of developing these disorders themselves. Surprisingly, the children show patterns of resting state connectivity similar to those of patients.

“I was really intrigued by this,” says Whitfield-Gabrieli. “Even though these children are not sick, they have the same profile as adults who are.”

Whitfield-Gabrieli and Seidman are now expanding their study through a collaboration with clinical researchers at the Shanghai Mental Institute in China, who plan to image and then follow 225 people who are showing early risk signs for schizophrenia. They hope to find markers that predict who will develop the disease and who will not.

“While there are no drugs available to prevent schizophrenia, it may be possible to reduce the risk or severity of the disorder through CBT, or through interventions that reduce stress and improve sleep and well-being,” says Whitfield-Gabrieli. “One likely key to success is early identification of those at highest risk. If we could diagnose early, we could do early interventions and potentially prevent disorders.”

From association to prediction

The search for predictive markers represents a departure from traditional psychiatric imaging studies, in which a group of patients is compared with a control group of healthy subjects. Studies of this type can reveal average differences between the groups, which may provide clues to the underlying biology of the disease. But they don’t provide information about individual patients, and so they have not been incorporated into clinical practice.

The difference is critical for clinicians, says Biederman. “I treat individuals, not groups. To bring predictive scans to the clinic, we need to be sure the individual scan is informative for the person you are treating.”

To develop these predictions, Whitfield-Gabrieli and Ghosh must first use sophisticated computational methods such as ‘deep learning’ to identify patterns in their data and to build models that relate the patterns to the clinical outcomes. They must then show that these models can generalize beyond the original study population—for example, that predictions based on patients from Boston can be applied to patients from Shanghai. The eventual goal is a model that can analyze a previously unseen brain scan from any individual, and predict with high confidence whether that person will (for example) develop schizophrenia or respond successfully to a particular therapy.
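
A minimal sketch of such a generalization check is shown below, under an assumed setup with synthetic data and hypothetical site labels (it is not the team’s pipeline): each fold trains a classifier on subjects from all but one site and tests it on the held-out site.

```python
# Illustrative leave-one-site-out validation (synthetic data, hypothetical sites):
# accuracy here reflects how well the model transfers to a population it never saw.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(3)
n_subjects, n_features = 120, 30
X = rng.standard_normal((n_subjects, n_features))

weights = np.zeros(n_features)
weights[:4] = [1.2, -1.0, 0.8, 0.6]
y = (X @ weights + rng.standard_normal(n_subjects) > 0).astype(int)   # e.g., later diagnosis yes/no
site = np.repeat(["Boston", "Shanghai", "McLean"], n_subjects // 3)   # hypothetical site labels

# Each fold trains on two sites and tests on the third.
model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, X, y, groups=site, cv=LeaveOneGroupOut())
print(f"leave-one-site-out accuracies: {np.round(scores, 2)}, mean = {scores.mean():.2f}")
```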

Achieving this will be challenging, because it will require scanning and following large numbers of subjects from diverse demographic groups—thousands of people, not just tens or hundreds as in most clinical studies. Collaborations with large hospitals, such as the one in Shanghai, can help. Whitfield-Gabrieli has also received funding to collect imaging, clinical, and behavioral data from over 200 adolescents with depression and anxiety, as part of the National Institutes of Health’s Human Connectome effort. These data, collected in collaboration with clinicians at McLean Hospital, MGH and Boston University, will be available not only for the Gabrieli team, but for researchers anywhere to analyze. This is important, because no one team or center can do it alone, says Ghosh. “Data must be collected by many and shared by all.”

The ultimate goal is to study as many patients as possible now so that the tools can help many more later. “Someday, a person will be able to go to a hospital, get a brain scan, charge it to their insurance, and know that it helped the doctor select the best treatment,” says Ghosh. “We’re still far away from that. But that is what we want to work towards.”

Feng Zhang named James and Patricia Poitras Professor in Neuroscience

The McGovern Institute for Brain Research at MIT has announced the appointment of Feng Zhang as the inaugural chairholder of the James and Patricia Poitras (1963) Professorship in Neuroscience. This new endowed professorship was made possible through a generous gift by Patricia and James Poitras ’63. The professorship is the second endowed chair Mr. and Mrs. Poitras have established at MIT, and extends their longtime support for mental health research.

“This newly created chair further enhances all that Jim and Pat have done for mental illness research at MIT,” said Robert Desimone, director of the McGovern Institute. “The Poitras Center for Affective Disorders Research has galvanized psychiatric research in multiple labs at MIT, and this new professorship will grant critical support to Professor Zhang’s genome engineering technologies, which continue to significantly advance mental illness research in labs worldwide.”

James and Patricia Poitras founded the Poitras Center for Affective Disorders Research at MIT in 2007. The Center has enabled dozens of advances in mental illness research, including the development of new disease models and novel technologies. Partnerships between the center and McLean Hospital have also resulted in improved methods for predicting and treating psychiatric disorders. In 2003, the Poitras family established the James W. (1963) and Patricia T. Poitras Professorship in Neuroscience in MIT’s Department of Brain and Cognitive Sciences, currently held by Guoping Feng.

“Providing support for high-risk, high-reward projects that have the potential to significantly impact individuals living with mental illness has been immensely rewarding to us,” Mr. and Mrs. Poitras say. “We are most interested in bringing basic scientific research to bear on new treatment options for psychiatric diseases. The work of Feng Zhang and his team is immeasurably promising to us and to the field of brain disorders research.”

Zhang joined MIT in 2011 as an investigator in the McGovern Institute for Brain Research and an assistant professor in the departments of Brain and Cognitive Sciences and Biological Engineering. In 2013, he was named the W.M. Keck Career Development Professor in Biomedical Engineering, and in 2016 he was awarded tenure. In addition to his roles at MIT, Zhang is a core member of the Broad Institute of MIT and Harvard.

“I am deeply honored to be named the first James and Patricia Poitras Professor in Neuroscience,” says Zhang. “The Poitras Family and I share a passion for researching, treating, and eventually curing major mental illness. This chair is a terrific recognition of my group’s dedication to advancing genomic and molecular tools to research and one day solve psychiatric illness.”

Zhang earned his BA in chemistry and physics from Harvard College and his PhD in chemistry from Stanford University. Zhang has received numerous awards for his work in genome editing, especially the CRISPR gene editing system, and optogenetics. These include the Perl-UNC Neuroscience Prize, the National Science Foundation’s Alan T. Waterman Award, the Jacob Heskel Gabbay Award in Biotechnology and Medicine, the Society for Neuroscience’s Young Investigator Award, the Okazaki Award, the Canada Gairdner International Award, and the Tang Prize. Zhang is a founder of Editas Medicine, a genome editing company founded by world leaders in the fields of genome editing, protein engineering, and molecular and structural biology.

Neuroscientists get a glimpse into the workings of the baby brain

In adults, certain regions of the brain’s visual cortex respond preferentially to specific types of input, such as faces or objects — but how and when those preferences arise has long puzzled neuroscientists.

One way to help answer that question is to study the brains of very young infants and compare them to adult brains. However, scanning the brains of awake babies in an MRI machine has proven difficult.

Now, neuroscientists at MIT have overcome that obstacle, adapting their MRI scanner to make it easier to scan infants’ brains as the babies watch movies featuring different types of visual input. Using these data, the team found that in some ways, the organization of infants’ brains is surprisingly similar to that of adults. Specifically, brain regions that respond to faces in adults do the same in babies, as do regions that respond to scenes.

“It suggests that there’s a stronger biological predisposition than I would have guessed for specific cortical regions to end up with specific functions,” says Rebecca Saxe, a professor of brain and cognitive sciences and member of MIT’s McGovern Institute for Brain Research.

Saxe is the senior author of the study, which appears in the Jan. 10 issue of Nature Communications. The paper’s lead author is former MIT graduate student Ben Deen, who is now a postdoc at Rockefeller University.

MRI adaptations

Functional magnetic resonance imaging (fMRI) is the go-to technique for studying brain function in adults. However, very few researchers have taken on the challenge of trying to scan babies’ brains, especially while they are awake.

“Babies and MRI machines have very different needs,” Saxe points out. “Babies would like to do activities for two or three minutes and then move on. They would like to be sitting in a comfortable position, and in charge of what they’re looking at.”

On the other hand, “MRI machines would like to be loud and dark and have a person show up on schedule, stay still for the entire time, pay attention to one thing for two hours, and follow instructions closely,” she says.

To make the setup more comfortable for babies, the researchers made several modifications to the MRI machine and to their usual experimental protocols. First, they built a special coil (part of the MRI scanner that acts as a radio antenna) that allows the baby to recline in a seat similar to a car seat. A mirror in front of the baby’s face allows him or her to watch videos, and there is space in the machine for a parent or one of the researchers to sit with the baby.

The researchers also made the scanner much less noisy than a typical MRI machine. “It’s quieter than a loud restaurant,” Saxe says. “The baby can hear their parent talking over the sound of the scanner.”

Once the babies, who were 4 to 6 months old, were in the scanner, the researchers played the movies continuously while scanning the babies’ brains. However, they only used data from the time periods when the babies were actively watching the movies. From 26 hours of scanning 17 babies, the researchers obtained four hours of usable data from nine babies.

“The sheer tenacity of this work is truly amazing,” says Charles Nelson, a professor of pediatrics at Boston Children’s Hospital, who was not involved in the research. “The fact that they pulled this off is incredibly novel.”

Obtaining this data allowed the MIT team to study how infants’ brains respond to specific types of sensory input, and to compare their responses with those of adults.

“The big-picture question is, how does the adult brain come to have the structure and function that you see in adulthood? How does it get like that?” Saxe says. “A lot of the answer to that question will depend on having the tools to be able to see the baby brain in action. The more we can see, the more we can ask that kind of question.”

Distinct preferences

The researchers showed the babies videos of either smiling children or outdoor scenes such as a suburban street seen from a moving car. Distinguishing social scenes from the physical environment is one of the main high-level divisions that our brains make when interpreting the world.

“The questions we’re asking are about how you understand and organize your world, with vision as the main modality for getting you into these very different mindsets,” Saxe says. “In adults, there are brain regions that prefer to look at faces and socially relevant things, and brain regions that prefer to look at environments and objects.”

The scans revealed that many regions of the babies’ visual cortex showed the same preferences for scenes or faces seen in adult brains. This suggests that these preferences form within the first few months of life and refutes the hypothesis that it takes years of experience interpreting the world for the brain to develop the responses that it shows in adulthood.

The researchers also found some differences in the way that babies’ brains respond to visual stimuli. One is that they do not seem to have regions found in the adult brain that are “highly selective,” meaning these regions prefer features such as human faces over any other kind of input, including human bodies or the faces of other animals. The babies also showed some differences in their responses when shown examples from four different categories — not just faces and scenes but also bodies and objects.

“We believe that the adult-like organization of infant visual cortex provides a scaffolding that guides the subsequent refinement of responses via experience, ultimately leading to the strongly specialized regions observed in adults,” Deen says.

Saxe and colleagues now hope to try to scan more babies between the ages of 3 and 8 months so they can get a better idea of how these vision-processing regions change over the first several months of life. They also hope to study even younger babies to help them discover when these distinctive brain responses first appear.

Distinctive brain pattern may underlie dyslexia

A distinctive neural signature found in the brains of people with dyslexia may explain why these individuals have difficulty learning to read, according to a new study from MIT neuroscientists.

The researchers discovered that in people with dyslexia, the brain has a diminished ability to acclimate to a repeated input — a trait known as neural adaptation. For example, when dyslexic students see the same word repeatedly, brain regions involved in reading do not show the same adaptation seen in typical readers.

This suggests that the brain’s plasticity, which underpins its ability to learn new things, is reduced, says John Gabrieli, the Grover M. Hermann Professor in Health Sciences and Technology, a professor of brain and cognitive sciences, and a member of MIT’s McGovern Institute for Brain Research.

“It’s a difference in the brain that’s not about reading per se, but it’s a difference in perceptual learning that’s pretty broad,” says Gabrieli, who is the study’s senior author. “This is a path by which a brain difference could influence learning to read, which involves so many demands on plasticity.”

Former MIT graduate student Tyler Perrachione, who is now an assistant professor at Boston University, is the lead author of the study, which appears in the Dec. 21 issue of Neuron.

Reduced plasticity

The MIT team used magnetic resonance imaging (MRI) to scan the brains of young adults with and without reading difficulties as they performed a variety of tasks. In the first experiment, the subjects listened to a series of words read by either four different speakers or a single speaker.

The MRI scans revealed distinctive patterns of activity in each group of subjects. In nondyslexic people, areas of the brain that are involved in language showed neural adaptation after hearing words said by the same speaker, but not when different speakers said the words. However, the dyslexic subjects showed much less adaptation to hearing words said by a single speaker.

Neurons that respond to a particular sensory input usually react strongly at first, but their response becomes muted as the input continues. This neural adaptation reflects chemical changes in neurons that make it easier for them to respond to a familiar stimulus, Gabrieli says. This phenomenon, known as plasticity, is key to learning new skills.

“You learn something upon the initial presentation that makes you better able to do it the second time, and the ease is marked by reduced neural activity,” Gabrieli says. “Because you’ve done something before, it’s easier to do it again.”

The researchers then ran a series of experiments to test how broad this effect might be. They asked subjects to look at a series of the same word or different words; pictures of the same object or different objects; and pictures of the same face or different faces. In each case, they found that in people with dyslexia, brain regions devoted to interpreting words, objects, and faces, respectively, did not show neural adaptation when the same stimuli were repeated multiple times.
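
As a toy illustration of how such an effect can be quantified (the numbers below are synthetic, not the study’s data), one can define an adaptation index as the fractional drop in response when a stimulus repeats, and compare it between groups.

```python
# Illustrative sketch with synthetic numbers: compare neural adaptation between groups.
import numpy as np

rng = np.random.default_rng(4)
n_per_group = 20

def adaptation_index(novel, repeated):
    """Fractional reduction in response for repeated vs. novel stimuli."""
    return (novel - repeated) / novel

# Hypothetical mean fMRI responses (arbitrary units) per subject.
typical_novel   = rng.normal(1.0, 0.1, n_per_group)
typical_repeat  = rng.normal(0.6, 0.1, n_per_group)   # strong adaptation
dyslexic_novel  = rng.normal(1.0, 0.1, n_per_group)
dyslexic_repeat = rng.normal(0.85, 0.1, n_per_group)  # weaker adaptation

typical_idx  = adaptation_index(typical_novel, typical_repeat)
dyslexic_idx = adaptation_index(dyslexic_novel, dyslexic_repeat)
print(f"mean adaptation index, typical readers:  {typical_idx.mean():.2f}")
print(f"mean adaptation index, dyslexic readers: {dyslexic_idx.mean():.2f}")
```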

“The brain location changed depending on the nature of the content that was being perceived, but the reduced adaptation was consistent across very different domains,” Gabrieli says.

He was surprised to see that this effect was so widespread, appearing even during tasks that have nothing to do with reading; people with dyslexia have no documented difficulties in recognizing objects or faces.

He hypothesizes that the impairment shows up primarily in reading because deciphering letters and mapping them to sounds is such a demanding cognitive task. “There are probably few tasks people undertake that require as much plasticity as reading,” Gabrieli says.

Early appearance

In their final experiment, the researchers tested first and second graders with and without reading difficulties, and they found the same disparity in neural adaptation.

“We got almost the identical reduction in plasticity, which suggests that this is occurring quite early in learning to read,” Gabrieli says. “It’s not a consequence of a different learning experience over the years in struggling to read.”

Gabrieli’s lab now plans to study younger children to see if these differences might be apparent even before children begin to learn to read. They also hope to use other types of brain measurements such as magnetoencephalography (MEG) to follow the time course of the neural adaptation more closely.

The research was funded by the Ellison Medical Foundation, the National Institutes of Health, and a National Science Foundation Graduate Research Fellowship.