Neuroscientists get at the roots of pessimism

Many patients with neuropsychiatric disorders such as anxiety or depression experience negative moods that lead them to focus on the possible downside of a given situation more than the potential benefit.

MIT neuroscientists have now pinpointed a brain region that can generate this type of pessimistic mood. In tests in animals, they showed that stimulating this region, known as the caudate nucleus, induced animals to make more negative decisions: They gave far more weight to the anticipated drawback of a situation than its benefit, compared to when the region was not stimulated. This pessimistic decision-making could continue through the day after the original stimulation.

The findings could help scientists better understand how some of the crippling effects of depression and anxiety arise, and guide them in developing new treatments.

“We feel we were seeing a proxy for anxiety, or depression, or some mix of the two,” says Ann Graybiel, an MIT Institute Professor, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study, which appears in the Aug. 9 issue of Neuron. “These psychiatric problems are still so very difficult to treat for many individuals suffering from them.”

The paper’s lead authors are McGovern Institute research affiliates Ken-ichi Amemori and Satoko Amemori, who perfected the tasks and have been studying emotion and how it is controlled by the brain. McGovern Institute researcher Daniel Gibson, an expert in data analysis, is also an author of the paper.

Emotional decisions

Graybiel’s laboratory has previously identified a neural circuit that underlies a specific kind of decision-making known as approach-avoidance conflict. These types of decisions, which require weighing options with both positive and negative elements, tend to provoke a great deal of anxiety. Her lab has also shown that chronic stress dramatically affects this kind of decision-making: More stress usually leads animals to choose high-risk, high-payoff options.

In the new study, the researchers wanted to see if they could reproduce an effect that is often seen in people with depression, anxiety, or obsessive-compulsive disorder. These patients tend to engage in ritualistic behaviors designed to combat negative thoughts, and to place more weight on the potential negative outcome of a given situation. This kind of negative thinking, the researchers suspected, could influence approach-avoidance decision-making.

To test this hypothesis, the researchers stimulated the caudate nucleus, a brain region linked to emotional decision-making, with a small electrical current as animals were offered a reward (juice) paired with an unpleasant stimulus (a puff of air to the face). In each trial, the ratio of reward to aversive stimulus varied, and the animals could choose whether or not to accept the offer.

This kind of decision-making requires cost-benefit analysis. If the reward is high enough to balance out the puff of air, the animals will choose to accept it, but when that ratio is too low, they reject it. When the researchers stimulated the caudate nucleus, the cost-benefit calculation became skewed, and the animals began to avoid combinations that they previously would have accepted. This continued even after the stimulation ended, and could also be seen the following day, after which point it gradually disappeared.
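The logic of this cost-benefit rule, and the way an overweighted cost flips previously accepted offers into rejections, can be sketched in a few lines. The weights and offer values below are hypothetical illustrations, not the model or parameters used in the study.

```python
# Illustrative sketch of an approach-avoidance decision rule. The weights and
# offers are made up; "stimulated" simply overweights the cost, mimicking the
# pessimistic shift described above.

def accepts(reward, airpuff, reward_weight=1.0, cost_weight=1.0):
    """Accept the offer if the weighted benefit outweighs the weighted cost."""
    return reward_weight * reward > cost_weight * airpuff

offers = [(3.0, 1.0), (2.0, 1.5), (1.0, 2.0)]  # (juice reward, air-puff strength)

for reward, airpuff in offers:
    baseline = accepts(reward, airpuff)                     # normal weighting
    stimulated = accepts(reward, airpuff, cost_weight=2.0)  # cost overestimated
    print(f"offer {reward}:{airpuff}  baseline={'accept' if baseline else 'reject'}  "
          f"stimulated={'accept' if stimulated else 'reject'}")
```

With these made-up numbers, the middle offer is accepted under normal weighting but rejected once the cost is overweighted, mirroring how stimulated animals began refusing combinations they previously would have accepted.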

This result suggests that the animals began to devalue the reward that they previously wanted, and focused more on the cost of the aversive stimulus. “This state we’ve mimicked has an overestimation of cost relative to benefit,” Graybiel says.

The study provides valuable insight into the role of the basal ganglia (a region that includes the caudate nucleus) in this type of decision-making, says Scott Grafton, a professor of neuroscience at the University of California at Santa Barbara, who was not involved in the research.

“We know that the frontal cortex and the basal ganglia are involved, but the relative contributions of the basal ganglia have not been well understood,” Grafton says. “This is a nice paper because it puts some of the decision-making process in the basal ganglia as well.”

A delicate balance

The researchers also found that brainwave activity in the caudate nucleus was altered when decision-making patterns changed. This change, discovered by Amemori, is in the beta frequency and might serve as a biomarker to monitor whether animals or patients respond to drug treatment, Graybiel says.

Graybiel is now working with psychiatrists at McLean Hospital to study patients who suffer from depression and anxiety, to see if their brains show abnormal activity in the neocortex and caudate nucleus during approach-avoidance decision-making. Magnetic resonance imaging (MRI) studies have shown abnormal activity in two regions of the medial prefrontal cortex that connect with the caudate nucleus.

The caudate nucleus has within it regions that are connected with the limbic system, which regulates mood, and it sends input to motor areas of the brain as well as dopamine-producing regions. Graybiel and Amemori believe that the abnormal activity seen in the caudate nucleus in this study could be somehow disrupting dopamine activity.

“There must be many circuits involved,” she says. “But apparently we are so delicately balanced that just throwing the system off a little bit can rapidly change behavior.”

The research was funded by the National Institutes of Health, the CHDI Foundation, the U.S. Office of Naval Research, the U.S. Army Research Office, MEXT KAKENHI, the Simons Center for the Social Brain, the Naito Foundation, the Uehara Memorial Foundation, Robert Buxton, Amy Sommer, and Judy Goldberg.

Testing the limits of artificial visual recognition systems

While it can sometimes seem hard to see the forest for the trees, pat yourself on the back: as a human you are actually pretty good at object recognition. A major goal for artificial visual recognition systems is to be able to distinguish objects the way that humans do. If you see a tree or a bush from almost any angle, in any degree of shading (or even rendered in pastels and pixels in a Monet), you would recognize it as a tree or a bush. Such recognition, however, has traditionally been a challenge for artificial visual recognition systems. Researchers at MIT’s McGovern Institute for Brain Research and Department of Brain and Cognitive Sciences (BCS) have now directly tested this, showing that artificial object recognition is quickly becoming more primate-like, but still lags behind when scrutinized at higher resolution.

In recent years, dramatic advances in “deep learning” have produced artificial neural network models that appear remarkably similar to aspects of primate brains. James DiCarlo, Peter de Florez Professor and head of BCS, set out to determine and carefully quantify how well the current leading artificial visual recognition systems match humans and other higher primates when it comes to image categorization, putting these latest models through their paces.

Rishi Rajalingham, a graduate student in DiCarlo’s lab, conducted the study as part of his thesis work at the McGovern Institute. As Rajalingham puts it, “one might imagine that artificial vision systems should behave like humans in order to be seamlessly integrated into human society, so this tests to what extent that is true.”

The team focused on testing so-called “deep, convolutional neural networks” (DCNNs), specifically those that had been trained on ImageNet, a collection of large-scale category-labeled image sets that has recently been used as a library to train neural networks (these are referred to as DCNNIC models). These specific models have thus essentially been trained in an intense image-recognition bootcamp. The models were then pitted against monkeys and humans and asked to differentiate objects in synthetically constructed images. These synthetic images put the object being categorized in unusual backgrounds and orientations. The resulting images (such as the floating camel shown above) evened the playing field for the machine models: humans would ordinarily have a leg up on image categorization based on assessing context, so context was specifically removed as a confounder to allow a pure comparison of object categorization.

DiCarlo and his team found that humans, monkeys, and DCNNIC models all appeared to perform similarly when examined at a relatively coarse level. Essentially, each group was shown 100 images of each of 24 different objects. When performance was averaged across the 100 photos of a given object, all three groups could distinguish, for example, camels pretty well overall. The researchers then zoomed in and examined the behavioral data at a much finer resolution (i.e., for each single photo of a camel), deriving more detailed “behavioral fingerprints” of primates and machines. These image-by-image analyses revealed strong differences: monkeys still behaved very consistently like their human primate cousins, but the artificial neural networks could no longer keep up.
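The contrast between the coarse, object-level comparison and the fine, image-level comparison can be illustrated with a short sketch. The numbers below are randomly generated stand-ins, not the study’s data, and the similarity measure (a simple correlation of accuracy patterns) is only a rough proxy for the paper’s behavioral-fingerprint metrics.

```python
import numpy as np

rng = np.random.default_rng(0)
n_objects, n_images = 24, 100  # 24 objects, 100 synthetic images each

# Stand-in per-image accuracies. Hypothetically, the monkey tracks human
# behavior image by image, while the model only matches per-object averages.
human = rng.uniform(0.3, 1.0, size=(n_objects, n_images))
monkey = np.clip(human + rng.normal(0, 0.05, human.shape), 0, 1)
model = np.clip(human.mean(axis=1, keepdims=True)
                + rng.normal(0, 0.08, human.shape), 0, 1)

def similarity(a, b):
    """Correlation between two behavioral fingerprints."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

# Coarse comparison: average over the 100 images of each object.
print("object level  human-monkey:", similarity(human.mean(1), monkey.mean(1)))
print("object level  human-model :", similarity(human.mean(1), model.mean(1)))

# Fine comparison: keep every single image as its own data point.
print("image level   human-monkey:", similarity(human, monkey))
print("image level   human-model :", similarity(human, model))
```

With these stand-in numbers, the model looks human-like when accuracies are averaged per object, but its image-by-image pattern barely correlates with the human one, which is the kind of gap the finer analysis exposed.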

“I thought it was quite surprising that monkeys and humans are remarkably similar in their recognition behaviors, especially given that these objects (e.g. trucks, tanks, camels, etc.) don’t ‘mean’ anything to monkeys,” says Rajalingham. “It’s indicative of how closely related these two species are, at least in terms of these visual abilities.”

DiCarlo’s team then gave the neural networks remedial homework to see if they could catch up: the models received extra training on images that more closely resembled the synthetic images used in the study. Even with this extra training (which the humans and monkeys did not receive), the models could not match a primate’s ability to discern what was in each individual image.

DiCarlo describes this as a glass half-empty and half-full story. “The half-full part is that today’s deep artificial neural networks, which have been developed based on just some aspects of brain function, are far better and far more human-like in their object recognition behavior than artificial systems just a few years ago,” he explains. “However, careful and systematic behavioral testing reveals that even for visual object recognition, the brain’s neural network still has some tricks up its sleeve that these artificial neural networks do not yet have.”

DiCarlo’s study begins to define more precisely where the leading artificial neural networks start to “trip up,” and highlights a fundamental aspect of their architecture that struggles with the categorization of single images, a shortcoming that does not appear to be fixable through further brute-force training. The work also provides an unprecedented and rich dataset of human (1,476 anonymous humans, to be exact) and monkey behavior that will act as a quantitative benchmark for the improvement of artificial neural networks.

 

Image: Example of synthetic image used in the study. For category ‘camel’, 100 distinct, synthetic camel images were shown to DCNNIC models, humans and rhesus monkeys. 24 different categories were tested altogether.

Charting the cerebellum

Small and tucked away under the cerebral hemispheres toward the back of the brain, the human cerebellum is nonetheless immediately recognizable thanks to its distinct structure. From Galen’s second-century anatomical description to Cajal’s systematic analysis of its projections, the cerebellum has long drawn the eyes of researchers studying the brain. Two parallel studies from MIT’s McGovern Institute have now converged to support an unexpectedly complex level of non-motor cerebellar organization, one that would not have been predicted from the known regions of motor representation.

Historically, the cerebellum has primarily been considered a structure for motor control and coordination. Think of this view as the cerebellum being the chain on a bicycle: registering what is happening up front in the cortex and relaying the information so that the back wheel moves at a coordinated pace. This simple view has been questioned as cerebellar circuits have been traced to the basal ganglia and to neocortical regions via the thalamus. The emerging view casts the cerebellum as a hub in a complex network, with potentially higher and non-motor functions including cognition and reward-based learning.

A collaboration between the labs of John Gabrieli, an investigator at the McGovern Institute for Brain Research, and Jeremy Schmahmann, of the Ataxia Unit at Massachusetts General Hospital and Harvard Medical School, has now used functional brain imaging to give new insight into the cerebellar organization of non-motor roles, including working memory, language, and social and emotional processing. In a complementary paper, a collaboration between Sheeba Anteraper of MIT’s Martinos Imaging Center and Gagan Joshi of the Alan and Lorraine Bressler Clinical and Research Program at Massachusetts General Hospital has found changes in cerebellar connectivity in autism spectrum disorder (ASD).

A more complex map of the cerebellum

Published in NeuroImage, and featured on the journal’s cover, the first study was led by Xavier Guell, a postdoc in the Gabrieli and Schmahmann labs. The authors used fMRI data from the Human Connectome Project to examine activity in different regions of the cerebellum during specific tasks and at rest. The tasks extended beyond motor activity to functions recently linked to the cerebellum, including working memory, language, and social and emotional processing. As expected, the authors saw that two regions assigned by other methods to motor activity were clearly modulated during motor tasks.

“Neuroscientists in the 1940s and 1950s described a double representation of motor function in the cerebellum, meaning that two regions in each hemisphere of the cerebellum are engaged in motor control,” explains Guell. “That there are two areas of motor representation in the cerebellum remains one of the most well-established facts of cerebellar macroscale physiology.”

To their surprise, when it came to mapping the non-motor tasks, the authors identified three representations that localized to different regions of the cerebellum, pointing to an unexpectedly complex level of organization.

Guell explains the implications further. “Our study supports the intriguing idea that while two parts of the cerebellum are simultaneously engaged in motor tasks, three other parts of the cerebellum are simultaneously engaged in non-motor tasks. Our predecessors coined the term “double motor representation,” and we may now have to add “triple non-motor representation” to the dictionary of cerebellar neuroscience.”

A serendipitous discussion

What happened next illustrates how independent strands of research can meet and reinforce one another to give a fuller scientific picture: a discussion of data between Xavier Guell and Sheeba Arnold Anteraper of the McGovern Institute for Brain Research culminated in a paper led by Anteraper.

The findings by Guell and colleagues made the cover of NeuroImage.

Anteraper and colleagues examined brain images from high-functioning ASD patients and looked for statistically significant patterns, letting the data speak rather than focusing on specific ‘candidate’ regions of the brain. To her surprise, the analysis highlighted networks related to language as well as the cerebellum, regions that had not been linked to ASD and that seemed at first sight not to be relevant. Scientists interested in language processing immediately pointed her to Guell.

“When I went to meet him,” says Anteraper, “I saw immediately that he had the same research paper that I’d been reading on his desk. As soon as I showed him my results, the data fell into place and made sense.”

After talking with Guell, they realized that the same non-motor cerebellar representations he had seen were independently being highlighted by the ASD study.

“When we study brain function in neurological or psychiatric diseases, we sometimes have a very clear notion of what parts of the brain we should study,” explains Guell. “We instead asked which parts of the brain have the most abnormal patterns of functional connectivity to other brain areas. This analysis gave us a simple, powerful result. Only the cerebellum survived our strict statistical thresholds.”

The authors found decreased connectivity within the cerebellum in the ASD group, as well as decreased strength of connectivity between the cerebellum and the social, emotional, and language processing regions of the cerebral cortex.

“Our analysis showed that regions of disrupted functional connectivity mapped to each of the three areas of non-motor representation in the cerebellum. It thus seems that the notion of two motor and three non-motor areas of representation in the cerebellum is not only important for understanding how the cerebellum works, but also important for understanding how the cerebellum becomes dysfunctional in neurology and psychiatry.”
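For readers unfamiliar with resting-state analyses, the sketch below shows, in schematic form, what a decrease in cerebello-cortical functional connectivity means: connectivity is estimated as the correlation between two regions’ activity time courses. The time series and coupling values are synthetic and purely illustrative; this is not the study’s pipeline or data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_timepoints = 200

def connectivity(ts_a, ts_b):
    """Functional connectivity as the Pearson correlation of two ROI time series."""
    return np.corrcoef(ts_a, ts_b)[0, 1]

def simulate_pair(coupling):
    """Simulate a cerebellar and a cortical ROI sharing a common fluctuation."""
    shared = rng.normal(size=n_timepoints)
    cerebellar_roi = coupling * shared + rng.normal(size=n_timepoints)
    cortical_roi = coupling * shared + rng.normal(size=n_timepoints)
    return connectivity(cerebellar_roi, cortical_roi)

# Hypothetical coupling strengths: weaker cerebello-cortical coupling in ASD.
print("control-like pair:", simulate_pair(coupling=1.0))
print("ASD-like pair    :", simulate_pair(coupling=0.4))
```

In this toy setup, the weaker shared signal in the “ASD-like” pair yields a lower correlation, which is the sense in which connectivity between the cerebellum and cortical regions was reduced.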

Guell says that many questions remain to be answered. Are these abnormalities in the cerebellum reproducible in other datasets of patients diagnosed with ASD? Why is cerebellar function (and dysfunction) organized in a pattern of multiple representations? What is different between each of these representations, and what is their distinct contribution to diseases such as ASD? Future work is now aimed at unraveling these questions.

The Learning Brain

“There’s a slogan in education,” says McGovern Investigator John Gabrieli. “The first three years are learning to read, and after that you read to learn.”

For John Gabrieli, learning to read represents one of the most important milestones in a child’s life. Except, that is, when a child can’t. Children who cannot learn to read adequately by the first grade have a 90 percent chance of still reading poorly in the fourth grade, and 75 percent odds of struggling in high school. For the estimated 10 percent of schoolchildren with a reading disability, that struggle often comes with a host of other social and emotional challenges: anxiety, damaged self-esteem, increased risk for poverty and eventually, encounters with the criminal justice system.

Most reading interventions focus on classical dyslexia, which is essentially a coding problem—trouble moving letters into sound patterns in the brain. But other factors, such as inadequate vocabulary and lack of practice opportunities, hinder reading too. The diagnosis can be subjective, and for those who are diagnosed, the standard treatments help only some students. “Every teacher knows half to two-thirds have a good response, the other third don’t,” Gabrieli says. “It’s a mystery. And amazingly there’s been almost no progress on that.”

For the last two decades, Gabrieli has sought to unravel the neuroscience behind learning and reading disabilities and, ultimately, convert that understanding into new and better education interventions—a sort of translational medicine for the classroom.

The Home Effect

In 2011, when Julia Leonard was a research assistant in Gabrieli’s lab, she planned to go into pediatrics. But she became drawn to the lab’s education projects and decided to join the lab as a graduate student to learn more. By 2015, she had helped coauthor a landmark study with postdoc Allyson Mackey that sought neural markers for the academic “achievement gap” separating higher socioeconomic status (SES) children from their disadvantaged peers. It was the first study to make a connection between SES-linked differences in brain structure and educational markers. Specifically, they found that children from wealthier backgrounds had thicker cortical brain regions, which correlated with better academic achievement.

“Being a doctor is a really awesome and powerful career,” she says. “But I was more curious about the research that could cause bigger changes in children’s lives.”

Leonard collaborated with Rachel Romeo, another graduate student in the Gabrieli lab who wanted to understand the powerful effect of SES on the developing brain. Romeo had a distinctive background in speech pathology and literacy, where she’d observed wealthier students progressing more quickly compared to their disadvantaged peers.

Their research is revealing a fascinating picture. In a 2017 study, Romeo compared how reading-disabled children from low and high SES backgrounds fared after an intensive summer reading intervention. Low SES children in the intervention improved most in their reading, and MRI scans revealed their brains also underwent greater structural changes in response to the intervention. Higher SES children did not appear to change much, either in skill or brain structure.

“In the few studies that have looked at SES effects on treatment outcomes,” Romeo says, “the research suggests that higher SES kids would show the most improvement. We were surprised to find that this wasn’t true.” She suspects that the midsummer timing of the intervention may account for this. Lower SES kids’ performance often suffers most during a “summer slump,” so they would have the greatest potential to improve from interventions at this time.

However, in another study this year, Leonard uncovered unique brain differences in lower-SES children. Only among lower-SES children was better reasoning ability associated with thicker cortex in a key part of the brain. Same behavior, different neural signatures.

“So this becomes a really interesting basic science question,” Leonard says. “Does the brain support cognition the same way across everyone, or does it differ based on how you grow up?”

Not a One-Size-Fits-All

Critics of such “educational neuroscience” have highlighted the lack of useful interventions produced by this research. Gabrieli agrees that so far, little has emerged. “The painful thing is the slowness of this work. It’s mind-boggling,” Gabrieli admits. Every intervention requires all the usual human research requirements, plus coordinating with schools, parents, teachers, and so on. “It’s a huge process to do even the smallest intervention,” he explains. Partly because of that, the field is still relatively new.

But he disagrees with the idea that nothing will come from this research. Gabrieli’s lab previously identified neural markers in children who will go on to develop reading disabilities. These markers could even predict who would or would not respond to standard treatments that focus on phonetic letter-sound coding.

Romeo and Leonard’s work suggests that varied etiologies underlie reading disabilities, which may be the key. “For so long people have thought that reading disorders were just a unitary construct: kids are bad at reading, so let’s fix that with a one-size-fits-all treatment,” Romeo says.

Such findings may ultimately help resource-strapped schools target existing phonetic training rather than enrolling all struggling readers in the same program only to see some still fail.

Think Spaces

At the Oliver Hazard Perry School, a public K-8 school located on the South Boston waterfront, teachers like Colleen Labbe have begun to independently navigate similar problems as they try to reach their own struggling students.

“A lot of times we look at assessments and put students in intervention groups like phonics,” Labbe says. “But it’s important to also ask what is happening for these students on their way to school and at home.”

For Labbe and Perry Principal Geoffrey Rose, brain science has proven transformative. They’ve embraced literature on neuroplasticity—the idea that brains can change if teachers find the right combination of intervention and circumstances, like the low-SES students who benefited in Romeo and Leonard’s study.

“A big myth is that the brain can’t grow and change, and if you can’t reach that student, you pass them off,” Labbe says.

The science has also been empowering to her students, validating their own powers of self-change. “I tell the kids, we’re going to build the goop!” she says, referring to the brain’s ability to make new connections.

“All kids can learn,” Rose agrees. “But the flip of that is, can all kids do school?” His job, he says, is to make sure they can.

The classrooms at Perry are a mix of students from different cultures and socioeconomic backgrounds, so he and Labbe have focused on helping teachers find ways to connect with these children and help them manage their stresses and thus be ready to learn. Teachers here are armed with “scaffolds”—digestible neuro- and cognitive science aids culled from Rose’s postdoctoral studies at Boston College’s Professional School Administrator Program for school leaders. These encourage teachers to be more aware of cultural differences and tendencies in themselves and their students, to better connect.

There are also “Think Spaces” tucked into classroom corners. “Take a deep breath and be calm,” read posters at these soothing stations, which are equipped with de-stressing tools, like squeezable balls, play-dough, and meditation-inspiring sparkle wands. It sounds trivial, yet studies have shown that poverty-linked stressors like food and home insecurity take a toll on emotion and memory-linked brain areas like the amygdala and hippocampus.

In fact, a new study by Clemens Bauer, a postdoc in Gabrieli’s lab, argues that mindfulness training can help calm amygdala hyperactivity, help lower self-perceived stress, and boost attention. His study was conducted with children enrolled in a Boston charter school.

Taking these combined approaches, Labbe says, she’s seen one of her students rise from struggling at the lowest levels of instruction, to thriving by year end. Labbe’s focus on understanding the girl’s stressors, her family environment, and what social and emotional support she really needed was key. “Now she knows she can do it,” Labbe says.

Rose and Labbe only wish they could better bridge the gap between educators like themselves and brain scientists like Gabrieli. To help forge these connections, Rose recently visited Gabrieli’s lab and looks forward to future collaborations. Brain research will provide critical insights into teaching strategy, he says, but the gap is still wide.

From Lab to Classroom

“I’m hugely impressed by principals and teachers who are passionately interested in understanding the brain,” Gabrieli says. Fortunately, new efforts are bridging educators and scientists.

This March, Gabrieli and the MIT Integrated Learning Initiative—MITili, which he also directs—announced a $30 million grant from the Chan Zuckerberg Initiative for a collaboration between MIT, the Harvard Graduate School of Education, and Florida State University.

The grant aims to translate some of Gabrieli’s work into more classrooms. Specifically, he hopes to produce better diagnostics that can identify children at risk for dyslexia and other learning disabilities before they even learn to read.

He also hopes to provide rudimentary diagnostics that identify the source of struggle, be it classic dyslexia, lack of home support, stress, or a combination of factors. That, in turn, could guide treatment—standard phonetic care for some children versus alternatives: social support akin to Labbe’s efforts, reading practice, or maybe just vocabulary-boosting conversation time with adults.

“We want to get every kid to be an adequate reader by the end of the third grade,” Gabrieli says. “That’s the ultimate goal for me: to help all children become learners.”

Michale Fee receives McKnight Technological Innovations in Neuroscience Award

McGovern Institute investigator Michale Fee has been selected to receive a 2018 McKnight Technological Innovations in Neuroscience Award for his research on “new technologies for imaging and analyzing neural state-space trajectories in freely-behaving small animals.”

“I am delighted to get support from the McKnight Foundation,” says Fee, who is also the Glen V. and Phyllis F. Dorflinger Professor in the Department of Brain and Cognitive Sciences at MIT. “We’re very excited about this project, which aims to develop technology that will be a great help to the broader neuroscience community.”

Fee studies the neural mechanisms by which the brain, specifically that of juvenile songbirds, learns complex sequential behaviors. The way that songbirds learn a song through trial and error is analogous to humans learning complex behaviors, such as riding a bicycle. While it would be insightful to link such learning to neural activity, current methods can only monitor a limited number of neurons at once, a significant problem since this kind of learning and behavior involves complex interactions between larger circuits. A wider field of view for recordings would help decipher the neural changes linked to this learning paradigm, but current microscopy equipment is large relative to a juvenile songbird, and microscopes that can record neural activity generally constrain the behavior of small animals. Ideally, the technology needs to be lightweight (about 1 gram) and compact (roughly the size of a dime), a far cry from current microscopes, which weigh in at about 3 grams. Fee hopes to break these technical boundaries and miniaturize the recording equipment, allowing more neurons to be recorded in naturally behaving small animals.

“We are thrilled that the McKnight Foundation has chosen to support this project. The technology that Michale’s developing will help to better visualize and understand the circuits underlying learning,” says Robert Desimone, director of MIT’s McGovern Institute for Brain Research.

In addition to development and miniaturization of the microscopy hardware itself, the award will support the development of technology that helps analyze the resulting images, so that the neuroscience community at large can more easily deploy and use the technology.

Are eyes the window to the soul?

Covert attention has been defined as shifting attention without shifting the eyes. The notion that we can internally pay attention to an object in a scene without making eye movements to it has been a cornerstone of the fields of psychology and cognitive neuroscience, which attempt to understand mental phenomena that are purely internal to the mind, divorced from movements of the eyes or limbs. A study from the McGovern Institute for Brain Research at MIT now questions the dissociation of eye movements from attention in this context, finding that microsaccades precede modulation of specific brain regions associated with attention. In other words, a small shift of the eyes is linked to covert attention, after all.

Seeing the world through human eyes, which have a focused, high-acuity center to the field of vision, requires saccades (rapid movements of the eyes that move between points of fixation). Saccades help to piece together important information in an overall scene and are closely linked to attention shifts, at least in the case of overt attention. In the case of covert attention, the view has been different since this type of attention can shift while the gaze is fixed. Microsaccades are tiny movements of the eyes that are made when subjects maintain fixation on an object.

“Microsaccades are typically so small that they are ignored by many researchers,” says Robert Desimone, director of MIT’s McGovern Institute for Brain Research and senior author on the study. “We went in and tested what they might represent by linking them to attentional firing in particular brain regions.”

In the study, Desimone and his team used an infrared eye-tracking system to follow microsaccades in awake macaques while monitoring activity in cortical regions of the brain linked to visual attention, including area V4. The authors saw increased neuronal firing in V4, but only when it was preceded by a microsaccade toward the attended stimulus; this effect on neuronal activity vanished when a microsaccade was directed away from the stimulus. The authors also saw increased firing in the inferior temporal (IT) cortex after a microsaccade, and found that even when an object appeared amongst a ‘clutter’ of other visual objects, attention to a specific object in the group was preceded by a microsaccade toward it.
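The core of the analysis, splitting trials by the direction of the preceding microsaccade and comparing attention-related firing, can be sketched as follows. The firing rates and effect size below are invented stand-ins, not the recorded V4 data.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials = 300

# Stand-in trial data: whether the microsaccade preceding each trial was
# directed toward the attended stimulus, and the V4 firing rate (spikes/s).
toward = rng.random(n_trials) < 0.5
rates = rng.normal(20, 4, n_trials) + 8 * toward  # hypothetical attentional boost

# Condition the attention effect on microsaccade direction.
print("mean V4 rate, microsaccade toward stimulus:", rates[toward].mean())
print("mean V4 rate, microsaccade away from stimulus:", rates[~toward].mean())
```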

“I expected some links between microsaccades and covert attention,” says lead author of the study Eric Lowet, now a postdoctoral fellow at Boston University. “However, the magnitude of the effect and the precise link to microsaccade onset was surprising to me and the lab. Furthermore, to see these effects also in the IT cortex, which has large receptive fields and is involved in higher-order visual cognition, was striking”.

Why was this strong effect previously missed? The separation of eye movements from attention is so central to the concept of covert attention that studies are often designed to keep the two apart, directing attention to a peripheral target while the subject’s gaze remains on a fixation stimulus. The authors are the first to directly compare microsaccades toward and away from an attended stimulus, and it was this setup, and the difference in neuronal firing between the two kinds of eye movement, that allowed them to draw their conclusions.

“When we first separated attention effects on V4 firing rates by the direction of the microsaccade relative to the attended stimulus,” Lowet explains, “I realized this analysis was a game changer.”

The study suggests several future directions that are being pursued by the Desimone lab. Low-frequency rhythmic sampling (in the delta and theta range) has been suggested as a possible explanation for attentional modulation. According to this idea, people sample visual scenes rhythmically, with an intrinsic sampling interval of about a quarter of a second.

“We do not know whether microsaccades and delta/theta rhythms have a common generator,” points out Karthik Srinivasan, a co-author on the study and a scientist at the McGovern Institute. “But if they do, what brain areas are the source of such a generator? Are the low frequency rhythms observed merely the frequency-analytic manifestation of microsaccades or are they linked?”

These are intriguing future steps for analysis that can be addressed in light of the current study which points to microsaccades as an important marker for visual attention and cognitive processes. Indeed, some of the previously hidden aspects of our cognition are revealed through our motor behavior after all.

Does our ability to learn new things stop at a certain age?

This is actually a neuromyth, but it has some basis in scientific research. People’s endorsement of this statement is likely due to research indicating that there is a high level of synaptogenesis (the formation of connections between neurons) between the ages of 0 and 3, that some skills (learning a new language, for example) do diminish with age, and that some events in brain development, such as the wiring of the visual system, are tied to exposure to a stimulus, such as light. That said, it is clear that a new language can be learned later in life, and at the level of synaptogenesis, we now know that synaptic connections remain plastic.

If you thought this statement was true, you’re not alone. Indeed, a 2017 study by McGrath and colleagues found that 18% of the public (N = 3,045) and 19% of educators (N = 598) believed this statement was correct.

Learn more about how teachers and McGovern researchers are working to target learning interventions well past so-called “critical periods” for learning.

Chronic neural implants modulate microstructures in the brain with pinpoint accuracy

Post by Windy Pham

Research is increasingly revealing the diversity of structures and functions within the brain. Key structures regulate emotion, anxiety, happiness, memory, and mobility; they come in a huge variety of shapes and sizes and can lie physically close to one another. Dysfunction of these structures, and of the circuits linking them, is a common cause of many neurologic and neuropsychiatric diseases. For example, the substantia nigra is only a few millimeters in size yet is crucial for movement and coordination; destruction of substantia nigra neurons is what causes the motor symptoms of Parkinson’s disease.

New technologies such as optogenetics have allowed researchers to identify similarly small structures in the brain. However, these techniques rely on infusions of liquid into the brain, which prepare the regions being studied to respond to light. These infusions are done with large needles, which lack the fine control needed to target specific regions. Clinical therapy has also lagged behind: new drug therapies aimed at treating these conditions are delivered orally, which distributes the drug throughout the brain, or through large needle-cannulas, which again lack the fine control to accurately dose specific regions. As a result, patients with neurologic and psychiatric disorders frequently fail to respond to therapies because of poor drug delivery to the diseased regions.

A new study addressing this problem has been published in Proceedings of the National Academy of Sciences. The lead author is Khalil Ramadi, a medical engineering and medical physics (MEMP) PhD candidate in the Harvard-MIT Program in Health Sciences and Technology (HST). For this study, Khalil and his thesis advisor, Michael Cima, the David H. Koch Professor of Engineering in the Department of Materials Science and Engineering and the Koch Institute for Integrative Cancer Research, and associate dean of innovation in the School of Engineering, collaborated with Institute Professors Robert Langer and Ann Graybiel, an investigator at the McGovern Institute for Brain Research, to tackle this issue.

The team developed tools to enable targeted delivery of nanoliters of drugs to deep brain structures through chronically implanted microprobes. They also developed nuclear imaging techniques using positron emission tomography (PET) to measure the volume of the brain region targeted by each infusion. “Drugs for disorders of the central nervous system are nonspecific and get distributed throughout the brain,” Cima says. “Our animal studies show that volume is a critical factor when delivering drugs to the brain, as important as the total dose delivered. Using microcannulas and microPET imaging, we can control the area of brain exposed to these drugs, improving targeting accuracy twofold compared to the traditional methods used today.”

The researchers also designed cannulas that are MRI-compatible and can remain implanted for up to a year in rats. Pairing these cannulas with micropumps allowed the researchers to remotely control the behavior of animals. Significantly, they found that varying the infused volume alone had a profound effect on the behavior induced, even when the total drug dose stayed constant. These results show that regulating the volume delivered to a brain region is extremely important in influencing brain activity. The technology could enable precise investigation of neurological disease pathology in preclinical models, and more effective treatment in human patients.
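The distinction between dose and volume comes down to simple arithmetic: the same total dose spread over a larger infusion volume exposes a larger region of tissue at a lower local concentration. The sketch below uses made-up numbers purely to illustrate that relationship; it is not drawn from the study.

```python
# Minimal sketch: the same total dose at different infusion volumes.
# All values are hypothetical and for illustration only.

dose_ng = 100.0                      # total drug dose (nanograms), held constant
volumes_nl = [50.0, 200.0, 1000.0]   # candidate infusion volumes (nanoliters)

for volume in volumes_nl:
    concentration = dose_ng / volume  # local concentration at the infusion site (ng/nL)
    print(f"volume {volume:6.0f} nL -> {concentration:.2f} ng/nL; "
          "larger volumes spread the same dose over more tissue")
```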

 

 

Advancing knowledge in medical and genetic sciences

Research proposals from Laurie Boyer, associate professor of biology; Matt Shoulders, the Whitehead Career Development Associate Professor of Chemistry; and Feng Zhang, associate professor in the departments of Brain and Cognitive Sciences and Biological Engineering, Patricia and James Poitras ’63 Professor in Neuroscience, investigator at the McGovern Institute for Brain Research, and core member of the Broad Institute, have recently been selected for funding by the G. Harold and Leila Y. Mathers Foundation. These three grants from the Mathers Foundation will enable, over the next three years, key projects in the researchers’ respective labs.

Regenerative medicine holds great promise for treating heart failure, but that promise is unrealized, in part, due to a lack of sufficient understanding of heart development at the mechanistic level. Boyer’s research aims to achieve a deep, mechanistic understanding of the gene control switches that coordinate normal heart development. She then aims to leverage this knowledge and design effective strategies for rewiring faulty circuits in aging and disease.

“We are very grateful to receive support and recognition of our work from the Mathers Foundation,” said Boyer. “This award will allow us to build upon our prior work and to embark upon high risk projects that could ultimately change how we think about treating diseases resulting from faulty wiring of gene expression programs.”

Shoulders’ goal, with this support from the Mathers Foundation, is to elucidate underlying causes of osteoarthritis. There is currently no cure for osteoarthritis, which is perhaps the most common aging-related disease and is characterized by a progressive deterioration of joint cartilage culminating in inflammation, debilitating pain, and joint dysfunction. The Shoulders Group aims to test a new model for osteoarthritis — specifically, the concept that a collapse of proteostasis in aging cartilage cells creates an unrecoverable cartilage repair defect, thus initiating a self-amplifying, destructive feedback loop leading to pathology. Proteostasis collapse in aging cells is a well-known, disease-causing phenomenon that has previously been considered primarily in the context of neurodegenerative disorders. If correct, the proteostasis collapse model for osteoarthritis could one day lead to a novel class of therapeutic options for the disease.

“We are delighted to receive this generous support from the Mathers Foundation, which makes it possible for us to pursue an outside-the-box, high-risk/high-impact idea regarding the origins of osteoarthritis,” said Shoulders. “The research we are now able to pursue will not only provide fundamental, molecular-level insights into joint function, but also could change how we think about this widespread disease.”

Many genetic diseases are caused by the change of just a single base of DNA. Zhang is a leader in the field of genome editing, and he and his team have developed an array of tools based on the microbial immune CRISPR-Cas systems that can manipulate DNA and RNA in human cells. Together, these tools are changing the way molecular biology research is conducted, and they hold immense potential as therapeutic agents to correct thousands of genetic diseases. Now, with the support of the Mathers Foundation, Zhang is working to realize this potential by developing a CRISPR-based therapeutic that works at the level of RNA and offers a safe, effective route to treating a range of diseases, including diseases of the brain and central nervous system, which are difficult to treat with existing gene therapies.

“The generous support from the Mathers Foundation allows us the freedom to explore this exciting new direction for CRISPR-based technologies,” Zhang stated.

Known for their generosity and philanthropy, G. Harold and Leila Y. Mathers created their foundation with the goal of distributing their wealth among sustainable, charitable causes, with a particular interest in basic scientific research. The Mathers Foundation, whose ongoing mission is to advance knowledge in the life sciences by sponsoring scientific research and applying learnings and discoveries to benefit mankind, has issued grants since 1982.

How music lessons can improve language skills

Many studies have shown that musical training can enhance language skills. However, it was unknown whether music lessons improve general cognitive ability, leading to better language proficiency, or if the effect of music is more specific to language processing.

A new study from MIT has found that piano lessons have a very specific effect on kindergartners’ ability to distinguish different pitches, which translates into an improvement in discriminating between spoken words. However, the piano lessons did not appear to confer any benefit for overall cognitive ability, as measured by IQ, attention span, and working memory.

“The children didn’t differ in the more broad cognitive measures, but they did show some improvements in word discrimination, particularly for consonants. The piano group showed the best improvement there,” says Robert Desimone, director of MIT’s McGovern Institute for Brain Research and the senior author of the paper.

The study, performed in Beijing, suggests that musical training is at least as beneficial for improving language skills as offering children extra reading lessons, and possibly more so. The school where the study was performed has continued to offer piano lessons to students, and the researchers hope their findings could encourage other schools to keep or enhance their music offerings.

Yun Nan, an associate professor at Beijing Normal University, is the lead author of the study, which appears in the Proceedings of the National Academy of Sciences the week of June 25.

Other authors include Li Liu, Hua Shu, and Qi Dong, all of Beijing Normal University; Eveline Geiser, a former MIT research scientist; Chen-Chen Gong, an MIT research associate; and John Gabrieli, the Grover M. Hermann Professor in Health Sciences and Technology, a professor of brain and cognitive sciences, and a member of MIT’s McGovern Institute for Brain Research.

Benefits of music

Previous studies have shown that on average, musicians perform better than nonmusicians on tasks such as reading comprehension, distinguishing speech from background noise, and rapid auditory processing. However, most of these studies have been done by asking people about their past musical training. The MIT researchers wanted to perform a more controlled study in which they could randomly assign children to receive music lessons or not, and then measure the effects.

They decided to perform the study at a school in Beijing, along with researchers from the IDG/McGovern Institute at Beijing Normal University, in part because education officials there were interested in studying the value of music education versus additional reading instruction.

“If children who received music training did as well or better than children who received additional academic instruction, that could be a justification for why schools might want to continue to fund music,” Desimone says.

The 74 children participating in the study were divided into three groups: one that received 45-minute piano lessons three times a week; one that received extra reading instruction for the same period of time; and one that received neither intervention. All children were 4 or 5 years old and spoke Mandarin as their native language.

After six months, the researchers tested the children on their ability to discriminate words based on differences in vowels, consonants, or tone (many Mandarin words differ only in tone). Better word discrimination usually corresponds with better phonological awareness — the awareness of the sound structure of words, which is a key component of learning to read.

Children who had piano lessons showed a significant advantage over children in the extra reading group in discriminating between words that differ by one consonant. Children in both the piano group and extra reading group performed better than children who received neither intervention when it came to discriminating words based on vowel differences.

The researchers also used electroencephalography (EEG) to measure brain activity and found that children in the piano group had stronger responses than the other children when they listened to a series of tones of different pitch. This suggests that a greater sensitivity to pitch differences is what helped the children who took piano lessons to better distinguish different words, Desimone says.

“That’s a big thing for kids in learning language: being able to hear the differences between words,” he says. “They really did benefit from that.”

In tests of IQ, attention, and working memory, the researchers did not find any significant differences among the three groups of children, suggesting that the piano lessons did not confer any improvement on overall cognitive function.

Aniruddh Patel, a professor of psychology at Tufts University, says the findings also address the important question of whether purely instrumental musical training can enhance speech processing.

“This study answers the question in the affirmative, with an elegant design that directly compares the effect of music and language instruction on young children. The work specifically relates behavioral improvements in speech perception to the neural impact of musical training, which has both theoretical and real-world significance,” says Patel, who was not involved in the research.

Educational payoff

Desimone says he hopes the findings will help to convince education officials who are considering abandoning music classes in schools not to do so.

“There are positive benefits to piano education in young kids, and it looks like for recognizing differences between sounds including speech sounds, it’s better than extra reading. That means schools could invest in music and there will be generalization to speech sounds,” Desimone says. “It’s not worse than giving extra reading to the kids, which is probably what many schools are tempted to do — get rid of the arts education and just have more reading.”

Desimone now hopes to delve further into the neurological changes caused by music training. One way to do that is to perform EEG tests before and after a single intense music lesson to see how the brain’s activity has been altered.

The research was funded by the National Natural Science Foundation of China, the Beijing Municipal Science and Technology Commission, the Interdiscipline Research Funds of Beijing Normal University, and the Fundamental Research Funds for the Central Universities.