Constructing the striatum

The striatum, the largest nucleus of the basal ganglia in the vertebrate brain, was historically thought to be a homogeneous group of cells. This view was overturned in a classic series of papers from MIT Institute Professor Ann Graybiel. In previous work, Graybiel, who is also an investigator at MIT’s McGovern Institute, found that the striatum is highly organized, both structurally and functionally, and in terms of its connectivity. Graybiel has now collaborated with Z. Josh Huang’s lab at Cold Spring Harbor Laboratory to map the developmental lineage of cells that give rise to this complex architecture. The authors found that different functions of the striatum, such as execution of actions as opposed to evaluation of outcomes, are defined early on as part of the blueprint that constructs this brain region, rather than being sculpted by a later mechanism.

Graybiel and colleagues tracked what happens early in development by driving cell-specific fluorescent markers that allowed them to follow the progenitors that give rise to cells in the striatum. The striatum is known, thanks to Graybiel’s early work, to be organized into compartments called striosomes and the matrix, which have distinct connections to other brain regions. Broadly speaking, while striosomes are linked to value-based decision-making and reinforcement-based behaviors, the matrix has been linked to action execution. These compartments are further subdivided into direct and indirect pathways. Direct pathway neurons are involved in releasing inhibition in other regions of the basal ganglia and thus actively promote action. Neurons projecting into the indirect pathway instead inhibit “unwanted” actions that are not part of the current “cortical plan.” Based on their tracking, Graybiel and colleagues were indeed able to build a “fate map” that told them when the cells that build these different regions of the striatum commit to a functional path during development.

“It was already well known that individual neurons have lineages that can be traced back to early development, and many such lineages are now being traced,” says Graybiel. “What is so striking in what we have found with the Huang lab is that the earliest specification of lineages we find—at least with the markers that we have used—corresponds to what later become the two major neurochemically distinct compartments of the striatum, rather than many other divisions that might have been specified first. If this is so, then the fundamental developmental ground plan of the striatum is expressed later by these two distinct compartments of the striatum.”

Building the striatum turns out to be a symphony of organization embedded in the cells of the lateral ganglionic eminence, the developmental source of the cells that will end up in the striatum. Progenitors made early in development are somewhat committed: they can only generate spiny projection neurons (SPNs) that are striosomal. Following this in time, cells that will give rise to matrix SPNs appear. A second mechanism is then laid over this initial ground plan; switched on in both striosomal and matrix neurons, it independently gives rise to neurons that will connect into direct as opposed to indirect pathways. This latter specification of direct- and indirect-pathway neurons is less rigid, but there is an overarching tendency for neurons expressing a certain type of dopamine receptor to appear earlier in developmental time. In short, progenitors move through an orchestrated process in which they first generate spiny projection neurons that can sit in any area of the striatum, then have their ultimate fate restricted at the level of striosome or matrix, and finally, in both compartments, commit to direct- or indirect-pathway circuitry. Remarkably, these results suggest that even at the very earliest development of the striatum, its ultimate organization is already laid down in a way that distinguishes value-related circuits from movement-related circuits.

“What is thrilling,” says Graybiel, “is that there are lineage progressions—the step-by-step laying out of the brain’s organization—that turn out to match the striosome-matrix architecture of the striatum, an architecture that was not even known to exist 40 years ago!”

The striatum is a hub regulating movement, emotion, motivation, evaluation, and learning, and it is linked to disorders such as Parkinson’s disease and persistent negative valuations. Understanding its construction therefore has important implications, perhaps even, one day, for rebuilding a striatum affected by neurodegeneration. The findings also have broader implications. Consider the worm, specifically C. elegans. The complete lineage of cells that make up this organism is known, including where each neuron comes from, what it connects to, and its function and phenotype. There’s a clear relationship between lineage and function in this relatively simple organism with its highly stereotyped nervous system. Graybiel’s work suggests that, in the big picture, early development in the forebrain also provides a game plan. In this case, however, the groundwork underpins circuits that underlie extremely complex behaviors, those that come to support the volitional and habitual behaviors that make up part of who we are as individuals.

 

A social side to face recognition by infants

When interacting with an infant, you have likely noticed that the human face holds a special draw from a very young age. But how does this relate to face recognition by adults, which is known to map to specific cortical regions? Rebecca Saxe, Associate Investigator at MIT’s McGovern Institute and John W. Jarve (1978) Professor in Brain and Cognitive Sciences, and her team have now considered two emerging theories regarding early face recognition and come up with a third proposition, arguing that when a baby looks at a face, the response is also social, and that the resulting contingent interactions are key to the subsequent development of organized face recognition areas in the brain.

By adulthood you are highly skilled at recognizing and responding to faces, and this skill correlates with activation of a number of face-selective regions of the cortex. This ability is incredibly important for reading the identities and intentions of other people, and selective categorical representation of faces in cortical areas is a feature shared by our primate cousins. While brain imaging tells us where face-responsive regions are in the adult cortex, how and when they emerge remains unclear.

In 2017, functional magnetic resonance imaging (fMRI) studies of human and macaque infants provided the first glimpse of how the youngest brains respond to faces. The scans showed that in 4-6-month-old human infants and equivalently aged macaques, regions known to be face-responsive in the adult brain are activated when the infants are shown movies of faces, but not in a selective fashion. Essentially, the fMRI data argue that these specific cortical regions are activated by faces, but a chair will do just as well. With further experience of faces over time, the specific cortical regions in macaques became face-selective, no longer responding to other objects.

There are two prevailing ideas in the field of how face preference, and eventually selectivity, arise through experience. Saxe and her team consider these ideas in turn in an opinion piece in the September issue of Trends in Cognitive Sciences, and then propose a third, new theory. The first idea centers on the way we dote over babies, centering our own faces right in their field of vision. The idea is that such frequent exposures to low-level face features (curvilinear shape, etc.) will eventually lead to co-activation of neurons that are responsive to all of the different aspects of facial features. If the neurons stimulated by these different features are co-activated, and there’s a brain region where these neurons are also found together, this area will be stimulated, eventually reinforcing the emergence of a face category-specific area.

A second idea is that babies already have an innate “face template,” just as a duckling or chick already knows to follow its mother after hatching. So far there is little evidence for this second proposition, and the first fails to explain why babies actively seek out faces, rather than passively looking upon and eventually “learning” the overlapping features that represent a “face.”

Saxe, along with postdoc Lindsey Powell and graduate student Heather Kosakowski, instead now argue that the role a face plays in positive social interactions comes to drive organization of face-selective cortical regions. Taking the next step, the researchers propose that a prime suspect for linking social interactions to the development of face-selective areas is the medial prefrontal cortex (mPFC), a region linked to social cognition and behavior.

“I was asked to give a talk at a conference, and I wanted to talk about both the development of cortical face areas and the social role of the medial prefrontal cortex in young infants,” says Saxe. “I was puzzling over whether these two ideas were related, when I suddenly saw that they could be very fundamentally related.”

The authors argue that this relationship is supported by existing data showing that babies prefer dynamic faces and are more interested in faces that engage in back-and-forth interaction. Regions of the mPFC are also known to be activated during social interactions, and to be activated during exposure to dynamic faces in infants.

Powell is now using functional near infrared spectroscopy (fNIRS), a brain imaging technique that measures changes in blood flow to the brain, to test this hypothesis in infants. “This will allow us to see whether mPFC responses to social cues are linked to the development of face-responsive areas.”

In Daniel Deronda, the novel by George Eliot, the protagonist says “I think my life began with waking up and loving my mother’s face: it was so near to me, and her arms were round me, and she sang to me.” Perhaps this type of positively valenced social interaction, reinforced by the mPFC, is exactly what leads to the particular importance of faces and their selective categorical representation in the human brain. Further testing of the hypothesis proposed by Powell, Kosakowski, and Saxe will tell.

Testing the limits of artificial visual recognition systems

While it can sometimes seem hard to see the forest for the trees, pat yourself on the back: as a human you are actually pretty good at object recognition. A major goal for artificial visual recognition systems is to be able to distinguish objects the way that humans do. If you see a tree or a bush from almost any angle, in any degree of shading (or even rendered in pastels and pixels in a Monet), you would recognize it as a tree or a bush. Such recognition, however, has traditionally been a challenge for artificial visual recognition systems. Researchers at MIT’s McGovern Institute for Brain Research and Department of Brain and Cognitive Sciences (BCS) have now shown that artificial object recognition is quickly becoming more primate-like, but still lags behind when scrutinized at higher resolution.

In recent years, dramatic advances in “deep learning” have produced artificial neural network models that appear remarkably similar to aspects of primate brains. James DiCarlo, Peter de Florez Professor and Department Head of BCS, set out to determine and carefully quantify how well the current leading artificial visual recognition systems match humans and other higher primates when it comes to image categorization, putting the latest of these models through their paces.

Rishi Rajalingham, a graduate student in DiCarlo’s lab, conducted the study as part of his thesis work at the McGovern Institute. As Rajalingham puts it, “one might imagine that artificial vision systems should behave like humans in order to seamlessly be integrated into human society, so this tests to what extent that is true.”

The team focused on testing so-called “deep, convolutional neural networks” (DCNNs), and specifically those that had been trained on ImageNet, a large collection of category-labeled images recently used as a library to train neural networks (the resulting models are called DCNNIC models). These models have thus essentially been through an intense image recognition bootcamp. The models were then pitted against monkeys and humans and asked to differentiate objects in synthetically constructed images, which put the object being categorized in unusual backgrounds and orientations. The resulting images (such as the floating camel shown above) leveled the playing field for the machine models: humans would ordinarily have a leg up on image categorization based on assessing context, so context was specifically removed as a confounder to allow a pure comparison of object categorization.

DiCarlo and his team found that humans, monkeys, and DCNNIC models all appeared to perform similarly when examined at a relatively coarse level. Each group was shown 100 images of each of 24 different objects. When performance was averaged across the 100 photos of a given object, all three groups could distinguish, for example, camels pretty well overall. The researchers then zoomed in and examined the behavioral data at a much finer resolution (i.e., for each single photo of a camel), deriving more detailed “behavioral fingerprints” of primates and machines. These detailed, image-by-image analyses revealed strong differences: monkeys still behaved very consistently like their human primate cousins, but the artificial neural networks could no longer keep up.

“I thought it was quite surprising that monkeys and humans are remarkably similar in their recognition behaviors, especially given that these objects (e.g. trucks, tanks, camels, etc.) don’t “mean” anything to monkeys” says Rajalingham. “It’s indicative of how closely related these two species are, at least in terms of these visual abilities.”

DiCarlo’s team then gave the neural networks remedial homework to see if they could catch up with extra training, this time training the models on images that more closely resembled the synthetic images used in the study. Even with this extra training (which the humans and monkeys did not receive), the models could not match a primate’s ability to discern what was in each individual image.

DiCarlo conveys that this is a glass half-empty and half-full story. “The half-full part is that today’s deep artificial neural networks, developed based on just some aspects of brain function, are far better and far more human-like in their object recognition behavior than artificial systems just a few years ago,” explains DiCarlo. “However, careful and systematic behavioral testing reveals that even for visual object recognition, the brain’s neural network still has some tricks up its sleeve that these artificial neural networks do not yet have.”

DiCarlo’s study begins to define more precisely when it is that the leading artificial neural networks start to “trip up,” and highlights a fundamental aspect of their architecture that struggles with categorization of single images. This shortfall seems not to be addressable through further brute-force training. The work also provides an unprecedented and rich dataset of human (1,476 anonymous participants, to be exact) and monkey behavior that will act as a quantitative benchmark for the improvement of artificial neural networks.

 

Image: Example of synthetic image used in the study. For category ‘camel’, 100 distinct, synthetic camel images were shown to DCNNIC models, humans and rhesus monkeys. 24 different categories were tested altogether.

Charting the cerebellum

Small and tucked away under the cerebral hemispheres toward the back of the brain, the human cerebellum is still immediately obvious due to its distinct structure. From Galen’s second-century anatomical description to Cajal’s systematic analysis of its projections, the cerebellum has long drawn the eyes of researchers studying the brain. Two parallel studies from MIT’s McGovern Institute have recently converged to support an unexpectedly complex level of non-motor cerebellar organization, one that would not have been predicted from the known motor representation regions.

Historically, the cerebellum has been considered primarily a center for motor control and coordination. Think of this view as the cerebellum being the chain on a bicycle, registering what is happening up front in the cortex and relaying the information so that the back wheel moves at a coordinated pace. This simple view has been questioned as cerebellar circuits have been traced to the basal ganglia and to neocortical regions via the thalamus. The new view suggests the cerebellum is a hub in a complex network with potentially higher, non-motor functions, including cognition and reward-based learning.

A collaboration between the labs of John Gabrieli, Investigator at the McGovern Institute for Brain Research, and Jeremy Schmahmann, of the Ataxia Unit at Massachusetts General Hospital and Harvard Medical School, has now used functional brain imaging to give new insight into the cerebellar organization of non-motor roles, including working memory, language, and social and emotional processing. In a complementary paper, a collaboration between Sheeba Anteraper of MIT’s Martinos Imaging Center and Gagan Joshi of the Alan and Lorraine Bressler Clinical and Research Program at Massachusetts General Hospital has found changes in cerebellar connectivity that occur in autism spectrum disorder (ASD).

A more complex map of the cerebellum

Published in NeuroImage, and featured on the cover, the first study was led by Xavier Guell, a postdoc in the Gabrieli and Schmahmann labs. The authors used fMRI data from the Human Connectome Project to examine activity in different regions of the cerebellum during specific tasks and at rest. The tasks extended beyond motor activity to functions recently linked to the cerebellum, including working memory, language, and social and emotional processing. As expected, the authors saw that two regions assigned by other methods to motor activity were clearly modulated during motor tasks.

“Neuroscientists in the 1940s and 1950s described a double representation of motor function in the cerebellum, meaning that two regions in each hemisphere of the cerebellum are engaged in motor control,” explains Guell. “That there are two areas of motor representation in the cerebellum remains one of the most well-established facts of cerebellar macroscale physiology.”

When it came to mapping non-motor tasks, the authors were surprised to identify three representations localized to different regions of the cerebellum, pointing to an unexpectedly complex level of organization.

Guell explains the implications further. “Our study supports the intriguing idea that while two parts of the cerebellum are simultaneously engaged in motor tasks, three other parts of the cerebellum are simultaneously engaged in non-motor tasks. Our predecessors coined the term “double motor representation,” and we may now have to add “triple non-motor representation” to the dictionary of cerebellar neuroscience.”

A serendipitous discussion

What happened next illustrates how independent strands of research can meet and reinforce each other to give a fuller scientific picture: a discussion of data between Xavier Guell and Sheeba Arnold Anteraper of the McGovern Institute for Brain Research culminated in a paper led by Anteraper.

The findings by Guell and colleagues made the cover of NeuroImage.

Anteraper and colleagues examined brain images from high-functioning ASD patients and looked for statistically significant patterns, letting the data speak rather than focusing on specific ‘candidate’ regions of the brain. To her surprise, networks related to language were highlighted, as well as the cerebellum, regions that had not been linked to ASD and that seemed at first sight not to be relevant. Scientists interested in language processing immediately pointed her to Guell.

“When I went to meet him,” says Anteraper, “I saw immediately that he had the same research paper that I’d been reading on his desk. As soon as I showed him my results, the data fell into place and made sense.”

After talking with Guell, they realized that the same non-motor cerebellar representations he had seen were being independently highlighted by the ASD study.

“When we study brain function in neurological or psychiatric diseases, we sometimes have a very clear notion of what parts of the brain we should study,” explained Guell. “We instead asked which parts of the brain have the most abnormal patterns of functional connectivity to other brain areas. This analysis gave us a simple, powerful result. Only the cerebellum survived our strict statistical thresholds.”

The authors found decreased connectivity within the cerebellum in the ASD group, but also decreased strength of connectivity between the cerebellum and the social, emotional, and language processing regions of the cerebral cortex.

“Our analysis showed that regions of disrupted functional connectivity mapped to each of the three areas of non-motor representation in the cerebellum. It thus seems that the notion of two motor and three non-motor areas of representation in the cerebellum is not only important for understanding how the cerebellum works, but also important for understanding how the cerebellum becomes dysfunctional in neurology and psychiatry.”

Guell says that many questions remain to be answered. Are these abnormalities in the cerebellum reproducible in other datasets of patients diagnosed with ASD? Why is cerebellar function (and dysfunction) organized in a pattern of multiple representations? What is different between each of these representations, and what is their distinct contribution to diseases such as ASD? Future work is now aimed at unraveling these questions.

Are eyes the window to the soul?

Covert attention has been defined as shifting attention without shifting the eyes. The notion that we can internally pay attention to an object in a scene without making eye movements to it has been a cornerstone of the fields of psychology and cognitive neuroscience, which attempt to understand mental phenomena that are purely internal to the mind, divorced from movements of the eyes or limbs. A study from the McGovern Institute for Brain Research at MIT now questions the dissociation of eye movements from attention in this context, finding that microsaccades precede modulation of specific brain regions associated with attention. In other words, a small shift of the eyes is linked to covert attention, after all.

Seeing the world through human eyes, which have a focused, high-acuity center to the field of vision, requires saccades (rapid movements of the eyes that move between points of fixation). Saccades help to piece together important information in an overall scene and are closely linked to attention shifts, at least in the case of overt attention. In the case of covert attention, the view has been different since this type of attention can shift while the gaze is fixed. Microsaccades are tiny movements of the eyes that are made when subjects maintain fixation on an object.

“Microsaccades are typically so small that they are ignored by many researchers,” says Robert Desimone, director of MIT’s McGovern Institute for Brain Research and senior author on the study. “We went in and tested what they might represent by linking them to attentional firing in particular brain regions.”

In the study, Desimone and his team used an infrared eye-tracking system to follow microsaccades in awake macaques while monitoring activity in cortical regions of the brain linked to visual attention, including area V4. The authors saw increased neuronal firing in V4, but only when it was preceded by a microsaccade toward the attended stimulus; this effect on neuronal activity vanished when a microsaccade was directed away from the stimulus. The authors also saw increased firing in the inferior temporal (IT) cortex after a microsaccade, and found that even attention to a specific object amongst a ‘clutter’ of different visual objects was preceded by a microsaccade toward that object.

“I expected some links between microsaccades and covert attention,” says Eric Lowet, lead author of the study and now a postdoctoral fellow at Boston University. “However, the magnitude of the effect and the precise link to microsaccade onset was surprising to me and the lab. Furthermore, to see these effects also in the IT cortex, which has large receptive fields and is involved in higher-order visual cognition, was striking.”

Why was this strong effect previously missed? The separation of eye movement from attention is so core to the concept of covert attention that studies often actively enforce it, directing attention to a target outside the fixated region of the visual field while the subject’s gaze is maintained on a fixation stimulus. The authors are the first to directly compare microsaccades made toward versus away from an attended stimulus, and it was this setup, and the difference in neuronal firing between these two types of eye movement, that allowed them to draw their conclusions.

“When we first separated attention effects on V4 firing rates by the direction of the microsaccade relative to the attended stimulus,” Lowet explains, “I realized this analysis was a game changer.”

The study suggests several future directions that are being pursued by the Desimone lab. Low-frequency rhythmic sampling (in the delta and theta range) has been suggested as a possible explanation for attentional modulation. According to this idea, people sample visual scenes rhythmically, with an intrinsic sampling interval of about a quarter of a second.

“We do not know whether microsaccades and delta/theta rhythms have a common generator,” points out Karthik Srinivasan, a co-author on the study and a scientist at the McGovern Institute. “But if they do, what brain areas are the source of such a generator? Are the low frequency rhythms observed merely the frequency-analytic manifestation of microsaccades or are they linked?”

These are intriguing future steps for analysis that can be addressed in light of the current study which points to microsaccades as an important marker for visual attention and cognitive processes. Indeed, some of the previously hidden aspects of our cognition are revealed through our motor behavior after all.

Does our ability to learn new things stop at a certain age?

This is actually a neuromyth, but it has some basis in scientific research. People’s endorsement of this statement is likely due to research indicating that there is a high level of synaptogenesis (formation of connections between neurons) between the ages of 0 and 3, that some skills (learning a new language, for example) do diminish with age, and that some events in brain development, such as the wiring of connections in the visual system, are tied to exposure to a stimulus, such as light. That said, it is clear that a new language can be learned later in life, and at the level of synaptogenesis, we now know that synaptic connections remain plastic.

If you thought this statement was true, you’re not alone. Indeed, a 2017 study by McGrath and colleagues found that 18% of the public (N = 3,045) and 19% of educators (N = 598) believed this statement was correct.

Learn more about how teachers and McGovern researchers are working to target learning interventions well past so-called “critical periods” for learning.

Feng Zhang elected to National Academy of Sciences

Feng Zhang has been elected to join the National Academy of Sciences (NAS), a prestigious, non-profit society of distinguished scholars that was established through an Act of Congress signed by Abraham Lincoln in 1863. Zhang is the Patricia and James Poitras ’63 Professor in Neuroscience at MIT, an associate professor in the departments of Brain and Cognitive Sciences and Biological Engineering, an investigator at the McGovern Institute for Brain Research, and a core member of the Broad Institute of MIT and Harvard. Scientists are elected to the National Academy of Sciences by members of the organization as recognition of their outstanding contributions to research.

“Because it comes from the scientific community, election to the National Academy of Sciences is a very special honor,” says Zhang, “and I’m grateful to all of my colleagues for the recognition and support.”

Zhang has revolutionized research across the life sciences by developing and sharing a number of powerful molecular biology tools, most notably, genome engineering tools based on the microbial CRISPR-Cas9 system. The simplicity and precision of Cas9 has led to its widespread adoption by researchers around the world. Indeed, the Zhang lab has shared more than 49,000 plasmids and reagents with more than 2,300 institutions across 62 countries through the non-profit plasmid repository Addgene.

Zhang continues to pioneer CRISPR-based technologies. For example, Zhang and his colleagues discovered new CRISPR systems that use a single enzyme to target RNA, rather than DNA. They have engineered these systems to achieve precise editing of single bases of RNA, enabling a wide range of applications in research, therapeutics, and biotechnology. Recently, he and his team also reported a highly sensitive nucleic acid detection system based on the CRISPR enzyme Cas13 that can be used in the field for monitoring pathogens and other molecular diagnostic applications.

Zhang has long shown a keen eye for recognizing the potential of transformative technologies and developing robust tools with broad utility. As a graduate student in Karl Deisseroth’s group at Stanford, he contributed to the development of optogenetics, a light-based technology that allows scientists to both track neurons and causally test the outcomes of neuronal activity. Zhang also created an efficient system for reprogramming TAL effector proteins (TALEs) to specifically recognize and modulate target genes.

“Feng Zhang is unusually young to be elected into the National Academy of Sciences, which attests to the tremendous impact he is having on the field even at an early stage of his career,” says Robert Desimone, director of the McGovern Institute for Brain Research at MIT.

This year the NAS, an organization that includes over 500 Nobel Laureates, elected 84 new members from across disciplines. The mission of the organization is to provide sound, objective advice on science to the nation and to further the cause of science and technology in America. Four MIT professors were elected this year, with Amy Finkelstein (recognized for contributions to economics) as well as Mehran Kardar and Xiao-Gang Wen (for their research in the realm of physics) also becoming members of the Academy.

The formal induction ceremony for new NAS members will be held at the Academy’s annual meeting in Washington D.C. next spring.

Ann Graybiel wins 2018 Gruber Neuroscience Prize

Institute Professor Ann Graybiel, a professor in the Department of Brain and Cognitive Sciences and member of MIT’s McGovern Institute for Brain Research, is being recognized by the Gruber Foundation for her work on the structure, organization, and function of the once-mysterious basal ganglia. She was awarded the prize alongside Okihide Hikosaka of the National Institutes of Health’s National Eye Institute and Wolfram Schultz of the University of Cambridge in the U.K.

The basal ganglia have long been known to play a role in movement, and the work of Graybiel and others helped to extend their roles to cognition and emotion. Dysfunction in the basal ganglia has been linked to a host of disorders, including Parkinson’s disease, Huntington’s disease, obsessive-compulsive disorder, attention-deficit hyperactivity disorder, and depression and anxiety disorders. Graybiel’s research focuses on the circuits thought to underlie these disorders, and on how these circuits act to help us form habits in everyday life.

“We are delighted that Ann has been honored with the Gruber Neuroscience Prize,” says Robert Desimone, director of the McGovern Institute. “Ann’s work has truly elucidated the complexity and functional importance of these forebrain structures. Her work has driven the field forward in a fundamental fashion, and continues to do so.”

Graybiel’s research focuses broadly on the striatum, a hub in basal ganglia-based circuits that is linked to goal-directed actions and habits. Prior to her work, the striatum was considered to be a primitive forebrain region. Graybiel found that the striatum instead has a complex architecture consisting of specialized zones: striosomes and the surrounding matrix. Her group went on to relate these zones to function, finding that striosomes and matrix differentially influence behavior. Among other important findings, Graybiel has shown that striosomes are focal points in circuits that link mood-related cortical regions with the dopamine-containing neurons of the midbrain, which are implicated in learning and motivation and which undergo degeneration in Parkinson’s disease and other clinical conditions. She and her group have shown that these regions are activated by drugs of abuse, and that they influence decision-making, including decisions that require weighing of costs and benefits.

Graybiel continues to drive the field forward, finding that striatal neurons spike in an accentuated fashion to ‘bookend’ the beginning and end of behavioral sequences in rodents and primates. This activity pattern suggests that the striatum demarcates useful behavioral sequences such as, in the case of rodents, pressing levers or running down mazes to receive a reward. Additionally, she and her group have worked on miniaturized tools for chemical sensing and delivery as part of a continued drive toward therapeutic intervention, in collaboration with the laboratories of Robert Langer in the Department of Chemical Engineering and Michael Cima in the Department of Materials Science and Engineering.

“My first thought was of our lab, and how fortunate I am to work with such talented and wonderful people,” says Graybiel. “I am deeply honored to receive this prestigious award on behalf of our lab.”

The Gruber Foundation’s international prize program recognizes researchers in the areas of cosmology, neuroscience and genetics, and includes a cash award of $500,000 in each field. The medal given to award recipients also outlines the general mission of the foundation, “for the fundamental expansion of human knowledge,” and the prizes specifically honor those whose groundbreaking work fits into this paradigm.

Graybiel, a member of the MIT Class of 1971, has previously been honored with the National Medal of Science, the Kavli Award, the James R. Killian Faculty Achievement Award at MIT, and the Woman Leader of Parkinson’s Science award from the Parkinson’s Disease Foundation, and has been recognized by the National Parkinson Foundation for her contributions to the understanding and treatment of Parkinson’s disease. Graybiel is a member of the National Academy of Sciences, the National Academy of Medicine, and the American Academy of Arts and Sciences.

The Gruber Neuroscience Prize will be presented in a ceremony at the annual meeting of the Society for Neuroscience in San Diego this coming November.