School of Science welcomes 10 professors

The MIT School of Science recently welcomed 10 new professors, including Ila Fiete, in the departments of Brain and Cognitive Sciences, Chemistry, Biology, Physics, Mathematics, and Earth, Atmospheric and Planetary Sciences.

Ila Fiete uses computational and theoretical tools to better understand the dynamical mechanisms and coding strategies that underlie computation in the brain, with a focus on elucidating how plasticity and development shape networks to perform computation and why information is encoded the way that it is. Her recent focus is on error control in neural codes, rules for synaptic plasticity that enable neural circuit organization, and questions at the nexus of information and dynamics in neural systems, such as understanding how coding and statistics fundamentally constrain dynamics and vice versa.

Tristan Collins conducts research at the intersection of geometric analysis, partial differential equations, and algebraic geometry. In joint work with Valentino Tosatti, Collins described the singularity formation of the Ricci flow on Kähler manifolds in terms of algebraic data. In recent work with Gabor Szekelyhidi, he gave a necessary and sufficient algebraic condition for the existence of Ricci-flat metrics, which play an important role in string theory and mathematical physics. This result led to the discovery of infinitely many new Einstein metrics on the 5-dimensional sphere. With Shing-Tung Yau and Adam Jacob, Collins is currently studying the relationship between categorical stability conditions and the existence of solutions to differential equations arising from mirror symmetry.

Collins earned his BS in mathematics at the University of British Columbia in 2009, after which he completed his PhD in mathematics at Columbia University in 2014 under the direction of Duong H. Phong. Following a four-year appointment as a Benjamin Peirce Assistant Professor at Harvard University, Collins joins MIT as an assistant professor in the Department of Mathematics.

Julien de Wit develops and applies new techniques to study exoplanets, their atmospheres, and their interactions with their stars. While a graduate student in the Sara Seager group at MIT, he developed innovative analysis techniques to map exoplanet atmospheres, studied the radiative and tidal planet-star interactions in eccentric planetary systems, and constrained the atmospheric properties and mass of exoplanets solely from transmission spectroscopy. He plays a critical role in the TRAPPIST/SPECULOOS project, headed by the Université de Liège, leading the atmospheric characterization of the newly discovered TRAPPIST-1 planets, for which he has already obtained significant results with the Hubble Space Telescope. De Wit’s efforts are now also focused on expanding the SPECULOOS network of telescopes in the northern hemisphere to continue the search for new potentially habitable TRAPPIST-1-like systems.

De Wit earned a BEng in physics and mechanics from the Université de Liège in Belgium in 2008, and an MS in aeronautic engineering and an MRes in astrophysics, planetology, and space sciences from the Institut Supérieur de l’Aéronautique et de l’Espace at the Université de Toulouse, France, in 2010; he returned to the Université de Liège for an MS in aerospace engineering, completed in 2011. After finishing his PhD in planetary sciences in 2014 and a postdoc at MIT, both under the direction of Sara Seager, he joins the MIT faculty in the Department of Earth, Atmospheric and Planetary Sciences as an assistant professor.

After earning a BS in mathematics and physics at the University of Michigan, Fiete obtained her PhD in 2004 at Harvard University in the Department of Physics. While holding an appointment at the Kavli Institute for Theoretical Physics at the University of California, Santa Barbara from 2004 to 2006, she was also a visiting member of the Center for Theoretical Biophysics at the University of California at San Diego. Fiete subsequently spent two years at Caltech as a Broad Fellow in brain circuitry, and in 2008 joined the faculty of the University of Texas at Austin. She joins the MIT faculty in the Department of Brain and Cognitive Sciences as an associate professor with tenure.

Ankur Jain explores the biology of RNA aggregation. Several genetic neuromuscular disorders, such as myotonic dystrophy and amyotrophic lateral sclerosis, are caused by expansions of nucleotide repeats in their cognate disease genes. Such repeats cause the transcribed RNA to form pathogenic clumps or aggregates. Jain uses a variety of biophysical approaches to understand how the RNA aggregates form, and how they can be disrupted to restore normal cell function. Jain will also study the role of RNA-DNA interactions in chromatin organization, investigating whether the RNA transcribed from telomeres (the protective repetitive sequences that cap the ends of chromosomes) undergoes the phase separation that characterizes repeat expansion diseases.

Jain completed a bachelor of technology degree in biotechnology and biochemical engineering at the Indian Institute of Technology Kharagpur, India, in 2007, followed by a PhD in biophysics and computational biology at the University of Illinois at Urbana-Champaign under the direction of Taekjip Ha in 2013. After a postdoc at the University of California at San Francisco, he joins the MIT faculty in the Department of Biology as an assistant professor with an appointment as a member of the Whitehead Institute for Biomedical Research.

Kiyoshi Masui works to understand fundamental physics and the evolution of the universe through observations of the large-scale structure — the distribution of matter on scales much larger than galaxies. He works principally with radio-wavelength surveys to develop new observational methods such as hydrogen intensity mapping and fast radio bursts. Masui has shown that such observations will ultimately permit precise measurements of properties of the early and late universe and enable sensitive searches for primordial gravitational waves. To this end, he is working with a new generation of rapid-survey digital radio telescopes that have no moving parts and rely on signal processing software running on large computer clusters to focus and steer, including work on the Canadian Hydrogen Intensity Mapping Experiment (CHIME).

Masui obtained a BSc in engineering physics at Queen’s University, Canada, in 2008 and a PhD in physics at the University of Toronto in 2013 under the direction of Ue-Li Pen. After postdoctoral appointments at the University of British Columbia as a Canadian Institute for Advanced Research Global Scholar and a Canadian Institute for Theoretical Astrophysics National Fellow, Masui joins the MIT faculty in the Department of Physics as an assistant professor.

Phiala Shanahan studies theoretical nuclear and particle physics, in particular the structure and interactions of hadrons and nuclei from the fundamental (quark and gluon) degrees of freedom encoded in the Standard Model of particle physics. Shanahan’s recent work has focused on the role of gluons, the force carriers of the strong interactions described by quantum chromodynamics (QCD), in hadron and nuclear structure by using analytic tools and high-performance supercomputing. She recently achieved the first calculation of the gluon structure of light nuclei, making predictions that will be testable in new experiments proposed at Jefferson National Accelerator Facility and at the planned Electron-Ion Collider. She has also undertaken extensive studies of the role of strange quarks in the proton and light nuclei that sharpen theory predictions for dark matter cross-sections in direct detection experiments. To overcome computational limitations in QCD calculations for hadrons and in particular for nuclei, Shanahan is pursuing a program to integrate modern machine learning techniques in computational nuclear physics studies.

Shanahan obtained her BS in 2012 and her PhD in 2015, both in physics, from the University of Adelaide. She completed postdoctoral work at MIT in 2017, then held a joint position as an assistant professor at the College of William and Mary and senior staff scientist at the Thomas Jefferson National Accelerator Facility until 2018. She returns to MIT in the Department of Physics as an assistant professor.

Nike Sun works in probability theory at the interface of statistical physics and computation. Her research focuses in particular on phase transitions in average-case (randomized) formulations of classical computational problems. Her joint work with Jian Ding and Allan Sly establishes the satisfiability threshold of random k-SAT for large k, and relatedly the independence ratio of random regular graphs of large degree. Both are long-standing open problems where heuristic methods of statistical physics yield detailed conjectures, but few rigorous techniques exist. More recently she has been investigating phase transitions of dense graph models.

Sun completed a BA in mathematics and an MA in statistics at Harvard in 2009, and an MASt in mathematics at Cambridge in 2010. She received her PhD in statistics from Stanford University in 2014 under the supervision of Amir Dembo. She held a Schramm fellowship at Microsoft New England and MIT Mathematics in 2014-2015 and a Simons postdoctoral fellowship at the University of California at Berkeley in 2016, and joined the Berkeley Department of Statistics as an assistant professor in 2016. She returns to the MIT Department of Mathematics as an associate professor with tenure.

Alison Wendlandt focuses on the development of selective, catalytic reactions using the tools of organic and organometallic synthesis and physical organic chemistry. Mechanistic study plays a central role in the development of these new transformations. Her projects involve the design of new catalysts and catalytic transformations, identification of important applications for selective catalytic processes, and elucidation of new mechanistic principles to expand powerful existing catalytic reaction manifolds.

Wendlandt received a BS in chemistry and biological chemistry from the University of Chicago in 2007, an MS in chemistry from Yale University in 2009, and a PhD in chemistry from the University of Wisconsin at Madison in 2015 under the direction of Shannon S. Stahl. Following an NIH Ruth L. Kirschstein Postdoctoral Fellowship at Harvard University, Wendlandt joins the MIT faculty in the Department of Chemistry as an assistant professor.

Chenyang Xu specializes in higher-dimensional algebraic geometry, an area that involves classifying algebraic varieties, primarily through the minimal model program (MMP). The MMP was introduced by Fields Medalist S. Mori in the early 1980s to make advances in higher-dimensional birational geometry, and was further developed by Hacon and McKernan in the mid-2000s so that it could be applied to other questions. Collaborating with Hacon, Xu extended the MMP to varieties satisfying certain conditions, such as those of characteristic p, and, with Hacon and McKernan, proved a fundamental conjecture on the MMP, generating a great deal of follow-up activity. In collaboration with Chi Li, Xu proved a conjecture of Gang Tian concerning higher-dimensional Fano varieties, a significant achievement. In a series of papers with different collaborators, he successfully applied the MMP to singularities.

Xu received his BS in 2002 and MS in 2004 in mathematics from Peking University, and completed his PhD at Princeton University under János Kollár in 2008. He came to MIT as a CLE Moore Instructor from 2008 to 2011, and was subsequently appointed assistant professor at the University of Utah. He returned to Peking University as a research fellow at the Beijing International Center for Mathematical Research in 2012, and was promoted to professor in 2013. Xu joins the MIT faculty as a full professor in the Department of Mathematics.

Zhiwei Yun’s research is at the crossroads of algebraic geometry, number theory, and representation theory. He studies geometric structures aimed at solving problems in representation theory and number theory, especially those in the Langlands program. While he was a CLE Moore Instructor at MIT, he started to develop the theory of rigid automorphic forms and used it to answer an open question of J.-P. Serre on motives, which also led to a major result on the inverse Galois problem in number theory. More recently, in joint work with Wei Zhang, he gave a geometric interpretation of higher derivatives of automorphic L-functions in terms of intersection numbers, which sheds new light on the geometric analogue of the Birch and Swinnerton-Dyer conjecture.

Yun earned his BS at Peking University in 2004, after which he completed his PhD at Princeton University in 2009 under the direction of Robert MacPherson. After appointments at the Institute for Advanced Study and as a CLE Moore Instructor at MIT, he held faculty appointments at Stanford and Yale. He returned to the MIT Department of Mathematics as a full professor in the spring of 2018.

Mark Harnett named Vallee Foundation Scholar

The Bert L. and N. Kuggie Vallee Foundation has named McGovern Institute investigator Mark Harnett a 2018 Vallee Scholar. The Vallee Scholars Program recognizes original, innovative, and pioneering work by early-career scientists at a critical juncture in their careers and provides $300,000 in discretionary funds, to be spent over four years, for basic biomedical research. Harnett is among five researchers named to this year’s Vallee Scholars Program.

Harnett, who is also the Fred and Carole Middleton Career Development Assistant Professor in the Department of Brain and Cognitive Sciences, is being recognized for his work exploring how the biophysical features of neurons give rise to the computational power of the brain. By exploiting new technologies and approaches at the interface of biophysics and systems neuroscience, research in the Harnett lab aims to provide a new understanding of the biology underlying how mammalian brains learn. This may open new areas of research into brain disorders characterized by atypical learning and memory (such as dementia and schizophrenia) and may also have important implications for designing new, brain-inspired artificial neural networks.

The Vallee Foundation was established in 1996 by Bert and Kuggie Vallee to foster originality, creativity, and leadership within biomedical scientific research and medical education. The foundation’s goal to fund originality, innovation, and pioneering work “recognizes the future promise of these scientists who are dedicated to understanding fundamental biological processes.” Harnett joins a list of 24 Vallee Scholars, including McGovern investigator Feng Zhang, who have been appointed to the program since its inception in 2013.

New sensors track dopamine in the brain for more than a year

Dopamine, a signaling molecule used throughout the brain, plays a major role in regulating our mood, as well as controlling movement. Many disorders, including Parkinson’s disease, depression, and schizophrenia, are linked to dopamine deficiencies.

MIT neuroscientists have now devised a way to measure dopamine in the brain for more than a year, which they believe will help them to learn much more about its role in both healthy and diseased brains.

“Despite all that is known about dopamine as a crucial signaling molecule in the brain, implicated in neurologic and neuropsychiatric conditions as well as our ability to learn, it has been impossible to monitor changes in the online release of dopamine over time periods long enough to relate these to clinical conditions,” says Ann Graybiel, an MIT Institute Professor, a member of MIT’s McGovern Institute for Brain Research, and one of the senior authors of the study.

Michael Cima, the David H. Koch Professor of Engineering in the Department of Materials Science and Engineering and a member of MIT’s Koch Institute for Integrative Cancer Research, and Robert Langer, the David H. Koch Institute Professor and a member of the Koch Institute, are also senior authors of the study. MIT postdoc Helen Schwerdt is the lead author of the paper, which appears in the Sept. 12 issue of Communications Biology.

Long-term sensing

Dopamine is one of many neurotransmitters that neurons in the brain use to communicate with each other. Traditional systems for measuring dopamine — carbon electrodes with a shaft diameter of about 100 microns — can only be used reliably for about a day because they produce scar tissue that interferes with the electrodes’ ability to interact with dopamine.

In 2015, the MIT team demonstrated that tiny microfabricated sensors could be used to measure dopamine levels in a part of the brain called the striatum, which contains dopamine-producing cells that are critical for habit formation and reward-reinforced learning.

Because these probes are so small (about 10 microns in diameter), the researchers could implant up to 16 of them to measure dopamine levels in different parts of the striatum. In the new study, the researchers wanted to test whether they could use these sensors for long-term dopamine tracking.

“Our fundamental goal from the very beginning was to make the sensors work over a long period of time and produce accurate readings from day to day,” Schwerdt says. “This is necessary if you want to understand how these signals mediate specific diseases or conditions.”

To develop a sensor that can be accurate over long periods of time, the researchers had to make sure that it would not provoke an immune reaction, to avoid the scar tissue that interferes with the accuracy of the readings.

The MIT team found that their tiny sensors were nearly invisible to the immune system, even over extended periods of time. After the sensors were implanted, populations of microglia (immune cells that respond to short-term damage) and astrocytes (which respond over longer periods) were the same as those in brain tissue that did not have probes inserted.

In this study, the researchers implanted three to five sensors per animal, about 5 millimeters deep, in the striatum. They took readings every few weeks, after stimulating dopamine release from the brainstem, from which dopamine travels to the striatum. They found that the measurements remained consistent for up to 393 days.

“This is the first time that anyone’s shown that these sensors work for more than a few months. That gives us a lot of confidence that these kinds of sensors might be feasible for human use someday,” Schwerdt says.

Paul Glimcher, a professor of physiology and neuroscience at New York University, says the new sensors should enable more researchers to perform long-term studies of dopamine, which is essential for studying phenomena such as learning, which occurs over long time periods.

“This is a really solid engineering accomplishment that moves the field forward,” says Glimcher, who was not involved in the research. “This dramatically improves the technology in a way that makes it accessible to a lot of labs.”

Monitoring Parkinson’s

If developed for use in humans, these sensors could be useful for monitoring Parkinson’s patients who receive deep brain stimulation, the researchers say. This treatment involves implanting an electrode that delivers electrical impulses to a structure deep within the brain. Using a sensor to monitor dopamine levels could help doctors deliver the stimulation more selectively, only when it is needed.

The researchers are now looking into adapting the sensors to measure other neurotransmitters in the brain, and to measure electrical signals, which can also be disrupted in Parkinson’s and other diseases.

“Understanding those relationships between chemical and electrical activity will be really important to understanding all of the issues that you see in Parkinson’s,” Schwerdt says.

The research was funded by the National Institute of Biomedical Imaging and Bioengineering, the National Institute of Neurological Disorders and Stroke, the Army Research Office, the Saks Kavanaugh Foundation, the Nancy Lurie Marks Family Foundation, and Dr. Tenley Albright.

Constructing the striatum

The striatum, the largest nucleus of the basal ganglia in the vertebrate brain, was historically thought to be a homogeneous group of cells. This view was overturned in a classic series of papers from MIT Institute Professor Ann Graybiel. In previous work, Graybiel, who is also an investigator at MIT’s McGovern Institute, found that the striatum is highly organized, both structurally and functionally, as well as in terms of connectivity. Graybiel has now collaborated with Z. Josh Huang’s lab at Cold Spring Harbor Laboratory to map the developmental lineage of cells that give rise to this complex architecture. The authors found that different functions of the striatum, such as execution of actions as opposed to evaluation of outcomes, are defined early on as part of the blueprint that constructs this brain region, rather than sculpted through a later mechanism.

Graybiel and colleagues tracked what happens early in development by driving cell-specific fluorescent markers that allowed them to follow the progenitors that give rise to cells in the striatum. The striatum is known, thanks to Graybiel’s early work, to be organized into compartments called striosomes and the matrix, which have distinct connections to other brain regions. Broadly speaking, striosomes are linked to value-based decision-making and reinforcement-based behaviors, while the matrix has been linked to action execution. These regions are further subdivided into direct and indirect pathways. Direct pathway neurons are involved in releasing inhibition in other regions of the basal ganglia and thus actively promote action; neurons projecting into the indirect pathway instead inhibit “unwanted” actions that are not part of the current “cortical plan.” Based on their tracking, Graybiel and colleagues were indeed able to build a “fate map” that told them when the cells that build these different regions of the striatum commit to a functional path during development.

“It was already well known that individual neurons have lineages that can be traced back to early development, and many such lineages are now being traced,” says Graybiel. “What is so striking in what we have found with the Huang lab is that the earliest specification of lineages we find—at least with the markers that we have used—corresponds to what later become the two major neurochemically distinct compartments of the striatum, rather than many other divisions that might have been specified first. If this is so, then the fundamental developmental ground plan of the striatum is expressed later by these two distinct compartments of the striatum.”

Building the striatum turns out to be a symphony of organization embedded in lateral ganglionic eminence cells, the source of the cells that will end up in the striatum during development. Progenitors made early in development are somewhat committed: they can only generate spiny projection neurons (SPNs) that are striosomal. Following this in time, cells that will give rise to matrix SPNs appear. A second mechanism is then laid over this initial ground plan; it is switched on in both striosomal and matrisomal neurons and independently gives rise to neurons that will connect into direct as opposed to indirect pathways. This latter specification of direct- versus indirect-pathway neurons is less rigid, but there is an overarching tendency for neurons expressing a certain type of dopamine receptor to appear earlier in developmental time. In short, progenitors move through an orchestrated process: they first generate spiny projection neurons that can sit in any area of the striatum, then the ultimate fate of cells becomes more restricted, at the level of striosome or matrix, and finally choices are made in both regions regarding direct versus indirect pathway circuitry. Remarkably, these results suggest that even at the very earliest development of the striatum, its ultimate organization is already laid down in a way that distinguishes value-related circuits from movement-related circuits.

“What is thrilling,” says Graybiel, “is that there are lineage progressions, the step-by-step laying out of the brain’s organization, that turn out to match the striosome-matrix architecture of the striatum, which was not even known to exist 40 years ago!”

The striatum is a hub regulating movement, emotion, motivation, evaluation, and learning, and it is linked to disorders such as Parkinson’s disease and to persistent negative valuations. This means that understanding its construction has important implications, perhaps even, one day, for rebuilding a striatum affected by neurodegeneration. That said, the findings have broader implications. Consider the worm C. elegans. The complete lineage of cells that make up this organism is known, including where each neuron comes from, what it connects to, and its function and phenotype. There’s a clear relationship between lineage and function in this relatively simple organism with its highly stereotyped nervous system. Graybiel’s work suggests that, in the big picture, early development in the forebrain is also providing a game plan. In this case, however, the groundwork underpins circuits that underlie extremely complex behaviors, those that come to support the volitional and habitual behaviors that make up part of who we are as individuals.

Neuroscientists get at the roots of pessimism

Many patients with neuropsychiatric disorders such as anxiety or depression experience negative moods that lead them to focus on the possible downside of a given situation more than the potential benefit.

MIT neuroscientists have now pinpointed a brain region that can generate this type of pessimistic mood. In tests in animals, they showed that stimulating this region, known as the caudate nucleus, induced animals to make more negative decisions: They gave far more weight to the anticipated drawback of a situation than its benefit, compared to when the region was not stimulated. This pessimistic decision-making could continue through the day after the original stimulation.

The findings could help scientists better understand how some of the crippling effects of depression and anxiety arise, and guide them in developing new treatments.

“We feel we were seeing a proxy for anxiety, or depression, or some mix of the two,” says Ann Graybiel, an MIT Institute Professor, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study, which appears in the Aug. 9 issue of Neuron. “These psychiatric problems are still so very difficult to treat for many individuals suffering from them.”

The paper’s lead authors are McGovern Institute research affiliates Ken-ichi Amemori and Satoko Amemori, who perfected the tasks and have been studying emotion and how it is controlled by the brain. McGovern Institute researcher Daniel Gibson, an expert in data analysis, is also an author of the paper.

Emotional decisions

Graybiel’s laboratory has previously identified a neural circuit that underlies a specific kind of decision-making known as approach-avoidance conflict. These types of decisions, which require weighing options with both positive and negative elements, tend to provoke a great deal of anxiety. Her lab has also shown that chronic stress dramatically affects this kind of decision-making: More stress usually leads animals to choose high-risk, high-payoff options.

In the new study, the researchers wanted to see if they could reproduce an effect that is often seen in people with depression, anxiety, or obsessive-compulsive disorder. These patients tend to engage in ritualistic behaviors designed to combat negative thoughts, and to place more weight on the potential negative outcome of a given situation. This kind of negative thinking, the researchers suspected, could influence approach-avoidance decision-making.

To test this hypothesis, the researchers stimulated the caudate nucleus, a brain region linked to emotional decision-making, with a small electrical current as animals were offered a reward (juice) paired with an unpleasant stimulus (a puff of air to the face). In each trial, the ratio of reward to aversive stimuli was different, and the animals could choose whether to accept or not.

This kind of decision-making requires cost-benefit analysis. If the reward is high enough to balance out the puff of air, the animals will choose to accept it, but when that ratio is too low, they reject it. When the researchers stimulated the caudate nucleus, the cost-benefit calculation became skewed, and the animals began to avoid combinations that they previously would have accepted. This continued even after the stimulation ended, and could also be seen the following day, after which point it gradually disappeared.
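To make the idea concrete, the decision can be caricatured as a simple weighing of reward against cost. The sketch below is a hypothetical illustration rather than the study’s actual model, with caudate stimulation represented only as a heavier weight placed on the aversive cost; the function name and parameter values are invented for this example.

```python
# Illustrative sketch (not the authors' model): a linear cost-benefit rule for an
# approach-avoidance offer. Stimulation is modeled as inflating the weight placed
# on the aversive cost, so previously acceptable offers are rejected.

def accepts_offer(reward, airpuff, cost_weight=1.0):
    """Accept when the subjective value (reward minus weighted cost) is positive."""
    return reward - cost_weight * airpuff > 0

# Example offer: moderate reward paired with a moderate air puff (hypothetical units).
reward, airpuff = 0.6, 0.5

print(accepts_offer(reward, airpuff, cost_weight=1.0))  # baseline: True (accept)
print(accepts_offer(reward, airpuff, cost_weight=1.5))  # "stimulated": False (reject)
```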

This result suggests that the animals began to devalue the reward that they previously wanted, and focused more on the cost of the aversive stimulus. “This state we’ve mimicked has an overestimation of cost relative to benefit,” Graybiel says.

The study provides valuable insight into the role of the basal ganglia (a region that includes the caudate nucleus) in this type of decision-making, says Scott Grafton, a professor of neuroscience at the University of California at Santa Barbara, who was not involved in the research.

“We know that the frontal cortex and the basal ganglia are involved, but the relative contributions of the basal ganglia have not been well understood,” Grafton says. “This is a nice paper because it puts some of the decision-making process in the basal ganglia as well.”

A delicate balance

The researchers also found that brainwave activity in the caudate nucleus was altered when decision-making patterns changed. This change, discovered by Amemori, is in the beta frequency and might serve as a biomarker to monitor whether animals or patients respond to drug treatment, Graybiel says.

Graybiel is now working with psychiatrists at McLean Hospital to study patients who suffer from depression and anxiety, to see if their brains show abnormal activity in the neocortex and caudate nucleus during approach-avoidance decision-making. Magnetic resonance imaging (MRI) studies have shown abnormal activity in two regions of the medial prefrontal cortex that connect with the caudate nucleus.

The caudate nucleus has within it regions that are connected with the limbic system, which regulates mood, and it sends input to motor areas of the brain as well as dopamine-producing regions. Graybiel and Amemori believe that the abnormal activity seen in the caudate nucleus in this study could be somehow disrupting dopamine activity.

“There must be many circuits involved,” she says. “But apparently we are so delicately balanced that just throwing the system off a little bit can rapidly change behavior.”

The research was funded by the National Institutes of Health, the CHDI Foundation, the U.S. Office of Naval Research, the U.S. Army Research Office, MEXT KAKENHI, the Simons Center for the Social Brain, the Naito Foundation, the Uehara Memorial Foundation, Robert Buxton, Amy Sommer, and Judy Goldberg.

Michale Fee receives McKnight Technological Innovations in Neuroscience Award

McGovern Institute investigator Michale Fee has been selected to receive a 2018 McKnight Technological Innovations in Neuroscience Award for his research on “new technologies for imaging and analyzing neural state-space trajectories in freely-behaving small animals.”

“I am delighted to get support from the McKnight Foundation,” says Fee, who is also the Glen V. and Phyllis F. Dorflinger Professor in the Department of Brain and Cognitive Sciences at MIT. “We’re very excited about this project, which aims to develop technology that will be a great help to the broader neuroscience community.”

Fee studies the neural mechanisms by which the brain, specifically that of juvenile songbirds, learns complex sequential behaviors. The way that songbirds learn a song through trial and error is analogous to humans learning complex behaviors, such as riding a bicycle. While it would be insightful to link such learning to neural activity, current methods can monitor only a limited number of neurons at once, a significant limitation since such learning and behavior involve complex interactions between larger circuits. A wider field of view for recordings would help decipher the neural changes linked to this learning paradigm, but current microscopy equipment is large relative to a juvenile songbird, and microscopes that can record neural activity generally constrain the behavior of small animals. Ideally, the technology needs to be lightweight (about 1 gram) and compact (the size of a dime), a far cry from current microscopes that weigh in at 3 grams. Fee hopes to break these technical boundaries and miniaturize the recording equipment, thus allowing recording of more neurons in naturally behaving small animals.

“We are thrilled that the McKnight Foundation has chosen to support this project. The technology that Michale’s developing will help to better visualize and understand the circuits underlying learning,” says Robert Desimone, director of MIT’s McGovern Institute for Brain Research.

In addition to development and miniaturization of the microscopy hardware itself, the award will support the development of technology that helps analyze the resulting images, so that the neuroscience community at large can more easily deploy and use the technology.

Are eyes the window to the soul?

Covert attention has been defined as shifting attention without shifting the eyes. The notion that we can internally pay attention to an object in a scene without making eye movements to it has been a cornerstone of the fields of psychology and cognitive neuroscience, which attempt to understand mental phenomena that are purely internal to the mind, divorced from movements of the eyes or limbs. A study from the McGovern Institute for Brain Research at MIT now questions the dissociation of eye movements from attention in this context, finding that microsaccades precede modulation of specific brain regions associated with attention. In other words, a small shift of the eyes is linked to covert attention, after all.

Seeing the world through human eyes, which have a focused, high-acuity center to the field of vision, requires saccades (rapid movements of the eyes that move between points of fixation). Saccades help to piece together important information in an overall scene and are closely linked to attention shifts, at least in the case of overt attention. In the case of covert attention, the view has been different since this type of attention can shift while the gaze is fixed. Microsaccades are tiny movements of the eyes that are made when subjects maintain fixation on an object.

“Microsaccades are typically so small that they are ignored by many researchers,” says Robert Desimone, director of MIT’s McGovern Institute for Brain Research and senior author on the study. “We went in and tested what they might represent by linking them to attentional firing in particular brain regions.”

In the study, Desimone and his team used an infrared eye-tracking system to follow microsaccades in awake macaques, monitoring activity in cortical regions of the brain linked to visual attention, including area V4. They saw increased neuronal firing in V4, but only when it was preceded by a microsaccade toward the attended stimulus; this effect on neuronal activity vanished when a microsaccade was directed away from the stimulus. The authors also saw increased firing in the inferior temporal (IT) cortex after a microsaccade, and found that even when an object sat amongst a ‘clutter’ of different visual objects, attention to a specific object in the group was preceded by a microsaccade.

“I expected some links between microsaccades and covert attention,” says the study’s lead author Eric Lowet, now a postdoctoral fellow at Boston University. “However, the magnitude of the effect and the precise link to microsaccade onset were surprising to me and the lab. Furthermore, to see these effects also in the IT cortex, which has large receptive fields and is involved in higher-order visual cognition, was striking.”

Why was this strong effect previously missed? The separation of eye movements and attention is so core to the concept of covert attention that studies often actively seek to separate the two, directing attention to a target away from the point of fixation while the subject’s gaze is maintained on a fixation stimulus. The authors are the first to directly compare microsaccades toward and away from an attended stimulus, and it was this setup, and the difference in neuronal firing between the two types of eye movement, that allowed them to draw their conclusions.

“When we first separated attention effects on V4 firing rates by the direction of the microsaccade relative to the attended stimulus,” Lowet explains, “I realized this analysis was a game changer.”
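As a rough illustration of that kind of trial split, the sketch below uses simulated, hypothetical numbers (not the study’s data or analysis code) to show how firing rates might be grouped by the direction of the preceding microsaccade and then compared.

```python
# Minimal sketch with simulated data: split attention trials by whether the
# preceding microsaccade was directed toward or away from the attended stimulus,
# then compare mean V4 firing rates between the two groups. All values are invented.
import numpy as np

rng = np.random.default_rng(0)
trials = [
    {"rate": rng.normal(22 if toward else 15, 3), "toward": bool(toward)}
    for toward in rng.random(200) < 0.5
]

toward_rates = [t["rate"] for t in trials if t["toward"]]
away_rates = [t["rate"] for t in trials if not t["toward"]]

print(f"mean V4 rate, microsaccade toward attended stimulus: {np.mean(toward_rates):.1f} Hz")
print(f"mean V4 rate, microsaccade away from attended stimulus: {np.mean(away_rates):.1f} Hz")
```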

The study suggests several future directions that are being pursued by the Desimone lab. Low-frequency rhythmic sampling (in the delta and theta range) has been suggested as a possible explanation for attentional modulation. According to this idea, people sample visual scenes rhythmically, with an intrinsic sampling interval of about a quarter of a second.

“We do not know whether microsaccades and delta/theta rhythms have a common generator,” points out Karthik Srinivasan, a co-author on the study and a scientist at the McGovern Institute. “But if they do, what brain areas are the source of such a generator? Are the low frequency rhythms observed merely the frequency-analytic manifestation of microsaccades or are they linked?”

These are intriguing future steps for analysis that can be addressed in light of the current study which points to microsaccades as an important marker for visual attention and cognitive processes. Indeed, some of the previously hidden aspects of our cognition are revealed through our motor behavior after all.

Chronic neural implants modulate microstructures in the brain with pinpoint accuracy

Post by Windy Pham

The diversity of structures and functions in the brain is becoming increasingly clear in research today. Key structures in the brain regulate emotion, anxiety, happiness, memory, and mobility. These structures come in a huge variety of shapes and sizes and can sit physically near one another. Dysfunction of these structures, and of the circuits linking them, is a common cause of many neurologic and neuropsychiatric diseases. For example, the substantia nigra is only a few millimeters in size yet is crucial for movement and coordination. Destruction of substantia nigra neurons is what causes the motor symptoms of Parkinson’s disease.

New technologies such as optogenetics have allowed researchers to identify similar microstructures in the brain. However, these techniques rely on liquid infusions into the brain, which prepare the regions to be studied to respond to light. These infusions are done with large needles, which do not allow fine enough control to target specific regions. Clinical therapy has also lagged behind: new drug therapies aimed at treating these conditions are delivered orally, which distributes the drug throughout the brain, or through large needle-cannulas, which likewise lack the fine control to accurately dose specific regions. As a result, patients with neurologic and psychiatric disorders frequently fail to respond to therapies because of poor drug delivery to the diseased regions.

A new study addressing this problem has been published in Proceedings of the National Academy of Sciences. The lead author is Khalil Ramadi, a medical engineering and medical physics (MEMP) PhD candidate in the Harvard-MIT Program in Health Sciences and Technology (HST). For this study, Khalil and his thesis advisor, Michael Cima, the David H. Koch Professor of Engineering in the Department of Materials Science and Engineering and the Koch Institute for Integrative Cancer Research and associate dean of innovation in the School of Engineering, collaborated with Institute Professors Robert Langer and Ann Graybiel, an investigator at the McGovern Institute for Brain Research, to tackle this issue.

The team developed tools to enable targeted delivery of nanoliters of drugs to deep brain structures through chronically implanted microprobes. They also developed nuclear imaging techniques based on positron emission tomography (PET) to measure the volume of the brain region targeted by each infusion. “Drugs for disorders of the central nervous system are nonspecific and get distributed throughout the brain,” Cima says. “Our animal studies show that volume is a critical factor when delivering drugs to the brain, as important as the total dose delivered. Using microcannulas and microPET imaging, we can control the area of brain exposed to these drugs, improving targeting accuracy twofold compared to the traditional methods used today.”

The researchers were also able to design cannulas that are MRI-compatible and that remained implanted for up to one year in rats. Pairing these cannulas with micropumps allowed the researchers to remotely control the behavior of the animals. Significantly, they found that varying the infused volume alone had a profound effect on the behavior induced, even when the total drug dose delivered stayed constant. These results show that regulating the volume delivered to a brain region is extremely important in influencing brain activity. The technology could potentially enable precise investigation of neurological disease pathology in preclinical models, and more effective treatment in human patients.

How the brain performs flexible computations

Humans can perform a vast array of mental operations and adjust their behavioral responses based on external instructions and internal beliefs. For example, to tap your feet to a musical beat, your brain has to process the incoming sound and also use your internal knowledge of how the song goes.

MIT neuroscientists have now identified a strategy that the brain uses to rapidly select and flexibly perform different mental operations. To make this discovery, they applied a mathematical framework known as dynamical systems analysis to understand the logic that governs the evolution of neural activity across large populations of neurons.

“The brain can combine internal and external cues to perform novel computations on the fly,” says Mehrdad Jazayeri, the Robert A. Swanson Career Development Professor of Life Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study. “What makes this remarkable is that we can make adjustments to our behavior at a much faster time scale than the brain’s hardware can change. As it turns out, the same hardware can assume many different states, and the brain uses instructions and beliefs to select between those states.”

Previous work from Jazayeri’s group has found that the brain can control when it will initiate a movement by altering the speed at which patterns of neural activity evolve over time. Here, they found that the brain controls this speed flexibly based on two factors: external sensory inputs and adjustment of internal states, which correspond to knowledge about the rules of the task being performed.

Evan Remington, a McGovern Institute postdoc, is the lead author of the paper, which appears in the June 6 edition of Neuron. Other authors are former postdoc Devika Narain and MIT graduate student Eghbal Hosseini.

Ready, set, go

Neuroscientists believe that “cognitive flexibility,” or the ability to rapidly adapt to new information, resides in the brain’s higher cortical areas, but little is known about how the brain achieves this kind of flexibility.

To understand the new findings, it is useful to think of how switches and dials can be used to change the output of an electrical circuit. For example, in an amplifier, a switch may select the sound source by controlling the input to the circuit, and a dial may adjust the volume by controlling internal parameters such as a variable resistance. The MIT team theorized that the brain similarly transforms instructions and beliefs to inputs and internal states that control the behavior of neural circuits.

To test this, the researchers recorded neural activity in the frontal cortex of animals trained to perform a flexible timing task called “ready, set, go.” In this task, the animal sees two visual flashes — “ready” and “set” — that are separated by an interval anywhere between 0.5 and 1 second, and initiates a movement — “go” — some time after “set.” The animal has to initiate the movement such that the “set-go” interval is either the same as or 1.5 times the “ready-set” interval. The instruction for whether to use a multiplier of 1 or 1.5 is provided in each trial.
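To make the task’s arithmetic concrete, here is a minimal sketch (an illustration only, with hypothetical function and parameter names, not the researchers’ code): the required movement time is simply the measured interval scaled by the cued gain.

```python
# Sketch of the timing rule in the "ready, set, go" task as described above:
# the animal must produce a set-go interval equal to the measured ready-set
# interval times the cued multiplier (1 or 1.5). Names and checks are illustrative.

def target_set_go_interval(ready_set_interval_s, multiplier):
    """Return the required set-go interval for a given ready-set interval and cue."""
    assert 0.5 <= ready_set_interval_s <= 1.0, "task intervals span 0.5 to 1 second"
    assert multiplier in (1.0, 1.5), "the cue instructs a gain of 1 or 1.5"
    return ready_set_interval_s * multiplier

print(target_set_go_interval(0.8, 1.0))              # -> 0.8 (same interval)
print(round(target_set_go_interval(0.8, 1.5), 3))    # -> 1.2 (1.5x the interval)
```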

Neural signals recorded during the “set-go” interval clearly carried information about both the multiplier and the measured length of the “ready-set” interval, but the nature of these representations seemed bewilderingly complex. To decode the logic behind these representations, the researchers used the dynamical systems analysis framework. This analysis is used in the study of a wide range of physical systems, from simple electrical circuits to space shuttles.

The application of this approach to neural data in the “ready, set, go” task enabled Jazayeri and his colleagues to discover how the brain adjusts the inputs to and initial conditions of frontal cortex to control movement times flexibly. A switch-like operation sets the input associated with the correct multiplier, and a dial-like operation adjusts the state of neurons based on the “ready-set” interval. These two complementary control strategies allow the same hardware to produce different behaviors.
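As a toy illustration of this switch-and-dial idea, and emphatically not the authors’ model, one can picture a single state variable that drifts toward a threshold: a switch-like input sets the drift speed, a dial-like initial condition reflects the measured interval, and movement is triggered when the threshold is crossed. All numbers below are hypothetical and chosen only to show how the same dynamics can yield different movement times.

```python
# Toy one-dimensional "drift to threshold" sketch (not the study's model):
# the same integration rule produces different times when the input (drift speed)
# or the initial condition is changed.

def time_to_threshold(initial_state, drift_speed, threshold=1.0, dt=0.001):
    """Integrate the toy state forward until it crosses the threshold; return elapsed time."""
    state, t = initial_state, 0.0
    while state < threshold:
        state += drift_speed * dt
        t += dt
    return t

# With the same starting point, a slower drift (the "switch") yields a 1.5x longer time,
# echoing the two multipliers in the task.
print(time_to_threshold(initial_state=0.4, drift_speed=0.75))  # ~0.8 s
print(time_to_threshold(initial_state=0.4, drift_speed=0.50))  # ~1.2 s
```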

David Sussillo, a research scientist at Google Brain and an adjunct professor at Stanford University, says a key to the study was the research team’s development of new mathematical tools to analyze huge amounts of data from neuron recordings, allowing the researchers to uncover how a large population of neurons can work together to perform mental operations related to timing and rhythm.

“They have very rigorously brought the dynamical systems approach to the problem of timing,” says Sussillo, who was not involved in the research.

“A bridge between behavior and neurobiology”

Many unanswered questions remain about how the brain achieves this flexibility, the researchers say. They are now trying to find out what part of the brain sends information about the multiplier to the frontal cortex, and they also hope to study what happens in these neurons as they first learn tasks that require them to respond flexibly.

“We haven’t connected all the dots from behavioral flexibility to neurobiological details. But what we have done is to establish an algorithmic understanding based on the mathematics of dynamical systems that serves as a bridge between behavior and neurobiology,” Jazayeri says.

The researchers also hope to explore whether this type of model could help to explain behavior of other parts of the brain that have to perform computations flexibly.

The research was funded by the National Institutes of Health, the Sloan Foundation, the Klingenstein Foundation, the Simons Foundation, the McKnight Foundation, the Center for Sensorimotor Neural Engineering, and the McGovern Institute.

Ann Graybiel wins 2018 Gruber Neuroscience Prize

Institute Professor Ann Graybiel, a professor in the Department of Brain and Cognitive Sciences and a member of MIT’s McGovern Institute for Brain Research, is being recognized by the Gruber Foundation for her work on the structure, organization, and function of the once-mysterious basal ganglia. She was awarded the prize alongside Okihide Hikosaka of the National Institutes of Health’s National Eye Institute and Wolfram Schultz of the University of Cambridge in the U.K.

The basal ganglia have long been known to play a role in movement, and the work of Graybiel and others helped to extend their roles to cognition and emotion. Dysfunction in the basal ganglia has been linked to a host of disorders including Parkinson’s disease, Huntington’s disease, obsessive-compulsive disorder and attention-deficit hyperactivity disorder, and to depression and anxiety disorders. Graybiel’s research focuses on the circuits thought to underlie these disorders, and on how these circuits act to help us form habits in everyday life.

“We are delighted that Ann has been honored with the Gruber Neuroscience Prize,” says Robert Desimone, director of the McGovern Institute. “Ann’s work has truly elucidated the complexity and functional importance of these forebrain structures. Her work has driven the field forward in a fundamental fashion, and continues to do so.”

Graybiel’s research focuses broadly on the striatum, a hub in basal ganglia-based circuits that is linked to goal-directed actions and habits. Prior to her work, the striatum was considered to be a primitive forebrain region. Graybiel found that the striatum instead has a complex architecture consisting of specialized zones: striosomes and the surrounding matrix. Her group went on to relate these zones to function, finding that striosomes and matrix differentially influence behavior. Among other important findings, Graybiel has shown that striosomes are focal points in circuits that link mood-related cortical regions with the dopamine-containing neurons of the midbrain, which are implicated in learning and motivation and which undergo degeneration in Parkinson’s disease and other clinical conditions. She and her group have shown that these regions are activated by drugs of abuse, and that they influence decision-making, including decisions that require weighing of costs and benefits.

Graybiel continues to drive the field forward, finding that striatal neurons spike in an accentuated fashion and ‘bookend’ the beginning and end of behavioral sequences in rodents and primates. This activity pattern suggests that the striatum demarcates useful behavioral sequences such as, in the case of rodents, pressing levers or running down mazes to receive a reward. Additionally, she and her group have worked on miniaturized tools for chemical sensing and delivery as part of a continued drive toward therapeutic intervention, in collaboration with the laboratories of Robert Langer in the Department of Chemical Engineering and Michael Cima in the Department of Materials Science and Engineering.

“My first thought was of our lab, and how fortunate I am to work with such talented and wonderful people,” says Graybiel.  “I am deeply honored to be recognized by this prestigious award on behalf of our lab.”

The Gruber Foundation’s international prize program recognizes researchers in the areas of cosmology, neuroscience and genetics, and includes a cash award of $500,000 in each field. The medal given to award recipients also outlines the general mission of the foundation, “for the fundamental expansion of human knowledge,” and the prizes specifically honor those whose groundbreaking work fits into this paradigm.

Graybiel, a member of the MIT Class of 1971, has also previously been honored with the National Medal of Science, the Kavli Prize, the James R. Killian Faculty Achievement Award at MIT, and the Woman Leader of Parkinson’s Science award from the Parkinson’s Disease Foundation, and has been recognized by the National Parkinson Foundation for her contributions to the understanding and treatment of Parkinson’s disease. Graybiel is a member of the National Academy of Sciences, the National Academy of Medicine, and the American Academy of Arts and Sciences.

The Gruber Neuroscience Prize will be presented in a ceremony at the annual meeting of the Society for Neuroscience in San Diego this coming November.