Recognizing the partially seen

When we open our eyes in the morning and take in that first scene of the day, we don’t give much thought to the fact that our brain is processing the objects within our field of view with great efficiency and that it is compensating for a lack of information about our surroundings — all in order to allow us to go about our daily functions. The glass of water you left on the nightstand when preparing for bed is now partially blocked from your line of sight by your alarm clock, yet you know that it is a glass.

This seemingly simple human ability to recognize partially occluded objects — defined in this situation as the effect of one object in a 3-D space blocking another object from view — has been a complicated problem for the computer vision community. Martin Schrimpf, a graduate student in the DiCarlo lab in the Department of Brain and Cognitive Sciences at MIT, explains that machines have become increasingly adept at recognizing whole items quickly and confidently, but when something blocks part of an item from view, they struggle to recognize it accurately.

“For models from computer vision to function in everyday life, they need to be able to digest occluded objects just as well as whole ones — after all, when you look around, most objects are partially hidden behind another object,” says Schrimpf, co-author of a paper on the subject that was recently published in the Proceedings of the National Academy of Sciences (PNAS).

In the new study, he says, “we dug into the underlying computations in the brain and then used our findings to build computational models. By recapitulating visual processing in the human brain, we are thus hoping to also improve models in computer vision.”

How are we as humans able to perform this everyday task repeatedly, without putting much thought or energy into it, identifying whole scenes quickly and accurately after ingesting just pieces? Researchers in the study started with the human visual cortex as a model for how to improve the performance of machines in this setting, says Gabriel Kreiman, an affiliate of the MIT Center for Brains, Minds, and Machines. Kreiman is a professor of ophthalmology at Boston Children’s Hospital and Harvard Medical School and was lead principal investigator for the study.

In their paper, “Recurrent computations for visual pattern completion,” the team showed how they developed a computational model, inspired by physiological and anatomical constraints, that was able to capture the behavioral and neurophysiological observations during pattern completion. In the end, the model provided useful insights into how the brain makes inferences from minimal information.

Work for this study was conducted at the Center for Brains, Minds and Machines within the McGovern Institute for Brain Research at MIT.

School of Science welcomes 10 professors

The MIT School of Science recently welcomed 10 new professors, including Ila Fiete, across the departments of Brain and Cognitive Sciences, Chemistry, Biology, Physics, Mathematics, and Earth, Atmospheric and Planetary Sciences.

Ila Fiete uses computational and theoretical tools to better understand the dynamical mechanisms and coding strategies that underlie computation in the brain, with a focus on elucidating how plasticity and development shape networks to perform computation and why information is encoded the way that it is. Her recent focus is on error control in neural codes, rules for synaptic plasticity that enable neural circuit organization, and questions at the nexus of information and dynamics in neural systems, such as understanding how coding and statistics fundamentally constrain dynamics and vice versa.

Tristan Collins conducts research at the intersection of geometric analysis, partial differential equations, and algebraic geometry. In joint work with Valentino Tosatti, Collins described the singularity formation of the Ricci flow on Kähler manifolds in terms of algebraic data. In recent work with Gabor Szekelyhidi, he gave a necessary and sufficient algebraic condition for the existence of Ricci-flat metrics, which play an important role in string theory and mathematical physics. This result led to the discovery of infinitely many new Einstein metrics on the 5-dimensional sphere. With Shing-Tung Yau and Adam Jacob, Collins is currently studying the relationship between categorical stability conditions and existence of solutions to differential equations arising from mirror symmetry.

Collins earned his BS in mathematics at the University of British Columbia in 2009, after which he completed his PhD in mathematics at Columbia University in 2014 under the direction of Duong H. Phong. Following a four-year appointment as a Benjamin Peirce Assistant Professor at Harvard University, Collins joins MIT as an assistant professor in the Department of Mathematics.

Julien de Wit develops and applies new techniques to study exoplanets, their atmospheres, and their interactions with their stars. While a graduate student in the Sara Seager group at MIT, he developed innovative analysis techniques to map exoplanet atmospheres, studied the radiative and tidal planet-star interactions in eccentric planetary systems, and constrained the atmospheric properties and mass of exoplanets solely from transmission spectroscopy. He plays a critical role in the TRAPPIST/SPECULOOS project, headed by the Université de Liège, leading the atmospheric characterization of the newly discovered TRAPPIST-1 planets, for which he has already obtained significant results with the Hubble Space Telescope. De Wit’s efforts are now also focused on expanding the SPECULOOS network of telescopes in the northern hemisphere to continue the search for new potentially habitable TRAPPIST-1-like systems.

De Wit earned a BEng in physics and mechanics from the Université de Liège in Belgium in 2008, an MS in aeronautic engineering and an MRes in astrophysics, planetology, and space sciences from the Institut Supérieur de l’Aéronautique et de l’Espace at the Université de Toulouse, France in 2010; he returned to the Université de Liège for an MS in aerospace engineering, completed in 2011. After finishing his PhD in planetary sciences in 2014 and a postdoc at MIT, both under the direction of Sara Seager, he joins the MIT faculty in the Department of Earth, Atmospheric and Planetary Sciences as an assistant professor.

After earning a BS in mathematics and physics at the University of Michigan, Fiete obtained her PhD in 2004 at Harvard University in the Department of Physics. While holding an appointment at the Kavli Institute for Theoretical Physics at the University of California, Santa Barbara from 2004 to 2006, she was also a visiting member of the Center for Theoretical Biophysics at the University of California at San Diego. Fiete subsequently spent two years at Caltech as a Broad Fellow in brain circuitry, and in 2008 joined the faculty of the University of Texas at Austin. She joins the MIT faculty in the Department of Brain and Cognitive Sciences as an associate professor with tenure.

Ankur Jain explores the biology of RNA aggregation. Several genetic neuromuscular disorders, such as myotonic dystrophy and amyotrophic lateral sclerosis, are caused by expansions of nucleotide repeats in their cognate disease genes. Such repeats cause the transcribed RNA to form pathogenic clumps or aggregates. Jain uses a variety of biophysical approaches to understand how the RNA aggregates form, and how they can be disrupted to restore normal cell function. Jain will also study the role of RNA-DNA interactions in chromatin organization, investigating whether the RNA transcribed from telomeres (the protective repetitive sequences that cap the ends of chromosomes) undergoes the phase separation that characterizes repeat expansion diseases.

Jain completed a bachelor of technology degree in biotechnology and biochemical engineering at the Indian Institute of Technology Kharagpur, India, in 2007, followed by a PhD in biophysics and computational biology at the University of Illinois at Urbana-Champaign under the direction of Taekjip Ha in 2013. After a postdoc at the University of California at San Francisco, he joins the MIT faculty in the Department of Biology as an assistant professor with an appointment as a member of the Whitehead Institute for Biomedical Research.

Kiyoshi Masui works to understand fundamental physics and the evolution of the universe through observations of the large-scale structure — the distribution of matter on scales much larger than galaxies. He works principally with radio-wavelength surveys to develop new observational methods such as hydrogen intensity mapping and fast radio bursts. Masui has shown that such observations will ultimately permit precise measurements of properties of the early and late universe and enable sensitive searches for primordial gravitational waves. To this end, he is working with a new generation of rapid-survey digital radio telescopes that have no moving parts and rely on signal processing software running on large computer clusters to focus and steer, including work on the Canadian Hydrogen Intensity Mapping Experiment (CHIME).

Masui obtained a BS in engineering physics at Queen’s University, Canada, in 2008 and a PhD in physics at the University of Toronto in 2013 under the direction of Ue-Li Pen. After postdoctoral appointments at the University of British Columbia as the Canadian Institute for Advanced Research Global Scholar and the Canadian Institute for Theoretical Astrophysics National Fellow, Masui joins the MIT faculty in the Department of Physics as an assistant professor.

Phiala Shanahan studies theoretical nuclear and particle physics, in particular the structure and interactions of hadrons and nuclei from the fundamental (quark and gluon) degrees of freedom encoded in the Standard Model of particle physics. Shanahan’s recent work has focused on the role of gluons, the force carriers of the strong interactions described by quantum chromodynamics (QCD), in hadron and nuclear structure by using analytic tools and high-performance supercomputing. She recently achieved the first calculation of the gluon structure of light nuclei, making predictions that will be testable in new experiments proposed at Jefferson National Accelerator Facility and at the planned Electron-Ion Collider. She has also undertaken extensive studies of the role of strange quarks in the proton and light nuclei that sharpen theory predictions for dark matter cross-sections in direct detection experiments. To overcome computational limitations in QCD calculations for hadrons and in particular for nuclei, Shanahan is pursuing a program to integrate modern machine learning techniques in computational nuclear physics studies.

Shanahan obtained her BS in 2012 and her PhD in 2015, both in physics, from the University of Adelaide. She completed postdoctoral work at MIT in 2017, then held a joint position as an assistant professor at the College of William and Mary and senior staff scientist at the Thomas Jefferson National Accelerator Facility until 2018. She returns to MIT in the Department of Physics as an assistant professor.

Nike Sun works in probability theory at the interface of statistical physics and computation. Her research focuses in particular on phase transitions in average-case (randomized) formulations of classical computational problems. Her joint work with Jian Ding and Allan Sly establishes the satisfiability threshold of random k-SAT for large k, and relatedly the independence ratio of random regular graphs of large degree. Both are long-standing open problems where heuristic methods of statistical physics yield detailed conjectures, but few rigorous techniques exist. More recently she has been investigating phase transitions of dense graph models.

Sun completed a BA in mathematics and an MA in statistics at Harvard in 2009, and an MASt in mathematics at Cambridge in 2010. She received her PhD in statistics from Stanford University in 2014 under the supervision of Amir Dembo. She held a Schramm fellowship at Microsoft New England and MIT Mathematics in 2014-2015 and a Simons postdoctoral fellowship at the University of California at Berkeley in 2016, and joined the Berkeley Department of Statistics as an assistant professor in 2016. She returns to the MIT Department of Mathematics as an associate professor with tenure.

Alison Wendlandt focuses on the development of selective, catalytic reactions using the tools of organic and organometallic synthesis and physical organic chemistry. Mechanistic study plays a central role in the development of these new transformations. Her projects involve the design of new catalysts and catalytic transformations, identification of important applications for selective catalytic processes, and elucidation of new mechanistic principles to expand powerful existing catalytic reaction manifolds.

Wendlandt received a BS in chemistry and biological chemistry from the University of Chicago in 2007, an MS in chemistry from Yale University in 2009, and a PhD in chemistry from the University of Wisconsin at Madison in 2015 under the direction of Shannon S. Stahl. Following an NIH Ruth L. Kirschstein Postdoctoral Fellowship at Harvard University, Wendlandt joins the MIT faculty in the Department of Chemistry as an assistant professor.

Chenyang Xu specializes in higher-dimensional algebraic geometry, an area that involves classifying algebraic varieties, primarily through the minimal model program (MMP). MMP was introduced by Fields Medalist S. Mori in the early 1980s to make advances in higher dimensional birational geometry. The MMP was further developed by Hacon and McKernan in the mid-2000s, so that the MMP could be applied to other questions. Collaborating with Hacon, Xu expanded the MMP to varieties of certain conditions, such as those of characteristic p, and, with Hacon and McKernan, proved a fundamental conjecture on the MMP, generating a great deal of follow-up activity. In collaboration with Chi Li, Xu proved a conjecture of Gang Tian concerning higher-dimensional Fano varieties, a significant achievement. In a series of papers with different collaborators, he successfully applied MMP to singularities.

Xu received his BS in 2002 and MS in 2004 in mathematics from Peking University, and completed his PhD at Princeton University under János Kollár in 2008. He was a CLE Moore Instructor at MIT from 2008 to 2011, and was subsequently appointed assistant professor at the University of Utah. He returned to Peking University as a research fellow at the Beijing International Center of Mathematical Research in 2012, and was promoted to professor in 2013. Xu joins the MIT faculty as a full professor in the Department of Mathematics.

Zhiwei Yun’s research is at the crossroads between algebraic geometry, number theory, and representation theory. He studies geometric structures aiming at solving problems in representation theory and number theory, especially those in the Langlands program. While he was a CLE Moore Instructor at MIT, he started to develop the theory of rigid automorphic forms, and used it to answer an open question of J-P Serre on motives, which also led to a major result on the inverse Galois problem in number theory. More recently, in joint work with Wei Zhang, he gave a geometric interpretation of higher derivatives of automorphic L-functions in terms of intersection numbers, which sheds new light on the geometric analogue of the Birch and Swinnerton-Dyer conjecture.

Yun earned his BS at Peking University in 2004, after which he completed his PhD at Princeton University in 2009 under the direction of Robert MacPherson. After appointments at the Institute for Advanced Study and as a CLE Moore Instructor at MIT, he held faculty appointments at Stanford and Yale. He returned to the MIT Department of Mathematics as a full professor in the spring of 2018.

Feng Zhang wins 2018 Keio Medical Science Prize

Molecular biologist Feng Zhang has been named a winner of the prestigious Keio Medical Science Prize. He is being recognized for the groundbreaking development of CRISPR-Cas9-mediated genome engineering in cells and its application for medical science.

Zhang is the James and Patricia Poitras Professor of Neuroscience at MIT, an associate professor in the departments of Brain and Cognitive Sciences and Biological Engineering, a Howard Hughes Medical Institute investigator, an investigator at the McGovern Institute for Brain Research, and a core member of the Broad Institute of MIT and Harvard.

“We are delighted that Feng is now a Keio Prize laureate,” says McGovern Institute Director Robert Desimone. “This truly recognizes the remarkable achievements that he has made at such a young age.”

Zhang is a molecular biologist who has contributed to the development of multiple molecular tools to accelerate the understanding of human disease and create new therapeutic modalities. During his graduate work, Zhang contributed to the development of optogenetics, a system for activating neurons using light, which has advanced our understanding of brain connectivity.

Zhang went on to pioneer the deployment of the microbial CRISPR-Cas9 system for genome engineering in eukaryotic cells. The ease and specificity of the system have led to its widespread use across the life sciences, and it has groundbreaking implications for disease therapeutics, biotechnology, and agriculture. He has continued to mine bacterial CRISPR systems for additional enzymes with useful properties, leading to the discovery of Cas13, which targets RNA rather than DNA, and may potentially be a way to treat genetic diseases without altering the genome. Zhang has also developed a molecular detection system called SHERLOCK, based on the Cas13 family, which can sense trace amounts of genetic material, including viruses and alterations in genes that might be linked to cancer.

“I am tremendously honored to have our work recognized by the Keio Medical Prize,” says Zhang. “It is an inspiration to us to continue our work to improve human health.”

Now in its 23rd year, the Keio Medical Science Prize is awarded to a maximum of two scientists each year. The other 2018 laureate, Masashi Yanagisawa, director of the International Institute for Integrative Sleep Medicine at the University of Tsukuba, is being recognized for his seminal work on sleep control mechanisms.

The prize is offered by Keio University, and the selection committee specifically looks for laureates who have made an outstanding contribution to medicine or the life sciences. The prize was initially endowed by Mitsunada Sakaguchi in 1994, with the express condition that it be used to commend outstanding science, promote advances in medicine and the life sciences, expand researcher networks, and contribute to the wellbeing of humankind. The winners receive a certificate of merit, a medal, and a monetary award of approximately $90,000.

The prize ceremony will be held on Dec. 18 at Keio University in Tokyo.

Mark Harnett named Vallee Foundation Scholar

The Bert L. and N. Kuggie Vallee Foundation has named McGovern Institute investigator Mark Harnett a 2018 Vallee Scholar. The Vallee Scholars Program recognizes original, innovative, and pioneering work by early career scientists at a critical juncture in their careers and provides $300,000 in discretionary funds to be spent over four years for basic biomedical research. Harnett is among five researchers named to this year’s Vallee Scholars Program.

Harnett, who is also the Fred and Carole Middleton Career Development Assistant Professor in the Department of Brain and Cognitive Sciences, is being recognized for his work exploring how the biophysical features of neurons give rise to the computational power of the brain. By exploiting new technologies and approaches at the interface of biophysics and systems neuroscience, research in the Harnett lab aims to provide a new understanding of the biology underlying how mammalian brains learn. This may open new areas of research into brain disorders characterized by atypical learning and memory (such as dementia and schizophrenia) and may also have important implications for designing new, brain-inspired artificial neural networks.

The Vallee Foundation was established in 1996 by Bert and Kuggie Vallee to foster originality, creativity, and leadership within biomedical scientific research and medical education. The foundation’s goal to fund originality, innovation, and pioneering work “recognizes the future promise of these scientists who are dedicated to understanding fundamental biological processes.” Harnett joins a list of 24 Vallee Scholars, including McGovern investigator Feng Zhang, who have been appointed to the program since its inception in 2013.

New sensors track dopamine in the brain for more than a year

Dopamine, a signaling molecule used throughout the brain, plays a major role in regulating our mood, as well as controlling movement. Many disorders, including Parkinson’s disease, depression, and schizophrenia, are linked to dopamine deficiencies.

MIT neuroscientists have now devised a way to measure dopamine in the brain for more than a year, which they believe will help them to learn much more about its role in both healthy and diseased brains.

“Despite all that is known about dopamine as a crucial signaling molecule in the brain, implicated in neurologic and neuropsychiatric conditions as well as our ability to learn, it has been impossible to monitor changes in the online release of dopamine over time periods long enough to relate these to clinical conditions,” says Ann Graybiel, an MIT Institute Professor, a member of MIT’s McGovern Institute for Brain Research, and one of the senior authors of the study.

Michael Cima, the David H. Koch Professor of Engineering in the Department of Materials Science and Engineering and a member of MIT’s Koch Institute for Integrative Cancer Research, and Robert Langer, the David H. Koch Institute Professor and a member of the Koch Institute, are also senior authors of the study. MIT postdoc Helen Schwerdt is the lead author of the paper, which appears in the Sept. 12 issue of Communications Biology.

Long-term sensing

Dopamine is one of many neurotransmitters that neurons in the brain use to communicate with each other. Traditional systems for measuring dopamine — carbon electrodes with a shaft diameter of about 100 microns — can only be used reliably for about a day because they produce scar tissue that interferes with the electrodes’ ability to interact with dopamine.

In 2015, the MIT team demonstrated that tiny microfabricated sensors could be used to measure dopamine levels in a part of the brain called the striatum, which contains dopamine-producing cells that are critical for habit formation and reward-reinforced learning.

Because these probes are so small (about 10 microns in diameter), the researchers could implant up to 16 of them to measure dopamine levels in different parts of the striatum. In the new study, the researchers wanted to test whether they could use these sensors for long-term dopamine tracking.

“Our fundamental goal from the very beginning was to make the sensors work over a long period of time and produce accurate readings from day to day,” Schwerdt says. “This is necessary if you want to understand how these signals mediate specific diseases or conditions.”

To develop a sensor that can be accurate over long periods of time, the researchers had to make sure that it would not provoke an immune reaction, to avoid the scar tissue that interferes with the accuracy of the readings.

The MIT team found that their tiny sensors were nearly invisible to the immune system, even over extended periods of time. After the sensors were implanted, populations of microglia (immune cells that respond to short-term damage) and of astrocytes (immune cells that respond over longer periods) were the same as those in brain tissue that did not have the probes inserted.

In this study, the researchers implanted three to five sensors per animal, about 5 millimeters deep, in the striatum. They took readings every few weeks, after stimulating the brainstem to evoke dopamine release in the striatum. They found that the measurements remained consistent for up to 393 days.

“This is the first time that anyone’s shown that these sensors work for more than a few months. That gives us a lot of confidence that these kinds of sensors might be feasible for human use someday,” Schwerdt says.

Paul Glimcher, a professor of physiology and neuroscience at New York University, says the new sensors should enable more researchers to perform long-term studies of dopamine, which is essential for studying phenomena such as learning, which occurs over long time periods.

“This is a really solid engineering accomplishment that moves the field forward,” says Glimcher, who was not involved in the research. “This dramatically improves the technology in a way that makes it accessible to a lot of labs.”

Monitoring Parkinson’s

If developed for use in humans, these sensors could be useful for monitoring Parkinson’s patients who receive deep brain stimulation, the researchers say. This treatment involves implanting an electrode that delivers electrical impulses to a structure deep within the brain. Using a sensor to monitor dopamine levels could help doctors deliver the stimulation more selectively, only when it is needed.

The researchers are now looking into adapting the sensors to measure other neurotransmitters in the brain, and to measure electrical signals, which can also be disrupted in Parkinson’s and other diseases.

“Understanding those relationships between chemical and electrical activity will be really important to understanding all of the issues that you see in Parkinson’s,” Schwerdt says.

The research was funded by the National Institute of Biomedical Imaging and Bioengineering, the National Institute of Neurological Disorders and Stroke, the Army Research Office, the Saks Kavanaugh Foundation, the Nancy Lurie Marks Family Foundation, and Dr. Tenley Albright.

Can the brain recover after paralysis?

Why is it that motor skills can be gained after paralysis but vision cannot recover in similar ways? – Ajay Puppala

Thank you so much for this very important question, Ajay. To answer, I asked two local experts in the field, Pawan Sinha who runs the vision research lab at MIT, and Xavier Guell, a postdoc in John Gabrieli’s lab at the McGovern Institute who also works in the ataxia unit at Massachusetts General Hospital.

“Simply stated, the prospects of improvement, whether in movement or in vision, depend on the cause of the impairment,” explains Sinha. “Often, the cause of paralysis is stroke, a reduction in blood supply to a localized part of the brain, resulting in tissue damage. Fortunately, the brain has some ability to rewire itself, allowing regions near the damaged one to take on some of the lost functionality. This rewiring manifests itself as improvements in movement abilities after an initial period of paralysis. However, if the paralysis is due to spinal-cord transection (as was the case following Christopher Reeve’s tragic injury in 1995), then prospects for improvement are diminished.”

“Turning to the domain of sight,” continues Sinha, “stroke can indeed cause vision loss. As with movement control, these losses can dissipate over time as the cortex reorganizes via rewiring. However, if the blindness is due to optic nerve transection, then the condition is likely to be permanent. It is also worth noting that many cases of blindness are due to problems in the eye itself. These include corneal opacities, cataracts and retinal damage. Some of these conditions (corneal opacities and cataracts) are eminently treatable while others (typically those associated with the retina and optic nerve) still pose challenges to medical science.”

You might be wondering what makes lesions in the eye and spinal cord hard to overcome. Some systems (the blood, skin, and intestine are good examples) contain a continuously active stem cell population in adults. These cells can divide and replenish lost cells in damaged regions. While “adult-born” neurons can arise, elements of a degenerating or damaged retina, optic nerve, or spinal cord cannot be replaced as easily as lost skin cells can. There is currently a very active effort in the stem cell community to understand how we might be able to replace neurons in all cases of neuronal degeneration and injury using stem cell technologies. To further explore lesions that specifically affect the brain, and how these might lead to a different outcome in the two systems, I turned to Xavier Guell.

“It might be true that visual deficits in the population are less likely to recover when compared to motor deficits in the population. However, the scientific literature seems to indicate that our body has a similar capacity to recover from both motor and visual injuries,” explains Guell. “The reason for this apparent contradiction is that visual lesions are usually not in the cerebral cortex (but instead in other places such as the retina or the lens), while motor lesions in the cerebral cortex are more common. In fact, a large proportion of people who suffer a stroke will have damage in the motor aspects of the cerebral cortex, but no damage in the visual aspects of the cerebral cortex. Crucially, recovery of neurological functions is usually seen when lesions are in the cerebral cortex or in other parts of the cerebrum or cerebellum. In this way, while our body has a similar capacity to recover from both motor and visual injuries, motor injuries are more frequently located in the parts of our body that have a better capacity to regain function (specifically, the cerebral cortex).”

In short, some cells cannot be replaced in either system, but stem cell research provides hope there. That said, there is remarkable plasticity in the brain, so when the lesion is located there, we can see recovery with training.

Do you have a question for The Brain? Ask it here.

Why do I talk with my hands?

This is a very interesting question sent to us by Gabriel Castellanos (thank you!). Many of us gesture with our hands when we speak (and even when we do not) as a form of non-verbal communication. How hand gestures are coordinated with speech remains unclear. In part, this is because natural hand gestures are difficult to monitor in fMRI-based brain imaging studies, where participants must remain still.

“Performing hand movements when stuck in the bore of a scanner is really tough beyond simple signing and keypresses,” explains McGovern Principal Research Scientist Satrajit Ghosh. “Thus ecological experiments of co-speech with motor gestures have not been carried out in the context of a magnetic resonance scanner, and therefore little is known about language and motor integration within this context.”

There have been studies that use proxies such as co-verbal pushing of buttons, as well as studies using other imaging techniques, such as electroencephalography (EEG) and magnetoencephalography (MEG), to monitor brain activity during gesturing, but it would be difficult to precisely localize the brain regions involved in natural co-speech hand gesticulation using such approaches. Another possible avenue for addressing this question would be to look at patients with conditions that implicate particular brain regions in coordinating hand gestures, but such approaches have not yet pinpointed a pathway for coordinating speech and hand movements.

That said, co-speech hand gesturing plays an important role in communication. “More generally, co-speech hand gestures are seen as a mechanism for emphasis and disambiguation of the semantics of a sentence, in addition to prosody and facial cues,” says Ghosh. “In fact, one may consider the act of speaking as one large orchestral score involving vocal tract movement, respiration, voicing, facial expression, hand gestures, and even whole body postures acting as different instruments coordinated dynamically by the brain. Based on our current understanding of language production, co-speech or gestural events would likely be planned at a higher level than articulation and therefore would likely activate inferior frontal gyrus, SMA, and others.”

How this orchestra is coordinated and conducted thus remains to be unraveled, but certainly the question is one that gets to the heart of human social interactions.

Do you have a question for The Brain? Ask it here.

Constructing the striatum

The striatum, the largest nucleus of the basal ganglia in the vertebrate brain, was historically thought to be a homogeneous group of cells. This view was overturned in a classic series of papers from MIT Institute Professor Ann Graybiel. In previous work, Graybiel, who is also an investigator at MIT’s McGovern Institute, found that the striatum is highly organized, both structurally and functionally, as well as in its connectivity. Graybiel has now collaborated with Z. Josh Huang’s lab at Cold Spring Harbor Laboratory to map the developmental lineage of the cells that give rise to this complex architecture. The authors found that different functions of the striatum, such as execution of actions as opposed to evaluation of outcomes, are defined early on as part of the blueprint that constructs this brain region, rather than being sculpted by a later mechanism.

Graybiel and colleagues tracked what happens early in development by driving cell-specific fluorescent markers that allowed them to follow the progenitors that give rise to cells in the striatum. The striatum is known, thanks to Graybiel’s early work, to be organized into compartments called striosomes and the matrix, which have distinct connections to other brain regions. Broadly speaking, striosomes are linked to value-based decision-making and reinforcement-based behaviors, while the matrix has been linked to action execution. These regions are further subdivided into direct and indirect pathways. Direct pathway neurons release inhibition in other regions of the basal ganglia and thus actively promote action. Neurons projecting into the indirect pathway, in contrast, inhibit “unwanted” actions that are not part of the current “cortical plan.” Based on their tracking, Graybiel and colleagues were indeed able to build a “fate map” that told them when the cells that build these different regions of the striatum commit to a functional path during development.

“It was already well known that individual neurons have lineages that can be traced back to early development, and many such lineages are now being traced,” says Graybiel. “What is so striking in what we have found with the Huang lab is that the earliest specification of lineages we find—at least with the markers that we have used—corresponds to what later become the two major neurochemically distinct compartments of the striatum, rather than many other divisions that might have been specified first. If this is so, then the fundamental developmental ground plan of the striatum is expressed later by these two distinct compartments of the striatum.”

Building the striatum turns out to be a symphony of organization embedded in the cells of the lateral ganglionic eminence, the developmental source of the cells that will end up in the striatum. Progenitors made early in development are somewhat committed: they can only generate spiny projection neurons (SPNs) that are striosomal. Following this in time, cells that will give rise to matrix SPNs appear. A second mechanism is then laid over this initial ground plan, switched on in both striosomal and matrix neurons, that independently gives rise to neurons destined for the direct as opposed to the indirect pathway. This latter specification of direct versus indirect pathway neurons is less rigid, but there is an overarching tendency for neurons expressing a particular type of receptor for the neurotransmitter dopamine to appear earlier in developmental time. In short, progenitors move through an orchestrated process in which they first generate spiny projection neurons that can sit in any area of the striatum, then the ultimate fate of cells becomes restricted at the level of striosome or matrix, and finally choices are made in both compartments regarding direct-indirect pathway circuitry. Remarkably, these results suggest that even at the very earliest development of the striatum, its ultimate organization is already laid down in a way that distinguishes value-related circuits from movement-related circuits.

“What is thrilling,” says Graybiel, “is that there are lineage progressions—the step-by-step laying out of the brain’s organization—that turn out to match the striosome-matrix architecture of the striatum, which was not even known to exist 40 years ago!”

The striatum is a hub regulating movement, emotion, motivation, evaluation, and learning, and it is linked to disorders such as Parkinson’s disease and persistent negative valuations. Understanding its construction therefore has important implications, perhaps even, one day, for rebuilding a striatum affected by neurodegeneration. The findings also have broader implications. Consider the worm C. elegans. The complete lineage of the cells that make up this organism is known, including where each neuron comes from, what it connects to, and its function and phenotype. There is a clear relationship between lineage and function in this relatively simple organism with its highly stereotyped nervous system. Graybiel’s work suggests that, in the big picture, early development in the forebrain also provides a game plan. In this case, however, the groundwork underpins circuits that underlie extremely complex behaviors, those that come to support the volitional and habitual behaviors that make up part of who we are as individuals.

 

A social side to face recognition by infants

When interacting with an infant, you have likely noticed that the human face holds a special draw from a very young age. But how does this relate to face recognition by adults, which is known to map to specific cortical regions? Rebecca Saxe, Associate Investigator at MIT’s McGovern Institute and John W. Jarve (1978) Professor in Brain and Cognitive Sciences, and her team have now considered two emerging theories of early face recognition and come up with a third proposition: when a baby looks at a face, the response is also social, and the resulting contingent interactions are key to the subsequent development of organized face recognition areas in the brain.

By adulthood, you are highly skilled at recognizing and responding to faces, and this skill correlates with activation of a number of face-selective regions of the cortex. This ability is essential for reading the identities and intentions of other people, and selective categorical representation of faces in cortical areas is a feature shared by our primate cousins. While brain imaging tells us where face-responsive regions are in the adult cortex, how and when they emerge remains unclear.

In 2017, functional magnetic resonance imaging (fMRI) studies of human and macaque infants provided the first glimpse of how the youngest brains respond to faces. The scans showed that in 4- to 6-month-old human infants and equivalently aged macaques, regions known to be face-responsive in the adult brain are activated by movies of faces, but not in a selective fashion. Essentially, the fMRI data show that these specific cortical regions are activated by faces, but a chair will do just as well. With further experience of faces over time, the same cortical regions in macaques became face-selective, no longer responding to other objects.

There are two prevailing ideas in the field about how face preference, and eventually selectivity, arise through experience. Saxe and her team consider these ideas in turn in an opinion piece in the September issue of Trends in Cognitive Sciences, and then propose a third, new theory. The first idea centers on the way we dote over babies, placing our own faces right in their field of vision. The notion is that such frequent exposure to low-level face features (curvilinear shape, etc.) will eventually lead to co-activation of neurons that are responsive to the different aspects of facial features. If the neurons stimulated by these different features are co-activated, and there is a brain region where these neurons are found together, that area will be stimulated, eventually reinforcing the emergence of a face category-specific area.

A second idea is that babies already have an innate “face template,” just as a duckling or chick already knows to follow its mother after hatching. So far there is little evidence for the second proposition, and the first fails to explain why babies seek out a face, rather than passively look upon and eventually “learn” the overlapping features that represent “face.”

Saxe, along with postdoc Lindsey Powell and graduate student Heather Kosakowski, instead now argue that the role a face plays in positive social interactions comes to drive organization of face-selective cortical regions. Taking the next step, the researchers propose that a prime suspect for linking social interactions to the development of face-selective areas is the medial prefrontal cortex (mPFC), a region linked to social cognition and behavior.

“I was asked to give a talk at a conference, and I wanted to talk about both the development of cortical face areas and the social role of the medial prefrontal cortex in young infants,” says Saxe. “I was puzzling over whether these two ideas were related, when I suddenly saw that they could be very fundamentally related.”

The authors argue that this relationship is supported by existing data showing that babies prefer dynamic faces and are more interested in faces that engage in back-and-forth interaction. Regions of the mPFC are also known to be activated during social interactions and during exposure to dynamic faces in infants.

Powell is now using functional near infrared spectroscopy (fNIRS), a brain imaging technique that measures changes in blood flow to the brain, to test this hypothesis in infants. “This will allow us to see whether mPFC responses to social cues are linked to the development of face-responsive areas.”

In Daniel Deronda, the novel by George Eliot, the protagonist says “I think my life began with waking up and loving my mother’s face: it was so near to me, and her arms were round me, and she sang to me.” Perhaps this type of positively valenced social interaction, reinforced by the mPFC, is exactly what leads to the particular importance of faces and their selective categorical representation in the human brain. Further testing of the hypothesis proposed by Powell, Kosakowski, and Saxe will tell.