School of Science appoints 12 faculty members to named professorships

The School of Science has awarded chaired appointments to 12 faculty members. These faculty, who are members of the departments of Biology; Brain and Cognitive Sciences; Chemistry; Earth, Atmospheric and Planetary Sciences; and Physics, receive additional support to pursue their research and develop their careers.

Kristin Bergmann, an assistant professor in the Department of Earth, Atmospheric and Planetary Sciences, has been named a D. Reid Weedon, Jr. ’41 Career Development Professor. This is a three-year professorship. Bergmann’s research integrates sedimentology and stratigraphy, geochemistry, and geobiology to reveal aspects of Earth’s ancient environments. She aims to better constrain Earth’s climate record and carbon cycle during the evolution of early eukaryotes, including animals. Most of her efforts involve reconstructing the details of carbonate rocks, which store much of Earth’s carbon and thus are an important component of Earth’s climate system over long timescales.

Joseph Checkelsky is an associate professor in the Department of Physics and has been named a Mitsui Career Development Professor in Contemporary Technology, an appointment he will hold until 2023. His research in quantum materials relies on experimental methods at the intersection of physics, chemistry, and nanoscience. This work is aimed toward synthesizing new crystalline systems that manifest their quantum nature on a macroscopic scale. He aims to realize and study these crystalline systems, which can then serve as platforms for next-generation quantum sensors, quantum communication, and quantum computers.

Mircea Dincă, appointed a W. M. Keck Professor of Energy, is a professor in the Department of Chemistry. This appointment has a five-year term. Dincă’s research falls largely under the umbrella of energy storage and conversion. His applied work involves creating new organic and inorganic materials that can improve the efficiency of energy collection, storage, and generation while decreasing environmental impact. Recently, he has developed materials for efficient air-conditioning units and has been collaborating with Automobili Lamborghini on electric vehicle design.

Matthew Evans has been appointed to a five-year MathWorks Physics Professorship. Evans, a professor in the Department of Physics, focuses on the instruments used to detect gravitational waves. A member of MIT’s Laser Interferometer Gravitational-Wave Observatory (LIGO) research group, he engineers ways to fine-tune the detection capabilities of the massive ground-based facilities that are being used to identify collisions between black holes and stars in deep space. By removing thermal and quantum limitations, he can increase the sensitivity of the device’s measurements and, thus, its scope of exploration. Evans is also a member of the MIT Kavli Institute for Astrophysics and Space Research.

Evelina Fedorenko is an associate professor in the Department of Brain and Cognitive Sciences and has been named a Frederick A. (1971) and Carole J. Middleton Career Development Professor of Neuroscience. Studying how the brain processes language, Fedorenko uses behavioral studies, brain imaging, neurosurgical recording and stimulation, and computational modeling to better grasp language comprehension and production. In her efforts to elucidate how and what parts of the brain support language processing, she evaluates both typical and atypical brains. Fedorenko is also a member of the McGovern Institute for Brain Research.

Ankur Jain is an assistant professor in the Department of Biology and now a Thomas D. and Virginia W. Cabot Career Development Professor. He will hold this career development appointment for a term of three years. Jain studies how cells organize their contents. Within a cell, there are numerous compartments that form due to weak interactions between biomolecules and exist without an enclosing membrane. By analyzing the biochemistry and biophysics of these compartments, Jain deduces the principles of cellular organization and its dysfunction in human disease. Jain is also a member of the Whitehead Institute for Biomedical Research.

Pulin Li, an assistant professor in the Department of Biology and the Eugene Bell Career Development Professor of Tissue Engineering for the next three years, explores the genetic circuitry involved in building and maintaining a tissue. In particular, she investigates how communication circuits between individual cells scale up into multicellular behavior, using both natural and synthetically generated tissues and combining the fields of synthetic and systems biology, biophysics, and bioengineering. A stronger understanding of genetic circuitry could allow for progress in medicine involving embryonic development and tissue engineering. Li is a member of the Whitehead Institute for Biomedical Research.

Elizabeth Nolan, appointed an Ivan R. Cottrell Professor of Immunology, investigates innate immunity and infectious disease. The Department of Chemistry professor, who will hold this chaired professorship for five years, combines experimental chemistry and microbiology to learn about human immune responses to, and interactions with, microbial pathogens. This research includes elucidating the fight between host and pathogen for essential metal nutrients and the functions of host-defense peptides and proteins during infection. With this knowledge, Nolan contributes to fundamental understanding of the host’s ability to combat microbial infection, which may provide new strategies to treat infectious disease.

Leigh “Wiki” Royden is now a Cecil and Ida Green Professor of Geology and Geophysics. The five-year appointment supports her research, as a professor in the Department of Earth, Atmospheric and Planetary Sciences, on the large-scale dynamics and tectonics of the Earth. Fundamental to geoscience, the tectonics of regional and global systems are closely linked, particularly through the subduction of plates into the mantle. Royden’s research adds to our understanding of the structure and dynamics of the crust and the upper portion of the mantle through observation, theory, and modeling. This progress has profound implications for global natural events, like mountain building and continental break-up.

Phiala Shanahan has been appointed a Class of 1957 Career Development Professor for three years. Shanahan is an assistant professor in the Department of Physics, where she specializes in theoretical nuclear physics. Shanahan’s research uses supercomputers to provide insight into the structure of protons and nuclei in terms of their quark and gluon constituents. Her work also informs searches for new physics beyond the current Standard Model, such as dark matter. She is a member of the MIT Center for Theoretical Physics.

Xiao Wang, an assistant professor in the Department of Chemistry, has also been named a Thomas D. and Virginia W. Cabot Career Development Professor. Wang designs and produces novel methods and tools for analyzing the brain. Integrating chemistry, biophysics, and genomics, her work provides higher-resolution imaging and sampling to explain how the brain functions across molecular to system-wide scales. Wang is also a core member of the Broad Institute of MIT and Harvard.

Bin Zhang has been appointed a Pfizer Inc-Gerald Laubach Career Development Professor for a three-year term. Zhang, an assistant professor in the Department of Chemistry, hopes to connect the human genome’s sequence with its various functions across time and spatial scales. By developing theoretical and computational approaches to categorize information about the dynamics, organization, and complexity of the genome, he aims to build a quantitative, predictive modeling tool. This tool could even produce 3D representations of details happening at a microscopic level within the body.

How general anesthesia reduces pain

General anesthesia is medication that suppresses pain and renders patients unconscious during surgery, but whether pain suppression is simply a side effect of loss of consciousness has been unclear. Fan Wang and colleagues have now identified the circuits linked to pain suppression under anesthesia in mouse models, showing that this effect is separable from the unconscious state itself.

“Existing literature suggests that the brain may contain a switch that can turn off pain perception,” explains Fan Wang, a professor at Duke University and lead author of the study. “I had always wanted to find this switch, and it occurred to me that general anesthetics may activate this switch to produce analgesia.”

Wang, who will join the McGovern Institute in January 2021, set out to test this idea with her student, Thuy Hua, and postdoc, Bin Chen.

Pain suppressor

Loss of pain, or analgesia, is an important property of anesthetics that helps to make surgical and invasive medical procedures humane and bearable. In spite of their long use in the medical world, there is still very little understanding of how anesthetics work. It has generally been assumed that analgesia is simply a side effect of loss of consciousness, but several recent observations have called this idea into question, suggesting that changes in consciousness might be separable from pain suppression.

A key clue that analgesia is separable from general anesthesia comes from the accounts of patients who regain consciousness during surgery. After surgery, these patients can recount conversations between staff or events that occurred in the operating room, despite not having felt any pain. In addition, some general anesthetics, such as ketamine, can be deployed at low concentrations for pain suppression without loss of consciousness.

Following up on these leads, Wang and colleagues set out to uncover which neural circuits might be involved in suppressing pain during exposure to general anesthetics. Using CANE, a procedure developed by Wang that can detect which neurons activate in response to an event, Wang discovered a new population of GABAergic neurons activated by general anesthetic in the mouse central amygdala.

These neurons become activated in response to different anesthetics, including ketamine, dexmedetomidine, and isoflurane. Using optogenetics to manipulate the activity state of these neurons, Wang and her lab found that switching the neurons on or off led to marked changes in behavioral responses to painful stimuli.

“The first time we used optogenetics to turn on these cells, a mouse that was in the middle of taking care of an injury simply stopped and started walking around with no sign of pain,” Wang explains.

Specifically, activating these cells blocked pain in multiple models and tests, whereas inhibiting them made mice averse even to gentle touch — suggesting that they are involved in a newly uncovered central pain circuit.

The study has implications for both anesthesia and pain. It shows that general anesthetics have complex, multi-faceted effects and that the brain may contain a central pain suppression system.

“We want to figure out how diverse general anesthetics activate these neurons,” explains Wang. “That way we can find compounds that can specifically activate these pain-suppressing neurons without sedation. We’re now also testing whether placebo analgesia works by activating these same central neurons.”

The study also has implications for addiction, as it may point to an alternative system for central pain suppression that could be targeted by drugs without the devastating side effects of opioids.

Fan Wang joins the McGovern Institute

The McGovern Institute is pleased to announce that Fan Wang, currently a professor at Duke University, will be joining its team of investigators in 2021. Wang is well-known for her work on sensory perception, pain, and behavior. She takes a broad and very practical approach to these questions, knowing that sensory perception has wide implications for biomedicine when it comes to pain management, addiction, anesthesia, and hypersensitivity.

“McGovern is a dream place for doing innovative and transformative neuroscience.” – Fan Wang

“I am so thrilled that Fan is coming to the McGovern Institute,” says Robert Desimone, director of the institute and the Doris and Don Berkey Professor of Neuroscience at MIT. “I’ve followed her work for a number of years, and she is making inroads into questions that are relevant to a number of societal problems, such as how we can turn off the perception of chronic pain.”

Wang brings with her a range of techniques developed in her lab, including CANE, which precisely highlights neurons that become activated in response to a stimulus. CANE has highlighted new neuronal subtypes in long-studied brain regions such as the amygdala, and recently revealed previously undescribed neurons in the lateral parabrachial nucleus involved in pain processing.

“I am so excited to join the McGovern Institute,” says Wang. “It is a dream place for doing innovative and transformative neuroscience. McGovern researchers are known for using the most cutting-edge, multi-disciplinary technologies to understand how the brain works. I can’t wait to join the team.”

Wang earned her PhD in 1998 with Richard Axel at Columbia University, subsequently conducting postdoctoral research at Stanford University with Mark Tessier-Lavigne. Wang joined Duke University as a professor in the Department of Neurobiology in 2003, and was later appointed the Morris N. Broad Distinguished Professor of Neurobiology at Duke University School of Medicine. Wang will join the McGovern Institute as an investigator in January 2021.

National Science Foundation announces MIT-led Institute for Artificial Intelligence and Fundamental Interactions

The U.S. National Science Foundation (NSF) announced today an investment of more than $100 million to establish five artificial intelligence (AI) institutes, each receiving roughly $20 million over five years. One of these, the NSF AI Institute for Artificial Intelligence and Fundamental Interactions (IAIFI), will be led by MIT’s Laboratory for Nuclear Science (LNS) and become the intellectual home of more than 25 physics and AI senior researchers at MIT and Harvard, Northeastern, and Tufts universities.

By merging research in physics and AI, the IAIFI seeks to tackle some of the most challenging problems in physics, including precision calculations of the structure of matter, gravitational-wave detection of merging black holes, and the extraction of new physical laws from noisy data.

“The goal of the IAIFI is to develop the next generation of AI technologies, based on the transformative idea that artificial intelligence can directly incorporate physics intelligence,” says Jesse Thaler, an associate professor of physics at MIT, LNS researcher, and IAIFI director.  “By fusing the ‘deep learning’ revolution with the time-tested strategies of ‘deep thinking’ in physics, we aim to gain a deeper understanding of our universe and of the principles underlying intelligence.”

IAIFI researchers say their approach will enable making groundbreaking physics discoveries, and advance AI more generally, through the development of novel AI approaches that incorporate first principles from fundamental physics.

“Invoking the simple principle of translational symmetry — which in nature gives rise to conservation of momentum — led to dramatic improvements in image recognition,” says Mike Williams, an associate professor of physics at MIT, LNS researcher, and IAIFI deputy director. “We believe incorporating more complex physics principles will revolutionize how AI is used to study fundamental interactions, while simultaneously advancing the foundations of AI.”
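The symmetry Williams mentions is what convolutional networks exploit: a convolution is translation-equivariant, so a feature learned at one image position is recognized at every position. A minimal numpy sketch of that property (a toy 1D signal and kernel, not anything from the IAIFI's work):

```python
import numpy as np

def conv1d(signal, kernel):
    """'Valid' 1D cross-correlation: slide the kernel across the signal."""
    n = len(signal) - len(kernel) + 1
    return np.array([signal[i:i + len(kernel)] @ kernel for i in range(n)])

# A toy signal with a single bump, and a simple edge-detecting kernel.
signal = np.zeros(12)
signal[3] = 1.0
kernel = np.array([1.0, -1.0])

out = conv1d(signal, kernel)

# Translate the input by 2 positions...
shifted = np.roll(signal, 2)
out_shifted = conv1d(shifted, kernel)

# ...and the feature map translates by the same 2 positions (equivariance).
# (The wrapped-around boundary values are all zero for this toy signal.)
assert np.allclose(np.roll(out, 2), out_shifted)
print("Convolution commutes with translation.")
```

Baking the symmetry into the architecture, rather than asking the network to learn it from data, is the kind of physics-informed design principle the IAIFI aims to generalize.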

In addition, a core element of the IAIFI mission is to transfer their technologies to the broader AI community.

“Recognizing the critical role of AI, NSF is investing in collaborative research and education hubs, such as the NSF IAIFI anchored at MIT, which will bring together academia, industry, and government to unearth profound discoveries and develop new capabilities,” says NSF Director Sethuraman Panchanathan. “Just as prior NSF investments enabled the breakthroughs that have given rise to today’s AI revolution, the awards being announced today will drive discovery and innovation that will sustain American leadership and competitiveness in AI for decades to come.”

Research in AI and fundamental interactions

Fundamental interactions are described by two pillars of modern physics: at short distances by the Standard Model of particle physics, and at long distances by the Lambda Cold Dark Matter model of Big Bang cosmology. Both models are based on physical first principles such as causality and space-time symmetries.  An abundance of experimental evidence supports these theories, but also exposes where they are incomplete, most pressingly that the Standard Model does not explain the nature of dark matter, which plays an essential role in cosmology.

AI has the potential to help answer these questions and others in physics.

For many physics problems, the governing equations that encode the fundamental physical laws are known. However, undertaking key calculations within these frameworks, as is essential to test our understanding of the universe and guide physics discovery, can be computationally demanding or even intractable. IAIFI researchers are developing AI for such first-principles theory studies, which naturally require AI approaches that rigorously encode physics knowledge.

“My group is developing new provably exact algorithms for theoretical nuclear physics,” says Phiala Shanahan, an assistant professor of physics and LNS researcher at MIT. “Our first-principles approach turns out to have applications in other areas of science and even in robotics, leading to exciting collaborations with industry partners.”

Incorporating physics principles into AI could also have a major impact on many experimental applications, such as designing AI methods that are more easily verifiable. IAIFI researchers are working to enhance the scientific potential of various facilities, including the Large Hadron Collider (LHC) and the Laser Interferometer Gravitational-Wave Observatory (LIGO).

“Gravitational-wave detectors are among the most sensitive instruments on Earth, but the computational systems used to operate them are mostly based on technology from the previous century,” says Principal Research Scientist Lisa Barsotti of the MIT Kavli Institute for Astrophysics and Space Research. “We have only begun to scratch the surface of what can be done with AI; just enough to see that the IAIFI will be a game-changer.”

The unique features of these physics applications also offer compelling research opportunities in AI more broadly. For example, physics-informed architectures and hardware development could lead to advances in the speed of AI algorithms, and work in statistical physics is providing a theoretical foundation for understanding AI dynamics.

“Physics has inspired many time-tested ideas in machine learning: maximizing entropy, Boltzmann machines, and variational inference, to name a few,” says Pulkit Agrawal, an assistant professor of electrical engineering and computer science at MIT, and researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “We believe that close interaction between physics and AI researchers will be the catalyst that leads to the next generation of machine learning algorithms.”

Cultivating early-career talent

AI technologies are advancing rapidly, making it both important and challenging to train junior researchers at the intersection of physics and AI. The IAIFI aims to recruit and train a talented and diverse group of early-career researchers, including at the postdoc level through its IAIFI Fellows Program.

“By offering our fellows their choice of research problems, and the chance to focus on cutting-edge challenges in physics and AI, we will prepare many talented young scientists to become future leaders in both academia and industry,” says MIT professor of physics Marin Soljacic of the Research Laboratory of Electronics (RLE).

IAIFI researchers hope these fellows will spark interdisciplinary and multi-investigator collaborations, generate new ideas and approaches, translate physics challenges beyond their native domains, and help develop a common language across disciplines. Applications for the inaugural IAIFI fellows are due in mid-October.

Another related effort spearheaded by Thaler, Williams, and Alexander Rakhlin, an associate professor of brain and cognitive sciences at MIT and researcher in the Institute for Data, Systems, and Society (IDSS), is the development of a new interdisciplinary PhD program in physics, statistics, and data science, a collaborative effort between the Department of Physics and the Statistics and Data Science Center.

“Statistics and data science are among the foundational pillars of AI. Physics joining the interdisciplinary doctoral program will bring forth new ideas and areas of exploration, while fostering a new generation of leaders at the intersection of physics, statistics, and AI,” says Rakhlin.

Education, outreach, and partnerships 

The IAIFI aims to cultivate “human intelligence” by promoting education and outreach. For example, IAIFI members will contribute to establishing a MicroMasters degree program at MIT for students from non-traditional backgrounds.

“We will increase the number of students in both physics and AI from underrepresented groups by providing fellowships for the MicroMasters program,” says Isaac Chuang, professor of physics and electrical engineering, senior associate dean for digital learning, and RLE researcher at MIT. “We also plan on working with undergraduate MIT Summer Research Program students, to introduce them to the tools of physics and AI research that they might not have access to at their home institutions.”

The IAIFI plans to expand its impact via numerous outreach efforts, including a K-12 program in which students are given data from the LHC and LIGO and tasked with rediscovering the Higgs boson and gravitational waves.

“After confirming these recent Nobel Prizes, we can ask the students to find tiny artificial signals embedded in the data using AI and fundamental physics principles,” says assistant professor of physics Phil Harris, an LNS researcher at MIT. “With projects like this, we hope to disseminate knowledge about — and enthusiasm for — physics, AI, and their intersection.”

In addition, the IAIFI will collaborate with industry and government to advance the frontiers of both AI and physics, as well as societal sectors that stand to benefit from AI innovation. IAIFI members already have many active collaborations with industry partners, including DeepMind, Microsoft Research, and Amazon.

“We will tackle two of the greatest mysteries of science: how our universe works and how intelligence works,” says MIT professor of physics Max Tegmark, an MIT Kavli Institute researcher. “Our key strategy is to link them, using physics to improve AI and AI to improve physics. We’re delighted that the NSF is investing the vital seed funding needed to launch this exciting effort.”

Building new connections at MIT and beyond

Leveraging MIT’s culture of collaboration, the IAIFI aims to generate new connections and to strengthen existing ones across MIT and beyond.

Of the 27 current IAIFI senior investigators, 16 are at MIT and members of the LNS, RLE, MIT Kavli Institute, CSAIL, and IDSS. In addition, IAIFI investigators are members of related NSF-supported efforts at MIT, such as the Center for Brains, Minds, and Machines within the McGovern Institute for Brain Research and the MIT-Harvard Center for Ultracold Atoms.

“We expect a lot of creative synergies as we bring physics and computer science together to study AI,” says Bill Freeman, the Thomas and Gerd Perkins Professor of Electrical Engineering and Computer Science and researcher in CSAIL. “I’m excited to work with my physics colleagues on topics that bridge these fields.”

More broadly, the IAIFI aims to make Cambridge, Massachusetts, and the surrounding Boston area a hub for collaborative efforts to advance both physics and AI.

“As we teach in 8.01 and 8.02, part of what makes physics so powerful is that it provides a universal language that can be applied to a wide range of scientific problems,” says Thaler. “Through the IAIFI, we will create a common language that transcends the intellectual borders between physics and AI to facilitate groundbreaking discoveries.”

Face-specific brain area responds to faces even in people born blind

More than 20 years ago, neuroscientist Nancy Kanwisher and others discovered that a small section of the brain located near the base of the skull responds much more strongly to faces than to other objects we see. This area, known as the fusiform face area (FFA), is believed to be specialized for identifying faces.

Now, in a surprising new finding, Kanwisher and her colleagues have shown that this same region also becomes active in people who have been blind since birth, when they touch a three-dimensional model of a face with their hands. The finding suggests that this area does not require visual experience to develop a preference for faces.

“That doesn’t mean that visual input doesn’t play a role in sighted subjects — it probably does,” she says. “What we showed here is that visual input is not necessary to develop this particular patch, in the same location, with the same selectivity for faces. That was pretty astonishing.”

Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience and a member of MIT’s McGovern Institute for Brain Research, is the senior author of the study. N. Apurva Ratan Murty, an MIT postdoc, is the lead author of the study, which appears this week in the Proceedings of the National Academy of Sciences. Other authors of the paper include Santani Teng, a former MIT postdoc; Aude Oliva, a senior research scientist, co-director of the MIT Quest for Intelligence, and MIT director of the MIT-IBM Watson AI Lab; and David Beeler and Anna Mynick, both former lab technicians.

Selective for faces

Studying people who were born blind allowed the researchers to tackle longstanding questions regarding how specialization arises in the brain. In this case, they were specifically investigating face perception, but the same unanswered questions apply to many other aspects of human cognition, Kanwisher says.

“This is part of a broader question that scientists and philosophers have been asking themselves for hundreds of years, about where the structure of the mind and brain comes from,” she says. “To what extent are we products of experience, and to what extent do we have built-in structure? This is a version of that question asking about the particular role of visual experience in constructing the face area.”

The new work builds on a 2017 study from researchers in Belgium. In that study, congenitally blind subjects were scanned with functional magnetic resonance imaging (fMRI) as they listened to a variety of sounds, some related to faces (such as laughing or chewing), and others not. That study found higher responses in the vicinity of the FFA to face-related sounds than to sounds such as a ball bouncing or hands clapping.

In the new study, the MIT team wanted to use tactile experience to measure more directly how the brains of blind people respond to faces. They created a ring of 3D-printed objects that included faces, hands, chairs, and mazes, and rotated them so that the subject could handle each one while in the fMRI scanner.

They began with normally sighted subjects and found that when they handled the 3D objects, a small area that corresponded to the location of the FFA was preferentially active when the subjects touched the faces, compared to when they touched other objects. This activity, which was weaker than the signal produced when sighted subjects looked at faces, was not surprising to see, Kanwisher says.

“We know that people engage in visual imagery, and we know from prior studies that visual imagery can activate the FFA. So the fact that you see the response with touch in a sighted person is not shocking because they’re visually imagining what they’re feeling,” she says.

The researchers then performed the same experiments, using tactile input only, with 15 subjects who reported being blind since birth. To their surprise, they found that the brain showed face-specific activity in the same area as the sighted subjects, at levels similar to when sighted people handled the 3D-printed faces.

“When we saw it in the first few subjects, it was really shocking, because no one had seen individual face-specific activations in the fusiform gyrus in blind subjects previously,” Murty says.

Patterns of connection

The researchers also explored several hypotheses that have been put forward to explain why face-selectivity always seems to develop in the same region of the brain. One prominent hypothesis suggests that the FFA develops face-selectivity because it receives visual input from the fovea (the center of the retina), and we tend to focus on faces at the center of our visual field. However, since this region developed in blind people with no foveal input, the new findings do not support this idea.

Another hypothesis is that the FFA has a natural preference for curved shapes. To test that idea, the researchers performed another set of experiments in which they asked the blind subjects to handle a variety of 3D-printed shapes, including cubes, spheres, and eggs. They found that the FFA did not show any preference for the curved objects over the cube-shaped objects.

The researchers did find evidence for a third hypothesis, which is that face selectivity arises in the FFA because of its connections to other parts of the brain. They were able to measure the FFA’s “connectivity fingerprint” — a measure of the correlation between activity in the FFA and activity in other parts of the brain — in both blind and sighted subjects.

They then used the data from each group to train a computer model to predict the exact location of the brain’s selective response to faces based on the FFA connectivity fingerprint. They found that when the model was trained on data from sighted subjects, it could accurately predict the results in blind subjects, and vice versa. They also found evidence that connections to the frontal and parietal lobes of the brain, which are involved in high-level processing of sensory information, may be the most important in determining the role of the FFA.
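The logic of the analysis above can be illustrated in miniature: give each cortical parcel a "fingerprint" vector of correlations with a handful of seed regions, fit a model mapping fingerprint to face selectivity in one group, and test whether it predicts where selectivity lands in the other group. The following numpy sketch uses entirely synthetic data and invented dimensions; the study's actual pipeline worked on fMRI data with far more sophisticated modeling:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 200 cortical parcels per subject, each described by its
# correlation with 10 seed regions (its "connectivity fingerprint").
# A hidden linear map from fingerprint to face selectivity generates
# the data; everything here is synthetic illustration.
n_parcels, n_seeds = 200, 10
w_true = rng.normal(size=n_seeds)  # hidden fingerprint-to-selectivity map

def simulate_group(n_subjects):
    X = rng.normal(size=(n_subjects, n_parcels, n_seeds))  # fingerprints
    y = X @ w_true + 0.1 * rng.normal(size=(n_subjects, n_parcels))
    return X, y

X_sighted, y_sighted = simulate_group(12)
X_blind, y_blind = simulate_group(15)

# Fit a linear model on the sighted group: selectivity ~ fingerprint.
Xs = X_sighted.reshape(-1, n_seeds)
w_hat, *_ = np.linalg.lstsq(Xs, y_sighted.ravel(), rcond=None)

# Predict each blind subject's selectivity map from fingerprints alone,
# then check how well predicted and actual maps agree.
pred = X_blind @ w_hat
r = np.corrcoef(pred.ravel(), y_blind.ravel())[0, 1]
print(f"Predicted vs. actual selectivity correlation (blind group): r = {r:.3f}")
```

The cross-group test is the key move: training on one population and predicting the other shows that the fingerprint-to-selectivity relationship, not group-specific visual experience, carries the information.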

“It’s suggestive of this very interesting story that the brain wires itself up in development not just by taking perceptual information and doing statistics on the input and allocating patches of brain, according to some kind of broadly agnostic statistical procedure,” Kanwisher says. “Rather, there are endogenous constraints in the brain present at birth, in this case, in the form of connections to higher-level brain regions, and these connections are perhaps playing a causal role in its development.”

The research was funded by the National Institutes of Health Shared Instrumentation Grant to the Athinoula Martinos Center at MIT, a National Eye Institute Training Grant, the Smith-Kettlewell Eye Research Institute’s Rehabilitation Engineering Research Center, an Office of Naval Research Vannevar Bush Faculty Fellowship, an NIH Pioneer Award, and a National Science Foundation Science and Technology Center Grant.

Full paper at PNAS

Key brain region was “recycled” as humans developed the ability to read

Humans began to develop systems of reading and writing only within the past few thousand years. Our reading abilities set us apart from other animal species, but a few thousand years is much too short a timeframe for our brains to have evolved new areas specifically devoted to reading.

To account for the development of this skill, some scientists have hypothesized that parts of the brain that originally evolved for other purposes have been “recycled” for reading. As one example, they suggest that a part of the visual system that is specialized to perform object recognition has been repurposed for a key component of reading called orthographic processing — the ability to recognize written letters and words.

A new study from MIT neuroscientists offers evidence for this hypothesis. The findings suggest that even in nonhuman primates, who do not know how to read, a part of the brain called the inferotemporal (IT) cortex is capable of performing tasks such as distinguishing words from nonsense words, or picking out specific letters from a word.

“This work has opened up a potential linkage between our rapidly developing understanding of the neural mechanisms of visual processing and an important primate behavior — human reading,” says James DiCarlo, the head of MIT’s Department of Brain and Cognitive Sciences, an investigator in the McGovern Institute for Brain Research and the Center for Brains, Minds, and Machines, and the senior author of the study.

Rishi Rajalingham, an MIT postdoc, is the lead author of the study, which appears in Nature Communications. Other MIT authors are postdoc Kohitij Kar and technical associate Sachi Sanghavi. The research team also includes Stanislas Dehaene, a professor of experimental cognitive psychology at the Collège de France.

Word recognition

Reading is a complex process that requires recognizing words, assigning meaning to those words, and associating words with their corresponding sound. These functions are believed to be spread out over different parts of the human brain.

Functional magnetic resonance imaging (fMRI) studies have identified a region called the visual word form area (VWFA) that lights up when the brain processes a written word. This region is involved in the orthographic stage: It discriminates words from jumbled strings of letters or words from unknown alphabets. The VWFA is located in the IT cortex, a part of the visual cortex that is also responsible for identifying objects.

DiCarlo and Dehaene became interested in studying the neural mechanisms behind word recognition after cognitive psychologists in France reported that baboons could learn to discriminate words from nonwords, in a study that appeared in Science in 2012.

Using fMRI, Dehaene’s lab has previously found that parts of the IT cortex that respond to objects and faces become highly specialized for recognizing written words once people learn to read.

“However, given the limitations of human imaging methods, it has been challenging to characterize these representations at the resolution of individual neurons, and to quantitatively test if and how these representations might be reused to support orthographic processing,” Dehaene says. “These findings inspired us to ask if nonhuman primates could provide a unique opportunity to investigate the neuronal mechanisms underlying orthographic processing.”

The researchers hypothesized that if parts of the primate brain are predisposed to process text, they might be able to find patterns reflecting that in the neural activity of nonhuman primates as they simply look at words.

To test that idea, the researchers recorded neural activity from about 500 neural sites across the IT cortex of macaques as they looked at about 2,000 strings of letters, some of which were English words and some of which were nonsensical strings of letters.

“The efficiency of this methodology is that you don’t need to train animals to do anything,” Rajalingham says. “What you do is just record these patterns of neural activity as you flash an image in front of the animal.”

The researchers then fed that neural data into a simple computer model called a linear classifier. This model learns to combine the inputs from each of the 500 neural sites to predict whether the string of letters that provoked that activity pattern was a word or not. While the animal itself is not performing this task, the model acts as a “stand-in” that uses the neural data to generate a behavior, Rajalingham says.

Using that neural data, the model was able to generate accurate predictions for many orthographic tasks, including distinguishing words from nonwords and determining if a particular letter is present in a string of letters. The model was about 70 percent accurate at distinguishing words from nonwords, which is very similar to the rate reported in the 2012 Science study with baboons. Furthermore, the patterns of errors made by the model were similar to those made by the animals.
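A minimal stand-in for this analysis can be written with entirely synthetic "neural" data (the real study used recorded macaque responses and careful cross-validation; here the word/nonword signal is planted by construction):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic responses of 500 recording sites to 2,000 letter strings, where
# "word" stimuli shift population activity along one unknown direction.
n_sites, n_stim = 500, 2000
labels = rng.integers(0, 2, size=n_stim)        # 1 = word, 0 = nonword
word_axis = rng.normal(size=n_sites)
word_axis /= np.linalg.norm(word_axis)
X = rng.normal(size=(n_stim, n_sites)) + 2.0 * np.outer(labels, word_axis)

# Train a linear classifier on part of the data: least squares onto +/-1
# labels, with the sign of the readout as the predicted class.
train, test = slice(0, 1500), slice(1500, None)
y = 2 * labels - 1
w, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)

acc = np.mean(np.sign(X[test] @ w) == y[test])
print(f"held-out word/nonword accuracy: {acc:.2f}")
```

The classifier itself is trivial; the scientific content is in which brain area's activity makes such a readout succeed.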

Neuronal recycling

The researchers also recorded neural activity from a different brain area that also feeds into IT cortex: V4, which is part of the visual cortex. When they fed V4 activity patterns into the linear classifier model, its predictions of human and baboon performance on the orthographic processing tasks were much less accurate than those based on IT activity.

The findings suggest that the IT cortex is particularly well-suited to be repurposed for skills that are needed for reading, and they support the hypothesis that some of the mechanisms of reading are built upon highly evolved mechanisms for object recognition, the researchers say.

The researchers now plan to train animals to perform orthographic tasks and measure how their neural activity changes as they learn the tasks.

The research was funded by the Simons Foundation and the U.S. Office of Naval Research.

Full paper at Nature Communications

Ila Fiete studies how the brain performs complex computations

While doing a postdoc about 15 years ago, Ila Fiete began searching for faculty jobs in computational neuroscience — a field that uses mathematical tools to investigate brain function. However, there were no advertised positions in theoretical or computational neuroscience at that time in the United States.

“It wasn’t really a field,” she recalls. “That has changed completely, and [now] there are 15 to 20 openings advertised per year.” She ended up finding a position in the Center for Learning and Memory at the University of Texas at Austin, which, along with a small handful of universities including MIT, was open to neurobiologists with a computational background.

Computation is the cornerstone of Fiete’s research at MIT’s McGovern Institute for Brain Research, where she has been a faculty member since 2018. Using computational and mathematical techniques, she studies how the brain encodes information in ways that enable cognitive tasks such as learning, memory, and reasoning about our surroundings.

One major research area in Fiete’s lab is how the brain is able to continuously compute the body’s position in space and make constant adjustments to that estimate as we move about.

“When we walk through the world, we can close our eyes and still have a pretty good estimate of where we are,” she says. “This involves being able to update our estimate based on our sense of self-motion. There are also many computations in the brain that involve moving through abstract or mental rather than physical space, and integrating velocity signals of some variety or another. Some of the same ideas and even circuits for spatial navigation might be involved in navigating through these mental spaces.”
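The velocity-integration idea Fiete describes can be sketched in a few lines (a generic illustration of path integration, not a model from her lab; all numbers are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
dt = 0.01                                 # seconds per timestep
true_vel = rng.normal(size=(1000, 2))     # 10 s of 2-D velocity samples

# Integrate a noisy internal copy of the velocity signal, as a navigator
# (or a neural circuit) might do with eyes closed.
estimate = np.zeros(2)
for v in true_vel:
    v_sensed = v + 0.05 * rng.normal(size=2)   # imperfect self-motion signal
    estimate += v_sensed * dt                  # position += velocity * dt

true_position = true_vel.sum(axis=0) * dt
err = float(np.linalg.norm(estimate - true_position))
print(f"drift after 10 s: {err:.3f} (arbitrary units)")
```

Because the sensing noise accumulates, the estimate slowly drifts from the true position, which is why such estimates must be periodically corrected by landmarks when the eyes reopen.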

No better fit

Fiete spent her childhood between Mumbai, India, and the United States, where her mathematician father held a series of visiting or permanent appointments at the Institute for Advanced Study in Princeton, NJ, the University of California at Berkeley, and the University of Michigan at Ann Arbor.

In India, Fiete’s father did research at the Tata Institute of Fundamental Research, and she grew up spending time with many other children of academics. She was always interested in biology, but also enjoyed math, following in her father’s footsteps.

“My father was not a hands-on parent, wanting to teach me a lot of mathematics, or even asking me about how my math schoolwork was going, but the influence was definitely there. There’s a certain aesthetic to thinking mathematically, which I absorbed very indirectly,” she says. “My parents did not push me into academics, but I couldn’t help but be influenced by the environment.”

She spent her last two years of high school in Ann Arbor and then went to the University of Michigan, where she majored in math and physics. While there, she worked on undergraduate research projects, including two summer stints at Indiana University and the University of Virginia, which gave her firsthand experience in physics research. Those projects covered a range of topics, including proton radiation therapy, the magnetic properties of single crystal materials, and low-temperature physics.

“Those three experiences are what really made me sure that I wanted to go into academics,” Fiete says. “It definitely seemed like the path that I knew the best, and I think it also best suited my temperament. Even now, with more exposure to other fields, I cannot think of a better fit.”

Although she was still interested in biology, she took only one course in the subject in college, holding back because she did not know how to marry quantitative approaches with biological sciences. She began her graduate studies at Harvard University planning to study low-temperature physics, but while there, she decided to explore quantitative classes in biology. One of those was a systems biology course taught by then-MIT professor Sebastian Seung, which transformed her career trajectory.

“It was truly inspiring,” she recalls. “Thinking mathematically about interacting systems in biology was really exciting. It was really my first introduction to systems biology, and it had me hooked immediately.”

She ended up doing most of her PhD research in Seung’s lab at MIT, where she studied how the brain uses incoming signals of the velocity of head movement to control eye position. For example, if we want to keep our gaze fixed on a particular location while our head is moving, the brain must continuously calculate and adjust the amount of tension needed in the muscles surrounding the eyes, to compensate for the movement of the head.

“Bizarre” cells

After earning her PhD, Fiete and her husband, a theoretical physicist, went to the Kavli Institute for Theoretical Physics at the University of California at Santa Barbara, where they each held fellowships for independent research. While there, Fiete began working on a research topic that she still studies today — grid cells. These cells, located in the entorhinal cortex of the brain, enable us to navigate our surroundings by helping the brain to create a neural representation of space.

Midway through her fellowship there, she learned of a new discovery: when a rat moves across an open room, a grid cell in its brain fires at many different locations arranged geometrically in a regular pattern of repeating triangles. Together, a population of grid cells forms a lattice of triangles representing the entire room. These cells have also been found in the brains of various other mammals, including humans.

“It’s amazing. It’s this very crystalline response,” Fiete says. “When I read about that, I fell out of my chair. At that point I knew this was something bizarre that would generate so many questions about development, function, and brain circuitry that could be studied computationally.”

One question Fiete and others have investigated is why the brain needs grid cells at all, since it also has so-called place cells that each fire in one specific location in the environment. A possible explanation that Fiete has explored is that grid cells of different scales, working together, can represent a vast number of possible positions in space and also multiple dimensions of space.

“If you have a few cells that can parsimoniously generate a very large coding space, then you can afford to not use most of that coding space,” she says. “You can afford to waste most of it, which means you can separate things out very well, in which case it becomes not so susceptible to noise.”
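The coding-capacity argument can be illustrated with a toy residue-number-system calculation (the periods below are invented for illustration; real grid modules have continuous, roughly geometrically spaced scales):

```python
import math

# Hypothetical grid-module periods, in arbitrary spatial bins. Each module
# reports position only modulo its own period, but with pairwise coprime
# periods the combined code is unique over their product (Chinese remainder
# theorem) -- a huge coding space from a few small modules.
periods = [31, 37, 41, 43]

capacity = math.prod(periods)
print(f"{len(periods)} modules, largest period {max(periods)}, "
      f"jointly encode {capacity:,} positions")
```

Four modules, none spanning more than 43 bins on its own, jointly distinguish over two million positions, which is the sense in which most of the coding space can be "wasted" to gain robustness to noise.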

Since returning to MIT, she has also pursued a research theme related to what she explored in her PhD thesis — how the brain maintains neural representations of where the head is located in space. In a paper published last year, she showed that the brain generates a one-dimensional ring of neural activity that acts as a compass, allowing the brain to calculate the current direction of the head relative to the external world.

Her lab also studies cognitive flexibility — the brain’s ability to perform so many different types of cognitive tasks.

“How it is that we can repurpose the same circuits and flexibly use them to solve many different problems, and what are the neural codes that are amenable to that kind of reuse?” she says. “We’re also investigating the principles that allow the brain to hook multiple circuits together to solve new problems without a lot of reconfiguration.”

Looking into the black box of deep learning networks

Deep learning systems are revolutionizing technology around us, from the voice recognition in your phone to autonomous vehicles that are increasingly able to see and recognize obstacles ahead. But much of this success involves trial and error when it comes to the deep learning networks themselves. A group of MIT researchers recently reviewed their contributions to a better theoretical understanding of deep learning networks, providing direction for the field moving forward.

“Deep learning was in some ways an accidental discovery,” explains Tomaso Poggio, investigator at the McGovern Institute for Brain Research, director of the Center for Brains, Minds, and Machines (CBMM), and the Eugene McDermott Professor in Brain and Cognitive Sciences. “We still do not understand why it works. A theoretical framework is taking form, and I believe that we are now close to a satisfactory theory. It is time to stand back and review recent insights.”

Climbing data mountains

Our current era is marked by a superabundance of data — data from inexpensive sensors of all types, text, the internet, and large amounts of genomic data being generated in the life sciences. Computers nowadays ingest these multidimensional datasets, creating a set of problems dubbed the “curse of dimensionality” by the late mathematician Richard Bellman.

One of these problems is that representing a smooth, high-dimensional function requires an astronomically large number of parameters. We know that deep neural networks are particularly good at learning how to represent, or approximate, such complex data, but why? Understanding why could potentially help advance deep learning applications.
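Bellman's observation is easy to make concrete with a back-of-the-envelope count (an illustration of the scaling, not an exact statement):

```python
# Sampling a generic smooth function on a grid with resolution eps in each
# of d dimensions takes roughly (1/eps)**d points -- exponential in the
# number of dimensions.
eps = 0.1  # 10 grid points per dimension
for d in (1, 2, 10, 100):
    print(f"d = {d:>3}: ~{(1 / eps) ** d:.0e} grid points")
```

At 100 dimensions the count exceeds the number of atoms in the observable universe, which is why naive tabulation of high-dimensional functions is hopeless.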

“Deep learning is like electricity after Volta discovered the battery, but before Maxwell,” explains Poggio.

“Useful applications were certainly possible after Volta, but it was Maxwell’s theory of electromagnetism, this deeper understanding that then opened the way to the radio, the TV, the radar, the transistor, the computers, and the internet,” says Poggio, who is the founding scientific advisor of The Core, MIT Quest for Intelligence, and an investigator in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT.

The theoretical treatment by Poggio, Andrzej Banburski, and Qianli Liao points to why deep learning might overcome data problems such as “the curse of dimensionality.” Their approach starts with the observation that many natural structures are hierarchical. To model the growth and development of a tree doesn’t require that we specify the location of every twig. Instead, a model can use local rules to drive branching hierarchically. The primate visual system appears to do something similar when processing complex data. When we look at natural images — including trees, cats, and faces — the brain successively integrates local image patches, then small collections of patches, and then collections of collections of patches.

“The physical world is compositional — in other words, composed of many local physical interactions,” explains Qianli Liao, an author of the study, and a graduate student in the Department of Electrical Engineering and Computer Science and a member of the CBMM. “This goes beyond images. Language and our thoughts are compositional, and even our nervous system is compositional in terms of how neurons connect with each other. Our review explains theoretically why deep networks are so good at representing this complexity.”

The intuition is that a hierarchical neural network should be better at approximating a compositional function than a single “layer” of neurons, even if the total number of neurons is the same. The technical part of their work identifies what “better at approximating” means and proves that the intuition is correct.
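The parameter-counting intuition can be sketched numerically (the exponents follow standard approximation-theory scaling for smooth functions, with constants omitted; the specific numbers are illustrative):

```python
# Approximating a function of d variables to accuracy eps: a single generic
# d-dimensional approximator needs ~eps**(-d) parameters, while a
# compositional function built from two-input constituents arranged in a
# binary tree needs only ~(d - 1) * eps**(-2) -- the exponential dependence
# on d disappears for deep, hierarchical approximators.
eps, d = 0.1, 8
shallow = eps ** (-d)             # one generic d-dimensional block
deep = (d - 1) * eps ** (-2)      # d - 1 local two-input blocks in a tree
print(f"shallow: ~{shallow:.0e} parameters, compositional: ~{deep:.0e}")
```

Even at a modest eight inputs, the hierarchical count is smaller by five orders of magnitude, which is the quantitative content behind "better at approximating."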

Generalization puzzle

There is a second puzzle about what is sometimes called the unreasonable effectiveness of deep networks. Deep network models often have far more parameters than data to fit them, despite the mountains of data we produce these days. This situation ought to lead to what is called “overfitting,” where your current data fit the model well, but any new data fit the model terribly. This is dubbed poor generalization in conventional models. The conventional solution is to constrain some aspect of the fitting procedure. However, deep networks do not seem to require this constraint. Poggio and his colleagues prove that, in many cases, the process of training a deep network implicitly “regularizes” the solution, providing constraints.
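A standard toy version of implicit regularization (a well-known result for linear models, not the paper's specific analysis) is that gradient descent started from zero solves an overparameterized least-squares problem with the minimum-norm interpolating solution, even though no penalty term was ever written down:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 20, 100                       # far more parameters than data points
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

# Plain gradient descent on the squared loss, initialized at zero.
w = np.zeros(p)
lr = 0.005
for _ in range(5000):
    w -= lr * X.T @ (X @ w - y)

# The minimum-norm solution that exactly interpolates the data.
w_min_norm = np.linalg.pinv(X) @ y

print("fits the data exactly:", np.allclose(X @ w, y, atol=1e-6))
print("matches min-norm solution:", np.allclose(w, w_min_norm, atol=1e-4))
```

Among the infinitely many weight vectors that fit the data perfectly, the training procedure itself silently picks out a constrained one, which is the flavor of result the MIT group extends to deep networks.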

The work has a number of implications going forward. Though deep learning is actively being applied in the world, this has so far occurred without a comprehensive underlying theory. A theory of deep learning that explains why and how deep networks work, and what their limitations are, will likely allow the development of even more powerful learning approaches.

“In the long term, the ability to develop and build better intelligent machines will be essential to any technology-based economy,” explains Poggio. “After all, even in its current — still highly imperfect — state, deep learning is impacting, or about to impact, just about every aspect of our society and life.”

Mapping the brain’s sensory gatekeeper

Many people with autism experience sensory hypersensitivity, attention deficits, and sleep disruption. One brain region that has been implicated in these symptoms is the thalamic reticular nucleus (TRN), which is believed to act as a gatekeeper for sensory information flowing to the cortex.

A team of researchers from MIT and the Broad Institute of MIT and Harvard has now mapped the TRN in unprecedented detail, revealing that the region contains two distinct subnetworks of neurons with different functions. The findings could offer researchers more specific targets for designing drugs that could alleviate some of the sensory, sleep, and attention symptoms of autism, says Guoping Feng, one of the leaders of the research team.

These cross-sections of the thalamic reticular nucleus (TRN) show two distinct populations of neurons, labeled in purple and green.
Image: courtesy of the researchers

“The idea is that you could very specifically target one group of neurons, without affecting the whole brain and other cognitive functions,” says Feng, the James W. and Patricia Poitras Professor of Neuroscience at MIT and a member of MIT’s McGovern Institute for Brain Research.

Feng; Zhanyan Fu, associate director of neurobiology at the Broad Institute’s Stanley Center for Psychiatric Research; and Joshua Levin, a senior group leader at the Broad Institute, are the senior authors of the study, which appears today in Nature. The paper’s lead authors are former MIT postdoc Yinqing Li, former Broad Institute postdoc Violeta Lopez-Huerta, and Broad Institute research scientist Xian Adiconis.

Distinct populations

When sensory input from the eyes, ears, or other sensory organs arrives in our brains, it goes first to the thalamus, which then relays it to the cortex for higher-level processing. Impairments of these thalamo-cortical circuits can lead to attention deficits, hypersensitivity to noise and other stimuli, and sleep problems.

One of the major pathways that controls information flow between the thalamus and the cortex is the TRN, which is responsible for blocking out distracting sensory input. In 2016, Feng and MIT Assistant Professor Michael Halassa, who is also an author of the new Nature paper, discovered that loss of a gene called Ptchd1 significantly affects TRN function. In boys, loss of this gene, which is carried on the X chromosome, can lead to attention deficits, hyperactivity, aggression, intellectual disability, and autism spectrum disorders.

In that study, the researchers found that when the Ptchd1 gene was knocked out in mice, the animals showed many of the same behavioral defects seen in human patients. When it was knocked out only in the TRN, the mice showed only hyperactivity, attention deficits, and sleep disruption, suggesting that the TRN is responsible for those symptoms.

In the new study, the researchers wanted to try to learn more about the specific types of neurons found in the TRN, in hopes of finding new ways to treat hyperactivity and attention deficits. Currently, those symptoms are most often treated with stimulant drugs such as Ritalin, which have widespread effects throughout the brain.

“Our goal was to find some specific ways to modulate the function of thalamo-cortical output and relate it to neurodevelopmental disorders,” Feng says. “We decided to try using single-cell technology to dissect out what cell types are there, and what genes are expressed. Are there specific genes that are druggable as a target?”

To explore that possibility, the researchers sequenced the messenger RNA molecules found in neurons of the TRN, which reveals genes that are being expressed in those cells. This allowed them to identify hundreds of genes that could be used to differentiate the cells into two subpopulations, based on how strongly they express those particular genes.
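The core of such an analysis can be sketched with synthetic data (a deliberately simplified illustration; real single-cell RNA-seq pipelines involve normalization, dimensionality reduction, and more robust clustering than the toy 2-means below):

```python
import numpy as np

rng = np.random.default_rng(4)
n_genes = 50
marker = np.zeros(n_genes)
marker[:10] = 3.0                            # genes elevated in population A

# Two hypothetical cell populations, 100 cells each, as rows of an
# expression matrix (cells x genes).
pop_a = rng.normal(size=(100, n_genes)) + marker
pop_b = rng.normal(size=(100, n_genes))
cells = np.vstack([pop_a, pop_b])

# Minimal 2-means clustering on the expression profiles.
centers = cells[[0, -1]].copy()              # seed with one cell from each end
for _ in range(20):
    dists = ((cells[:, None, :] - centers) ** 2).sum(axis=2)
    labels = dists.argmin(axis=1)
    centers = np.array([cells[labels == k].mean(axis=0) for k in range(2)])

print("cluster sizes:", np.bincount(labels))
```

When two subpopulations genuinely differ in a set of marker genes, even this bare-bones clustering recovers them, and the strongly differential genes are the candidates for "druggable" targets.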

They found that one of these cell populations is located in the core of the TRN, while the other forms a very thin layer surrounding the core. These two populations also form connections to different parts of the thalamus, the researchers found. Based on those connections, the researchers hypothesize that cells in the core are involved in relaying sensory information to the brain’s cortex, while cells in the outer layer appear to help coordinate information that comes in through different senses, such as vision and hearing.

“Druggable targets”

The researchers now plan to study the varying roles that these two populations of neurons may have in a variety of neurological symptoms, including attention deficits, hypersensitivity, and sleep disruption. Using genetic and optogenetic techniques, they hope to determine the effects of activating or inhibiting different TRN cell types, or genes expressed in those cells.

“That can help us in the future really develop specific druggable targets that can potentially modulate different functions,” Feng says. “Thalamo-cortical circuits control many different things, such as sensory perception, sleep, attention, and cognition, and it may be that these can be targeted more specifically.”

This approach could also be useful for treating attention or hypersensitivity disorders even when they aren’t caused by defects in TRN function, the researchers say.

“TRN is a target where if you enhance its function, you might be able to correct problems caused by impairments of the thalamo-cortical circuits,” Feng says. “Of course we are far away from the development of any kind of treatment, but the potential is that we can use single-cell technology to not only understand how the brain organizes itself, but also how brain functions can be segregated, allowing you to identify much more specific targets that modulate specific functions.”

The research was funded by the Simons Center for the Social Brain at MIT, the Hock E. Tan and K. Lisa Yang Center for Autism Research at MIT, the James and Patricia Poitras Center for Psychiatric Disorders Research at MIT, the Stanley Center for Psychiatric Research at the Broad Institute, the National Institutes of Health/National Institute for Mental Health, the Klarman Cell Observatory at the Broad Institute, the Pew Foundation, and the Human Frontiers Science Program.

A mechanical way to stimulate neurons

In addition to responding to electrical and chemical stimuli, many of the body’s neural cells can also respond to mechanical effects, such as pressure or vibration. But these responses have been more difficult for researchers to study, because there has been no easily controllable method for inducing such mechanical stimulation of the cells. Now, researchers at MIT and elsewhere have found a new method for doing just that.

The finding might offer a step toward new kinds of therapeutic treatments, similar to electrically based neurostimulation that has been used to treat Parkinson’s disease and other conditions. Unlike those systems, which require an external wire connection, the new system would be completely contact-free after an initial injection of particles, and could be reactivated at will through an externally applied magnetic field.

The finding is reported in the journal ACS Nano, in a paper by former MIT postdoc Danijela Gregurec, Alexander Senko PhD ’19, Associate Professor Polina Anikeeva, and nine others at MIT, at Boston’s Brigham and Women’s Hospital, and in Spain.

The new method opens a new pathway for the stimulation of nerve cells within the body, which has so far almost entirely relied on either chemical pathways, through the use of pharmaceuticals, or on electrical pathways, which require invasive wires to deliver voltage into the body. This mechanical stimulation, which activates entirely different signaling pathways within the neurons themselves, could open a significant new area of study, the researchers say.

“An interesting thing about the nervous system is that neurons can actually detect forces,” Senko says. “That’s how your sense of touch works, and also your sense of hearing and balance.” The team targeted a particular group of neurons within a structure known as the dorsal root ganglion, which forms an interface between the central and peripheral nervous systems, because these cells are particularly sensitive to mechanical forces.

The applications of the technique could be similar to those being developed in the field of bioelectronic medicines, Senko says, but those require electrodes that are typically much bigger and stiffer than the neurons being stimulated, limiting their precision and sometimes damaging cells.

The key to the new process was developing minuscule discs with an unusual magnetic property, which can cause them to start fluttering when subjected to a certain kind of varying magnetic field. Though the particles themselves are only 100 or so nanometers across, roughly a hundredth of the size of the neurons they are trying to stimulate, they can be made and injected in great quantities, so that collectively their effect is strong enough to activate the cell’s pressure receptors. “We made nanoparticles that actually produce forces that cells can detect and respond to,” Senko says.

Anikeeva says that conventional magnetic nanoparticles would have required impractically large magnetic fields to be activated, so finding materials that could provide sufficient force with just moderate magnetic activation was “a very hard problem.” The solution proved to be a new kind of magnetic nanodisc.

These discs, which are hundreds of nanometers in diameter, contain a vortex configuration of atomic spins when there are no external magnetic fields applied. This makes the particles behave as if they were not magnetic at all, making them exceptionally stable in solutions. When these discs are subjected to a very weak varying magnetic field of a few millitesla, with a low frequency of just several hertz, they switch to a state where the internal spins are all aligned in the disc plane. This allows these nanodiscs to act as levers — wiggling up and down with the direction of the field.
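A rough order-of-magnitude estimate shows why this lever action matters (every number below is an assumption chosen for illustration, not a figure from the paper):

```python
import math

# Assumed disc geometry and field, all illustrative.
Ms = 480e3          # saturation magnetization of magnetite, A/m
radius = 100e-9     # disc radius, m
thickness = 30e-9   # disc thickness, m
B = 5e-3            # applied field, T (a few millitesla)

volume = math.pi * radius**2 * thickness
moment = Ms * volume            # magnetic moment once the spins align in-plane
torque = moment * B             # maximum torque: |m x B|
force = torque / radius         # equivalent lever-arm force at the disc edge

print(f"edge force ~ {force * 1e12:.0f} pN")
```

The result lands in the tens-of-piconewtons range, which is the force scale that mechanosensitive ion channels are known to detect, so a weak, slowly varying field can plausibly do the job that conventional nanoparticles would need enormous fields to accomplish.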

Anikeeva, who is an associate professor in the departments of Materials Science and Engineering and Brain and Cognitive Sciences, says this work combines several disciplines, including new chemistry that led to development of these nanodiscs, along with electromagnetic effects and work on the biology of neurostimulation.

The team first considered using particles of a magnetic metal alloy that could provide the necessary forces, but these were not biocompatible materials, and they were prohibitively expensive. The researchers found a way to use particles made from hematite, a benign iron oxide, which can form the required disc shapes. The hematite was then converted into magnetite, which has the magnetic properties they needed and is known to be benign in the body. This chemical transformation from hematite to magnetite dramatically turns a blood-red tube of particles to jet black.

“We had to confirm that these particles indeed supported this really unusual spin state, this vortex,” Gregurec says. They first tried out the newly developed nanoparticles and proved, using holographic imaging systems provided by colleagues in Spain, that the particles really did react as expected, providing the necessary forces to elicit responses from neurons. The results came in late December and “everyone thought that was a Christmas present,” Anikeeva recalls, “when we got our first holograms, and we could really see that what we have theoretically predicted and chemically suspected actually was physically true.”

The work is still in its infancy, she says. “This is a very first demonstration that it is possible to use these particles to transduce large forces to membranes of neurons in order to stimulate them.”

She adds “that opens an entire field of possibilities. … This means that anywhere in the nervous system where cells are sensitive to mechanical forces, and that’s essentially any organ, we can now modulate the function of that organ.” That brings science a step closer, she says, to the goal of bioelectronic medicine that can provide stimulation at the level of individual organs or parts of the body, without the need for drugs or electrodes.

The work was supported by the U.S. Defense Advanced Research Projects Agency, the National Institute of Mental Health, the Department of Defense, the Air Force Office of Scientific Research, and the National Defense Science and Engineering Graduate Fellowship.

Full paper at ACS Nano