The Society for Neuroscience (SfN) has awarded the Swartz Prize for Theoretical and Computational Neuroscience to Ila Fiete, professor in the Department of Brain and Cognitive Sciences, associate member of the McGovern Institute for Brain Research, and director of the K. Lisa Yang Integrative Computational Neuroscience Center. The SfN, the world’s largest neuroscience organization, announced that Fiete received the prize for her breakthrough research modeling hippocampal grid cells, a component of the navigational system of the mammalian brain.
“Fiete’s body of work has already significantly shaped the field of neuroscience and will continue to do so for the foreseeable future,” states the announcement from SfN.
“Fiete is considered one of the strongest theorists of her generation who has conducted highly influential work demonstrating that grid cell networks have attractor-like dynamics,” says Hollis Cline, a professor at the Scripps Research Institute of California and head of the Swartz Prize selection committee.
Grid cells are found in the cortex of all mammals. Their unique firing properties, creating a neural representation of our surroundings, allow us to navigate the world. Fiete and collaborators developed computational models showing how interactions between neurons can lead to the formation of periodic lattice-like firing patterns of grid cells and stabilize these patterns to create spatial memory. They showed that as we move around in space, these neural patterns can integrate velocity signals to provide a constantly updated estimate of our position, as well as detect and correct errors in the estimated position.
Fiete also proposed that multiple copies of these patterns at different spatial scales enable an efficient, high-capacity representation of position. She and her colleagues then worked with multiple experimental collaborators to design tests and establish rare evidence that these pattern-forming mechanisms underlie the dynamics of memory patterns in the brain.
“I’m truly honored to receive the Swartz Prize,” says Fiete. “This prize recognizes my group’s efforts to decipher the circuit-level mechanisms of cognitive functions involving navigation, integration, and memory. It also recognizes, in its focus, the bearing-of-fruit of dynamical circuit models from my group and others that explain how individually simple elements combine to generate the longer-lasting memory states and complex computations of the brain. I am proud to be able to represent, in some measure, the work of my incredible students, postdocs, collaborators, and intellectual mentors. I am indebted to them and grateful for the chance to work together.”
According to the SfN announcement, Fiete has contributed to the field in many other ways, including modeling “how entorhinal cortex could interact with the hippocampus to efficiently and robustly store large numbers of memories and developed a remarkable method to discern the structure of intrinsic dynamics in neuronal circuits.” This modeling led to the discovery of an internal compass that tracks the direction of one’s head, even in the absence of external sensory input.
“Recently, Fiete’s group has explored the emergence of modular organization, a line of work that elucidates how grid cell modularity and general cortical modules might self-organize from smooth genetic gradients,” states the SfN announcement. Fiete and her research group have shown that even if the biophysical properties underlying grid cells of different scale are mostly similar, continuous variations in these properties can result in discrete groupings of grid cells, each with a different function.
Fiete was recognized with the Swartz Prize, which includes a $30,000 award, during the SfN annual meeting in San Diego.
Other recent MIT winners of the Swartz Prize include Professor Emery Brown (2020) and Professor Tomaso Poggio (2014).
Neural networks, a type of computing system loosely modeled on the organization of the human brain, form the basis of many artificial intelligence systems for applications such as speech recognition, computer vision, and medical image analysis.
In the field of neuroscience, researchers often use neural networks to try to model the same kind of tasks that the brain performs, in hopes that the models could suggest new hypotheses regarding how the brain itself performs those tasks. However, a group of researchers at MIT is urging that more caution should be taken when interpreting these models.
In an analysis of more than 11,000 neural networks that were trained to simulate the function of grid cells — key components of the brain’s navigation system — the researchers found that neural networks only produced grid-cell-like activity when they were given very specific constraints that are not found in biological systems.
“What this suggests is that in order to obtain a result with grid cells, the researchers training the models needed to bake in those results with specific, biologically implausible implementation choices,” says Rylan Schaeffer, a former senior research associate at MIT.
Without those constraints, the MIT team found that very few neural networks generated grid-cell-like activity, suggesting that these models do not necessarily generate useful predictions of how the brain works.
Schaeffer, who is now a graduate student in computer science at Stanford University, is the lead author of the new study, which will be presented at the 2022 Conference on Neural Information Processing Systems this month. Ila Fiete, a professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research, is the senior author of the paper. Mikail Khona, an MIT graduate student in physics, is also an author.
Modeling grid cells
Neural networks, which researchers have been using for decades to perform a variety of computational tasks, consist of thousands or millions of processing units connected to each other. Each unit has connections of varying strengths to other units in the network. As the network analyzes huge amounts of data, the strengths of those connections change as the network learns to perform the desired task.
In this study, the researchers focused on neural networks that have been developed to mimic the function of the brain’s grid cells, which are found in the entorhinal cortex of the mammalian brain. Together with place cells, found in the hippocampus, grid cells form a brain circuit that helps animals know where they are and how to navigate to a different location.
Place cells have been shown to fire whenever an animal is in a specific location, and each place cell may respond to more than one location. Grid cells, on the other hand, work very differently. As an animal moves through a space such as a room, grid cells fire only when the animal is at one of the vertices of a triangular lattice. Different groups of grid cells create lattices of slightly different dimensions, which overlap each other. This allows grid cells to encode a large number of unique positions using a relatively small number of cells.
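The hexagonal firing pattern described above is often summarized with a simple idealized model: a grid cell's firing rate at each location is the rectified sum of three plane waves whose directions are 60 degrees apart. The sketch below is a toy illustration of that textbook model, not code from any of the studies discussed here; the arena size, grid spacing, and phase values are arbitrary assumptions.

```python
# Toy idealized grid-cell rate map: sum of three plane waves 60 degrees apart,
# rectified so that firing peaks sit on the vertices of a triangular lattice.
import numpy as np

def grid_rate_map(extent=1.0, n=200, spacing=0.3, phase=(0.0, 0.0)):
    """Firing rate of one idealized grid cell over a square arena."""
    xs = np.linspace(0, extent, n)
    X, Y = np.meshgrid(xs, xs)
    k = 4 * np.pi / (np.sqrt(3) * spacing)     # wave number giving the chosen field spacing
    angles = np.deg2rad([0, 60, 120])          # three wave directions, 60 degrees apart
    rate = np.zeros_like(X)
    for a in angles:
        rate += np.cos(k * ((X - phase[0]) * np.cos(a) + (Y - phase[1]) * np.sin(a)))
    return np.maximum(rate, 0)                 # rectify: peaks fall on a triangular lattice

rates = grid_rate_map()
print(rates.shape, float(rates.max()))         # the maximum of 3 occurs at lattice vertices
```

Varying the spacing and phase parameters produces the overlapping lattices of different dimensions that different groups of grid cells provide.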
This type of location encoding also makes it possible to predict an animal’s next location based on a given starting point and a velocity. In several recent studies, researchers have trained neural networks to perform this same task, which is known as path integration.
To train a neural network to perform this task, researchers feed it a starting point and a velocity that varies over time. The model essentially mimics the activity of an animal roaming through a space, and calculates updated positions as it moves. As the model performs the task, the activity patterns of different units within the network can be measured. Each unit’s activity can be represented as a firing pattern, similar to the firing patterns of neurons in the brain.
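As a concrete illustration of that setup, here is a minimal sketch in PyTorch under assumptions of my own: the network size, trajectory statistics, learning rate, and the plain coordinate readout are all illustrative choices, not the published models. A recurrent network receives the velocity sequence plus the starting position and is trained to report the current position.

```python
# Minimal path-integration training sketch (illustrative, not the published models):
# a recurrent network reads a velocity sequence and learns to output 2-D position.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_batch(batch=64, steps=50, dt=0.1):
    vel = 0.5 * torch.randn(batch, steps, 2)             # random 2-D velocities
    pos = torch.cumsum(vel * dt, dim=1)                  # ground-truth integrated path
    start = torch.zeros(batch, 1, 2)                     # all paths start at the origin here
    inputs = torch.cat([vel, start.expand(-1, steps, -1)], dim=-1)
    return inputs, pos

class PathIntegrator(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.rnn = nn.RNN(input_size=4, hidden_size=hidden, batch_first=True)
        self.readout = nn.Linear(hidden, 2)               # decode position from hidden units

    def forward(self, x):
        h, _ = self.rnn(x)
        return self.readout(h)

model = PathIntegrator()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):                                   # short demo run
    inputs, target = make_batch()
    loss = nn.functional.mse_loss(model(inputs), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"final training loss: {loss.item():.4f}")
```

Note that this sketch reads out raw x-y coordinates; as discussed below, the published studies typically read out a layer of simulated place cells instead, and that choice turns out to matter.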
In several previous studies, researchers have reported that their models produced units with activity patterns that closely mimic the firing patterns of grid cells. These studies concluded that grid-cell-like representations would naturally emerge in any neural network trained to perform the path integration task.
However, the MIT researchers found very different results. In an analysis of more than 11,000 neural networks that they trained on path integration, they found that while nearly 90 percent of them learned the task successfully, only about 10 percent of those networks generated activity patterns that could be classified as grid-cell-like. That figure includes networks in which even a single unit achieved a high grid score.
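A “grid score” is a standard way of quantifying how hexagonal a unit’s spatial firing is. The sketch below implements one common version of the measure, based on rotational symmetry of the spatial autocorrelogram; it is an illustrative approximation, not necessarily the exact criterion used in the study.

```python
# A common grid score (illustrative, not necessarily the study's exact criterion):
# correlate a rate map's spatial autocorrelogram with rotated copies of itself and
# compare the 60/120-degree rotations (high for hexagonal patterns) against the
# 30/90/150-degree rotations.
import numpy as np
from scipy.signal import correlate2d
from scipy.ndimage import rotate

def grid_score(rate_map):
    centered = rate_map - rate_map.mean()
    auto = correlate2d(centered, centered, mode="full")
    def corr_at(angle):
        rot = rotate(auto, angle, reshape=False)
        return np.corrcoef(auto.ravel(), rot.ravel())[0, 1]
    return min(corr_at(60), corr_at(120)) - max(corr_at(30), corr_at(90), corr_at(150))

# A hexagonal rate map (e.g., from the grid_rate_map sketch above) should score
# well above 0, while random activity should hover near or below 0.
print(grid_score(np.random.rand(40, 40)))
```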
The earlier studies were more likely to generate grid-cell-like activity only because of the constraints that researchers built into those models, according to the MIT team.
“Earlier studies have presented this story that if you train networks to path integrate, you’re going to get grid cells. What we found is that instead, you have to make this long sequence of choices of parameters, which we know are inconsistent with the biology, and then in a small sliver of those parameters, you will get the desired result,” Schaeffer says.
More biological models
One of the constraints found in earlier studies is that the researchers required the model to convert velocity into a unique position, reported by one network unit that corresponds to a place cell. For this to happen, the researchers also required that each place cell correspond to only one location, which is not how biological place cells work: Studies have shown that place cells in the hippocampus can respond to up to 20 different locations, not just one.
When the MIT team adjusted the models so that place cells were more like biological place cells, the models were still able to perform the path integration task, but they no longer produced grid-cell-like activity. Grid-cell-like activity also disappeared when the researchers instructed the models to generate different types of location output, such as location on a grid with X and Y axes, or location as a distance and angle relative to a home point.
“If the only thing that you ask this network to do is path integrate, and you impose a set of very specific, not physiological requirements on the readout unit, then it’s possible to obtain grid cells,” says Fiete, who is also the director of the K. Lisa Yang Integrative Computational Neuroscience Center at MIT. “But if you relax any of these aspects of this readout unit, that strongly degrades the ability of the network to produce grid cells. In fact, usually they don’t, even though they still solve the path integration task.”
Therefore, if the researchers hadn’t already known of the existence of grid cells and guided the models to produce them, it would have been very unlikely for grid-cell-like activity to appear as a natural consequence of training.
The researchers say that their findings suggest that more caution is warranted when interpreting neural network models of the brain.
“When you use deep learning models, they can be a powerful tool, but one has to be very circumspect in interpreting them and in determining whether they are truly making de novo predictions, or even shedding light on what it is that the brain is optimizing,” Fiete says.
Kenneth Harris, a professor of quantitative neuroscience at University College London, says he hopes the new study will encourage neuroscientists to be more careful when stating what can be shown by analogies between neural networks and the brain.
“Neural networks can be a useful source of predictions. If you want to learn how the brain solves a computation, you can train a network to perform it, then test the hypothesis that the brain works the same way. Whether the hypothesis is confirmed or not, you will learn something,” says Harris, who was not involved in the study. “This paper shows that ‘postdiction’ is less powerful: Neural networks have many parameters, so getting them to replicate an existing result is not as surprising.”
When using these models to make predictions about how the brain works, it’s important to take into account realistic, known biological constraints when building the models, the MIT researchers say. They are now working on models of grid cells that they hope will generate more accurate predictions of how grid cells in the brain work.
“Deep learning models will give us insight about the brain, but only after you inject a lot of biological knowledge into the model,” Khona says. “If you use the correct constraints, then the models can give you a brain-like solution.”
The research was funded by the Office of Naval Research, the National Science Foundation, the Simons Foundation through the Simons Collaboration on the Global Brain, and the Howard Hughes Medical Institute through the Faculty Scholars Program. Mikail Khona was supported by the MathWorks Science Fellowship.
In January, as the Charles River was starting to freeze over, Keith Murray and the other members of MIT’s men’s heavyweight crew team took to erging on the indoor rowing machine. For 80 minutes at a time, Murray endured one of the most grueling workouts of his college experience. To distract himself from the pain, he would talk with his teammates, covering everything from great philosophical ideas to personal coffee preferences.
For Murray, virtually any conversation is an opportunity to explore how people think and why they think in certain ways. Currently a senior double majoring in computation and cognition, and linguistics and philosophy, Murray tries to understand the human experience based on knowledge from all of these fields.
“I’m trying to blend different approaches together to understand the complexities of human cognition,” he says. “For example, from a physiological perspective, the brain is just billions of neurons firing all at once, but this hardly scratches the surface of cognition.”
Murray grew up in Corydon, Indiana, where he attended the Indiana Academy for Science, Mathematics, and Humanities during his junior year of high school. He was exposed to philosophy there, learning the ideas of Plato, Socrates, and Thomas Aquinas, to name a few. When looking at colleges, Murray became interested in MIT because he wanted to learn about human thought processes from different perspectives. “Coming to MIT, I knew I wanted to do something philosophical. But I wanted to also be on the more technical side of things,” he says.
Once on campus, Murray immediately pursued an opportunity through the Undergraduate Research Opportunity Program (UROP) in the Digital Humanities Lab. There he worked with language-processing technology to analyze gendered language in various novels, with the end goal of displaying the data for an online audience. He learned about the basic mathematical models used for analyzing and presenting data online, to study the social implications of linguistic phrases and expressions.
Murray also joined the Concourse learning community, which brought together different perspectives from the humanities, sciences, and math in a weekly seminar. “I was exposed to some excellent examples of how to do interdisciplinary work,” he recalls.
In the summer before his sophomore year, Murray took a position as a researcher in the Harnett Lab, where instead of working with novels, he was working with mice. Alongside postdoc Lucas Fisher, Murray trained mice to do navigational tasks using virtual reality equipment. His goal was to explore neural encoding in navigation, understanding why the mice behaved in certain ways after being shown certain stimuli on the screens. Spending time in the lab, Murray became increasingly interested in neuroscience and the biological components behind human thought processes.
He sought out other neuroscience-related research experiences, which led him to explore a SuperUROP project in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). Working under Professor Nancy Lynch, he designed theoretical models of the retina using machine learning. Murray was excited to apply the techniques he learned in 9.40 (Introduction to Neural Computation) to address complex neurological problems. He considers this one of his most challenging research experiences, as the work was entirely remote.
“It was during the pandemic, so I had to learn a lot on my own; I couldn’t exactly do research in a lab. It was a big challenge, but at the end, I learned a lot and ended up getting a publication out of it,” he reflects.
This past semester, Murray has worked in the lab of Professor Ila Fiete in the McGovern Institute for Brain Research, constructing deep-learning models of animals performing navigational tasks. Through this UROP, which builds on his final project from Fiete’s class 9.49 (Neural Circuits for Cognition), Murray has been working to incorporate existing theoretical models of the hippocampus to investigate the intersection between artificial intelligence and neuroscience.
Reflecting on his varied research experiences, Murray says they have shown him new ways to explore the human brain from multiple perspectives, something he finds helpful as he tries to understand the complexity of human behavior.
Outside of his academic pursuits, Murray has continued to row with the crew team, where he walked on his first year. He sees rowing as a way to build up his strength, both physically and mentally. “When I’m doing my class work or I’m thinking about projects, I am using the same mental toughness that I developed during rowing,” he says. “That’s something I learned at MIT, to cultivate the dedication you put toward something. It’s all the same mental toughness whether you apply it to physical activities like rowing, or research projects.”
Looking ahead, Murray hopes to pursue a PhD in neuroscience, looking to find ways to incorporate his love of philosophy and human thought into his cognitive research. “I think there’s a lot more to do with neuroscience, especially with artificial intelligence. There are so many new technological developments happening right now,” he says.
With the tools of modern neuroscience, data accumulates quickly. Recording devices listen in on the electrical conversations between neurons, picking up the voices of hundreds of cells at a time. Microscopes zoom in to illuminate the brain’s circuitry, capturing thousands of images of cells’ elaborately branched paths. Functional MRIs detect changes in blood flow to map activity within a person’s brain, generating a complete picture by compiling hundreds of scans.
“When I entered neuroscience about 20 years ago, data were extremely precious, and ideas, as the expression went, were cheap. That’s no longer true,” says McGovern Associate Investigator Ila Fiete. “We have an embarrassment of wealth in the data but lack sufficient conceptual and mathematical scaffolds to understand it.”
Fiete will lead the McGovern Institute’s new K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center, whose scientists will create mathematical models and other computational tools to confront the current deluge of data and advance our understanding of the brain and mental health. The center, funded by a $24 million donation from philanthropist Lisa Yang, will take a uniquely collaborative approach to computational neuroscience, integrating data from MIT labs to explain brain function at every level, from the molecular to the behavioral.
“Driven by technologies that generate massive amounts of data, we are entering a new era of translational neuroscience research,” says Yang, whose philanthropic investment in MIT research now exceeds $130 million. “I am confident that the multidisciplinary expertise convened by this center will revolutionize how we synthesize this data and ultimately understand the brain in health and disease.”
Data integration
Fiete says computation is particularly crucial to neuroscience because the brain is so staggeringly complex. Its billions of neurons, which are themselves complicated and diverse, interact with one another through trillions of connections.
“Conceptually, it’s clear that all these interactions are going to lead to pretty complex things. And these are not going to be things that we can explain in stories that we tell,” Fiete says. “We really will need mathematical models. They will allow us to ask about what changes when we perturb one or several components — greatly accelerating the rate of discovery relative to doing those experiments in real brains.”
By representing the interactions between the components of a neural circuit, a model gives researchers the power to explore those interactions, manipulate them, and predict the circuit’s behavior under different conditions.
“You can observe these neurons in the same way that you would observe real neurons. But you can do even more, because you have access to all the neurons and you have access to all the connections and everything in the network,” explains computational neuroscientist and McGovern Associate Investigator Guangyu Robert Yang (no relation to Lisa Yang), who joined MIT as a junior faculty member in July 2021.
Many neuroscience models represent specific functions or parts of the brain. But with advances in computation and machine learning, along with the widespread availability of experimental data with which to test and refine models, “there’s no reason that we should be limited to that,” he says.
Robert Yang’s team at the McGovern Institute is working to develop models that integrate multiple brain areas and functions. “The brain is not just about vision, just about cognition, just about motor control,” he says. “It’s about all of these things. And all these areas, they talk to one another.” Likewise, he notes, it’s impossible to separate the molecules in the brain from their effects on behavior – although those aspects of neuroscience have traditionally been studied independently, by researchers with vastly different expertise.
The ICoN Center will eliminate the divides, bringing together neuroscientists and software engineers to deal with all types of data about the brain. To foster interdisciplinary collaboration, every postdoctoral fellow and engineer at the center will work with multiple faculty mentors. Working in three closely interacting scientific cores, fellows will develop computational technologies for analyzing molecular data, neural circuits, and behavior, such as tools to identify patterns in neural recordings or automate the analysis of human behavior to aid psychiatric diagnoses. These technologies will also help researchers model neural circuits, ultimately transforming data into knowledge and understanding.
“Lisa is focused on helping the scientific community realize its goals in translational research,” says Nergis Mavalvala, dean of the School of Science and the Curtis and Kathleen Marble Professor of Astrophysics. “With her generous support, we can accelerate the pace of research by connecting the data to the delivery of tangible results.”
Computational modeling
In its first five years, the ICoN Center will prioritize four areas of investigation: episodic memory and exploration, including functions like navigation and spatial memory; complex or stereotypical behavior, such as the perseverative behaviors associated with autism and obsessive-compulsive disorder; cognition and attention; and sleep. The goal, Fiete says, is to model the neuronal interactions that underlie these functions so that researchers can predict what will happen when something changes — when certain neurons become more active or when a genetic mutation is introduced, for example. When paired with experimental data from MIT labs, the center’s models will help explain not just how these circuits work, but also how they are altered by genes, the environment, aging, and disease.
These focus areas encompass circuits and behaviors often affected by psychiatric disorders and neurodegeneration, and models will give researchers new opportunities to explore their origins and potential treatment strategies. “I really think that the future of treating disorders of the mind is going to run through computational modeling,” says McGovern Associate Investigator Josh McDermott.
In McDermott’s lab, researchers are modeling the brain’s auditory circuits. “If we had a perfect model of the auditory system, we would be able to understand why when somebody loses their hearing, auditory abilities degrade in the very particular ways in which they degrade,” he says. Then, he says, that model could be used to optimize hearing aids by predicting how the brain would interpret sound altered in various ways by the device.
Similar opportunities will arise as researchers model other brain systems, McDermott says, noting that computational models help researchers grapple with a dauntingly vast realm of possibilities. “There’s lots of different ways the brain can be set up, and lots of different potential treatments, but there is a limit to the number of neuroscience or behavioral experiments you can run,” he says. “Doing experiments on a computational system is cheap, so you can explore the dynamics of the system in a very thorough way.”
The ICoN Center will speed the development of the computational tools that neuroscientists need, both for basic understanding of the brain and clinical advances. But Fiete hopes for a culture shift within neuroscience, as well. “There are a lot of brilliant students and postdocs who have skills that are mathematics and computational and modeling based,” she says. “I think once they know that there are these possibilities to collaborate to solve problems related to psychiatric disorders and how we think, they will see that this is an exciting place to apply their skills, and we can bring them in.”
With the tools of modern neuroscience, researchers can peer into the brain with unprecedented accuracy. Recording devices listen in on the electrical conversations between neurons, picking up the voices of hundreds of cells at a time. Genetic tools allow us to focus on specific types of neurons based on their molecular signatures. Microscopes zoom in to illuminate the brain’s circuitry, capturing thousands of images of elaborately branched dendrites. Functional MRIs detect changes in blood flow to map activity within a person’s brain, generating a complete picture by compiling hundreds of scans.
This deluge of data provides insights into brain function and dynamics at different levels – molecules, cells, circuits, and behavior — but the insights often remain compartmentalized in separate research silos. An innovative new center at MIT’s McGovern Institute aims to leverage them into powerful revelations of the brain’s inner workings.
The center, funded by a $24 million donation from philanthropist Lisa Yang and led by McGovern Institute Associate Investigator Ila Fiete, will take a collaborative approach to computational neuroscience, integrating cutting-edge modeling techniques and data from MIT labs to explain brain function at every level, from the molecular to the behavioral.
“Our goal is that sophisticated, truly integrated computational models of the brain will make it possible to identify how ‘control knobs’ such as genes, proteins, chemicals, and environment drive thoughts and behavior, and to make inroads toward urgent unmet needs in understanding and treating brain disorders,” says Fiete, who is also a brain and cognitive sciences professor at MIT.
“Driven by technologies that generate massive amounts of data, we are entering a new era of translational neuroscience research,” says Yang, whose philanthropic investment in MIT research now exceeds $130 million. “I am confident that the multidisciplinary expertise convened by the ICoN center will revolutionize how we synthesize this data and ultimately understand the brain in health and disease.”
Connecting the data
It is impossible to separate the molecules in the brain from their effects on behavior – although those aspects of neuroscience have traditionally been studied independently, by researchers with vastly different expertise. The ICoN Center will eliminate the divides, bringing together neuroscientists and software engineers to deal with all types of data about the brain.
“The center’s highly collaborative structure, which is essential for unifying multiple levels of understanding, will enable us to recruit talented young scientists eager to revolutionize the field of computational neuroscience,” says Robert Desimone, director of the McGovern Institute. “It is our hope that the ICoN Center’s unique research environment will truly demonstrate a new academic research structure that catalyzes bold, creative research.”
To foster interdisciplinary collaboration, every postdoctoral fellow and engineer at the center will work with multiple faculty mentors. In order to attract young scientists and engineers to the field of computational neuroscience, the center will also provide four graduate fellowships to MIT students each year in perpetuity. Interacting closely with three scientific cores, engineers and fellows will develop computational models and technologies for analyzing molecular data, neural circuits, and behavior, such as tools to identify patterns in neural recordings or automate the analysis of human behavior to aid psychiatric diagnoses. These technologies and models will be instrumental in synthesizing data into knowledge and understanding.
Center priorities
In its first five years, the ICoN Center will prioritize four areas of investigation: episodic memory and exploration, including functions like navigation and spatial memory; complex or stereotypical behavior, such as the perseverative behaviors associated with autism and obsessive-compulsive disorder; cognition and attention; and sleep. Models of complex behavior will be created in collaboration with clinicians and researchers at Children’s Hospital of Philadelphia.
The goal, Fiete says, is to model the neuronal interactions that underlie these functions so that researchers can predict what will happen when something changes — when certain neurons become more active or when a genetic mutation is introduced, for example. When paired with experimental data from MIT labs, the center’s models will help explain not just how these circuits work, but also how they are altered by genes, the environment, aging, and disease. These focus areas encompass circuits and behaviors often affected by psychiatric disorders and neurodegeneration, and models will give researchers new opportunities to explore their origins and potential treatment strategies.
“Lisa Yang is focused on helping the scientific community realize its goals in translational research,” says Nergis Mavalvala, dean of the School of Science and the Curtis and Kathleen Marble Professor of Astrophysics. “With her generous support, we can accelerate the pace of research by connecting the data to the delivery of tangible results.”
While doing a postdoc about 15 years ago, Ila Fiete began searching for faculty jobs in computational neuroscience — a field that uses mathematical tools to investigate brain function. However, there were no advertised positions in theoretical or computational neuroscience at that time in the United States.
“It wasn’t really a field,” she recalls. “That has changed completely, and [now] there are 15 to 20 openings advertised per year.” She ended up finding a position in the Center for Learning and Memory at the University of Texas at Austin, which, along with a small handful of universities including MIT, was open to neurobiologists with a computational background.
Computation is the cornerstone of Fiete’s research at MIT’s McGovern Institute for Brain Research, where she has been a faculty member since 2018. Using computational and mathematical techniques, she studies how the brain encodes information in ways that enable cognitive tasks such as learning, memory, and reasoning about our surroundings.
One major research area in Fiete’s lab is how the brain is able to continuously compute the body’s position in space and make constant adjustments to that estimate as we move about.
“When we walk through the world, we can close our eyes and still have a pretty good estimate of where we are,” she says. “This involves being able to update our estimate based on our sense of self-motion. There are also many computations in the brain that involve moving through abstract or mental rather than physical space, and integrating velocity signals of some variety or another. Some of the same ideas and even circuits for spatial navigation might be involved in navigating through these mental spaces.”
No better fit
Fiete spent her childhood between Mumbai, India, and the United States, where her mathematician father held a series of visiting or permanent appointments at the Institute for Advanced Study in Princeton, NJ, the University of California at Berkeley, and the University of Michigan at Ann Arbor.
In India, Fiete’s father did research at the Tata Institute of Fundamental Research, and she grew up spending time with many other children of academics. She was always interested in biology, but also enjoyed math, following in her father’s footsteps.
“My father was not a hands-on parent, wanting to teach me a lot of mathematics, or even asking me about how my math schoolwork was going, but the influence was definitely there. There’s a certain aesthetic to thinking mathematically, which I absorbed very indirectly,” she says. “My parents did not push me into academics, but I couldn’t help but be influenced by the environment.”
She spent her last two years of high school in Ann Arbor and then went to the University of Michigan, where she majored in math and physics. While there, she worked on undergraduate research projects, including two summer stints at Indiana University and the University of Virginia, which gave her firsthand experience in physics research. Those projects covered a range of topics, including proton radiation therapy, the magnetic properties of single crystal materials, and low-temperature physics.
“Those three experiences are what really made me sure that I wanted to go into academics,” Fiete says. “It definitely seemed like the path that I knew the best, and I think it also best suited my temperament. Even now, with more exposure to other fields, I cannot think of a better fit.”
Although she was still interested in biology, she took only one course in the subject in college, holding back because she did not know how to marry quantitative approaches with biological sciences. She began her graduate studies at Harvard University planning to study low-temperature physics, but while there, she decided to explore quantitative classes in biology. One of those was a systems biology course taught by then-MIT professor Sebastian Seung, which transformed her career trajectory.
“It was truly inspiring,” she recalls. “Thinking mathematically about interacting systems in biology was really exciting. It was really my first introduction to systems biology, and it had me hooked immediately.”
She ended up doing most of her PhD research in Seung’s lab at MIT, where she studied how the brain uses incoming signals of the velocity of head movement to control eye position. For example, if we want to keep our gaze fixed on a particular location while our head is moving, the brain must continuously calculate and adjust the amount of tension needed in the muscles surrounding the eyes, to compensate for the movement of the head.
“Bizarre” cells
After earning her PhD, Fiete and her husband, a theoretical physicist, went to the Kavli Institute for Theoretical Physics at the University of California at Santa Barbara, where they each held fellowships for independent research. While there, Fiete began working on a research topic that she still studies today — grid cells. These cells, located in the entorhinal cortex of the brain, enable us to navigate our surroundings by helping the brain to create a neural representation of space.
Midway through her position there, she learned of a new discovery, that when a rat moves across an open room, a grid cell in its brain fires at many different locations arranged geometrically in a regular pattern of repeating triangles. Together, a population of grid cells forms a lattice of triangles representing the entire room. These cells have also been found in the brains of various other mammals, including humans.
“It’s amazing. It’s this very crystalline response,” Fiete says. “When I read about that, I fell out of my chair. At that point I knew this was something bizarre that would generate so many questions about development, function, and brain circuitry that could be studied computationally.”
One question Fiete and others have investigated is why the brain needs grid cells at all, since it also has so-called place cells that each fire in one specific location in the environment. A possible explanation that Fiete has explored is that grid cells of different scales, working together, can represent a vast number of possible positions in space and also multiple dimensions of space.
“If you have a few cells that can parsimoniously generate a very large coding space, then you can afford to not use most of that coding space,” she says. “You can afford to waste most of it, which means you can separate things out very well, in which case it becomes not so susceptible to noise.”
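A toy calculation makes the point in the quote concrete. Treat each grid module as reporting position only modulo its own period; with a handful of modules whose periods are chosen for illustration (integer and coprime here, purely as an assumption), the combined code distinguishes as many positions as the product of the periods, far more than any single module resolves on its own.

```python
# Toy illustration (not from the papers) of the combinatorial capacity of a
# multi-scale grid code: each module reports position modulo its own period,
# and the joint code is unique over the product of the periods.
from math import prod

periods = [5, 7, 9, 11]                        # illustrative coprime module periods

def phases(position):
    """Phase of each module at an integer position."""
    return tuple(position % p for p in periods)

capacity = prod(periods)                        # 5 * 7 * 9 * 11 = 3465 positions
codes = {phases(x) for x in range(capacity)}
print(len(codes) == capacity)                   # True: every position gets a unique code
```

Adding a module multiplies the size of the coding space while only adding a module’s worth of cells, which is why most of that space can be left unused to buffer against noise, as the quote suggests.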
Since returning to MIT, she has also pursued a research theme related to what she explored in her PhD thesis — how the brain maintains neural representations of where the head is located in space. In a paper published last year, she and her colleagues showed that the brain generates a one-dimensional ring of neural activity that acts as a compass, allowing the brain to calculate the current direction of the head relative to the external world.
Her lab also studies cognitive flexibility — the brain’s ability to perform so many different types of cognitive tasks.
“How it is that we can repurpose the same circuits and flexibly use them to solve many different problems, and what are the neural codes that are amenable to that kind of reuse?” she says. “We’re also investigating the principles that allow the brain to hook multiple circuits together to solve new problems without a lot of reconfiguration.”
Even before MIT sent out its first official announcement about the COVID-19 crisis, I had already asked permission from my supervisor and taken my computer home so that I could start working from home.
My first and foremost concern was my family and friends. I was born and brought up in India, and then immigrated to Canada, so I have a big and wonderful family spread across both those countries. These countries had a lower number of COVID-19 cases at the time, but I could see what would be coming their way. I was anxious, very anxious. In India, my dad being an anesthetist could be exposed while working in the hospital. In Canada, my uncle who is a physician could be exposed, and on top of that he lives in the same house as my grandparents who are even more vulnerable due to their age. I knew I had to do something.
We started having regular video calls as a family. My mom even led daily online yoga sessions, and the discussions that followed those sessions ensured that we didn’t feel lonely and gave us a sense of purpose. Together, we looked at the statistics in the data from China and Italy, and learned that we needed to flatten the curve due to the lack of medical resources required to meet the need of the hour. We could foresee that more infections would lead to more patients, thus raising the demand for medical resources beyond the amount we had available.
We had several discussions around developing products for helping medical professionals and the general public during this pandemic.
We learned that since no government has enough resources to cope during a pandemic, we have to be innovative in trying to make the best use of the limited resources available to us.
Through our discussions and the experiences of some of us in the field, we came to the conclusion that the only way to effectively fight COVID-19 is prevention at the source. Hence, we started working on a mobile app that uses AI and advanced data analytics to trace contacts, determine the risk of infection, and thereby suggest precautions. Luckily we have engineers and computer scientists in our family (my own background is in electrical engineering), so it was easy for us to divide the work. In our prototype, when people sign up, they are asked to fill out a short self-assessment form that can be used to identify any symptoms of COVID-19. This data is then used to predict vulnerable areas and to give recommendations to people who might have taken a certain route.
We ended up submitting our proposal and prototype to the COVID-19 challenge launched by Vale (a global mining company) and the winners will be announced in May.
Personally, to be completely honest, I had my times when I broke down due to everything that was going on in the world around me. It’s not easy to see people dying and losing their jobs. My way of staying strong was to make sure that I was doing my best to contribute.
I have set up a beautiful home office for myself and I am focusing on my PhD research, being grateful that I can still continue to do it from home. I have also restarted the joint MIT-Harvard computational neuroscience journal club meetings online, so that members can get access to this wonderful community once again! It was amazing to see from a poll we conducted that 92% of the members of the club wanted the meetings to be re-started online.
These times are unprecedented for my generation, my mom’s generation and even for my grandmother’s generation. I have never seen the world come together in a way I have seen during this pandemic. The kind of response we have seen from our societies and governments across the globe shows that we can make intelligent decisions for the collective good of humanity. For once, we’re all on the same side!
Sugandha (Su) Sharma is a graduate student in the labs of Ila Fiete and Josh Tenenbaum. When she’s not developing a mobile app to fight COVID-19, Su explores the computational and theoretical principles underlying higher level cognition and intelligence in the human brain.
The world is constantly bombarding our senses with information, but the ways in which our brain extracts meaning from this information remain elusive. How do neurons transform raw visual input into a mental representation of an object – like a chair or a dog?
In work published today in Nature Neuroscience, MIT neuroscientists have identified a brain circuit in mice that distills “high-dimensional” complex information about the environment into a simple abstract object in the brain.
“There are no degree markings in the external world; our current head direction has to be extracted, computed, and estimated by the brain,” explains Ila Fiete, an associate member of the McGovern Institute and senior author of the paper. “The approaches we used allowed us to demonstrate the emergence of a low-dimensional concept, essentially an abstract compass in the brain.”
This abstract compass, according to the researchers, is a one-dimensional ring that represents the current direction of the head relative to the external world.
Schooling fish
Trying to show that a data cloud has a simple shape, like a ring, is a bit like watching a school of fish. By tracking one or two sardines, you might not see a pattern. But if you could map the whole school, transforming the noisy dataset into points that capture the positions of the school over time and where each fish sits relative to its neighbors, a pattern would emerge. This model would reveal a ring: a simple shape traced out by the movements of hundreds of individual fish.
Fiete, who is also an associate professor in MIT’s Department of Brain and Cognitive Sciences, used a similar approach, called topological modeling, to transform the activity of large populations of noisy neurons into a data cloud the shape of a ring.
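To get an intuition for why head-direction activity should trace out a ring at all, here is a toy simulation; it is not the paper’s method, which used more careful manifold and topological analyses, and the tuning curves, noise level, and cell count are assumptions. Cosine-tuned model neurons are driven by a wandering head direction, and projecting the noisy population activity onto its top two principal components reveals points lying on a closed loop of nearly constant radius.

```python
# Toy demonstration (not the paper's analysis): head-direction-tuned neurons
# produce population activity that lies on a one-dimensional ring.
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_samples = 60, 2000
preferred = np.linspace(0, 2 * np.pi, n_cells, endpoint=False)             # preferred directions

head_dir = np.cumsum(0.2 * rng.standard_normal(n_samples)) % (2 * np.pi)   # wandering head direction
rates = np.exp(2 * np.cos(head_dir[:, None] - preferred[None, :]))         # von Mises-like tuning
rates += 0.5 * rng.standard_normal(rates.shape)                            # measurement noise

# PCA via SVD of the mean-centered activity matrix
centered = rates - rates.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
proj = centered @ vt[:2].T                                                 # top-2 projection

radius = np.hypot(proj[:, 0], proj[:, 1])
print(f"radius spread (std/mean): {radius.std() / radius.mean():.2f}")     # small => thin ring
```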
Simple and persistent ring
Previous work in fly brains revealed a physical ellipsoid ring of neurons representing changes in the direction of the fly’s head, and researchers suspected that such a system might also exist in mammals.
In this new mouse study, Fiete and her colleagues measured hours of neural activity from scores of neurons in the anterodorsal thalamic nucleus (ADN) – a region believed to play a role in spatial navigation – as the animals moved freely around their environment. They mapped how the neurons in the ADN circuit fired as the animal’s head changed direction.
Together these data points formed a cloud in the shape of a simple and persistent ring.
“This tells us a lot about how neural networks are organized in the brain,” explains Edvard Moser, director of the Kavli Institute of Systems Neuroscience in Norway, who was not involved in the study. “Past data have indirectly pointed towards such a ring-like organization, but only now has it been possible, with the right cell numbers and methods, to demonstrate it convincingly.”
Their method for characterizing the shape of the data cloud allowed Fiete and colleagues to determine which variable the circuit was devoted to representing, and to decode this variable over time, using only the neural responses.
“The animal’s doing really complicated stuff,” explains Fiete, “but this circuit is devoted to integrating the animal’s speed along a one-dimensional compass that encodes head direction. Without a manifold approach, which captures the whole state space, you wouldn’t know that this circuit of thousands of neurons is encoding only this one aspect of the complex behavior, and not encoding any other variables at the same time.”
Even during sleep, when the circuit is not being bombarded with external information, this circuit robustly traces out the same one-dimensional ring, as if dreaming of past head direction trajectories.
Further analysis revealed that the ring acts as an attractor. If neurons stray off trajectory, they are drawn back to it, quickly correcting the system. This attractor property of the ring means that the representation of head direction in abstract space is reliably stable over time, a key requirement if we are to understand and maintain a stable sense of where our head is relative to the world around us.
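The error-correcting behavior described here can be seen in even a very small simulation. The sketch below uses illustrative parameters of my own choosing, not the circuit model from the paper: it builds a ring of rate units with cosine-shaped excitation and uniform inhibition, lets a bump of activity form, kicks the state off the ring with noise, and shows that the dynamics pull it back toward a bump, possibly at a shifted position.

```python
# Minimal ring-attractor sketch (illustrative parameters, not the paper's model):
# a bump of activity forms on a ring of rate neurons and recovers after a noisy kick.
import numpy as np

rng = np.random.default_rng(1)
N, dt, tau = 120, 0.1, 1.0
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
W = (6.0 * np.cos(theta[:, None] - theta[None, :]) - 2.0) / N   # local excitation, global inhibition
I0 = 1.0                                                        # uniform drive

def step(r, n_steps):
    for _ in range(n_steps):
        drive = np.maximum(W @ r + I0, 0)
        r = r + dt / tau * (-r + np.tanh(drive))                # saturating rate dynamics
    return r

def dist_to_ring(r, bump):
    """Distance from state r to the nearest rotation of the reference bump."""
    return min(np.linalg.norm(r - np.roll(bump, s)) for s in range(N))

bump = step(rng.random(N), 2000)                                # relax to a bump on the ring
kicked = np.clip(bump + 0.3 * rng.standard_normal(N), 0, None)  # knock activity off the ring
relaxed = step(kicked, 2000)

print(f"off-ring error right after the kick: {dist_to_ring(kicked, bump):.3f}")
print(f"off-ring error after relaxation:     {dist_to_ring(relaxed, bump):.3f}")
# Errors transverse to the ring shrink; a residual shift along the ring can remain,
# which is exactly the behavior of a continuous attractor.
```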
“In the absence of this ring,” Fiete explains, “we would be lost in the world.”
Shaping the future
Fiete’s work provides a first glimpse into how complex sensory information is distilled into a simple concept in the mind, and how that representation autonomously corrects errors, making it exquisitely stable.
But the implications of this study go beyond coding of head direction.
“Similar organization is probably present for other cognitive functions so the paper is likely to inspire numerous new studies,” says Moser.
Fiete sees these analyses and related studies carried out by colleagues at the Norwegian University of Science and Technology, Princeton University, the Weizmann Institute, and elsewhere as fundamental to the future of neural decoding studies.
With this approach, she explains, it is possible to extract abstract representations of the mind from the brain, potentially even thoughts and dreams.
“We’ve found that the brain deconstructs and represents complex things in the world with simple shapes,” explains Fiete. “Manifold-level analysis can help us to find those shapes, and they almost certainly exist beyond head direction circuits.”
Ila Fiete, an associate professor in the Department of Brain and Cognitive Sciences at MIT, recently joined the McGovern Institute as an associate investigator. Fiete is working to understand the circuits that underlie short-term memory, integration, and inference in the brain.
Think about the simple act of visiting a new town and getting to know its layout as you explore it. What places are reachable from others? Where are landmarks relative to each other? Where are you relative to these landmarks? How do you get from here to where you want to go next?
The process that occurs as your brain tries to transform the few routes you follow into a coherent map of the world is just one of myriad examples of hard computations that the brain is constantly performing. Fiete’s goal is to understand how the brain is able to carry out such computations, and she is developing and using multiple tools to this end. These include purely theoretical analyses of neural codes, numerical dynamical models of circuit operation, and techniques to extract information about the underlying circuit dynamics from neural data.
Spatial navigation is a particularly interesting nut to crack from a neural perspective: The mapping devices on your phone have access to global satellite data, previously constructed detailed maps of the town, various additional sensors, and excellent non-leaky memory. By contrast, the brain must build maps, plan routes, and determine goals all with noisy, local sensors, no externally provided maps, and noisy, forgetful, or leaky neurons. Fiete is particularly interested in elucidating how the brain resolves noisy and ambiguous cues to arrive at robust estimates about the world. She is also interested in how the networks that support memory and integration arise through plasticity, learning, and development in the brain.
Fiete earned a BS in mathematics and physics at the University of Michigan then obtained her PhD in 2004 at Harvard University in the Department of Physics. She held a postdoctoral appointment at the Kavli Institute for Theoretical Physics at the University of California, Santa Barbara from 2004 to 2006, while she was also a visiting member of the Center for Theoretical Biophysics at the University of California at San Diego. Fiete subsequently spent two years at Caltech as a Broad Fellow in brain circuitry, and in 2008 joined the faculty of the University of Texas at Austin. She is currently an HHMI faculty scholar.
Ila Fiete builds tools and mathematical models to expand our knowledge of the brain’s computations. Specifically, her lab focuses on how the brain develops and reshapes its neural connections to perform high-level computations, like those involved in memory and learning. The Fiete lab applies cutting-edge theoretical and quantitative methods, wielding computational models informed by mathematics, machine learning, and physics, to dig deeper into how the brain represents and manipulates information. Through these strategies, Fiete hopes to shed new light on the neural ensembles behind learning, integration of new information, inference-making, and spatial navigation.
Her lab’s findings are pushing the frontiers of neuroscience, advancing the utility of computational tools in the field, and building a more robust understanding of complex brain processes.
Biography
Ila Fiete is a professor of brain and cognitive sciences, associate member of the McGovern Institute, and director of the K. Lisa Yang ICoN Center at MIT. Fiete earned a BS in mathematics and physics at the University of Michigan, obtaining her PhD in physics at Harvard University in 2004. She conducted her postdoctoral work at the Kavli Institute for Theoretical Physics at the University of California, Santa Barbara while she was also a visiting member of the Center for Theoretical Biophysics at the University of California, San Diego. Fiete subsequently spent two years at Caltech as a Broad Fellow in brain circuitry, then joined the faculty of the University of Texas at Austin before coming to MIT in 2019.
Honors and Awards
Honors
2015 – Advisory Board Member, Kavli Institute for Theoretical Physics
Awards
2022 – Swartz Prize for Theoretical and Computational Neuroscience, Society for Neuroscience
2016 – Faculty Scholar Award, Howard Hughes Medical Institute
2013 – Teaching Excellence Award, College of Natural Sciences, University of Texas at Austin
2013 – Young Investigator Award, Office of Naval Research