The brain may learn about the world the same way some computational models do

To make our way through the world, the brain must develop an intuitive understanding of the physical world around us, which it then uses to interpret incoming sensory information.

How does the brain develop that intuitive understanding? Many scientists believe that it may use a process similar to what’s known as “self-supervised learning.” This type of machine learning, originally developed as a way to create more efficient models for computer vision, allows computational models to learn about visual scenes based solely on the similarities and differences between them, with no labels or other information.

A pair of studies from researchers at the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center at MIT offers new evidence supporting this hypothesis. The researchers found that when they trained models known as neural networks using a particular type of self-supervised learning, the resulting models generated activity patterns very similar to those seen in the brains of animals that were performing the same tasks as the models.

The findings suggest that these models are able to learn representations of the physical world that they can use to make accurate predictions about what will happen in that world, and that the mammalian brain may be using the same strategy, the researchers say.

“The theme of our work is that AI designed to help build better robots ends up also being a framework to better understand the brain more generally,” says Aran Nayebi, a postdoc in the ICoN Center. “We can’t say if it’s the whole brain yet, but across scales and disparate brain areas, our results seem to be suggestive of an organizing principle.”

Nayebi is the lead author of one of the studies, co-authored with Rishi Rajalingham, a former MIT postdoc now at Meta Reality Labs, and senior authors Mehrdad Jazayeri, an associate professor of brain and cognitive sciences and a member of the McGovern Institute for Brain Research; and Robert Yang, an assistant professor of brain and cognitive sciences and an associate member of the McGovern Institute. Ila Fiete, director of the ICoN Center, a professor of brain and cognitive sciences, and an associate member of the McGovern Institute, is the senior author of the other study, which was co-led by Mikail Khona, an MIT graduate student, and Rylan Schaeffer, a former senior research associate at MIT.

Both studies will be presented at the 2023 Conference on Neural Information Processing Systems (NeurIPS) in December.

Modeling the physical world

Early models of computer vision mainly relied on supervised learning. Using this approach, models are trained to classify images that are each labeled with a name — cat, car, etc. The resulting models work well, but this type of training requires a great deal of human-labeled data.

To create a more efficient alternative, in recent years researchers have turned to models built through a technique known as contrastive self-supervised learning. This type of learning allows an algorithm to learn to classify objects based on how similar they are to each other, with no external labels provided.
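
The core idea can be sketched in a few lines. The snippet below is a generic, illustrative contrastive objective (an InfoNCE-style loss, not the specific objective used in these studies): codes for two views of the same input are pulled together, while codes for unrelated inputs are pushed apart.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style loss: small when the anchor's code is close to the
    positive (e.g., another view of the same scene) and far from the
    negatives (codes of unrelated scenes)."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    exps = [math.exp(s / temperature) for s in sims]
    return -math.log(exps[0] / sum(exps))

# Hypothetical codes for two views of the same scene, plus an unrelated one.
view_a = [0.9, 0.1, 0.0]
view_b = [0.8, 0.2, 0.1]   # similar code -> low loss
other  = [0.0, 0.1, 0.9]   # dissimilar code -> pushed away

loss_aligned = contrastive_loss(view_a, view_b, [other])
loss_swapped = contrastive_loss(view_a, other, [view_b])
print(loss_aligned < loss_swapped)  # aligned pairs give the lower loss
```

During training, gradients of a loss like this one adjust the network so that matched views map to nearby codes, with no labels ever provided.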

“This is a very powerful method because you can now leverage very large modern data sets, especially videos, and really unlock their potential,” Nayebi says. “A lot of the modern AI that you see now, especially in the last couple years with ChatGPT and GPT-4, is a result of training a self-supervised objective function on a large-scale dataset to obtain a very flexible representation.”

These types of models, also called neural networks, consist of thousands or millions of processing units connected to each other. Each node has connections of varying strengths to other nodes in the network. As the network analyzes huge amounts of data, the strengths of those connections change as the network learns to perform the desired task.
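
As a toy illustration of that learning process, reduced to a single connection where real networks adjust millions, a connection strength can be nudged repeatedly to shrink prediction error:

```python
import random

# One "unit" with a single connection whose strength (weight) is
# adjusted as the network sees data: a toy version of learning.
random.seed(0)
weight = random.random()                    # initial connection strength
data = [(x, 2.0 * x) for x in range(1, 6)]  # target relation: y = 2x
lr = 0.01                                   # learning rate

for _ in range(200):              # repeated exposure to the data
    for x, y in data:
        prediction = weight * x
        error = prediction - y
        weight -= lr * error * x  # nudge the connection to reduce error

print(round(weight, 3))  # close to 2.0
```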

As the model performs a particular task, the activity patterns of different units within the network can be measured. Each unit’s activity can be represented as a firing pattern, similar to the firing patterns of neurons in the brain. Previous work from Nayebi and others has shown that self-supervised models of vision generate activity similar to that seen in the visual processing system of mammalian brains.

In both of the new NeurIPS studies, the researchers set out to explore whether self-supervised computational models of other cognitive functions might also show similarities to the mammalian brain. In the study led by Nayebi, the researchers trained self-supervised models to predict the future state of their environment across hundreds of thousands of naturalistic videos depicting everyday scenarios.

“For the last decade or so, the dominant method to build neural network models in cognitive neuroscience is to train these networks on individual cognitive tasks. But models trained this way rarely generalize to other tasks,” Yang says. “Here we test whether we can build models for some aspect of cognition by first training on naturalistic data using self-supervised learning, then evaluating in lab settings.”

Once the model was trained, the researchers had it generalize to a task they call “Mental-Pong.” This is similar to the video game Pong, in which a player moves a paddle to hit a ball traveling across the screen. In the Mental-Pong version, the ball disappears shortly before reaching the paddle, so the player has to estimate its trajectory in order to intercept it.
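
The study's model learns this behavior from video, but the computation the task demands can be sketched with explicit physics. The constant-velocity bounce rule below is an illustrative assumption, not the paper's model:

```python
def simulate_ball(x, y, vx, vy, steps, height=10.0):
    """Advance a ball with constant speed, reflecting it off the top
    and bottom walls (y stays within [0, height])."""
    for _ in range(steps):
        x += vx
        y += vy
        if y < 0:
            y, vy = -y, -vy
        elif y > height:
            y, vy = 2 * height - y, -vy
    return x, y

# The ball is visible for a few steps, then vanishes; to hit it, the
# player must keep simulating its motion past the last visible frame.
predicted = simulate_ball(0.0, 5.0, vx=1.0, vy=0.8, steps=12)
print(round(predicted[0], 1), round(predicted[1], 1))  # 12.0 5.4
```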

The researchers found that the model was able to track the hidden ball’s trajectory with accuracy similar to that of neurons in the mammalian brain, which had been shown in a previous study by Rajalingham and Jazayeri to simulate its trajectory — a cognitive phenomenon known as “mental simulation.” Furthermore, the neural activation patterns seen within the model were similar to those seen in the brains of animals as they played the game — specifically, in a part of the brain called the dorsomedial frontal cortex. No other class of computational model has been able to match the biological data as closely as this one, the researchers say.

“There are many efforts in the machine learning community to create artificial intelligence,” Jazayeri says. “The relevance of these models to neurobiology hinges on their ability to additionally capture the inner workings of the brain. The fact that Aran’s model predicts neural data is really important as it suggests that we may be getting closer to building artificial systems that emulate natural intelligence.”

Navigating the world

The study led by Khona, Schaeffer, and Fiete focused on a type of specialized neurons known as grid cells. These cells, located in the entorhinal cortex, help animals to navigate, working together with place cells located in the hippocampus.

While place cells fire whenever an animal is in a specific location, grid cells fire only when the animal is at one of the vertices of a triangular lattice. Groups of grid cells create overlapping lattices of different sizes, which allows them to encode a large number of positions using a relatively small number of cells.
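
That capacity argument can be illustrated with a toy modular code. Real grid cells are not literal modular counters, but the combinatorics are analogous: a handful of periodic "modules" jointly distinguish far more positions than any one of them could alone.

```python
from math import lcm

# Toy grid-like code: each "module" reports position only modulo its
# own period, like a grid cell firing periodically across space.
periods = [3, 5, 7]  # lattice periods of three hypothetical modules

def grid_code(position):
    return tuple(position % p for p in periods)

# With just 3 + 5 + 7 = 15 distinct phases, the combined code stays
# unique over lcm(3, 5, 7) = 105 positions.
codes = [grid_code(x) for x in range(lcm(*periods))]
print(len(set(codes)))  # 105: every position gets a unique code
```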

In recent studies, researchers have trained supervised neural networks to mimic grid cell function by predicting an animal’s next location based on its starting point and velocity, a task known as path integration. However, these models hinged on access to privileged information about absolute space at all times — information that the animal does not have.
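
Path integration itself is a simple computation: the animal (or model) accumulates its velocity over time to update an estimate of position, with no access to absolute coordinates. A minimal sketch:

```python
def path_integrate(start, velocities, dt=1.0):
    """Estimate position by accumulating velocity over time -- the
    path integration computation, using no absolute position signal."""
    x, y = start
    for vx, vy in velocities:
        x += vx * dt
        y += vy * dt
    return (x, y)

# Starting at the origin, moving east for two steps and then north:
print(path_integrate((0.0, 0.0), [(1.0, 0.0), (1.0, 0.0), (0.0, 2.0)]))  # (2.0, 2.0)
```

The hard part, which the models address, is doing this accurately with noisy velocity signals and a compact neural code; errors in plain accumulation compound over time.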

Inspired by the striking coding properties of the multiperiodic grid-cell code for space, the MIT team trained a contrastive self-supervised model to both perform this same path integration task and represent space efficiently while doing so. For the training data, they used sequences of velocity inputs. The model learned to distinguish positions based on whether they were similar or different: nearby positions generated similar codes, while more distant positions generated increasingly dissimilar codes.
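
One way to picture the resulting training signal is as labels over pairs of positions: nearby pairs should attract in code space and distant pairs should repel. The distance threshold below is an arbitrary illustrative choice, not a parameter from the study.

```python
def pair_label(pos_a, pos_b, radius=1.0):
    """Label a pair of positions for a contrastive objective:
    'attract' if they lie within `radius` of each other (their codes
    should be similar), otherwise 'repel' (codes should differ)."""
    dist = sum((a - b) ** 2 for a, b in zip(pos_a, pos_b)) ** 0.5
    return "attract" if dist <= radius else "repel"

print(pair_label((0.0, 0.0), (0.5, 0.5)))  # attract (nearby positions)
print(pair_label((0.0, 0.0), (4.0, 3.0)))  # repel (far apart)
```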

“It’s similar to training models on images, where if two images are both heads of cats, their codes should be similar, but if one is the head of a cat and one is a truck, then you want their codes to repel,” Khona says. “We’re taking that same idea but applying it to spatial trajectories.”

Once the model was trained, the researchers found that the activation patterns of the nodes within the model formed several lattice patterns with different periods, very similar to those formed by grid cells in the brain.

“What excites me about this work is that it makes connections between mathematical work on the striking information-theoretic properties of the grid cell code and the computation of path integration,” Fiete says. “While the mathematical work was analytic — what properties does the grid cell code possess? — the approach of optimizing coding efficiency through self-supervised learning and obtaining grid-like tuning is synthetic: It shows what properties might be necessary and sufficient to explain why the brain has grid cells.”

The research was funded by the K. Lisa Yang ICoN Center, the National Institutes of Health, the Simons Foundation, the McKnight Foundation, the McGovern Institute, and the Helen Hay Whitney Foundation.

School of Science presents 2023 Infinite Expansion Awards

The MIT School of Science has announced seven postdocs and research scientists as recipients of the 2023 Infinite Expansion Award. Nominated by their peers and mentors, the awardees are recognized not only for their exceptional science, but for mentoring and advising junior colleagues, supporting educational programs, working with the MIT Postdoctoral Association, or contributing in some other way to the Institute.

The 2023 Infinite Expansion award winners in the School of Science are:

  • Kyle Jenks, a postdoc in the Picower Institute for Learning and Memory, nominated by professor and Picower Institute investigator Mriganka Sur;
  • Matheus Victor, a postdoc in the Picower Institute, nominated by professor and Picower Institute director Li-Huei Tsai.

A monetary award is granted to recipients, and a celebratory reception will be held for the winners this spring with their family, friends, and nominators.

Understanding reality through algorithms

Although Fernanda De La Torre still has several years left in her graduate studies, she’s already dreaming big when it comes to what the future has in store for her.

“I dream of opening up a school one day where I could bring this world of understanding of cognition and perception into places that would never have contact with this,” she says.

It’s that kind of ambitious thinking that’s gotten De La Torre, a doctoral student in MIT’s Department of Brain and Cognitive Sciences, to this point. A recent recipient of the prestigious Paul and Daisy Soros Fellowship for New Americans, De La Torre has found at MIT a supportive, creative research environment that’s allowed her to delve into the cutting-edge science of artificial intelligence. But she’s still driven by an innate curiosity about human imagination and a desire to bring that knowledge to the communities in which she grew up.

An unconventional path to neuroscience

De La Torre’s first exposure to neuroscience wasn’t in the classroom, but in her daily life. As a child, she watched her younger sister struggle with epilepsy. At 12, she crossed into the United States from Mexico illegally to reunite with her mother, a move that exposed her to a whole new language and culture. Once in the States, she had to grapple with her mother’s shifting personality in the midst of an abusive relationship. “All of these different things I was seeing around me drove me to want to better understand how psychology works,” De La Torre says, “to understand how the mind works, and how it is that we can all be in the same environment and feel very different things.”

But finding an outlet for that intellectual curiosity was challenging. As an undocumented immigrant, her access to financial aid was limited. Her high school was also underfunded and lacked elective options. Mentors along the way, though, encouraged the aspiring scientist, and through a program at her school, she was able to take community college courses to fulfill basic educational requirements.

It took an inspiring amount of dedication to her education, but De La Torre made it to Kansas State University for her undergraduate studies, where she majored in computer science and math. At Kansas State, she was able to get her first real taste of research. “I was just fascinated by the questions they were asking and this entire space I hadn’t encountered,” says De La Torre of her experience working in a visual cognition lab and discovering the field of computational neuroscience.

Although Kansas State didn’t have a dedicated neuroscience program, her research experience in cognition led her to a machine learning lab led by William Hsu, a computer science professor. There, De La Torre became enamored by the possibilities of using computation to model the human brain. Hsu’s support also convinced her that a scientific career was a possibility. “He always made me feel like I was capable of tackling big questions,” she says fondly.

With the confidence imparted in her at Kansas State, De La Torre came to MIT in 2019 as a post-baccalaureate student in the lab of Tomaso Poggio, the Eugene McDermott Professor of Brain and Cognitive Sciences and an investigator at the McGovern Institute for Brain Research. With Poggio, also the director of the Center for Brains, Minds and Machines, De La Torre began working on deep-learning theory, an area of machine learning focused on how artificial neural networks modeled on the brain learn to recognize patterns.

“It’s a very interesting question because we’re starting to use them everywhere,” says De La Torre of neural networks, listing off examples from self-driving cars to medicine. “But, at the same time, we don’t fully understand how these networks can go from knowing nothing and just being a bunch of numbers to outputting things that make sense.”

Her experience as a post-bac was De La Torre’s first real opportunity to apply the technical computer skills she developed as an undergraduate to neuroscience. It was also the first time she could fully focus on research. “That was the first time that I had access to health insurance and a stable salary. That was, in itself, sort of life-changing,” she says. “But on the research side, it was very intimidating at first. I was anxious, and I wasn’t sure that I belonged here.”

Fortunately, De La Torre says she was able to overcome those insecurities, both through a growing unabashed enthusiasm for the field and through the support of Poggio and her other colleagues in MIT’s Department of Brain and Cognitive Sciences. When the opportunity came to apply to the department’s PhD program, she jumped on it. “It was just knowing these kinds of mentors are here and that they cared about their students,” says De La Torre of her decision to stay on at MIT for graduate studies. “That was really meaningful.”

Expanding notions of reality and imagination

In her two years so far in the graduate program, De La Torre’s work has expanded the understanding of neural networks and their applications to the study of the human brain. Working with Guangyu Robert Yang, an associate investigator at the McGovern Institute and an assistant professor in the departments of Brain and Cognitive Sciences and Electrical Engineering and Computer Science, she’s engaged in what she describes as more philosophical questions about how one develops a sense of self as an independent being. She’s interested in how that self-consciousness develops and why it might be useful.

De La Torre’s primary advisor, though, is Professor Josh McDermott, who leads the Laboratory for Computational Audition. With McDermott, De La Torre is attempting to understand how the brain integrates vision and sound. While combining sensory inputs may seem like a basic process, there are many unanswered questions about how our brains combine multiple signals into a coherent impression, or percept, of the world. Many of the questions are raised by audiovisual illusions in which what we hear changes what we see. For example, if one sees a video of two discs passing each other, but the clip contains the sound of a collision, the brain will perceive that the discs are bouncing off, rather than passing through each other. Given an ambiguous image, that simple auditory cue is all it takes to create a different perception of reality.
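
A standard ideal-observer account of this kind of integration, included here as background rather than as the lab's model, weights each cue by its reliability (inverse variance), so a precise cue dominates a noisy one:

```python
def combine_cues(est_a, var_a, est_b, var_b):
    """Reliability-weighted (inverse-variance) average of two sensory
    estimates: the textbook ideal-observer model of cue combination."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)
    combined = w_a * est_a + (1 - w_a) * est_b
    combined_var = 1 / (1 / var_a + 1 / var_b)
    return combined, combined_var

# A precise visual estimate dominates a noisy auditory one
# (all numbers here are hypothetical):
percept, var = combine_cues(est_a=10.0, var_a=1.0, est_b=14.0, var_b=4.0)
print(round(percept, 3))  # 10.8, pulled only slightly toward the auditory cue
```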

“There’s something interesting happening where our brains are receiving two signals telling us different things and, yet, we have to combine them somehow to make sense of the world,” she says.

De La Torre is using behavioral experiments to probe how the human brain makes sense of multisensory cues to construct a particular perception. To do so, she’s created various scenes of objects interacting in 3D space over different sounds, asking research participants to describe characteristics of the scene. For example, in one experiment, she combines visuals of a block moving across a surface at different speeds with various scraping sounds, asking participants to estimate how rough the surface is. Eventually she hopes to take the experiment into virtual reality, where participants will physically push blocks in response to how rough they perceive the surface to be, rather than just reporting on what they experience.

Once she’s collected data, she’ll move into the modeling phase of the research, evaluating whether multisensory neural networks perceive illusions the way humans do. “What we want to do is model exactly what’s happening,” says De La Torre. “How is it that we’re receiving these two signals, integrating them and, at the same time, using all of our prior knowledge and inferences of physics to really make sense of the world?”

Although her two strands of research with Yang and McDermott may seem distinct, she sees clear connections between the two. Both projects are about grasping what artificial neural networks are capable of and what they tell us about the brain. At a more fundamental level, she says that how the brain perceives the world from different sensory cues might be part of what gives people a sense of self. Sensory perception is about constructing a cohesive, unitary sense of the world from multiple sources of sensory data. Similarly, she argues, “the sense of self is really a combination of actions, plans, goals, emotions, all of these different things that are components of their own, but somehow create a unitary being.”

It’s a fitting sentiment for De La Torre, who has been working to make sense of and integrate different aspects of her own life. Working in the Computational Audition lab, for example, she’s started experimenting with combining electronic music with folk music from her native Mexico, connecting her “two worlds,” as she says. Having the space to undertake those kinds of intellectual explorations, and colleagues who encourage it, is one of De La Torre’s favorite parts of MIT.

“Beyond professors, there’s also a lot of students whose way of thinking just amazes me,” she says. “I see a lot of goodness and excitement for science and a little bit of — it’s not nerdiness, but a love for very niche things — and I just kind of love that.”

Lindsay Case and Guangyu Robert Yang named 2022 Searle Scholars

MIT cell biologist Lindsay Case and computational neuroscientist Guangyu Robert Yang have been named 2022 Searle Scholars, an award given annually to 15 outstanding U.S. assistant professors who have high potential for ongoing innovative research contributions in medicine, chemistry, or the biological sciences.

Case is an assistant professor of biology, while Yang is an assistant professor of brain and cognitive sciences and electrical engineering and computer science, and an associate investigator at the McGovern Institute for Brain Research. They will each receive $300,000 in flexible funding to support their high-risk, high-reward work over the next three years.

Lindsay Case

Case arrived at MIT in 2021, after completing a postdoc at the University of Texas Southwestern Medical Center in the lab of Michael Rosen. Prior to that, she earned her PhD from the University of North Carolina at Chapel Hill, working in the lab of Clare Waterman at the National Heart, Lung, and Blood Institute.

Situated in MIT’s Building 68, Case’s lab studies how molecules within cells organize themselves, and how such organization begets cellular function. Oftentimes, molecules will assemble at the cell’s plasma membrane — a complex signaling platform where hundreds of receptors sense information from outside the cell and initiate cellular changes in response. Through her experiments, Case has found that molecules at the plasma membrane can undergo a process known as phase separation, condensing to form liquid-like droplets.

As a Searle Scholar, Case is investigating the role that phase separation plays in regulating a specific class of signaling molecules called kinases. Her team will take a multidisciplinary approach to probe what happens when kinases phase separate into signaling clusters, and what cellular changes occur as a result. Because phase separation is emerging as a promising new target for small molecule therapies, this work will help identify kinases that are strong candidates for new therapeutic interventions to treat diseases such as cancer.

“I am honored to be recognized by the Searle Scholars Program, and thrilled to join such an incredible community of scientists,” Case says. “This support will enable my group to broaden our research efforts and take our preliminary findings in exciting new directions. I look forward to better understanding how phase separation impacts cellular function.”

Guangyu Robert Yang

Before coming to MIT in 2021, Yang trained in physics at Peking University, obtained a PhD in computational neuroscience at New York University with Xiao-Jing Wang, and further trained as a postdoc at the Center for Theoretical Neuroscience of Columbia University, as an intern at Google Brain, and as a junior fellow at the Simons Society of Fellows.

His research team at MIT, the MetaConscious Group, develops models of mental functions by incorporating multiple interacting modules. They are designing pipelines to process and compare large-scale experimental datasets that span modalities ranging from behavioral data to neural activity data to molecular data. These datasets are then integrated to train individual computational modules based on the experimental tasks that were evaluated, such as vision, memory, or movement.

Ultimately, Yang seeks to combine these modules into a “network of networks” that models higher-level brain functions such as the ability to flexibly and rapidly learn a variety of tasks. Such integrative models are rare because, until recently, it was not possible to acquire data that spans modalities and brain regions in real time as animals perform tasks. The time is finally right for integrative network models. Computational models that incorporate such multisystem, multilevel datasets will allow scientists to make new predictions about the neural basis of cognition and open a window to a mathematical understanding of the mind.

“This is a new research direction for me, and I think for the field too. It comes with many exciting opportunities as well as challenges. Having this recognition from the Searle Scholars program really gives me extra courage to take on the uncertainties and challenges,” says Yang.

Since 1981, 647 scientists have been named Searle Scholars. Including this year, the program has awarded more than $147 million. Eighty-five Searle Scholars have been inducted into the National Academy of Sciences. Twenty scholars have been recognized with a MacArthur Fellowship, known as the “genius grant,” and two Searle Scholars have been awarded the Nobel Prize in Chemistry. The Searle Scholars Program is funded through the Searle Funds at The Chicago Community Trust and administered by Kinship Foundation.

Three from MIT awarded 2022 Paul and Daisy Soros Fellowships for New Americans

MIT graduate student Fernanda De La Torre, alumna Trang Luu ’18, SM ’20, and senior Syamantak Payra are recipients of the 2022 Paul and Daisy Soros Fellowships for New Americans.

De La Torre, Luu, and Payra are among 30 New Americans selected from a pool of over 1,800 applicants. The fellowship honors the contributions of immigrants and children of immigrants by providing $90,000 in funding for graduate school.

Students interested in applying to the P.D. Soros Fellowship for future years may contact Kim Benard, associate dean of distinguished fellowships in Career Advising and Professional Development.

Fernanda De La Torre

Fernanda De La Torre is a PhD student in the Department of Brain and Cognitive Sciences. With Professor Josh McDermott, she studies how we integrate vision and sound, and with Professor Robert Yang, she develops computational models of imagination.

De La Torre spent her early childhood with her younger sister and grandmother in Guadalajara, Mexico. At age 12, she crossed the Mexican border to reunite with her mother in Kansas City, Missouri. Shortly after, an abusive home environment forced De La Torre to leave her family and support herself throughout her early teens.

Despite her difficult circumstances, De La Torre excelled academically in high school. By winning various scholarships that would discreetly take applications from undocumented students, she was able to continue her studies in computer science and mathematics at Kansas State University. There, she became intrigued by the mysteries of the human mind. During college, De La Torre received invaluable mentorship from her former high school principal, Thomas Herrera, who helped her become documented through the Violence Against Women Act. Her college professor, William Hsu, supported her interests in artificial intelligence and encouraged her to pursue a scientific career.

After her undergraduate studies, De La Torre won a post-baccalaureate fellowship from the Department of Brain and Cognitive Sciences at MIT, where she worked with Professor Tomaso Poggio on the theory of deep learning. She then transitioned into the department’s PhD program. Beyond contributing to scientific knowledge, De La Torre plans to use science to create spaces where all people, including those from backgrounds like her own, can innovate and thrive.

She says: “Immigrants face many obstacles, but overcoming them gives us a unique strength: We learn to become resilient, while relying on friends and mentors. These experiences foster both the desire and the ability to pay it forward to our community.”

Trang Luu

Trang Luu graduated from MIT with a BS in mechanical engineering in 2018, and a master of engineering degree in 2020. Her Soros award will support her graduate studies at Harvard University in the MBA/MS engineering sciences program.

Born in Saigon, Vietnam, Luu was 3 when her family immigrated to Houston, Texas. Watching her parents’ efforts to make a living in a land where they did not understand the culture or speak the language well, Luu wanted to alleviate hardship for her family. She took full responsibility for her education and found mentors to help her navigate the American education system. At home, she assisted her family in making and repairing household items, which fueled her excitement for engineering.

As an MIT undergraduate, Luu focused on assistive technology projects, applying her engineering background to solve problems impeding daily living. These projects included a new adaptive socket liner for below-the-knee amputees in Kenya, Ethiopia, and Thailand; a walking stick adapter for wheelchairs; a computer head pointer for patients with limited arm mobility; a safer makeshift cook stove design for street vendors in South Africa; and a quicker method to test new drip irrigation designs. As a graduate student in MIT D-Lab under the direction of Professor Daniel Frey, Luu was awarded a National Science Foundation Graduate Research Fellowship. In her graduate studies, Luu researched methods to improve evaporative cooling devices for off-grid farmers to reduce rapid fruit and vegetable deterioration.

These projects strengthened Luu’s commitment to innovating new technology and devices for people struggling with basic daily tasks. During her senior year, Luu collaborated on developing a working prototype of a wearable device that noninvasively reduces hand tremors associated with Parkinson’s disease or essential tremor. Observing patients’ joy after their tremors stopped compelled Luu and three co-founders to continue developing the device after college. Four years later, Encora Therapeutics has accomplished major milestones, including Breakthrough Device designation by the U.S. Food and Drug Administration.

Syamantak Payra

Hailing from Houston, Texas, Syamantak Payra is a senior majoring in electrical engineering and computer science, with minors in public policy and entrepreneurship and innovation. He will be pursuing a PhD in engineering at Stanford University, with the goal of creating new biomedical devices that can help improve daily life for patients worldwide and enhance health care outcomes for decades to come.

Payra’s parents had emigrated from India, and he grew up immersed in his grandparents’ rich Bengali culture. As a high school student, he conducted projects with NASA engineers at Johnson Space Center, experimented at home with his scientist parents, and competed in spelling bees and science fairs across the United States. Through these avenues and activities, Payra not only gained perspectives on bridging gaps between people, but also found passions for language, scientific discovery, and teaching others.

After watching his grandmother struggle with asthma and chronic obstructive pulmonary disease and losing his baby brother to brain cancer, Payra devoted himself to trying to use technology to solve health-care challenges. Payra’s proudest accomplishments include building a robotic leg brace for his paralyzed teacher and conducting free literacy workshops and STEM outreach programs that reached nearly a thousand underprivileged students across the Greater Houston Area.

At MIT, Payra has worked in Professor Yoel Fink’s research laboratory, creating digital sensor fibers that have been woven into intelligent garments that can assist in diagnosing illnesses, and in Professor Joseph Paradiso’s research laboratory, where he contributed to next-generation spacesuit prototypes that better protect astronauts on spacewalks. Payra’s research has been published in multiple scientific journals, and he was inducted into the National Gallery for America’s Young Inventors.

Data transformed

With the tools of modern neuroscience, data accumulates quickly. Recording devices listen in on the electrical conversations between neurons, picking up the voices of hundreds of cells at a time. Microscopes zoom in to illuminate the brain’s circuitry, capturing thousands of images of cells’ elaborately branched paths. Functional MRIs detect changes in blood flow to map activity within a person’s brain, generating a complete picture by compiling hundreds of scans.

“When I entered neuroscience about 20 years ago, data were extremely precious, and ideas, as the expression went, were cheap. That’s no longer true,” says McGovern Associate Investigator Ila Fiete. “We have an embarrassment of wealth in the data but lack sufficient conceptual and mathematical scaffolds to understand it.”

Fiete will lead the McGovern Institute’s new K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center, whose scientists will create mathematical models and other computational tools to confront the current deluge of data and advance our understanding of the brain and mental health. The center, funded by a $24 million donation from philanthropist Lisa Yang, will take a uniquely collaborative approach to computational neuroscience, integrating data from MIT labs to explain brain function at every level, from the molecular to the behavioral.

“Driven by technologies that generate massive amounts of data, we are entering a new era of translational neuroscience research,” says Yang, whose philanthropic investment in MIT research now exceeds $130 million. “I am confident that the multidisciplinary expertise convened by this center will revolutionize how we synthesize this data and ultimately understand the brain in health and disease.”

Data integration

Fiete says computation is particularly crucial to neuroscience because the brain is so staggeringly complex. Its billions of neurons, which are themselves complicated and diverse, interact with one another through trillions of connections.

“Conceptually, it’s clear that all these interactions are going to lead to pretty complex things. And these are not going to be things that we can explain in stories that we tell,” Fiete says. “We really will need mathematical models. They will allow us to ask about what changes when we perturb one or several components — greatly accelerating the rate of discovery relative to doing those experiments in real brains.”

By representing the interactions between the components of a neural circuit, a model gives researchers the power to explore those interactions, manipulate them, and predict the circuit’s behavior under different conditions.

“You can observe these neurons in the same way that you would observe real neurons. But you can do even more, because you have access to all the neurons and you have access to all the connections and everything in the network,” explains computational neuroscientist and McGovern Associate Investigator Guangyu Robert Yang (no relation to Lisa Yang), who joined MIT as a junior faculty member in July 2021.

Many neuroscience models represent specific functions or parts of the brain. But with advances in computation and machine learning, along with the widespread availability of experimental data with which to test and refine models, “there’s no reason that we should be limited to that,” he says.

Robert Yang’s team at the McGovern Institute is working to develop models that integrate multiple brain areas and functions. “The brain is not just about vision, just about cognition, just about motor control,” he says. “It’s about all of these things. And all these areas, they talk to one another.” Likewise, he notes, it’s impossible to separate the molecules in the brain from their effects on behavior – although those aspects of neuroscience have traditionally been studied independently, by researchers with vastly different expertise.

The ICoN Center will eliminate these divides, bringing together neuroscientists and software engineers to deal with all types of data about the brain. To foster interdisciplinary collaboration, every postdoctoral fellow and engineer at the center will work with multiple faculty mentors. Working in three closely interacting scientific cores, fellows will develop computational technologies for analyzing molecular data, neural circuits, and behavior, such as tools to identify patterns in neural recordings or automate the analysis of human behavior to aid psychiatric diagnoses. These technologies will also help researchers model neural circuits, ultimately transforming data into knowledge and understanding.

“Lisa is focused on helping the scientific community realize its goals in translational research,” says Nergis Mavalvala, dean of the School of Science and the Curtis and Kathleen Marble Professor of Astrophysics. “With her generous support, we can accelerate the pace of research by connecting the data to the delivery of tangible results.”

Computational modeling

In its first five years, the ICoN Center will prioritize four areas of investigation: episodic memory and exploration, including functions like navigation and spatial memory; complex or stereotypical behavior, such as the perseverative behaviors associated with autism and obsessive-compulsive disorder; cognition and attention; and sleep. The goal, Fiete says, is to model the neuronal interactions that underlie these functions so that researchers can predict what will happen when something changes — when certain neurons become more active or when a genetic mutation is introduced, for example. When paired with experimental data from MIT labs, the center’s models will help explain not just how these circuits work, but also how they are altered by genes, the environment, aging, and disease.

These focus areas encompass circuits and behaviors often affected by psychiatric disorders and neurodegeneration, and models will give researchers new opportunities to explore their origins and potential treatment strategies. “I really think that the future of treating disorders of the mind is going to run through computational modeling,” says McGovern Associate Investigator Josh McDermott.

In McDermott’s lab, researchers are modeling the brain’s auditory circuits. “If we had a perfect model of the auditory system, we would be able to understand why when somebody loses their hearing, auditory abilities degrade in the very particular ways in which they degrade,” he says. Then, he says, that model could be used to optimize hearing aids by predicting how the brain would interpret sound altered in various ways by the device.

Similar opportunities will arise as researchers model other brain systems, McDermott says, noting that computational models help researchers grapple with a dauntingly vast realm of possibilities. “There’s lots of different ways the brain can be set up, and lots of different potential treatments, but there is a limit to the number of neuroscience or behavioral experiments you can run,” he says. “Doing experiments on a computational system is cheap, so you can explore the dynamics of the system in a very thorough way.”

The ICoN Center will speed the development of the computational tools that neuroscientists need, both for basic understanding of the brain and clinical advances. But Fiete hopes for a culture shift within neuroscience, as well. “There are a lot of brilliant students and postdocs who have skills that are mathematics and computational and modeling based,” she says. “I think once they know that there are these possibilities to collaborate to solve problems related to psychiatric disorders and how we think, they will see that this is an exciting place to apply their skills, and we can bring them in.”

Artificial networks learn to smell like the brain

Using machine learning, a computer model can teach itself to smell in just a few minutes. When it does, researchers have found, it builds a neural network that closely mimics the olfactory circuits that animal brains use to process odors.

Animals from fruit flies to humans all use essentially the same strategy to process olfactory information in the brain. But neuroscientists who trained an artificial neural network to take on a simple odor classification task were surprised to see it replicate biology’s strategy so faithfully.

“The algorithm we use has no resemblance to the actual process of evolution,” says Guangyu Robert Yang, an associate investigator at MIT’s McGovern Institute, who led the work as a postdoctoral fellow at Columbia University. The similarities between the artificial and biological systems suggest that the brain’s olfactory network is optimally suited to its task.

Yang and his collaborators, who reported their findings October 6, 2021, in the journal Neuron, say their artificial network will help researchers learn more about the brain’s olfactory circuits. The work also helps demonstrate artificial neural networks’ relevance to neuroscience. “By showing that we can match the architecture [of the biological system] very precisely, I think that gives more confidence that these neural networks can continue to be useful tools for modeling the brain,” says Yang, who is also an assistant professor in MIT’s Departments of Brain and Cognitive Sciences and Electrical Engineering and Computer Science and a member of the Center for Brains, Minds and Machines.

Mapping natural olfactory circuits

For fruit flies, the organism in which the brain’s olfactory circuitry has been best mapped, smell begins in the antennae. Sensory neurons there, each equipped with odor receptors specialized to detect specific scents, transform the binding of odor molecules into electrical activity. When an odor is detected, these neurons, which make up the first layer of the olfactory network, signal to the second layer: a set of neurons that reside in a part of the brain called the antennal lobe. In the antennal lobe, sensory neurons that share the same receptor converge onto the same second-layer neuron. “They’re very choosy,” Yang says. “They don’t receive any input from neurons expressing other receptors.” Because it has fewer neurons than the first layer, this part of the network is considered a compression layer. These second-layer neurons, in turn, signal to a larger set of neurons in the third layer. Puzzlingly, those connections appear to be random.

For Yang, a computational neuroscientist, and Columbia University graduate student Peter Yiliu Wang, this knowledge of the fly’s olfactory system represented a unique opportunity. Few parts of the brain have been mapped as comprehensively, and that has made it difficult to evaluate how well certain computational models represent the true architecture of neural circuits, they say.

Building an artificial smell network

Neural networks, in which artificial neurons rewire themselves to perform specific tasks, are computational tools inspired by the brain. They can be trained to pick out patterns within complex datasets, making them valuable for speech and image recognition and other forms of artificial intelligence. There are hints that the neural networks that do this best replicate the activity of the nervous system. But, says Wang, who is now a postdoctoral researcher at Stanford University, differently structured networks could generate similar results, and neuroscientists still need to know whether artificial neural networks reflect the actual structure of biological circuits. With comprehensive anatomical data about fruit fly olfactory circuits, he says: “We’re able to ask this question: Can artificial neural networks truly be used to study the brain?”

Collaborating closely with Columbia neuroscientists Richard Axel and Larry Abbott, Yang and Wang constructed a network of artificial neurons comprising an input layer, a compression layer, and an expansion layer—just like the fruit fly olfactory system. They gave it the same number of neurons as the fruit fly system, but no inherent structure: connections between neurons would be rewired as the model learned to classify odors.

The scientists asked the network to assign data representing different odors to categories, and to correctly categorize not just single odors, but also mixtures of odors. This is something that the brain’s olfactory system is uniquely good at, Yang says. If you combine the scents of two different apples, he explains, the brain still smells apple. In contrast, if two photographs of cats are blended pixel by pixel, the brain no longer sees a cat. This ability is just one feature of the brain’s odor-processing circuits, but it captures the essence of the system, Yang says.
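The overall setup can be pictured as a small feedforward network. The sketch below is purely illustrative: the layer sizes, the ReLU nonlinearity, and the random initialization are assumptions for demonstration, not details taken from the published model.

```python
import random

random.seed(1)

# Illustrative layer sizes (assumptions, not the study's actual dimensions):
N_IN, N_COMP, N_EXP, N_CLASSES = 50, 50, 2000, 100

def rand_matrix(rows, cols):
    """Dense random weights; during training these would be rewired freely."""
    return [[random.gauss(0, 1 / cols ** 0.5) for _ in range(cols)]
            for _ in range(rows)]

def relu(v):
    return [max(0.0, x) for x in v]

def matvec(m, v):
    return [sum(w * x for w, x in zip(row, v)) for row in m]

W1 = rand_matrix(N_COMP, N_IN)      # input layer -> compression layer
W2 = rand_matrix(N_EXP, N_COMP)     # compression layer -> expansion layer
W3 = rand_matrix(N_CLASSES, N_EXP)  # expansion layer -> odor-class readout

def forward(odor):
    """One pass through the three-layer, fly-like architecture."""
    return matvec(W3, relu(matvec(W2, relu(matvec(W1, odor)))))

odor = [random.random() for _ in range(N_IN)]
scores = forward(odor)
print(len(scores))  # one score per odor category
```

Training would then adjust every weight so that the class scores match the odor labels; the point of the study is that the structure which emerges from that unconstrained optimization resembles the fly’s circuit.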

It took the artificial network only minutes to organize itself. The structure that emerged was stunningly similar to that found in the fruit fly brain. Each neuron in the compression layer received inputs from a particular type of input neuron and connected, seemingly randomly, to multiple neurons in the expansion layer. What’s more, each neuron in the expansion layer received connections, on average, from six compression-layer neurons—exactly as occurs in the fruit fly brain.

“It could have been one, it could have been 50. It could have been anywhere in between,” Yang says. “Biology finds six, and our network finds about six as well.” Evolution found this organization through random mutation and natural selection; the artificial network found it through standard machine learning algorithms.
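One way such a number can be read off a trained network is to count, for each expansion-layer neuron, how many incoming weights remain meaningfully above zero. In this toy sketch the layer sizes, the threshold, and the stand-in “trained” weight matrix are all illustrative assumptions:

```python
import random

random.seed(0)
N_COMP, N_EXP = 50, 2000  # illustrative layer sizes
K = 6                     # strong inputs per expansion neuron in this mock-up

# Stand-in for a trained weight matrix: each expansion-layer neuron keeps a
# few strong incoming weights while the rest are effectively zero.
weights = [[0.0] * N_COMP for _ in range(N_EXP)]
for row in weights:
    for j in random.sample(range(N_COMP), K):
        row[j] = random.uniform(0.5, 1.5)

# Average in-degree: count weights above a small threshold for each neuron.
THRESHOLD = 0.1
in_degrees = [sum(1 for w in row if w > THRESHOLD) for row in weights]
avg_in_degree = sum(in_degrees) / len(in_degrees)
print(avg_in_degree)  # 6.0 by construction in this mock-up
```

In the actual experiments the analogous count is an emergent property of training rather than being built in, which is what makes the match to biology notable.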

The surprising convergence provides strong support that the brain circuits that interpret olfactory information are optimally organized for their task, he says. Now, researchers can use the model to probe that structure further, examining how the network evolves under different conditions and manipulating the circuitry in ways that cannot be done experimentally.

School of Science welcomes new faculty

This fall, MIT welcomes new faculty members — six assistant professors and two tenured professors — to the departments of Biology; Brain and Cognitive Sciences; Chemistry; Earth, Atmospheric and Planetary Sciences; and Physics.

A physicist, Soonwon Choi is interested in dynamical phenomena that occur in strongly interacting quantum many-body systems far from equilibrium and designing their applications for quantum information science. He takes a variety of interdisciplinary approaches from analytic theory and numerical computations to collaborations on experiments with controlled quantum degrees of freedom. Recently, Choi’s research has encompassed studying the phenomenon of a phase transition in the dynamics of quantum entanglement and information, drawing on machine learning to introduce a quantum convolutional neural network that can recognize quantum states associated with a one-dimensional symmetry-protected topological phase, and exploring a range of quantum applications of the nitrogen-vacancy color center of diamond.

After completing his undergraduate study in physics at Caltech in 2012, Choi received his PhD degree in physics from Harvard University in 2018. He then worked as a Miller Postdoctoral Fellow at the University of California at Berkeley before joining the Department of Physics and the Center for Theoretical Physics as an assistant professor in July 2021.

Olivia Corradin investigates how genetic variants contribute to disease. She focuses on non-coding DNA variants — changes in DNA sequence that can alter the regulation of gene expression — to gain insight into pathogenesis. With her novel outside-variant approach, Corradin’s lab singled out a type of brain cell involved in multiple sclerosis, increasing total heritability identified by three- to five-fold. A recipient of the Avenir Award through the NIH Director’s Pioneer Award Program, Corradin also scrutinizes how genetic and epigenetic variation influence susceptibility to substance abuse disorders. These critical insights into multiple sclerosis, opioid use disorder, and other diseases have the potential to improve risk assessment, diagnosis, treatment, and preventative care for patients.

Corradin completed a bachelor’s degree in biochemistry from Marquette University in 2010 and a PhD in genetics from Case Western Reserve University in 2016. A Whitehead Institute Fellow since 2016, she also became an institute member in July 2021. The Department of Biology welcomes Corradin as an assistant professor.

Arlene Fiore seeks to understand processes that control two-way interactions between air pollutants and the climate system, as well as the sensitivity of atmospheric chemistry to different chemical, physical, and biological sources and sinks at scales ranging from urban to global and daily to decadal. Combining chemistry-climate models and observations from ground, airborne, and satellite platforms, Fiore has identified global dimensions to ground-level ozone smog and particulate haze that arise from linkages with the climate system, global atmospheric composition, and the terrestrial biosphere. She also investigates regional meteorology and climate feedbacks due to aerosols versus greenhouse gases, future air pollution responses to climate change, and drivers of atmospheric oxidizing capacity. A new research direction involves using chemistry-climate model ensemble simulations to identify imprints of climate variability on observational records of trace gases in the troposphere.

After earning a bachelor’s degree and PhD from Harvard University, Fiore held a research scientist position at the Geophysical Fluid Dynamics Laboratory and was appointed as an associate professor with tenure at Columbia University in 2011. Over the last decade, she has worked with air and health management partners to develop applications of satellite and other Earth science datasets to address their emerging needs. Fiore’s honors include the American Geophysical Union (AGU) James R. Holton Junior Scientist Award, Presidential Early Career Award for Scientists and Engineers (the highest honor bestowed by the United States government on outstanding scientists and engineers in the early stages of their independent research careers), and AGU’s James B. Macelwane Medal. The Department of Earth, Atmospheric and Planetary Sciences welcomes Fiore as the first Peter H. Stone and Paola Malanotte Stone Professor.

With a background in magnetism, Danna Freedman leverages inorganic chemistry to solve problems in physics. Within this paradigm, she is creating the next generation of materials for quantum information by designing spin-based quantum bits, or qubits, based in molecules. These molecular qubits can be precisely controlled, opening the door for advances in quantum computation, sensing, and more. She also harnesses high pressure to synthesize new emergent materials, exploring the possibilities of intermetallic compounds and solid-state bonding. Among other innovations, Freedman has realized millisecond coherence times in molecular qubits, created a molecular analogue of an NV center featuring optical read-out of spin, and discovered the first iron-bismuth binary compound.

Freedman received her bachelor’s degree from Harvard University and her PhD from the University of California at Berkeley, then conducted postdoctoral research at MIT before joining the faculty at Northwestern University as an assistant professor in 2012, earning an NSF CAREER Award, the Presidential Early Career Award for Scientists and Engineers, the ACS Award in Pure Chemistry, and more. She was promoted to associate professor in 2018 and full professor with tenure in 2020. Freedman returns to MIT as the Frederick George Keyes Professor of Chemistry.

Kristin Knouse PhD ’17 aims to understand how tissues sense and respond to damage, with the goal of developing new approaches for regenerative medicine. She focuses on the mammalian liver — which has the unique ability to completely regenerate itself — to ask how organisms react to organ injury, how certain cells retain the ability to grow and divide while others do not, and what genes regulate this process. Knouse creates innovative tools, such as genome-wide CRISPR screening within a living mouse, to examine liver regeneration from the level of a single cell to the whole organism.

Knouse received a bachelor’s degree in biology from Duke University in 2010 and then enrolled in the Harvard and MIT MD-PhD Program, where she earned a PhD through the MIT Department of Biology in 2016 and an MD through the Harvard-MIT Program in Health Sciences and Technology in 2018. In 2018, she established her independent laboratory at the Whitehead Institute for Biomedical Research and was honored with the NIH Director’s Early Independence Award. Knouse joins the Department of Biology and the Koch Institute for Integrative Cancer Research as an assistant professor.

Lina Necib PhD ’17 is an astroparticle physicist exploring the origin of dark matter through a combination of simulations and observational data that correlate the dynamics of dark matter with those of the stars in the Milky Way. She has investigated the local dynamic structures in the solar neighborhood using the Gaia satellite, contributed to building a catalog of local accreted stars using machine learning techniques, and discovered a new stream called Nyx, after the Greek goddess of the night. Necib is interested in employing Gaia in conjunction with other spectroscopic surveys to understand the dark matter profile in the local solar neighborhood, the center of the galaxy, and in dwarf galaxies.

After obtaining a bachelor’s degree in mathematics and physics from Boston University in 2012 and a PhD in theoretical physics from MIT in 2017, Necib was a Sherman Fairchild Fellow at Caltech, a Presidential Fellow at the University of California at Irvine, and a fellow in theoretical astrophysics at Carnegie Observatories. She returns to MIT as an assistant professor in the Department of Physics and a member of the MIT Kavli Institute for Astrophysics and Space Research.

Andrew Vanderburg studies exoplanets, or planets that orbit stars other than the sun. Conducting astronomical observations from Earth as well as space, he develops cutting-edge methods to learn about planets outside of our solar system. Recently, he has leveraged machine learning to optimize searches and identify planets that were missed by previous techniques. With collaborators, he discovered the eighth planet in the Kepler-90 solar system, a Jupiter-like planet with unexpectedly close orbiting planets, and rocky bodies disintegrating near a white dwarf, providing confirmation of a theory that such stars may accumulate debris from their planetary systems.

Vanderburg received a bachelor’s degree in physics and astrophysics from the University of California at Berkeley in 2013 and a PhD in astronomy from Harvard University in 2017. Afterward, Vanderburg moved to the University of Texas at Austin as a NASA Sagan Postdoctoral Fellow, then to the University of Wisconsin at Madison as a faculty member. He joins MIT as an assistant professor in the Department of Physics and a member of the Kavli Institute for Astrophysics and Space Research.

A computational neuroscientist, Guangyu Robert Yang is interested in connecting artificial neural networks to the actual functions of cognition. His research spans computational and biological systems, using computational modeling to understand how neural systems are optimized to accomplish multiple tasks. As a postdoc, Yang applied principles of machine learning to study the evolution and organization of the olfactory system. The neural networks his models generated show important similarities to the biological circuitry, suggesting that the structure of the olfactory system evolved to optimally support the specific tasks involved in odor recognition.

Yang received a bachelor’s degree in physics from Peking University before obtaining a PhD in computational neuroscience at New York University, followed by an internship in software engineering at Google Brain. Before coming to MIT, he conducted postdoctoral research at the Center for Theoretical Neuroscience of Columbia University, where he was a junior fellow at the Simons Society of Fellows. Yang is an assistant professor in the Department of Brain and Cognitive Sciences with a shared appointment in the Department of Electrical Engineering and Computer Science in the School of Engineering and the MIT Schwarzman College of Computing as well as an associate investigator with the McGovern Institute.

Guangyu Robert Yang

Building Networks

Robert Yang is interested in building neural network and circuit models of brain functions. He has contributed to the modern use of recurrent neural networks as a modeling tool in neuroscience. His work has shed light on neural mechanisms for cognitive flexibility. The Yang lab focuses on building multi-scale, multi-system integrative models of higher cognitive functions.