Mehrdad Jazayeri wants to know how our brains model the external world

Much of our daily life requires us to make inferences about the world around us. As you think about which direction your tennis opponent will hit the ball, or try to figure out why your child is crying, your brain is searching for answers about possibilities that are not directly accessible through sensory experiences.

MIT Associate Professor Mehrdad Jazayeri has devoted most of his career to exploring how the brain creates internal representations, or models, of the external world to make intelligent inferences about its hidden states.

“The one question I am most interested in is how does the brain form internal models of the external world? Studying inference is really a powerful way of gaining insight into these internal models,” says Jazayeri, who recently earned tenure in the Department of Brain and Cognitive Sciences and is also a member of MIT’s McGovern Institute for Brain Research.

Using a variety of approaches, including detailed analysis of behavior, direct recording of activity of neurons in the brain, and mathematical modeling, he has discovered how the brain builds models of statistical regularities in the environment. He has also found circuits and mechanisms that enable the brain to capture the causal relationships between observations and outcomes.

An unusual path

Jazayeri, who has been on the faculty at MIT since 2013, took an unusual path to a career in neuroscience. Growing up in Tehran, Iran, he was an indifferent student until his second year of high school, when he got interested in solving challenging geometry puzzles. He also started programming with the ZX Spectrum, an early 8-bit personal computer that his father had given him.

During high school, he was chosen to train for Iran’s first-ever National Physics Olympiad team, but when he failed to make it to the international team, he became discouraged and temporarily gave up on the idea of going to college. Eventually, he participated in the University National Entrance Exam and was admitted to the electrical engineering department at Sharif University of Technology.

Jazayeri didn’t enjoy his four years of college education. The experience mostly helped him realize that he was not meant to become an engineer. “I realized that I’m not an inventor. What inspires me is the process of discovery,” he says. “I really like to figure things out, not build things, so those four years were not very inspiring.”

After graduating from college, Jazayeri spent a few years working on a banana farm near the Caspian Sea, along with two friends. He describes those years as among the best and most formative of his life. He would wake by 4 a.m., work on the farm until late afternoon, and spend the rest of the day thinking and reading. One topic he read about with great interest was neuroscience, which led him a few years later to apply to graduate school.

He immigrated to Canada and was admitted to the University of Toronto, where he earned a master’s degree in physiology and neuroscience. While there, he worked on building small circuit models that would mimic the activity of neurons in the hippocampus.

From there, Jazayeri went on to New York University to earn a PhD in neuroscience, where he studied how signals in the visual cortex support perception and decision-making. “I was less interested in how the visual cortex encodes the external world,” he says. “I wanted to understand how the rest of the brain decodes the signals in visual cortex, which is, in effect, an inference problem.”

He continued pursuing his interest in the neurobiology of inference as a postdoc at the University of Washington, where he investigated how the brain uses temporal regularities in the environment to estimate time intervals, and uses knowledge about those intervals to plan for future actions.

Building internal models to make inferences

Inference is the process of drawing conclusions based on information that is not readily available. Making rich inferences from scarce data is one of humans’ core mental capacities, one that is central to what makes us the most intelligent species on Earth. To do so, our nervous system builds internal models of the external world, and those models help us think through possibilities without directly experiencing them.

The problem of inference presents itself in many behavioral settings.

“Our nervous system makes all sorts of internal models for different behavioral goals, some that capture the statistical regularities in the environment, some that link potential causes to effects, some that reflect relationships between entities, and some that enable us to think about others,” Jazayeri says.

Jazayeri’s lab at MIT is made up of a group of cognitive scientists, electrophysiologists, engineers, and physicists with a shared interest in understanding the nature of internal models in the brain and how those models enable us to make inferences in different behavioral tasks.

Early work in the lab focused on a simple timing task to examine the problem of statistical inference, that is, how we use statistical regularities in the environment to make accurate inferences. First, they found that the brain coordinates movements in time using a dynamic process, akin to an analog timer. They also found that the neural representation of time in the frontal cortex is continuously calibrated based on prior experience, allowing us to make more accurate time estimates in the presence of uncertainty.

Later, the lab developed a complex decision-making task to examine the neural basis of causal inference, or the process of deducing a hidden cause based on its effects. In a paper that appeared in 2019, Jazayeri and his colleagues identified a hierarchical and distributed brain circuit in the frontal cortex that helps the brain to determine the most probable cause of failure within a hierarchy of decisions.

More recently, the lab has extended its investigation to other behavioral domains, including relational inference and social inference. Relational inference is about situating an ambiguous observation using relational memory. For example, coming out of a subway in a new neighborhood, we may use our knowledge of the relationship between visible landmarks to infer which way is north. Social inference, which is extremely difficult to study, involves deducing other people’s beliefs and goals based on their actions.

Along with studies in human volunteers and animal models, Jazayeri’s lab develops computational models based on neural networks, which help them test different hypotheses about how the brain performs specific tasks. By comparing the activity of those models with neural activity data from animals, the researchers can gain insight into how the brain actually performs a particular type of inference task.

“My main interest is in how the brain makes inferences about the world based on the neural signals,” Jazayeri says. “All of my work is about looking inside the brain, measuring signals, and using mathematical tools to try to understand how those signals are manifestations of an internal model within the brain.”

Storytelling brings MIT neuroscience community together

When the coronavirus pandemic shut down offices, labs, and classrooms across the MIT campus last spring, many members of the MIT community found it challenging to remain connected to one another in meaningful ways. Motivated by a desire to bring the neuroscience community back together, the McGovern Institute hosted a virtual storytelling competition featuring a selection of postdocs, grad students, and staff from across the institute.

“This has been an unprecedented year for us all,” says McGovern Institute Director Robert Desimone. “It has been twenty years since Pat and Lore McGovern founded the McGovern Institute, and despite the challenges this anniversary year has brought to our community, I have been inspired by the strength and perseverance demonstrated by our faculty, postdocs, students and staff. The resilience of this neuroscience community – and MIT as a whole – is indeed something to celebrate.”

The McGovern Institute had initially planned to hold a large 20th anniversary celebration in the atrium of Building 46 in the fall of 2020, but the pandemic made a gathering of this size impossible. The institute instead held a series of virtual events, including the November 12 story slam on the theme of resilience.

Nine MIT School of Science professors receive tenure for 2020

Beginning July 1, nine faculty members in the MIT School of Science have been granted tenure by MIT. They are appointed in the departments of Brain and Cognitive Sciences, Chemistry, Mathematics, and Physics.

Physicist Ibrahim Cisse investigates living cells to reveal and study collective behaviors and biomolecular phase transitions at the resolution of single molecules. The results of his work help determine how disruptions in genes can cause diseases like cancer. Cisse joined the Department of Physics in 2014 and now holds a joint appointment with the Department of Biology. He earned a bachelor’s degree in physics from North Carolina Central University in 2004 and a doctoral degree in physics from the University of Illinois at Urbana-Champaign in 2009. He followed his PhD with a postdoc at the École Normale Supérieure of Paris and a research specialist appointment at the Howard Hughes Medical Institute’s Janelia Research Campus.

Jörn Dunkel is a physical applied mathematician. His research focuses on the mathematical description of complex nonlinear phenomena in a variety of fields, especially biophysics. The models he develops help predict dynamical behaviors and structure formation processes in developmental biology, fluid dynamics, and even knot strengths for sailing, rock climbing and construction. He joined the Department of Mathematics in 2013 after completing postdoctoral appointments at Oxford University and Cambridge University. He received diplomas in physics and mathematics from Humboldt University of Berlin in 2004 and 2005, respectively. The University of Augsburg awarded Dunkel a PhD in statistical physics in 2008.

A cognitive neuroscientist, Mehrdad Jazayeri studies the neurobiological underpinnings of mental functions such as planning, inference, and learning by analyzing brain signals in the lab and using theoretical and computational models, including artificial neural networks. He joined the Department of Brain and Cognitive Sciences in 2013. He earned a BS in electrical engineering from the Sharif University of Technology in 1994, an MS in physiology from the University of Toronto in 2001, and a PhD in neuroscience from New York University in 2007. Prior to joining MIT, he was a postdoc at the University of Washington. Jazayeri is also an investigator at the McGovern Institute for Brain Research.

Yen-Jie Lee is an experimental particle physicist in the field of proton-proton and heavy-ion physics. Utilizing the Large Hadron Collider, Lee explores matter in extreme conditions, providing new insight into strong interactions and what might have existed and occurred at the beginning of the universe and in distant star cores. His work on jets and heavy-flavor particle production in nuclear collisions improves understanding of the quark-gluon plasma, predicted by quantum chromodynamics (QCD) calculations, and the structure of heavy nuclei. He also pioneered studies of high-density QCD with electron-positron annihilation data. Lee joined the Department of Physics in 2013 after a fellowship at CERN and postdoc research at the Laboratory for Nuclear Science at MIT. His bachelor’s and master’s degrees were awarded by the National Taiwan University in 2002 and 2004, respectively, and his doctoral degree by MIT in 2011. Lee is a member of the Laboratory for Nuclear Science.

Josh McDermott investigates the sense of hearing. His research addresses both human and machine audition using tools from experimental psychology, engineering, and neuroscience. McDermott hopes to better understand the neural computation underlying human hearing, to improve devices that assist the hearing impaired, and to enhance machine interpretation of sounds. Prior to joining MIT’s Department of Brain and Cognitive Sciences, he was awarded a BA in 1998 in brain and cognitive sciences by Harvard University, a master’s degree in computational neuroscience in 2000 by University College London, and a PhD in brain and cognitive sciences in 2006 by MIT. Between his doctoral time at MIT and returning as a faculty member, he was a postdoc at the University of Minnesota and New York University, and a visiting scientist at Oxford University. McDermott is also an associate investigator at the McGovern Institute for Brain Research and an investigator in the Center for Brains, Minds and Machines.

Solving environmental challenges by studying and manipulating chemical reactions is the focus of Yogesh Surendranath’s research. Using chemistry, he works at the molecular level to understand how to efficiently interconvert chemical and electrical energy. His fundamental studies aim to improve energy storage technologies, such as batteries, fuel cells, and electrolyzers, that can be used to meet future energy demand with reduced carbon emissions. Surendranath joined the Department of Chemistry in 2013 after a postdoc at the University of California at Berkeley. His PhD was completed in 2011 at MIT, and his BS in 2006 at the University of Virginia. Surendranath is also a collaborator in the MIT Energy Initiative.

A theoretical astrophysicist, Mark Vogelsberger is interested in large-scale structures of the universe, such as galaxy formation. He combines observational data, theoretical models, and simulations that require high-performance supercomputers to improve and develop detailed models that simulate galaxy diversity, clustering, and their properties, including a plethora of physical effects like magnetic fields, cosmic dust, and thermal conduction. Vogelsberger also uses simulations to generate scenarios involving alternative forms of dark matter. He joined the Department of Physics in 2014 after a postdoc at the Harvard-Smithsonian Center for Astrophysics. Vogelsberger is a 2006 graduate of the University of Mainz undergraduate program in physics, and a 2010 doctoral graduate of the University of Munich and the Max Planck Institute for Astrophysics. He is also a principal investigator in the MIT Kavli Institute for Astrophysics and Space Research.

Adam Willard is a theoretical chemist with research interests that fall across molecular biology, renewable energy, and materials science. He uses theory, modeling, and molecular simulation to study the disorder that is inherent to systems over nanometer-length scales. His recent work has highlighted the fundamental and unexpected role that such disorder plays in phenomena such as microscopic energy transport in semiconducting plastics, ion transport in batteries, and protein hydration. Joining the Department of Chemistry in 2013, Willard was formerly a postdoc at Lawrence Berkeley National Laboratory and then the University of Texas at Austin. He holds a PhD in chemistry from the University of California at Berkeley, earned in 2009, and a BS in chemistry and mathematics from the University of Puget Sound, received in 2003.

Lindley Winslow seeks to understand the fundamental particles that shaped the evolution of our universe. As an experimental particle and nuclear physicist, she develops novel detection technology to search for axion dark matter and a proposed nuclear decay that makes more matter than antimatter. She started her faculty position in the Department of Physics in 2015 following a postdoc at MIT and a subsequent faculty position at the University of California at Los Angeles. Winslow earned her BA in physics and astronomy in 2001 and PhD in physics in 2008, both at the University of California at Berkeley. She is also a member of the Laboratory for Nuclear Science.

Empowering faculty partnerships across the globe

MIT faculty share their creative and technical talent on campus as well as across the globe, compounding the Institute’s impact through strong international partnerships. Thanks to the MIT Global Seed Funds (GSF) program, managed by the MIT International Science and Technology Initiatives (MISTI), more of these faculty members will be able to build on these relationships to develop ideas and create new projects.

“This MISTI fund was extremely helpful in consolidating our collaboration and has been the start of a long-term interaction between the two teams,” says 2017 GSF awardee Mehrdad Jazayeri, associate professor of brain and cognitive sciences and investigator at the McGovern Institute for Brain Research. “We have already submitted multiple abstracts to conferences together, mapped out several ongoing projects, and secured international funding thanks to the preliminary progress this seed fund enabled.”

This year, the 28 funds that make up MISTI GSF received 232 MIT applications. Over $2.3 million was awarded to 107 projects from 23 departments across the entire Institute. This brings the amount awarded to $22 million over the 12-year life of the program. Besides supporting faculty, these funds also provide meaningful educational opportunities for students. The majority of GSF teams include students from MIT and international collaborators, bolstering both their research portfolios and global experience.

“This project has had important impact on my grad student’s education and development. She was able to apply techniques she has learned to a new and challenging system, mentor an international student, participate in a major international meeting, and visit CEA,” says Professor of Chemistry Elizabeth Nolan, a 2017 GSF awardee.

On top of these academic and research goals, students are actively broadening their cultural experience and scope. “The environment at CEA differs enormously from MIT because it is a national lab and because lab structure and graduate education in France is markedly different than at MIT,” Nolan continues. “At CEA, she had the opportunity to present research to distinguished international colleagues.”

These impactful partnerships unite faculty teams behind common goals to tackle worldwide challenges, helping to develop solutions that would not be possible without international collaboration. 2017 GSF winner Emilio Bizzi, professor emeritus of brain and cognitive sciences and emeritus investigator at the McGovern Institute, articulated the advantage of combining these individual skills within a high-level team. “The collaboration among researchers was valuable in sharing knowledge, experience, skills and techniques … as well as offering the probability of future development of systems to aid in rehabilitation of patients suffering TBI.”

The research opportunities that grow from these seed funds often lead to published papers and additional funding leveraged from early results. The next call for proposals will be in mid-May.

MISTI creates applied international learning opportunities for MIT students that increase their ability to understand and address real-world problems. MISTI collaborates with partners at MIT and beyond, serving as a vital nexus of international activity and bolstering the Institute’s research mission by promoting collaborations between MIT faculty members and their counterparts abroad.

McGovern lab manager creates art inspired by science

Michal De-Medonsa, technical associate and manager of the Jazayeri lab, created a large wood mosaic for her lab. We asked Michal to tell us a bit about the mosaic, her inspiration, and how in the world she found the time to create such an exquisitely detailed piece of art.

______

Jazayeri lab manager Michal De-Medonsa holds her wood mosaic entitled “JazLab.” Photo: Caitlin Cunningham

Describe this piece of art for us.

To make a piece this big (63″ x 15″), I needed several boards of padauk wood. I could have just etched each board as a whole unit and glued the 13 or so boards to each other, but I didn’t like the aesthetic. The grain and color within each board would look beautiful, but the line between each board would become obvious, segmented, and jarring when contrasted with the uniformity within each board. Instead, I cut about 18 separate squares out of each board, shuffled all 217 pieces around, and glued them to one another in a mosaic style with a larger pattern (inspired by my grandfather’s work in granite mosaics).

What does this mosaic mean to you?

Once every piece was shuffled, the lines between single squares were certainly visible, but as a feature, they were far less salient than had the full boards been glued to one another. As I was working on the piece, I was thinking about how the same concept holds true in society. Even if there is diversity within a larger piece (an institution, for example), there is a tendency for groups to form within the larger piece (like a full board), and diversity becomes separated. This isn’t a criticism of any institution; it is human nature to form in-groups. It’s subconscious (so perhaps the criticism is that we, as a society, don’t give that behavior enough thought and try to ameliorate our reflex to group with those who are “like us”). The grain of the wood is uniform, oriented in the same direction; the two different cutting patterns create a larger pattern within the piece; and there are smaller patterns between and within single pieces. I love creating and finding patterns in my art (and life). Alfred North Whitehead wrote that “understanding is the apperception of pattern as such.” True, I believe, in science, art, and the humanities. What a great goal – to understand.

Tell us about the name of this piece.

Every large piece I make is inspired by the people I make it for, and is therefore named after them. This piece is called JazLab. Having lived around the world, and being a descendant of a nomadic people, I don’t consider any one place home, but am inspired by every place I’ve lived. In all of my work, you can see elements of my Jewish heritage, antiquity, the Middle East, Africa, and now MIT.

How has MIT influenced your art?

MIT has influenced me in the most obvious way MIT could influence anyone – technology. Before this series, I made very small versions of this type of work, designing everything on a piece of paper with a pencil and a ruler, and making every cut by hand. Each of those small squares would take ~2 hours (depending on the design), and I was limited to softer woods.

Since coming to MIT, I learned that I had access to the Hobby Shop with a huge array of power tools and software. I began designing my patterns on the computer and used power tools to make the cuts. I actually struggled a lot with using the tech – not because it was hard (which, it really is when you just start out), but rather because it felt like I was somehow “cheating.” How is this still art? And although this is something I still think about often, I’ve tried to look at it in this way: every generation, in their time, used the most advanced technology. The beauty and value of the piece doesn’t come from how many bruises, cuts, and blisters your machinery gave you, or whether you scraped the wood out with your nails, but rather, once you were given a tool, what did you decide to do with it? My pieces still involve a huge amount of hand-on-material work, but I am working on accepting that using technology in no way devalues the work.

Given your busy schedule with the Jazayeri lab, how did you find the time to create this piece of art?

I took advantage of any free hour I could. Two days out of the week, the hobby shop is open until 9pm, and I would additionally go every Saturday. For the parts that didn’t require the shop (adjusting each piece individually with a carving knife, assembling them, even most of the gluing) I would just work at home – often very late into the night.

______

JazLab is on display in the Jazayeri lab in MIT Bldg 46.

Controlling our internal world

Olympic skaters can launch, perform multiple aerial turns, and land gracefully, anticipating imperfections and reacting quickly to correct course. To make such elegant movements, the brain must have an internal model of the body to control, predict, and make almost instantaneous adjustments to motor commands. So-called “internal models” are a fundamental concept in engineering and have long been suggested to underlie control of movement by the brain, but what about processes that occur in the absence of movement, such as contemplation, anticipation, and planning?

Using a novel combination of task design, data analysis, and modeling, MIT neuroscientist Mehrdad Jazayeri and colleagues now provide compelling evidence that the core elements of an internal model also control purely mental processes in a study published in Nature Neuroscience.

“During my thesis I realized that I’m interested, not so much in how our senses react to sensory inputs, but instead in how my internal model of the world helps me make sense of those inputs,” says Jazayeri, the Robert A. Swanson Career Development Professor of Life Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Indeed, understanding the building blocks exerting control of such mental processes could help to paint a better picture of disruptions in mental disorders, such as schizophrenia.

Internal models for mental processes

Scientists working on the motor system have long theorized that the brain overcomes noisy and slow signals using an accurate internal model of the body. This internal model serves three critical functions: it provides motor commands to control movement, simulates upcoming movement to overcome delays, and uses feedback to make real-time adjustments.

“The framework that we currently use to think about how the brain controls our actions is one that we have borrowed from robotics: we use controllers, simulators, and sensory measurements to control machines and train operators,” explains Reza Shadmehr, a professor at the Johns Hopkins School of Medicine who was not involved with the study. “That framework has largely influenced how we imagine our brain controlling our movements.”

Jazayeri and colleagues wondered whether the same framework might explain the control principles governing mental states in the absence of any movement.

“When we’re simply sitting, thoughts and images run through our heads and, fundamental to intellect, we can control them,” explains lead author Seth Egger, a former postdoctoral associate in the Jazayeri lab and now at Duke University.

“We wanted to find out what’s happening between our ears when we are engaged in thinking,” says Egger.

Imagine, for example, a sign language interpreter keeping up with a fast speaker. To track speech accurately, the interpreter continuously anticipates where the speech is going, rapidly adjusting when the actual words deviate from the prediction. The interpreter could be using an internal model to anticipate upcoming words, and feedback to make adjustments on the fly.

1-2-3…Go

Hypothesizing about how the components of an internal model function in scenarios such as translation is one thing. Cleanly measuring and proving the existence of these elements is much more complicated, as the activities of the controller, simulator, and feedback are intertwined. To tackle this problem, Jazayeri and colleagues devised a clever task with primate models in which the controller, simulator, and feedback act at distinct times.

In this task, called “1-2-3-Go,” the animal sees three consecutive flashes (1, 2, and 3) that form a regular beat, and learns to make an eye movement (Go) at the moment it anticipates the fourth flash would occur. During the task, researchers measured neural activity in a region of the frontal cortex they had previously linked to the timing of movement.

Jazayeri and colleagues had clear predictions about when the controller would act (between the third flash and “Go”) and when feedback would be engaged (with each flash of light). The key surprise came when the researchers saw evidence of the simulator anticipating the third flash. This unexpected neural activity had dynamics resembling those of the controller, but was not associated with a response. In other words, the researchers uncovered a covert plan that functions as the simulator, thus revealing all three elements of an internal model for a mental process: the planning and anticipation of “Go” in the “1-2-3-Go” sequence.
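To make those three elements concrete, here is a minimal sketch of a 1-2-3-Go trial, offered as an illustration rather than as the study’s actual model. Noisy interval measurements at each flash stand in for feedback, a running estimate of the beat is used to anticipate the next flash (the simulator’s job), and the Go response is planned for the anticipated time of the fourth flash (the controller’s job). The beat duration, noise level, and update rule are all hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_two_three_go(beat=0.6, noise_sd=0.05):
    """Toy 1-2-3-Go trial: estimate the beat, anticipate flashes, plan Go."""
    flash_times = [0.0, beat, 2 * beat]  # flashes 1, 2, and 3
    estimate = None
    for i in (1, 2):
        # Feedback: a noisy measurement of the latest inter-flash interval.
        measured = flash_times[i] - flash_times[i - 1] + rng.normal(0, noise_sd)
        # Update the running estimate of the beat.
        estimate = measured if estimate is None else 0.5 * (estimate + measured)
        # Simulator: anticipate when the next flash should arrive.
        predicted_next = flash_times[i] + estimate
    # Controller: plan the Go response at the anticipated fourth flash.
    return predicted_next

print(one_two_three_go())  # close to 1.8 s for a 0.6 s beat
```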

“Jazayeri’s work is important because it demonstrates how to study mental simulation in animals,” explains Shadmehr, “and where in the brain that simulation is taking place.”

Having found how and where to measure an internal model in action, Jazayeri and colleagues now plan to ask whether these control strategies can explain how primates effortlessly generalize their knowledge from one behavioral context to another. For example, how does an interpreter rapidly adjust when someone with widely different speech habits takes the podium? This line of investigation promises to shed light on high-level mental capacities of the primate brain that simpler animals seem to lack, that go awry in mental disorders, and that designers of artificial intelligence systems so fondly seek.

Mehrdad Jazayeri and Hazel Sive awarded 2019 School of Science teaching prizes

The School of Science has announced that the recipients of the school’s 2019 Teaching Prizes for Graduate and Undergraduate Education are Mehrdad Jazayeri and Hazel Sive. Nominated by peers and students, the faculty members chosen to receive these prizes are selected to acknowledge their exemplary efforts in teaching graduate and undergraduate students.

Mehrdad Jazayeri, an associate professor in the Department of Brain and Cognitive Sciences and investigator at the McGovern Institute for Brain Research, is awarded the prize for graduate education for 9.014 (Quantitative Methods and Computational Models in Neuroscience). Earlier this year, he was recognized for excellence in graduate teaching by the Department of Brain and Cognitive Sciences and won a Graduate Student Council teaching award in 2016. In their nomination letters, peers and students alike remarked that he displays not only great knowledge, but extraordinary skill in teaching, most notably by ensuring everyone learns the material. Jazayeri does so by considering students’ diverse backgrounds and contextualizing subject material to relatable applications in various fields of science according to students’ interests. He also improves and adjusts the course content, pace, and intensity in response to student input via surveys administered throughout the semester.

Hazel Sive, a professor in the Department of Biology, member of the Whitehead Institute for Biomedical Research, and associate member of the Broad Institute of MIT and Harvard, is awarded the prize for undergraduate education. A MacVicar Faculty Fellow, she has been recognized with MIT’s highest undergraduate teaching award in the past, as well as the 2003 School of Science Teaching Prize for Graduate Education. Exemplified by her nominations, Sive’s laudable teaching career at MIT continues to receive praise from undergraduate students who take her classes. In recent post-course evaluations, students commended her exemplary and dedicated efforts to her field and to their education.

The School of Science welcomes nominations for the teaching prize in the spring semester of each academic year. Nominations can be submitted at the school’s website.

Do thoughts have mass?

As part of our Ask the Brain series, we received the question, “Do thoughts have mass?” The following is a guest blog post by Michal De-Medonsa, technical associate and manager of the Jazayeri lab, who tapped into her background in philosophy to answer this intriguing question.

_____

Jazayeri lab manager (and philosopher) Michal De-Medonsa.

To answer the question, “Do thoughts have mass?” we must, like any good philosopher, define something that already has a definition – “thoughts.”

Logically, we can assert that thoughts are either metaphysical or physical (beyond that, we run out of options). If our definition of thought is metaphysical, it is safe to say that metaphysical thoughts do not have mass, since they are by definition not physical, and mass is a property of physical things. However, if we define a thought as a physical thing, it becomes a little trickier to determine whether or not it has mass.

A physical definition of thoughts falls into (at least) two subgroups – physical processes and physical parts. Take driving a car, for example – a parts definition describes the doors, motor, etc. and has mass. A process definition of a car being driven, turning the wheel, moving from point A to point B, etc. does not have mass. The process of driving is a physical process that involves moving physical matter, but we wouldn’t say that the act of driving has mass. The car itself, however, is an example of physical matter, and as any cyclist in the city of Boston is well aware – cars have mass. It’s clear that if we define a thought as a process, it does not have mass, and if we define a thought as physical parts, it does have mass – so, which one is it? In order to resolve our issue, we have to be incredibly precise with our definition. Is a thought a process or parts? That is, is a thought more like driving or more like a car?


Both physical definitions (process and parts) have merit. For a parts definition, we can look at what is required for a thought – neurons, electrical signals, neurochemicals, etc. This type of definition becomes quite imprecise and limiting. It doesn’t seem too problematic to say that the neurons, neurochemicals, etc. are themselves the thought, but this style of definition starts to fall apart when we try to include all the parts involved (e.g. blood flow, connective tissue, outside stimuli). When we look at a face, the stimuli received by the visual cortex are part of the thought – is the face part of a thought? When we look at our phone, is the phone itself part of a thought? A parts definition either needs an arbitrary limit, or we end up having to include all possible parts involved in the thought, ending up with an incredibly convoluted and effectively useless definition.

A process definition is more versatile and precise, and it allows us to include all the physical parts in a more elegant way. We can now say that all the moving parts are included in the process without saying that they themselves are the thought. That is, we can say blood flow is included in the process without saying that blood flow itself is part of the thought. It doesn’t sound ridiculous to say that a phone is part of the thought process. If we subscribe to the parts definition, however, we’re forced to say that part of the mass of a thought comes from the mass of a phone. A process definition allows us to be precise without being convoluted, and allows us to include outside influences without committing to absurd definitions.

Typical of a philosophical endeavor, we’re left with more questions and no simple answer. However, we can walk away with three conclusions.

  1. A process definition of “thought” allows for elegance and the involvement of factors outside the “vacuum” of our physical body; however, we lose out on some function by not describing a thought by its physical parts.
  2. The colloquial definition of “thought” breaks down once we invite a philosopher over to break it down, but this is to be expected – when we try to break something down, sometimes, it will break down. What we should be aware of is that if we want to use the word in a rigorous scientific framework, we need a rigorous scientific definition.
  3. Most importantly, it’s clear that we need to put a lot of work into defining exactly what we mean by “thought” – a job well suited to a scientifically-informed philosopher.

Michal De-Medonsa earned her bachelor’s degree in neuroscience and philosophy from Johns Hopkins University in 2012 and went on to receive her master’s degree in history and philosophy of science at the University of Pittsburgh in 2015. She joined the Jazayeri lab in 2018 as a lab manager/technician and spends most of her free time rock climbing, doing standup comedy, and woodworking at the MIT Hobby Shop. 

_____

Do you have a question for The Brain? Ask it here.

How expectation influences perception

For decades, research has shown that our perception of the world is influenced by our expectations. These expectations, also called “prior beliefs,” help us make sense of what we are perceiving in the present, based on similar past experiences. Consider, for instance, how a shadow on a patient’s X-ray image, easily missed by a less experienced intern, jumps out at a seasoned physician. The physician’s prior experience helps her arrive at the most probable interpretation of a weak signal.

The process of combining prior knowledge with uncertain evidence is known as Bayesian integration and is believed to widely impact our perceptions, thoughts, and actions. Now, MIT neuroscientists have discovered distinctive brain signals that encode these prior beliefs. They have also found how the brain uses these signals to make judicious decisions in the face of uncertainty.

“How these beliefs come to influence brain activity and bias our perceptions was the question we wanted to answer,” says Mehrdad Jazayeri, the Robert A. Swanson Career Development Professor of Life Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

The researchers trained animals to perform a timing task in which they had to reproduce different time intervals. Performing this task is challenging because our sense of time is imperfect and can go too fast or too slow. However, when intervals are consistently within a fixed range, the best strategy is to bias responses toward the middle of the range. This is exactly what animals did. Moreover, recording from neurons in the frontal cortex revealed a simple mechanism for Bayesian integration: Prior experience warped the representation of time in the brain so that patterns of neural activity associated with different intervals were biased toward those that were within the expected range.

MIT postdoc Hansem Sohn, former postdoc Devika Narain, and graduate student Nicolas Meirhaeghe are the lead authors of the study, which appears in the July 15 issue of Neuron.

Ready, set, go

Statisticians have known for centuries that Bayesian integration is the optimal strategy for handling uncertain information. When we are uncertain about something, we automatically rely on our prior experiences to optimize behavior.

“If you can’t quite tell what something is, but from your prior experience you have some expectation of what it ought to be, then you will use that information to guide your judgment,” Jazayeri says. “We do this all the time.”

In this new study, Jazayeri and his team wanted to understand how the brain encodes prior beliefs, and put those beliefs to use in the control of behavior. To that end, the researchers trained animals to reproduce a time interval, using a task called “ready-set-go.” In this task, animals measure the time between two flashes of light (“ready” and “set”) and then generate a “go” signal by making a delayed response after the same amount of time has elapsed.

They trained the animals to perform this task in two contexts. In the “Short” context, intervals varied between 480 and 800 milliseconds, and in the “Long” context, intervals varied between 800 and 1,200 milliseconds. At the beginning of the task, the animals were given information about the context (via a visual cue), and therefore knew to expect intervals from either the shorter or longer range.

Jazayeri had previously shown that humans performing this task tend to bias their responses toward the middle of the range. Here, they found that animals do the same. For example, if animals believed the interval would be short, and were given an interval of 800 milliseconds, the interval they produced was a little shorter than 800 milliseconds. Conversely, if they believed it would be longer, and were given the same 800-millisecond interval, they produced an interval a bit longer than 800 milliseconds.
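This regression toward the middle of the range is what an ideal Bayesian observer would produce. As a rough illustration (not the paper’s fitted model), the sketch below computes the Bayes least-squares estimate of an 800-millisecond measurement under each prior, assuming scalar, Weber-like timing noise; the 10 percent Weber fraction and the grid resolution are my own illustrative choices.

```python
import numpy as np

def bayes_estimate(measured_ms, prior_lo, prior_hi, weber=0.1):
    """Posterior-mean estimate of an interval under a uniform prior."""
    ts = np.linspace(prior_lo, prior_hi, 1000)  # candidate true intervals
    sd = weber * ts                             # noise grows with the interval
    likelihood = np.exp(-0.5 * ((measured_ms - ts) / sd) ** 2) / sd
    posterior = likelihood / likelihood.sum()   # uniform prior: just normalize
    return (ts * posterior).sum()               # Bayes least-squares estimate

print(bayes_estimate(800, 480, 800))    # Short context: below 800 ms
print(bayes_estimate(800, 800, 1200))   # Long context: above 800 ms
```

Because all of the prior mass sits at or below 800 milliseconds in the Short context, the posterior mean is pulled below the measurement; in the Long context it is pulled above it, just as the animals’ responses were.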

“Trials that were identical in almost every possible way, except for the animal’s belief, led to different behaviors,” Jazayeri says. “That was compelling experimental evidence that the animal is relying on its own belief.”

Once they had established that the animals relied on their prior beliefs, the researchers set out to find how the brain encodes prior beliefs to guide behavior. They recorded activity from about 1,400 neurons in a region of the frontal cortex, which they have previously shown is involved in timing.

During the “ready-set” epoch, the activity profile of each neuron evolved in its own way, and about 60 percent of the neurons had different activity patterns depending on the context (Short versus Long). To make sense of these signals, the researchers analyzed the evolution of neural activity across the entire population over time, and found that prior beliefs bias behavioral responses by warping the neural representation of time toward the middle of the expected range.

“We have never seen such a concrete example of how the brain uses prior experience to modify the neural dynamics by which it generates sequences of neural activities, to correct for its own imprecision. This is the unique strength of this paper: bringing together perception, neural dynamics, and Bayesian computation into a coherent framework, supported by both theory and measurements of behavior and neural activities,” says Mate Lengyel, a professor of computational neuroscience at Cambridge University, who was not involved in the study.

Embedded knowledge

Researchers believe that prior experiences change the strength of connections between neurons. The strength of these connections, also known as synapses, determines how neurons act upon one another and constrains the patterns of activity that a network of interconnected neurons can generate. The finding that prior experiences warp the patterns of neural activity provides a window onto how experience alters synaptic connections. “The brain seems to embed prior experiences into synaptic connections so that patterns of brain activity are appropriately biased,” Jazayeri says.

As an independent test of these ideas, the researchers developed a computer model consisting of a network of neurons that could perform the same ready-set-go task. Using techniques borrowed from machine learning, they were able to modify the synaptic connections and create a model that behaved like the animals.

These models are extremely valuable as they provide a substrate for the detailed analysis of the underlying mechanisms, a procedure known as “reverse-engineering.” Remarkably, reverse-engineering the model revealed that it solved the task the same way the monkeys’ brains did. The model also had a warped representation of time according to prior experience.

The researchers used the computer model to further dissect the underlying mechanisms using perturbation experiments that are currently impossible to do in the brain. Using this approach, they were able to show that unwarping the neural representations removes the bias in the behavior. This important finding validated the critical role of warping in Bayesian integration of prior knowledge.
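For a flavor of how such a model can be built, here is a hypothetical sketch in PyTorch: a small recurrent network receives ready and set pulses plus a context cue and is trained to produce a ramp that reaches threshold at the correct go time. The GRU architecture, the ramp-shaped target, and all hyperparameters are my own illustrative choices, not those of the study.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
T = 300  # time steps of 10 ms each, i.e., 3-second trials

def make_batch(n=64):
    """Ready-set-go trials. Inputs: ready pulse, set pulse, context cue.
    Target: a ramp that reaches 1 at the correct go time."""
    x = torch.zeros(n, T, 3)
    y = torch.zeros(n, T, 1)
    for i in range(n):
        short = torch.rand(1).item() < 0.5
        lo, hi = (48, 80) if short else (80, 120)    # 480-800 or 800-1,200 ms
        ts = torch.randint(lo, hi + 1, (1,)).item()  # sampled interval (steps)
        ready = 20
        x[i, ready, 0] = 1.0                         # "ready" flash
        x[i, ready + ts, 1] = 1.0                    # "set" flash
        x[i, :, 2] = 0.0 if short else 1.0           # context cue (Short/Long)
        ramp = (torch.arange(T) - (ready + ts)).float() / ts
        y[i, :, 0] = ramp.clamp(0.0, 1.0)            # hits 1 at go = ready + 2*ts
    return x, y

class ReadySetGoNet(nn.Module):
    def __init__(self, n_hidden=128):
        super().__init__()
        self.rnn = nn.GRU(3, n_hidden, batch_first=True)
        self.readout = nn.Linear(n_hidden, 1)
    def forward(self, x):
        h, _ = self.rnn(x)  # hidden states play the role of neural activity
        return self.readout(h)

model = ReadySetGoNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    x, y = make_batch()
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Once trained, the network’s hidden-state trajectories can be compared with recorded population activity, and perturbed in ways that are not yet possible in the brain.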

The researchers now plan to study how the brain builds up and slowly fine-tunes the synaptic connections that encode prior beliefs as an animal is learning to perform the timing task.

The research was funded by the Center for Sensorimotor Neural Engineering, the Netherlands Scientific Organization, the Marie Sklodowska Curie Reintegration Grant, the National Institutes of Health, the Sloan Foundation, the Klingenstein Foundation, the Simons Foundation, the McKnight Foundation, and the McGovern Institute.

How we make complex decisions

When making a complex decision, we often break the problem down into a series of smaller decisions. For example, when deciding how to treat a patient, a doctor may go through a hierarchy of steps — choosing a diagnostic test, interpreting the results, and then prescribing a medication.

Making hierarchical decisions is straightforward when the sequence of choices leads to the desired outcome. But when the result is unfavorable, it can be tough to decipher what went wrong. For example, if a patient doesn’t improve after treatment, there are many possible reasons why: Maybe the diagnostic test is accurate only 75 percent of the time, or perhaps the medication only works for 50 percent of the patients. To decide what to do next, the doctor must take these probabilities into account.
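With the numbers above, Bayes’ rule already says how the blame should be split. Here is a minimal worked sketch, under the simplifying (and hypothetical) assumption that a wrong diagnosis always leads to a failed treatment:

```python
# P(diagnosis correct) and P(drug works | diagnosis correct), from the example.
p_test_ok = 0.75
p_drug_ok = 0.50

p_fail_given_test_ok = 1 - p_drug_ok   # right diagnosis, but the drug failed
p_fail_given_test_bad = 1.0            # wrong diagnosis: assume treatment fails

# Total probability of no improvement, then Bayes' rule for each cause.
p_fail = p_test_ok * p_fail_given_test_ok + (1 - p_test_ok) * p_fail_given_test_bad
p_blame_test = (1 - p_test_ok) * p_fail_given_test_bad / p_fail

print(f"P(diagnosis was wrong | no improvement) = {p_blame_test:.2f}")  # 0.40
```

Even though the test errs only a quarter of the time, it still carries 40 percent of the blame for a failure, leaving 60 percent for the medication.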

In a new study, MIT neuroscientists explored how the brain reasons about probable causes of failure after a hierarchy of decisions. They discovered that the brain performs two computations using a distributed network of areas in the frontal cortex. First, the brain computes confidence over the outcome of each decision to figure out the most likely cause of a failure, and second, when it is not easy to discern the cause, the brain makes additional attempts to gain more confidence.

“Creating a hierarchy in one’s mind and navigating that hierarchy while reasoning about outcomes is one of the exciting frontiers of cognitive neuroscience,” says Mehrdad Jazayeri, the Robert A. Swanson Career Development Professor of Life Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

MIT graduate student Morteza Sarafyazd is the lead author of the paper, which appears in Science on May 16.

Hierarchical reasoning

Previous studies of decision-making in animal models have focused on relatively simple tasks. One line of research has focused on how the brain makes rapid decisions by evaluating momentary evidence. For example, a large body of work has characterized the neural substrates and mechanisms that allow animals to categorize unreliable stimuli on a trial-by-trial basis. Other research has focused on how the brain chooses among multiple options by relying on previous outcomes across multiple trials.

“These have been very fruitful lines of work,” Jazayeri says. “However, they really are the tip of the iceberg of what humans do when they make decisions. As soon as you put yourself in any real decision-making situation, be it choosing a partner, choosing a car, deciding whether to take this drug or not, these become really complicated decisions. Oftentimes there are many factors that influence the decision, and those factors can operate at different timescales.”

The MIT team devised a behavioral task that allowed them to study how the brain processes information at multiple timescales to make decisions. The basic design was that animals would make one of two eye movements depending on whether the time interval between two flashes of light was shorter or longer than 850 milliseconds.

A twist required the animals to solve the task through hierarchical reasoning: The rule that determined which of the two eye movements had to be made switched covertly after 10 to 28 trials. Therefore, to receive reward, the animals had to choose the correct rule, and then make the correct eye movement depending on the rule and interval. However, because the animals were not instructed about the rule switches, they could not straightforwardly determine whether an error was caused because they chose the wrong rule or because they misjudged the interval.

The researchers used this experimental design to probe the computational principles and neural mechanisms that support hierarchical reasoning. Theory and behavioral experiments in humans suggest that reasoning about the potential causes of errors depends in large part on the brain’s ability to measure the degree of confidence in each step of the process. “One of the things that is thought to be critical for hierarchical reasoning is to have some level of confidence about how likely it is that different nodes [of a hierarchy] could have led to the negative outcome,” Jazayeri says.

The researchers were able to study the effect of confidence by adjusting the difficulty of the task. In some trials, the interval between the two flashes was much shorter or longer than 850 milliseconds. These trials were relatively easy and afforded a high degree of confidence. In other trials, the animals were less confident in their judgments because the interval was closer to the boundary and difficult to discriminate.

As they had hypothesized, the researchers found that the animals’ behavior was influenced by their confidence in their performance. When the interval was easy to judge, the animals were much quicker to switch to the other rule when they found out they were wrong. When the interval was harder to judge, the animals were less confident in their performance and applied the same rule a few more times before switching.

“They know that they’re not confident, and they know that if they’re not confident, it’s not necessarily the case that the rule has changed. They know they might have made a mistake [in their interval judgment],” Jazayeri says.
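A toy Bayesian calculation captures why this strategy is rational. In the sketch below, which uses illustrative numbers rather than values fitted to the data, an error after an easy, high-confidence trial is strong evidence that the rule has switched, while an error after a hard trial is better explained by a misjudged interval:

```python
def p_rule_switched(p_interval_correct, p_switch_prior=0.1):
    """P(rule changed | error), via Bayes' rule, in a binary task where a
    switched rule plus a *correct* interval judgment produces an error."""
    p_error_given_switch = p_interval_correct
    p_error_given_no_switch = 1 - p_interval_correct
    num = p_switch_prior * p_error_given_switch
    den = num + (1 - p_switch_prior) * p_error_given_no_switch
    return num / den

print(p_rule_switched(0.95))  # easy trial: ~0.68, so switch quickly
print(p_rule_switched(0.60))  # hard trial: ~0.14, so repeat the rule first
```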

Decision-making circuit

By recording neural activity in the frontal cortex just after each trial was finished, the researchers were able to identify two regions that are key to hierarchical decision-making. They found that both of these regions, known as the anterior cingulate cortex (ACC) and dorsomedial frontal cortex (DMFC), became active after the animals were informed about an incorrect response. When the researchers analyzed the neural activity in relation to the animals’ behavior, it became clear that neurons in both areas signaled the animals’ belief about a possible rule switch. Notably, the activity related to the animals’ belief was “louder” when animals made a mistake after an easy trial, and after consecutive mistakes.

The researchers also found that while these areas showed similar patterns of activity, it was activity in the ACC in particular that predicted when the animal would switch rules, suggesting that ACC plays a central role in switching decision strategies. Indeed, the researchers found that direct manipulation of neural activity in ACC was sufficient to interfere with the animals’ rational behavior.

“There exists a distributed circuit in the frontal cortex involving these two areas, and they seem to be hierarchically organized, just like the task would demand,” Jazayeri says.

Daeyeol Lee, a professor of neuroscience, psychology, and psychiatry at Yale School of Medicine, says the study overcomes what has been a major obstacle in studying this kind of decision-making, namely, a lack of animal models to study the dynamics of brain activity at single-neuron resolution.

“Sarafyazd and Jazayeri have developed an elegant decision-making task that required animals to evaluate multiple types of evidence, and identified how the two separate regions in the medial frontal cortex are critically involved in handling different sources of errors in decision making,” says Lee, who was not involved in the research. “This study is a tour de force in both rigor and creativity, and peels off another layer of mystery about the prefrontal cortex.”