Modeling the social mind

Typically, it would take two graduate students to do the research that Setayesh Radkani is doing.

Driven by an insatiable curiosity about the human mind, she is working on two PhD thesis projects in two different cognitive neuroscience labs at MIT. For one, she is studying punishment as a social tool to influence others. For the other, she is uncovering the neural processes underlying social learning — that is, learning from others. By piecing together these two research programs, Radkani is hoping to gain a better understanding of the mechanisms underpinning social influence in the mind and brain.

Radkani lived in Iran for most of her life, growing up alongside her younger brother in Tehran. The two spent a lot of time together and have long been each other’s best friends. Her father is a civil engineer, and her mother is a midwife. Her parents always encouraged her to explore new things and follow her own path, even if it wasn’t quite what they imagined for her. And her uncle helped cultivate her sense of curiosity, teaching her to “always ask why” as a way to understand how the world works.

Growing up, Radkani most loved learning about human psychology and using math to model the world around her. But she thought it was impossible to combine her two interests. Prioritizing math, she pursued a bachelor’s degree in electrical engineering at the Sharif University of Technology in Iran.

Then, late in her undergraduate studies, Radkani took a psychology course and discovered the field of cognitive neuroscience, in which scientists mathematically model the human mind and brain. She also spent a summer working in a computational neuroscience lab at the Swiss Federal Institute of Technology in Lausanne. Seeing a way to combine her interests, she decided to pivot and pursue the subject in graduate school.

An experience leading a project in her engineering ethics course during her final year of undergrad further helped her discover some of the questions that would eventually form the basis of her PhD. The project investigated why some students cheat and how to change this.

“Through this project I learned how complicated it is to understand the reasons that people engage in immoral behavior, and even more complicated than that is how to devise policies and react in these situations in order to change people’s attitudes,” Radkani says. “It was this experience that made me realize that I’m interested in studying the human social and moral mind.”

She began looking into social cognitive neuroscience research and stumbled upon a relevant TED talk by Rebecca Saxe, the John W. Jarve Professor in Brain and Cognitive Sciences at MIT, who would eventually become one of Radkani’s research advisors. Radkani knew immediately that she wanted to work with Saxe. But she needed to first get into the BCS PhD program at MIT, a challenging obstacle given her minimal background in the field.

After two application cycles and a year’s worth of graduate courses in cognitive neuroscience, Radkani was accepted into the program. But to come to MIT, she had to leave her family behind. Coming from Iran, Radkani has a single-entry visa, making it difficult for her to travel outside the U.S. She hasn’t been able to visit her family since starting her PhD and won’t be able to until at least after she graduates. Her visa also limits her research contributions, restricting her from attending conferences outside the U.S. “That is definitely a huge burden on my education and on my mental health,” she says.

Still, Radkani is grateful to be at MIT, indulging her curiosity in the human social mind. And she’s thankful for her supportive family, who she calls over FaceTime every day.

Modeling how people think about punishment

In Saxe’s lab, Radkani is researching how people approach and react to punishment, through behavioral studies and neuroimaging. By synthesizing these findings, she’s developing a computational model of the mind that characterizes how people make decisions in situations involving punishment, such as when a parent disciplines a child, when someone punishes their romantic partner, or when the criminal justice system sentences a defendant. With this model, Radkani says she hopes to better understand “when and why punishment works in changing behavior and influencing beliefs about right and wrong, and why sometimes it fails.”

Punishment isn’t a new research topic in cognitive neuroscience, Radkani says, but in previous studies, scientists have often only focused on people’s behavior in punitive situations and haven’t considered the thought processes that underlie those behaviors. Characterizing these thought processes, though, is key to understanding whether punishment in a situation can be effective in changing people’s attitudes.

People bring their prior beliefs into a punitive situation. Apart from moral beliefs about the appropriateness of different behaviors, “you have beliefs about the characteristics of the people involved, and you have theories about their intentions and motivations,” Radkani says. “All those come together to determine what you do or how you are influenced by punishment,” given the circumstances. Punishers decide a suitable punishment based on their interpretation of the situation, in light of their beliefs. Targets of punishment then decide whether they’ll change their attitude as a result of the punishment, depending on their own beliefs. Even outside observers make decisions, choosing whether to keep or change their moral beliefs based on what they see.

To capture these decision-making processes, Radkani is developing a computational model of the mind for punitive situations. The model mathematically represents people’s beliefs and how they interact with certain features of the situation to shape their decisions. The model then predicts a punisher’s decisions, and how punishment will influence the target and observers. Through this model, Radkani will provide a foundational understanding of how people think in various punitive situations.
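
The article does not spell out the model's mathematics, but the flavor of such a model can be sketched. Below is a minimal, hypothetical illustration (not Radkani's actual model): a target of punishment holds prior beliefs about whether their act was wrong and about the punisher's motive, observes a punishment of some severity, and updates the belief that the act was wrong using Bayes' rule. The `update_beliefs` function and all numerical values are invented for illustration.

```python
import math

def gauss_pdf(x, mean, sd):
    """Gaussian likelihood of observing x under the given mean and spread."""
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def update_beliefs(severity, p_wrong_prior=0.5, p_just_prior=0.7):
    """Posterior P(the act was wrong) after observing a punishment severity.

    Assumed generative model (purely illustrative): a justice-motivated
    punisher scales severity with wrongness (mean 2.0 if wrong, 0.5 if not);
    a spiteful punisher punishes harshly either way (mean 2.0).
    """
    weights = []  # pairs of (was the act wrong?, prior x likelihood)
    for wrong, p_w in ((True, p_wrong_prior), (False, 1 - p_wrong_prior)):
        for just, p_j in ((True, p_just_prior), (False, 1 - p_just_prior)):
            mean = (2.0 if wrong else 0.5) if just else 2.0
            weights.append((wrong, p_w * p_j * gauss_pdf(severity, mean, 0.5)))
    total = sum(w for _, w in weights)
    return sum(w for wrong, w in weights if wrong) / total

# The same harsh punishment is more persuasive when the punisher is believed
# to be motivated by justice than when they are believed to be spiteful.
print(round(update_beliefs(2.0, p_just_prior=0.9), 2))   # ~0.90
print(round(update_beliefs(2.0, p_just_prior=0.1), 2))   # ~0.53
```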

Researching the neural mechanisms of social learning

In parallel, working in the lab of Professor Mehrdad Jazayeri, Radkani is studying social learning, uncovering its underlying neural processes. Through social learning, people learn from other people’s experiences and decisions, and incorporate this socially acquired knowledge into their own decisions or beliefs.

Humans are extraordinary in their social learning abilities, but our primary form of learning, shared by all other animals, is learning from self-experience. To investigate how learning from others is similar to or different from learning from our own experiences, Radkani has designed a two-player video game that involves both types of learning. During the game, she and her collaborators in Jazayeri’s lab record neural activity in the brain. By analyzing these neural measurements, they plan to uncover the computations carried out by neural circuits during social learning, and compare those to learning from self-experience.
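
The article does not describe the learning rule the lab uses, but the contrast between the two forms of learning can be illustrated with a toy value-learning agent that updates its estimate of an option's payoff from its own outcomes and, possibly with a different learning rate, from outcomes it observes its partner receive. Everything in this sketch is an assumption made for illustration.

```python
def update_value(value, outcome, lr_self=0.3, lr_social=0.1, observed=False):
    """One delta-rule update of an option's estimated payoff.

    observed=True means the outcome happened to the partner (social learning);
    otherwise it is the agent's own experience.
    """
    lr = lr_social if observed else lr_self
    return value + lr * (outcome - value)

v = 0.0
for outcome, observed in [(1.0, False), (1.0, True), (0.0, True), (1.0, False)]:
    v = update_value(v, outcome, observed=observed)
    source = "partner" if observed else "self"
    print(f"{source:7s} outcome={outcome:.0f} -> value estimate {v:.2f}")
```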

Radkani first became curious about this comparison as a way to understand why people sometimes draw contrasting conclusions from very similar situations. “For example, if I get Covid from going to a restaurant, I’ll blame the restaurant and say it was not clean,” Radkani says. “But if I hear the same thing happen to my friend, I’ll say it’s because they were not careful.” Radkani wanted to know the root causes of this mismatch, in which other people’s experiences affect our beliefs and judgments differently from our own similar experiences, particularly because it can lead to “errors that color the way that we judge other people,” she says.

By combining her two research projects, Radkani hopes to better understand how social influence works, particularly in moral situations. From there, she has a slew of research questions that she’s eager to investigate, including: How do people choose who to trust? And which types of people tend to be the most influential? As Radkani’s research grows, so does her curiosity.

Tracking time in the brain

By studying how primates mentally measure time, scientists at MIT’s McGovern Institute have discovered that the brain runs an internal clock whose speed is set by prior experience. In new experiences, the brain closely tracks how elapsed time intervals differ from its preset expectation—indicating that for the brain, time is relative.

The findings, reported September 15, 2021, in the journal Neuron, help explain how the brain uses past experience to make predictions—a powerful strategy for navigating a complex and ever-changing world. The research was led by McGovern Investigator Mehrdad Jazayeri, who is working to understand how the brain forms internal models of the world.

Internal clock

Sensory information tells us a lot about our environment, but the brain needs more than data, Jazayeri says. Internal models are vital for understanding the relationships between things, making generalizations, and interpreting and acting on our perceptions. They help us focus on what’s most important and make predictions about our surroundings, as well as the consequences of our actions. “To be efficient in learning about the world and interacting with the world, we need those predictions,” Jazayeri says. When we enter a new grocery store, for example, we don’t have to check every aisle for the peanut butter, because we know it is likely to be near the jam. Likewise, an experienced racquetball player knows how the ball will move when her racquet hits it a certain way.

Jazayeri’s team was interested in how the brain might make predictions about time. Previously, his team showed how neurons in the frontal cortex—a part of the brain involved in planning—can tick off the passage of time like a metronome. By training monkeys to use an eye movement to indicate the duration of time that separated two flashes of light, they found that cells that track time during this task cooperate to form an adjustable internal clock. Those cells generate a pattern of activity that can be drawn out to measure long time intervals or compressed to track shorter ones. The changes in these signal dynamics reflect elapsed time so precisely that by monitoring the right neurons, Jazayeri’s team can determine exactly how fast a monkey’s internal clock is running.
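
A minimal sketch of the adjustable-clock idea described above: the same ramping activity pattern is played out quickly for short intervals and slowly for long ones, so the ramp's slope acts as the clock's speed. The numbers below are illustrative, not fitted to the recordings.

```python
import numpy as np

def ramp(interval_ms, dt_ms=10.0, threshold=1.0):
    """Activity ramping from 0 to `threshold` over `interval_ms`."""
    t = np.arange(0.0, interval_ms + dt_ms, dt_ms)
    return t, threshold * t / interval_ms

for interval in (600, 1000):                 # a short and a long interval (ms)
    t, activity = ramp(interval)
    speed = activity[1] - activity[0]        # increment per 10 ms time step
    print(f"{interval} ms interval -> clock speed {speed:.4f} per step")
```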

Predictive processing

Nicolas Meirhaeghe, a graduate student in Mehrdad Jazayeri’s lab, studies how we plan and perform movements in the face of uncertainty. He is pictured here as part of the McGovern Institute 20th anniversary “Rising Stars” photo series. Photo: Michael Spencer

For the new study, graduate student Nicolas Meirhaeghe designed a series of experiments in which the delay between the two flashes of light changed as the monkeys repeated the task. Sometimes the flashes were separated by just a fraction of a second, sometimes the delay was a bit longer. He found that the time-keeping activity pattern in the frontal cortex occurred over different time scales as the monkeys came to expect delays of different durations. As the duration of the delay fluctuated, the brain appeared to take all prior experience into account, setting the clock to measure the average of those times in anticipation of the next interval.

The behavior of the neurons told the researchers that as a monkey waited for a new set of light cues, it already had an expectation about how long the delay would be. To make such a prediction, Meirhaeghe says, “the brain has no choice but to use all the different values that you perceive from your experience, average those out, and use this as the expectation.”

By analyzing neuronal behavior during their experiments, Jazayeri and Meirhaeghe determined that the brain’s signals were not encoding the full time elapsed between light cues, but instead how that time differed from the predicted time. Calculating this prediction error enabled the monkeys to report back how much time had elapsed.
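
Under the simplifying assumption that the expectation is just a running average of previously experienced delays, the prediction-error idea described above can be sketched in a few lines (the study's actual analyses are more detailed):

```python
def expectation_and_errors(delays_ms):
    """Yield (expected delay, prediction error) for each newly experienced delay."""
    seen = []
    for d in delays_ms:
        expected = sum(seen) / len(seen) if seen else d   # average of prior experience
        yield expected, d - expected                      # signed prediction error
        seen.append(d)

for expected, err in expectation_and_errors([800, 900, 700, 1000]):
    print(f"expected {expected:6.1f} ms, error {err:+7.1f} ms")
```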

Neuroscientists have suspected that this strategy, known as predictive processing, is widely used by the brain—although until now there has been little evidence of it outside early sensory areas. “You have a lot of stimuli that are coming from the environment, but lots of stimuli are actually predictable,” Meirhaeghe says. “The idea is that your brain is learning through experience patterns in the environment, and is subtracting your expectation from the incoming signal. What the brain actually processes in the end is the result of this subtraction.”

Finally, the researchers investigated the brain’s ability to update its expectations about time. After presenting monkeys with delays within a particular time range, they switched without warning to times that fluctuated within a new range. The brain responded quickly, updating its internal clock. “If you look inside the brain, after about 100 trials the monkeys have already figured out that these statistics have changed,” says Jazayeri.

It took longer, however—as many as 1,000 trials—for the monkeys to change their behavior in response to the change. “It seems like this prediction, and updating the internal model about the statistics of the world, is way faster than our muscles are able to implement,” Jazayeri says. “Our motor system is kind of lagging behind what our cognitive abilities tell us.” This makes sense, he says, because not every change in the environment merits a change in behavior. “You don’t want to be distracted by every small thing that deviates from your prediction. You want to pay attention to things that have a certain level of consistency.”

Mehrdad Jazayeri wants to know how our brains model the external world

Much of our daily life requires us to make inferences about the world around us. As you think about which direction your tennis opponent will hit the ball, or try to figure out why your child is crying, your brain is searching for answers about possibilities that are not directly accessible through sensory experiences.

MIT Associate Professor Mehrdad Jazayeri has devoted most of his career to exploring how the brain creates internal representations, or models, of the external world to make intelligent inferences about hidden states of the world.

“The one question I am most interested in is how does the brain form internal models of the external world? Studying inference is really a powerful way of gaining insight into these internal models,” says Jazayeri, who recently earned tenure in the Department of Brain and Cognitive Sciences and is also a member of MIT’s McGovern Institute for Brain Research.

Using a variety of approaches, including detailed analysis of behavior, direct recording of activity of neurons in the brain, and mathematical modeling, he has discovered how the brain builds models of statistical regularities in the environment. He has also found circuits and mechanisms that enable the brain to capture the causal relationships between observations and outcomes.

An unusual path

Jazayeri, who has been on the faculty at MIT since 2013, took an unusual path to a career in neuroscience. Growing up in Tehran, Iran, he was an indifferent student until his second year of high school, when he got interested in solving challenging geometry puzzles. He also started programming with the ZX Spectrum, an early 8-bit personal computer that his father had given him.

During high school, he was chosen to train for Iran’s first ever National Physics Olympiad team, but when he failed to make it to the international team, he became discouraged and temporarily gave up on the idea of going to college. Eventually, he participated in the University National Entrance Exam and was admitted to the electrical engineering department at Sharif University of Technology.

Jazayeri didn’t enjoy his four years of college education. The experience mostly helped him realize that he was not meant to become an engineer. “I realized that I’m not an inventor. What inspires me is the process of discovery,” he says. “I really like to figure things out, not build things, so those four years were not very inspiring.”

After graduating from college, Jazayeri spent a few years working on a banana farm near the Caspian Sea, along with two friends. He describes those years as among the best and most formative of his life. He would wake by 4 a.m., work on the farm until late afternoon, and spend the rest of the day thinking and reading. One topic he read about with great interest was neuroscience, which led him a few years later to apply to graduate school.

He immigrated to Canada and was admitted to the University of Toronto, where he earned a master’s degree in physiology and neuroscience. While there, he worked on building small circuit models that would mimic the activity of neurons in the hippocampus.

From there, Jazayeri went on to New York University to earn a PhD in neuroscience, where he studied how signals in the visual cortex support perception and decision-making. “I was less interested in how the visual cortex encodes the external world,” he says. “I wanted to understand how the rest of the brain decodes the signals in visual cortex, which is, in effect, an inference problem.”

He continued pursuing his interest in the neurobiology of inference as a postdoc at the University of Washington, where he investigated how the brain uses temporal regularities in the environment to estimate time intervals, and uses knowledge about those intervals to plan for future actions.

Building internal models to make inferences

Inference is the process of drawing conclusions based on information that is not readily available. Making rich inferences from scarce data is one of humans’ core mental capacities, one that is central to what makes us the most intelligent species on Earth. To do so, our nervous system builds internal models of the external world, and those models help us think through possibilities without directly experiencing them.

The problem of inference presents itself in many behavioral settings.

“Our nervous system makes all sorts of internal models for different behavioral goals, some that capture the statistical regularities in the environment, some that link potential causes to effects, some that reflect relationships between entities, and some that enable us to think about others,” Jazayeri says.

Jazayeri’s lab at MIT is made up of a group of cognitive scientists, electrophysiologists, engineers, and physicists with a shared interest in understanding the nature of internal models in the brain and how those models enable us to make inferences in different behavioral tasks.

Early work in the lab focused on a simple timing task to examine the problem of statistical inference, that is, how we use statistical regularities in the environment to make accurate inferences. First, they found that the brain coordinates movements in time using a dynamic process, akin to an analog timer. They also found that the neural representation of time in the frontal cortex is continuously calibrated based on prior experience, so that we can make more accurate time estimates in the presence of uncertainty.
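
One standard way to formalize this calibration is a Bayesian estimator that pulls a noisy measurement of an interval toward the mean of previously experienced intervals. The Gaussian assumptions and noise level below are illustrative choices, not the lab's fitted model.

```python
import numpy as np

def bayes_estimate(measurement_ms, prior_intervals_ms, measurement_sd=80.0):
    """Combine a noisy interval measurement with a prior built from experience."""
    prior_mean = np.mean(prior_intervals_ms)
    prior_var = np.var(prior_intervals_ms)
    w = prior_var / (prior_var + measurement_sd ** 2)   # weight on the measurement
    return w * measurement_ms + (1 - w) * prior_mean

prior = [700, 800, 900, 1000, 1100]   # previously experienced intervals (ms)
for m in (700, 900, 1100):
    # Estimates are pulled toward the prior mean, most strongly at the extremes.
    print(f"measured {m} ms -> estimate {bayes_estimate(m, prior):.0f} ms")
```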

Later, the lab developed a complex decision-making task to examine the neural basis of causal inference, or the process of deducing a hidden cause based on its effects. In a paper that appeared in 2019, Jazayeri and his colleagues identified a hierarchical and distributed brain circuit in the frontal cortex that helps the brain to determine the most probable cause of failure within a hierarchy of decisions.

More recently, the lab has extended its investigation to other behavioral domains, including relational inference and social inference. Relational inference is about situating an ambiguous observation using relational memory. For example, coming out of a subway in a new neighborhood, we may use our knowledge of the relationship between visible landmarks to infer which way is north. Social inference, which is extremely difficult to study, involves deducing other people’s beliefs and goals based on their actions.

Along with studies in human volunteers and animal models, Jazayeri’s lab develops computational models based on neural networks, which help them test different possible hypotheses about how the brain performs specific tasks. By comparing the activity of those models with neural activity data from animals, the researchers can gain insight into how the brain actually performs a particular type of inference task.
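
One common way to make that comparison concrete, though not necessarily the lab's exact pipeline, is to fit a linear map from the network's units to each recorded neuron and score it on held-out data. In this illustrative sketch, all data are synthetic placeholders.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic placeholders: activity of 20 model units and 10 recorded neurons
# over 500 time points; in practice both would come from the same task.
model_units = rng.normal(size=(500, 20))
neurons = model_units @ rng.normal(size=(20, 10)) + rng.normal(scale=0.5, size=(500, 10))

X_tr, X_te, Y_tr, Y_te = train_test_split(model_units, neurons,
                                          test_size=0.25, random_state=0)
fit = Ridge(alpha=1.0).fit(X_tr, Y_tr)
print(f"held-out R^2: {fit.score(X_te, Y_te):.2f}")   # variance explained in the neurons
```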

“My main interest is in how the brain makes inferences about the world based on the neural signals,” Jazayeri says. “All of my work is about looking inside the brain, measuring signals, and using mathematical tools to try to understand how those signals are manifestations of an internal model within the brain.”

Storytelling brings MIT neuroscience community together

When the coronavirus pandemic shut down offices, labs, and classrooms across the MIT campus last spring, many members of the MIT community found it challenging to remain connected to one another in meaningful ways. Motivated by a desire to bring the neuroscience community back together, the McGovern Institute hosted a virtual storytelling competition featuring a selection of postdocs, grad students, and staff from across the institute.

“This has been an unprecedented year for us all,” says McGovern Institute Director Robert Desimone. “It has been twenty years since Pat and Lore McGovern founded the McGovern Institute, and despite the challenges this anniversary year has brought to our community, I have been inspired by the strength and perseverance demonstrated by our faculty, postdocs, students and staff. The resilience of this neuroscience community – and MIT as a whole – is indeed something to celebrate.”

The McGovern Institute had initially planned to hold a large 20th anniversary celebration in the atrium of Building 46 in the fall of 2020, but the pandemic made a gathering of this size impossible. The institute instead held a series of virtual events, including the November 12 story slam on the theme of resilience.

Nine MIT School of Science professors receive tenure for 2020

Beginning July 1, nine faculty members in the MIT School of Science have been granted tenure by MIT. They are appointed in the departments of Brain and Cognitive Sciences, Chemistry, Mathematics, and Physics.

Physicist Ibrahim Cisse investigates living cells to reveal and study collective behaviors and biomolecular phase transitions at the resolution of single molecules. The results of his work help determine how disruptions in genes can cause diseases like cancer. Cisse joined the Department of Physics in 2014 and now holds a joint appointment with the Department of Biology. His education includes a bachelor’s degree in physics from North Carolina Central University, concluded in 2004, and a doctoral degree in physics from the University of Illinois at Urbana-Champaign, achieved in 2009. He followed his PhD with a postdoc at the École Normale Supérieure of Paris and a research specialist appointment at the Howard Hughes Medical Institute’s Janelia Research Campus.

Jörn Dunkel is a physical applied mathematician. His research focuses on the mathematical description of complex nonlinear phenomena in a variety of fields, especially biophysics. The models he develops help predict dynamical behaviors and structure formation processes in developmental biology, fluid dynamics, and even knot strengths for sailing, rock climbing and construction. He joined the Department of Mathematics in 2013 after completing postdoctoral appointments at Oxford University and Cambridge University. He received diplomas in physics and mathematics from Humboldt University of Berlin in 2004 and 2005, respectively. The University of Augsburg awarded Dunkel a PhD in statistical physics in 2008.

A cognitive neuroscientist, Mehrdad Jazayeri studies the neurobiological underpinnings of mental functions such as planning, inference, and learning by analyzing brain signals in the lab and using theoretical and computational models, including artificial neural networks. He joined the Department of Brain and Cognitive Sciences in 2013. He achieved a BS in electrical engineering from the Sharif University of Technology in 1994, an MS in physiology at the University of Toronto in 2001, and a PhD in neuroscience from New York University in 2007. Prior to joining MIT, he was a postdoc at the University of Washington. Jazayeri is also an investigator at the McGovern Institute for Brain Research.

Yen-Jie Lee is an experimental particle physicist in the field of proton-proton and heavy-ion physics. Utilizing the Large Hadron Collider, Lee explores matter in extreme conditions, providing new insight into strong interactions and what might have existed and occurred at the beginning of the universe and in distant star cores. His work on jets and heavy flavor particle production in nuclei collisions improves understanding of the quark-gluon plasma, predicted by quantum chromodynamics (QCD) calculations, and the structure of heavy nuclei. He also pioneered studies of high-density QCD with electron-positron annihilation data. Lee joined the Department of Physics in 2013 after a fellowship at CERN and postdoc research at the Laboratory for Nuclear Science at MIT. His bachelor’s and master’s degrees were awarded by the National Taiwan University in 2002 and 2004, respectively, and his doctoral degree by MIT in 2011. Lee is a member of the Laboratory for Nuclear Science.

Josh McDermott investigates the sense of hearing. His research addresses both human and machine audition using tools from experimental psychology, engineering, and neuroscience. McDermott hopes to better understand the neural computation underlying human hearing, to improve devices that assist the hearing impaired, and to enhance machine interpretation of sounds. Prior to joining MIT’s Department of Brain and Cognitive Sciences, he was awarded a BA in 1998 in brain and cognitive sciences by Harvard University, a master’s degree in computational neuroscience in 2000 by University College London, and a PhD in brain and cognitive sciences in 2006 by MIT. Between his doctoral time at MIT and returning as a faculty member, he was a postdoc at the University of Minnesota and New York University, and a visiting scientist at Oxford University. McDermott is also an associate investigator at the McGovern Institute for Brain Research and an investigator in the Center for Brains, Minds and Machines.

Solving environmental challenges by studying and manipulating chemical reactions is the focus of Yogesh Surendranath’s research. Using chemistry, he works at the molecular level to understand how to efficiently interconvert chemical and electrical energy. His fundamental studies aim to improve energy storage technologies, such as batteries, fuel cells, and electrolyzers, that can be used to meet future energy demand with reduced carbon emissions. Surendranath joined the Department of Chemistry in 2013 after a postdoc at the University of California at Berkeley. He completed his PhD at MIT in 2011 and his BS at the University of Virginia in 2006. Surendranath is also a collaborator in the MIT Energy Initiative.

A theoretical astrophysicist, Mark Vogelsberger is interested in large-scale structures of the universe, such as galaxy formation. He combines observational data, theoretical models, and simulations that require high-performance supercomputers to improve and develop detailed models that simulate galaxy diversity, clustering, and their properties, including a plethora of physical effects like magnetic fields, cosmic dust, and thermal conduction. Vogelsberger also uses simulations to generate scenarios involving alternative forms of dark matter. He joined the Department of Physics in 2014 after a postdoc at the Harvard-Smithsonian Center for Astrophysics. Vogelsberger is a 2006 graduate of the University of Mainz undergraduate program in physics, and a 2010 doctoral graduate of the University of Munich and the Max Planck Institute for Astrophysics. He is also a principal investigator in the MIT Kavli Institute for Astrophysics and Space Research.

Adam Willard is a theoretical chemist with research interests that fall across molecular biology, renewable energy, and material science. He uses theory, modeling, and molecular simulation to study the disorder that is inherent to systems over nanometer-length scales. His recent work has highlighted the fundamental and unexpected role that such disorder plays in phenomena such as microscopic energy transport in semiconducting plastics, ion transport in batteries, and protein hydration. Joining the Department of Chemistry in 2013, Willard was formerly a postdoc at Lawrence Berkeley National Laboratory and then the University of Texas at Austin. He holds a PhD in chemistry from the University of California at Berkeley, achieved in 2009, and a BS in chemistry and mathematics from the University of Puget Sound, granted in 2003.

Lindley Winslow seeks to understand the fundamental particles that shaped the evolution of our universe. As an experimental particle and nuclear physicist, she develops novel detection technology to search for axion dark matter and a proposed nuclear decay that makes more matter than antimatter. She started her faculty position in the Department of Physics in 2015 following a postdoc at MIT and a subsequent faculty position at the University of California at Los Angeles. Winslow achieved her BA in physics and astronomy in 2001 and PhD in physics in 2008, both at the University of California at Berkeley. She is also a member of the Laboratory for Nuclear Science.

Empowering faculty partnerships across the globe

MIT faculty share their creative and technical talent on campus as well as across the globe, compounding the Institute’s impact through strong international partnerships. Thanks to the MIT Global Seed Funds (GSF) program, managed by the MIT International Science and Technology Initiatives (MISTI), more of these faculty members will be able to build on these relationships to develop ideas and create new projects.

“This MISTI fund was extremely helpful in consolidating our collaboration and has been the start of a long-term interaction between the two teams,” says 2017 GSF awardee Mehrdad Jazayeri, associate professor of brain and cognitive sciences and investigator at the McGovern Institute for Brain Research. “We have already submitted multiple abstracts to conferences together, mapped out several ongoing projects, and secured international funding thanks to the preliminary progress this seed fund enabled.”

This year, the 28 funds that comprise MISTI GSF received 232 MIT applications. Over $2.3 million was awarded to 107 projects from 23 departments across the entire Institute. This brings the amount awarded to $22 million over the 12-year life of the program. Besides supporting faculty, these funds also provide meaningful educational opportunities for students. The majority of GSF teams include students from MIT and international collaborators, bolstering both their research portfolios and global experience.

“This project has had important impact on my grad student’s education and development. She was able to apply techniques she has learned to a new and challenging system, mentor an international student, participate in a major international meeting, and visit CEA,” says Professor of Chemistry Elizabeth Nolan, a 2017 GSF awardee.

On top of these academic and research goals, students are actively broadening their cultural experience and scope. “The environment at CEA differs enormously from MIT because it is a national lab and because lab structure and graduate education in France are markedly different than at MIT,” Nolan continues. “At CEA, she had the opportunity to present research to distinguished international colleagues.”

These impactful partnerships unite faculty teams behind common goals to tackle worldwide challenges, helping to develop solutions that would not be possible without international collaboration. 2017 GSF winner Emilio Bizzi, professor emeritus of brain and cognitive sciences and emeritus investigator at the McGovern Institute, articulated the advantage of combining these individual skills within a high-level team. “The collaboration among researchers was valuable in sharing knowledge, experience, skills and techniques … as well as offering the probability of future development of systems to aid in rehabilitation of patients suffering TBI.”

The research opportunities that grow from these seed funds often lead to published papers and additional funding leveraged from early results. The next call for proposals will be in mid-May.

MISTI creates applied international learning opportunities for MIT students that increase their ability to understand and address real-world problems. MISTI collaborates with partners at MIT and beyond, serving as a vital nexus of international activity and bolstering the Institute’s research mission by promoting collaborations between MIT faculty members and their counterparts abroad.

McGovern lab manager creates art inspired by science

Michal De-Medonsa, technical associate and manager of the Jazayeri lab, created a large wood mosaic for her lab. We asked Michal to tell us a bit about the mosaic, her inspiration, and how in the world she found the time to create such an exquisitely detailed piece of art.

______

Jazayeri lab manager Michal De-Medonsa holds her wood mosaic entitled “JazLab.” Photo: Caitlin Cunningham

Describe this piece of art for us.

To make a piece this big (63″ x 15″), I needed several boards of padauk wood. I could have just etched each board as a whole unit and glued the 13 or so boards to each other, but I didn’t like the aesthetic. The grain and color within each board would look beautiful, but the line between each board would become obvious, segmented, and jarring when contrasted with the uniformity within each board. Instead, I cut about 18 separate squares out of each board, shuffled all 217 pieces around, and glued them to one another in a mosaic style with a larger pattern (inspired by my grandfather’s work in granite mosaics).

What does this mosaic mean to you?

Once every piece was shuffled, the lines between single squares were certainly visible, but as a feature, were far less salient than had the full boards been glued to one another. As I was working on the piece, I was thinking about how the same concept holds true in society. Even if there is diversity within a larger piece (an institution, for example), there is a tendency for groups to form within the larger piece (like a full board), and diversity becomes separated. This isn’t a criticism of any institution; it is human nature to form in-groups. It’s subconscious (so perhaps the criticism is that we, as a society, don’t give that behavior enough thought and try to ameliorate our reflex to group with those who are “like us”). The grain of the wood is uniform, oriented in the same direction, the two different cutting patterns create a larger pattern within the piece, and there are smaller patterns between and within single pieces. I love creating and finding patterns in my art (and life). Alfred North Whitehead wrote that “understanding is the apperception of pattern as such.” True, I believe, in science, art, and the humanities. What a great goal – to understand.

Tell us about the name of this piece.

Every large piece I make is inspired by the people I make it for, and is therefore named after them. This piece is called JazLab. Having lived around the world, and being a descendant of a nomadic people, I don’t consider any one place home, but am inspired by every place I’ve lived. In all of my work, you can see elements of my Jewish heritage, antiquity, the Middle East, Africa, and now MIT.

How has MIT influenced your art?

MIT has influenced me in the most obvious way MIT could influence anyone – technology. Before this series, I made very small versions of this type of work, designing everything on a piece of paper with a pencil and a ruler, and making every cut by hand. Each of those small squares would take ~2 hours (depending on the design), and I was limited to softer woods.

Since coming to MIT, I learned that I had access to the Hobby Shop with a huge array of power tools and software. I began designing my patterns on the computer and used power tools to make the cuts. I actually struggled a lot with using the tech – not because it was hard (which it really is when you just start out), but rather because it felt like I was somehow “cheating.” How is this still art? And although this is something I still think about often, I’ve tried to look at it in this way: every generation, in their time, used the most advanced technology. The beauty and value of the piece don’t come from how many bruises, cuts, and blisters your machinery gave you, or whether you scraped the wood out with your nails, but rather, once you were given a tool, what did you decide to do with it? My pieces still involve a huge amount of hand-on-material work, but I am working on accepting that using technology in no way devalues the work.

Given your busy schedule with the Jazayeri lab, how did you find the time to create this piece of art?

I took advantage of any free hour I could. Two days out of the week, the Hobby Shop is open until 9 p.m., and I would additionally go every Saturday. For the parts that didn’t require the shop (adjusting each piece individually with a carving knife, assembling them, even most of the gluing) I would just work at home – often very late into the night.

______

JazLab is on display in the Jazayeri lab in MIT Bldg 46.

Controlling our internal world

Olympic skaters can launch, perform multiple aerial turns, and land gracefully, anticipating imperfections and reacting quickly to correct course. To make such elegant movements, the brain must have an internal model of the body to control, predict, and make almost instantaneous adjustments to motor commands. So-called “internal models” are a fundamental concept in engineering and have long been suggested to underlie control of movement by the brain, but what about processes that occur in the absence of movement, such as contemplation, anticipation, planning?

Using a novel combination of task design, data analysis, and modeling, MIT neuroscientist Mehrdad Jazayeri and colleagues now provide compelling evidence that the core elements of an internal model also control purely mental processes in a study published in Nature Neuroscience.

“During my thesis I realized that I’m interested, not so much in how our senses react to sensory inputs, but instead in how my internal model of the world helps me make sense of those inputs,” says Jazayeri, the Robert A. Swanson Career Development Professor of Life Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Indeed, understanding the building blocks exerting control of such mental processes could help to paint a better picture of disruptions in mental disorders, such as schizophrenia.

Internal models for mental processes

Scientists working on the motor system have long theorized that the brain overcomes noisy and slow signals using an accurate internal model of the body. This internal model serves three critical functions: it provides motor commands to control movement, simulates upcoming movement to overcome delays, and uses feedback to make real-time adjustments.
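
Those three functions can be illustrated with a toy discrete-time loop: a controller issues commands based on a predicted state, a forward-model simulator updates that prediction immediately, and delayed feedback corrects the prediction when it arrives. The gains, delay, and noise below are arbitrary illustrative values, not parameters from the study.

```python
import random

def track_goal(goal=1.0, steps=15, delay=3, gain=0.3, k_fb=0.5, noise=0.05):
    random.seed(1)
    true_state, predicted = 0.0, 0.0
    state_hist, pred_hist = [], []
    for t in range(steps):
        command = gain * (goal - predicted)              # controller acts on the prediction
        true_state += command + random.gauss(0, noise)   # world responds, with unmodeled noise
        predicted += command                             # simulator assumes a perfect response
        state_hist.append(true_state)
        pred_hist.append(predicted)
        if t >= delay:                                   # sensory feedback arrives `delay` steps late
            err = state_hist[t - delay] - pred_hist[t - delay]
            predicted += k_fb * err                      # feedback nudges the prediction
        print(f"t={t:2d}  true={true_state:5.2f}  predicted={predicted:5.2f}")

track_goal()
```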

“The framework that we currently use to think about how the brain controls our actions is one that we have borrowed from robotics: we use controllers, simulators, and sensory measurements to control machines and train operators,” explains Reza Shadmehr, a professor at the Johns Hopkins School of Medicine who was not involved with the study. “That framework has largely influenced how we imagine our brain controlling our movements.”

Jazayeri and colleagues wondered whether the same framework might explain the control principles governing mental states in the absence of any movement.

“When we’re simply sitting, thoughts and images run through our heads and, fundamental to intellect, we can control them,” explains lead author Seth Egger, a former postdoctoral associate in the Jazayeri lab and now at Duke University.

“We wanted to find out what’s happening between our ears when we are engaged in thinking,” says Egger.

Imagine, for example, a sign language interpreter keeping up with a fast speaker. To track speech accurately, the interpreter continuously anticipates where the speech is going, rapidly adjusting when the actual words deviate from the prediction. The interpreter could be using an internal model to anticipate upcoming words, and using feedback to make adjustments on the fly.

1-2-3…Go

Hypothesizing about how the components of an internal model function in scenarios such as translation is one thing. Cleanly measuring and proving the existence of these elements is much more complicated as the activity of the controller, simulator, and feedback are intertwined. To tackle this problem, Jazayeri and colleagues devised a clever task with primate models in which the controller, simulator, and feedback act at distinct times.

In this task, called “1-2-3-Go,” the animal sees three consecutive flashes (1, 2, and 3) that form a regular beat, and learns to make an eye movement (Go) when it anticipates that the fourth flash should occur. During the task, researchers measured neural activity in a region of the frontal cortex they had previously linked to the timing of movement.
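
In miniature, and purely as an illustration of what the task demands, the animal's computation amounts to measuring the beat from the first three flashes and projecting it forward one interval:

```python
def anticipate_fourth_flash(t1_ms, t2_ms, t3_ms):
    """Estimate the beat from three flash times and project it one step ahead."""
    beat = ((t2_ms - t1_ms) + (t3_ms - t2_ms)) / 2.0   # average inter-flash interval
    return t3_ms + beat                                # anticipated time of the 4th flash

print(anticipate_fourth_flash(0, 750, 1500))   # -> 2250.0 ms: when "Go" should happen
```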

Jazayeri and colleagues had clear predictions about when the controller would act (between the third flash and “Go”) and when feedback would be engaged (with each flash of light). The key surprise came when the researchers saw evidence for the simulator anticipating the third flash. This unexpected neural activity had dynamics resembling those of the controller, but was not associated with a response. In other words, the researchers uncovered a covert plan that functions as the simulator, thus uncovering all three elements of an internal model for a mental process: the planning and anticipation of “Go” in the “1-2-3-Go” sequence.

“Jazayeri’s work is important because it demonstrates how to study mental simulation in animals,” explains Shadmehr, “and where in the brain that simulation is taking place.”

Having found how and where to measure an internal model in action, Jazayeri and colleagues now plan to ask whether these control strategies can explain how primates effortlessly generalize their knowledge from one behavioral context to another. For example, how does an interpreter rapidly adjust when someone with widely different speech habits takes the podium? This line of investigation promises to shed light on high-level mental capacities of the primate brain that simpler animals seem to lack, that go awry in mental disorders, and that designers of artificial intelligence systems so fondly seek.

Mehrdad Jazayeri and Hazel Sive awarded 2019 School of Science teaching prizes

The School of Science has announced that the recipients of the school’s 2019 Teaching Prizes for Graduate and Undergraduate Education are Mehrdad Jazayeri and Hazel Sive. Nominated by peers and students, the faculty members chosen to receive these prizes are selected to acknowledge their exemplary efforts in teaching graduate and undergraduate students.

Mehrdad Jazayeri, an associate professor in the Department of Brain and Cognitive Sciences and investigator at the McGovern Institute for Brain Research, is awarded the prize for graduate education for 9.014 (Quantitative Methods and Computational Models in Neuroscience). Earlier this year, he was recognized for excellence in graduate teaching by the Department of Brain and Cognitive Sciences and won a Graduate Student Council teaching award in 2016. In their nomination letters, peers and students alike remarked that he displays not only great knowledge, but extraordinary skill in teaching, most notably by ensuring everyone learns the material. Jazayeri does so by considering students’ diverse backgrounds and contextualizing subject material to relatable applications in various fields of science according to students’ interests. He also improves and adjusts the course content, pace, and intensity in response to student input via surveys administered throughout the semester.

Hazel Sive, a professor in the Department of Biology, member of the Whitehead Institute for Biomedical Research, and associate member of the Broad Institute of MIT and Harvard, is awarded the prize for undergraduate education. A MacVicar Faculty Fellow, she has been recognized with MIT’s highest undergraduate teaching award in the past, as well as the 2003 School of Science Teaching Prize for Graduate Education. Exemplified by her nominations, Sive’s laudable teaching career at MIT continues to receive praise from undergraduate students who take her classes. In recent post-course evaluations, students commended her exemplary and dedicated efforts to her field and to their education.

The School of Science welcomes nominations for the teaching prize in the spring semester of each academic year. Nominations can be submitted at the school’s website.

Do thoughts have mass?

As part of our Ask the Brain series, we received the question, “Do thoughts have mass?” The following is a guest blog post by Michal De-Medonsa, technical associate and manager of the Jazayeri lab, who tapped into her background in philosophy to answer this intriguing question.

_____

Jazayeri lab manager (and philosopher) Michal De-Medonsa.

To answer the question, “Do thoughts have mass?” we must, like any good philosopher, define something that already has a definition – “thoughts.”

Logically, we can assert that thoughts are either metaphysical or physical (beyond that, we run out of options). If our definition of thought is metaphysical, it is safe to say that metaphysical thoughts do not have mass, since they are by definition not physical, and mass is a property of physical things. However, if we define a thought as a physical thing, it becomes a little trickier to determine whether or not it has mass.

A physical definition of thoughts falls into (at least) two subgroups – physical processes and physical parts. Take driving a car, for example – a parts definition describes the doors, motor, etc. and has mass. A process definition of a car being driven, turning the wheel, moving from point A to point B, etc. does not have mass. The process of driving is a physical process that involves moving physical matter, but we wouldn’t say that the act of driving has mass. The car itself, however, is an example of physical matter, and as any cyclist in the city of Boston is well aware – cars have mass. It’s clear that if we define a thought as a process, it does not have mass, and if we define a thought as physical parts, it does have mass – so, which one is it? In order to resolve our issue, we have to be incredibly precise with our definition. Is a thought a process or parts? That is, is a thought more like driving or more like a car?

In order to resolve our issue, we have to be incredibly precise with our definition of the word thought.

Both physical definitions (process and parts) have merit. For a parts definition, we can look at what is required for a thought – neurons, electrical signals, neurochemicals, etc. This type of definition becomes quite imprecise and limiting. It doesn’t seem too problematic to say that the neurons, neurochemicals, etc. are themselves the thought, but this style of definition starts to fall apart when we try to include all the parts involved (e.g. blood flow, connective tissue, outside stimuli). When we look at a face, the stimuli received by the visual cortex are part of the thought – is the face part of a thought? When we look at our phone, is the phone itself part of a thought? A parts definition either needs an arbitrary limit, or we end up having to include all possible parts involved in the thought, ending up with an incredibly convoluted and effectively useless definition.

A process definition is more versatile and precise, and it allows us to include all the physical parts in a more elegant way. We can now say that all the moving parts are included in the process without saying that they themselves are the thought. That is, we can say blood flow is included in the process without saying that blood flow itself is part of the thought. It doesn’t sound ridiculous to say that a phone is part of the thought process. If we subscribe to the parts definition, however, we’re forced to say that part of the mass of a thought comes from the mass of a phone. A process definition allows us to be precise without being convoluted, and allows us to include outside influences without committing to absurd definitions.

Typical of a philosophical endeavor, we’re left with more questions and no simple answer. However, we can walk away with three conclusions.

  1. A process definition of “thought” allows for elegance and the involvement of factors outside the “vacuum” of our physical body; however, we lose out on some function by not describing a thought by its physical parts.
  2. The colloquial definition of “thought” breaks down once we invite a philosopher over to break it down, but this is to be expected – when we try to break something down, sometimes, it will break down. What we should be aware of is that if we want to use the word in a rigorous scientific framework, we need a rigorous scientific definition.
  3. Most importantly, it’s clear that we need to put a lot of work into defining exactly what we mean by “thought” – a job well suited to a scientifically-informed philosopher.

Michal De-Medonsa earned her bachelor’s degree in neuroscience and philosophy from Johns Hopkins University in 2012 and went on to receive her master’s degree in history and philosophy of science at the University of Pittsburgh in 2015. She joined the Jazayeri lab in 2018 as a lab manager/technician and spends most of her free time rock climbing, doing standup comedy, and woodworking at the MIT Hobby Shop. 

_____

Do you have a question for The Brain? Ask it here.