A voice for change — in Spanish

Jessica Chomik-Morales had a bicultural childhood. She was born in Boca Raton, Florida, where her parents had come seeking a better education for their daughter than she would have access to in Paraguay. But when she wasn’t in school, Chomik-Morales was back in that small South American country with her family. One of the consequences of growing up in two cultures was an early interest in human behavior. “I was always in observer mode,” Chomik-Morales says, recalling how she would tune in to the nuances of social interactions in order to adapt and fit in.

Today, that fascination with human behavior is driving Chomik-Morales as she works with MIT professor of cognitive science Laura Schulz and Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience and an investigator at the McGovern Institute for Brain Research, as a post-baccalaureate research scholar, using functional brain imaging to investigate how the brain recognizes and understands causal relationships. Since arriving at MIT last fall, she’s worked with study volunteers to collect functional MRI (fMRI) scans and used computational approaches to interpret the images. She’s also refined her own goals for the future.

Jessica Chomik-Morales (right) with postdoctoral associate Héctor De Jesús-Cortés. Photo: Steph Stevens

She plans to pursue a career in clinical neuropsychology, which will merge her curiosity about the biological basis of behavior with a strong desire to work directly with people. “I’d love to see what kind of questions I could answer about the neural mechanisms driving outlier behavior using fMRI coupled with cognitive assessment,” she says. And she’s confident that her experience in MIT’s two-year post-baccalaureate program will help her get there. “It’s given me the tools I need, and the techniques and methods and good scientific practice,” she says. “I’m learning that all here. And I think it’s going to make me a more successful scientist in grad school.”

The road to MIT

Chomik-Morales’s path to MIT was not a straightforward trajectory through the U.S. school system. When her mom, and later her dad, were unable to return to the U.S., she started eighth grade in Asunción, the capital city. It did not go well. She spent nearly every afternoon in the principal’s office, and soon her father was encouraging her to return to the United States. “You are an American,” he told her. “You have a right to the educational system there.”

Back in Florida, Chomik-Morales became a dedicated student, even while she worked assorted jobs and shuffled between the homes of families who were willing to host her. “I had to grow up,” she says. “My parents are sacrificing everything just so I can have a chance to be somebody. People don’t get out of Paraguay often, because there aren’t opportunities and it’s a very poor country. I was given an opportunity, and if I waste that, then that is disrespect not only to my parents, but to my lineage, to my country.”

As she graduated from high school and went on to earn a degree in cognitive neuroscience at Florida Atlantic University, Chomik-Morales found herself experiencing things that were completely foreign to her family. Though she spoke daily with her mom via WhatsApp, it was hard to share what she was learning in school or what she was doing in the lab. And while they celebrated her academic achievements, Chomik-Morales knew they didn’t really understand them. “Neither of my parents went to college,” she says. “My mom told me that she never thought twice about learning about neuroscience. She had this misconception that it was something that she would never be able to digest.”

Chomik-Morales believes that the wonders of neuroscience are for everybody. But she also knows that Spanish speakers like her mom have few opportunities to hear the kinds of accessible, engaging stories that might draw them in. So she’s working to change that. With support from the McGovern Institute and the National Science Foundation-funded Science and Technology Center for Brains, Minds, and Machines, Chomik-Morales is hosting and producing a weekly podcast called “Mi Última Neurona” (“My Last Neuron”), which brings conversations with neuroscientists to Spanish speakers around the world.

Listeners hear how researchers at MIT and other institutions are exploring big concepts like consciousness and neurodegeneration, and learn about the approaches they use to study the brain in humans, animals, and computational models. Chomik-Morales wants listeners to get to know neuroscientists on a personal level too, so she talks with her guests about their career paths, their lives outside the lab, and often, their experiences as immigrants in the United States.

After recording an interview with Chomik-Morales that delved into science, art, and the educational system in his home country of Peru, postdoc Arturo Deza thinks “Mi Última Neurona” has the potential to inspire Spanish speakers in Latin America, as well as immigrants in other countries. “Even if you’re not a scientist, it’s really going to captivate you and you’re going to get something out of it,” he says. To that point, Chomik-Morales’s mother has quickly become an enthusiastic listener, and has even begun seeking out resources to learn more about the brain on her own.

Chomik-Morales hopes the stories her guests share on “Mi Última Neurona” will inspire a future generation of Hispanic neuroscientists. She also wants listeners to know that a career in science doesn’t have to mean leaving their country behind. “Gain whatever you need to gain from outside, and then, if it’s what you desire, you’re able to go back and help your own community,” she says. With “Mi Última Neurona,” she adds, she feels she is giving back to her roots.

How do illusions trick the brain?

As part of our Ask the Brain series, Jarrod Hicks, a graduate student in Josh McDermott’s lab, and Dana Boebinger, a postdoctoral researcher at the University of Rochester (and former graduate student in Josh McDermott’s lab), answer the question, “How do illusions trick the brain?”

_____

Graduate student Jarrod Hicks studies how the brain processes sound. Photo: M.E. Megan Hicks

Imagine you’re a detective. Your job is to visit a crime scene, observe some evidence, and figure out what happened. However, there are often multiple stories that could have produced the evidence you observe. Thus, to solve the crime, you can’t just rely on the evidence in front of you – you have to use your knowledge about the world to make your best guess about the most likely sequence of events. For example, if you discover cat hair at the crime scene, your prior knowledge about the world tells you it’s unlikely that a cat is the culprit. Instead, a more likely explanation is that the culprit might have a pet cat.

Although it might not seem like it, this kind of detective work is what your brain is doing all the time. As your senses send information to your brain about the world around you, your brain plays the role of detective, piecing together each bit of information to figure out what is happening in the world. The information from your senses usually paints a pretty good picture of things, but sometimes when this information is incomplete or unclear, your brain is left to fill in the missing pieces with its best guess of what should be there. This means that what you experience isn’t actually what’s out there in the world, but rather what your brain thinks is out there. The consequence of this is that your perception of the world can depend on your experience and assumptions.

Optical illusions

Optical illusions are a great way of showing how our expectations and assumptions affect what we perceive. For example, look at the squares labeled “A” and “B” in the image below.

Checkershadow illusion. Image: Edward H. Adelson

Is one of them lighter than the other? Although most people would agree that the square labeled “B” is much lighter than the one labeled “A,” the two squares are actually the exact same color. You perceive the squares differently because your brain knows, from experience, that shadows tend to make things appear darker than what they actually are. So, despite the squares being physically identical, your brain thinks “B” should be lighter.

Auditory illusions

Tricks of perception are not limited to optical illusions. There are also several dramatic examples of how our expectations influence what we hear. For example, listen to the mystery sound below. What do you hear?

Mystery sound

Because you’ve probably never heard a sound quite like this before, your brain has very little idea about what to expect. So, although you clearly hear something, it might be very difficult to make out exactly what that something is. This mystery sound is something called sine-wave speech, and what you’re hearing is essentially a very degraded sound of someone speaking.

Now listen to a “clean” version of this speech in the audio clip below:

Clean speech

You probably hear a person saying, “the floor was quite slippery.” Now listen to the mystery sound above again. After listening to the original audio, your brain has a strong expectation about what you should hear when you listen to the mystery sound again. Even though you’re hearing the exact same mystery sound as before, you experience it completely differently. (Audio clips courtesy of University of Sussex).


Dana Boebinger describes the science of illusions in this McGovern Minute.

Subjective perceptions

These illusions have been specifically designed by scientists to fool your brain and reveal principles of perception. However, there are plenty of real-life situations in which your perceptions strongly depend on expectations and assumptions. For example, imagine you’re watching TV when someone begins to speak to you from another room. Because the noise from the TV makes it difficult to hear the person speaking, your brain might have to fill in the gaps to understand what’s being said. In this case, different expectations about what is being said could cause you to hear completely different things.

Which phrase do you hear?

Listen to the clip below to hear a repeating loop of speech. As the sound plays, try to listen for one of the phrases listed in teal below.

Because the audio is somewhat ambiguous, the phrase you perceive depends on which phrase you listen for. So even though it’s the exact same audio each time, you can perceive something totally different! (Note: the original audio recording is from a football game in which the fans were chanting, “that is embarrassing!”)

Illusions like the ones above are great reminders of how subjective our perceptions can be. In order to make sense of the messy information coming in from our senses, our brains are constantly trying to fill in the blanks with their best guess of what’s out there. Because of this guesswork, our perceptions depend on our experiences, leading each of us to perceive and interact with the world in a way that’s uniquely ours.

Jarrod Hicks is a PhD candidate in the Department of Brain and Cognitive Sciences at MIT working with Josh McDermott in the Laboratory for Computational Audition. He studies sound segregation, a key aspect of real-world hearing in which a sound source of interest is estimated amid a mixture of competing sources. He is broadly interested in teaching/outreach, psychophysics, computational approaches to represent stimulus spaces, and neural coding of high-level sensory representations.

_____

Do you have a question for The Brain? Ask it here.

What words can convey

From search engines to voice assistants, computers are getting better at understanding what we mean. That’s thanks to language processing programs that make sense of a staggering number of words, without ever being told explicitly what those words mean. Such programs infer meaning instead through statistics—and a new study reveals that this computational approach can assign many kinds of information to a single word, just like the human brain.

The study, published April 14, 2022, in the journal Nature Human Behaviour, was co-led by Gabriel Grand, a graduate student at MIT’s Computer Science and Artificial Intelligence Laboratory, and Idan Blank, an assistant professor at the University of California, Los Angeles, and supervised by McGovern Investigator Ev Fedorenko, a cognitive neuroscientist who studies how the human brain uses and understands language, and Francisco Pereira at the National Institute of Mental Health. Fedorenko says the rich knowledge her team was able to find within computational language models demonstrates just how much can be learned about the world through language alone.

Early language models

The research team began its analysis of statistics-based language processing models in 2015, when the approach was new. Such models derive meaning by analyzing how often pairs of words co-occur in texts and using those relationships to assess the similarities of words’ meanings. For example, such a program might conclude that “bread” and “apple” are more similar to one another than they are to “notebook,” because “bread” and “apple” are often found in proximity to words like “eat” or “snack,” whereas “notebook” is not.
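The co-occurrence idea described above is simple enough to sketch in a few lines of Python. The toy corpus, window size, and helper names below are invented for illustration; real distributional models are trained on billions of words, but the principle is the same: words that keep the same company end up with similar count vectors.

```python
from collections import Counter
from math import sqrt

# Toy corpus: "bread" and "apple" co-occur with "eat"/"snack"; "notebook" does not.
corpus = (
    "i eat bread for a snack . i eat an apple as a snack . "
    "i write notes in a notebook . the notebook sits on the desk ."
).split()

WINDOW = 3  # words within +/-3 positions count as co-occurring

def cooccurrence_vector(target, corpus, window=WINDOW):
    """Count how often each context word appears near the target word."""
    counts = Counter()
    for i, word in enumerate(corpus):
        if word == target:
            lo, hi = max(0, i - window), min(len(corpus), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    counts[corpus[j]] += 1
    return counts

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

bread, apple, notebook = (cooccurrence_vector(w, corpus) for w in ("bread", "apple", "notebook"))
# "bread" lands closer to "apple" than to "notebook", because the first two
# share context words like "eat" and "snack".
print(cosine(bread, apple), cosine(bread, notebook))
```

On this tiny corpus the bread–apple similarity comes out well above the bread–notebook similarity, mirroring the example in the text.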

The models were clearly good at measuring words’ overall similarity to one another. But most words carry many kinds of information, and their similarities depend on which qualities are being evaluated. “Humans can come up with all these different mental scales to help organize their understanding of words,” explains Grand, a former undergraduate researcher in the Fedorenko lab. For example, he says, “dolphins and alligators might be similar in size, but one is much more dangerous than the other.”

Grand and Idan Blank, who was then a graduate student at the McGovern Institute, wanted to know whether the models captured that same nuance. And if they did, how was the information organized?

To learn how the information in such a model stacked up to humans’ understanding of words, the team first asked human volunteers to score words along many different scales: Were the concepts those words conveyed big or small, safe or dangerous, wet or dry? Then, having mapped where people position different words along these scales, they looked to see whether language processing models did the same.

Grand explains that distributional semantic models use co-occurrence statistics to organize words into a huge, multidimensional matrix. The more similar words are to one another, the closer they are within that space. The dimensions of the space are vast, and there is no inherent meaning built into its structure. “In these word embeddings, there are hundreds of dimensions, and we have no idea what any dimension means,” he says. “We’re really trying to peer into this black box and say, ‘is there structure in here?’”

Word vectors in the category ‘animals’ (blue circles) are orthogonally projected (light-blue lines) onto the feature subspace for ‘size’ (red line), defined as the vector difference between the word vectors for ‘large’ and ‘small’ (red circles). The three dimensions in this figure are arbitrary and were chosen via principal component analysis to enhance visualization (the original GloVe word embedding has 300 dimensions, and projection happens in that space). Image: Fedorenko lab

Specifically, they asked whether the semantic scales they had asked their volunteers to use were represented in the model. So they looked to see where words in the space lined up along vectors defined by the extremes of those scales. Where did dolphins and tigers fall on a line from “big” to “small,” for example? And were they closer together along that line than they were on a line representing danger (“safe” to “dangerous”)?
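The projection step can be sketched in a few lines of Python. The tiny three-dimensional vectors below are made up for illustration (real GloVe embeddings have 300 dimensions, and these are not real embedding values), but the arithmetic mirrors the approach: define a scale axis as the vector difference between its two extremes, then project each word onto that axis.

```python
import numpy as np

# Hypothetical toy "embedding" (NOT real GloVe vectors), just to illustrate
# scoring a word along a semantic scale by projecting its vector onto the
# axis running from one scale extreme to the other.
emb = {
    "large":   np.array([ 1.0, 0.0, 0.1]),
    "small":   np.array([-1.0, 0.0, 0.1]),
    "tiger":   np.array([ 0.8, 0.9, 0.2]),
    "dolphin": np.array([ 0.6, 0.2, 0.3]),
    "mouse":   np.array([-0.9, 0.4, 0.1]),
}

def scale_score(word, neg="small", pos="large"):
    """Project a word vector onto the axis from `neg` to `pos`.

    Higher scores mean the word sits closer to the `pos` end of the scale.
    """
    axis = emb[pos] - emb[neg]
    return float(np.dot(emb[word], axis) / np.linalg.norm(axis))

# On this toy axis, tiger and dolphin land nearer "large" than mouse does.
for w in ("tiger", "dolphin", "mouse"):
    print(w, round(scale_score(w), 2))
```

Swapping in a different pair of extremes (say, “safe” and “dangerous”) reuses the same projection to read off an entirely different ordering of the same words, which is exactly the nuance the researchers were probing for.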

Across more than 50 sets of word categories and semantic scales, they found that the model had organized words very much like the human volunteers. Dolphins and tigers were judged to be similar in terms of size, but far apart on scales measuring danger or wetness. The model had organized the words in a way that represented many kinds of meaning—and it had done so based entirely on the words’ co-occurrences.

That, Fedorenko says, tells us something about the power of language. “The fact that we can recover so much of this rich semantic information from just these simple word co-occurrence statistics suggests that this is one very powerful source of learning about things that you may not even have direct perceptual experience with.”

Unexpected synergy

This story originally appeared in the Spring 2022 issue of BrainScan.

***

Recent results from cognitive neuroscientist Nancy Kanwisher’s lab have left her pondering the role of music in human evolution. “Music is this big mystery,” she says. “Every human society that’s been studied has music. No other animals have music in the way that humans do. And nobody knows why humans have music at all. This has been a puzzle for centuries.”

MIT neuroscientist and McGovern Investigator Nancy Kanwisher. Photo: Jussi Puikkonen/KNAW

Some biologists and anthropologists have reasoned that since there’s no clear evolutionary advantage for humans’ unique ability to create and respond to music, these abilities must have emerged when humans began to repurpose other brain functions. To appreciate song, they’ve proposed, we draw on parts of the brain dedicated to speech and language. It makes sense, Kanwisher says: music and language are both complex, uniquely human ways of communicating. “It’s very sensible to think that there might be common machinery,” she says. “But there isn’t.”

That conclusion is based on her team’s 2015 discovery of neurons in the human brain that respond only to music. They first became clued in to these music-sensitive cells when they asked volunteers to listen to a diverse panel of sounds inside an MRI scanner. Functional brain imaging picked up signals suggesting that some neurons were specialized to detect only music, but the broad map of brain activity generated by fMRI couldn’t pinpoint those cells.

Singing in the brain

Kanwisher’s team wanted to know more, but neuroscientists who study the human brain can’t always probe its circuitry with the exactitude of their colleagues who study the brains of mice or rats. They can’t insert electrodes into human brains to monitor the neurons they’re interested in. Neurosurgeons, however, sometimes do — and thus, collaborating with neurosurgeons has created unique opportunities for Kanwisher and other McGovern investigators to learn about the human brain.

Kanwisher’s team collaborated with clinicians at Albany Medical Center to work with patients who are undergoing monitoring prior to surgical treatment for epilepsy. Before operating, a neurosurgeon must identify the spot in their patient’s brain that is triggering seizures. This means inserting electrodes into the brain to monitor specific areas over a few days or weeks. The electrodes they implant pinpoint activity far more precisely, both spatially and temporally, than an MRI. And with patients’ permission, researchers like Kanwisher can take advantage of the information they collect.

“The intracranial recording from human brains that’s possible from collaboration with neurosurgeons is extremely precious to us,” Kanwisher says. “All of the research is kind of opportunistic, on whatever the surgeons are doing for clinical reasons. But sometimes we get really lucky and the electrodes are right in an area where we have long-standing scientific questions that those data can answer.”

Song-selective neural population (yellow) in the “inflated” human brain. Image: Sam Norman-Haignere

The unexpected discovery of song-specific neurons, led by postdoctoral researcher Sam Norman-Haignere, who is now an assistant professor at the University of Rochester Medical Center, emerged from such a collaboration. The team worked with patients at Albany Medical Center whose presurgical monitoring encompassed the auditory-processing part of the brain that they were curious about. Sure enough, certain electrodes picked up activity only when patients were listening to music. The data indicated that in some of those locations, it didn’t matter what kind of music was playing: the cells fired in response to a range of sounds that included flute solos, heavy metal, and rap. But other locations became active exclusively in response to vocal music. “We did not have that hypothesis at all,” Kanwisher says. “It really took our breath away.”

When that discovery is considered along with findings from McGovern colleague Ev Fedorenko, who has shown that the brain’s language-processing regions do not respond to music, Kanwisher says it’s now clear that music and language are segregated in the human brain. The origins of our unique appreciation for music, however, remain a mystery.

Clinical advantage

Clinical collaborations are also important to researchers in Ann Graybiel’s lab, who rely largely on model organisms like mice and rats to investigate the fine details of neural circuits. Working with clinicians helps keep them focused on answering questions that matter to patients.

In studying how the brain makes decisions, the Graybiel lab has zeroed in on connections that are vital for making choices that carry both positive and negative consequences. This is the kind of decision-making that you might call on when considering whether to accept a job that pays more but will be more demanding than your current position, for example. In experiments with rats, mice, and monkeys, they’ve identified different neurons dedicated to triggering opposing actions (“approach” or “avoid”) in these complex decision-making tasks. They’ve also found evidence that both age and stress change how the brain deals with these kinds of decisions.

In work led by former Graybiel lab research scientist Ken-ichi Amemori, they have worked with psychiatrist Diego Pizzagalli at McLean Hospital to learn what happens in the human brain when people make these complex decisions.

By monitoring brain activity as people made decisions inside an MRI scanner, the team identified regions that lit up when people chose to “approach” or “avoid.” They also found parallel activity patterns in monkeys that performed the same task, supporting the relevance of animal studies to understanding this circuitry.

In people diagnosed with major depression, however, the brain responded to approach-avoidance conflict somewhat differently. Certain areas were not activated as strongly as they were in people without depression, regardless of whether subjects ultimately chose to “approach” or “avoid.” The team suspects that some of these differences might reflect a stronger tendency toward avoidance, in which potential rewards are less influential for decision-making, while an individual is experiencing major depression.

The brain activity associated with approach-avoidance conflict in humans appears to align with what Graybiel’s team has seen in mice, although clinical imaging cannot reveal nearly as much detail about the involved circuits. Graybiel says that gives her confidence that what they are learning in the lab, where they can manipulate and study neural circuits with precision, is important. “I think there’s no doubt that this is relevant to humans,” she says. “I want to get as far into the mechanisms as possible, because maybe we’ll hit something that’s therapeutically valuable, or maybe we will really get an intuition about how parts of the brain work. I think that will help people.”

Singing in the brain


For the first time, MIT neuroscientists have identified a population of neurons in the human brain that lights up when we hear singing, but not other types of music.

These neurons, found in the auditory cortex, appear to respond to the specific combination of voice and music, but not to either regular speech or instrumental music. Exactly what they are doing is unknown and will require more work to uncover, the researchers say.

“The work provides evidence for relatively fine-grained segregation of function within the auditory cortex, in a way that aligns with an intuitive distinction within music,” says Sam Norman-Haignere, a former MIT postdoc who is now an assistant professor of neuroscience at the University of Rochester Medical Center.

The work builds on a 2015 study in which the same research team used functional magnetic resonance imaging (fMRI) to identify a population of neurons in the brain’s auditory cortex that responds specifically to music. In the new work, the researchers used recordings of electrical activity taken at the surface of the brain, which gave them much more precise information than fMRI.

“There’s one population of neurons that responds to singing, and then very nearby is another population of neurons that responds broadly to lots of music. At the scale of fMRI, they’re so close that you can’t disentangle them, but with intracranial recordings, we get additional resolution, and that’s what we believe allowed us to pick them apart,” says Norman-Haignere.

Norman-Haignere is the lead author of the study, which appears today in the journal Current Biology. Josh McDermott, an associate professor of brain and cognitive sciences, and Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience, both members of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds and Machines (CBMM), are the senior authors of the study.

Neural recordings

In their 2015 study, the researchers used fMRI to scan the brains of participants as they listened to a collection of 165 sounds, including different types of speech and music, as well as everyday sounds such as finger tapping or a dog barking. For that study, the researchers devised a novel method of analyzing the fMRI data, which allowed them to identify six neural populations with different response patterns, including the music-selective population and another population that responds selectively to speech.

In the new study, the researchers hoped to obtain higher-resolution data using a technique known as electrocorticography (ECoG), which allows electrical activity to be recorded by electrodes placed inside the skull. This offers a much more precise picture of electrical activity in the brain compared to fMRI, which measures blood flow in the brain as a proxy of neuron activity.

“With most of the methods in human cognitive neuroscience, you can’t see the neural representations,” Kanwisher says. “Most of the kind of data we can collect can tell us that here’s a piece of brain that does something, but that’s pretty limited. We want to know what’s represented in there.”

Electrocorticography cannot typically be performed in humans because it is an invasive procedure, but it is often used to monitor patients with epilepsy who are about to undergo surgery to treat their seizures. Patients are monitored over several days so that doctors can determine where their seizures are originating before operating. During that time, if patients agree, they can participate in studies that involve measuring their brain activity while performing certain tasks. For this study, the MIT team was able to gather data from 15 participants over several years.

For those participants, the researchers played the same set of 165 sounds that they used in the earlier fMRI study. The location of each patient’s electrodes was determined by their surgeons, so some did not pick up any responses to auditory input, but many did. Using a novel statistical analysis that they developed, the researchers were able to infer the types of neural populations that produced the data that were recorded by each electrode.

“When we applied this method to this data set, this neural response pattern popped out that only responded to singing,” Norman-Haignere says. “This was a finding we really didn’t expect, so it very much justifies the whole point of the approach, which is to reveal potentially novel things you might not think to look for.”

That song-specific population of neurons had very weak responses to either speech or instrumental music, and therefore is distinct from the music- and speech-selective populations identified in their 2015 study.

Music in the brain

In the second part of their study, the researchers devised a mathematical method to combine the data from the intracranial recordings with the fMRI data from their 2015 study. Because fMRI can cover a much larger portion of the brain, this allowed them to determine more precisely the locations of the neural populations that respond to singing.

“This way of combining ECoG and fMRI is a significant methodological advance,” McDermott says. “A lot of people have been doing ECoG over the past 10 or 15 years, but it’s always been limited by this issue of the sparsity of the recordings. Sam is really the first person who figured out how to combine the improved resolution of the electrode recordings with fMRI data to get better localization of the overall responses.”

The song-specific hotspot that they found is located at the top of the temporal lobe, near regions that are selective for language and music. That location suggests that the song-specific population may be responding to features such as the perceived pitch, or the interaction between words and perceived pitch, before sending information to other parts of the brain for further processing, the researchers say.

The researchers now hope to learn more about what aspects of singing drive the responses of these neurons. They are also working with MIT Professor Rebecca Saxe’s lab to study whether infants have music-selective areas, in hopes of learning more about when and how these brain regions develop.

The research was funded by the National Institutes of Health, the U.S. Army Research Office, the National Science Foundation, the NSF Science and Technology Center for Brains, Minds, and Machines, the Fondazione Neurone, the Howard Hughes Medical Institute, and the Kristin R. Pressman and Jessica J. Pourian ’13 Fund at MIT.

National Academy of Sciences honors cognitive neuroscientist Nancy Kanwisher

MIT neuroscientist and McGovern Investigator Nancy Kanwisher. Photo: Jussi Puikkonen/KNAW

The National Academy of Sciences (NAS) has announced today that Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience in MIT’s Department of Brain and Cognitive Sciences, has received the 2022 NAS Award in the Neurosciences for her “pioneering research into the functional organization of the human brain.” The $25,000 prize, established by the Fidia Research Foundation, is presented every three years to recognize “extraordinary contributions to the neuroscience fields.”

“I am deeply honored to receive this award from the NAS,” says Kanwisher, who is also an investigator in MIT’s McGovern Institute and a member of the Center for Brains, Minds and Machines. “It has been a profound privilege, and a total blast, to watch the human brain in action as these data began to reveal an initial picture of the organization of the human mind. But the biggest joy has been the opportunity to work with the incredible group of talented young scientists who actually did the work that this award recognizes.”

A window into the mind

Kanwisher is best known for her landmark insights into how humans recognize and process faces. Psychology had long suggested that recognizing a face might be distinct from general object recognition. But Kanwisher galvanized the field in 1997 with her seminal discovery that the human brain contains a small region specialized to respond only to faces. The region, which Kanwisher termed the fusiform face area (FFA), became activated when subjects viewed images of faces in an MRI scanner, but not when they looked at scrambled faces or control stimuli.

Since her 1997 discovery (now the most highly cited manuscript in its area), Kanwisher and her students have applied similar methods to find brain specializations for the recognition of scenes, the mental states of others, language, and music. Taken together, her research provides a compelling glimpse into the architecture of the brain, and, ultimately, what makes us human.

“Nancy’s work over the past two decades has argued that many aspects of human cognition are supported by specialized neural circuitry, a conclusion that stands in contrast to our subjective sense of a singular mental experience,” says McGovern Institute Director Robert Desimone. “She has made profound contributions to the psychological and cognitive sciences and I am delighted that the National Academy of Sciences has recognized her outstanding achievements.”

One-in-a-million mentor

Beyond the lab, Kanwisher has a reputation as a tireless communicator and mentor who is actively engaged in the policy implications of brain research. The statistics speak for themselves: her 2014 TED talk, “A neural portrait of the human mind,” has been viewed more than a million times online, and her introductory MIT OCW course on the human brain has generated more than nine million views on YouTube.

Nancy Kanwisher works with researchers from her lab in MIT’s Martinos Imaging Center. Photo: Kris Brewer

Kanwisher also has an exceptional track record in training women scientists who have gone on to successful independent research careers, in many cases becoming prominent figures in their own right.

“Nancy is the one-in-a-million mentor, who is always skeptical of your ideas and your arguments, but immensely confident of your worth,” says Rebecca Saxe, John W. Jarve (1978) Professor of Brain and Cognitive Sciences, investigator at the McGovern Institute, and associate dean of MIT’s School of Science. Saxe was a graduate student in Kanwisher’s lab where she earned her PhD in cognitive neuroscience in 2003. “She has such authentic curiosity,” Saxe adds. “It’s infectious and sustaining. Working with Nancy was a constant reminder of why I wanted to be a scientist.”

The NAS will present Kanwisher with the award during its annual meeting on May 1, 2022 in Washington, DC. The event will be webcast live. Kanwisher plans to direct her prize funds to the non-profit organization Malengo, established by a former student, which provides quality undergraduate education to individuals who would otherwise not be able to afford it.

Babies can tell who has close relationships based on one clue: saliva

Learning to navigate social relationships is a skill that is critical for surviving in human societies. For babies and young children, that means learning who they can count on to take care of them.

MIT neuroscientists have now identified a specific signal that young children and even babies use to determine whether two people have a strong relationship and a mutual obligation to help each other: whether those two people kiss, share food, or have other interactions that involve sharing saliva.

In a new study, the researchers showed that babies expect people who share saliva to come to one another’s aid when one person is in distress, much more so than when people share toys or interact in other ways that do not involve saliva exchange. The findings suggest that babies can use these cues to try to figure out who around them is most likely to offer help, the researchers say.

“Babies don’t know in advance which relationships are the close and morally obligating ones, so they have to have some way of learning this by looking at what happens around them,” says Rebecca Saxe, the John W. Jarve Professor of Brain and Cognitive Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the new study.

MIT postdoc Ashley Thomas is the lead author of the study, which appears today in Science. Brandon Woo, a Harvard University graduate student; Daniel Nettle, a professor of behavioral science at Newcastle University; and Elizabeth Spelke, a professor of psychology at Harvard, are also authors of the paper.

Sharing saliva

In human societies, people typically distinguish between “thick” and “thin” relationships. Thick relationships, usually found between family members, feature strong levels of attachment, obligation, and mutual responsiveness. Anthropologists have also observed that people in thick relationships are more willing to share bodily fluids such as saliva.

“That inspired both the question of whether infants distinguish between those types of relationships, and whether saliva sharing might be a really good cue they could use to recognize them,” Thomas says.

To study those questions, the researchers observed toddlers (16.5 to 18.5 months) and babies (8.5 to 10 months) as they watched interactions between human actors and puppets. In the first set of experiments, a puppet shared an orange with one actor, then tossed a ball back and forth with a different actor.

After the children watched these initial interactions, the researchers observed the children’s reactions when the puppet showed distress while sitting between the two actors. Based on an earlier study of nonhuman primates, the researchers hypothesized that babies would look first at the person whom they expected to help. That study showed that when baby monkeys cry, other members of the troop look to the baby’s parents, as if expecting them to step in.

The MIT team found that the children were more likely to look toward the actor who had shared food with the puppet, not the one who had shared a toy, when the puppet was in distress.

In a second set of experiments, designed to focus more specifically on saliva, the actor either placed her finger in her mouth and then into the mouth of one puppet, or placed her finger on her forehead and then onto the forehead of a second puppet. Later, when the actor expressed distress while standing between the two puppets, children watching the video were more likely to look toward the puppet with whom she had shared saliva.

Social cues

The findings suggest that saliva sharing is likely an important cue that helps infants to learn about their own social relationships and those of people around them, the researchers say.

“The general skill of learning about social relationships is very useful,” Thomas says. “One reason why this distinction between thick and thin might be important for infants in particular, especially human infants, who depend on adults for longer than many other species, is that it might be a good way to figure out who else can provide the support that they depend on to survive.”

The researchers did their first set of studies shortly before Covid-19 lockdowns began, with babies who came to the lab with their families. Later experiments were done over Zoom. The results that the researchers saw were similar before and after the pandemic, suggesting that pandemic-related hygiene concerns did not alter the outcome.

“We actually don’t know the effects of the pandemic yet,” Saxe says. “You might wonder, did kids start to think very differently about sharing saliva when suddenly everybody was talking about hygiene all the time? So, for that question, it’s very useful that we had an initial data set collected before the pandemic.”

Doing the second set of studies on Zoom also allowed the researchers to recruit a much more diverse group of children because the subjects were not limited to families who could come to the lab in Cambridge during normal working hours.

In future work, the researchers hope to perform similar studies with infants in cultures that have different types of family structures. In adult subjects, they plan to use functional magnetic resonance imaging (fMRI) to study what parts of the brain are involved in making saliva-based assessments about social relationships.

The research was funded by the National Institutes of Health; the Patrick J. McGovern Foundation; the Guggenheim Foundation; a Social Sciences and Humanities Research Council Doctoral Fellowship; MIT’s Center for Brains, Minds, and Machines; and the Siegel Foundation.

The craving state

This story originally appeared in the Winter 2022 issue of BrainScan.

***

For people struggling with substance use disorders — and there are about 35 million of them worldwide — treatment options are limited. Even among those who seek help, relapse is common. In the United States, an epidemic of opioid addiction has been declared a public health emergency.

A 2019 survey found that 1.6 million people nationwide had an opioid use disorder, and the crisis has surged since the start of the COVID-19 pandemic. The Centers for Disease Control and Prevention estimates that more than 100,000 people died of drug overdose between April 2020 and April 2021 — nearly 30 percent more overdose deaths than occurred during the same period the previous year.

A deeper understanding of what addiction does to the brain and body is urgently needed to pave the way to interventions that reliably release affected individuals from its grip. At the McGovern Institute, researchers are turning their attention to addiction’s driving force: the deep, recurring craving that makes people prioritize drug use over all other wants and needs.

McGovern Institute co-founder Lore Harp McGovern.

“When you are in that state, then it seems nothing else matters,” says McGovern Investigator Fan Wang. “At that moment, you can discard everything: your relationship, your house, your job, everything. You only want the drug.”

With a new addiction initiative catalyzed by generous gifts from Institute co-founder Lore Harp McGovern and others, McGovern scientists with diverse expertise have come together to begin clarifying the neurobiology that underlies the craving state. They plan to dissect the neural transformations associated with craving at every level — from the drug-induced chemical changes that alter neuronal connections and activity to how these modifications impact signaling brain-wide. Ultimately, the McGovern team hopes not just to understand the craving state, but to find a way to relieve it — for good.

“If we can understand the craving state and correct it, or at least relieve a little bit of the pressure,” explains Wang, who will help lead the addiction initiative, “then maybe we can at least give people a chance to use their top-down control to not take the drug.”

The craving cycle

For individuals suffering from substance use disorders, craving fuels a cyclical pattern of escalating drug use. Following the euphoria induced by a drug like heroin or cocaine, depression sets in, accompanied by a drug craving motivated by the desire to relieve that suffering. And as addiction progresses, the peaks and valleys of this cycle dip lower: the pleasant feelings evoked by the drug become weaker, while the negative effects a person experiences in its absence worsen. The craving remains, and increasing doses of the drug are required to relieve it.

By the time addiction sets in, the brain has been altered in ways that go beyond a drug’s immediate effects on neural signaling.

These insidious changes leave individuals susceptible to craving — and the vulnerable state endures. Long after the physical effects of withdrawal have subsided, people with substance use disorders can find their craving returns, triggered by exposure to a small amount of the drug, physical or social cues associated with previous drug use, or stress. So researchers will need to determine not only how different parts of the brain interact with one another during craving and how individual cells and the molecules within them are affected by the craving state — but also how things change as addiction develops and progresses.

Circuits, chemistry and connectivity

One clear starting point is the circuitry the brain uses to control motivation. Thanks in part to decades of research in the lab of McGovern Investigator Ann Graybiel, neuroscientists know a great deal about how these circuits learn which actions lead to pleasure and which lead to pain, and how they use that information to establish habits and evaluate the costs and benefits of complex decisions.

Graybiel’s work has shown that drugs of abuse strongly activate dopamine-responsive neurons in a part of the brain called the striatum, whose signals promote habit formation. By increasing the amount of dopamine that neurons release, these drugs motivate users to prioritize repeated drug use over other kinds of rewards, and to choose the drug in spite of pain or other negative effects. Her group continues to investigate the naturally occurring molecules that control these circuits, as well as how they are hijacked by drugs of abuse.

Distribution of opioid receptors targeted by morphine (shown in blue) in two regions of the mouse brain: the dorsal striatum and the nucleus accumbens. Image: Ann Graybiel

In Fan Wang’s lab, work investigating the neural circuits that mediate the perception of physical pain has led her team to question the role of emotional pain in craving. As they investigated the source of pain sensations in the brain, they identified neurons in an emotion-regulating center called the central amygdala that appear to suppress physical pain in animals. Now, Wang wants to know whether it might be possible to modulate neurons involved in emotional pain to ameliorate the negative state that provokes drug craving.

These animal studies will be key to identifying the cellular and molecular changes that set the brain up for recurring cravings. And as McGovern scientists begin to investigate what happens in the brains of rodents that have been trained to self-administer addictive drugs like fentanyl or cocaine, they expect to encounter tremendous complexity.

McGovern Associate Investigator Polina Anikeeva, whose lab has pioneered new technologies that will help the team investigate the full spectrum of changes that underlie craving, says it will be important to consider impacts on the brain’s chemistry, firing patterns, and connectivity. To that end, multifunctional research probes developed in her lab will be critical to monitoring and manipulating neural circuits in animal models.

Imaging technology developed by investigator Ed Boyden will also enable nanoscale protein visualization brain-wide. An important goal will be to identify a neural signature of the craving state. With such a signal, researchers can begin to explore how to shut off that craving — possibly by directly modulating neural signaling.

Targeted treatments

“One of the reasons to study craving is because it’s a natural treatment point,” says McGovern Associate Investigator Alan Jasanoff. “And the dominant kind of approaches that people in our team think about are approaches that relate to neural circuits — to the specific connections between brain regions and how those could be changed.” The hope, he explains, is that it might be possible to identify a brain region whose activity is disrupted during the craving state, then use clinical brain stimulation methods to restore normal signaling — within that region, as well as in other connected parts of the brain.

To identify the right targets for such a treatment, it will be crucial to understand how the biology uncovered in laboratory animals reflects what happens in people with substance use disorders. Functional imaging in John Gabrieli’s lab can help bridge the gap between clinical and animal research by revealing patterns of brain activity associated with the craving state in both humans and rodents. A new technique developed in Jasanoff’s lab makes it possible to focus on the activity between specific regions of an animal’s brain. “By doing that, we hope to build up integrated models of how information passes around the brain in craving states, and of course also in control states where we’re not experiencing craving,” he explains.

In delving into the biology of the craving state, McGovern scientists are embarking on largely unexplored territory — and they do so with both optimism and urgency. “It’s hard to not appreciate just the size of the problem, and just how devastating addiction is,” says Anikeeva. “At this point, it just seems almost irresponsible to not work on it, especially when we do have the tools and we are interested in the general brain regions that are important for that problem. I would say that there’s almost a civic duty.”

Jacqueline Lees and Rebecca Saxe named associate deans of science

Jacqueline Lees and Rebecca Saxe have been named associate deans serving in the MIT School of Science. Lees is the Virginia and D.K. Ludwig Professor for Cancer Research and is currently the associate director of the Koch Institute for Integrative Cancer Research, as well as an associate department head and professor in the Department of Biology at MIT. Saxe is the John W. Jarve (1978) Professor in Brain and Cognitive Sciences and the associate head of the Department of Brain and Cognitive Sciences (BCS); she is also an associate investigator in the McGovern Institute for Brain Research.

Lees and Saxe will both contribute to the school’s diversity, equity, inclusion, and justice (DEIJ) activities, as well as develop and implement mentoring and other career-development programs to support the community. From their home departments, Saxe and Lees bring years of DEIJ and mentorship experience to bear on the expansion of school-level initiatives.

Lees currently serves on the dean’s science council in her capacity as associate director of the Koch Institute. In this new role as associate dean for the School of Science, she will bring her broad administrative and programmatic experiences to bear on the next phase for DEIJ and mentoring activities.

Lees joined MIT in 1994 as a faculty member in MIT’s Koch Institute (then the Center for Cancer Research) and Department of Biology. Her research focuses on regulators that control cellular proliferation, terminal differentiation, and stemness — functions that are frequently deregulated in tumor cells. She dissects the role of these proteins in normal cell biology and development, and establishes how their deregulation contributes to tumor development and metastasis.

Since 2000, she has served on the Department of Biology’s graduate program committee, and played a major role in expanding the diversity of the graduate student population. Lees also serves on DEIJ committees in her home department, as well as at the Koch Institute.

Together with co-chair Boleslaw Wyslouch, director of the Laboratory for Nuclear Science, Lees led the ReseArch Scientist CAreer LadderS (RASCALS) committee, tasked to evaluate career trajectories for research staff in the School of Science and make recommendations to recruit and retain talented staff, rewarding them for their contributions to the school’s research enterprise.

“Jackie is a powerhouse in translational research, demonstrating how fundamental work at the lab bench is critical for making progress at the patient bedside,” says Nergis Mavalvala, dean of the School of Science. “With Jackie’s dedicated and thoughtful partnership, we can continue to lead in basic research and develop the recruitment, retention, and mentoring programs necessary to support our community.”

Saxe will join Lees in supporting and developing programming across the school that could also provide direction more broadly at the Institute.

“Rebecca is an outstanding researcher in social cognition and a dedicated educator — someone who wants our students not only to learn, but to thrive,” says Mavalvala. “I am grateful that Rebecca will join the dean’s leadership team and bring her mentorship and leadership skills to enhance the school.”

For example, in collaboration with former department head James DiCarlo, the BCS department has focused on faculty mentorship of graduate students; and, in collaboration with Professor Mark Bear, the department developed postdoc salary and benefit standards. Both initiatives have become models at MIT.

With colleague Laura Schulz, Saxe also served as co-chair of the Committee on Medical Leave and Hospitalizations (CMLH), which outlined ways to enhance MIT’s current leave and hospitalization procedures and policies for undergraduate and graduate students. Saxe was also awarded MIT’s Committed to Caring award for excellence in graduate student mentorship, as well as the School of Science’s award for excellence in undergraduate teaching.

In her research, Saxe studies human social cognition, using a combination of behavioral testing and brain imaging technologies. She is best known for her work on brain regions specialized for “theory of mind” — the capacity to understand the mental states of other people. Her TED Talk, “How we read each other’s minds,” has been viewed more than 3 million times. She also studies the development of the human brain during early infancy.

She obtained her PhD from MIT and was a Harvard University junior fellow before joining the MIT faculty in 2006. In 2014, the National Academy of Sciences named her one of two recipients of the Troland Award for investigators age 40 or younger “to recognize unusual achievement and further empirical research in psychology regarding the relationships of consciousness and the physical world.” In 2020, Saxe was named a John Simon Guggenheim Foundation Fellow.

Saxe and Lees will also work closely with Kuheli Dutt, newly hired assistant dean for diversity, equity, and inclusion, and other members of the dean’s science council on school-level initiatives and strategy.

“I’m so grateful that Rebecca and Jackie have agreed to take on these new roles,” Mavalvala says. “And I’m super excited to work with these outstanding thought partners as we tackle the many puzzles that I come across as dean.”

Investigating the embattled brain

Omar Rutledge served as a US Army infantryman in the 1st Armored and 25th Infantry Divisions. He was deployed in support of Operation Iraqi Freedom from March 2003 to July 2004. Photo: Omar Rutledge

As an Iraq war veteran, Omar Rutledge is deeply familiar with post-traumatic stress – recurring thoughts and memories that persist long after a danger has passed – and he knows that a brain altered by trauma is not easily fixed. But as a graduate student in the Department of Brain and Cognitive Sciences, Rutledge is determined to change that. He wants to understand exactly how trauma alters the brain – and whether the tools of neuroscience can be used to help fellow veterans with post-traumatic stress disorder (PTSD) heal from their experiences.

“In the world of PTSD research, I look to my left and to my right, and I don’t see other veterans, certainly not former infantrymen,” says Rutledge, who served in the US Army and was deployed to Iraq from March 2003 to July 2004. “If there are so few of us in this space, I feel like I have an obligation to make a difference for all who suffer from the traumatic experiences of war.”

Rutledge is uniquely positioned to make such a difference in the lab of McGovern Investigator John Gabrieli, where researchers use technologies like magnetic resonance imaging (MRI), electroencephalography (EEG), and magnetoencephalography (MEG) to peer into the human brain and explore how it powers our thoughts, memories, and emotions. Rutledge is studying how PTSD weakens the connection between the amygdala, which is responsible for emotions like fear, and the prefrontal cortex, which regulates or controls these emotional responses. He hopes these studies will eventually lead to the development of wearable technologies that can retrain the brain to be less responsive to triggering events.

“I feel like it has been a mission of mine to do this kind of work.”

Though Covid-19 has unexpectedly paused some aspects of his research, Rutledge is pursuing another line of research inspired both by the mandatory social distancing protocols imposed during the lockdown and by his own experiences with social isolation. Does chronic social isolation cause physical or chemical changes in the brain similar to those seen in PTSD? And does loneliness exacerbate symptoms of PTSD?

“There’s this hypervigilance that occurs in loneliness, and there’s also something very similar that occurs in PTSD — a heightened awareness of potential threats,” says Rutledge, who is the recipient of the Michael Ferrara Graduate Fellowship, provided by the Poitras Center and made possible by the many friends and family of Michael Ferrara. “The combination of the two may lead to more adverse reactions in people with PTSD.”

In the future, Rutledge hopes to explore whether chronic loneliness impairs reasoning and logic skills and has a deeper impact on veterans who have PTSD.

Although his research tends to resurface painful memories of his own combat experiences, Rutledge says that if it can help other veterans heal, it’s worth it. “In the process, it makes me a little bit stronger as well,” he adds.