New MRI probe can reveal more of the brain’s inner workings

Using a novel probe for functional magnetic resonance imaging (fMRI), MIT biological engineers have devised a way to monitor individual populations of neurons and reveal how they interact with each other.

Similar to how the gears of a clock interact in specific ways to turn the clock’s hands, different parts of the brain interact to perform a variety of tasks, such as generating behavior or interpreting the world around us. The new MRI probe could potentially allow scientists to map those networks of interactions.

“With regular fMRI, we see the action of all the gears at once. But with our new technique, we can pick up individual gears that are defined by their relationship to the other gears, and that’s critical for building up a picture of the mechanism of the brain,” says Alan Jasanoff, an MIT professor of biological engineering, brain and cognitive sciences, and nuclear science and engineering.

Using this technique, which involves genetically targeting the MRI probe to specific populations of cells in animal models, the researchers were able to identify neural populations involved in a circuit that responds to rewarding stimuli. The new MRI probe could also enable studies of many other brain circuits, the researchers say.

Jasanoff, who is also an associate investigator at the McGovern Institute, is the senior author of the study, which appears today in Nature Neuroscience. The lead authors of the paper are recent MIT PhD recipient Souparno Ghosh and former MIT research scientist Nan Li.

Tracing connections

Traditional fMRI measures changes in blood flow in the brain as a proxy for neural activity. When neurons receive signals from other neurons, calcium flows into the cells, triggering the release of a diffusible gas called nitric oxide. Nitric oxide acts in part as a vasodilator, increasing blood flow to the area.

Imaging calcium directly can offer a more precise picture of brain activity, but that type of imaging usually requires fluorescent chemicals and invasive procedures. The MIT team wanted to develop a method that could work across the brain without such invasiveness.

“If we want to figure out how brain-wide networks of cells and brain-wide mechanisms function, we need something that can be detected deep in tissue and preferably across the entire brain at once,” Jasanoff says. “The way that we chose to do that in this study was to essentially hijack the molecular basis of fMRI itself.”

The researchers created a genetic probe, delivered by viruses, that codes for a protein that sends out a signal whenever the neuron is active. This protein, which the researchers called NOSTIC (nitric oxide synthase for targeting image contrast), is an engineered form of an enzyme called nitric oxide synthase. The NOSTIC protein can detect elevated calcium levels that arise during neural activity; it then generates nitric oxide, leading to an artificial fMRI signal that arises only from cells that contain NOSTIC.

The probe is delivered by a virus that is injected into a particular site, after which it travels along axons of neurons that connect to that site. That way, the researchers can label every neural population that feeds into a particular location.

“When we use this virus to deliver our probe in this way, it causes the probe to be expressed in the cells that provide input to the location where we put the virus,” Jasanoff says. “Then, by performing functional imaging of those cells, we can start to measure what makes input to that region take place, or what types of input arrive at that region.”

Turning the gears

In the new study, the researchers used their probe to label populations of neurons that project to the striatum, a region that is involved in planning movement and responding to reward. In rats, they were able to determine which neural populations send input to the striatum during or immediately following a rewarding stimulus — in this case, deep brain stimulation of the lateral hypothalamus, a brain center that is involved in appetite and motivation, among other functions.

One question that researchers have had about deep brain stimulation of the lateral hypothalamus is how wide-ranging the effects are. In this study, the MIT team showed that several neural populations, located in regions including the motor cortex and the entorhinal cortex, which is involved in memory, send input into the striatum following deep brain stimulation.

“It’s not simply input from the site of the deep brain stimulation or from the cells that carry dopamine. There are these other components, both distally and locally, that shape the response, and we can put our finger on them because of the use of this probe,” Jasanoff says.

During these experiments, neurons also generate regular fMRI signals, so in order to distinguish the signals that are coming specifically from the genetically altered neurons, the researchers perform each experiment twice: once with the probe on, and once following treatment with a drug that inhibits the probe. By measuring the difference in fMRI activity between these two conditions, they can determine how much activity is present in probe-containing cells specifically.
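One way to picture this differential readout, purely as an illustration and not the study's actual analysis code, is as a voxel-wise subtraction of the two conditions; the array sizes and values below are hypothetical.

```python
import numpy as np

# Hypothetical per-voxel response amplitudes (e.g., percent signal change) from
# two otherwise identical experiments: probe active vs. probe inhibited by the drug.
rng = np.random.default_rng(0)
probe_active = rng.normal(loc=1.0, scale=0.3, size=(64, 64, 32))
probe_inhibited = rng.normal(loc=0.7, scale=0.3, size=(64, 64, 32))

# Conventional hemodynamic responses appear in both conditions, so the difference
# map attributes the remaining signal to NOSTIC-expressing cells.
nostic_signal = probe_active - probe_inhibited

# A simple threshold flags voxels whose probe-specific signal stands out.
threshold = nostic_signal.mean() + 2 * nostic_signal.std()
candidate_voxels = np.argwhere(nostic_signal > threshold)
print(f"{len(candidate_voxels)} voxels exceed the probe-specific threshold")
```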

The researchers now hope to use this approach, which they call hemogenetics, to study other networks in the brain, beginning with an effort to identify some of the regions that receive input from the striatum following deep brain stimulation.

“One of the things that’s exciting about the approach that we’re introducing is that you can imagine applying the same tool at many sites in the brain and piecing together a network of interlocking gears, which consist of these input and output relationships,” Jasanoff says. “This can lead to a broad perspective on how the brain works as an integrated whole, at the level of neural populations.”

The research was funded by the National Institutes of Health and the MIT Simons Center for the Social Brain.

Singing in the brain

For the first time, MIT neuroscientists have identified a population of neurons in the human brain that lights up when we hear singing, but not other types of music.

These neurons, found in the auditory cortex, appear to respond to the specific combination of voice and music, but not to either regular speech or instrumental music. Exactly what they are doing is unknown and will require more work to uncover, the researchers say.

“The work provides evidence for relatively fine-grained segregation of function within the auditory cortex, in a way that aligns with an intuitive distinction within music,” says Sam Norman-Haignere, a former MIT postdoc who is now an assistant professor of neuroscience at the University of Rochester Medical Center.

The work builds on a 2015 study in which the same research team used functional magnetic resonance imaging (fMRI) to identify a population of neurons in the brain’s auditory cortex that responds specifically to music. In the new work, the researchers used recordings of electrical activity taken at the surface of the brain, which gave them much more precise information than fMRI.

“There’s one population of neurons that responds to singing, and then very nearby is another population of neurons that responds broadly to lots of music. At the scale of fMRI, they’re so close that you can’t disentangle them, but with intracranial recordings, we get additional resolution, and that’s what we believe allowed us to pick them apart,” says Norman-Haignere.

Norman-Haignere is the lead author of the study, which appears today in the journal Current Biology. Josh McDermott, an associate professor of brain and cognitive sciences, and Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience, both members of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds and Machines (CBMM), are the senior authors of the study.

Neural recordings

In their 2015 study, the researchers used fMRI to scan the brains of participants as they listened to a collection of 165 sounds, including different types of speech and music, as well as everyday sounds such as finger tapping or a dog barking. For that study, the researchers devised a novel method of analyzing the fMRI data, which allowed them to identify six neural populations with different response patterns, including the music-selective population and another population that responds selectively to speech.
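The core idea of that analysis, decomposing each voxel's responses to the 165 sounds into a small set of shared components, can be illustrated with a generic matrix factorization. The sketch below uses scikit-learn's NMF on synthetic data; the study relied on its own purpose-built decomposition method, and the matrix dimensions here are invented.

```python
import numpy as np
from sklearn.decomposition import NMF

# Synthetic stand-in for the dataset: rows are voxels, columns are the 165 sound
# clips, entries are non-negative response magnitudes (all values are made up).
rng = np.random.default_rng(1)
responses = rng.random((5000, 165))

# Factor the voxel-by-sound matrix into six components, echoing the six neural
# populations reported in the 2015 study.
model = NMF(n_components=6, init="nndsvda", random_state=1, max_iter=500)
voxel_weights = model.fit_transform(responses)  # how strongly each voxel expresses each component
component_profiles = model.components_          # each component's response to the 165 sounds

print(voxel_weights.shape, component_profiles.shape)  # (5000, 6) (6, 165)
```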

In the new study, the researchers hoped to obtain higher-resolution data using a technique known as electrocorticography (ECoG), which allows electrical activity to be recorded by electrodes placed inside the skull. This offers a much more precise picture of electrical activity in the brain compared to fMRI, which measures blood flow in the brain as a proxy of neuron activity.

“With most of the methods in human cognitive neuroscience, you can’t see the neural representations,” Kanwisher says. “Most of the kind of data we can collect can tell us that here’s a piece of brain that does something, but that’s pretty limited. We want to know what’s represented in there.”

Electrocorticography cannot typically be performed in humans because it is an invasive procedure, but it is often used to monitor patients with epilepsy who are about to undergo surgery to treat their seizures. Patients are monitored over several days so that doctors can determine where their seizures originate before operating. During that time, if patients agree, they can participate in studies that involve measuring their brain activity while performing certain tasks. For this study, the MIT team was able to gather data from 15 participants over several years.

For those participants, the researchers played the same set of 165 sounds that they used in the earlier fMRI study. The location of each patient’s electrodes was determined by their surgeons, so some did not pick up any responses to auditory input, but many did. Using a novel statistical analysis that they developed, the researchers were able to infer the types of neural populations that produced the data that were recorded by each electrode.

“When we applied this method to this data set, this neural response pattern popped out that only responded to singing,” Norman-Haignere says. “This was a finding we really didn’t expect, so it very much justifies the whole point of the approach, which is to reveal potentially novel things you might not think to look for.”

That song-specific population of neurons had very weak responses to either speech or instrumental music, and therefore is distinct from the music- and speech-selective populations identified in their 2015 study.

Music in the brain

In the second part of their study, the researchers devised a mathematical method to combine the data from the intracranial recordings with the fMRI data from their 2015 study. Because fMRI can cover a much larger portion of the brain, this allowed them to determine more precisely the locations of the neural populations that respond to singing.

“This way of combining ECoG and fMRI is a significant methodological advance,” McDermott says. “A lot of people have been doing ECoG over the past 10 or 15 years, but it’s always been limited by this issue of the sparsity of the recordings. Sam is really the first person who figured out how to combine the improved resolution of the electrode recordings with fMRI data to get better localization of the overall responses.”

The song-specific hotspot that they found is located at the top of the temporal lobe, near regions that are selective for language and music. That location suggests that the song-specific population may be responding to features such as the perceived pitch, or the interaction between words and perceived pitch, before sending information to other parts of the brain for further processing, the researchers say.

The researchers now hope to learn more about what aspects of singing drive the responses of these neurons. They are also working with MIT Professor Rebecca Saxe’s lab to study whether infants have music-selective areas, in hopes of learning more about when and how these brain regions develop.

The research was funded by the National Institutes of Health, the U.S. Army Research Office, the National Science Foundation, the NSF Science and Technology Center for Brains, Minds, and Machines, the Fondazione Neurone, the Howard Hughes Medical Institute, and the Kristin R. Pressman and Jessica J. Pourian ’13 Fund at MIT.

On a mission to alleviate chronic pain

About 50 million Americans suffer from chronic pain, which interferes with their daily life, social interactions, and ability to work. MIT Professor Fan Wang wants to develop new ways to help relieve that pain, by studying and potentially modifying the brain’s own pain control mechanisms.

Her recent work has identified an “off switch” for pain, located in the brain’s amygdala. She hopes that finding ways to control this switch could lead to new treatments for chronic pain.

“Chronic pain is a major societal issue,” Wang says. “By studying pain-suppression neurons in the brain’s central amygdala, I hope to create a new therapeutic approach for alleviating pain.”

Wang, who joined the MIT faculty in January 2021, is also the leader of a new initiative at the McGovern Institute for Brain Research that is studying drug addiction, with the goal of developing more effective treatments for addiction.

“Opioid prescription for chronic pain is a major contributor to the opioid epidemic. With the Covid pandemic, I think addiction and overdose are becoming worse. People are more anxious, and they seek drugs to alleviate such mental pain,” Wang says. “As scientists, it’s our duty to tackle this problem.”

Sensory circuits

Wang, who grew up in Beijing, describes herself as “a nerdy child” who loved books and math. In high school, she took part in science competitions, then went on to study biology at Tsinghua University. She arrived in the United States in 1993 to begin her PhD at Columbia University. There, she worked on tracing the connection patterns of olfactory receptor neurons in the lab of Richard Axel, who later won the Nobel Prize for his discoveries of odorant receptors and how the olfactory system is organized.

After finishing her PhD, Wang decided to switch gears. As a postdoc at the University of California at San Francisco and then Stanford University, she began studying how the brain perceives touch.

In 2003, Wang joined the faculty at Duke University School of Medicine. There, she began developing techniques to study the brain circuits that underlie the sense of touch, tracing the circuits that carry sensory information from the whiskers of mice to the brain. She also studied how the brain integrates the movement of touch organs with incoming sensory signals to generate perception (for example, using stretching movements to sense elasticity).

As she pursued her sensory perception studies, Wang became interested in studying pain perception, but she felt she needed to develop new techniques to tackle it. While at Duke, she invented a technique called CANE (capturing activated neural ensembles), which can identify networks of neurons that are activated by a particular stimulus.

Using this approach in mice, she identified neurons that become active in response to pain, but so many neurons across the brain were activated that it didn’t offer much useful information. As a way to indirectly get at how the brain controls pain, she decided to use CANE to explore the effects of drugs used for general anesthesia. During general anesthesia, drugs render a patient unconscious, but Wang hypothesized that the drugs might also shut off pain perception.

“At that time, it was just a wild idea,” Wang recalls. “I thought there may be other mechanisms — that instead of just a loss of consciousness, anesthetics may do something to the brain that actually turns pain off.”

Support for the existence of an “off switch” for pain came from the observation that wounded soldiers on a battlefield can continue to fight, essentially blocking out pain despite their injuries.

In a study of mice treated with anesthesia drugs, Wang discovered that the brain does have this kind of switch, in an unexpected location: the amygdala, which is involved in regulating emotion. She showed that this cluster of neurons can turn off pain when activated, and when it is suppressed, mice become highly sensitive to ordinary gentle touch.

“There’s a baseline level of activity that makes the animals feel normal, and when you activate these neurons, they’ll feel less pain. When you silence them, they’ll feel more pain,” Wang says.

Turning off pain

That finding, which Wang reported in 2020, raised the possibility of somehow modulating that switch in humans to try to treat chronic pain. This is a long-term goal of Wang’s, but more work is required to achieve it, she says. Currently her lab is working on analyzing the RNA expression patterns of the neurons in the cluster she identified. They also are measuring the neurons’ electrical activity and how they interact with other neurons in the brain, in hopes of identifying circuits that could be targeted to tamp down the perception of pain.

One way of modulating these circuits could be to use deep brain stimulation, which involves implanting electrodes in certain areas of the brain. Focused ultrasound, which is still in early stages of development and does not require surgery, could be a less invasive alternative.

Another approach Wang is interested in exploring is pairing brain stimulation with a context such as looking at a smartphone app. This kind of pairing could help train the brain to shut off pain using the app, without the need for the original stimulation (deep brain stimulation or ultrasound).

“Maybe you don’t need to constantly stimulate the brain. You may just need to reactivate it with a context,” Wang says. “After a while you would probably need to be restimulated, or reconditioned, but at least you have a longer window where you don’t need to go to the hospital for stimulation, and you just need to use a context.”

Wang, who was drawn to MIT in part by its focus on fostering interdisciplinary collaborations, is now working with several other McGovern Institute members who are taking different angles to try to figure out how the brain generates the state of craving that occurs in drug addiction, including opioid addiction.

“We’re going to focus on trying to understand this craving state: how it’s created in the brain and how can we sort of erase that trace in the brain, or at least control it. And then you can neuromodulate it in real time, for example, and give people a chance to get back their control,” she says.

Dendrites may help neurons perform complicated calculations

Within the human brain, neurons perform complex calculations on information they receive. Researchers at MIT have now demonstrated how dendrites — branch-like extensions that protrude from neurons — help to perform those computations.

The researchers found that within a single neuron, different types of dendrites receive input from distinct parts of the brain, and process it in different ways. These differences may help neurons to integrate a variety of inputs and generate an appropriate response, the researchers say.

In the neurons that the researchers examined in this study, it appears that this dendritic processing helps cells to take in visual information and combine it with motor feedback, in a circuit that is involved in navigation and planning movement.

“Our hypothesis is that these neurons have the ability to pick out specific features and landmarks in the visual environment, and combine them with information about running speed, where I’m going, and when I’m going to start, to move toward a goal position,” says Mark Harnett, an associate professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Mathieu Lafourcade, a former MIT postdoc, is the lead author of the paper, which appears today in Neuron.

Complex calculations

Any given neuron can have dozens of dendrites, which receive synaptic input from other neurons. Neuroscientists have hypothesized that these dendrites can act as compartments that perform their own computations on incoming information before sending the results to the body of the neuron, which integrates all these signals to generate an output.

Previous research has shown that dendrites can amplify incoming signals using specialized proteins called NMDA receptors. These are voltage-sensitive neurotransmitter receptors that are dependent on the activity of other receptors called AMPA receptors. When a dendrite receives many incoming signals through AMPA receptors at the same time, the threshold to activate nearby NMDA receptors is reached, creating an extra burst of current.

This phenomenon, known as supralinearity, is believed to help neurons distinguish between inputs that arrive close together or farther apart in time or space, Harnett says.
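As a toy illustration of that distinction (not a model from the paper), a "linear" dendrite can be written as a plain sum of its inputs, while a "supralinear" dendrite adds an NMDA-like boost once the summed input crosses a threshold; the threshold and gain values below are arbitrary.

```python
def linear_dendrite(inputs):
    """Sum synaptic inputs with no voltage-dependent boost."""
    return sum(inputs)

def supralinear_dendrite(inputs, nmda_threshold=3.0, nmda_gain=2.0):
    """Sum inputs, then add an NMDA-like regenerative boost once the sum crosses a threshold."""
    total = sum(inputs)
    if total > nmda_threshold:
        total += nmda_gain * (total - nmda_threshold)
    return total

# Four weak inputs arriving together: the supralinear branch responds
# disproportionately, while the linear branch simply reports the sum.
clustered_inputs = [1.0, 1.0, 1.0, 1.0]
print(linear_dendrite(clustered_inputs))       # 4.0
print(supralinear_dendrite(clustered_inputs))  # 6.0
```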

In the new study, the MIT researchers wanted to determine whether different types of inputs are targeted specifically to different types of dendrites, and if so, how that would affect the computations performed by those neurons. They focused on a population of neurons called pyramidal cells, the principal output neurons of the cortex, which have several different types of dendrites. Basal dendrites extend below the body of the neuron, apical oblique dendrites extend from a trunk that travels up from the body, and tuft dendrites are located at the top of the trunk.

Harnett and his colleagues chose a part of the brain called the retrosplenial cortex (RSC) for their studies because it is a good model for association cortex — the type of brain cortex used for complex functions such as planning, communication, and social cognition. The RSC integrates information from many parts of the brain to guide navigation, and pyramidal neurons play a key role in that function.

In a study of mice, the researchers first showed that three different types of input come into pyramidal neurons of the RSC: from the visual cortex into basal dendrites, from the motor cortex into apical oblique dendrites, and from the lateral nuclei of the thalamus, a visual processing area, into tuft dendrites.

“Until now, there hasn’t been much mapping of what inputs are going to those dendrites,” Harnett says. “We found that there are some sophisticated wiring rules here, with different inputs going to different dendrites.”

A range of responses

The researchers then measured electrical activity in each of those compartments. They expected that NMDA receptors would show supralinear activity, because this behavior has been demonstrated before in dendrites of pyramidal neurons in both the primary sensory cortex and the hippocampus.

In the basal dendrites, the researchers saw just what they expected: Input coming from the visual cortex provoked supralinear electrical spikes, generated by NMDA receptors. However, just 50 microns away, in the apical oblique dendrites of the same cells, the researchers found no signs of supralinear activity. Instead, input to those dendrites drives a steady linear response. Those dendrites also have a much lower density of NMDA receptors.

“That was shocking, because no one’s ever reported that before,” Harnett says. “What that means is the apical obliques don’t care about the pattern of input. Inputs can be separated in time, or together in time, and it doesn’t matter. It’s just a linear integrator that’s telling the cell how much input it’s getting, without doing any computation on it.”

Those linear inputs likely represent information such as running speed or destination, Harnett says, while the visual information coming into the basal dendrites represents landmarks or other features of the environment. The supralinearity of the basal dendrites allows them to perform more sophisticated types of computation on that visual input, which the researchers hypothesize allows the RSC to flexibly adapt to changes in the visual environment.

In the tuft dendrites, which receive input from the thalamus, it appears that NMDA spikes can be generated, but not very easily. Like the apical oblique dendrites, the tuft dendrites have a low density of NMDA receptors. Harnett’s lab is now studying what happens in all of these different types of dendrites as mice perform navigation tasks.

The research was funded by a Boehringer Ingelheim Fonds PhD Fellowship, the National Institutes of Health, the James W. and Patricia T. Poitras Fund, the Klingenstein-Simons Fellowship Program, a Vallee Scholar Award, and a McKnight Scholar Award.

School of Science announces 2022 Infinite Expansion Awards

The MIT School of Science has announced eight postdocs and research scientists as recipients of the 2022 Infinite Expansion Award.

The award, formerly known as the Infinite Kilometer Award, was created in 2012 to highlight extraordinary members of the MIT science community. The awardees are nominated not only for their research, but for going above and beyond in mentoring junior colleagues, participating in educational programs, and contributing to their departments, labs, and research centers, the school, and the Institute.

The 2022 School of Science Infinite Expansion winners are:

  • Héctor de Jesús-Cortés, a postdoc in the Picower Institute for Learning and Memory, nominated by professor and Department of Brain and Cognitive Sciences (BCS) head Michale Fee, professor and McGovern Institute for Brain Research Director Robert Desimone, professor and Picower Institute Director Li-Huei Tsai, professor and associate BCS head Laura Schulz, associate professor and associate BCS head Joshua McDermott, and professor and BCS Postdoc Officer Mark Bear for his “awe-inspiring commitment of time and energy to research, outreach, education, mentorship, and community;”
  • Harold Erbin, a postdoc in the Laboratory for Nuclear Science’s Institute for Artificial Intelligence and Fundamental Interactions (IAIFI), nominated by professor and IAIFI Director Jesse Thaler, associate professor and IAIFI Deputy Director Mike Williams, and associate professor and IAIFI Early Career and Equity Committee Chair Tracy Slatyer for “provid[ing] exemplary service on the IAIFI Early Career and Equity Committee” and being “actively involved in many other IAIFI community building efforts;”
  • Megan Hill, a postdoc in the Department of Chemistry, nominated by Professor Jeremiah Johnson for being an “outstanding scientist” who has “also made exceptional contributions to our community through her mentorship activities and participation in Women in Chemistry;”
  • Kevin Kuns, a postdoc in the Kavli Institute for Astrophysics and Space Research, nominated by Associate Professor Matthew Evans for “consistently go[ing] beyond expectations;”
  • Xingcheng Lin, a postdoc in the Department of Chemistry, nominated by Associate Professor Bin Zhang for being “very talented, extremely hardworking, and genuinely enthusiastic about science;”
  • Alexandra Pike, a postdoc in the Department of Biology, nominated by Professor Stephen Bell for “not only excel[ing] in the laboratory” but also being “an exemplary citizen in the biology department, contributing to teaching, community, and to improving diversity, equity, and inclusion in the department;”
  • Nora Shipp, a postdoc with the Kavli Institute for Astrophysics and Space Research, nominated by Assistant Professor Lina Necib for being “independent, efficient, with great leadership qualities” with “impeccable” research; and
  • Jakob Voigts, a research scientist in the McGovern Institute for Brain Research, nominated by Associate Professor Mark Harnett and his laboratory for “contribut[ing] to the growth and development of the lab and its members in numerous and irreplaceable ways.”

Winners are honored with a monetary award and will be celebrated with family, friends, and nominators at a later date, along with recipients of the Infinite Mile Award.

Where did that sound come from?

The human brain is finely tuned not only to recognize particular sounds, but also to determine which direction they came from. By comparing differences in sounds that reach the right and left ear, the brain can estimate the location of a barking dog, wailing fire engine, or approaching car.

MIT neuroscientists have now developed a computer model that can also perform that complex task. The model, which consists of several convolutional neural networks, not only performs the task as well as humans do, but also struggles in the same ways that humans do.

“We now have a model that can actually localize sounds in the real world,” says Josh McDermott, an associate professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research. “And when we treated the model like a human experimental participant and simulated this large set of experiments that people had tested humans on in the past, what we found over and over again is that the model recapitulates the results that you see in humans.”

Findings from the new study also suggest that humans’ ability to perceive location is adapted to the specific challenges of our environment, says McDermott, who is also a member of MIT’s Center for Brains, Minds, and Machines.

McDermott is the senior author of the paper, which appears today in Nature Human Behavior. The paper’s lead author is MIT graduate student Andrew Francl.

Modeling localization

When we hear a sound such as a train whistle, the sound waves reach our right and left ears at slightly different times and intensities, depending on what direction the sound is coming from. Parts of the midbrain are specialized to compare these slight differences to help estimate what direction the sound came from, a task also known as localization.

This task becomes markedly more difficult under real-world conditions — where the environment produces echoes and many sounds are heard at once.

Scientists have long sought to build computer models that can perform the same kind of calculations that the brain uses to localize sounds. These models sometimes work well in idealized settings with no background noise, but never in real-world environments, with their noises and echoes.

To develop a more sophisticated model of localization, the MIT team turned to convolutional neural networks. This kind of computer modeling has been used extensively to model the human visual system, and more recently, McDermott and other scientists have begun applying it to audition as well.

Convolutional neural networks can be designed with many different architectures, so to help them find the ones that would work best for localization, the MIT team used a supercomputer that allowed them to train and test about 1,500 different models. That search identified 10 that seemed best suited for localization; the researchers further trained those models and used them in all of their subsequent studies.
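For readers curious what such a model might look like, here is a minimal sketch assuming PyTorch and made-up layer sizes: a small convolutional network that maps a two-channel (left ear, right ear) time-frequency input to scores over candidate directions. The architectures explored in the study were far larger and more varied.

```python
import torch
import torch.nn as nn

class BinauralLocalizer(nn.Module):
    """Toy CNN mapping a two-channel (left/right ear) time-frequency input to direction scores."""
    def __init__(self, n_azimuth_bins: int = 36):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_azimuth_bins),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# One fake binaural input: batch of 1, 2 ear channels, 64 frequency x 200 time bins.
dummy_input = torch.randn(1, 2, 64, 200)
logits = BinauralLocalizer()(dummy_input)
print(logits.shape)  # torch.Size([1, 36])
```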

To train the models, the researchers created a virtual world in which they could control the size of the room and the reflective properties of its walls. All of the sounds fed to the models originated from somewhere in one of these virtual rooms. The set of more than 400 training sounds included human voices, animal sounds, machine sounds such as car engines, and natural sounds such as thunder.

The researchers also ensured the model started with the same information provided by human ears. The outer ear, or pinna, has many folds that reflect sound, altering the frequencies that enter the ear, and these reflections vary depending on where the sound comes from. The researchers simulated this effect by running each sound through a specialized mathematical function before it went into the computer model.

“This allows us to give the model the same kind of information that a person would have,” Francl says.
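A rough sketch of that ear-filtering step, under the assumption that it can be approximated by convolving each sound with a direction-specific head-related impulse response for each ear; the impulse responses below are random placeholders standing in for measured ones.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(2)
source = rng.standard_normal(16000)  # one second of a hypothetical sound at 16 kHz

# Placeholder head-related impulse responses for a single direction (left and right ear);
# real ones would come from a measured HRTF dataset.
decay = np.exp(-np.arange(256) / 50.0)
hrir_left = rng.standard_normal(256) * decay
hrir_right = rng.standard_normal(256) * decay

# Direction-dependent filtering: approximately what each ear would receive.
left_ear = fftconvolve(source, hrir_left, mode="full")
right_ear = fftconvolve(source, hrir_right, mode="full")
binaural = np.stack([left_ear, right_ear])  # the two-channel signal fed to the model
print(binaural.shape)
```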

After training the models, the researchers tested them in a real-world environment. They placed a mannequin with microphones in its ears in an actual room and played sounds from different directions, then fed those recordings into the models. The models performed very similarly to humans when asked to localize these sounds.

“Although the model was trained in a virtual world, when we evaluated it, it could localize sounds in the real world,” Francl says.

Similar patterns

The researchers then subjected the models to a series of tests that scientists have used in the past to study humans’ localization abilities.

In addition to analyzing the difference in arrival time at the right and left ears, the human brain also bases its location judgments on differences in the intensity of sound that reaches each ear. Previous studies have shown that the success of both of these strategies varies depending on the frequency of the incoming sound. In the new study, the MIT team found that the models showed this same pattern of sensitivity to frequency.

“The model seems to use timing and level differences between the two ears in the same way that people do, in a way that’s frequency-dependent,” McDermott says.
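To make those two cues concrete, here is a simple illustration (not the study's code): the interaural time difference is estimated from the peak of the cross-correlation between the two ear signals, and the interaural level difference from their relative amplitudes, using a synthetic sound that is delayed and attenuated in one ear.

```python
import numpy as np

fs = 16000                            # sample rate (Hz)
rng = np.random.default_rng(3)
source = rng.standard_normal(3200)    # 0.2 s of a hypothetical broadband sound

# Simulate a source off to the right: the left ear hears it ~0.4 ms later and quieter.
delay = 6                             # 6 samples / 16000 Hz = 0.375 ms
right = source
left = 0.7 * np.concatenate([np.zeros(delay), source[:-delay]])

# Interaural time difference (ITD) from the peak of the cross-correlation.
lags = np.arange(-len(right) + 1, len(left))
itd_ms = 1000 * lags[np.argmax(np.correlate(left, right, mode="full"))] / fs

# Interaural level difference (ILD) from the ratio of root-mean-square amplitudes.
ild_db = 20 * np.log10(np.sqrt(np.mean(left**2)) / np.sqrt(np.mean(right**2)))

print(f"ITD ≈ {itd_ms:.2f} ms, ILD ≈ {ild_db:.1f} dB")
```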

The researchers also showed that when they made localization tasks more difficult, by adding multiple sound sources played at the same time, the computer models’ performance declined in a way that closely mimicked human failure patterns under the same circumstances.

“As you add more and more sources, you get a specific pattern of decline in humans’ ability to accurately judge the number of sources present, and their ability to localize those sources,” Francl says. “Humans seem to be limited to localizing about three sources at once, and when we ran the same test on the model, we saw a really similar pattern of behavior.”

Because the researchers used a virtual world to train their models, they were also able to explore what happens when models learn to localize under unnatural conditions. The researchers trained one set of models in a virtual world with no echoes, and another in a world where there was never more than one sound heard at a time. In a third, the models were exposed only to sounds with narrow frequency ranges, instead of naturally occurring sounds.

When the models trained in these unnatural worlds were evaluated on the same battery of behavioral tests, the models deviated from human behavior, and the ways in which they failed varied depending on the type of environment they had been trained in. These results support the idea that the localization abilities of the human brain are adapted to the environments in which humans evolved, the researchers say.

The researchers are now applying this type of modeling to other aspects of audition, such as pitch perception and speech recognition, and believe it could also be used to understand other cognitive phenomena, such as the limits on what a person can pay attention to or remember, McDermott says.

The research was funded by the National Science Foundation and the National Institute on Deafness and Other Communication Disorders.

Five MIT faculty elected 2021 AAAS Fellows

Five MIT faculty members have been elected as fellows of the American Association for the Advancement of Science (AAAS).

The 2021 class of AAAS Fellows includes 564 scientists, engineers, and innovators spanning 24 scientific disciplines who are being recognized for their scientifically and socially distinguished achievements.

Mircea Dincă is the W. M. Keck Professor of Energy in the Department of Chemistry. His group’s research focuses on addressing challenges related to the storage and consumption of energy, and global environmental concerns. Central to these efforts are the synthesis of novel organic-inorganic hybrid materials and the manipulation of their electrochemical and photophysical properties, with a current emphasis on porous materials and extended one-dimensional van der Waals materials.

Guoping Feng is the James W. and Patricia T. Poitras Professor of Neuroscience in the Department of Brain and Cognitive Sciences, associate director of MIT’s McGovern Institute for Brain Research, director of Model Systems and Neurobiology at the Stanley Center for Psychiatric Research, and an institute member of the Broad Institute of MIT and Harvard. His research is devoted to understanding the development and function of synapses in the brain and how synaptic dysfunction may contribute to neurodevelopmental and psychiatric disorders. By understanding the molecular, cellular, and circuitry mechanisms of these disorders, Feng hopes his work will eventually lead to the development of new and effective treatments for the millions of people suffering from these devastating diseases.

David Shoemaker is a senior research scientist with the MIT Kavli Institute for Astrophysics and Space Research. His work is focused on gravitational-wave observation and includes developing technologies for the detectors (LIGO, LISA), developing proposals for new instruments (Cosmic Explorer), managing the teams to build them and the consortia which exploit the data (LIGO Scientific Collaboration, LISA Consortium), and supporting the overall growth of the field (Gravitational-Wave International Committee).

Ian Hunter is the Hatsopoulos Professor of Mechanical Engineering and runs the Bioinstrumentation Lab at MIT. His main areas of research are instrumentation, microrobotics, medical devices, and biomimetic materials. Over the years he and his students have developed many instruments and devices, including confocal laser microscopes, scanning tunneling electron microscopes, miniature mass spectrometers, new forms of Raman spectroscopy, needle-free drug delivery technologies, nano- and micro-robots, microsurgical robots, robotic endoscopes, high-performance Lorentz force motors, and microarray technologies for massively parallel chemical and biological assays.

Evelyn N. Wang is the Ford Professor of Engineering and head of the Department of Mechanical Engineering. Her research program combines fundamental studies of micro/nanoscale heat and mass transport processes with the development of novel engineered structures to create innovative solutions in thermal management, energy, and water harvesting systems. Her work in thermophotovoltaics was named to Technology Review’s lists of Biggest Clean Energy Advances, in 2016, and Ten Breakthrough Technologies, in 2017, and to the Department of Energy Frontiers Research Center’s Ten of Ten awards. Her work extracting water from air has won her the title of 2017 Foreign Policy’s Global ReThinker and the 2018 Eighth Prince Sultan bin Abdulaziz International Prize for Water.

Babies can tell who has close relationships based on one clue: saliva

Learning to navigate social relationships is a skill that is critical for surviving in human societies. For babies and young children, that means learning who they can count on to take care of them.

MIT neuroscientists have now identified a specific signal that young children and even babies use to determine whether two people have a strong relationship and a mutual obligation to help each other: whether those two people kiss, share food, or have other interactions that involve sharing saliva.

In a new study, the researchers showed that babies expect people who share saliva to come to one another’s aid when one person is in distress, much more so than when people share toys or interact in other ways that do not involve saliva exchange. The findings suggest that babies can use these cues to try to figure out who around them is most likely to offer help, the researchers say.

“Babies don’t know in advance which relationships are the close and morally obligating ones, so they have to have some way of learning this by looking at what happens around them,” says Rebecca Saxe, the John W. Jarve Professor of Brain and Cognitive Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the new study.

MIT postdoc Ashley Thomas is the lead author of the study, which appears today in Science. Brandon Woo, a Harvard University graduate student; Daniel Nettle, a professor of behavioral science at Newcastle University; and Elizabeth Spelke, a professor of psychology at Harvard, are also authors of the paper.

Sharing saliva

In human societies, people typically distinguish between “thick” and “thin” relationships. Thick relationships, usually found between family members, feature strong levels of attachment, obligation, and mutual responsiveness. Anthropologists have also observed that people in thick relationships are more willing to share bodily fluids such as saliva.

“That inspired both the question of whether infants distinguish between those types of relationships, and whether saliva sharing might be a really good cue they could use to recognize them,” Thomas says.

To study those questions, the researchers observed toddlers (16.5 to 18.5 months) and babies (8.5 to 10 months) as they watched interactions between human actors and puppets. In the first set of experiments, a puppet shared an orange with one actor, then tossed a ball back and forth with a different actor.

After the children watched these initial interactions, the researchers observed the children’s reactions when the puppet showed distress while sitting between the two actors. Based on an earlier study of nonhuman primates, the researchers hypothesized that babies would look first at the person they expected to offer help. That study showed that when baby monkeys cry, other members of the troop look to the baby’s parents, as if expecting them to step in.

The MIT team found that the children were more likely to look toward the actor who had shared food with the puppet, not the one who had shared a toy, when the puppet was in distress.

In a second set of experiments, designed to focus more specifically on saliva, the actor either placed her finger in her mouth and then into the mouth of the puppet, or placed her finger on her forehead and then onto the forehead of the puppet. Later, when the actor expressed distress while standing between the two puppets, children watching the video were more likely to look toward the puppet with whom she had shared saliva.

Social cues

The findings suggest that saliva sharing is likely an important cue that helps infants to learn about their own social relationships and those of people around them, the researchers say.

“The general skill of learning about social relationships is very useful,” Thomas says. “One reason why this distinction between thick and thin might be important for infants in particular, especially human infants, who depend on adults for longer than many other species, is that it might be a good way to figure out who else can provide the support that they depend on to survive.”

The researchers did their first set of studies shortly before Covid-19 lockdowns began, with babies who came to the lab with their families. Later experiments were done over Zoom. The results that the researchers saw were similar before and after the pandemic, confirming that pandemic-related hygiene concerns did not affect the outcome.

“We actually know the results would have been similar if it hadn’t been for the pandemic,” Saxe says. “You might wonder, did kids start to think very differently about sharing saliva when suddenly everybody was talking about hygiene all the time? So, for that question, it’s very useful that we had an initial data set collected before the pandemic.”

Doing the second set of studies on Zoom also allowed the researchers to recruit a much more diverse group of children because the subjects were not limited to families who could come to the lab in Cambridge during normal working hours.

In future work, the researchers hope to perform similar studies with infants in cultures that have different types of family structures. In adult subjects, they plan to use functional magnetic resonance imaging (fMRI) to study what parts of the brain are involved in making saliva-based assessments about social relationships.

The research was funded by the National Institutes of Health; the Patrick J. McGovern Foundation; the Guggenheim Foundation; a Social Sciences and Humanities Research Council Doctoral Fellowship; MIT’s Center for Brains, Minds, and Machines; and the Siegel Foundation.

MIT Future Founders Initiative announces prize competition to promote female entrepreneurs in biotech

In a fitting sequel to its entrepreneurship “boot camp” educational lecture series last fall, the MIT Future Founders Initiative has announced the MIT Future Founders Prize Competition, supported by Northpond Ventures, and named the MIT faculty cohort that will participate in this year’s competition. The Future Founders Initiative was established in 2020 to promote female entrepreneurship in biotech.

Despite increasing representation at MIT, female science and engineering faculty found biotech startups at a disproportionately low rate compared with their male colleagues, according to research led by the initiative’s founders, MIT Professor Sangeeta Bhatia, MIT Professor and President Emerita Susan Hockfield, and MIT Amgen Professor of Biology Emerita Nancy Hopkins. In addition to highlighting systemic gender imbalances in the biotech pipeline, the initiative’s founders emphasize that the dearth of female biotech entrepreneurs represents lost opportunities for society as a whole — a bottleneck in the proliferation of publicly accessible medical and technological innovation.

“A very common myth is that representation of women in the pipeline is getting better with time … We can now look at the data … and simply say, ‘that’s not true’,” said Bhatia, who is the John and Dorothy Wilson Professor of Health Sciences and Technology and Electrical Engineering and Computer Science, and a member of MIT’s Koch Institute for Integrative Cancer Research and the Institute for Medical Engineering and Science, in an interview for the March/April 2021 MIT Faculty Newsletter. “We need new solutions. This isn’t just about waiting and being optimistic.”

Inspired by generous funding from Northpond Labs, the research and development-focused affiliate of Northpond Ventures, and by the success of other MIT prize incentive competitions such as the Climate Tech and Energy Prize, the Future Founders Initiative Prize Competition will be structured as a learning cohort in which participants will be supported in commercializing their existing inventions with instruction in market assessments, fundraising, and business capitalization, as well as other programming. The program, which is being run as a partnership between the MIT School of Engineering and the Martin Trust Center for MIT Entrepreneurship, provides hands-on opportunities to learn from industry leaders about their experiences, ranging from licensing technology to creating early startup companies. Bhatia and Kit Hickey, an entrepreneur-in-residence at the Martin Trust Center and senior lecturer at the MIT Sloan School of Management, are co-directors of the program.

“The competition is an extraordinary effort to increase the number of female faculty who translate their research and ideas into real-world applications through entrepreneurship,” says Anantha Chandrakasan, dean of the MIT School of Engineering and Vannevar Bush Professor of Electrical Engineering and Computer Science. “Our hope is that this likewise serves as an opportunity for participants to gain exposure to and experience with the many ways in which they could achieve commercial impact through their research.”

At the end of the program, the cohort members will pitch their ideas to a selection committee composed of MIT faculty, biotech founders, and venture capitalists. The grand prize winner will receive $250,000 in discretionary funds, and two runners-up will receive $100,000. The winners will be announced at a showcase event, at which the entire cohort will present their work. All participants will also receive a $10,000 stipend for participating in the competition.

“The biggest payoff is not identifying the winner of the competition,” says Bhatia. “Really, what we are doing is creating a cohort … and then, at the end, we want to create a lot of visibility for these women and make them ‘top of mind’ in the community.”

The Selection Committee members for the MIT Future Founders Prize Competition are:

  • Bill Aulet, professor of the practice in the MIT Sloan School of Management and managing director of the Martin Trust Center for MIT Entrepreneurship
  • Sangeeta Bhatia, the John and Dorothy Wilson Professor of Electrical Engineering and Computer Science at MIT; a member of MIT’s Koch Institute for Integrative Cancer Research and the Institute for Medical Engineering and Science; and founder of Hepregen, Glympse Bio, and Satellite Bio
  • Kit Hickey, senior lecturer in the MIT Sloan School of Management and entrepreneur-in-residence at the Martin Trust Center
  • Susan Hockfield, MIT president emerita and professor of neuroscience
  • Andrea Jackson, director at Northpond Ventures
  • Harvey Lodish, professor of biology and biomedical engineering at MIT and founder of Genzyme, Millennium, and Rubius
  • Fiona Murray, associate dean for innovation and inclusion in the MIT Sloan School of Management; the William Porter Professor of Entrepreneurship; co-director of the MIT Innovation Initiative; and faculty director of the MIT Legatum Center
  • Amy Schulman, founding CEO of Lyndra Therapeutics and partner at Polaris Partners
  • Nandita Shangari, managing director at Novartis Venture Fund

“As an investment firm dedicated to supporting entrepreneurs, we are acutely aware of the limited number of companies founded and led by women in academia. We believe humanity should be benefiting from brilliant ideas and scientific breakthroughs from women in science, which could address many of the world’s most pressing problems. Together with MIT, we are providing an opportunity for women faculty members to enhance their visibility and gain access to the venture capital ecosystem,” says Andrea Jackson, director at Northpond Ventures.

“This first cohort is representative of the unrealized opportunity this program is designed to capture. While it will take a while to build a robust community of connections and role models, I am pleased and confident this program will make entrepreneurship more accessible and inclusive to our community, which will greatly benefit society,” says Susan Hockfield, MIT president emerita.

The MIT Future Founders Prize Competition cohort members were selected from schools across MIT, including the School of Science, the School of Engineering, and the Media Lab within the School of Architecture and Planning. They are:

Polina Anikeeva is professor of materials science and engineering and brain and cognitive sciences, an associate member of the McGovern Institute for Brain Research, and the associate director of the Research Laboratory of Electronics. She is particularly interested in advancing the possibility of future neuroprosthetics, through biologically-informed materials synthesis, modeling, and device fabrication. Anikeeva earned her BS in biophysics from St. Petersburg State Polytechnic University and her PhD in materials science and engineering from MIT.

Natalie Artzi is principal research scientist in the Institute for Medical Engineering and Science and an assistant professor in the Department of Medicine at Brigham and Women’s Hospital. Through the development of smart materials and medical devices, her research seeks to “personalize” medical interventions based on the specific presentation of diseased tissue in a given patient. She earned both her BS and PhD in chemical engineering from the Technion-Israel Institute of Technology.

Laurie A. Boyer is professor of biology and biological engineering in the Department of Biology. By studying how diverse molecular programs cross-talk to regulate the developing heart, she seeks to develop new therapies that can help repair cardiac tissue. She earned her BS in biomedical science from Framingham State University and her PhD from the University of Massachusetts Medical School.

Tal Cohen is associate professor in the departments of Civil and Environmental Engineering and Mechanical Engineering. She wields her understanding of how materials behave when they are pushed to their extremes to tackle engineering challenges in medicine and industry. She earned her BS, MS, and PhD in aerospace engineering from the Technion-Israel Institute of Technology.

Canan Dagdeviren is assistant professor of media arts and sciences and the LG Career Development Professor of Media Arts and Sciences. Her research focus is on creating new sensing, energy harvesting, and actuation devices that can be stretched, wrapped, folded, twisted, and implanted onto the human body while maintaining optimal performance. She earned her BS in physics engineering from Hacettepe University, her MS in materials science and engineering from Sabanci University, and her PhD in materials science and engineering from the University of Illinois at Urbana-Champaign.

Ariel Furst is the Raymond (1921) & Helen St. Laurent Career Development Professor in the Department of Chemical Engineering. Her research addresses challenges in global health and sustainability, utilizing electrochemical methods and biomaterials engineering. She is particularly interested in new technologies that detect and treat disease. Furst earned her BS in chemistry at the University of Chicago and her PhD at Caltech.

Kristin Knouse is assistant professor in the Department of Biology and the Koch Institute for Integrative Cancer Research. She develops tools to investigate the molecular regulation of organ injury and regeneration directly within a living organism, with the goal of uncovering novel therapeutic avenues for diverse diseases. She earned her BS in biology from Duke University and her MD and PhD through the Harvard-MIT MD-PhD program.

Elly Nedivi is the William R. (1964) & Linda R. Young Professor of Neuroscience at the Picower Institute for Learning and Memory with joint appointments in the departments of Brain and Cognitive Sciences and Biology. Through her research of neurons, genes, and proteins, Nedivi focuses on elucidating the cellular mechanisms that control plasticity in both the developing and adult brain. She earned her BS in biology from Hebrew University and her PhD in neuroscience from Stanford University.

Ellen Roche is associate professor in the Department of Mechanical Engineering and the Institute for Medical Engineering and Science, and the W.M. Keck Career Development Professor in Biomedical Engineering. Borrowing principles and design forms she observes in nature, Roche works to develop implantable therapeutic devices that assist cardiac and other biological function. She earned her bachelor’s degree in biomedical engineering from the National University of Ireland at Galway, her MS in bioengineering from Trinity College Dublin, and her PhD from Harvard University.

A key brain region responds to faces similarly in infants and adults

Within the visual cortex of the adult brain, a small region is specialized to respond to faces, while nearby regions show strong preferences for bodies or for scenes such as landscapes.

Neuroscientists have long hypothesized that it takes many years of visual experience for these areas to develop in children. However, a new MIT study suggests that these regions form much earlier than previously thought. In a study of babies ranging in age from two to nine months, the researchers identified areas of the infant visual cortex that already show strong preferences for either faces, bodies, or scenes, just as they do in adults.

“These data push our picture of development, making babies’ brains look more similar to adults, in more ways, and earlier than we thought,” says Rebecca Saxe, the John W. Jarve Professor of Brain and Cognitive Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the new study.

Using functional magnetic resonance imaging (fMRI), the researchers collected usable data from more than 50 infants, a far greater number than any research lab has been able to scan before. This allowed them to examine the infant visual cortex in a way that had not been possible until now.

“This is a result that’s going to make a lot of people have to really grapple with their understanding of the infant brain, the starting point of development, and development itself,” says Heather Kosakowski, an MIT graduate student and the lead author of the study, which appears today in Current Biology.

Distinctive regions

More than 20 years ago, Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience at MIT, used fMRI to discover the fusiform face area: a small region of the visual cortex that responds much more strongly to faces than any other kind of visual input.

Since then, Kanwisher and her colleagues have also identified parts of the visual cortex that respond to bodies (the extrastriate body area, or EBA), and scenes (the parahippocampal place area, or PPA).

“There is this set of functionally very distinctive regions that are present in more or less the same place in pretty much every adult,” says Kanwisher, who is also a member of MIT’s Center for Brains, Minds, and Machines, and an author of the new study. “That raises all these questions about how these regions develop. How do they get there, and how do you build a brain that has such similar structure in each person?”

One way to try to answer those questions is to investigate when these highly selective regions first develop in the brain. A longstanding hypothesis is that it takes several years of visual experience for these regions to gradually become selective for their specific targets. Scientists who study the visual cortex have found similar selectivity patterns in children as young as 4 or 5 years old, but there have been few studies of children younger than that.

In 2017, Saxe and one of her graduate students, Ben Deen, reported the first successful use of fMRI to study the brains of awake infants. That study, which included data from nine babies, suggested that while infants did have areas that respond to faces and scenes, those regions were not yet highly selective. For example, the fusiform face area did not show a strong preference for human faces over every other kind of input, including human bodies or the faces of other animals.

However, that study was limited by the small number of subjects, and also by its reliance on an fMRI coil that the researchers had developed especially for babies, which did not offer as high-resolution imaging as the coils used for adults.

For the new study, the researchers wanted to try to get better data, from more babies. They built a new scanner that is more comfortable for babies and also more powerful, with resolution similar to that of fMRI scanners used to study the adult brain.

After going into the specialized scanner, along with a parent, the babies watched videos that showed either faces, body parts such as kicking feet or waving hands, objects such as toys, or natural scenes such as mountains.

The researchers recruited nearly 90 babies for the study and collected usable fMRI data from 52, half of whom contributed higher-resolution data collected using the new coil. Their analysis revealed that specific regions of the infant visual cortex show highly selective responses to faces, body parts, and natural scenes, in the same locations where those responses are seen in the adult brain. The selectivity for natural scenes, however, was not as strong as for faces or body parts.

The infant brain

The findings suggest that scientists’ conception of how the infant brain develops may need to be revised to accommodate the observation that these specialized regions start to resemble those of adults sooner than anyone had expected.

“The thing that is so exciting about these data is that they revolutionize the way we understand the infant brain,” Kosakowski says. “A lot of theories have grown up in the field of visual neuroscience to accommodate the view that you need years of development for these specialized regions to emerge. And what we’re saying is actually, no, you only really need a couple of months.”

Because their data on the area of the brain that responds to scenes was not as strong as for the other locations they looked at, the researchers now plan to pursue additional studies of that region, this time showing babies images on a much larger screen that will more closely mimic the experience of being within a scene. For that study, they plan to use near-infrared spectroscopy (NIRS), a non-invasive imaging technique that doesn’t require the participant to be inside a scanner.

“That will let us ask whether young babies have robust responses to visual scenes that we underestimated in this study because of the visual constraints of the experimental setup in the scanner,” Saxe says.

The researchers are now further analyzing the data they gathered for this study in hopes of learning more about how development of the fusiform face area progresses from the youngest babies they studied to the oldest. They also hope to perform new experiments examining other aspects of cognition, including how babies’ brains respond to language and music.

The research was funded by the National Science Foundation, the National Institutes of Health, the McGovern Institute, and the Center for Brains, Minds, and Machines.