New MRI probe can reveal more of the brain’s inner workings

Using a novel probe for functional magnetic resonance imaging (fMRI), MIT biological engineers have devised a way to monitor individual populations of neurons and reveal how they interact with each other.

Similar to how the gears of a clock interact in specific ways to turn the clock’s hands, different parts of the brain interact to perform a variety of tasks, such as generating behavior or interpreting the world around us. The new MRI probe could potentially allow scientists to map those networks of interactions.

“With regular fMRI, we see the action of all the gears at once. But with our new technique, we can pick up individual gears that are defined by their relationship to the other gears, and that’s critical for building up a picture of the mechanism of the brain,” says Alan Jasanoff, an MIT professor of biological engineering, brain and cognitive sciences, and nuclear science and engineering.

Using this technique, which involves genetically targeting the MRI probe to specific populations of cells in animal models, the researchers were able to identify neural populations involved in a circuit that responds to rewarding stimuli. The new MRI probe could also enable studies of many other brain circuits, the researchers say.

Jasanoff, who is also an associate investigator at the McGovern Institute, is the senior author of the study, which appears today in Nature Neuroscience. The lead authors of the paper are recent MIT PhD recipient Souparno Ghosh and former MIT research scientist Nan Li.

Tracing connections

Traditional fMRI measures changes in blood flow in the brain as a proxy for neural activity. When neurons receive signals from other neurons, calcium flows into the cells, triggering the release of a diffusible gas called nitric oxide. Nitric oxide acts in part as a vasodilator that increases blood flow to the area.

Imaging calcium directly can offer a more precise picture of brain activity, but that type of imaging usually requires fluorescent chemicals and invasive procedures. The MIT team wanted to develop a method that could work across the brain without that type of invasiveness.

“If we want to figure out how brain-wide networks of cells and brain-wide mechanisms function, we need something that can be detected deep in tissue and preferably across the entire brain at once,” Jasanoff says. “The way that we chose to do that in this study was to essentially hijack the molecular basis of fMRI itself.”

The researchers created a genetic probe, delivered by viruses, that codes for a protein that sends out a signal whenever the neuron is active. This protein, which the researchers called NOSTIC (nitric oxide synthase for targeting image contrast), is an engineered form of an enzyme called nitric oxide synthase. The NOSTIC protein can detect elevated calcium levels that arise during neural activity; it then generates nitric oxide, leading to an artificial fMRI signal that arises only from cells that contain NOSTIC.

The probe is delivered by a virus that is injected into a particular site, after which it travels along axons of neurons that connect to that site. That way, the researchers can label every neural population that feeds into a particular location.

“When we use this virus to deliver our probe in this way, it causes the probe to be expressed in the cells that provide input to the location where we put the virus,” Jasanoff says. “Then, by performing functional imaging of those cells, we can start to measure what makes input to that region take place, or what types of input arrive at that region.”

Turning the gears

In the new study, the researchers used their probe to label populations of neurons that project to the striatum, a region that is involved in planning movement and responding to reward. In rats, they were able to determine which neural populations send input to the striatum during or immediately following a rewarding stimulus — in this case, deep brain stimulation of the lateral hypothalamus, a brain center that is involved in appetite and motivation, among other functions.

One question that researchers have had about deep brain stimulation of the lateral hypothalamus is how wide-ranging the effects are. In this study, the MIT team showed that several neural populations, located in regions including the motor cortex and the entorhinal cortex, which is involved in memory, send input into the striatum following deep brain stimulation.

“It’s not simply input from the site of the deep brain stimulation or from the cells that carry dopamine. There are these other components, both distally and locally, that shape the response, and we can put our finger on them because of the use of this probe,” Jasanoff says.

During these experiments, neurons also generate regular fMRI signals, so in order to distinguish the signals that are coming specifically from the genetically altered neurons, the researchers perform each experiment twice: once with the probe on, and once following treatment with a drug that inhibits the probe. By measuring the difference in fMRI activity between these two conditions, they can determine how much activity is present in probe-containing cells specifically.
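
To make the idea concrete, the sketch below shows the kind of subtraction this two-condition design implies: the same experiment is analyzed once with the probe active and once with it pharmacologically inhibited, and the difference is attributed to probe-containing cells. It is a simplified illustration written for this article, not the authors' analysis pipeline; the array shapes and function name are hypothetical.

```python
import numpy as np

def probe_specific_signal(fmri_probe_on, fmri_probe_inhibited):
    """Estimate activity attributable to NOSTIC-expressing cells.

    Hypothetical sketch: each argument is a 4-D array (x, y, z, time) of
    fMRI response amplitudes from the same stimulation paradigm, recorded
    once with the probe active and once after treatment with the inhibitor.
    Subtracting the two maps leaves the signal that depends on the probe.
    """
    return np.asarray(fmri_probe_on) - np.asarray(fmri_probe_inhibited)

# Toy example: two small "response maps" over 5 time points.
on = np.random.rand(3, 1, 1, 5) + 0.2   # probe active: extra, probe-driven signal
off = np.random.rand(3, 1, 1, 5)        # probe inhibited: ordinary fMRI signal only
delta = probe_specific_signal(on, off)  # activity in probe-containing cells
print(delta.shape)
```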

The researchers now hope to use this approach, which they call hemogenetics, to study other networks in the brain, beginning with an effort to identify some of the regions that receive input from the striatum following deep brain stimulation.

“One of the things that’s exciting about the approach that we’re introducing is that you can imagine applying the same tool at many sites in the brain and piecing together a network of interlocking gears, which consist of these input and output relationships,” Jasanoff says. “This can lead to a broad perspective on how the brain works as an integrated whole, at the level of neural populations.”

The research was funded by the National Institutes of Health and the MIT Simons Center for the Social Brain.

Singing in the brain

For the first time, MIT neuroscientists have identified a population of neurons in the human brain that lights up when we hear singing, but not other types of music.

These neurons, found in the auditory cortex, appear to respond to the specific combination of voice and music, but not to either regular speech or instrumental music. Exactly what they are doing is unknown and will require more work to uncover, the researchers say.

“The work provides evidence for relatively fine-grained segregation of function within the auditory cortex, in a way that aligns with an intuitive distinction within music,” says Sam Norman-Haignere, a former MIT postdoc who is now an assistant professor of neuroscience at the University of Rochester Medical Center.

The work builds on a 2015 study in which the same research team used functional magnetic resonance imaging (fMRI) to identify a population of neurons in the brain’s auditory cortex that responds specifically to music. In the new work, the researchers used recordings of electrical activity taken at the surface of the brain, which gave them much more precise information than fMRI.

“There’s one population of neurons that responds to singing, and then very nearby is another population of neurons that responds broadly to lots of music. At the scale of fMRI, they’re so close that you can’t disentangle them, but with intracranial recordings, we get additional resolution, and that’s what we believe allowed us to pick them apart,” says Norman-Haignere.

Norman-Haignere is the lead author of the study, which appears today in the journal Current Biology. Josh McDermott, an associate professor of brain and cognitive sciences, and Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience, both members of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds and Machines (CBMM), are the senior authors of the study.

Neural recordings

In their 2015 study, the researchers used fMRI to scan the brains of participants as they listened to a collection of 165 sounds, including different types of speech and music, as well as everyday sounds such as finger tapping or a dog barking. For that study, the researchers devised a novel method of analyzing the fMRI data, which allowed them to identify six neural populations with different response patterns, including the music-selective population and another population that responds selectively to speech.

In the new study, the researchers hoped to obtain higher-resolution data using a technique known as electrocorticography (ECoG), which allows electrical activity to be recorded by electrodes placed inside the skull. This offers a much more precise picture of electrical activity in the brain compared to fMRI, which measures blood flow in the brain as a proxy of neuron activity.

“With most of the methods in human cognitive neuroscience, you can’t see the neural representations,” Kanwisher says. “Most of the kind of data we can collect can tell us that here’s a piece of brain that does something, but that’s pretty limited. We want to know what’s represented in there.”

Electrocorticography cannot typically be performed in humans because it is an invasive procedure, but it is often used to monitor patients with epilepsy who are about to undergo surgery to treat their seizures. Patients are monitored over several days so that doctors can determine where their seizures are originating before operating. During that time, if patients agree, they can participate in studies that involve measuring their brain activity while performing certain tasks. For this study, the MIT team was able to gather data from 15 participants over several years.

For those participants, the researchers played the same set of 165 sounds that they used in the earlier fMRI study. The location of each patient’s electrodes was determined by their surgeons, so some did not pick up any responses to auditory input, but many did. Using a novel statistical analysis that they developed, the researchers were able to infer the types of neural populations that produced the data that were recorded by each electrode.

“When we applied this method to this data set, this neural response pattern popped out that only responded to singing,” Norman-Haignere says. “This was a finding we really didn’t expect, so it very much justifies the whole point of the approach, which is to reveal potentially novel things you might not think to look for.”

That song-specific population of neurons had very weak responses to either speech or instrumental music, and therefore is distinct from the music- and speech-selective populations identified in their 2015 study.

Music in the brain

In the second part of their study, the researchers devised a mathematical method to combine the data from the intracranial recordings with the fMRI data from their 2015 study. Because fMRI can cover a much larger portion of the brain, this allowed them to determine more precisely the locations of the neural populations that respond to singing.

“This way of combining ECoG and fMRI is a significant methodological advance,” McDermott says. “A lot of people have been doing ECoG over the past 10 or 15 years, but it’s always been limited by this issue of the sparsity of the recordings. Sam is really the first person who figured out how to combine the improved resolution of the electrode recordings with fMRI data to get better localization of the overall responses.”

The song-specific hotspot that they found is located at the top of the temporal lobe, near regions that are selective for language and music. That location suggests that the song-specific population may be responding to features such as the perceived pitch, or the interaction between words and perceived pitch, before sending information to other parts of the brain for further processing, the researchers say.

The researchers now hope to learn more about what aspects of singing drive the responses of these neurons. They are also working with MIT Professor Rebecca Saxe’s lab to study whether infants have music-selective areas, in hopes of learning more about when and how these brain regions develop.

The research was funded by the National Institutes of Health, the U.S. Army Research Office, the National Science Foundation, the NSF Science and Technology Center for Brains, Minds, and Machines, the Fondazione Neurone, the Howard Hughes Medical Institute, and the Kristin R. Pressman and Jessica J. Pourian ’13 Fund at MIT.

Assessing connections in the brain’s reading network

When we read, information zips between language processing centers in different parts of the brain, traveling along neural highways in the white matter. This coordinated activity allows us to decipher words and comprehend their meaning. Many neuroscientists suspect that variations in white matter may underlie differences in reading ability, and hope that by determining which white matter tracts are involved, they will be able to guide the development of more effective interventions for children who struggle with reading skills.

In a January 14, 2022, online publication in the journal NeuroImage, scientists at MIT’s McGovern Institute report on the largest brain imaging study to date to evaluate the relationship between white matter structure and reading ability. Their findings suggest that if white matter deficiencies are a significant cause of reading disability, new strategies will be needed to pin them down.

White matter is composed of bundles of insulated nerve fibers. It can be thought of as the internet of the brain, says senior author John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology at MIT. “It’s the connectivity: the way that the brain communicates at some distance to orchestrate higher-level thoughts, and abilities like reading,” explains Gabrieli, who is also a professor of brain and cognitive sciences and an investigator at the McGovern Institute.

The left inferior cerebellar peduncle, a white matter tract that connects the cerebellum to the brainstem and spinal cord. Image: Steven Meisler

Long-distance connections

To visualize white matter and study its structure, neuroscientists use an imaging technique called diffusion-weighted imaging (DWI). Images are collected in an MRI scanner by tracking the movements of water molecules in the brain. A key measure used to interpret these images is fractional anisotropy (FA), which varies with many physical features of nerve fibers, such as their density, diameter, and degree of insulation. Although FA does not measure any of these properties directly, it is considered an indicator of structural integrity within white matter tracts.
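
As a point of reference, FA is conventionally computed from the three eigenvalues of the diffusion tensor estimated at each voxel. The snippet below shows the standard formula; it is illustrative only and is not the analysis software used in the study.

```python
import numpy as np

def fractional_anisotropy(l1, l2, l3):
    """Standard FA formula from the three diffusion-tensor eigenvalues.

    FA = sqrt(1/2) * sqrt((l1-l2)^2 + (l2-l3)^2 + (l3-l1)^2)
                   / sqrt(l1^2 + l2^2 + l3^2)

    Returns 0 for fully isotropic diffusion (l1 = l2 = l3) and approaches 1
    when diffusion is confined to a single direction, as in a tight,
    coherently oriented fiber bundle.
    """
    num = np.sqrt((l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2)
    den = np.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    return np.sqrt(0.5) * num / den

print(fractional_anisotropy(1.0, 1.0, 1.0))   # isotropic diffusion -> 0.0
print(fractional_anisotropy(1.7, 0.3, 0.2))   # strongly directional -> ~0.84
```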

Several studies have found the FA of one or more white matter tracts to be lower in children with low reading scores or dyslexia than in children with stronger reading abilities. But those studies are small—usually involving only a few dozen children—and their findings are inconsistent. So it has been difficult to attribute reading problems to poor connections between specific parts of the brain.

Hoping to glean more conclusive results, Gabrieli and Steven Meisler, a graduate student in the Harvard Program in Speech and Hearing Bioscience and Technology who is completing his doctoral work in the Gabrieli lab, turned to a large collection of high-quality brain images available through the Child Mind Institute’s Healthy Brain Network. Using DWI images collected from 686 children and state-of-the-art methods of analysis, they assessed the FA of 20 white matter tracts that are thought to be important for reading.

The children represented in the dataset had diverse reading abilities, but surprisingly, when they compared children with and without reading disability, Meisler and Gabrieli found no significant differences in the FA of any of the 20 tracts. Nor did they find any correlation between white matter FA and children’s overall reading scores.

More detailed analysis did link reading ability to the FA of two particular white matter tracts. The researchers only detected the correlation when they narrowed their analysis to children older than eight, who are usually reading to learn, rather than learning to read. Within this group, they found two white matter tracts whose FA was lower in children who struggled with a specific reading skill: reading “pseudowords.” The ability to read nonsense words is used to assess knowledge of the relationship between letters and sounds, since real words can be recognized instead through experience and memory.

The right superior longitudinal fasciculus, a white matter tract that connects frontal brain regions to parietal areas. The research team found that fractional anisotropy (FA) of the right superior longitudinal fasciculus and the left inferior cerebellar peduncles (shown above) correlated positively with pseudoword reading ability among children ages 9 and older. Image: Steven Meisler

The first of these tracts connects language processing centers in the frontal and parietal brain regions. The other contains fibers that connect the brainstem with the cerebellum, and may help control the eye movements needed to see and track words. The FA differences that Meisler and Gabrieli linked to reading scores were small, and it’s not yet clear what they mean. Since less cohesive structure in these two tracts was linked to lower pseudoword-reading scores only in older children, it may be a consequence of living with a reading disability rather than a cause, Meisler says.

The findings don’t rule out a role for white matter structure in reading disability, but they do suggest that researchers will need a different approach to find relevant features. “Our results suggest that FA does not relate to reading abilities as much as previously thought,” Meisler says. In future studies, he says, researchers will likely need to take advantage of more advanced methods of image analysis to assess features that more directly reflect white matter’s ability to serve as a conduit of information.

The craving state

This story originally appeared in the Winter 2022 issue of BrainScan.

***

For people struggling with substance use disorders — and there are about 35 million of them worldwide — treatment options are limited. Even among those who seek help, relapse is common. In the United States, an epidemic of opioid addiction has been declared a public health emergency.

A 2019 survey found that 1.6 million people nationwide had an opioid use disorder, and the crisis has surged since the start of the COVID-19 pandemic. The Centers for Disease Control and Prevention estimates that more than 100,000 people died of drug overdose between April 2020 and April 2021 — nearly 30 percent more overdose deaths than occurred during the same period the previous year.

A deeper understanding of what addiction does to the brain and body is urgently needed to pave the way to interventions that reliably release affected individuals from its grip. At the McGovern Institute, researchers are turning their attention to addiction’s driving force: the deep, recurring craving that makes people prioritize drug use over all other wants and needs.

“When you are in that state, then it seems nothing else matters,” says McGovern Investigator Fan Wang. “At that moment, you can discard everything: your relationship, your house, your job, everything. You only want the drug.”

With a new addiction initiative catalyzed by generous gifts from Institute co-founder Lore Harp McGovern and others, McGovern scientists with diverse expertise have come together to begin clarifying the neurobiology that underlies the craving state. They plan to dissect the neural transformations associated with craving at every level — from the drug-induced chemical changes that alter neuronal connections and activity to how these modifications impact signaling brain-wide. Ultimately, the McGovern team hopes not just to understand the craving state, but to find a way to relieve it — for good.

“If we can understand the craving state and correct it, or at least relieve a little bit of the pressure,” explains Wang, who will help lead the addiction initiative, “then maybe we can at least give people a chance to use their top-down control to not take the drug.”

The craving cycle

For individuals suffering from substance use disorders, craving fuels a cyclical pattern of escalating drug use. Following the euphoria induced by a drug like heroin or cocaine, depression sets in, accompanied by a drug craving motivated by the desire to relieve that suffering. And as addiction progresses, the peaks and valleys of this cycle dip lower: the pleasant feelings evoked by the drug become weaker, while the negative effects a person experiences in its absence worsen. The craving remains, and increasing use of the drug is required to relieve it.

By the time addiction sets in, the brain has been altered in ways that go beyond a drug’s immediate effects on neural signaling.

These insidious changes leave individuals susceptible to craving — and the vulnerable state endures. Long after the physical effects of withdrawal have subsided, people with substance use disorders can find their craving returns, triggered by exposure to a small amount of the drug, physical or social cues associated with previous drug use, or stress. So researchers will need to determine not only how different parts of the brain interact with one another during craving and how individual cells and the molecules within them are affected by the craving state — but also how things change as addiction develops and progresses.

Circuits, chemistry and connectivity

One clear starting point is the circuitry the brain uses to control motivation. Thanks in part to decades of research in the lab of McGovern Investigator Ann Graybiel, neuroscientists know a great deal about how these circuits learn which actions lead to pleasure and which lead to pain, and how they use that information to establish habits and evaluate the costs and benefits of complex decisions.

Graybiel’s work has shown that drugs of abuse strongly activate dopamine-responsive neurons in a part of the brain called the striatum, whose signals promote habit formation. By increasing the amount of dopamine that neurons release, these drugs motivate users to prioritize repeated drug use over other kinds of rewards, and to choose the drug in spite of pain or other negative effects. Her group continues to investigate the naturally occurring molecules that control these circuits, as well as how they are hijacked by drugs of abuse.

Distribution of opioid receptors targeted by morphine (shown in blue) in two regions in the dorsal striatum and nucleus accumbens of the mouse brain. Image: Ann Graybiel

In Fan Wang’s lab, work investigating the neural circuits that mediate the perception of physical pain has led her team to question the role of emotional pain in craving. As they investigated the source of pain sensations in the brain, they identified neurons in an emotion-regulating center called the central amygdala that appear to suppress physical pain in animals. Now, Wang wants to know whether it might be possible to modulate neurons involved in emotional pain to ameliorate the negative state that provokes drug craving.

These animal studies will be key to identifying the cellular and molecular changes that set the brain up for recurring cravings. And as McGovern scientists begin to investigate what happens in the brains of rodents that have been trained to self-administer addictive drugs like fentanyl or cocaine, they expect to encounter tremendous complexity.

McGovern Associate Investigator Polina Anikeeva, whose lab has pioneered new technologies that will help the team investigate the full spectrum of changes that underlie craving, says it will be important to consider impacts on the brain’s chemistry, firing patterns, and connectivity. To that end, multifunctional research probes developed in her lab will be critical to monitoring and manipulating neural circuits in animal models.

Imaging technology developed by investigator Ed Boyden will also enable nanoscale protein visualization brain-wide. An important goal will be to identify a neural signature of the craving state. With such a signal, researchers can begin to explore how to shut off that craving — possibly by directly modulating neural signaling.

Targeted treatments

“One of the reasons to study craving is because it’s a natural treatment point,” says McGovern Associate Investigator Alan Jasanoff. “And the dominant kind of approaches that people in our team think about are approaches that relate to neural circuits — to the specific connections between brain regions and how those could be changed.” The hope, he explains, is that it might be possible to identify a brain region whose activity is disrupted during the craving state, then use clinical brain stimulation methods to restore normal signaling — within that region, as well as in other connected parts of the brain.

To identify the right targets for such a treatment, it will be crucial to understand how the biology uncovered in laboratory animals reflects what happens in people with substance use disorders. Functional imaging in John Gabrieli’s lab can help bridge the gap between clinical and animal research by revealing patterns of brain activity associated with the craving state in both humans and rodents. A new technique developed in Jasanoff’s lab makes it possible to focus on the activity between specific regions of an animal’s brain. “By doing that, we hope to build up integrated models of how information passes around the brain in craving states, and of course also in control states where we’re not experiencing craving,” he explains.

In delving into the biology of the craving state, McGovern scientists are embarking on largely unexplored territory — and they do so with both optimism and urgency. “It’s hard to not appreciate just the size of the problem, and just how devastating addiction is,” says Anikeeva. “At this point, it just seems almost irresponsible to not work on it, especially when we do have the tools and we are interested in the general brain regions that are important for that problem. I would say that there’s almost a civic duty.”

McGovern Institute Director receives highest honor from the Society for Neuroscience

The Society for Neuroscience will present its highest honor, the Ralph W. Gerard Prize in Neuroscience, to McGovern Institute Director Robert Desimone at its annual meeting today.

The Gerard Prize is named for neuroscientist Ralph W. Gerard who helped establish the Society for Neuroscience, and honors “outstanding scientists who have made significant contributions to neuroscience throughout their careers.” Desimone will share the $30,000 prize with Vanderbilt University neuroscientist Jon Kaas.

Desimone is being recognized for his career contributions to understanding cortical function in the visual system. His seminal work on attention spans decades, including the discovery of a neural basis for covert attention in the temporal cortex and the creation of the biased competition model, suggesting that attention is biased towards material relevant to the task. More recent work revealed how synchronized brain rhythms help enhance visual processing. Desimone also helped discover both face cells and neural populations that identify objects even when the size or location of the object changes. His long list of contributions includes mapping the extrastriate visual cortex, publishing the first report of columns for motion processing outside the primary visual cortex, and discovering how the temporal cortex retains memories. Desimone’s work has moved the field from broad strokes of input and output to a more nuanced understanding of cortical function that allows the brain to make sense of the environment.

At its annual meeting, beginning today, the Society will honor Desimone and other leading researchers who have made significant contributions to neuroscience — including the understanding of cognitive processes, drug addiction, neuropharmacology, and theoretical models — with this year’s Outstanding Achievement Awards.

“The Society is honored to recognize this year’s awardees, whose groundbreaking research has revolutionized our understanding of the brain, from the level of the synapse to the structure and function of the cortex, shedding light on how vision, memory, perception of touch and pain, and drug addiction are organized in the brain,” said SfN President Barry Everitt. “This exceptional group of neuroscientists has made fundamental discoveries, paved the way for new therapeutic approaches, and introduced new tools that will lay the foundation for decades of research to come.”

A connectome for cognition

The lateral prefrontal cortex is a particularly well-connected part of the brain. Neurons there communicate with processing centers throughout the rest of the brain, gathering information and sending commands to implement executive control over behavior. Now, scientists at MIT’s McGovern Institute have mapped these connections and revealed an unexpected order within them: The lateral prefrontal cortex, they’ve found, contains maps of other major parts of the brain’s cortex.

The researchers, led by postdoctoral researcher Rui Xu and McGovern Institute Director Robert Desimone, report that the lateral prefrontal cortex contains a set of maps that represent the major processing centers in the other parts of the cortex, including the temporal and parietal lobes. Their organization likely supports the lateral prefrontal cortex’s roles managing complex functions such as attention and working memory, which require integrating information from multiple sources and coordinating activity elsewhere in the brain. The findings are published November 4, 2021, in the journal Neuron.

Topographic maps

The layout of the maps, which allows certain regions of the lateral prefrontal cortex to directly interact with multiple areas across the brain, indicates that this part of the brain is particularly well positioned for its role. “This function of integrating and then sending back control signals to appropriate levels in the processing hierarchies of the brain is clearly one of the reasons that prefrontal cortex is so important for cognition and executive control,” says Desimone.

In many parts of the brain, neurons’ physical organization has been found to reflect the information represented there. For example, individual neurons’ positions within the visual cortex mirror the layout of the cells in the retina from which they receive input, such that the spatial pattern of neuronal activity in this part of the brain provides an approximate view of the image seen by the eyes. If you fixate on the first letter of a word, for instance, the next letters will map to sequential locations in the visual cortex. Likewise, the arm and hand are mapped to adjacent locations in the somatosensory cortex, where the brain receives sensory information from the skin.

Topographic maps such as these, which have been found primarily in brain regions involved in sensory and motor processing, offer clues about how information is stored and processed in the brain. Neuroscientists have hoped that topographic maps within the lateral prefrontal cortex will provide insight into the complex cognitive processes that are carried out there—but such maps have been elusive.

Previous anatomical studies had given little indication of how different parts of the brain connect preferentially with specific locations within the prefrontal cortex to give rise to regional specialization of cognitive functions. Recently, however, the Desimone lab identified two areas within the lateral prefrontal cortex of monkeys with specific roles in focusing an animal’s visual attention. Knowing that some spots within the lateral prefrontal cortex were wired for specific functions, they wondered if others were, too. They decided they needed a detailed map of the connections emanating from this part of the brain, and devised a plan to plot connectivity from hundreds of points within the lateral prefrontal cortex.

Cortical connectome

To generate a wiring diagram, or connectome, Xu used functional MRI to monitor activity throughout a monkey’s brain as he stimulated specific points within its lateral prefrontal cortex. He moved systematically through the brain region, stimulating points spaced as close as one millimeter apart, and noting which parts of the brain lit up in response. Ultimately, the team collected data from about 100 sites for each of two monkeys.

As the data accumulated, clear patterns emerged. Different regions within the lateral prefrontal cortex formed orderly connections with each of five processing centers throughout the brain. Points within each of these maps connected to sites with the same relative positions in the distant processing centers. Because some parts of the lateral prefrontal cortex are wired to interact with more than one processing center, these maps overlap, positioning the prefrontal cortex to integrate information from different sources.
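
One simple way to picture what an "orderly," map-like connection means is to ask whether nearby stimulation sites evoke responses at nearby locations in a target area. The sketch below tests that property by correlating pairwise distances; it is a hypothetical illustration, not the analysis reported in the paper, and the function and variable names are invented for this example.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def topography_score(stim_sites, response_peaks):
    """Crude test of order-preserving ("map-like") connectivity.

    stim_sites: (n, 3) coordinates of stimulated points in lateral PFC.
    response_peaks: (n, 3) coordinates of the strongest evoked response in
        one distant processing center, one row per stimulation site.

    If connections are topographic, nearby stimulation sites should evoke
    responses at nearby locations, so the two sets of pairwise distances
    should be positively correlated.
    """
    rho, p = spearmanr(pdist(stim_sites), pdist(response_peaks))
    return rho, p

# Toy example: a perfectly ordered map plus a little noise.
rng = np.random.default_rng(0)
sites = rng.uniform(0, 10, size=(20, 3))
peaks = sites * 0.5 + rng.normal(0, 0.2, size=(20, 3))  # same layout, shrunk
print(topography_score(sites, peaks))
```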

The team found significant overlap, for example, between the maps of the temporal cortex, a part of the brain that uses visual information to recognize objects, and the parietal cortex, which computes the spatial relationships between objects. “It is mapping objects and space together in a way that would integrate the two systems,” explains Desimone. “And then on top of that, it has other maps of other brain systems that are partially overlapping with that—so they’re all sort of coming together.”

Desimone and Xu say the new connectome will help guide further investigations of how the prefrontal cortex orchestrates complex cognitive processes. “I think this really gives us a direction for the future, because we now need to understand the cognitive concepts that are mapped there,” Desimone says.

Already, they say, the connectome offers encouragement that a deeper understanding of complex cognition is within reach. “This topographic connectivity gives the lateral prefrontal some specific advantage to serve its function,” says Xu. “This suggests that lateral prefrontal cortex has a fine organization, just like the more studied parts of the brain, so the approaches that have been used to study these other regions may also benefit the studies of high-level cognition.”

Artificial intelligence sheds light on how the brain processes language

In the past few years, artificial intelligence models of language have become very good at certain tasks. Most notably, they excel at predicting the next word in a string of text; this technology helps search engines and texting apps predict the next word you are going to type.

The most recent generation of predictive language models also appears to learn something about the underlying meaning of language. These models can not only predict the word that comes next, but also perform tasks that seem to require some degree of genuine understanding, such as question answering, document summarization, and story completion.

Such models were designed to optimize performance for the specific function of predicting text, without attempting to mimic anything about how the human brain performs this task or understands language. But a new study from MIT neuroscientists suggests the underlying function of these models resembles the function of language-processing centers in the human brain.

Computer models that perform well on other types of language tasks do not show this similarity to the human brain, offering evidence that the human brain may use next-word prediction to drive language processing.

“The better the model is at predicting the next word, the more closely it fits the human brain,” says Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience, a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines (CBMM), and an author of the new study. “It’s amazing that the models fit so well, and it very indirectly suggests that maybe what the human language system is doing is predicting what’s going to happen next.”

Joshua Tenenbaum, a professor of computational cognitive science at MIT and a member of CBMM and MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL); and Evelina Fedorenko, the Frederick A. and Carole J. Middleton Career Development Associate Professor of Neuroscience and a member of the McGovern Institute, are the senior authors of the study, which appears this week in the Proceedings of the National Academy of Sciences.

Martin Schrimpf, an MIT graduate student who works in CBMM, is the first author of the paper.

Making predictions

The new, high-performing next-word prediction models belong to a class of models called deep neural networks. These networks contain computational “nodes” that form connections of varying strength, and layers that pass information between each other in prescribed ways.

Over the past decade, scientists have used deep neural networks to create models of vision that can recognize objects as well as the primate brain does. Research at MIT has also shown that the underlying function of visual object recognition models matches the organization of the primate visual cortex, even though those computer models were not specifically designed to mimic the brain.

In the new study, the MIT team used a similar approach to compare language-processing centers in the human brain with language-processing models. The researchers analyzed 43 different language models, including several that are optimized for next-word prediction. These include a model called GPT-3 (Generative Pre-trained Transformer 3), which, given a prompt, can generate text similar to what a human would produce. Other models were designed to perform different language tasks, such as filling in a blank in a sentence.

As each model was presented with a string of words, the researchers measured the activity of the nodes that make up the network. They then compared these patterns to activity in the human brain, measured in subjects performing three language tasks: listening to stories, reading sentences one at a time, and reading sentences in which one word is revealed at a time. These human datasets included functional magnetic resonance imaging (fMRI) data and intracranial electrocorticographic measurements taken in people undergoing brain surgery for epilepsy.
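
The general logic of such comparisons can be sketched in a few lines: fit a linear mapping from a model's internal activations to a measured brain response and score it on held-out stimuli. The example below is a schematic of that approach with made-up data; the study's actual metrics and preprocessing may differ, and the function name is hypothetical.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold
from scipy.stats import pearsonr

def brain_score(model_activations, brain_responses, n_splits=5):
    """Cross-validated fit between model activations and brain data.

    model_activations: (n_stimuli, n_units) hidden-unit activity for each
        sentence or word sequence presented to the model.
    brain_responses: (n_stimuli,) response of one fMRI voxel or electrode
        to the same stimuli.

    A ridge regression is trained to predict the brain response from the
    model's activations; the score is the correlation between predicted
    and measured responses on held-out stimuli, averaged over folds.
    """
    scores = []
    folds = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train, test in folds.split(model_activations):
        reg = Ridge(alpha=1.0).fit(model_activations[train], brain_responses[train])
        pred = reg.predict(model_activations[test])
        scores.append(pearsonr(pred, brain_responses[test])[0])
    return float(np.mean(scores))

# Toy example: 100 "stimuli", 50 model units, one synthetic brain channel.
rng = np.random.default_rng(0)
acts = rng.normal(size=(100, 50))
brain = acts @ rng.normal(size=50) + rng.normal(scale=0.5, size=100)
print(brain_score(acts, brain))
```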

They found that the best-performing next-word prediction models had activity patterns that very closely resembled those seen in the human brain. Activity in those same models was also highly correlated with measures of human behavior, such as how fast people were able to read the text.

“We found that the models that predict the neural responses well also tend to best predict human behavior responses, in the form of reading times. And then both of these are explained by the model performance on next-word prediction. This triangle really connects everything together,” Schrimpf says.

“A key takeaway from this work is that language processing is a highly constrained problem: The best solutions to it that AI engineers have created end up being similar, as this paper shows, to the solutions found by the evolutionary process that created the human brain. Since the AI network didn’t seek to mimic the brain directly — but does end up looking brain-like — this suggests that, in a sense, a kind of convergent evolution has occurred between AI and nature,” says Daniel Yamins, an assistant professor of psychology and computer science at Stanford University, who was not involved in the study.

Game changer

One of the key computational features of predictive models such as GPT-3 is an element known as a forward one-way predictive transformer. This kind of transformer is able to make predictions of what is going to come next, based on previous sequences. A significant feature of this transformer is that it can make predictions based on a very long prior context (hundreds of words), not just the last few words.
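
In code, the "one-way" property corresponds to a causal attention mask: each position can draw on all of the preceding context, however long, but never on words that come after it. The snippet below is a generic illustration of that mask, not code from GPT-3 or any particular model.

```python
import numpy as np

def causal_mask(seq_len):
    """Lower-triangular mask used in forward (one-way) predictive transformers.

    Position i may attend only to positions 0..i, so each prediction is based
    on the full prior context (which can span hundreds of tokens) but never
    on future words.
    """
    return np.tril(np.ones((seq_len, seq_len), dtype=bool))

mask = causal_mask(5)
print(mask.astype(int))
# Row i has ones only up to column i: token 4 can "see" tokens 0-4,
# while token 0 can see only itself.
```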

Scientists have not found any brain circuits or learning mechanisms that correspond to this type of processing, Tenenbaum says. However, the new findings are consistent with hypotheses that have been previously proposed that prediction is one of the key functions in language processing, he says.

“One of the challenges of language processing is the real-time aspect of it,” he says. “Language comes in, and you have to keep up with it and be able to make sense of it in real time.”

The researchers now plan to build variants of these language processing models to see how small changes in their architecture affect their performance and their ability to fit human neural data.

“For me, this result has been a game changer,” Fedorenko says. “It’s totally transforming my research program, because I would not have predicted that in my lifetime we would get to these computationally explicit models that capture enough about the brain so that we can actually leverage them in understanding how the brain works.”

The researchers also plan to try to combine these high-performing language models with some computer models Tenenbaum’s lab has previously developed that can perform other kinds of tasks such as constructing perceptual representations of the physical world.

“If we’re able to understand what these language models do and how they can connect to models which do things that are more like perceiving and thinking, then that can give us more integrative models of how things work in the brain,” Tenenbaum says. “This could take us toward better artificial intelligence models, as well as giving us better models of how more of the brain works and how general intelligence emerges, than we’ve had in the past.”

The research was funded by a Takeda Fellowship; the MIT Shoemaker Fellowship; the Semiconductor Research Corporation; the MIT Media Lab Consortia; the MIT Singleton Fellowship; the MIT Presidential Graduate Fellowship; the Friends of the McGovern Institute Fellowship; the MIT Center for Brains, Minds, and Machines, through the National Science Foundation; the National Institutes of Health; MIT’s Department of Brain and Cognitive Sciences; and the McGovern Institute.

Other authors of the paper are Idan Blank PhD ’16 and graduate students Greta Tuckute, Carina Kauf, and Eghbal Hosseini.

New bionics center established at MIT with $24 million gift

A deepening understanding of the brain has created unprecedented opportunities to alleviate the challenges posed by disability. Scientists and engineers are taking design cues from biology itself to create revolutionary technologies that restore the function of bodies affected by injury, aging, or disease – from prosthetic limbs that effortlessly navigate tricky terrain to digital nervous systems that move the body after a spinal cord injury.

With the establishment of the new K. Lisa Yang Center for Bionics, MIT is pushing forward the development and deployment of enabling technologies that communicate directly with the nervous system to mitigate a broad range of disabilities. The center’s scientists, clinicians, and engineers will work together to create, test, and disseminate bionic technologies that integrate with both the body and mind.

The center is funded by a $24 million gift to MIT’s McGovern Institute for Brain Research from philanthropist Lisa Yang, a former investment banker committed to advocacy for individuals with visible and invisible disabilities.

Philanthropist Lisa Yang is committed to advocacy for individuals with visible and invisible disabilities. Photo: Caitlin Cunningham

Her previous gifts to MIT have also enabled the establishment of the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics in Neuroscience, the Hock E. Tan and K. Lisa Yang Center for Autism Research, the Y. Eva Tan Professorship in Neurotechnology, and the endowed K. Lisa Yang Post-Baccalaureate Program.

“The K. Lisa Yang Center for Bionics will provide a dynamic hub for scientists, engineers and designers across MIT to work together on revolutionary answers to the challenges of disability,” says MIT President L. Rafael Reif. “With this visionary gift, Lisa Yang is unleashing a powerful collaborative strategy that will have broad impact across a large spectrum of human conditions – and she is sending a bright signal to the world that the lives of individuals who experience disability matter deeply.”

An interdisciplinary approach

To develop prosthetic limbs that move as the brain commands or optical devices that bypass an injured spinal cord to stimulate muscles, bionic developers must integrate knowledge from a diverse array of fields—from robotics and artificial intelligence to surgery, biomechanics, and design. The K. Lisa Yang Center for Bionics will be deeply interdisciplinary, uniting experts from three MIT schools: Science, Engineering, and Architecture and Planning. With clinical and surgical collaborators at Harvard Medical School, the center will ensure that research advances are tested rapidly and reach people in need, including those in traditionally underserved communities.

To support ongoing efforts to move toward a future without disability, the center will also provide four endowed fellowships for MIT graduate students working in bionics or other research areas focused on improving the lives of individuals who experience disability.

“I am thrilled to support MIT on this major research effort to enable powerful new solutions that improve the quality of life for individuals who experience disability,” says Yang. “This new commitment extends my philanthropic investment into the realm of physical disabilities, and I look forward to the center’s positive impact on countless lives, here in the US and abroad.”

The center will be led by Hugh Herr, a professor of media arts and sciences at MIT’s Media Lab, and Ed Boyden, the Y. Eva Tan Professor of Neurotechnology at MIT, a professor of biological engineering, brain and cognitive sciences, and media arts and sciences, and an investigator at MIT’s McGovern Institute and the Howard Hughes Medical Institute.

A double amputee himself, Herr is a pioneer in the development of bionic limbs to improve mobility for those with physical disabilities. “The world profoundly needs relief from the disabilities imposed by today’s nonexistent or broken technologies. We must continually strive towards a technological future in which disability is no longer a common life experience,” says Herr. “I am thrilled that the Yang Center for Bionics will help to measurably improve the human experience for so many.”

Boyden, who is a renowned creator of tools to analyze and control the brain, will play a key role in merging bionics technologies with the nervous system. “The Yang Center for Bionics will be a research center unlike any other in the world,” he says. “A deep understanding of complex biological systems, coupled with rapid advances in human-machine bionic interfaces, mean we will soon have the capability to offer entirely new strategies for individuals who experience disability. It is an honor to be part of the center’s founding team.”

Center priorities

In its first four years, the K. Lisa Yang Center for Bionics will focus on developing and testing three bionic technologies:

  • Digital nervous system: to eliminate movement disorders caused by spinal cord injuries, using computer-controlled muscle activations to control limb movements while simultaneously stimulating spinal cord repair
  • Brain-controlled limb exoskeletons: to assist weak muscles and enable natural movement for people affected by stroke or musculoskeletal disorders
  • Bionic limb reconstruction: to restore natural, brain-controlled movements as well as the sensation of touch and proprioception (awareness of position and movement) from bionic limbs

A fourth priority will be developing a mobile delivery system to ensure patients in medically underserved communities have access to prosthetic limb services. Investigators will field test a system that uses a mobile clinic to conduct the medical imaging needed to design personalized, comfortable prosthetic limbs and to fit the prostheses to patients where they live. Investigators plan to initially bring this mobile delivery system to Sierra Leone, where thousands of people suffered amputations during the country’s 11-year civil war. While the population of persons with amputation continues to increase each year in Sierra Leone, today less than 10% of persons in need benefit from functional prostheses. Through the mobile delivery system, a key center objective is to scale up production and access of functional limb prostheses for Sierra Leoneans in dire need.

Philanthropist Lisa Yang (far left) and MIT bionics researcher Hugh Herr (second from left) met with Sierra Leone’s President Julius Maada Bio (second from right) and Chief Innovation Officer for the Directorate of Science, Technology and Innovation, David Moinina Sengeh, to discuss the mobile clinic component of the new K. Lisa Yang Center for Bionics at MIT. Photo: David Moinina Sengeh

“The mobile prosthetics service fueled by the K. Lisa Yang Center for Bionics at MIT is an innovative solution to a global problem,” said Julius Maada Bio, President of Sierra Leone. “I am proud that Sierra Leone will be the first site for deploying this state-of-the-art digital design and fabrication process. As leader of a government that promotes innovative technologies and prioritizes human capital development, I am overjoyed that this pilot project will give Sierra Leoneans (especially in rural areas) access to quality limb prostheses and thus improve their quality of life.”

Together, Herr and Boyden will launch research at the bionics center with three other MIT faculty: Assistant Professor of Media Arts and Sciences Canan Dagdeviren, Walter A. Rosenblith Professor of Cognitive Neuroscience Nancy Kanwisher, and David H. Koch (1962) Institute Professor Robert Langer. They will work closely with three clinical collaborators at Harvard Medical School: orthopedic surgeon Marco Ferrone, plastic surgeon Matthew Carty, and Nancy Oriol, Faculty Associate Dean for Community Engagement in Medical Education.

“Lisa Yang and I share a vision for a future in which each and every person in the world has the right to live without a debilitating disability if they so choose,” adds Herr. “The Yang Center will be a potent catalyst for true innovation and impact in the bionics space, and I am overjoyed to work with my colleagues at MIT, and our accomplished clinical partners at Harvard, to make important steps forward to help realize this vision.”

Jacqueline Lees and Rebecca Saxe named associate deans of science

Jacqueline Lees and Rebecca Saxe have been named associate deans serving in the MIT School of Science. Lees is the Virginia and D.K. Ludwig Professor for Cancer Research and is currently the associate director of the Koch Institute for Integrative Cancer Research, as well as an associate department head and professor in the Department of Biology at MIT. Saxe is the John W. Jarve (1978) Professor in Brain and Cognitive Sciences and the associate head of the Department of Brain and Cognitive Sciences (BCS); she is also an associate investigator in the McGovern Institute for Brain Research.

Lees and Saxe will both contribute to the school’s diversity, equity, inclusion, and justice (DEIJ) activities, as well as develop and implement mentoring and other career-development programs to support the community. From their home departments, Saxe and Lees bring years of DEIJ and mentorship experience to bear on the expansion of school-level initiatives.

Lees currently serves on the dean’s science council in her capacity as associate director of the Koch Institute. In this new role as associate dean for the School of Science, she will bring her broad administrative and programmatic experiences to bear on the next phase for DEIJ and mentoring activities.

Lees joined MIT in 1994 as a faculty member in MIT’s Koch Institute (then the Center for Cancer Research) and Department of Biology. Her research focuses on regulators that control cellular proliferation, terminal differentiation, and stemness — functions that are frequently deregulated in tumor cells. She dissects the role of these proteins in normal cell biology and development, and establishes how their deregulation contributes to tumor development and metastasis.

Since 2000, she has served on the Department of Biology’s graduate program committee, and played a major role in expanding the diversity of the graduate student population. Lees also serves on DEIJ committees in her home department, as well as at the Koch Institute.

Together with co-chair Boleslaw Wyslouch, director of the Laboratory for Nuclear Science, Lees led the ReseArch Scientist CAreer LadderS (RASCALS) committee, tasked with evaluating career trajectories for research staff in the School of Science and making recommendations to recruit and retain talented staff, rewarding them for their contributions to the school’s research enterprise.

“Jackie is a powerhouse in translational research, demonstrating how fundamental work at the lab bench is critical for making progress at the patient bedside,” says Nergis Mavalvala, dean of the School of Science. “With Jackie’s dedicated and thoughtful partnership, we can continue to lead in basic research and develop the recruitment, retention, and mentoring necessary to support our community.”

Saxe will join Lees in supporting and developing programming across the school that could also provide direction more broadly at the Institute.

“Rebecca is an outstanding researcher in social cognition and a dedicated educator — someone who wants our students not only to learn, but to thrive,” says Mavalvala. “I am grateful that Rebecca will join the dean’s leadership team and bring her mentorship and leadership skills to enhance the school.”

For example, in collaboration with former department head James DiCarlo, the BCS department has focused on faculty mentorship of graduate students; and, in collaboration with Professor Mark Bear, the department developed postdoc salary and benefit standards. Both initiatives have become models at MIT.

With colleague Laura Schulz, Saxe also served as co-chair of the Committee on Medical Leave and Hospitalizations (CMLH), which outlined ways to enhance MIT’s current leave and hospitalization procedures and policies for undergraduate and graduate students. Saxe was also awarded MIT’s Committed to Caring award for excellence in graduate student mentorship, as well as the School of Science’s award for excellence in undergraduate teaching.

In her research, Saxe studies human social cognition, using a combination of behavioral testing and brain imaging technologies. She is best known for her work on brain regions specialized for abstract concepts, such as “theory of mind” tasks that involve understanding the mental states of other people. Her TED Talk, “How we read each other’s minds” has been viewed more than 3 million times. She also studies the development of the human brain during early infancy.

She obtained her PhD from MIT and was a Harvard University junior fellow before joining the MIT faculty in 2006. In 2014, the National Academy of Sciences named her one of two recipients of the Troland Award for investigators age 40 or younger “to recognize unusual achievement and further empirical research in psychology regarding the relationships of consciousness and the physical world.” In 2020, Saxe was named a John Simon Guggenheim Foundation Fellow.

Saxe and Lees will also work closely with Kuheli Dutt, newly hired assistant dean for diversity, equity, and inclusion, and other members of the dean’s science council on school-level initiatives and strategy.

“I’m so grateful that Rebecca and Jackie have agreed to take on these new roles,” Mavalvala says. “And I’m super excited to work with these outstanding thought partners as we tackle the many puzzles that I come across as dean.”

Having more conversations to boost brain development

Engaging children in more conversation may be all it takes to strengthen language processing networks in their brains, according to a new study by MIT scientists.

Childhood experiences, including language exposure, have a profound impact on the brain’s development. Now, scientists led by McGovern Institute investigator John Gabrieli have shown that when families change their communication style to incorporate more back-and-forth exchanges between child and adult, key brain regions grow and children’s language abilities advance. Other parts of the brain may be impacted, as well.

In a study of preschool and kindergarten-aged children and their families, Gabrieli, Harvard postdoctoral researcher Rachel Romeo, and colleagues found that increasing conversation had a measurable impact on children’s brain structure and cognition within just a few months. “In just nine weeks, fluctuations in how often parents spoke with their kids appear to make a difference in brain development, language development, and executive function development,” Gabrieli says. The team’s findings are reported in the June issue of the journal Developmental Cognitive Neuroscience.

“We’re excited because this adds a little more evidence to the idea that [the brain] is malleable,” adds Romeo, who is now an assistant professor at the University of Maryland, College Park. “It suggests that in a relatively short period of time, the brain can change in positive ways.”

30 million word gap

In the 1990s, researchers determined that there are dramatic discrepancies in the language that children are exposed to early in life. They found that children from high-income families heard about 30 million more words during their first three years than children from lower-income families—and those exposed to more language tended to do better on tests of language development, vocabulary, and reading comprehension.

In 2018, Gabrieli and Romeo found that it was not the volume of language that made a difference, however, but instead the extent to which children were engaged in conversation. They measured this by counting the number of “conversational turns” that children experienced over a few days—that is, the frequency with which dialogue switched between child and adult. When they compared the brains of children who experienced significantly different levels of these conversational turns, they found structural and functional differences in regions known to be involved in language and speech.
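
Conceptually, a conversational turn is just a switch in who is speaking. The toy example below counts such switches in a labeled transcript; the studies themselves used wearable audio recorders and automated speech-analysis software rather than code like this, so treat it purely as an illustration of measuring back-and-forth exchange rather than total word count.

```python
def count_conversational_turns(utterances):
    """Count switches between adult and child speakers in a transcript.

    utterances: ordered list of speaker labels, e.g. ["adult", "child", ...].
    Each time the speaker changes from adult to child or child to adult,
    one conversational turn is counted.
    """
    turns = 0
    for previous, current in zip(utterances, utterances[1:]):
        if current != previous:
            turns += 1
    return turns

transcript = ["adult", "adult", "child", "adult", "child", "child", "adult"]
print(count_conversational_turns(transcript))  # 4 speaker switches
```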

After observing these differences, the researchers wanted to know whether altering a child’s language environment would impact their brain’s future development. To find out, they enrolled the families of fifty-two children between the ages of four and seven in a study, and randomly assigned half of the families to participate in a nine-week parent training program. While the program did not focus exclusively on language, there was an emphasis on improving communication, and parents were encouraged to engage in meaningful dialogues with their children.

Romeo and colleagues sent families home with audio recording devices to capture all of the language children were exposed to over two full days, first at the outset of the program and again after the nine-week training was complete. When they analyzed the recordings, they found that in many families, conversation between children and their parents had increased—and children who experienced the greatest increase in conversational turns showed the greatest improvements in language skills as well as in executive functions—a set of skills that includes memory, attention, and self-control.

 

Clusters where changes in cortical thickness are significantly correlated with changes in children’s experienced conversational turns. Scatterplots represent the average change in cortical thickness as a function of the pre-to-post changes in conversational turns.

MRI scans showed that over the nine-week study, these children also experienced the most growth in two key brain areas: a sound processing center called the supramarginal gyrus and a region involved in language processing and speech production called Broca’s area. Intriguingly, these areas are very close to parts of the brain involved in executive function and social cognition.

“The brain networks for executive functioning, language, and social cognition are deeply intertwined and going through these really important periods of development during this preschool and transition-to-school period,” Romeo says. “Conversational turns seem to be going beyond just linguistic information. They seem to be about human communication and cognition at a deeper level. I think the brain results are suggestive of that, because there are so many language regions that could pop out, but these happen to be language regions that also are associated with other cognitive functions.”

Talk more

Gabrieli and Romeo say they are interested in exploring simple ways—such as web- or smartphone-based tools—to support parents in communicating with their children in ways that foster brain development. It’s particularly exciting, Gabrieli notes, that introducing more conversation can impact brain development at the age when children are preparing to begin school.

“Kids who arrive to school school-ready in language skills do better in school for years to come,” Gabrieli says. “So I think it’s really exciting to be able to see that the school readiness is so flexible and dynamic in nine weeks of experience.”

“We know this is not a trivial ask of people,” he says. “There’s a lot of factors that go into people’s lives— their own prior experiences, the pressure of their circumstances. But it’s a doable thing. You don’t have to have an expensive tutor or some deluxe pre-K environment. You can just talk more with your kid.”