How the brain coordinates speaking and breathing

MIT researchers have discovered a brain circuit that drives vocalization and ensures that you talk only when you breathe out, and stop talking when you breathe in.

McGovern Investigator Fan Wang. Photo: Caitlin Cunningham

The newly discovered circuit controls two actions that are required for vocalization: narrowing of the larynx and exhaling air from the lungs. The researchers also found that this vocalization circuit is under the command of a brainstem region that regulates the breathing rhythm, which ensures that breathing remains dominant over speech.

“When you need to breathe in, you have to stop vocalization. We found that the neurons that control vocalization receive direct inhibitory input from the breathing rhythm generator,” says Fan Wang, an MIT professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Jaehong Park, a Duke University graduate student who is currently a visiting student at MIT, is the lead author of the study, which appears today in Science. Other authors of the paper include MIT technical associates Seonmi Choi and Andrew Harrahill, former MIT research scientist Jun Takatoh, and Duke University researchers Shengli Zhao and Bao-Xia Han.

Vocalization control

Located in the larynx, the vocal cords are two muscular bands that can open and close. When they are mostly closed, or adducted, air exhaled from the lungs generates sound as it passes through the cords.

The MIT team set out to study how the brain controls this vocalization process, using a mouse model. Mice communicate with each other using sounds known as ultrasonic vocalizations (USVs), which they produce using the unique whistling mechanism of exhaling air through a small hole between nearly closed vocal cords.

“We wanted to understand what are the neurons that control the vocal cord adduction, and then how do those neurons interact with the breathing circuit?” Wang says.

To figure that out, the researchers used a technique that allows them to map the synaptic connections between neurons. They knew that vocal cord adduction is controlled by laryngeal motor neurons, so they began by tracing backward to find the neurons that innervate those motor neurons.

This revealed that one major source of input is a group of premotor neurons found in the hindbrain region called the retroambiguus nucleus (RAm). Previous studies have shown that this area is involved in vocalization, but it wasn’t known exactly which part of the RAm was required or how it enabled sound production.

Laryngeal premotor neurons (green) and Fos (magenta) labeling in the RAm. Image: Fan Wang

The researchers found that the RAm neurons labeled by this synaptic tracing were strongly activated during USVs. That observation prompted the team to use an activity-dependent method to target these vocalization-specific RAm neurons, which they termed RAmVOC. They then used chemogenetics and optogenetics to explore what would happen if they silenced or stimulated the RAmVOC neurons. When the researchers blocked the RAmVOC neurons, the mice were no longer able to produce USVs or any other kind of vocalization. Their vocal cords did not close, and their abdominal muscles did not contract, as they normally do during exhalation for vocalization.

Conversely, when the RAmVOC neurons were activated, the vocal cords closed, the mice exhaled, and USVs were produced. However, if the stimulation lasted two seconds or longer, these USVs would be interrupted by inhalations, suggesting that the process is under the control of the same part of the brain that regulates breathing.

“Breathing is a survival need,” Wang says. “Even though these neurons are sufficient to elicit vocalization, they are under the control of breathing, which can override our optogenetic stimulation.”

Rhythm generation

Additional synaptic mapping revealed that neurons in a part of the brainstem called the pre-Bötzinger complex, which acts as a rhythm generator for inhalation, provide direct inhibitory input to the RAmVOC neurons.

“The pre-Bötzinger complex generates inhalation rhythms automatically and continuously, and the inhibitory neurons in that region project to these vocalization premotor neurons and essentially can shut them down,” Wang says.

This ensures that breathing remains dominant over speech production, and that we have to pause to breathe while speaking.

The researchers believe that although human speech production is more complex than mouse vocalization, the circuit they identified in mice plays a conserved role in speech production and breathing in humans.

“Even though the exact mechanism and complexity of vocalization in mice and humans is really different, the fundamental vocalization process, called phonation, which requires vocal cord closure and the exhalation of air, is shared in both the human and the mouse,” Park says.

The researchers now hope to study how other functions such as coughing and swallowing food may be affected by the brain circuits that control breathing and vocalization.

The research was funded by the National Institutes of Health.

School of Science announces 2024 Infinite Expansion Awards

The MIT School of Science has announced nine postdocs and research scientists as recipients of the 2024 Infinite Expansion Award, which highlights extraordinary members of the MIT community.

The following are the 2024 School of Science Infinite Expansion winners:

  • Sarthak Chandra, a research scientist in the Department of Brain and Cognitive Sciences, was nominated by Professor Ila Fiete, who wrote, “He has expanded the research abilities of my group by being a versatile and brilliant scientist, by drawing connections with a different area that he was an expert in from his PhD training, and by being a highly involved and caring mentor.”
  • Michal Fux, a research scientist in the Department of Brain and Cognitive Sciences, was nominated by Professor Pawan Sinha, who wrote, “She is one of those figurative beams of light that not only brilliantly illuminate scientific questions, but also enliven a research team.”
  • Andrew Savinov, a postdoc in the Department of Biology, was nominated by Associate Professor Gene-Wei Li, who wrote, “Andrew is an extraordinarily creative and accomplished biophysicist, as well as an outstanding contributor to the broader MIT community.”
  • Ho Fung Cheng, a postdoc in the Department of Chemistry, was nominated by Professor Jeremiah Johnson, who wrote, “His impact on research and our departmental community during his time at MIT has been outstanding, and I believe that he will be a world-class teacher and research group leader in his independent career next year.”
  • Gabi Wenzel, a postdoc in the Department of Chemistry, was nominated by Assistant Professor Brett McGuire, who wrote, “In the one year since Gabi joined our team, she has become an indispensable leader, demonstrating exceptional skill, innovation, and dedication in our challenging research environment.”
  • Yu-An Zhang, a postdoc in the Department of Chemistry, was nominated by Professor Alison Wendlandt, who wrote, “He is a creative, deep-thinking scientist and a superb organic chemist. But above all, he is an off-scale mentor and a cherished coworker.”
  • Wouter Van de Pontseele, a senior postdoc in the Laboratory for Nuclear Science, was nominated by Professor Joseph Formaggio, who wrote, “He is a talented scientist with an intense creativity, scholarship, and student mentorship record. In the time he has been with my group, he has led multiple facets of my experimental program and has been a wonderful citizen of the MIT community.”
  • Alexander Shvonski, a lecturer in the Department of Physics, was nominated by Assistant Professor Andrew Vanderburg, who wrote, “… I have been blown away by Alex’s knowledge of education research and best practices, his skills as a teacher and course content designer, and I have been extremely grateful for his assistance.”
  • David Stoppel, a research scientist in The Picower Institute for Learning and Memory, was nominated by Professor Mark Bear and his research group, who wrote, “As impressive as his research achievements might be, David’s most genuine qualification for this award is his incredible commitment to mentorship and the dissemination of knowledge.”

Winners are honored with a monetary award and will be celebrated with family, friends, and nominators at a later date, along with recipients of the Infinite Mile Award.

Imaging method reveals new cells and structures in human brain tissue

Using a novel microscopy technique, MIT and Brigham and Women’s Hospital/Harvard Medical School researchers have imaged human brain tissue in greater detail than ever before, revealing cells and structures that were not previously visible.

McGovern Institute Investigator Edward Boyden. Photo: Justin Knight

Among their findings, the researchers discovered that some “low-grade” brain tumors contain more putatively aggressive tumor cells than expected, suggesting that some of these tumors may be more aggressive than previously thought.

The researchers hope that this technique could eventually be deployed to diagnose tumors, generate more accurate prognoses, and help doctors choose treatments.

“We’re starting to see how important the interactions of neurons and synapses with the surrounding brain are to the growth and progression of tumors. A lot of those things we really couldn’t see with conventional tools, but now we have a tool to look at those tissues at the nanoscale and try to understand these interactions,” says Pablo Valdes, a former MIT postdoc who is now an assistant professor of neuroscience at the University of Texas Medical Branch and the lead author of the study.

Edward Boyden, the Y. Eva Tan Professor in Neurotechnology at MIT; a professor of biological engineering, media arts and sciences, and brain and cognitive sciences; a Howard Hughes Medical Institute investigator; and a member of MIT’s McGovern Institute for Brain Research and Koch Institute for Integrative Cancer Research; and E. Antonio Chiocca, a professor of neurosurgery at Harvard Medical School and chair of neurosurgery at Brigham and Women’s Hospital, are the senior authors of the study, which appears today in Science Translational Medicine.

Making molecules visible

The new imaging method is based on expansion microscopy, a technique developed in Boyden’s lab in 2015 based on a simple premise: Instead of using powerful, expensive microscopes to obtain high-resolution images, the researchers devised a way to expand the tissue itself, allowing it to be imaged at very high resolution with a regular light microscope.

The technique works by embedding the tissue into a polymer that swells when water is added, and then softening up and breaking apart the proteins that normally hold tissue together. Then, adding water swells the polymer, pulling all the proteins apart from each other. This tissue enlargement allows researchers to obtain images with a resolution of around 70 nanometers, which was previously possible only with very specialized and expensive microscopes such as scanning electron microscopes.
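
To make the numbers concrete (a back-of-the-envelope illustration; the roughly 4.5-fold factor is typical of expansion microscopy generally, not a figure reported in this study): a standard light microscope is diffraction-limited to roughly 300 nanometers, so expanding the tissue about 4.5-fold in each dimension yields an effective resolution near 70 nanometers:

```latex
r_{\mathrm{eff}} \;=\; \frac{r_{\mathrm{diffraction}}}{k_{\mathrm{expansion}}} \;\approx\; \frac{300\,\mathrm{nm}}{4.5} \;\approx\; 67\,\mathrm{nm}
```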

In 2017, the Boyden lab developed a way to expand preserved human tissue specimens, but the chemical reagents that they used also destroyed the proteins that the researchers were interested in labeling. By labeling the proteins with fluorescent antibodies before expansion, the proteins’ location and identity could be visualized after the expansion process was complete. However, the antibodies typically used for this kind of labeling can’t easily squeeze through densely packed tissue before it’s expanded.

So, for this study, the authors devised a different tissue-softening protocol that breaks up the tissue but preserves proteins in the sample. After the tissue is expanded, proteins can be labeled with commercially available fluorescent antibodies. The researchers can then perform several rounds of imaging, with three or four different proteins labeled in each round. This labeling enables many more structures to be imaged, because once the tissue is expanded, antibodies can squeeze through and label proteins they couldn’t previously reach.

“We open up the space between the proteins so that we can get antibodies into crowded spaces that we couldn’t otherwise,” Valdes says. “We saw that we could expand the tissue, we could decrowd the proteins, and we could image many, many proteins in the same tissue by doing multiple rounds of staining.”

Working with MIT Assistant Professor Deblina Sarkar, the researchers demonstrated a form of this “decrowding” in 2022 using mouse tissue.

The new study resulted in a decrowding technique for use with human brain tissue samples that are used in clinical settings for pathological diagnosis and to guide treatment decisions. These samples can be more difficult to work with because they are usually embedded in paraffin and treated with other chemicals that need to be broken down before the tissue can be expanded.

In this study, the researchers labeled up to 16 different molecules per tissue sample. These molecules included markers for a variety of structures, such as axons and synapses, as well as markers that identify cell types such as astrocytes and cells that form blood vessels. They also labeled molecules linked to tumor aggressiveness and neurodegeneration.

Using this approach, the researchers analyzed healthy brain tissue, along with samples from patients with two types of glioma — high-grade glioblastoma, which is the most aggressive primary brain tumor, with a poor prognosis, and low-grade gliomas, which are considered less aggressive.

“We wanted to look at brain tumors so that we can understand them better at the nanoscale level, and by doing that, to be able to develop better treatments and diagnoses in the future. At this point, it was more developing a tool to be able to understand them better, because currently in neuro-oncology, people haven’t done much in terms of super-resolution imaging,” Valdes says.

A diagnostic tool

To identify aggressive tumor cells in the gliomas they studied, the researchers labeled vimentin, a protein that is found in highly aggressive glioblastomas. To their surprise, they found many more vimentin-expressing tumor cells in low-grade gliomas than had been seen using any other method.

“This tells us something about the biology of these tumors, specifically, how some of them probably have a more aggressive nature than you would suspect by doing standard staining techniques,” Valdes says.

When glioma patients undergo surgery, tumor samples are preserved and analyzed using immunohistochemistry staining, which can reveal certain markers of aggressiveness, including some of the markers analyzed in this study.

“These are incurable brain cancers, and this type of discovery will allow us to figure out which cancer molecules to target so we can design better treatments. It also proves the profound impact of having clinicians like us at the Brigham and Women’s interacting with basic scientists such as Ed Boyden at MIT to discover new technologies that can improve patient lives,” Chiocca says.

The researchers hope their expansion microscopy technique could allow doctors to learn much more about patients’ tumors, helping them to determine how aggressive the tumor is and guiding treatment choices. Valdes now plans to do a larger study of tumor types to try to establish diagnostic guidelines based on the tumor traits that can be revealed using this technique.

“Our hope is that this is going to be a diagnostic tool to pick up marker cells, interactions, and so on, that we couldn’t before,” he says. “It’s a practical tool that will help the clinical world of neuro-oncology and neuropathology look at neurological diseases at the nanoscale like never before, because fundamentally it’s a very simple tool to use.”

Boyden’s lab also plans to use this technique to study other aspects of brain function, in healthy and diseased tissue.

“Being able to do nanoimaging is important because biology is about nanoscale things — genes, gene products, biomolecules — and they interact over nanoscale distances,” Boyden says. “We can study all sorts of nanoscale interactions, including synaptic changes, immune interactions, and changes that occur during cancer and aging.”

The research was funded by K. Lisa Yang, the Howard Hughes Medical Institute, John Doerr, Open Philanthropy, the Bill and Melinda Gates Foundation, the Koch Institute Frontier Research Program, the National Institutes of Health, and the Neurosurgery Research and Education Foundation.

Simons Center’s collaborative approach propels autism research, at MIT and beyond

The secret to the success of MIT’s Simons Center for the Social Brain is in the name. With a founding philosophy of “collaboration and community” that has supported scores of scientists across more than a dozen Boston-area research institutions, the SCSB advances research by being inherently social.

SCSB’s mission is “to understand the neural mechanisms underlying social cognition and behavior and to translate this knowledge into better diagnosis and treatment of autism spectrum disorders.” When Director Mriganka Sur founded the center in 2012 in partnership with the Simons Foundation Autism Research Initiative (SFARI) of Jim and Marilyn Simons, he envisioned a different way to achieve urgently needed research progress than the traditional approach of funding isolated projects in individual labs. Sur wanted SCSB’s contribution to go beyond papers, though it has generated about 350 and counting. He sought the creation of a sustained, engaged autism research community at MIT and beyond.

“When you have a really big problem that spans so many issues — a clinical presentation, a gene, and everything in between — you have to grapple with multiple scales of inquiry,” says Sur, the Newton Professor of Neuroscience in MIT’s Department of Brain and Cognitive Sciences (BCS) and The Picower Institute for Learning and Memory. “This cannot be solved by one person or one lab. We need to span multiple labs and multiple ways of thinking. That was our vision.”

In parallel with a rich calendar of public colloquia, lunches, and special events, SCSB catalyzes multiperspective, multiscale research collaborations in two programmatic ways. Targeted projects fund multidisciplinary teams of scientists with complementary expertise to collectively tackle a pressing scientific question. Meanwhile, the center supports postdoctoral Simons Fellows with not one, but two mentors, ensuring a further cross-pollination of ideas and methods.

Complementary collaboration

In 11 years, SCSB has funded nine targeted projects. Each one, by design, involves a deep and multifaceted exploration of a major question with both fundamental importance and clinical relevance. The first project, back in 2013, for example, marshaled three labs spanning BCS, the Department of Biology, and The Whitehead Institute for Biomedical Research to advance understanding of how mutation of the Shank3 gene leads to the pathophysiology of Phelan-McDermid Syndrome by working across scales ranging from individual neural connections to whole neurons to circuits and behavior.

Other past projects have applied similarly integrated, multiscale approaches to topics ranging from how 16p11.2 gene deletion alters the development of brain circuits and cognition to the critical role of the thalamic reticular nucleus in information flow during sleep and wakefulness. Two others produced deep examinations of cognitive functions: how we go from hearing a string of words to understanding a sentence’s intended meaning, and the neural and behavioral correlates of deficits in making predictions about social and sensory stimuli. Yet another project laid the groundwork for developing a new animal model for autism research.

SFARI is especially excited by SCSB’s team science approach, says Kelsey Martin, executive vice president of autism and neuroscience at the Simons Foundation. “I’m delighted by the collaborative spirit of the SCSB,” Martin says. “It’s wonderful to see and learn about the multidisciplinary team-centered collaborations sponsored by the center.”

New projects

In the last year, SCSB has launched three new targeted projects. One team is investigating why many people with autism experience sensory overload and is testing potential interventions to help. The scientists hypothesize that patients experience a deficit in filtering out the mundane stimuli that neurotypical people predict are safe to ignore. Studies suggest the predictive filter relies on relatively low-frequency “alpha/beta” brain rhythms from deep layers of the cortex moderating the higher frequency “gamma” rhythms in superficial layers that process sensory information.

Together, the labs of Charles Nelson, professor of pediatrics at Boston Children’s Hospital (BCH), and BCS faculty members Bob Desimone, the Doris and Don Berkey Professor of Neuroscience at MIT and director of the McGovern Institute, and Earl K. Miller, the Picower Professor, are testing the hypothesis in two different animal models at MIT and in human volunteers at BCH. In the animals they’ll also try out a new real-time feedback system invented in Miller’s lab that can potentially correct the balance of these rhythms in the brain. And in an animal model engineered with a Shank3 mutation, Desimone’s lab will test a gene therapy, too.

“None of us could do all aspects of this project on our own,” says Miller, an investigator in the Picower Institute. “It could only come about because the three of us are working together, using different approaches.”

Right from the start, Desimone says, close collaboration with Nelson’s group at BCH has been essential. To ensure his and Miller’s measurements in the animals and Nelson’s measurements in the humans are as comparable as possible, they have tightly coordinated their research protocols.

“If we hadn’t had this joint grant we would have chosen a completely different, random set of parameters than Chuck, and the results therefore wouldn’t have been comparable. It would be hard to relate them,” says Desimone, who also directs MIT’s McGovern Institute for Brain Research. “This is a project that could not be accomplished by one lab operating in isolation.”

Another targeted project brings together a coalition of seven labs — six based in BCS (professors Evelina Fedorenko, Edward Gibson, Nancy Kanwisher, Roger Levy, Rebecca Saxe, and Joshua Tenenbaum) and one at Dartmouth College (Caroline Robertson) — for a synergistic study of the cognitive, neural, and computational underpinnings of conversational exchanges. The study will integrate the linguistic and non-linguistic aspects of conversational ability in neurotypical adults and children and those with autism.

Fedorenko says the project builds on advances and collaborations from the earlier language Targeted Project she led with Kanwisher.

“Many directions that we started to pursue continue to be active directions in our labs. But most importantly, it was really fun and allowed the PIs [principal investigators] to interact much more than we normally would and to explore exciting interdisciplinary questions,” Fedorenko says. “When Mriganka approached me a few years after the project’s completion asking about a possible new targeted project, I jumped at the opportunity.”

Gibson and Robertson are studying how people align their dialogue, not only in the content and form of their utterances, but using eye contact. Fedorenko and Kanwisher will employ fMRI to discover key components of a conversation network in the cortex. Saxe will examine the development of conversational ability in toddlers using novel MRI techniques. Levy and Tenenbaum will complement these efforts to improve computational models of language processing and conversation.

The newest Targeted Project posits that the immune system can be harnessed to help treat behavioral symptoms of autism. Four labs — three in BCS and one at Harvard Medical School (HMS) — will study mechanisms by which peripheral immune cells can deliver a potentially therapeutic cytokine to the brain. A study by two of the collaborators, MIT associate professor Gloria Choi and HMS associate professor Jun Huh, showed that when IL-17a reaches excitatory neurons in a region of the mouse cortex, it can calm hyperactivity in circuits associated with social and repetitive behavior symptoms. Huh, an immunologist, will examine how IL-17a can get from the periphery to the brain, while Choi will examine how it has its neurological effects. Sur and MIT associate professor Myriam Heiman will conduct studies of cell types that bridge neural circuits with brain circulatory systems.

“It is quite amazing that we have a core of scientists working on very different things coming together to tackle this one common goal,” Choi says. “I really value that.”

Multiple mentors

While SCSB Targeted Projects unify labs around research, the center’s Simons Fellowships unify labs around young researchers, providing not only funding, but a pair of mentors and free-flowing interactions between their labs. Fellows also gain opportunities to inform and inspire their fundamental research by visiting with patients with autism, Sur says.

“The SCSB postdoctoral program serves a critical role in ensuring that a diversity of outstanding scientists are exposed to autism research during their training, providing a pipeline of new talent and creativity for the field,” adds Martin, of the Simons Foundation.

Simons Fellows praise the extra opportunities afforded by additional mentoring. Postdoc Alex Major was a Simons Fellow in Miller’s lab and that of Nancy Kopell, a mathematics professor at Boston University renowned for her modeling of the brain wave phenomena that the Miller lab studies experimentally.

“The dual mentorship structure is a very useful aspect of the fellowship,” Major says. “It is both a chance to network with another PI and provides experience in a different neuroscience sub-field.”

Miller says co-mentoring expands the horizons and capabilities of not only the mentees but also the mentors and their labs. “Collaboration is 21st-century neuroscience,” Miller says. “Some of our studies of the brain have gotten too big and comprehensive to be encapsulated in just one laboratory. Some of these big questions require multiple approaches and multiple techniques.”

Desimone, who recently co-mentored Seng Bum (Michael) Yoo along with BCS and McGovern colleague Mehrdad Jazayeri in a project studying how animals learn from observing others, agrees.

“We hear from postdocs all the time that they wish they had two mentors, just in general to get another point of view,” Desimone says. “This is a really good thing and it’s a way for faculty members to learn about what other faculty members and their postdocs are doing.”

Indeed, the Simons Center model suggests that research can be very successful when it’s collaborative and social.

Do we only use 10 percent of our brain?

Movies like “Limitless” and “Lucy” play on the notion that humans use only 10 percent of their brains—and those who unlock a higher percentage wield powers like infinite memory or telekinesis. It’s enticing to think that so much of the brain remains untapped and is ripe for boosting human potential.

But the idea that we use 10 percent of our brain is 100 percent a myth.

In fact, scientists believe that we use our entire brain every day. Mila Halgren is a graduate student in the lab of Mark Harnett, an associate professor of brain and cognitive sciences and an investigator at the McGovern Institute. The Harnett lab studies the computational power of neurons, that is, how neural networks rapidly process massive amounts of information.

“All of our brain is constantly in use and consumes a tremendous amount of energy,” Halgren says. “Despite making up only two percent of our body weight, it devours 20 percent of our calories.” This doesn’t appear to change significantly with different tasks, from typing on a computer to doing yoga. “Even while we sleep, our entire brain remains intensely active.”

When did this myth take root?

Mila Halgren is a PhD student in MIT’s Department of Brain and Cognitive Sciences. Photo: Mila Halgren

The myth is thought to have gained traction when scientists first began exploring the brain’s abilities but lacked the tools to capture its exact workings. In 1907, William James, a founder of American psychology, suggested in his book “The Energies of Men” that “we are making use of only a small part of our possible mental and physical resources.” This influential work likely sparked the idea that humans access a mere fraction of the brain—setting this common misconception ablaze.

Brainpower lore even suggests that Albert Einstein credited his genius to being able to access more than 10 percent of his brain. However, no such quote has been documented, and this too is perhaps a myth of cosmic proportions.

Halgren believes that there may be some fact backing this fiction. “People may think our brain is underutilized in the sense that some neurons fire very infrequently—once every few minutes or less. But this isn’t true of most neurons, some of which fire hundreds of times per second,” she says.

In the nascent years of neuroscience, scientists also argued that a large portion of the brain must be inactive because some people experience brain injuries and can still function at a high level, like the famous case of Phineas Gage. Halgren points to the brain’s remarkable plasticity—the reshaping of neural connections. “Entire brain hemispheres can be removed during early childhood and the rest of the brain will rewire and compensate for the loss. In other words, the brain will use 100 percent of what it has, but can make do with less depending on which structures are damaged.”

Is there a limit to the brain?

If we indeed use our entire brain, can humans eventually solve any problem? Or are there enigmas in the world that we will never unravel?

“This is still in contention,” Halgren says. “There may be certain problems that the human brain is fundamentally unable to solve, like how a mouse will never understand chemistry and a chimpanzee can’t do calculus.”

Can we increase our brainpower?

The brain may have its limits, but there are ways to boost our cognitive prowess to ace that midterm or crank up productivity in the workplace. According to Halgren, “You can increase your brainpower, but there’s no ‘trick’ that will allow you to do so. Like any organ in your body, the brain works best with proper sleep, exercise, low stress, and a well-balanced diet.”

The truth is, we may never rearrange furniture with our minds or foresee which team will win the Super Bowl. The idea of a largely latent brain is draped in fantasy, but debunking this myth speaks to the immense growth of neuroscience over the years—and the allure of other misconceptions that scientists have yet to demystify.

The brain runs an internal simulation to keep track of time

Clocks, computers, and metronomes can keep time with exquisite precision. But even in the absence of an external timekeeper, we can track time on our own. We know when minutes or hours have elapsed, and we can maintain a rhythm when we dance, sing, or play music. Now, neuroscientists at the National Autonomous University of Mexico and MIT’s McGovern Institute have discovered one way the brain keeps a beat: It runs an internal simulation, mentally recreating the perception of an external rhythm and preparing an appropriately timed response.

The discovery, reported January 10, 2024, in the journal Science Advances, illustrates how animals can think about imaginary events and use an internal model to guide their interactions with the world. “It’s a real indication of mental states as an independent driver of behavior,” says neuroscientist Mehrdad Jazayeri, an investigator at the McGovern Institute and an associate professor of brain and cognitive sciences at MIT.

Predicting the future

Jazayeri teamed up with Victor de Lafuente, a neuroscientist at the National Autonomous University of Mexico, to investigate the brain’s time-keeping ability. De Lafuente, who led the study, says they were motivated by curiosity about how the brain makes predictions and prepares for future states of the world.

De Lafuente and his team used a visual metronome to teach monkeys a simple rhythm, showing them a circle that moved between two positions on a screen to set a steady tempo. Then the metronome stopped. After a variable and unpredictable pause, the monkeys were asked to indicate where the dot would be if the metronome had carried on.

Monkeys do well at this task, successfully keeping time after the metronome stops. After the waiting period, they are usually able to identify the expected position of the circle, which they communicate by reaching towards a touchscreen.
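
As a toy model of what the animals are being asked to compute (the half-second interval and the starting side below are assumptions for illustration, not the study's parameters), the expected position after the pause falls out of counting elapsed half-periods:

```python
def expected_position(pause_s: float, hop_interval_s: float = 0.5,
                      last_seen: str = "left") -> str:
    """Toy model of the metronome task: the circle alternates between two
    positions every `hop_interval_s` seconds. After the metronome vanishes
    for `pause_s` seconds, the expected position is found by counting how
    many hops would have occurred in the meantime."""
    hops = int(pause_s // hop_interval_s)
    if hops % 2 == 0:
        return last_seen
    return "right" if last_seen == "left" else "left"

# After a 1.3 s pause with a 0.5 s hop interval, two full hops have
# elapsed, so the circle should be back where it was last seen.
print(expected_position(1.3))  # -> 'left'
```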

To find out how the animals were keeping track of the metronome’s rhythm, de Lafuente’s group monitored their brain activity. In several key brain regions, they found rhythmic patterns of activity that oscillated at the same frequency as the metronome. This occurred while the monkeys watched the metronome. More remarkably, it continued after the metronome had stopped.

“The animal is seeing things going and then things stop. What we find in the brain is the continuation of that process in the animal’s mind,” Jazayeri says. “An entire network is replicating what it was doing.”

That was true in the visual cortex, where clusters of neurons respond to stimuli in specific spots within the eyes’ field of view. One set of cells in the visual cortex fired when the metronome’s circle was on the left of the screen; another set fired when the dot was on the right. As a monkey followed the visual metronome, the researchers could see these cells’ activity alternating rhythmically, tracking the movement. When the metronome stopped, the back-and-forth neural activity continued, maintaining the rhythm. “Once the stimulus was no longer visible, they were seeing the stimulus within their minds,” de Lafuente says.

They found something similar in the brain’s motor cortex, where movements are prepared and executed. De Lafuente explains that the monkeys are motionless for most of their time-keeping task; only when they are asked to indicate where the metronome’s circle should be do they move a hand to touch the screen. But the motor cortex was engaged even before it was time to move. “Within their brains there is a signal that is switching from the left to the right,” he says. “So the monkeys are thinking ‘left, right, left, right’—even when they are not moving and the world is constant.”

While some scientists have proposed that the brain may have a central time-keeping mechanism, the team’s findings indicate that entire networks can be called on to track the passage of time. The monkeys’ model of the future was surprisingly explicit, de Lafuente says, representing specific sensory stimuli and plans for movement. “This offers a potential solution to mentally tracking the dynamics in the world, which is to basically think about them in terms of how they actually would have happened,” Jazayeri says.

Margaret Livingstone awarded the 2024 Scolnick Prize in Neuroscience

Today the McGovern Institute at MIT announces that the 2024 Edward M. Scolnick Prize in Neuroscience will be awarded to Margaret Livingstone, Takeda Professor of Neurobiology at Harvard Medical School. The Scolnick Prize is awarded annually by the McGovern Institute for outstanding achievements in neuroscience.

“Margaret Livingstone’s driven curiosity and original experimental approaches have led to fundamental advances in our understanding of visual perception,” says Robert Desimone, director of the McGovern Institute and chair of the selection committee. “In particular, she has made major advances in resolving a long-standing debate over whether the brain domains and neurons that are specifically tuned to detect facial features are present from birth or arise from experience. Her developmental research shows that the cerebral cortex already contains topographic sensory maps at birth, but that domain-specific maps, for example those for recognizing facial features, require experience and sensory input to develop normally.”

Livingstone received a BS from MIT in 1972 and, under the mentorship of Edward Kravitz, a PhD in neurobiology from Harvard University in 1981. Her doctoral research in lobsters showed that the biogenic amines serotonin and octopamine control context-dependent behaviors such as offensive versus defensive postures. She followed up on this discovery as a postdoctoral fellow by researching biogenic amine signaling in learning and memory with Professor William Quinn at Princeton University. Using learning and memory mutants created in the fruit fly model, she identified defects in dopamine-synthesizing enzymes and calcium-dependent enzymes that produce cAMP. Her results supported the then-burgeoning idea that biogenic amines, signaling through second messengers, enable behavioral plasticity.

To test whether biogenic amines also control neuronal function in mammals, Livingstone moved back to Harvard Medical School in 1983 to study the effects of sleep on visual processing with David Hubel, who was studying neuronal activity in the nonhuman primate visual cortex. Over the course of a 20-year collaboration, Livingstone and Hubel showed that the visual system is functionally and anatomically divided into parallel pathways that detect and process the distinct visual features of color, motion, and orientation.

Livingstone quickly rose through the academic ranks at Harvard, appointed as an instructor and then assistant professor in 1983, associate professor in 1986, and full professor in 1988. With her own laboratory, Livingstone began to explore the organization of face-perception domains in the inferotemporal cortex of nonhuman primates. By combining single-cell recording and fMRI brain imaging data from the same animal, her graduate student at the time, Doris Tsao, in collaboration with Winrich Freiwald, showed that an abundance of individual neurons within the face-recognition domain are tuned to combinations of facial features. These results helped to answer the long-standing question of how individual neurons can show such exquisite selectivity for specific faces.

Mona Lisa’s smile has been described as mysterious and fleeting because it seems to disappear when viewers look directly at it. Livingstone showed that Mona Lisa’s smile is more apparent in our peripheral vision than in our central (or foveal) vision because our peripheral vision is more sensitive to low spatial frequencies, or shadows and shadings of black and white. These shadows make her lips seem to turn upward into a subtle smile. The three images above show the painting filtered from very low spatial frequency features (left, with the smile more apparent) to high spatial frequency features (right, with the smile less visible). Image: Margaret Livingstone

In researching face patches, Livingstone became fascinated with the question of whether face-perception domains are present from birth, as many scientists thought at the time. Livingstone and her postdoc Michael Arcaro carried out experiments showing that the development of face patches requires visual exposure to faces in the early postnatal period. Moreover, they showed that entirely unnatural symbol-specific domains can form in animals that experienced intensive visual exposure to symbols early in development. Thus, experience is both necessary and sufficient for the formation of feature-specific domains in the inferotemporal cortex. Livingstone’s results support a consistent principle for the development of higher-level cortex, from a hard-wired sensory topographic map present at birth to the formation of experience-dependent domains that detect combined, stimulus-specific features.

Livingstone is also known for her scientifically based exploration of the visual arts. Her book “Vision and Art: The Biology of Seeing,” which has sold more than 40,000 copies to date, explores how both the techniques artists use and our anatomy and physiology influence our perception of art. Livingstone has presented this work to audiences around the country, from Pixar Studios, Microsoft, and IBM to the Metropolitan Museum of Art, the National Gallery, and the Hirshhorn Museum.

In 2014, Livingstone was awarded the Takeda Professorship of Neurobiology at Harvard Medical School. She received the Mika Salpeter Lifetime Achievement Award from the Society for Neuroscience in 2011, the Grossman Award from the Society of Neurological Surgeons in 2013, and the Roberts Prize for Best Paper in Physics in Medicine and Biology in 2013 and 2016. Livingstone was elected a fellow of the American Academy of Arts and Sciences in 2018 and of the National Academy of Sciences in 2020. She will be awarded the Scolnick Prize in the spring of 2024.

How the brain responds to reward is linked to socioeconomic background

MIT neuroscientists have found that the brain’s sensitivity to rewarding experiences — a critical factor in motivation and attention — can be shaped by socioeconomic conditions.

In a study of 12- to 14-year-olds whose socioeconomic status (SES) varied widely, the researchers found that children from lower SES backgrounds showed less sensitivity to reward than those from more affluent backgrounds.

Using functional magnetic resonance imaging (fMRI), the research team measured brain activity as the children played a guessing game in which they earned extra money for each correct guess. When participants from higher SES backgrounds guessed correctly, a part of the brain called the striatum, which is linked to reward, lit up much more than in children from lower SES backgrounds.

The brain imaging results also coincided with behavioral differences in how participants from lower and higher SES backgrounds responded to correct guesses. The findings suggest that lower SES circumstances may prompt the brain to adapt to the environment by dampening its response to rewards, which are often scarcer in low SES environments.

“If you’re in a highly resourced environment, with many rewards available, your brain gets tuned in a certain way. If you’re in an environment in which rewards are more scarce, then your brain accommodates the environment in which you live. Instead of being overresponsive to rewards, it seems like these brains, on average, are less responsive, because probably their environment has been less consistent in the availability of rewards,” says John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology, a professor of brain and cognitive sciences, and a member of MIT’s McGovern Institute for Brain Research.

Gabrieli and Rachel Romeo, a former MIT postdoc who is now an assistant professor in the Department of Human Development and Quantitative Methodology at the University of Maryland, are the senior authors of the study. MIT postdoc Alexandra Decker is the lead author of the paper, which appears today in the Journal of Neuroscience.

Reward response

Previous research has shown that children from lower SES backgrounds tend to perform worse on tests of attention and memory, and they are more likely to experience depression and anxiety. However, until now, few studies have looked at the possible association between SES and reward sensitivity.

In the new study, the researchers focused on a part of the brain called the striatum, which plays a significant role in reward response and decision-making. Studies in people and animal models have shown that this region becomes highly active during rewarding experiences.

To investigate potential links between reward sensitivity, the striatum, and socioeconomic status, the researchers recruited more than 100 adolescents from a range of SES backgrounds, as measured by household income and how much education their parents received.

Each of the participants underwent fMRI scanning while they played a guessing game. The participants were shown a series of numbers between 1 and 9, and before each trial, they were asked to guess whether the next number would be greater than or less than 5. They were told that for each correct guess, they would earn an extra dollar, and for each incorrect guess, they would lose 50 cents.

Unbeknownst to the participants, the game was set up to control whether each guess would be correct or incorrect. This allowed the researchers to ensure that every participant had a similar experience, including periods of abundant rewards and periods of scarce rewards. In the end, everyone won the same amount of money (in addition to a stipend that each participant received for taking part in the study).
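
A minimal sketch of such a rigged design (the block structure below is an illustrative assumption; the $1 gain and 50-cent loss come from the task description above): outcomes are fixed per trial, so every participant experiences the same rich and lean stretches and the same final payout, whatever they guess.

```python
import random

# Predetermined outcomes: a reward-rich block, then a reward-scarce block.
# Every participant gets the same schedule, so everyone wins the same total.
SCHEDULE = [True] * 8 + [False] * 2 + [False] * 8 + [True] * 2

def run_session(schedule=SCHEDULE) -> float:
    """Simulate one session of the rigged guessing game."""
    earnings = 0.0
    for outcome_correct in schedule:
        _guess = random.choice(["higher", "lower"])  # the guess never matters
        earnings += 1.00 if outcome_correct else -0.50
    return earnings

print(run_session())  # 10 wins, 10 losses -> 10*1.00 - 10*0.50 = $5.00
```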

Previous work has shown that the brain appears to track the rate of rewards available. When rewards are abundant, people or animals tend to respond more quickly because they don’t want to miss out on the many available rewards. The researchers saw that in this study as well: When participants were in a period when most of their responses were correct, they tended to respond more quickly.

“If your brain is telling you there’s a really high chance that you’re going to receive a reward in this environment, it’s going to motivate you to collect rewards, because if you don’t act, you’re missing out on a lot of rewards,” Decker says.

Brain scans showed that the degree of activation in the striatum appeared to track fluctuations in the rate of rewards across time, which the researchers think could act as a motivational signal that there are many rewards to collect. The striatum lit up more during periods in which rewards were abundant and less during periods in which rewards were scarce. However, this effect was less pronounced in the children from lower SES backgrounds, suggesting their brains were less attuned to fluctuations in the rate of reward over time.
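
One simple way to formalize a signal that tracks fluctuations in the rate of rewards across time (an illustrative leaky-integrator model, not the analysis used in the paper) is an exponential moving average over recent outcomes:

```python
def reward_rate_trace(outcomes, alpha=0.25):
    """Leaky integrator over recent outcomes (1 = rewarded, 0 = not):
    an illustrative stand-in for a motivational signal that rises when
    rewards are abundant and decays when they become scarce. `alpha`
    controls how quickly older outcomes are forgotten."""
    rate, trace = 0.0, []
    for rewarded in outcomes:
        rate += alpha * (rewarded - rate)  # nudge toward 1 after a win, toward 0 otherwise
        trace.append(round(rate, 3))
    return trace

# The estimate climbs during an abundant stretch, then decays in a scarce one.
print(reward_rate_trace([1, 1, 1, 1, 0, 0, 0, 0]))
```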

The researchers also found that during periods of scarce rewards, participants tended to take longer to respond after a correct guess, another phenomenon that has been shown before. It’s unknown exactly why this happens, but two possible explanations are that people are savoring their reward or that they are pausing to update the reward rate. However, once again, this effect was less pronounced in the children from lower SES backgrounds — that is, they did not pause as long after a correct guess during the scarce-reward periods.

“There was a reduced response to reward, which is really striking. It may be that if you’re from a lower SES environment, you’re not as hopeful that the next response will gain similar benefits, because you may have a less reliable environment for earning rewards,” Gabrieli says. “It just points out the power of the environment. In these adolescents, it’s shaping their psychological and brain response to reward opportunity.”

Environmental effects

The fMRI scans performed during the study also revealed that children from lower SES backgrounds showed less activation in the striatum when they guessed correctly, suggesting that their brains have a dampened response to reward.

The researchers hypothesize that these differences in reward sensitivity may have developed over time, in response to the children’s environments.

“Socioeconomic status is associated with the degree to which you experience rewards over the course of your lifetime,” Decker says. “So, it’s possible that receiving a lot of rewards perhaps reinforces behaviors that make you receive more rewards, and somehow this tunes the brain to be more responsive to rewards. Whereas if you are in an environment where you receive fewer rewards, your brain might become, over time, less attuned to them.”

The study also points out the value of recruiting study subjects from a range of SES backgrounds, which takes more effort but yields important results, the researchers say.

“Historically, many studies have involved the easiest people to recruit, who tend to be people who come from advantaged environments. If we don’t make efforts to recruit diverse pools of participants, we almost always end up with children and adults who come from high-income, high-education environments,” Gabrieli says. “Until recently, we did not realize that principles of brain development vary in relation to the environment in which one grows up, and there was very little evidence about the influence of SES.”

The research was funded by the William and Flora Hewlett Foundation and a Natural Sciences and Engineering Research Council of Canada Postdoctoral Fellowship.

Study reveals a universal pattern of brain wave frequencies

Throughout the brain’s cortex, neurons are arranged in six distinctive layers, which can be readily seen with a microscope. A team of MIT and Vanderbilt University neuroscientists has now found that these layers also show distinct patterns of electrical activity, which are consistent over many brain regions and across several animal species, including humans.

The researchers found that in the topmost layers, neuron activity is dominated by rapid oscillations known as gamma waves. In the deeper layers, slower oscillations called alpha and beta waves predominate. The universality of these patterns suggests that these oscillations are likely playing an important role across the brain, the researchers say.

“When you see something that consistent and ubiquitous across cortex, it’s playing a very fundamental role in what the cortex does,” says Earl Miller, the Picower Professor of Neuroscience, a member of MIT’s Picower Institute for Learning and Memory, and one of the senior authors of the new study.

Imbalances in how these oscillations interact with each other may be involved in brain disorders such as attention deficit hyperactivity disorder, the researchers say.

“Overly synchronous neural activity is known to play a role in epilepsy, and now we suspect that different pathologies of synchrony may contribute to many brain disorders, including disorders of perception, attention, memory, and motor control. In an orchestra, one instrument played out of synchrony with the rest can disrupt the coherence of the entire piece of music,” says Robert Desimone, director of MIT’s McGovern Institute for Brain Research and one of the senior authors of the study.

André Bastos, an assistant professor of psychology at Vanderbilt University, is also a senior author of the open-access paper, which appears today in Nature Neuroscience. The lead authors of the paper are MIT research scientist Diego Mendoza-Halliday and MIT postdoc Alex Major.

Layers of activity

The human brain contains billions of neurons, each of which has its own electrical firing patterns. Together, groups of neurons with similar patterns generate oscillations of electrical activity, or brain waves, which can have different frequencies. Miller’s lab has previously shown that high-frequency gamma rhythms are associated with encoding and retrieving sensory information, while low-frequency beta rhythms act as a control mechanism that determines which information is read out from working memory.

His lab has also found that in certain parts of the prefrontal cortex, different brain layers show distinctive patterns of oscillation: faster oscillation at the surface and slower oscillation in the deep layers. One study, led by Bastos when he was a postdoc in Miller’s lab, showed that as animals performed working memory tasks, lower-frequency rhythms generated in deeper layers regulated the higher-frequency gamma rhythms generated in the superficial layers.

In addition to working memory, the brain’s cortex is also the seat of thought, planning, and high-level processing of emotion and sensory information. Throughout the regions involved in these functions, neurons are arranged in six layers, and each layer has its own distinctive combination of cell types and connections with other brain areas.

“The cortex is organized anatomically into six layers, no matter whether you look at mice or humans or any mammalian species, and this pattern is present in all cortical areas within each species,” Mendoza-Halliday says. “Unfortunately, a lot of studies of brain activity have been ignoring those layers because when you record the activity of neurons, it’s been difficult to understand where they are in the context of those layers.”

In the new paper, the researchers wanted to explore whether the layered oscillation pattern they had seen in the prefrontal cortex is more widespread, occurring across different parts of the cortex and across species.

Using a combination of data acquired in Miller’s lab, Desimone’s lab, and the labs of collaborators at Vanderbilt, the Netherlands Institute for Neuroscience, and the University of Western Ontario, the researchers were able to analyze 14 different areas of the cortex from four mammalian species. This data included recordings of electrical activity from three human patients who had electrodes inserted in the brain as part of a surgical procedure they were undergoing.

Recording from individual cortical layers has been difficult in the past: each layer is less than a millimeter thick, so it is hard to know which layer an electrode is recording from. For this study, electrical activity was recorded using special electrodes that record from all of the layers at once; the researchers then fed the data into a new computational algorithm they designed, termed FLIP (frequency-based layer identification procedure), which can determine which layer each signal came from.

“More recent technology allows recording of all layers of cortex simultaneously. This paints a broader perspective of microcircuitry and allowed us to observe this layered pattern,” Major says. “This work is exciting because it is both informative of a fundamental microcircuit pattern and provides a robust new technique for studying the brain. It doesn’t matter if the brain is performing a task or at rest and can be observed in as little as five to 10 seconds.”
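
The core idea can be sketched in a few lines (a simplified illustration with assumed frequency bands and a plain FFT spectrum; the published FLIP procedure is more careful than this): estimate band power per channel along the probe and find where dominance flips from gamma to alpha/beta.

```python
import numpy as np

def flip_like_boundary(lfp: np.ndarray, fs: float,
                       gamma=(50.0, 150.0), alphabeta=(10.0, 30.0)) -> int:
    """Toy frequency-based layer identification. `lfp` is a
    (channels x samples) array from a laminar probe, ordered from the
    cortical surface downward. For each channel, compare gamma power to
    alpha/beta power; return the first channel where alpha/beta dominates,
    taken here as the superficial/deep boundary."""
    freqs = np.fft.rfftfreq(lfp.shape[1], d=1.0 / fs)
    power = np.abs(np.fft.rfft(lfp, axis=1)) ** 2

    def band_power(lo, hi):
        return power[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)

    ratio = np.log(band_power(*gamma) / band_power(*alphabeta))
    deep = np.flatnonzero(ratio < 0)  # alpha/beta-dominated channels
    return int(deep[0]) if deep.size else lfp.shape[0]

# Example call on synthetic noise (the boundary is arbitrary for noise;
# real laminar recordings show the gamma-to-alpha/beta flip).
rng = np.random.default_rng(0)
print(flip_like_boundary(rng.standard_normal((16, 2000)), fs=1000.0))
```

A real analysis would use a proper spectral estimator and statistical checks, but the signature this sketch looks for (gamma dominating superficial channels, alpha/beta dominating deep ones) is the one the study reports.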

Across all species, in each region studied, the researchers found the same layered activity pattern.

“We did a mass analysis of all the data to see if we could find the same pattern in all areas of the cortex, and voilà, it was everywhere. That was a real indication that what had previously been seen in a couple of areas was representing a fundamental mechanism across the cortex,” Mendoza-Halliday says.

Maintaining balance

The findings support a model that Miller’s lab has previously put forth, which proposes that the brain’s spatial organization helps it incorporate new information, which is carried by high-frequency oscillations, into existing memories and brain processes, which are maintained by low-frequency oscillations. As information passes from layer to layer, input can be incorporated as needed to help the brain perform particular tasks such as baking from a new cookie recipe or remembering a phone number.

“The consequence of a laminar separation of these frequencies, as we observed, may be to allow superficial layers to represent external sensory information with faster frequencies, and for deep layers to represent internal cognitive states with slower frequencies,” Bastos says. “The high-level implication is that the cortex has multiple mechanisms involving both anatomy and oscillations to separate ‘external’ from ‘internal’ information.”

Under this theory, imbalances between high- and low-frequency oscillations can lead to either attention deficits such as ADHD, when the higher frequencies dominate and too much sensory information gets in, or delusional disorders such as schizophrenia, when the low frequency oscillations are too strong and not enough sensory information gets in.

“The proper balance between the top-down control signals and the bottom-up sensory signals is important for everything the cortex does,” Miller says. “When the balance goes awry, you get a wide variety of neuropsychiatric disorders.”

The researchers are now exploring whether measuring these oscillations could help to diagnose these types of disorders. They are also investigating whether rebalancing the oscillations could alter behavior — an approach that could one day be used to treat attention deficits or other neurological disorders, the researchers say.

The researchers also hope to work with other labs to characterize the layered oscillation patterns in more detail across different brain regions.

“Our hope is that with enough of that standardized reporting, we will start to see common patterns of activity across different areas or functions that might reveal a common mechanism for computation that can be used for motor outputs, for vision, for memory and attention, et cetera,” Mendoza-Halliday says.

The research was funded by the U.S. Office of Naval Research, the U.S. National Institutes of Health, the U.S. National Eye Institute, the U.S. National Institute of Mental Health, the Picower Institute, a Simons Center for the Social Brain Postdoctoral Fellowship, and a Canadian Institutes of Health Postdoctoral Fellowship.

Calling neurons to attention

The world assaults our senses, exposing us to more noise and color and scents and sensations than we can fully comprehend. Our brains keep us tuned in to what’s important, letting less relevant sights and sounds fade into the background while we focus on the most salient features of our surroundings. Now, scientists at MIT’s McGovern Institute have a better understanding of how the brain manages this critical task of directing our attention.

In the January 15, 2023, issue of the journal Neuron, a team led by Diego Mendoza-Halliday, a research scientist in McGovern Institute Director Robert Desimone’s lab, reports on a group of neurons in the brain’s prefrontal cortex that are critical for directing an animal’s visual attention. Their findings not only demonstrate this brain region’s important role in guiding attention, but also help establish attention as a function that is distinct from other cognitive functions, such as short-term memory, in the brain.

Attention and working memory

Mendoza-Halliday, who is now an assistant professor at the University of Pittsburgh, explains that attention has a close relationship to working memory, which the brain uses to temporarily store information after our senses take it in. The two brain functions strongly influence one another: We’re more likely to remember something if we pay attention to it, and paying attention to certain features of our environment may involve representing those features in our working memory. For example, he explains, both attention and working memory are called on when searching for a triangular red keychain on a cluttered desk: “What my brain does is it remembers that my keyholder is red and it’s a triangle, and then builds a working memory representation and uses it as a search template. So now everything that is red and everything that is a triangle receives preferential processing, or is attended to.”

Working memory and attention are so closely associated that some neuroscientists have proposed that the brain calls on the same neural mechanisms to create them. “This has led to the belief that maybe attention and working memory are just two sides of the same coin—that they’re basically the same function in different modes,” Mendoza-Halliday says. His team’s findings, however, say otherwise.

Circuit manipulation

To study the origins of attention in the brain, Mendoza-Halliday and colleagues trained monkeys to focus their attention on a visual feature that matches a cue they have seen before. After seeing a set of dots move across the screen, they must call on their working memory to remember the direction of that movement for a few seconds while the screen goes blank. Then the experimenters present the animals with more moving dots, this time traveling in multiple directions. By focusing on the dots moving in the same direction as the first set they saw, the monkeys are able to recognize when those dots briefly accelerate. Reporting on the speed change earns the animals a reward.

While the monkeys performed this task, the researchers monitored cells in several brain regions, including the prefrontal cortex, which Desimone’s team has proposed plays a role in directing attention. The activity patterns they recorded suggested that distinct groups of cells participated in the attention and working memory aspects of the task.

To better understand those cells’ roles, the researchers manipulated their activity. They used optogenetics, an approach in which a light-sensitive protein is introduced into neurons so that they can be switched on or off with a pulse of light. Desimone’s lab, in collaboration with Edward Boyden, the Y. Eva Tan Professor in Neurotechnology at MIT and a member of the McGovern Institute, pioneered the use of optogenetics in primates. “Optogenetics allows us to distinguish between correlation and causality in neural circuits,” says Desimone, the Doris and Don Berkey Professor of Neuroscience and a professor of brain and cognitive sciences at MIT. “If we turn off a circuit using optogenetics, and the animal can no longer perform the task, that is good evidence for a causal role of the circuit.”

Using this optogenetic method, they switched off neurons in a specific portion of the brain’s lateral prefrontal cortex for a few hundred milliseconds at a time as the monkeys performed their dot-tracking task. The researchers found that they could switch off signaling from the lateral prefrontal cortex early, when the monkeys needed their working memory but had no dots to attend to, without interfering with the animals’ ability to complete the task. But when they blocked signaling when the monkeys needed to focus their attention, the animals performed poorly.
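
The epoch-specific logic of that result can be restated compactly (a toy summary of the finding above; the epoch labels are informal, not the study's terminology):

```python
# Toy restatement of the optogenetic result described above: silencing the
# lateral prefrontal cortex (LPFC) impaired performance only when the
# monkeys needed to attend, not during the working-memory delay.
def predicted_outcome(silenced_epoch: str) -> str:
    return "task impaired" if silenced_epoch == "attention" else "task intact"

for epoch in ("working_memory_delay", "attention"):
    print(f"silence LPFC during {epoch}: {predicted_outcome(epoch)}")
```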

The team also monitored activity in the brain’s visual cortex during the moving-dot task. When the lateral prefrontal cortex was shut off, neurons in connected visual areas showed less heightened reactivity to movement in the direction the monkey was attending to. Mendoza-Halliday says this suggests that cells in the lateral prefrontal cortex are important for telling sensory-processing circuits what visual features to pay attention to.

The discovery that at least part of the brain’s lateral prefrontal cortex is critical for attention but not for working memory offers a new view of the relationship between the two. “It is a physiological demonstration that working memory and attention cannot be the same function, since they rely on partially separate neuronal populations and neural mechanisms,” Mendoza-Halliday says.