Women in STEM — A celebration of excellence and curiosity

What better way to commemorate Women’s History Month and International Women’s Day than to give three of the world’s most accomplished scientists an opportunity to talk about their careers? On March 7, MindHandHeart invited professors Paula Hammond, Ann Graybiel, and Sangeeta Bhatia to share their career journeys, from the progress they have witnessed to the challenges they have faced as women in STEM. Their conversation was moderated by Mary Fuller, chair of the faculty and professor of literature.

Hammond, an Institute professor with appointments in the Department of Chemical Engineering and the Koch Institute for Integrative Cancer Research, reflected on the strides made by women faculty at MIT, while acknowledging ongoing challenges. “I think that we have advanced a great deal in the last few decades in terms of the numbers of women who are present, although we still have a long way to go,” Hammond noted in her opening. “We’ve seen a remarkable increase over the past couple of decades in our undergraduate population here at MIT, and now we’re beginning to see it in the graduate population, which is really exciting.” Hammond was recently appointed to the role of vice provost for faculty.

Ann Graybiel, also an Institute professor, who has appointments in the Department of Brain and Cognitive Sciences and the McGovern Institute for Brain Research, described growing up in the Deep South. “Girls can’t do science,” she remembers being told in school, and they “can’t do research.” Yet her father, a physician scientist, often took her with him to work and had her assist from a young age, eventually encouraging her directly to pursue a career in science. Graybiel, who first came to MIT in 1973, noted that she continued to face barriers and rejection throughout her career long after leaving the South, but that individual gestures of inspiration, generosity, or simple statements of “You can do it” from her peers helped her power through and continue in her scientific pursuits.

Sangeeta Bhatia, the John and Dorothy Wilson Professor of Health Sciences and Technology and Electrical Engineering and Computer Science, director of the Marble Center for Cancer Nanomedicine at the Koch Institute for Integrative Cancer Research, and a member of the Institute for Medical Engineering and Science, is also the mother of two teenage girls. She shared her perspective on balancing career and family life: “I wanted to pick up my kids from school and I wanted to know their friends. … I had a vision for the life that I wanted.” Setting boundaries at work, she noted, empowered her to achieve both personal and professional goals. Bhatia also described her collaboration with President Emerita Susan Hockfield and MIT Amgen Professor of Biology Emerita Nancy Hopkins to spearhead the Future Founders Initiative, which aims to boost the representation of female faculty members pursuing biotechnology ventures.

A video of the full panel discussion is available on the MindHandHeart YouTube channel.

A new computational technique could make it easier to engineer useful proteins

To engineer proteins with useful functions, researchers usually begin with a natural protein that has a desirable function, such as emitting fluorescent light, and put it through many rounds of random mutation that eventually generate an optimized version of the protein.

This process has yielded optimized versions of many important proteins, including green fluorescent protein (GFP). However, for other proteins, it has proven difficult to generate an optimized version. MIT researchers have now developed a computational approach that makes it easier to predict mutations that will lead to better proteins, based on a relatively small amount of data.

Using this model, the researchers generated proteins with mutations that were predicted to lead to improved versions of GFP and a protein from adeno-associated virus (AAV), which is used to deliver DNA for gene therapy. They hope it could also be used to develop additional tools for neuroscience research and medical applications.

MIT Professor of Brain and Cognitive Sciences Ila Fiete in her lab at the McGovern Institute. Photo: Steph Stevens

“Protein design is a hard problem because the mapping from DNA sequence to protein structure and function is really complex. There might be a great protein 10 changes away in the sequence, but each intermediate change might correspond to a totally nonfunctional protein. It’s like trying to find your way to the river basin in a mountain range, when there are craggy peaks along the way that block your view. The current work tries to make the riverbed easier to find,” says Ila Fiete, a professor of brain and cognitive sciences at MIT, a member of MIT’s McGovern Institute for Brain Research, director of the K. Lisa Yang Integrative Computational Neuroscience Center, and one of the senior authors of the study.

Regina Barzilay, the School of Engineering Distinguished Professor for AI and Health at MIT, and Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering and Computer Science at MIT, are also senior authors of an open-access paper on the work, which will be presented at the International Conference on Learning Representations in May. MIT graduate students Andrew Kirjner and Jason Yim are the lead authors of the study. Other authors include Shahar Bracha, an MIT postdoc, and Raman Samusevich, a graduate student at Czech Technical University.

Optimizing proteins

Many naturally occurring proteins have functions that could make them useful for research or medical applications, but they need a little extra engineering to optimize them. In this study, the researchers were originally interested in developing proteins that could be used in living cells as voltage indicators. These proteins, produced by some bacteria and algae, emit fluorescent light when an electric potential is detected. If engineered for use in mammalian cells, such proteins could allow researchers to measure neuron activity without using electrodes.

While decades of research have gone into engineering these proteins to produce a stronger fluorescent signal, on a faster timescale, they haven’t become effective enough for widespread use. Bracha, who works in Edward Boyden’s lab at the McGovern Institute, reached out to Fiete’s lab to see if they could work together on a computational approach that might help speed up the process of optimizing the proteins.

“This work exemplifies the human serendipity that characterizes so much science discovery,” Fiete says.

“This work grew out of the Yang Tan Collective retreat, a scientific meeting of researchers from multiple centers at MIT with distinct missions unified by the shared support of K. Lisa Yang. We learned that some of our interests and tools in modeling how brains learn and optimize could be applied in the totally different domain of protein design, as being practiced in the Boyden lab.”

For any given protein that researchers might want to optimize, there is a nearly infinite number of possible sequences that could be generated by swapping in different amino acids at each point within the sequence. With so many possible variants, it is impossible to test all of them experimentally, so researchers have turned to computational modeling to try to predict which ones will work best.
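The scale of that search space is easy to make concrete. As a rough illustration (a back-of-the-envelope sketch, not a calculation from the paper), the number of possible variants grows exponentially with sequence length:

```python
# With 20 standard amino acids, a protein of length L has 20**L possible
# sequences: an exponential search space that rules out brute-force testing.
def num_variants(length, alphabet_size=20):
    """Count the distinct sequences of a given length."""
    return alphabet_size ** length

print(num_variants(10))           # about 1e13 variants for just 10 positions
print(num_variants(50) > 10**60)  # a 50-residue protein exceeds 10^60 variants
```

Even a modest fluorescent protein of a few hundred residues therefore has vastly more variants than could ever be synthesized and screened in a lab.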

In this study, the researchers set out to overcome those challenges, using data from GFP to develop and test a computational model that could predict better versions of the protein.

They began by training a type of model known as a convolutional neural network (CNN) on experimental data consisting of GFP sequences and their brightness — the feature that they wanted to optimize.

The model was able to create a “fitness landscape” — a three-dimensional map that depicts the fitness of a given protein and how much it differs from the original sequence — based on a relatively small amount of experimental data (from about 1,000 variants of GFP).
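Sequence models of this kind typically take one-hot encoded sequences as input. A minimal sketch of that encoding step, assuming the 20 standard amino acids (the study’s exact featurization may differ):

```python
# Minimal sketch (not the paper's exact pipeline): protein sequences are
# commonly one-hot encoded before being fed to a convolutional network
# that regresses a fitness score such as brightness.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def one_hot(sequence):
    """Encode a protein sequence as a length x 20 binary matrix."""
    matrix = []
    for aa in sequence:
        row = [0] * len(AMINO_ACIDS)
        row[AA_INDEX[aa]] = 1
        matrix.append(row)
    return matrix

encoded = one_hot("MSKGE")  # the first few residues of GFP
# Each row contains exactly one 1, marking the residue at that position.
```

Stacked over all ~1,000 measured GFP variants, matrices like this form the training set from which the CNN learns to map sequence to brightness.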

These landscapes contain peaks that represent fitter proteins and valleys that represent less fit proteins. Predicting the path that a protein needs to follow to reach the peaks of fitness can be difficult, because often a protein will need to undergo a mutation that makes it less fit before it reaches a nearby peak of higher fitness. To overcome this problem, the researchers used an existing computational technique to “smooth” the fitness landscape.

Once these small bumps in the landscape were smoothed, the researchers retrained the CNN model and found that it was able to reach greater fitness peaks more easily. The model was able to predict optimized GFP sequences that differed by as many as seven amino acids from the sequence they started with, and the best of these proteins were estimated to be about 2.5 times fitter than the original.

“Once we have this landscape that represents what the model thinks is nearby, we smooth it out and then we retrain the model on the smoother version of the landscape,” Kirjner says. “Now there is a smooth path from your starting point to the top, which the model is now able to reach by iteratively making small improvements. The same is often impossible for unsmoothed landscapes.”
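Kirjner’s description can be made concrete with a toy example. The sketch below (illustrative only, not the authors’ implementation) uses a two-letter alphabet in place of the 20 amino acids: greedy hill-climbing on a rugged landscape gets stuck at a local peak, but averaging each sequence’s score with those of its single-mutation neighbors smooths the landscape enough for the same climb to reach the global peak.

```python
from itertools import product

# Toy smooth-then-climb demo with a two-letter alphabet.
ALPHABET = "AB"
LENGTH = 4

def neighbors(seq):
    """All sequences exactly one mutation away."""
    for i, letter in product(range(LENGTH), ALPHABET):
        if seq[i] != letter:
            yield seq[:i] + letter + seq[i + 1:]

def fitness(seq):
    """Rugged toy landscape: fitness grows with the number of B's,
    but a shallow valley at two B's blocks greedy ascent."""
    k = sum(c == "B" for c in seq)
    return 0.9 if k == 2 else float(k)

def smooth(fit, seqs):
    """Replace each score with the average over the sequence and its neighbors."""
    n = 1 + (len(ALPHABET) - 1) * LENGTH  # self plus number of neighbors
    return {s: (fit(s) + sum(fit(m) for m in neighbors(s))) / n for s in seqs}

def hill_climb(start, scores):
    """Greedy ascent: take the best single mutation until none improves."""
    current = start
    while True:
        best = max(neighbors(current), key=scores.get)
        if scores[best] <= scores[current]:
            return current
        current = best

seqs = ["".join(p) for p in product(ALPHABET, repeat=LENGTH)]
raw = {s: fitness(s) for s in seqs}
smoothed = smooth(fitness, seqs)

print(hill_climb("BAAA", raw))       # stuck at "BAAA": the valley blocks it
print(hill_climb("BAAA", smoothed))  # reaches the global peak "BBBB"
```

Here the smoothing is a single neighbor-averaging pass; the study instead retrains the CNN on a smoothed version of the landscape, but the effect on greedy search is analogous.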

Proof-of-concept

The researchers also showed that this approach worked well in identifying new sequences for the viral capsid of adeno-associated virus (AAV), a viral vector that is commonly used to deliver DNA. In that case, they optimized the capsid for its ability to package a DNA payload.

“We used GFP and AAV as a proof-of-concept to show that this is a method that works on data sets that are very well-characterized, and because of that, it should be applicable to other protein engineering problems,” Bracha says.

The researchers now plan to use this computational technique on data that Bracha has been generating on voltage indicator proteins.

“Dozens of labs have been working on that for two decades, and still there isn’t anything better,” she says. “The hope is that now, with generation of a smaller data set, we could train a model in silico and make predictions that could be better than the past two decades of manual testing.”

The research was funded, in part, by the U.S. National Science Foundation, the Machine Learning for Pharmaceutical Discovery and Synthesis consortium, the Abdul Latif Jameel Clinic for Machine Learning in Health, the DTRA Discovery of Medical Countermeasures Against New and Emerging Threats program, the DARPA Accelerated Molecular Discovery program, the Sanofi Computational Antibody Design grant, the U.S. Office of Naval Research, the Howard Hughes Medical Institute, the National Institutes of Health, the K. Lisa Yang ICoN Center, and the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics at MIT.

How the brain coordinates speaking and breathing

MIT researchers have discovered a brain circuit that drives vocalization and ensures that you talk only when you breathe out, and stop talking when you breathe in.

McGovern Investigator Fan Wang. Photo: Caitlin Cunningham

The newly discovered circuit controls two actions that are required for vocalization: narrowing of the larynx and exhaling air from the lungs. The researchers also found that this vocalization circuit is under the command of a brainstem region that regulates the breathing rhythm, which ensures that breathing remains dominant over speech.

“When you need to breathe in, you have to stop vocalization. We found that the neurons that control vocalization receive direct inhibitory input from the breathing rhythm generator,” says Fan Wang, an MIT professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Jaehong Park, a Duke University graduate student who is currently a visiting student at MIT, is the lead author of the study, which appears today in Science. Other authors of the paper include MIT technical associates Seonmi Choi and Andrew Harrahill, former MIT research scientist Jun Takatoh, and Duke University researchers Shengli Zhao and Bao-Xia Han.

Vocalization control

Located in the larynx, the vocal cords are two muscular bands that can open and close. When they are mostly closed, or adducted, air exhaled from the lungs generates sound as it passes through the cords.

The MIT team set out to study how the brain controls this vocalization process, using a mouse model. Mice communicate with each other using sounds known as ultrasonic vocalizations (USVs), which they produce using the unique whistling mechanism of exhaling air through a small hole between nearly closed vocal cords.

“We wanted to understand what are the neurons that control the vocal cord adduction, and then how do those neurons interact with the breathing circuit?” Wang says.

To figure that out, the researchers used a technique that allows them to map the synaptic connections between neurons. They knew that vocal cord adduction is controlled by laryngeal motor neurons, so they began by tracing backward to find the neurons that innervate those motor neurons.

This revealed that one major source of input is a group of premotor neurons found in the hindbrain region called the retroambiguus nucleus (RAm). Previous studies have shown that this area is involved in vocalization, but it wasn’t known exactly which part of the RAm was required or how it enabled sound production.

Laryngeal premotor neurons (green) and Fos (magenta) labeling in the RAm. Image: Fan Wang

The researchers found that these synaptic tracing-labeled RAm neurons were strongly activated during USVs. This observation prompted the team to use an activity-dependent method to target these vocalization-specific RAm neurons, termed RAmVOC. They used chemogenetics and optogenetics to explore what would happen if they silenced or stimulated their activity. When the researchers blocked the RAmVOC neurons, the mice were no longer able to produce USVs or any other kind of vocalization. Their vocal cords did not close, and their abdominal muscles did not contract, as they normally do during exhalation for vocalization.

Conversely, when the RAmVOC neurons were activated, the vocal cords closed, the mice exhaled, and USVs were produced. However, if the stimulation lasted two seconds or longer, these USVs would be interrupted by inhalations, suggesting that the process is under control of the same part of the brain that regulates breathing.

“Breathing is a survival need,” Wang says. “Even though these neurons are sufficient to elicit vocalization, they are under the control of breathing, which can override our optogenetic stimulation.”

Rhythm generation

Additional synaptic mapping revealed that neurons in a part of the brainstem called the pre-Bötzinger complex, which acts as a rhythm generator for inhalation, provide direct inhibitory input to the RAmVOC neurons.

“The pre-Bötzinger complex generates inhalation rhythms automatically and continuously, and the inhibitory neurons in that region project to these vocalization premotor neurons and essentially can shut them down,” Wang says.

This ensures that breathing remains dominant over speech production, and that we have to pause to breathe while speaking.

The researchers believe that although human speech production is more complex than mouse vocalization, the circuit they identified in mice plays a conserved role in speech production and breathing in humans.

“Even though the exact mechanism and complexity of vocalization in mice and humans is really different, the fundamental vocalization process, called phonation, which requires vocal cord closure and the exhalation of air, is shared in both the human and the mouse,” Park says.

The researchers now hope to study how other functions such as coughing and swallowing food may be affected by the brain circuits that control breathing and vocalization.

The research was funded by the National Institutes of Health.

Imaging method reveals new cells and structures in human brain tissue

Using a novel microscopy technique, MIT and Brigham and Women’s Hospital/Harvard Medical School researchers have imaged human brain tissue in greater detail than ever before, revealing cells and structures that were not previously visible.

McGovern Institute Investigator Edward Boyden. Photo: Justin Knight

Among their findings, the researchers discovered that some “low-grade” brain tumors contain more putatively aggressive tumor cells than expected, suggesting that some of these tumors may be more aggressive than previously thought.

The researchers hope that this technique could eventually be deployed to diagnose tumors, generate more accurate prognoses, and help doctors choose treatments.

“We’re starting to see how important the interactions of neurons and synapses with the surrounding brain are to the growth and progression of tumors. A lot of those things we really couldn’t see with conventional tools, but now we have a tool to look at those tissues at the nanoscale and try to understand these interactions,” says Pablo Valdes, a former MIT postdoc who is now an assistant professor of neuroscience at the University of Texas Medical Branch and the lead author of the study.

Edward Boyden, the Y. Eva Tan Professor in Neurotechnology at MIT; a professor of biological engineering, media arts and sciences, and brain and cognitive sciences; a Howard Hughes Medical Institute investigator; and a member of MIT’s McGovern Institute for Brain Research and Koch Institute for Integrative Cancer Research; and E. Antonio Chiocca, a professor of neurosurgery at Harvard Medical School and chair of neurosurgery at Brigham and Women’s Hospital, are the senior authors of the study, which appears today in Science Translational Medicine.

Making molecules visible

The new imaging method is based on expansion microscopy, a technique developed in Boyden’s lab in 2015 based on a simple premise: Instead of using powerful, expensive microscopes to obtain high-resolution images, the researchers devised a way to expand the tissue itself, allowing it to be imaged at very high resolution with a regular light microscope.

The technique works by embedding the tissue into a polymer that swells when water is added, and then softening up and breaking apart the proteins that normally hold tissue together. Then, adding water swells the polymer, pulling all the proteins apart from each other. This tissue enlargement allows researchers to obtain images with a resolution of around 70 nanometers, which was previously possible only with very specialized and expensive microscopes such as scanning electron microscopes.

In 2017, the Boyden lab developed a way to expand preserved human tissue specimens, but the chemical reagents that they used also destroyed the proteins that the researchers were interested in labeling. Labeling the proteins with fluorescent antibodies before expansion preserved their location and identity, so they could still be visualized after the expansion process was complete. However, the antibodies typically used for this kind of labeling can’t easily squeeze through densely packed tissue before it’s expanded.

So, for this study, the authors devised a different tissue-softening protocol that breaks up the tissue but preserves proteins in the sample. After the tissue is expanded, proteins can be labeled with commercially available fluorescent antibodies. The researchers can then perform several rounds of imaging, with three or four different proteins labeled in each round. This labeling of proteins enables many more structures to be imaged, because once the tissue is expanded, antibodies can squeeze through and label proteins they couldn’t previously reach.

“We open up the space between the proteins so that we can get antibodies into crowded spaces that we couldn’t otherwise,” Valdes says. “We saw that we could expand the tissue, we could decrowd the proteins, and we could image many, many proteins in the same tissue by doing multiple rounds of staining.”

Working with MIT Assistant Professor Deblina Sarkar, the researchers demonstrated a form of this “decrowding” in 2022 using mouse tissue.

The new study resulted in a decrowding technique for use with human brain tissue samples that are used in clinical settings for pathological diagnosis and to guide treatment decisions. These samples can be more difficult to work with because they are usually embedded in paraffin and treated with other chemicals that need to be broken down before the tissue can be expanded.

In this study, the researchers labeled up to 16 different molecules per tissue sample. The molecules they targeted include markers for a variety of structures, including axons and synapses, as well as markers that identify cell types such as astrocytes and cells that form blood vessels. They also labeled molecules linked to tumor aggressiveness and neurodegeneration.

Using this approach, the researchers analyzed healthy brain tissue, along with samples from patients with two types of glioma — high-grade glioblastoma, which is the most aggressive primary brain tumor, with a poor prognosis, and low-grade gliomas, which are considered less aggressive.

“We wanted to look at brain tumors so that we can understand them better at the nanoscale level, and by doing that, to be able to develop better treatments and diagnoses in the future. At this point, it was more developing a tool to be able to understand them better, because currently in neuro-oncology, people haven’t done much in terms of super-resolution imaging,” Valdes says.

A diagnostic tool

To identify aggressive tumor cells in gliomas they studied, the researchers labeled vimentin, a protein that is found in highly aggressive glioblastomas. To their surprise, they found many more vimentin-expressing tumor cells in low-grade gliomas than had been seen using any other method.

“This tells us something about the biology of these tumors, specifically, how some of them probably have a more aggressive nature than you would suspect by doing standard staining techniques,” Valdes says.

When glioma patients undergo surgery, tumor samples are preserved and analyzed using immunohistochemistry staining, which can reveal certain markers of aggressiveness, including some of the markers analyzed in this study.

“These are incurable brain cancers, and this type of discovery will allow us to figure out which cancer molecules to target so we can design better treatments. It also proves the profound impact of having clinicians like us at the Brigham and Women’s interacting with basic scientists such as Ed Boyden at MIT to discover new technologies that can improve patient lives,” Chiocca says.

The researchers hope their expansion microscopy technique could allow doctors to learn much more about patients’ tumors, helping them to determine how aggressive the tumor is and guiding treatment choices. Valdes now plans to do a larger study of tumor types to try to establish diagnostic guidelines based on the tumor traits that can be revealed using this technique.

“Our hope is that this is going to be a diagnostic tool to pick up marker cells, interactions, and so on, that we couldn’t before,” he says. “It’s a practical tool that will help the clinical world of neuro-oncology and neuropathology look at neurological diseases at the nanoscale like never before, because fundamentally it’s a very simple tool to use.”

Boyden’s lab also plans to use this technique to study other aspects of brain function, in healthy and diseased tissue.

“Being able to do nanoimaging is important because biology is about nanoscale things — genes, gene products, biomolecules — and they interact over nanoscale distances,” Boyden says. “We can study all sorts of nanoscale interactions, including synaptic changes, immune interactions, and changes that occur during cancer and aging.”

The research was funded by K. Lisa Yang, the Howard Hughes Medical Institute, John Doerr, Open Philanthropy, the Bill and Melinda Gates Foundation, the Koch Institute Frontier Research Program, the National Institutes of Health, and the Neurosurgery Research and Education Foundation.

Simons Center’s collaborative approach propels autism research, at MIT and beyond

The secret to the success of MIT’s Simons Center for the Social Brain is in the name. With a founding philosophy of “collaboration and community” that has supported scores of scientists across more than a dozen Boston-area research institutions, the SCSB advances research by being inherently social.

SCSB’s mission is “to understand the neural mechanisms underlying social cognition and behavior and to translate this knowledge into better diagnosis and treatment of autism spectrum disorders.” When Director Mriganka Sur founded the center in 2012 in partnership with the Simons Foundation Autism Research Initiative (SFARI) of Jim and Marilyn Simons, he envisioned a different way to achieve urgently needed research progress than the traditional approach of funding isolated projects in individual labs. Sur wanted SCSB’s contribution to go beyond papers, though it has generated about 350 and counting. He sought the creation of a sustained, engaged autism research community at MIT and beyond.

“When you have a really big problem that spans so many issues, from a clinical presentation to a gene and everything in between, you have to grapple with multiple scales of inquiry,” says Sur, the Newton Professor of Neuroscience in MIT’s Department of Brain and Cognitive Sciences (BCS) and The Picower Institute for Learning and Memory. “This cannot be solved by one person or one lab. We need to span multiple labs and multiple ways of thinking. That was our vision.”

In parallel with a rich calendar of public colloquia, lunches, and special events, SCSB catalyzes multiperspective, multiscale research collaborations in two programmatic ways. Targeted projects fund multidisciplinary teams of scientists with complementary expertise to collectively tackle a pressing scientific question. Meanwhile, the center supports postdoctoral Simons Fellows with not one, but two mentors, ensuring a further cross-pollination of ideas and methods.

Complementary collaboration

In 11 years, SCSB has funded nine targeted projects. Each one, by design, involves a deep and multifaceted exploration of a major question with both fundamental importance and clinical relevance. The first project, back in 2013, for example, marshaled three labs spanning BCS, the Department of Biology, and The Whitehead Institute for Biomedical Research to advance understanding of how mutation of the Shank3 gene leads to the pathophysiology of Phelan-McDermid Syndrome by working across scales ranging from individual neural connections to whole neurons to circuits and behavior.

Other past projects have applied similarly integrated, multiscale approaches to topics ranging from how 16p11.2 gene deletion alters the development of brain circuits and cognition to the critical role of the thalamic reticular nucleus in information flow during sleep and wakefulness. Two others produced deep examinations of cognitive functions: how we go from hearing a string of words to understanding a sentence’s intended meaning, and the neural and behavioral correlates of deficits in making predictions about social and sensory stimuli. Yet another project laid the groundwork for developing a new animal model for autism research.

SFARI is especially excited by SCSB’s team science approach, says Kelsey Martin, executive vice president of autism and neuroscience at the Simons Foundation. “I’m delighted by the collaborative spirit of the SCSB,” Martin says. “It’s wonderful to see and learn about the multidisciplinary team-centered collaborations sponsored by the center.”

New projects

In the last year, SCSB has launched three new targeted projects. One team is investigating why many people with autism experience sensory overload and is testing potential interventions to help. The scientists hypothesize that patients experience a deficit in filtering out the mundane stimuli that neurotypical people predict are safe to ignore. Studies suggest the predictive filter relies on relatively low-frequency “alpha/beta” brain rhythms from deep layers of the cortex moderating the higher frequency “gamma” rhythms in superficial layers that process sensory information.

Together, the labs of Charles Nelson, professor of pediatrics at Boston Children’s Hospital (BCH), and BCS faculty members Bob Desimone, the Doris and Don Berkey Professor of Neuroscience at MIT and director of the McGovern Institute, and Earl K. Miller, the Picower Professor, are testing the hypothesis in two different animal models at MIT and in human volunteers at BCH. In the animals they’ll also try out a new real-time feedback system invented in Miller’s lab that can potentially correct the balance of these rhythms in the brain. And in an animal model engineered with a Shank3 mutation, Desimone’s lab will test a gene therapy, too.

“None of us could do all aspects of this project on our own,” says Miller, an investigator in the Picower Institute. “It could only come about because the three of us are working together, using different approaches.”

Right from the start, Desimone says, close collaboration with Nelson’s group at BCH has been essential. To ensure his and Miller’s measurements in the animals and Nelson’s measurements in the humans are as comparable as possible, they have tightly coordinated their research protocols.

“If we hadn’t had this joint grant we would have chosen a completely different, random set of parameters than Chuck, and the results therefore wouldn’t have been comparable. It would be hard to relate them,” says Desimone, who also directs MIT’s McGovern Institute for Brain Research. “This is a project that could not be accomplished by one lab operating in isolation.”

Another targeted project brings together a coalition of seven labs — six based in BCS (professors Evelina Fedorenko, Edward Gibson, Nancy Kanwisher, Roger Levy, Rebecca Saxe, and Joshua Tenenbaum) and one at Dartmouth College (Caroline Robertson) — for a synergistic study of the cognitive, neural, and computational underpinnings of conversational exchanges. The study will integrate the linguistic and non-linguistic aspects of conversational ability in neurotypical adults and children and those with autism.

Fedorenko says the project builds on advances and collaborations from the earlier language Targeted Project she led with Kanwisher.

“Many directions that we started to pursue continue to be active directions in our labs. But most importantly, it was really fun and allowed the PIs [principal investigators] to interact much more than we normally would and to explore exciting interdisciplinary questions,” Fedorenko says. “When Mriganka approached me a few years after the project’s completion asking about a possible new targeted project, I jumped at the opportunity.”

Gibson and Robertson are studying how people align their dialogue, not only in the content and form of their utterances, but using eye contact. Fedorenko and Kanwisher will employ fMRI to discover key components of a conversation network in the cortex. Saxe will examine the development of conversational ability in toddlers using novel MRI techniques. Levy and Tenenbaum will complement these efforts to improve computational models of language processing and conversation.

The newest Targeted Project posits that the immune system can be harnessed to help treat behavioral symptoms of autism. Four labs — three in BCS and one at Harvard Medical School (HMS) — will study mechanisms by which peripheral immune cells can deliver a potentially therapeutic cytokine to the brain. A study by two of the collaborators, MIT associate professor Gloria Choi and HMS associate professor Jun Huh, showed that when IL-17a reaches excitatory neurons in a region of the mouse cortex, it can calm hyperactivity in circuits associated with social and repetitive behavior symptoms. Huh, an immunologist, will examine how IL-17a can get from the periphery to the brain, while Choi will examine how it has its neurological effects. Sur and MIT associate professor Myriam Heiman will conduct studies of cell types that bridge neural circuits with brain circulatory systems.

“It is quite amazing that we have a core of scientists working on very different things coming together to tackle this one common goal,” Choi says. “I really value that.”

Multiple mentors

While SCSB Targeted Projects unify labs around research, the center’s Simons Fellowships unify labs around young researchers, providing not only funding, but a pair of mentors and free-flowing interactions between their labs. Fellows also gain opportunities to inform and inspire their fundamental research by visiting with patients with autism, Sur says.

“The SCSB postdoctoral program serves a critical role in ensuring that a diversity of outstanding scientists are exposed to autism research during their training, providing a pipeline of new talent and creativity for the field,” adds Martin, of the Simons Foundation.

Simons Fellows praise the extra opportunities afforded by additional mentoring. Postdoc Alex Major was a Simons Fellow in Miller’s lab and that of Nancy Kopell, a mathematics professor at Boston University renowned for her modeling of the brain wave phenomena that the Miller lab studies experimentally.

“The dual mentorship structure is a very useful aspect of the fellowship,” Major says. “It is a chance both to network with another PI and to gain experience in a different neuroscience subfield.”

Miller says co-mentoring expands the horizons and capabilities not only of the mentees but also of the mentors and their labs. “Collaboration is 21st-century neuroscience,” Miller says. “Some of our studies of the brain have gotten too big and comprehensive to be encapsulated in just one laboratory. Some of these big questions require multiple approaches and multiple techniques.”

Desimone, who recently co-mentored Seng Bum (Michael) Yoo along with BCS and McGovern colleague Mehrdad Jazayeri in a project studying how animals learn from observing others, agrees.

“We hear from postdocs all the time that they wish they had two mentors, just in general to get another point of view,” Desimone says. “This is a really good thing and it’s a way for faculty members to learn about what other faculty members and their postdocs are doing.”

Indeed, the Simons Center model suggests that research can be very successful when it’s collaborative and social.

Margaret Livingstone awarded the 2024 Scolnick Prize in Neuroscience

Today the McGovern Institute at MIT announces that the 2024 Edward M. Scolnick Prize in Neuroscience will be awarded to Margaret Livingstone, Takeda Professor of Neurobiology at Harvard Medical School. The Scolnick Prize is awarded annually by the McGovern Institute for outstanding achievements in neuroscience.

“Margaret Livingstone’s driven curiosity and original experimental approaches have led to fundamental advances in our understanding of visual perception,” says Robert Desimone, director of the McGovern Institute and chair of the selection committee. “In particular, she has made major advances in resolving a long-standing debate over whether the brain domains and neurons that are specifically tuned to detect facial features are present from birth or arise from experience. Her developmental research shows that the cerebral cortex already contains topographic sensory maps at birth, but that domain-specific maps, such as those for recognizing facial features, require experience and sensory input to develop normally.”

“Margaret Livingstone’s driven curiosity and original experimental approaches have led to fundamental advances in our understanding of visual perception.” — Robert Desimone

Livingstone received a BS from MIT in 1972 and, under the mentorship of Edward Kravitz, a PhD in neurobiology from Harvard University in 1981. Her doctoral research in lobsters showed that the biogenic amines serotonin and octopamine control context-dependent behaviors such as offensive versus defensive postures. She followed up on this discovery as a postdoctoral fellow with Prof. William Quinn at Princeton University, researching biogenic amine signaling in learning and memory. Using learning and memory mutants created in the fruit fly model, she identified defects in dopamine-synthesizing enzymes and in calcium-dependent enzymes that produce cAMP. Her results supported the then-burgeoning idea that biogenic amines signal through second messengers to enable behavioral plasticity.

To test whether biogenic amines also control neuronal function in mammals, Livingstone moved back to Harvard Medical School in 1983 to study the effects of sleep on visual processing with David Hubel, who was studying neuronal activity in the nonhuman primate visual cortex. Over the course of a 20-year collaboration, Livingstone and Hubel showed that the visual system is functionally and anatomically divided into parallel pathways that detect and process the distinct visual features of color, motion, and orientation.

Livingstone quickly rose through the academic ranks at Harvard to be appointed as an instructor and then assistant professor in 1983, associate professor in 1986 and full professor in 1988. With her own laboratory, Livingstone began to explore the organization of face-perception domains in the inferotemporal cortex of nonhuman primates. By combining single-cell recording and fMRI brain imaging data from the same animal, her then graduate student Doris Tsao, in collaboration with Winrich Freiwald, showed that an abundance of individual neurons within the face-recognition domain are tuned to a combination of facial features. These results helped to explain the long-standing question of how individual neurons show such exquisite selectivity to specific faces.

Three images of Mona Lisa, side by side, each with a different filter slightly obscuring the face.
Mona Lisa’s smile has been described as mysterious and fleeting because it seems to disappear when viewers look directly at it. Livingstone showed that Mona Lisa’s smile is more apparent in our peripheral vision than our central (or foveal) vision because our peripheral vision is more sensitive to low spatial frequencies, or shadows and shadings of black and white. These shadows make her lips seem to turn upward into a subtle smile. The three images above show the painting filtered from very low spatial frequencies (left, with the smile more apparent) to high spatial frequencies (right, with the smile less visible). Image: Margaret Livingstone

In researching face patches, Livingstone became fascinated with the question of whether face-perception domains are present from birth, as many scientists thought at the time. Livingstone and her postdoc Michael Arcaro carried out experiments showing that the development of face patches requires visual exposure to faces in the early postnatal period. Moreover, they showed that entirely unnatural symbol-specific domains can form in animals given intensive visual exposure to symbols early in development. Thus, experience is both necessary and sufficient for the formation of feature-specific domains in the inferotemporal cortex. Livingstone’s results support a consistent principle for the development of higher-level cortex: from a hard-wired sensory topographic map present at birth to the formation of experience-dependent domains that detect combined, stimulus-specific features.

Livingstone is also known for her scientifically based exploration of the visual arts. Her book “Vision and Art: The Biology of Seeing,” which has sold more than 40,000 copies to date, explores how both the techniques artists use and our anatomy and physiology influence our perception of art. Livingstone has presented this work to audiences around the country, from Pixar Studios, Microsoft, and IBM to the Metropolitan Museum of Art, the National Gallery, and the Hirshhorn Museum.

In 2014, Livingstone was awarded the Takeda Professorship of Neurobiology at Harvard Medical School. She received the Mika Salpeter Lifetime Achievement Award from the Society for Neuroscience in 2011, the Grossman Award from the Society of Neurological Surgeons in 2013, and the Roberts Prize for Best Paper in Physics in Medicine and Biology in 2013 and 2016. Livingstone was elected a fellow of the American Academy of Arts and Sciences in 2018 and of the National Academy of Sciences in 2020. She will be awarded the Scolnick Prize in the spring of 2024.

Calling neurons to attention

The world assaults our senses, exposing us to more noise and color and scents and sensations than we can fully comprehend. Our brains keep us tuned in to what’s important, letting less relevant sights and sounds fade into the background while we focus on the most salient features of our surroundings. Now, scientists at MIT’s McGovern Institute have a better understanding of how the brain manages this critical task of directing our attention.

In the January 15, 2023, issue of the journal Neuron, a team led by Diego Mendoza-Halliday, a research scientist in McGovern Institute Director Robert Desimone’s lab, reports on a group of neurons in the brain’s prefrontal cortex that are critical for directing an animal’s visual attention. Their findings not only demonstrate this brain region’s important role in guiding attention, but also help establish attention as a function that is distinct from other cognitive functions, such as short-term memory, in the brain.

Attention and working memory

Mendoza-Halliday, who is now an assistant professor at the University of Pittsburgh, explains that attention has a close relationship to working memory, which the brain uses to temporarily store information after our senses take it in. The two brain functions strongly influence one another: We’re more likely to remember something if we pay attention to it, and paying attention to certain features of our environment may involve representing those features in our working memory. For example, he explains, both attention and working memory are called on when searching for a triangular red keychain on a cluttered desk: “What my brain does is it remembers that my keyholder is red and it’s a triangle, and then builds a working memory representation and uses it as a search template. So now everything that is red and everything that is a triangle receives preferential processing, or is attended to.”
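The search-template idea Mendoza-Halliday describes can be sketched in a few lines of Python. This is purely illustrative (the objects, feature names, and scoring are invented, not part of the study): items sharing more features with the remembered template get higher processing priority.

```python
# Toy sketch of a working-memory "search template" guiding attention.
# All objects and feature names here are hypothetical, for illustration only.
template = {"color": "red", "shape": "triangle"}  # remembered keychain features

desk = [
    {"name": "red triangular keychain", "color": "red",   "shape": "triangle"},
    {"name": "blue pen",                "color": "blue",  "shape": "cylinder"},
    {"name": "red notebook",            "color": "red",   "shape": "rectangle"},
    {"name": "green triangle magnet",   "color": "green", "shape": "triangle"},
]

def priority(item, template):
    """Each feature shared with the template boosts an item's priority."""
    return sum(item.get(k) == v for k, v in template.items())

# "Preferential processing": attend to items in order of template match.
ranked = sorted(desk, key=lambda item: priority(item, template), reverse=True)
print([item["name"] for item in ranked])
```

Here the keychain matches both remembered features and is processed first, while items matching only color or only shape receive intermediate priority.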

Working memory and attention are so closely associated that some neuroscientists have proposed that the brain calls on the same neural mechanisms to create them. “This has led to the belief that maybe attention and working memory are just two sides of the same coin—that they’re basically the same function in different modes,” Mendoza-Halliday says. His team’s findings, however, say otherwise.

Circuit manipulation

To study the origins of attention in the brain, Mendoza-Halliday and colleagues trained monkeys to focus their attention on a visual feature that matches a cue they have seen before. After seeing a set of dots move across the screen, they must call on their working memory to remember the direction of that movement for a few seconds while the screen goes blank. Then the experimenters present the animals with more moving dots, this time traveling in multiple directions. By focusing on the dots moving in the same direction as the first set they saw, the monkeys are able to recognize when those dots briefly accelerate. Reporting on the speed change earns the animals a reward.

While the monkeys performed this task, the researchers monitored cells in several brain regions, including the prefrontal cortex, which Desimone’s team has proposed plays a role in directing attention. The activity patterns they recorded suggested that distinct groups of cells participated in the attention and working memory aspects of the task.

To better understand those cells’ roles, the researchers manipulated their activity. They used optogenetics, an approach in which a light-sensitive protein is introduced into neurons so that they can be switched on or off with a pulse of light. Desimone’s lab, in collaboration with Edward Boyden, the Y. Eva Tan Professor in Neurotechnology at MIT and a member of the McGovern Institute, pioneered the use of optogenetics in primates. “Optogenetics allows us to distinguish between correlation and causality in neural circuits,” says Desimone, who is the Doris and Don Berkey Professor of Neuroscience and a professor of brain and cognitive sciences at MIT. “If we turn off a circuit using optogenetics, and the animal can no longer perform the task, that is good evidence for a causal role of the circuit.”

Using this optogenetic method, they switched off neurons in a specific portion of the brain’s lateral prefrontal cortex for a few hundred milliseconds at a time as the monkeys performed their dot-tracking task. The researchers found that they could switch off signaling from the lateral prefrontal cortex early, when the monkeys needed their working memory but had no dots to attend to, without interfering with the animals’ ability to complete the task. But when they blocked signaling when the monkeys needed to focus their attention, the animals performed poorly.

The team also monitored activity in the brain’s visual cortex during the moving-dot task. When the lateral prefrontal cortex was shut off, neurons in connected visual areas showed less heightened reactivity to movement in the direction the monkey was attending to. Mendoza-Halliday says this suggests that cells in the lateral prefrontal cortex are important for telling sensory-processing circuits what visual features to pay attention to.

The discovery that at least part of the brain’s lateral prefrontal cortex is critical for attention but not for working memory offers a new view of the relationship between the two. “It is a physiological demonstration that working memory and attention cannot be the same function, since they rely on partially separate neuronal populations and neural mechanisms,” Mendoza-Halliday says.

Mapping healthy cells’ connections in the brain

Portrait of scientist in a suit and tie.
McGovern Institute Principal Research Scientist Ian Wickersham. Photo: Caitlin Cunningham

A new tool developed by researchers at MIT’s McGovern Institute gives neuroscientists the power to find connected neurons within the brain’s tangled network of cells, and then follow or manipulate those neurons over a prolonged period. Its development, led by Principal Research Scientist Ian Wickersham, transforms a powerful tool for exploring the anatomy of the brain into a sophisticated system for studying brain function.

Wickersham and colleagues have designed their system to enable long-term analysis and experiments on groups of neurons that reach through the brain to signal to select groups of cells. It is described in the January 11, 2024, issue of the journal Nature Neuroscience. “This second-generation system will allow imaging, recording, and control of identified networks of synaptically-connected neurons in the context of behavioral studies and other experimental designs lasting weeks, months, or years,” Wickersham says.

The system builds on an approach to anatomical tracing that Wickersham developed in 2007, as a graduate student in Edward Callaway’s lab at the Salk Institute for Biological Studies. Its key is a modified version of a rabies virus, whose natural—and deadly—life cycle involves traveling through the brain’s neural network.

Viral tracing

The rabies virus is useful for tracing neuronal connections because once it has infected the nervous system, it spreads through the neural network by co-opting the very junctions that neurons use to communicate with one another. Hopping across those junctions, or synapses, the virus can pass from cell to cell. Traveling in the opposite direction of neuronal signals, it reaches the brain, where it continues to spread.

Labeled illustration of rabies virus
Simplified illustration of rabies virus. Image: istockphoto

To use the rabies virus to identify specific connections within the brain, Wickersham modified it to limit its spread. His original tracing system uses a rabies virus that lacks an essential gene. When researchers deliver the modified virus to the neurons whose connections they want to map, they also instruct those neurons to make the protein encoded by the virus’s missing gene. That allows the virus to replicate and travel across the synapses that link an infected cell to others in the network. Once it is inside a new cell, the virus is deprived of the critical protein and can go no farther.

Under a microscope, a fluorescent protein delivered by the modified virus lights up, exposing infected cells: those to which the virus was originally delivered as well as any neurons that send it direct inputs. Because the virus crosses only one synapse after leaving the cell it originally infected, the technique is known as monosynaptic tracing.
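The one-hop logic of monosynaptic tracing can be sketched in a few lines of Python (the cell names and connectivity here are invented for illustration, not real anatomy): starting from the originally infected cells, the label spreads to direct presynaptic inputs and then stops, because the newly infected cells lack the protein the virus needs to replicate further.

```python
# Minimal sketch of monosynaptic tracing as a one-hop graph traversal.
# Cell labels and connections are hypothetical, for illustration only.
presynaptic_inputs = {          # cell -> cells that send it direct input
    "starter": ["A", "B"],
    "A": ["C"],                 # C is two synapses from the starter cell
    "B": ["D"],
}

def monosynaptic_trace(starters, inputs):
    """Label starter cells plus their direct presynaptic partners.
    The virus cannot replicate in newly infected cells, so tracing
    stops after exactly one synaptic hop."""
    labeled = set(starters)
    for cell in starters:
        labeled.update(inputs.get(cell, []))
    return labeled

print(sorted(monosynaptic_trace({"starter"}, presynaptic_inputs)))
# Cells C and D lie two synapses away and remain unlabeled.
```

A natural rabies virus would keep hopping through the whole network; deleting the essential gene is what truncates this traversal at depth one.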

Labs around the world now use this method to identify which brain cells send signals to a particular set of neurons. But while the virus used in the original system can’t spread through the brain like a natural rabies virus, it still sickens the cells it does infect. Infected cells usually die in about two weeks, and that has limited scientists’ ability to conduct further studies of the cells whose connections they trace. “If you want to then go on to manipulate those connected populations of cells, you have a very short time window,” Wickersham says.

Reducing toxicity

To keep cells healthy after monosynaptic tracing, Wickersham, postdoctoral researcher Lei Jin, and colleagues devised a new approach. They began by deleting a second gene from the modified virus they use to label cells. That gene encodes an enzyme the rabies virus needs to produce the proteins encoded in its own genome. As with the original system, neurons are instructed to create the virus’s missing proteins, equipping the virus to replicate inside those cells. In this case, this is done in mice that have been genetically modified to produce the second deleted viral gene in specific sets of neurons.

Brightly colored neurons under a microscope.
The initially-infected “starter cells” at the injection site in the substantia nigra, pars compacta. Blue: tyrosine hydroxylase immunostaining, showing dopaminergic cells; green: enhanced green fluorescent protein showing neurons able to be initially infected with the rabies virus; red: the red fluorescent protein tdTomato, reporting the presence of the second-generation rabies virus. Image: Ian Wickersham, Lei Jin

To limit toxicity, Wickersham and his team built in a control that allows researchers to switch off cells’ production of viral proteins once the virus has had time to replicate and begin its spread to connected neurons. With those proteins no longer available to support the viral life cycle, the tracing tool is rendered virtually harmless. After following mice for up to 10 weeks, the researchers detected minimal toxicity in neurons where monosynaptic tracing was initiated. And, Wickersham says, “as far as we can tell, the trans-synaptically labeled cells are completely unscathed.”

Neurons illuminated in red under a microscope
Transsynaptically labeled cells in the striatum, which provides input to the dopaminergic cells of the substantia nigra. These cells show no morphological abnormalities or any other indication of toxicity five weeks after the rabies virus injection. Image: Ian Wickersham, Lei Jin

That means neuroscientists can now pair monosynaptic tracing with many of neuroscience’s most powerful tools for functional studies. To facilitate those experiments, Wickersham’s team encoded enzymes called recombinases into their connection-tracing rabies virus, which enables the introduction of genetically encoded research tools to targeted cells. After tracing cells’ connections, researchers will be able to manipulate those neurons, follow their activity, and explore their contributions to animal behavior. Such experiments will deepen scientists’ understanding of the inputs select groups of neurons receive from elsewhere in the brain, as well as the cells that are sending those signals.

Jin, who is now a principal investigator at Lingang Laboratory in Shanghai, says colleagues are already eager to begin working with the new non-toxic tracing system. Meanwhile, Wickersham’s group has already started experimenting with a third-generation system, which they hope will improve efficiency and be even more powerful.

K. Lisa Yang Postbaccalaureate Program names new scholars

Funded by philanthropist Lisa Yang, the K. Lisa Yang Postbaccalaureate Scholar Program provides two years of paid laboratory experience, mentorship, and education to recent college graduates from backgrounds underrepresented in neuroscience. This year, two young researchers in McGovern Institute labs, Joseph Itiat and Sam Merrow, have been named Yang postbac scholars.

Itiat moved to the United States from Nigeria in 2019 to pursue a degree in psychology and cognitive neuroscience at Temple University. Today, he is a Yang postbac in John Gabrieli’s lab studying the relationship between learning and value processes and their influence on future-oriented decision-making. Ultimately, Itiat hopes to develop models that map the underlying mechanisms driving these processes.

“Being African, with limited research experience and little representation in the domain of neuroscience research,” Itiat says, “I chose to pursue a postbaccalaureate research program to prepare me for a top graduate school and a career in cognitive neuroscience.”

Merrow first fell in love with science while working at the Barrow Neurological Institute in Arizona during high school. After graduating from Simmons University in Boston, Massachusetts, Merrow joined Guoping Feng’s lab as a Yang postbac to pursue research on glial cells and brain disorders. “As a queer, nonbinary, LatinX person, I have not met anyone like me in my field, nor have I had role models that hold a similar identity to myself,” says Merrow.

“My dream is to one day become a professor, where I will be able to show others that science is for anyone.”

Previous Yang postbacs include Alex Negron, Zoe Pearce, Ajani Stewart, and Maya Taliaferro.

A new way to see the activity inside a living cell

Living cells are bombarded with many kinds of incoming molecular signals that influence their behavior. Being able to measure those signals, and how cells respond to them through downstream molecular signaling networks, could help scientists learn much more about how cells work, including what happens as they age or become diseased.

Right now, this kind of comprehensive study is not possible because current techniques for imaging cells are limited to just a handful of different molecule types within a cell at one time. However, MIT researchers have developed an alternative method that allows them to observe up to seven different molecules at a time, and potentially even more than that.

“There are many examples in biology where an event triggers a long downstream cascade of events, which then causes a specific cellular function,” says Edward Boyden, the Y. Eva Tan Professor in Neurotechnology. “How does that occur? It’s arguably one of the fundamental problems of biology, and so we wondered, could you simply watch it happen?”

It’s arguably one of the fundamental problems of biology, and so we wondered, could you simply watch it happen? – Ed Boyden

The new approach makes use of green or red fluorescent molecules that flicker on and off at different rates. By imaging a cell over several seconds, minutes, or hours, and then extracting each of the fluorescent signals using a computational algorithm, the amount of each target protein can be tracked as it changes over time.

Boyden, who is also a professor of biological engineering and of brain and cognitive sciences at MIT, a Howard Hughes Medical Institute investigator, and a member of MIT’s McGovern Institute for Brain Research and Koch Institute for Integrative Cancer Research, as well as the co-director of the K. Lisa Yang Center for Bionics, is the senior author of the study, which appears today in Cell. MIT postdoc Yong Qian is the lead author of the paper.

Fluorescent signals

Labeling molecules inside cells with fluorescent proteins has allowed researchers to learn a great deal about the functions of many cellular molecules. This type of study is often done with green fluorescent protein (GFP), which was first deployed for imaging in the 1990s. Since then, several fluorescent proteins that glow in other colors have been developed for experimental use.

However, a typical light microscope can only distinguish two or three of these colors, allowing researchers only a tiny glimpse of the overall activity that is happening inside a cell. If they could track a greater number of labeled molecules, researchers could measure a brain cell’s response to different neurotransmitters during learning, for example, or investigate the signals that prompt a cancer cell to metastasize.

“Ideally, you would be able to watch the signals in a cell as they fluctuate in real time, and then you could understand how they relate to each other. That would tell you how the cell computes,” Boyden says. “The problem is that you can’t watch very many things at the same time.”

In 2020, Boyden’s lab developed a way to simultaneously image up to five different molecules within a cell, by targeting glowing reporters to distinct locations inside the cell. This approach, known as “spatial multiplexing,” allows researchers to distinguish signals for different molecules even though they may all be fluorescing the same color.

In the new study, the researchers took a different approach: Instead of distinguishing signals based on their physical location, they created fluorescent signals that vary over time. The technique relies on “switchable fluorophores” — fluorescent proteins that turn on and off at a specific rate. For this study, Boyden and his group members identified four green switchable fluorophores, and then engineered two more, all of which turn on and off at different rates. They also identified two red fluorescent proteins that switch at different rates, and engineered one additional red fluorophore.

Using four switchable fluorophores, MIT researchers were able to label and image four different kinases inside these cells (top four rows). In the bottom row, the cell nuclei are labeled in blue.
Image: Courtesy of the researchers

Each of these switchable fluorophores can be used to label a different type of molecule within a living cell, such as an enzyme, a signaling protein, or part of the cell’s cytoskeleton. After imaging the cell for several minutes, hours, or even days, the researchers use a computational algorithm to pick out the specific signal from each fluorophore, analogous to how the human ear can pick out different frequencies of sound.

“In a symphony orchestra, you have high-pitched instruments, like the flute, and low-pitched instruments, like a tuba. And in the middle are instruments like the trumpet. They all have different sounds, and our ear sorts them out,” Boyden says.

The mathematical technique that the researchers used to analyze the fluorophore signals is known as linear unmixing. This method can extract the signal of each individual fluorophore, much as a Fourier transform separates the different pitches in a piece of music.
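The idea behind linear unmixing can be sketched with synthetic data. In this illustrative example (the switching rates, abundances, and noise level are invented, not values from the study), three fluorophores of the same color blink with known temporal signatures; the single recorded trace is their weighted sum, and a least-squares fit recovers each fluorophore's abundance.

```python
import numpy as np

# Sketch of temporal multiplexing via linear unmixing, on synthetic data.
# Switching rates and abundances below are hypothetical, chosen for illustration.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)                       # 10 seconds of imaging

# Known on/off signatures: square waves blinking at different rates.
rates = [0.5, 1.3, 2.7]                           # Hz (assumed)
signatures = np.stack(
    [(np.sin(2 * np.pi * r * t) > 0).astype(float) for r in rates]
)

# The camera records one trace: a weighted sum of all signatures, plus noise.
true_abundance = np.array([3.0, 1.5, 0.7])        # per-fluorophore amounts
observed = true_abundance @ signatures + 0.05 * rng.standard_normal(t.size)

# Linear unmixing: least-squares solve for the abundances given the signatures.
estimate, *_ = np.linalg.lstsq(signatures.T, observed, rcond=None)
print(np.round(estimate, 2))                      # close to [3.0, 1.5, 0.7]
```

Because the blinking patterns are known in advance and differ from one another, the fit cleanly separates signals that a microscope sees as a single color.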

Once this analysis is complete, the researchers can see when and where each of the fluorescently labeled molecules were found in the cell during the entire imaging period. The imaging itself can be done with a simple light microscope, with no specialized equipment required.

Biological phenomena

In this study, the researchers demonstrated their approach by labeling six different molecules involved in the cell division cycle, in mammalian cells. This allowed them to identify patterns in how the levels of enzymes called cyclin-dependent kinases change as a cell progresses through the cell cycle.

The researchers also showed that they could label other types of kinases, which are involved in nearly every aspect of cell signaling, as well as cell structures and organelles such as the cytoskeleton and mitochondria. In addition to their experiments using mammalian cells grown in a lab dish, the researchers showed that this technique could work in the brains of zebrafish larvae.

This method could be useful for observing how cells respond to any kind of input, such as nutrients, immune system factors, hormones, or neurotransmitters, according to the researchers. It could also be used to study how cells respond to changes in gene expression or genetic mutations. All of these factors play important roles in biological phenomena such as growth, aging, cancer, neurodegeneration, and memory formation.

“You could consider all of these phenomena to represent a general class of biological problem, where some short-term event — like eating a nutrient, learning something, or getting an infection — generates a long-term change,” Boyden says.

In addition to pursuing those types of studies, Boyden’s lab is also working on expanding the repertoire of switchable fluorophores so that they can study even more signals within a cell. They also hope to adapt the system so that it could be used in mouse models.

The research was funded by an Alana Fellowship, K. Lisa Yang, John Doerr, Jed McCaleb, James Fickel, Ashar Aziz, the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics at MIT, the Howard Hughes Medical Institute, and the National Institutes of Health.