Soft optical fibers block pain while moving and stretching with the body

Scientists have a new tool to precisely illuminate the roots of nerve pain.

Engineers at MIT have developed soft and implantable fibers that can deliver light to major nerves through the body. When these nerves are genetically manipulated to respond to light, the fibers can send pulses of light to the nerves to inhibit pain. The optical fibers are flexible and stretch with the body.

The new fibers are meant as an experimental tool that can be used by scientists to explore the causes and potential treatments for peripheral nerve disorders in animal models. Peripheral nerve pain can occur when nerves outside the brain and spinal cord are damaged, resulting in tingling, numbness, and pain in affected limbs. Peripheral neuropathy is estimated to affect more than 20 million people in the United States.

“Current devices used to study nerve disorders are made of stiff materials that constrain movement, so that we can’t really study spinal cord injury and recovery if pain is involved,” says Siyuan Rao, assistant professor of biomedical engineering at the University of Massachusetts at Amherst, who carried out part of the work as a postdoc at MIT. “Our fibers can adapt to natural motion and do their work while not limiting the motion of the subject. That can give us more precise information.”

“Now, people have a tool to study the diseases related to the peripheral nervous system, in very dynamic, natural, and unconstrained conditions,” adds Xinyue Liu PhD ’22, who is now an assistant professor at Michigan State University (MSU).

Details of the team’s new fibers are reported today in a study appearing in Nature Methods. Rao’s and Liu’s MIT co-authors include Atharva Sahasrabudhe, a graduate student in chemistry; Xuanhe Zhao, professor of mechanical engineering and civil and environmental engineering; and Polina Anikeeva, professor of materials science and engineering, along with others at MSU, UMass-Amherst, Harvard Medical School, and the National Institutes of Health.

Beyond the brain

The new study grew out of the team’s desire to expand the use of optogenetics beyond the brain. Optogenetics is a technique by which nerves are genetically engineered to respond to light. Exposure to that light can then either activate or inhibit the nerve, which can give scientists information about how the nerve works and interacts with its surroundings.

Neuroscientists have applied optogenetics in animals to precisely trace the neural pathways underlying a range of brain disorders, including addiction, Parkinson’s disease, and mood and sleep disorders — information that has led to targeted therapies for these conditions.

To date, optogenetics has been primarily employed in the brain, an area that lacks pain receptors, which allows for the relatively painless implantation of rigid devices. However, the rigid devices can still damage neural tissues. The MIT team wondered whether the technique could be expanded to nerves outside the brain. Just as with the brain and spinal cord, nerves in the peripheral system can experience a range of impairment, including sciatica, motor neuron disease, and general numbness and pain.

Optogenetics could help neuroscientists identify specific causes of peripheral nerve conditions as well as test therapies to alleviate them. But the main hurdle to implementing the technique beyond the brain is motion. Peripheral nerves experience constant pushing and pulling from the surrounding muscles and tissues. If rigid silicon devices were used in the periphery, they would constrain an animal’s natural movement and potentially cause tissue damage.

Crystals and light

The researchers looked to develop an alternative that could work and move with the body. Their new design is a soft, stretchable, transparent fiber made from hydrogel — a rubbery, biocompatible mix of polymers and water, the ratio of which they tuned to create tiny, nanoscale crystals of polymers scattered throughout a more Jell-O-like solution.

The fiber comprises two layers — a core and an outer shell, or “cladding.” The team mixed the solutions of each layer to generate a specific crystal arrangement. This arrangement gave each layer a different refractive index, and together the layers kept any light traveling through the fiber from escaping or scattering away.
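
A simple way to see why the index contrast matters: in a step-index fiber, light stays confined as long as it strikes the core-cladding boundary beyond the critical angle, and the contrast sets the fiber’s numerical aperture. The short Python sketch below works through that arithmetic; the index values are illustrative assumptions, since the article does not give the hydrogel layers’ actual refractive indices.

```python
import math

def numerical_aperture(n_core: float, n_clad: float) -> float:
    """Numerical aperture of a step-index fiber: NA = sqrt(n_core^2 - n_clad^2)."""
    return math.sqrt(n_core**2 - n_clad**2)

def critical_angle_deg(n_core: float, n_clad: float) -> float:
    """Incidence angle at the core-cladding interface beyond which light
    is totally internally reflected: theta_c = arcsin(n_clad / n_core)."""
    return math.degrees(math.asin(n_clad / n_core))

# Illustrative values only; the core index must exceed the cladding index
# for light to remain trapped in the core.
n_core, n_clad = 1.40, 1.36
print(f"NA = {numerical_aperture(n_core, n_clad):.3f}")                  # ~0.332
print(f"critical angle = {critical_angle_deg(n_core, n_clad):.1f} deg")  # ~76.3
```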

The team tested the optical fibers in mice whose nerves were genetically modified to respond to blue light that would excite neural activity or yellow light that would inhibit their activity. They found that even with the implanted fiber in place, mice were able to run freely on a wheel. After two months of wheel exercises, amounting to some 30,000 cycles, the researchers found the fiber was still robust and resistant to fatigue, and could also transmit light efficiently to trigger muscle contraction.

The team then turned on a yellow laser and ran it through the implanted fiber. Using standard laboratory procedures for assessing pain inhibition, they observed that the mice were much less sensitive to pain than rodents that were not stimulated with light. The fibers were able to significantly inhibit sciatic pain in those light-stimulated mice.

The researchers see the fibers as a new tool that can help scientists identify the roots of pain and other peripheral nerve disorders.

“We are focusing on the fiber as a new neuroscience technology,” Liu says. “We hope to help dissect mechanisms underlying pain in the peripheral nervous system. With time, our technology may help identify novel mechanistic therapies for chronic pain and other debilitating conditions such as nerve degeneration or injury.”

This research was supported, in part, by the National Institutes of Health, the National Science Foundation, the U.S. Army Research Office, the McGovern Institute for Brain Research, the Hock E. Tan and K. Lisa Yang Center for Autism Research, the K. Lisa Yang Brain-Body Center, and the Brain and Behavior Research Foundation.

Ariel Furst and Fan Wang receive 2023 National Institutes of Health awards

The National Institutes of Health (NIH) has awarded grants to MIT’s Ariel Furst and Fan Wang through its High-Risk, High-Reward Research program, which this year issued 85 new research grants to support exceptionally creative scientists pursuing highly innovative behavioral and biomedical research projects.

Ariel Furst was selected as the recipient of the NIH Director’s New Innovator Award, which has supported unusually innovative research since 2007. Recipients are early-career investigators who are within 10 years of their final degree or clinical residency and have not yet received a research project grant or equivalent NIH grant.

Furst, the Paul M. Cook Career Development Assistant Professor of Chemical Engineering at MIT, invents technologies to improve human and environmental health by increasing equitable access to resources. Her lab develops transformative technologies to solve problems related to health care and sustainability by harnessing the inherent capabilities of biological molecules and cells. She is passionate about STEM outreach and increasing the participation of underrepresented groups in engineering.

After completing her PhD at Caltech, where she developed noninvasive diagnostics for colorectal cancer, Furst became an A. O. Beckman Postdoctoral Fellow at the University of California at Berkeley. There she developed sensors to monitor environmental pollutants. In 2022, Furst was awarded the MIT UROP Outstanding Faculty Mentor Award for her work with undergraduate researchers. She is now a 2023 Marion Milligan Mason Awardee, a CIFAR Azrieli Global Scholar for Bio-Inspired Solar Energy, and an ARO Early Career Grantee. She is also a co-founder of the regenerative agriculture company Seia Bio.

Fan Wang received the Pioneer Award, which since 2004 has challenged researchers at all career levels to pursue new directions and develop groundbreaking, high-impact approaches to broad areas of biomedical and behavioral science.

Wang, a professor in the Department of Brain and Cognitive Sciences and an investigator in the McGovern Institute for Brain Research, is uncovering the neural circuit mechanisms that govern bodily sensations, like touch, pain, and posture, as well as the mechanisms that control sensorimotor behaviors. Researchers in the Wang lab aim to generate an integrated understanding of the sensation-perception-action process, hoping to find better treatments for diseases like chronic pain, addiction, and movement disorders. Wang’s lab uses genetic and viral tools, in vivo large-scale electrophysiology, and imaging techniques to gain traction in these pursuits.

Wang obtained her PhD at Columbia University, working with Professor Richard Axel. She conducted her postdoctoral work at Stanford University with Mark Tessier-Lavigne and joined Duke University as a faculty member in 2003. Wang was later appointed the Morris N. Broad Distinguished Professor of Neurobiology at the Duke University School of Medicine. In January 2023, she joined the faculty of the MIT School of Science and the McGovern Institute.

The High-Risk, High-Reward Research program is funded through the NIH Common Fund, which supports a series of exceptionally high-impact programs that cross NIH Institutes and Centers.

“The HRHR program is a pillar for innovation here at NIH, providing support to transformational research, with advances in biomedical and behavioral science,” says Robert W. Eisinger, acting director of the Division of Program Coordination, Planning, and Strategic Initiatives, which oversees the NIH Common Fund. “These awards align with the Common Fund’s mandate to support science expected to have exceptionally high and broadly applicable impact.”

NIH issued eight Pioneer Awards, 58 New Innovator Awards, six Transformative Research Awards, and 13 Early Independence Awards in 2023. Funding for the awards comes from the NIH Common Fund; the National Institute of General Medical Sciences; the National Institute of Mental Health; the National Library of Medicine; the National Institute on Aging; the National Heart, Lung, and Blood Institute; and the Office of Dietary Supplements.

Study: Deep neural networks don’t see the world the way we do

Human sensory systems are very good at recognizing objects that we see or words that we hear, even if the object is upside down or the word is spoken by a voice we’ve never heard.

Computational models known as deep neural networks can be trained to do the same thing, correctly identifying an image of a dog regardless of what color its fur is, or a word regardless of the pitch of the speaker’s voice. However, a new study from MIT neuroscientists has found that these models often also respond the same way to images or words that have no resemblance to the target.

When these neural networks were used to generate an image or a word that they responded to in the same way as a specific natural input, such as a picture of a bear, most of them generated images or sounds that were unrecognizable to human observers. This suggests that these models build up their own idiosyncratic “invariances” — meaning that they respond the same way to stimuli with very different features.

The findings offer a new way for researchers to evaluate how well these models mimic the organization of human sensory perception, says Josh McDermott, an associate professor of brain and cognitive sciences at MIT and a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines.

“This paper shows that you can use these models to derive unnatural signals that end up being very diagnostic of the representations in the model,” says McDermott, who is the senior author of the study. “This test should become part of a battery of tests that we as a field are using to evaluate models.”

Jenelle Feather PhD ’22, who is now a research fellow at the Flatiron Institute Center for Computational Neuroscience, is the lead author of the open-access paper, which appears today in Nature Neuroscience. Guillaume Leclerc, an MIT graduate student, and Aleksander Mądry, the Cadence Design Systems Professor of Computing at MIT, are also authors of the paper.

Different perceptions

In recent years, researchers have trained deep neural networks that can analyze millions of inputs (sounds or images) and learn common features that allow them to classify a target word or object roughly as accurately as humans do. These models are currently regarded as the leading models of biological sensory systems.

It is believed that when the human sensory system performs this kind of classification, it learns to disregard features that aren’t relevant to an object’s core identity, such as how much light is shining on it or what angle it’s being viewed from. This is known as invariance, meaning that objects are perceived to be the same even if they show differences in those less important features.

“Classically, the way that we have thought about sensory systems is that they build up invariances to all those sources of variation that different examples of the same thing can have,” Feather says. “An organism has to recognize that they’re the same thing even though they show up as very different sensory signals.”

The researchers wondered if deep neural networks that are trained to perform classification tasks might develop similar invariances. To try to answer that question, they used these models to generate stimuli that produce the same kind of response within the model as an example stimulus given to the model by the researchers.

They term these stimuli “model metamers,” reviving an idea from classical perception research whereby stimuli that are indistinguishable to a system can be used to diagnose its invariances. The concept of metamers was originally developed in the study of human perception to describe colors that look identical even though they are made up of different wavelengths of light.
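
Concretely, metamer generation of this sort is usually implemented as gradient descent on the input itself: start from noise and iteratively adjust it until the model’s activations at a chosen stage match those evoked by the reference stimulus. Below is a minimal PyTorch sketch of that loop, not the paper’s actual code; the `model(x, layer)` interface, optimizer settings, and step count are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def generate_metamer(model, layer, reference, steps=2000, lr=0.05):
    """Optimize a noise input until its activations at `layer` match the
    activations evoked by `reference` (hypothetical model interface)."""
    with torch.no_grad():
        target = model(reference, layer)        # activations to be matched
    metamer = torch.randn_like(reference, requires_grad=True)
    opt = torch.optim.Adam([metamer], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(model(metamer, layer), target)
        loss.backward()
        opt.step()
    return metamer.detach()                     # "model metamer" of reference
```

If the model’s invariances matched human ones, the optimized input should still look or sound like the reference to us; the study’s finding is that it usually does not.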

To their surprise, the researchers found that most of the images and sounds produced in this way looked and sounded nothing like the examples that the models were originally given. Most of the images were a jumble of random-looking pixels, and the sounds resembled unintelligible noise. When researchers showed the images to human observers, in most cases the humans did not classify the images synthesized by the models in the same category as the original target example.

“They’re really not recognizable at all by humans. They don’t look or sound natural and they don’t have interpretable features that a person could use to classify an object or word,” Feather says.

The findings suggest that the models have somehow developed their own invariances that are different from those found in human perceptual systems. This causes the models to perceive pairs of stimuli as being the same despite their being wildly different to a human.

Idiosyncratic invariances

The researchers found the same effect across many different vision and auditory models. However, each of these models appeared to develop its own unique invariances. When metamers from one model were shown to another model, the metamers were just as unrecognizable to the second model as they were to human observers.

“The key inference from that is that these models seem to have what we call idiosyncratic invariances,” McDermott says. “They have learned to be invariant to these particular dimensions in the stimulus space, and it’s model-specific, so other models don’t have those same invariances.”

The researchers also found that they could induce a model’s metamers to be more recognizable to humans by using an approach called adversarial training. This approach was originally developed to combat another limitation of object recognition models, which is that introducing tiny, almost imperceptible changes to an image can cause the model to misrecognize it.

The researchers found that adversarial training, which involves including some of these slightly altered images in the training data, yielded models whose metamers were more recognizable to humans, though they were still not as recognizable as the original stimuli. This improvement appears to be independent of the training’s effect on the models’ ability to resist adversarial attacks, the researchers say.
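
For readers unfamiliar with the technique, adversarial training in its common form augments each training batch with slightly perturbed inputs crafted to raise the model’s loss. The sketch below uses the one-step FGSM perturbation as an example; the epsilon, loss weighting, and other details are illustrative assumptions, and the study’s exact recipe may differ.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=0.03):
    """One-step FGSM: shift each input slightly in the direction that
    increases the classification loss, creating an adversarial example."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(model, opt, x, y, eps=0.03):
    """Train on clean and perturbed inputs together."""
    x_adv = fgsm_perturb(model, x, y, eps)
    opt.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    opt.step()
    return loss.item()
```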

“This particular form of training has a big effect, but we don’t really know why it has that effect,” Feather says. “That’s an area for future research.”

Analyzing the metamers produced by computational models could be a useful tool to help evaluate how closely a computational model mimics the underlying organization of human sensory perception systems, the researchers say.

“This is a behavioral test that you can run on a given model to see whether the invariances are shared between the model and human observers,” Feather says. “It could also be used to evaluate how idiosyncratic the invariances are within a given model, which could help uncover potential ways to improve our models in the future.”

The research was funded by the National Science Foundation, the National Institutes of Health, a Department of Energy Computational Science Graduate Fellowship, and a Friends of the McGovern Institute Fellowship.

New cellular census maps the complexity of a primate brain

A new atlas developed by researchers at MIT’s McGovern Institute and Harvard Medical School catalogs a diverse array of brain cells throughout the marmoset brain. The atlas helps establish marmosets—small monkeys whose brains share many functional and structural features with the human brain—as a valuable model for neuroscience research.

Data from more than two million brain cells are included in the atlas, which spans 18 regions of the marmoset brain. A research team led by Guoping Feng, associate director of the McGovern Institute, Harvard biologist Steven McCarroll, and Princeton neurobiologist Fenna Krienen — Feng and McCarroll are both members of the Broad Institute of MIT and Harvard — classified each cell according to its particular pattern of genetic activity, providing an important reference for studies of the marmoset brain. The team’s analysis, reported October 13, 2023, in the journal Science Advances, also reveals the profound influence of a cell’s developmental origin on its identity in the primate brain.

Regional variation in neocortical cell types and expression patterns. Image courtesy of the researchers.

Cellular diversity

Brains are made up of a tremendous diversity of cells. Neurons with dramatically different gene expression, shapes, and activities work together to process information and drive behavior, supported by an assortment of immune cells and other cell types. Scientists have only recently begun to catalog this cellular diversity—first in mice, and now in primates.

The marmoset is a quick-breeding monkey whose small brain has many features similar to those that enable higher cognitive processes in humans. Feng says neuroscientists have begun turning to marmosets as a research model in recent years because new gene-editing technology has made it easier to modify the animal’s DNA, so scientists can now study the genetic factors that shape marmosets’ brains and behavior. Feng, McCarroll, Krienen, and others hope these animals will offer insights into how primate brains handle complex decision-making, social interactions, and other higher brain functions that are difficult to study in mice. Likewise, Feng says, the monkeys will help scientists investigate the impact of genetic mutations associated with brain disorders and explore potential therapeutic strategies.

To make marmosets a practical model for neuroscience, scientists need to understand the fundamental composition of their brains. Feng and McCarroll’s team has begun that characterization with their cell census, which was supported by the National Institutes of Health’s Brain Research Through Advancing Innovative Neurotechnologies (BRAIN) Initiative Cell Census Network (BICCN), as part of a larger effort to map cellular features in the brains of mice, non-human primates, and humans. It is an essential first step in the creation of a comprehensive atlas charting the molecular, anatomical, and functional features of cells in the marmoset brain.

“Hopefully, when the BRAIN Initiative is complete, we will have a very complete map of these cells: where they are located, their abundance, their functional properties,” says Feng. “This not only gives you knowledge of the normal brain, but you can also look at what aspects change in diseases of the brain. So it’s a really powerful database.”

To catalog the diversity of cells in the marmoset brain, the researchers undertook an expansive analysis of the molecular contents of 2.4 million brain cells from adult marmosets. For each of these cells, they analyzed the complete set of RNA copies of its genes that the cell had produced, known as the cell’s transcriptome. Because the transcriptome captures patterns of genetic activity inside a cell, it is an indication of the cell’s function and can be used to assess cellular identity.
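
As a rough illustration of how transcriptomes become a cell-type catalog, the sketch below runs a standard single-cell clustering pipeline with the scanpy library: normalize each cell’s counts, keep the most variable genes, reduce dimensionality, and cluster cells with similar expression profiles. The file name and parameters are hypothetical, and this is not the team’s actual analysis code.

```python
import scanpy as sc

adata = sc.read_h5ad("marmoset_cells.h5ad")       # hypothetical input: cells x genes

sc.pp.normalize_total(adata, target_sum=1e4)      # correct for sequencing depth
sc.pp.log1p(adata)                                # variance-stabilizing transform
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
adata = adata[:, adata.var.highly_variable]       # keep the most informative genes

sc.pp.pca(adata, n_comps=50)                      # compress expression profiles
sc.pp.neighbors(adata)                            # k-nearest-neighbor graph
sc.tl.leiden(adata, key_added="cell_type_cluster")  # graph-based clustering

print(adata.obs["cell_type_cluster"].value_counts())  # candidate cell types
```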

Gene expression across neural populations. Image courtesy of the researchers.

The team’s analysis is one of the first to compare patterns of gene activity in cells from disparate regions of the marmoset brain. Doing so yielded surprising insights into the factors that shape brain cells’ transcriptomic identities. “What we found is that the cell’s transcriptome contains breadcrumbs that link back to the developmental origin of that cell type,” says Krienen, who led the cellular census as a postdoctoral researcher in McCarroll’s lab. That suggests that comparing cells’ transcriptomes can help scientists figure out how primate brains are assembled, which might lead to insights into neurodevelopmental disorders, she says.

The team also learned that a cell’s location in the brain was critical to shaping its transcriptomic identity. For example, Krienen says, “it turns out that an inhibitory neuron in the cortex doesn’t look anything like an inhibitory neuron in the thalamus, probably because they have distinct embryonic origins.”

Expanding the cell census

This new picture of cellular diversity in the marmoset brain will help researchers understand how genetic perturbations affect different brain cells and interpret the results of future experiments. Importantly, Krienen says, it could help researchers pinpoint exactly which cells are affected in brain disorders, and how the effects of a disease might localize to specific brain regions.

Krienen, McCarroll, and Feng went beyond their initial survey of cellular diversity with analyses of specific subsets of cells, charting the spatial distribution of interneurons in a key region of the prefrontal cortex and visualizing the shapes of several molecularly defined cell types. Now, they have begun expanding their cell census beyond the 18 brain structures represented in the reported work. As part of the BRAIN Initiative’s Brain Cell Atlas Network (BICAN), the team will profile cells throughout the entire adult marmoset brain, including multiple data types in their analysis. Building on cell census data, the NIH BRAIN Initiative has also launched BRAIN CONNECTS projects to map cellular connectivity in the brain.

This work was supported by the National Institutes of Health, the National Science Foundation, MathWorks, MIT, Harvard Medical School, the Broad Institute’s Stanley Center for Psychiatric Research, the Hock E. Tan and K. Lisa Yang Center for Autism Research at MIT, the Poitras Center for Psychiatric Disorders Research at MIT, and the McGovern Institute for Brain Research at MIT.

Practicing mindfulness with an app may improve children’s mental health

Many studies have found that practicing mindfulness — defined as cultivating an open-minded attention to the present moment — has benefits for children. Children who receive mindfulness training at school have demonstrated improvements in attention and behavior, as well as greater mental health.

When the Covid-19 pandemic began in 2020, sending millions of students home from school, a group of MIT researchers wondered if remote, app-based mindfulness practices could offer similar benefits. In a study conducted during 2020 and 2021, they report that children who used a mindfulness app at home for 40 days showed improvements in several aspects of mental health, including reductions in stress and negative emotions such as loneliness and fear.

The findings suggest that remote, app-based mindfulness interventions, which could potentially reach a larger number of children than school-based approaches, could offer mental health benefits, the researchers say.

“There is growing and compelling scientific evidence that mindfulness can support mental well-being and promote mental health in diverse children and adults,” says John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology, a professor of brain and cognitive sciences at MIT, and the senior author of the study, which appears this week in the journal Mindfulness.

Researchers in Gabrieli’s lab also recently reported that children who showed higher levels of mindfulness were more emotionally resilient to the negative impacts of the Covid-19 pandemic.

“To some extent, the impact of Covid is out of your control as an individual, but your ability to respond to it and to interpret it may be something that mindfulness can help with,” says MIT graduate student Isaac Treves, who is the lead author of both studies.

Pandemic resilience

After the pandemic began in early 2020, Gabrieli’s lab decided to investigate the effects of mindfulness on children who had to leave school and isolate from friends. In a study that appeared in the journal PLOS One in July, the researchers explored whether mindfulness could boost children’s resilience to negative emotions that the pandemic generated, such as frustration and loneliness.

Working with students between 8 and 10 years old, the researchers measured the children’s mindfulness using a standardized assessment that captures their tendency to blame themselves, ruminate on negative thoughts, and suppress their feelings.

The researchers also asked the children questions about how much the pandemic had affected different aspects of their lives, as well as questions designed to assess their levels of anxiety, depression, stress, and negative emotions such as worry or fear.

Among children who showed the highest levels of mindfulness, there was no correlation between how much the pandemic impacted them and negative feelings. However, in children with lower levels of mindfulness, there was a strong correlation between Covid-19 impact and negative emotions.
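
In analysis terms, a result like this comes from computing the impact-emotion correlation separately within mindfulness subgroups. A minimal Python sketch of that logic follows; the column names and the median split are hypothetical stand-ins, not the study’s actual variables or code.

```python
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("survey.csv")            # hypothetical file: one row per child

median_m = df["mindfulness"].median()
groups = {
    "higher mindfulness": df[df["mindfulness"] >= median_m],
    "lower mindfulness": df[df["mindfulness"] < median_m],
}
for label, g in groups.items():
    # Correlate pandemic impact with negative emotions within each subgroup
    r, p = pearsonr(g["covid_impact"], g["negative_emotions"])
    print(f"{label}: r = {r:.2f} (p = {p:.3f})")
```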

The children in this study did not receive any kind of mindfulness training, so their responses reflect their tendency to be mindful at the time they answered the researchers’ questions. The findings suggest that children with higher levels of mindfulness were less likely to get caught up in negative emotions or blame themselves for the negative things they experienced during the pandemic.

“This paper was our best attempt to look at mindfulness specifically in the context of Covid and to think about what are the factors that may help children adapt to the changing circumstances,” Treves says. “The takeaway is not that we shouldn’t worry about pandemics because we can just help the kids with mindfulness. People are able to be resilient when they’re in systems that support them, and in families that support them.”

Remote interventions

The researchers then built on that study by exploring whether a remote, app-based intervention could effectively increase mindfulness and improve mental health. Researchers in Gabrieli’s lab have previously shown that students who received mindfulness training in middle school showed better academic performance, received fewer suspensions, and reported less stress than those who did not receive the training.

For the new study, reported today in Mindfulness, the researchers worked with the same children they had recruited for the PLOS One study and divided them into three groups of about 80 students each.

One group received mindfulness training through an app created by Inner Explorer, a nonprofit that also develops school-based meditation programs. Those children were instructed to engage in mindfulness training five days a week, including relaxation exercises, breathing exercises, and other forms of meditation.

For comparison purposes, the other two groups were asked to use an app for listening to audiobooks (not related to mindfulness). One group was simply given the audiobook app and encouraged to listen at their own pace, while the other group also had weekly one-on-one virtual meetings with a facilitator.

At the beginning and end of the study, the researchers evaluated each participant’s levels of mindfulness, along with measures of mental health such as anxiety, stress, and depression. They found that in all three groups, mental health improved over the course of the eight-week study, and each group also showed increases in mindfulness and prosociality (engaging in helpful behavior).

Additionally, children in the mindfulness group showed some improvements that the other groups didn’t, including a more significant decrease in stress. Parents of children in the mindfulness group also reported that their children experienced more significant decreases in negative emotions such as anger and sadness. Students who practiced the mindfulness exercises the most days showed the greatest benefits.

The researchers were surprised to see that there were no significant differences in measures of anxiety and depression between the mindfulness group and audiobook groups; they hypothesize that may be because students who interacted with a facilitator in one of the audiobook groups also experienced beneficial effects on their mental health.

Overall, the findings suggest that there is value in remote, app-based mindfulness training, especially if children engage with the exercises consistently and receive encouragement from parents, the researchers say. Apps also offer the ability to reach a larger number of children than school-based programs, which require more training and resources.

“There are a lot of great ways to incorporate mindfulness training into schools, but in general, it’s more resource-intensive than having people download an app. So, in terms of pure scalability and cost-effectiveness, apps are useful,” Treves says. “Another good thing about apps is that the kids can go at their own pace and repeat practices that they like, so there’s more freedom of choice.”

The research was funded by the Chan Zuckerberg Initiative as part of the Reach Every Reader Project, the National Institutes of Health, and the National Science Foundation.

Twelve with MIT ties elected to the National Academy of Medicine for 2023

The National Academy of Medicine announced the election of 100 new members to its esteemed ranks in 2023, among them five MIT faculty members and seven additional affiliates.

MIT professors Daniel Anderson, Regina Barzilay, Guoping Feng, Darrell Irvine, and Morgan Sheng were among the new members. Justin Hanes PhD ’96, Said Ibrahim MBA ’16, and Jennifer West ’92, along with three former students in the Harvard-MIT Program in Health Sciences and Technology (HST) — Michael Chiang, Siddhartha Mukherjee, and Robert Vonderheide — were also elected, as was Yi Zhang, an associate member of The Broad Institute of MIT and Harvard.

Election to the academy is considered one of the highest honors in the fields of health and medicine and recognizes individuals who have demonstrated outstanding professional achievement and commitment to service, the academy noted in announcing the election of its new members.

MIT faculty

Daniel G. Anderson, professor in the Department of Chemical Engineering and the Institute for Medical Engineering and Science, was elected “for pioneering the area of non-viral gene therapy and cellular delivery. His work has resulted in fundamental scientific advances; over 500 papers, patents, and patent applications; and the creation of companies, products, and technologies that are now in the clinic.” Anderson is an affiliate of the Broad Institute of MIT and Harvard and of the Ragon Institute at MGH, MIT and Harvard.

Regina Barzilay, the School of Engineering Distinguished Professor for AI and Health within the Department of Electrical Engineering and Computer Science at MIT, was elected “for the development of machine learning tools that have been transformational for breast cancer screening and risk assessment, and for the development of molecular design tools broadly utilized for drug discovery.” Barzilay is the AI faculty lead within the MIT Abdul Latif Jameel Clinic for Machine Learning in Health and an affiliate of the Computer Science and Artificial Intelligence Laboratory and Institute for Medical Engineering and Science.

Guoping Feng, the associate director of the McGovern Institute for Brain Research, James W. (1963) and Patricia T. Professor of Neuroscience in MIT’s Department of Brain and Cognitive Sciences, and an affiliate of the Broad Institute of MIT and Harvard, was elected “for his breakthrough discoveries regarding the pathological mechanisms of neurodevelopmental and psychiatric disorders, providing foundational knowledge and molecular targets for developing effective therapeutics for mental illnesses such as OCD, ASD, and ADHD.”

Darrell J. Irvine ’00, the Underwood-Prescott Professor of Biological Engineering and Materials Science at MIT and a member of the Koch Institute for Integrative Cancer Research, was elected “for the development of novel methods for delivery of immunotherapies and vaccines for cancer and infectious diseases.”

Morgan Sheng, professor of neuroscience in the Department of Brain and Cognitive Sciences, with affiliations in the McGovern Institute and The Picower Institute for Learning and Memory at MIT, as well as the Broad Institute of MIT and Harvard, was elected “for transforming the understanding of excitatory synapses. He revealed the postsynaptic density as a protein network controlling synaptic signaling and morphology; established the paradigm of signaling complexes organized by PDZ scaffolds; and pioneered the concept of localized regulation of mitochondria, apoptosis, and complement for targeted synapse elimination.”

Additional MIT affiliates

Michael F. Chiang, a former student in the Harvard-MIT Program in Health Sciences and Technology (HST) who is now director of the National Eye Institute of the National Institutes of Health, was honored “for pioneering applications of biomedical informatics to ophthalmology in artificial intelligence, telehealth, pediatric retinal disease, electronic health records, and data science, including methodological and diagnostic advances in AI for pediatric retinopathy of prematurity, and for contributions to developing and implementing the largest ambulatory care registry in the United States.”

Justin Hanes PhD ’96, who earned his PhD from the MIT Department of Chemical Engineering and is now a professor at Johns Hopkins University, was honored “for pioneering discoveries and inventions of innovative drug delivery technologies, especially mucosal, ocular, and central nervous system drug delivery systems; and for international leadership in research and education at the interface of engineering, medicine, and entrepreneurship, leading to clinical translation of drug delivery technologies.”

Said Ibrahim MBA ’16, a graduate of the MIT Sloan School of Management who is now senior vice president and chair of the Department of Medicine at the Zucker School of Medicine at Hofstra/Northwell, was honored for influential “health services research on racial disparities in elective joint replacement that has provided a national model for advancing health equity research beyond the identification of inequities and toward their remediation, and for his research that has been leveraged to engage diverse and innovative emerging scholars.”

Siddhartha Mukherjee, a former student in HST who is now an associate professor of medicine at Columbia University School of Medicine, was honored “for contributing important research in the immunotherapy of myeloid malignancies, such as acute myeloid leukemia, for establishing international centers for immunotherapy for childhood cancers, and for the discovery of tissue-resident stem cells.”

Robert H. Vonderheide, a former student in HST who is now a professor and vice dean at the Perelman School of Medicine and vice president of cancer programs at the University of Pennsylvania Health System, was honored “for developing immune combination therapies for patients with pancreatic cancer by driving proof-of-concept from lab to clinic, then leading national, randomized clinical trials for therapy, maintenance, and interception; and for improving access of minority individuals to clinical trials while directing an NCI comprehensive cancer center.”

Jennifer West ’92, a graduate of the MIT Department of Chemical Engineering who is now a professor of biomedical engineering and dean of the School of Engineering and Applied Science at the University of Virginia at Charlottesville, was honored “for the invention, development, and translation of novel biomaterials including bioactive, photopolymerizable hydrogels and theranostic nanoparticles.”

Yi Zhang, associate member of the Broad Institute, was honored “for making fundamental contributions to the epigenetics field through systematic identification and characterization of chromatin modifying enzymes, including EZH2, JmjC, and Tet. His proof-of-principle work on EZH2 inhibitors led to the founding of Epizyme and eventual making of tazemetostat, a drug approved for epithelioid sarcoma and follicular lymphoma.”

“It is my honor to welcome this truly exceptional class of new members to the National Academy of Medicine,” said NAM President Victor J. Dzau. “Their contributions to health and medicine are unparalleled, and their leadership and expertise will be essential to helping the NAM tackle today’s urgent health challenges, inform the future of health care, and ensure health equity for the benefit of all around the globe.”

Thousands of programmable DNA-cutters found in algae, snails, and other organisms

A diverse set of species, from snails to algae to amoebas, make programmable DNA-cutting enzymes called Fanzors—and a new study from scientists at MIT’s McGovern Institute has identified thousands of them. Fanzors are RNA-guided enzymes that can be programmed to cut DNA at specific sites, much like the bacterial enzymes that power the widely used gene-editing system known as CRISPR. The newly recognized diversity of natural Fanzor enzymes, reported September 27, 2023, in the journal Science Advances, gives scientists an extensive set of programmable enzymes that might be adapted into new tools for research or medicine.

“RNA-guided biology is what lets you make programmable tools that are really easy to use. So the more we can find, the better,” says McGovern fellow Omar Abudayyeh, who led the research with McGovern fellow Jonathan Gootenberg.

CRISPR, an ancient bacterial defense system, has made it clear how useful RNA-guided enzymes can be when they are adapted for use in the lab. CRISPR-based genome editing tools developed by McGovern investigator Feng Zhang, Abudayyeh, Gootenberg and others have changed the way scientists modify DNA, accelerating research and enabling the development of many experimental gene therapies.

Researchers have since uncovered other RNA-guided enzymes throughout the bacterial world, many with features that make them valuable in the lab. The discovery of Fanzors, whose ability to cut DNA in an RNA-guided manner was reported by Zhang’s group earlier this year, opens a new frontier of RNA-guided biology. Fanzors were the first such enzymes to be found in eukaryotic organisms—a wide group of lifeforms, including plants, animals, and fungi, defined by the membrane-bound nucleus that holds each cell’s genetic material. (Bacteria, which lack nuclei, belong to a group known as prokaryotes.)

Predicted structural image of Fanzors. Image: Jonathan Gootenberg and Omar Abudayyeh

“People have been searching for interesting tools in prokaryotic systems for a long time, and I think that that has been incredibly fruitful,” says Gootenberg. “Eukaryotic systems are really just a whole new kind of playground to work in.”

One hope, Abudayyeh and Gootenberg say, is that enzymes that naturally evolved in eukaryotic organisms might be better suited to function safely and efficiently in the cells of other eukaryotic organisms, including humans. Zhang’s group has shown that Fanzor enzymes can be engineered to precisely cut specific DNA sequences in human cells. In the new work, Abudayyeh and Gootenberg discovered that some Fanzors can target DNA sequences in human cells even without optimization. “The fact that they work quite efficiently in mammalian cells was really fantastic to see,” Gootenberg says.

Prior to the current study, hundreds of Fanzors had been found among eukaryotic organisms. Through an extensive search of genetic databases led by lab member Justin Lim, Gootenberg and Abudayyeh’s team has now expanded the known diversity of these enzymes by an order of magnitude.

Among the more than 3,600 Fanzors that the team found in eukaryotes and the viruses that infect them, the researchers were able to identify five different families of the enzymes. By comparing these enzymes’ precise makeup, they found evidence of a long evolutionary history.

Fanzors likely evolved from RNA-guided DNA-cutting bacterial enzymes called TnpBs. In fact, it was Fanzors’ genetic similarities to these bacterial enzymes that first caught the attention of both Zhang’s group and Gootenberg and Abudayyeh’s team.

The evolutionary connections that Gootenberg and Abudayyeh traced suggest that these bacterial predecessors of Fanzors probably entered eukaryotic cells, initiating their evolution, more than once. Some were likely transmitted by viruses, while others may have been introduced by symbiotic bacteria. The research also suggests that after they were taken up by eukaryotes, the enzymes evolved features suited to their new environment, such as a signal that allows them to enter a cell nucleus, where they have access to DNA.

Through genetic and biochemical experiments led by graduate student Kaiyi Jiang, the team determined that Fanzors have evolved a DNA-cutting active site that is distinct from that of their bacterial predecessors. This seems to allow the enzyme to cut its target sequence more precisely than its ancestors: TnpB enzymes, when targeted to a sequence of DNA in a test tube, become activated and cut other sequences in the tube as well, whereas Fanzors lack this promiscuous activity. When the researchers used an RNA guide to direct the enzymes to cut specific sites in the genome of human cells, they found that certain Fanzors were able to cut these target sequences with about 10 to 20 percent efficiency.

With further research, Abudayyeh and Gootenberg hope that a variety of sophisticated genome editing tools can be developed from Fanzors. “It’s a new platform, and they have many capabilities,” says Gootenberg. “Opening up the whole eukaryotic world to these types of RNA-guided systems is going to give us a lot to work on,” Abudayyeh adds.

Four McGovern Investigators receive NIH BRAIN Initiative grants

In the human brain, 86 billion neurons form more than 100 trillion connections with other neurons at junctions called synapses. Scientists at the McGovern Institute are working with their collaborators to develop technologies to map these connections across the brain, from mice to humans.

Today, the National Institutes of Health (NIH) announced a new program to support research projects that have the potential to reveal an unprecedented and dynamic picture of the connected networks in the brain. Four of these NIH-funded research projects will take place in McGovern labs.

BRAIN Initiative

In 2013, the Obama administration announced the Brain Research Through Advancing Innovative Neurotechnologies® (BRAIN) Initiative, a public-private research effort to support the development and application of new technologies to understand brain function.

Today, the NIH announced the third large-scale project supported by the BRAIN Initiative, called BRAIN Initiative Connectivity Across Scales (BRAIN CONNECTS). The new project complements two previous large-scale projects, which together aim to transform neuroscience research by generating wiring diagrams that can span entire brains across multiple species. These detailed wiring diagrams can help uncover the logic of the brain’s neural code, leading to a better understanding of how this circuitry makes us who we are and how it could be rewired to treat brain diseases.

BRAIN CONNECTS at McGovern

The initial round of BRAIN CONNECTS awards will support researchers at more than 40 universities and research institutions across the globe with 11 grants totaling $150 million over five years. Four of these grants have been awarded to McGovern researchers Guoping Feng, Ila Fiete, Satra Ghosh, and Ian Wickersham, whose projects are outlined below:

BRAIN CONNECTS: Comprehensive regional projection map of marmoset with single axon and cell type resolution
Team: Guoping Feng (McGovern Institute, MIT), Partha Mitra (Cold Spring Harbor Laboratory), Xiao Wang (Broad Institute), Ian Wickersham (McGovern Institute, MIT)

Summary: This project will establish an integrated experimental-computational platform to create the first comprehensive brain-wide mesoscale connectivity map in a non-human primate (NHP), the common marmoset (Callithrix jacchus). It will do so by tracing axonal projections of RNA barcode-identified neurons brain-wide in the marmoset, utilizing a sequencing-based imaging method that also permits simultaneous transcriptomic cell typing of the identified neurons. This work will help bridge the gap between brain-wide mesoscale connectivity data available for the mouse from a decade of mapping efforts using modern techniques and the absence of comparable data in humans and NHPs.

BRAIN CONNECTS: A center for high-throughput integrative mouse connectomics
Team: Jeff Lichtman (Harvard University), Ila Fiete (McGovern Institute, MIT), Sebastian Seung (Princeton University), David Tank (Princeton University), Hongkui Zeng (Allen Institute), Viren Jain (Google), Greg Jeffries (Oxford University)

Summary: This project aims to produce a large-scale synapse-level brain map (connectome) that includes all the main areas of the mouse hippocampus. This region is of clinical interest because it is an essential part of the circuit underlying spatial navigation and memory and the earliest impairments and degeneration related to Alzheimer’s disease.

BRAIN CONNECTS: The center for Large-scale Imaging of Neural Circuits (LINC)
Team: Anastasia Yendiki (MGH), Satra Ghosh (McGovern, MIT), Suzanne Haber (University of Rochester), Elizabeth Hillman (Columbia University)

Summary: This project will generate connectional diagrams of the monkey and human brain at unprecedented resolutions. These diagrams will be linked both to the neuroanatomic literature and to in vivo neuroimaging techniques, bridging between the rigor of the former and the clinical relevance of the latter. The data to be generated by this project will advance our understanding of brain circuits that are implicated in motor and psychiatric disorders, and that are targeted by deep-brain stimulation to treat these disorders.

BRAIN CONNECTS: Mapping brain-wide connectivity of neuronal types using barcoded connectomics
Team: Xiaoyin Chen (Allen Institute), Ian Wickersham (McGovern Institute, MIT), Justus Kebschull (Johns Hopkins University)

Summary: This project aims to optimize and develop barcode sequencing-based neuroanatomical techniques to achieve brain-wide, high-throughput, highly multiplexed mapping of axonal projections and synaptic connectivity of neuronal types at cellular resolution in primate brains. The team will work together to apply these techniques to generate an unprecedented multi-resolution map of brain-wide projections and synaptic inputs of neurons in the macaque visual cortex at cellular resolution.

Re-imagining our theories of language

Over a decade ago, the neuroscientist Ev Fedorenko asked 48 English speakers to complete tasks like reading sentences, recalling information, solving math problems, and listening to music. As they did this, she scanned their brains using functional magnetic resonance imaging to see which circuits were activated. If, as linguists have proposed for decades, language is connected to thought in the human brain, then the language processing regions would be activated even during nonlinguistic tasks.

Fedorenko’s experiment, published in 2011 in the Proceedings of the National Academy of Sciences, showed that when it comes to arithmetic, musical processing, general working memory, and other nonlinguistic tasks, language regions of the human brain showed no response. Contrary to what many linguists have claimed, complex thought and language are separate things. One does not require the other. “We have this highly specialized place in the brain that doesn’t respond to other activities,” says Fedorenko, who is an associate professor at the Department of Brain and Cognitive Sciences (BCS) and the McGovern Institute for Brain Research. “It’s not true that thought critically needs language.”

The design of the experiment, using neuroscience to understand how language works, how it evolved, and its relation to other cognitive functions, is at the heart of Fedorenko’s research. She is part of a unique intellectual triad at MIT’s Department of BCS, along with her colleagues Roger Levy and Ted Gibson. (Gibson and Fedorenko have been married since 2007). Together they have engaged in a years-long collaboration and built a significant body of research focused on some of the biggest questions in linguistics and human cognition. While working in three independent labs — EvLab, TedLab, and the Computational Psycholinguistics Lab — the researchers are motivated by a shared fascination with the human mind and how language works in the brain. “We have a great deal of interaction and collaboration,” says Levy. “It’s a very broadly collaborative, intellectually rich and diverse landscape.”

Using combinations of computational modeling, psycholinguistic experimentation, behavioral data, brain imaging, and large naturalistic language datasets, the researchers also share an answer to a fundamental question: What is the purpose of language? Of all the possible answers to why we have language, perhaps the simplest and most obvious is communication. “Believe it or not,” says Ted Gibson, “that is not the standard answer.”

Gibson first came to MIT in 1993 and joined the faculty of the Linguistics Department in 1997. Recalling the experience today, he describes it as frustrating. The field of linguistics at that time was dominated by the ideas of Noam Chomsky, one of the founders of MIT’s Graduate Program in Linguistics, who has been called the father of modern linguistics. Chomsky’s “nativist” theories of language posited that the purpose of language is the articulation of thought and that language capacity is built in, in advance of any learning. But Gibson, with his training in math and computer science, felt that researchers had never satisfactorily tested these ideas. He believed that answering many outstanding questions about language required quantitative research, a departure from standard linguistic methodology. “There’s no reason to rely only on you and your friends, which is how linguistics has worked,” Gibson says. “The data you can get can be much broader if you crowdsource lots of people using experimental methods.” Chomsky’s ascendancy in linguistics presented Gibson with what he saw as a challenge and an opportunity. “I felt like I had to figure it out in detail and see if there was truth in these claims,” he says.

Three decades after he first joined MIT, Gibson believes that the collaborative research at BCS is persuasive and provocative, pointing to new ways of thinking about human culture and cognition. “Now we’re at a stage where it is not just arguments against. We have a lot of positive stuff saying what language is,” he explains. Levy adds: “I would say all three of us are of the view that communication plays a very important role in language learning and processing, but also in the structure of language itself.”

Levy points out that the three researchers completed PhDs in different subjects: Fedorenko in neuroscience, Gibson in computer science, Levy in linguistics. Yet for years before their paths finally converged at MIT, their shared interests in quantitative linguistic research led them to follow each other’s work closely and be influenced by it. The first collaboration between the three was in 2005 and focused on language processing in Russian relative clauses. Around that time, Gibson recalls, Levy was presenting what he describes as “lovely work” that was instrumental in helping him to understand the links between language structure and communication. “Communicative pressures drive the structures,” says Gibson. “Roger was crucial for that. He was the one helping me think about those things a long time ago.”

Levy’s lab is focused on the intersection of artificial intelligence, linguistics, and psychology, using natural language processing tools. “I try to use the tools that are afforded by mathematical and computer science approaches to language to formalize scientific hypotheses about language and the human mind and test those hypotheses,” he says.

Levy points to ongoing research between him and Gibson focused on language comprehension as an example of the benefits of collaboration. “One of the big questions is: When language understanding fails, why does it fail?” Together, the researchers have applied the concept of a “noisy channel,” first developed by the information theorist Claude Shannon in the 1950s, which says that information or messages are corrupted in transmission. “Language understanding unfolds over time, involving an ongoing integration of the past with the present,” says Levy. “Memory itself is an imperfect channel conveying the past from our brain a moment ago to our brain now in order to support successful language understanding.” Indeed, the richness of our linguistic environment, the experience of hundreds of millions of words by adulthood, may create a kind of statistical knowledge guiding our expectations, beliefs, predictions, and interpretations of linguistic meaning. “Statistical knowledge of language actually interacts with the constraints of our memory,” says Levy. “Our experience shapes our memory for language itself.”
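
The noisy-channel idea can be stated as Bayesian inference: the comprehender scores each candidate intended sentence by the product of its prior probability (statistical knowledge of language) and the probability that noise would corrupt it into what was actually perceived. Here is a toy Python sketch with made-up sentences and probabilities, purely to make the computation concrete.

```python
# P(intended | perceived) is proportional to P(perceived | intended) * P(intended)

prior = {  # P(intended): made-up statistical knowledge of language
    "the dog chased the cat": 0.7,
    "the cat chased the dog": 0.3,
}

def noise_likelihood(perceived: str, intended: str) -> float:
    """P(perceived | intended): crude noise model in which each
    differing word position costs a constant factor of 0.1."""
    diffs = sum(a != b for a, b in zip(perceived.split(), intended.split()))
    return 0.1 ** diffs

perceived = "the dog chased a cat"
scores = {s: prior[s] * noise_likelihood(perceived, s) for s in prior}
total = sum(scores.values())
for s, score in scores.items():
    print(f"P({s!r} | perceived) = {score / total:.3f}")
# The comprehender favors "the dog chased the cat": one noisy word
# is far more plausible than three.
```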

All three researchers say they share the belief that by following the evidence, they will eventually discover an even bigger and more complete story about language. “That’s how science goes,” says Fedorenko. “Ted trained me, along with Nancy Kanwisher, and both Ted and Roger are very data-driven. If the data is not giving you the answer you thought, you don’t just keep pushing your story. You think of new hypotheses. Almost everything I have done has been like that.” At times, Fedorenko’s research into parts of the brain’s language system has surprised her and forced her to abandon her hypotheses. “In a certain project I came in with a prior idea that there would be some separation between parts that cared about combinatorics versus word meanings,” she says, “but every little bit of the language system is sensitive to both. At some point, I was like, this is what the data is telling us, and we have to roll with it.”

The researchers’ work pointing to communication as the constitutive purpose of language opens new possibilities for probing and studying non-human language. The standard claim is that human language has a drastically more extensive lexicon than animal communication systems, which are said to lack grammar. “But many times, we don’t even know what other species are communicating,” says Gibson. “We say they can’t communicate, but we don’t know. We don’t speak their language.” Fedorenko hopes that more opportunities to make cross-species linguistic comparisons will open up. “Understanding where things are similar and where things diverge would be super useful,” she says.

Meanwhile, the potential applications of language research are far-reaching. One of Levy’s current research projects focuses on how people read, using machine learning algorithms informed by the psychology of eye movements to develop language proficiency tests. By tracking the eye movements of people who speak English as a second language while they read English texts, Levy can predict their English proficiency, an approach that could one day replace the Test of English as a Foreign Language. “It’s an implicit measure of language rather than a much more game-able test,” he says.
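By way of illustration, a minimal sketch of what such a pipeline might look like in Python, with entirely hypothetical gaze features (mean fixation duration, regression rate, word-skip rate) and simulated data; the article does not describe Levy’s actual features or models:

```python
# Illustrative sketch only: feature names and data here are hypothetical,
# not taken from Levy's research.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_readers = 200

# Hypothetical per-reader eye-movement summary features:
X = np.column_stack([
    rng.normal(230, 40, n_readers),      # mean fixation duration (ms)
    rng.uniform(0.05, 0.35, n_readers),  # proportion of regressive saccades
    rng.uniform(0.10, 0.50, n_readers),  # proportion of skipped words
])

# Simulated proficiency scores; in a real study these would come from a
# validated test, not from a made-up linear rule plus noise.
y = 90 - 0.1 * X[:, 0] - 60 * X[:, 1] + 20 * X[:, 2] + rng.normal(0, 5, n_readers)

# A simple regularized linear model mapping gaze features to proficiency,
# evaluated with cross-validation.
model = Ridge(alpha=1.0)
print(cross_val_score(model, X, y, cv=5, scoring="r2"))
```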

The researchers agree that some of the most exciting opportunities in the neuroscience of language lie with large language models, which open the door to new questions and new discoveries. “In the neuroscience of language, the kind of stories that we’ve been able to tell about how the brain does language were limited to verbal, descriptive hypotheses,” says Fedorenko. Computationally implemented models are now remarkably good at language and show some degree of alignment with the brain, she adds. Now, researchers can ask questions such as: What are the actual computations that cells perform to extract meaning from strings of words? “You can now use these models as tools to get insights into how humans might be processing language,” she says. “And you can take the models apart in ways you can’t take apart the brain.”
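One common way such models are used as tools is to compute per-word “surprisal,” the negative log probability of each word in context, a quantity psycholinguists relate to human reading difficulty. A minimal sketch using the off-the-shelf GPT-2 model from the Hugging Face transformers library (an illustration of the general technique, not the researchers’ own pipeline):

```python
# Illustrative sketch: per-token surprisal from a pretrained language model.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

sentence = "The dog that the cat chased ran away."
ids = tokenizer(sentence, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits

# Each position's logits predict the *next* token, so shift by one;
# the first token has no context and hence no surprisal here.
log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
token_surprisal = -log_probs.gather(1, ids[0, 1:].unsqueeze(1)).squeeze(1)

for tok, s in zip(tokenizer.convert_ids_to_tokens(ids[0, 1:].tolist()),
                  token_surprisal):
    print(f"{tok!r}: {s.item():.2f} nats")
```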

New Spanish-language neuroscience podcast celebrates its third season

Sylvia Abente, a clinical neurologist at the Universidad Nacional de Asunción in Paraguay, investigates the range of symptoms that are characteristic of epilepsy. She works with Paraguay’s Indigenous peoples, and her command of Spanish and Guaraní, Paraguay’s two official languages, allows her to help patients find the words to describe their epilepsy symptoms so that she can treat them.

Juan Carlos Caicedo Mera, a neuroscientist at the Universidad Externado de Colombia, uses rodent models to investigate the neurobiological effects of early-life stress. He has played a decisive role in raising public awareness of the biological and behavioral effects of corporal punishment at early ages, which has led to policy changes aimed at reducing its prevalence as a cultural practice in Colombia.

Jessica Chomik-Morales (right) interviews Pedro Maldonado at the Biomedical Neuroscience Institute of Chile at the University of Chile. Photo: Jessica Chomik-Morales

These are just two of the 33 neuroscientists from seven Latin American countries whom Jessica Chomik-Morales interviewed over 37 days for the third season of her Spanish-language podcast, “Mi Última Neurona,” which premieres Sept. 18 at 5 p.m. on YouTube. Each episode runs between 45 and 90 minutes.

“I wanted to highlight their stories to dispel the misconception that world-class science can only be done in the United States and Europe,” says Chomik-Morales, “or that it is out of reach in South America because of financial and other barriers.”

Chomik-Morales, a first-generation college graduate who grew up in Asunción, Paraguay, and Boca Raton, Florida, is now a post-baccalaureate research scholar at MIT. Here she works with Laura Schulz, professor of cognitive science, and Nancy Kanwisher, McGovern Institute investigator and the Walter A. Rosenblith Professor of Cognitive Neuroscience, using functional brain imaging to investigate how the brain explains the past, predicts the future, and intervenes in the present through causal reasoning.

“The podcast is aimed at the general public and is suitable for all ages,” she says. “It explains neuroscience in an accessible way, both to inspire young people to see that they, too, can become scientists and to showcase the wide variety of research being done in listeners’ home countries.”

The journey of a lifetime

“Mi Última Neurona” began as an idea in 2021 and quickly grew into a series of conversations with prominent Hispanic scientists, among them L. Rafael Reif, the Venezuelan-American electrical engineer who served as MIT’s 17th president.

Jessica Chomik-Morales (left) interviews the 17th president of MIT, L. Rafael Reif (right), for her podcast while Héctor De Jesús-Cortés (center) adjusts the microphone. Photo: Steph Stevens

Building on the professional relationships she established during seasons one and two, Chomik-Morales broadened her vision and assembled a list of prospective guests across Latin America for the third season. With the help of her scientific advisor, Héctor De Jesús-Cortés, a Puerto Rican postdoctoral researcher at MIT, and financial support from the McGovern Institute, the Picower Institute for Learning and Memory, the Department of Brain and Cognitive Sciences, and MIT International Science and Technology Initiatives, Chomik-Morales arranged interviews with scientists in Mexico, Peru, Colombia, Chile, Argentina, Uruguay, and Paraguay during the summer of 2023.

Flying to a new destination every four or five days, and picking up additional participants by word of mouth from one leg of the trip to the next, Chomik-Morales covered more than 10,000 miles and gathered 33 stories for her third season. The scientists’ specialties span a wide range of topics, from the social aspects of sleep-wake cycles to mood and personality disorders, and from linguistics and language in the brain to computational modeling as a research tool.

“If someone studies depression and anxiety, I want to hear their views on different therapies, including drugs and even microdosing with hallucinogens,” says Chomik-Morales. “These are the things people talk about.” She is not afraid to take on sensitive topics, such as the relationship between hormones and sexual orientation, because “it’s important for people to hear experts talk about these things,” she says.

The tone of the interviews ranges from casual (“the researcher and I are like friends,” she says) to pedagogical (“professor to student”). What never changes is the accessibility (technical jargon is avoided) and the opening and closing questions of every interview. To begin: “How did you get here? What drew you to neuroscience?” To end: “What advice would you give to a young Latino student interested in science, technology, engineering, and mathematics (STEM)?”

She lets her listeners’ frame of reference be her guide. “If I didn’t understand something, or thought it could be explained better, I would say, ‘Let’s pause. What does this word mean?’” even if she already knew the definition. She gives the example of “MEG” (magnetoencephalography): the measurement of the magnetic fields generated by the electrical activity of neurons, often combined with MRI to produce magnetic source images. To ground the concept, she would ask: “How does it work? Does this type of scan harm the patient?”

Paving the way for global networks

Chomik-Morales’s equipment was minimal: three Yeti microphones and a Canon video camera connected to her laptop. Interviews took place in classrooms, university offices, researchers’ homes, and even outdoors, since no soundproof studios were available. She has been working with sound engineer David Samuel Torres, of Puerto Rico, to get clearer audio.

No technological limitation could obscure what the project means to the participating scientists.

Jessica Chomik-Morales (left) interviews Josefina Cruzat (right) at Universidad Adolfo Ibáñez in Chile. Photo: Jessica Chomik-Morales

“‘Mi Última Neurona’ showcases our diverse knowledge on a global stage, providing a more accurate portrait of the scientific landscape in Latin America,” says Constanza Baquedano, a native of Chile. “It is a step toward creating more inclusive representation in science.” Baquedano is an assistant professor of psychology at Universidad Adolfo Ibáñez, where she uses electrophysiology, electroencephalographic recordings, and behavioral measures to investigate meditation and other contemplative states. “I was eager to be part of a project that seeks to give recognition to our shared experiences as Latin American women in neuroscience.”

“Understanding the challenges and opportunities of neuroscientists working in Latin America is paramount,” says Agustín Ibáñez, professor and director of the Latin American Brain Health Institute (BrainLat) at Universidad Adolfo Ibáñez in Chile. “This region, characterized by significant inequalities that affect brain health, also presents unique challenges for neuroscience,” says Ibáñez, whose main interest lies at the intersection of social, cognitive, and affective neuroscience. “By focusing on Latin America, the podcast surfaces the stories that most media rarely tell. That builds bridges and paves the way for global networks.”

For her part, Chomik-Morales is confident that her podcast will build a large following in Latin America. “I’m so grateful for MIT’s generous sponsorship,” she says. “This is the most rewarding project I’ve ever done.”
