Face-specific brain area responds to faces even in people born blind

More than 20 years ago, neuroscientist Nancy Kanwisher and others discovered that a small section of the brain located near the base of the skull responds much more strongly to faces than to other objects we see. This area, known as the fusiform face area (FFA), is believed to be specialized for identifying faces.

Now, in a surprising new finding, Kanwisher and her colleagues have shown that this same region also becomes active in people who have been blind since birth, when they touch a three-dimensional model of a face with their hands. The finding suggests that this area does not require visual experience to develop a preference for faces.

“That doesn’t mean that visual input doesn’t play a role in sighted subjects — it probably does,” she says. “What we showed here is that visual input is not necessary to develop this particular patch, in the same location, with the same selectivity for faces. That was pretty astonishing.”

Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience and a member of MIT’s McGovern Institute for Brain Research, is the senior author of the study. N. Apurva Ratan Murty, an MIT postdoc, is the lead author of the study, which appears this week in the Proceedings of the National Academy of Sciences. Other authors of the paper include Santani Teng, a former MIT postdoc; Aude Oliva, a senior research scientist, co-director of the MIT Quest for Intelligence, and MIT director of the MIT-IBM Watson AI Lab; and David Beeler and Anna Mynick, both former lab technicians.

Selective for faces

Studying people who were born blind allowed the researchers to tackle longstanding questions regarding how specialization arises in the brain. In this case, they were specifically investigating face perception, but the same unanswered questions apply to many other aspects of human cognition, Kanwisher says.

“This is part of a broader question that scientists and philosophers have been asking themselves for hundreds of years, about where the structure of the mind and brain comes from,” she says. “To what extent are we products of experience, and to what extent do we have built-in structure? This is a version of that question asking about the particular role of visual experience in constructing the face area.”

The new work builds on a 2017 study from researchers in Belgium. In that study, congenitally blind subjects were scanned with functional magnetic resonance imaging (fMRI) as they listened to a variety of sounds, some related to faces (such as laughing or chewing), and others not. That study found higher responses in the vicinity of the FFA to face-related sounds than to sounds such as a ball bouncing or hands clapping.

In the new study, the MIT team wanted to use tactile experience to measure more directly how the brains of blind people respond to faces. They created a ring of 3D-printed objects that included faces, hands, chairs, and mazes, and rotated them so that the subject could handle each one while in the fMRI scanner.

They began with normally sighted subjects and found that when these subjects handled the 3D objects, a small area corresponding to the location of the FFA was preferentially active when they touched the faces, compared to when they touched other objects. This activity, which was weaker than the signal produced when sighted subjects looked at faces, was not surprising to see, Kanwisher says.

“We know that people engage in visual imagery, and we know from prior studies that visual imagery can activate the FFA. So the fact that you see the response with touch in a sighted person is not shocking because they’re visually imagining what they’re feeling,” she says.

The researchers then performed the same experiments, using tactile input only, with 15 subjects who reported being blind since birth. To their surprise, they found that the brain showed face-specific activity in the same area as the sighted subjects, at levels similar to when sighted people handled the 3D-printed faces.

“When we saw it in the first few subjects, it was really shocking, because no one had seen individual face-specific activations in the fusiform gyrus in blind subjects previously,” Murty says.

Patterns of connection

The researchers also explored several hypotheses that have been put forward to explain why face-selectivity always seems to develop in the same region of the brain. One prominent hypothesis suggests that the FFA develops face-selectivity because it receives visual input from the fovea (the center of the retina), and we tend to focus on faces at the center of our visual field. However, since this region developed in blind people with no foveal input, the new findings do not support this idea.

Another hypothesis is that the FFA has a natural preference for curved shapes. To test that idea, the researchers performed another set of experiments in which they asked the blind subjects to handle a variety of 3D-printed shapes, including cubes, spheres, and eggs. They found that the FFA did not show any preference for the curved objects over the cube-shaped objects.

The researchers did find evidence for a third hypothesis, which is that face selectivity arises in the FFA because of its connections to other parts of the brain. They were able to measure the FFA’s “connectivity fingerprint” — a measure of the correlation between activity in the FFA and activity in other parts of the brain — in both blind and sighted subjects.

They then used the data from each group to train a computer model to predict the exact location of the brain’s selective response to faces based on the FFA connectivity fingerprint. They found that when the model was trained on data from sighted subjects, it could accurately predict the results in blind subjects, and vice versa. They also found evidence that connections to the frontal and parietal lobes of the brain, which are involved in high-level processing of sensory information, may be the most important in determining the role of the FFA.
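In computational terms, this step is a regression from connectivity to selectivity. Below is a minimal sketch of that idea; the data shapes, the use of ridge regression, and the random placeholder arrays are all assumptions for illustration, not the paper’s actual pipeline.

```python
# Hypothetical sketch of the "connectivity fingerprint" analysis.
# Assumed inputs: per-voxel correlations with a set of brain regions
# (the fingerprint) and a per-voxel face-selectivity score.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_voxels, n_regions = 5000, 90                    # illustrative sizes
fp_sighted = rng.normal(size=(n_voxels, n_regions))
sel_sighted = rng.normal(size=n_voxels)           # stand-in for real maps
fp_blind = rng.normal(size=(n_voxels, n_regions))

# Train on one group's data ...
model = Ridge(alpha=1.0).fit(fp_sighted, sel_sighted)

# ... and predict where face selectivity should appear in the other group.
sel_blind_predicted = model.predict(fp_blind)
# Agreement between predicted and observed maps, in both directions, is
# the evidence that connectivity predicts where face selectivity lands.
```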

“It’s suggestive of this very interesting story that the brain wires itself up in development not just by taking perceptual information and doing statistics on the input and allocating patches of brain, according to some kind of broadly agnostic statistical procedure,” Kanwisher says. “Rather, there are endogenous constraints in the brain present at birth, in this case, in the form of connections to higher-level brain regions, and these connections are perhaps playing a causal role in its development.”

The research was funded by the National Institutes of Health Shared Instrumentation Grant to the Athinoula Martinos Center at MIT, a National Eye Institute Training Grant, the Smith-Kettlewell Eye Research Institute’s Rehabilitation Engineering Research Center, an Office of Naval Research Vannevar Bush Faculty Fellowship, an NIH Pioneer Award, and a National Science Foundation Science and Technology Center Grant.

Full paper at PNAS

Key brain region was “recycled” as humans developed the ability to read

Humans began to develop systems of reading and writing only within the past few thousand years. Our reading abilities set us apart from other animal species, but a few thousand years is much too short a timeframe for our brains to have evolved new areas specifically devoted to reading.

To account for the development of this skill, some scientists have hypothesized that parts of the brain that originally evolved for other purposes have been “recycled” for reading. As one example, they suggest that a part of the visual system that is specialized to perform object recognition has been repurposed for a key component of reading called orthographic processing — the ability to recognize written letters and words.

A new study from MIT neuroscientists offers evidence for this hypothesis. The findings suggest that even in nonhuman primates, who do not know how to read, a part of the brain called the inferotemporal (IT) cortex is capable of performing tasks such as distinguishing words from nonsense words, or picking out specific letters from a word.

“This work has opened up a potential linkage between our rapidly developing understanding of the neural mechanisms of visual processing and an important primate behavior — human reading,” says James DiCarlo, the head of MIT’s Department of Brain and Cognitive Sciences, an investigator in the McGovern Institute for Brain Research and the Center for Brains, Minds, and Machines, and the senior author of the study.

Rishi Rajalingham, an MIT postdoc, is the lead author of the study, which appears in Nature Communications. Other MIT authors are postdoc Kohitij Kar and technical associate Sachi Sanghavi. The research team also includes Stanislas Dehaene, a professor of experimental cognitive psychology at the Collège de France.

Word recognition

Reading is a complex process that requires recognizing words, assigning meaning to those words, and associating words with their corresponding sounds. These functions are believed to be spread out over different parts of the human brain.

Functional magnetic resonance imaging (fMRI) studies have identified a region called the visual word form area (VWFA) that lights up when the brain processes a written word. This region is involved in the orthographic stage: It discriminates words from jumbled strings of letters or words from unknown alphabets. The VWFA is located in the IT cortex, a part of the visual cortex that is also responsible for identifying objects.

DiCarlo and Dehaene became interested in studying the neural mechanisms behind word recognition after cognitive psychologists in France reported that baboons could learn to discriminate words from nonwords, in a study that appeared in Science in 2012.

Using fMRI, Dehaene’s lab has previously found that parts of the IT cortex that respond to objects and faces become highly specialized for recognizing written words once people learn to read.

“However, given the limitations of human imaging methods, it has been challenging to characterize these representations at the resolution of individual neurons, and to quantitatively test if and how these representations might be reused to support orthographic processing,” Dehaene says. “These findings inspired us to ask if nonhuman primates could provide a unique opportunity to investigate the neuronal mechanisms underlying orthographic processing.”

The researchers hypothesized that if parts of the primate brain are predisposed to process text, they might be able to find patterns reflecting that in the neural activity of nonhuman primates as they simply look at words.

To test that idea, the researchers recorded neural activity from about 500 neural sites across the IT cortex of macaques as they looked at about 2,000 strings of letters, some of which were English words and some of which were nonsensical strings of letters.

“The efficiency of this methodology is that you don’t need to train animals to do anything,” Rajalingham says. “What you do is just record these patterns of neural activity as you flash an image in front of the animal.”

The researchers then fed that neural data into a simple computer model called a linear classifier. This model learns to combine the inputs from each of the 500 neural sites to predict whether the string of letters that provoked that activity pattern was a word or not. While the animal itself is not performing this task, the model acts as a “stand-in” that uses the neural data to generate a behavior, Rajalingham says.

Using that neural data, the model was able to generate accurate predictions for many orthographic tasks, including distinguishing words from nonwords and determining whether a particular letter is present in a string of letters. The model was about 70 percent accurate at distinguishing words from nonwords, which is very similar to the rate reported in the 2012 Science study with baboons. Furthermore, the patterns of errors made by the model were similar to those made by the animals.
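The decoder itself is a standard linear readout. A minimal sketch, assuming random placeholder arrays in place of the recorded responses (so the printed accuracy will sit near chance; with the real IT data the paper reports about 70 percent):

```python
# Linear "stand-in" decoder: predict word vs. nonword from IT responses.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_strings, n_sites = 2000, 500
X = rng.normal(size=(n_strings, n_sites))  # response of each site per string
y = rng.integers(0, 2, size=n_strings)     # 1 = English word, 0 = nonword

clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, X, y, cv=5).mean()   # held-out accuracy
print(f"word/nonword accuracy: {acc:.2f}")      # ~0.50 on random data
```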

Neuronal recycling

The researchers also recorded neural activity from V4, a part of the visual cortex that feeds into the IT cortex. When they fed the V4 activity patterns into the linear classifier model, its predictions of human and baboon performance on the orthographic processing tasks were markedly worse than those based on IT activity.

The findings suggest that the IT cortex is particularly well-suited to be repurposed for skills that are needed for reading, and they support the hypothesis that some of the mechanisms of reading are built upon highly evolved mechanisms for object recognition, the researchers say.

The researchers now plan to train animals to perform orthographic tasks and measure how their neural activity changes as they learn the tasks.

The research was funded by the Simons Foundation and the U.S. Office of Naval Research.

Full paper at Nature Communications

Ila Fiete studies how the brain performs complex computations

While doing a postdoc about 15 years ago, Ila Fiete began searching for faculty jobs in computational neuroscience — a field that uses mathematical tools to investigate brain function. At that time, however, there were no advertised positions in theoretical or computational neuroscience in the United States.

“It wasn’t really a field,” she recalls. “That has changed completely, and [now] there are 15 to 20 openings advertised per year.” She ended up finding a position in the Center for Learning and Memory at the University of Texas at Austin, which, along with a small handful of universities including MIT, was open to neurobiologists with a computational background.

Computation is the cornerstone of Fiete’s research at MIT’s McGovern Institute for Brain Research, where she has been a faculty member since 2018. Using computational and mathematical techniques, she studies how the brain encodes information in ways that enable cognitive tasks such as learning, memory, and reasoning about our surroundings.

One major research area in Fiete’s lab is how the brain is able to continuously compute the body’s position in space and make constant adjustments to that estimate as we move about.

“When we walk through the world, we can close our eyes and still have a pretty good estimate of where we are,” she says. “This involves being able to update our estimate based on our sense of self-motion. There are also many computations in the brain that involve moving through abstract or mental rather than physical space, and integrating velocity signals of some variety or another. Some of the same ideas and even circuits for spatial navigation might be involved in navigating through these mental spaces.”
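In computational terms, this is path integration: the position estimate is a running integral of velocity. A toy sketch (all numbers illustrative):

```python
# Dead-reckoning a 2D position from self-motion signals alone.
import numpy as np

dt = 0.01                                             # seconds per time step
t = np.arange(0, 10, dt)
velocity = np.stack([np.cos(t), np.sin(t)], axis=1)   # known 2D velocity

position = np.cumsum(velocity * dt, axis=0)           # integrate v(t) -> x(t)

# With noisy velocity estimates, error accumulates over time, which is
# one reason the brain must keep correcting its internal estimate.
noise = np.random.default_rng(2).normal(scale=0.1, size=velocity.shape)
drifted = np.cumsum((velocity + noise) * dt, axis=0)
print(f"drift after 10 s: {np.linalg.norm(drifted[-1] - position[-1]):.2f}")
```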

No better fit

Fiete spent her childhood between Mumbai, India, and the United States, where her mathematician father held a series of visiting or permanent appointments at the Institute for Advanced Study in Princeton, NJ, the University of California at Berkeley, and the University of Michigan at Ann Arbor.

In India, Fiete’s father did research at the Tata Institute of Fundamental Research, and she grew up spending time with many other children of academics. She was always interested in biology, but also enjoyed math, following in her father’s footsteps.

“My father was not a hands-on parent, wanting to teach me a lot of mathematics, or even asking me about how my math schoolwork was going, but the influence was definitely there. There’s a certain aesthetic to thinking mathematically, which I absorbed very indirectly,” she says. “My parents did not push me into academics, but I couldn’t help but be influenced by the environment.”

She spent her last two years of high school in Ann Arbor and then went to the University of Michigan, where she majored in math and physics. While there, she worked on undergraduate research projects, including two summer stints at Indiana University and the University of Virginia, which gave her firsthand experience in physics research. Those projects covered a range of topics, including proton radiation therapy, the magnetic properties of single crystal materials, and low-temperature physics.

“Those three experiences are what really made me sure that I wanted to go into academics,” Fiete says. “It definitely seemed like the path that I knew the best, and I think it also best suited my temperament. Even now, with more exposure to other fields, I cannot think of a better fit.”

Although she was still interested in biology, she took only one course in the subject in college, holding back because she did not know how to marry quantitative approaches with the biological sciences. She began her graduate studies at Harvard University planning to study low-temperature physics, but while there, she decided to start exploring quantitative classes in biology. One of those was a systems biology course taught by then-MIT professor Sebastian Seung, which transformed her career trajectory.

“It was truly inspiring,” she recalls. “Thinking mathematically about interacting systems in biology was really exciting. It was really my first introduction to systems biology, and it had me hooked immediately.”

She ended up doing most of her PhD research in Seung’s lab at MIT, where she studied how the brain uses incoming signals of the velocity of head movement to control eye position. For example, if we want to keep our gaze fixed on a particular location while our head is moving, the brain must continuously calculate and adjust the amount of tension needed in the muscles surrounding the eyes, to compensate for the movement of the head.

“Bizarre” cells

After earning her PhD, Fiete and her husband, a theoretical physicist, went to the Kavli Institute for Theoretical Physics at the University of California at Santa Barbara, where they each held fellowships for independent research. While there, Fiete began working on a research topic that she still studies today — grid cells. These cells, located in the entorhinal cortex of the brain, enable us to navigate our surroundings by helping the brain to create a neural representation of space.

Midway through her time there, she learned of a new discovery: when a rat moves across an open room, a grid cell in its brain fires at many different locations, arranged geometrically in a regular pattern of repeating triangles. Together, a population of grid cells forms a lattice of triangles representing the entire room. These cells have also been found in the brains of various other mammals, including humans.

“It’s amazing. It’s this very crystalline response,” Fiete says. “When I read about that, I fell out of my chair. At that point I knew this was something bizarre that would generate so many questions about development, function, and brain circuitry that could be studied computationally.”

One question Fiete and others have investigated is why the brain needs grid cells at all, since it also has so-called place cells that each fire in one specific location in the environment. A possible explanation that Fiete has explored is that grid cells of different scales, working together, can represent a vast number of possible positions in space and also multiple dimensions of space.

“If you have a few cells that can parsimoniously generate a very large coding space, then you can afford to not use most of that coding space,” she says. “You can afford to waste most of it, which means you can separate things out very well, in which case it becomes not so susceptible to noise.”
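The arithmetic behind that claim can be illustrated with a toy modular code: each grid module reports position only modulo its own period, yet together the modules disambiguate positions over the least common multiple of the periods. The periods below are made-up values, not measured grid scales:

```python
# Why a few grid modules can encode an enormous range of positions.
from math import lcm

periods = [31, 37, 41]           # hypothetical grid periods (cm)
capacity = lcm(*periods)         # range over which the joint code is unique
print(f"combined unambiguous range: {capacity} cm")   # 47027 cm

def grid_code(x):
    """The joint code for position x: phase within each module."""
    return tuple(x % p for p in periods)

assert grid_code(5) != grid_code(6)              # nearby positions differ
assert grid_code(5) == grid_code(5 + capacity)   # repeats only after the LCM
```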

Since returning to MIT, she has also pursued a research theme related to her PhD thesis — how the brain maintains neural representations of where the head is located in space. In a paper published last year, she showed that the brain generates a one-dimensional ring of neural activity that acts as a compass, allowing the brain to calculate the current direction of the head relative to the external world.

Her lab also studies cognitive flexibility — the brain’s ability to perform so many different types of cognitive tasks.

“How it is that we can repurpose the same circuits and flexibly use them to solve many different problems, and what are the neural codes that are amenable to that kind of reuse?” she says. “We’re also investigating the principles that allow the brain to hook multiple circuits together to solve new problems without a lot of reconfiguration.”

Looking into the black box of deep learning networks

Deep learning systems are revolutionizing technology around us, from voice recognition that pairs you with your phone to autonomous vehicles that are increasingly able to see and recognize obstacles ahead. But much of this success involves trial and error when it comes to the deep learning networks themselves. A group of MIT researchers recently reviewed their contributions to a better theoretical understanding of deep learning networks, providing direction for the field moving forward.

“Deep learning was in some ways an accidental discovery,” explains Tomaso Poggio, investigator at the McGovern Institute for Brain Research, director of the Center for Brains, Minds, and Machines (CBMM), and the Eugene McDermott Professor in Brain and Cognitive Sciences. “We still do not understand why it works. A theoretical framework is taking form, and I believe that we are now close to a satisfactory theory. It is time to stand back and review recent insights.”

Climbing data mountains

Our current era is marked by a superabundance of data — data from inexpensive sensors of all types, text, the internet, and large amounts of genomic data being generated in the life sciences. Computers nowadays ingest these multidimensional datasets, creating a set of problems dubbed the “curse of dimensionality” by the late mathematician Richard Bellman.

One of these problems is that representing a smooth, high-dimensional function requires an astronomically large number of parameters. We know that deep neural networks are particularly good at learning how to represent, or approximate, such complex data, but why? Understanding why could potentially help advance deep learning applications.
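One classical way to quantify this (a textbook-style bound consistent with the review’s setup, not a new result): approximating a generic function of $d$ variables with $m$ bounded derivatives to accuracy $\varepsilon$ requires on the order of

$$N = O\!\left(\varepsilon^{-d/m}\right)$$

parameters. With $\varepsilon = 0.1$, $m = 1$, and $d = 100$ inputs, that is roughly $10^{100}$ parameters, astronomically large in exactly the sense above.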

“Deep learning is like electricity after Volta discovered the battery, but before Maxwell,” explains Poggio.

“Useful applications were certainly possible after Volta, but it was Maxwell’s theory of electromagnetism, this deeper understanding that then opened the way to the radio, the TV, the radar, the transistor, the computers, and the internet,” says Poggio, who is the founding scientific advisor of The Core, MIT Quest for Intelligence, and an investigator in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT.

The theoretical treatment by Poggio, Andrzej Banburski, and Qianli Liao points to why deep learning might overcome data problems such as “the curse of dimensionality.” Their approach starts with the observation that many natural structures are hierarchical. To model the growth and development of a tree doesn’t require that we specify the location of every twig. Instead, a model can use local rules to drive branching hierarchically. The primate visual system appears to do something similar when processing complex data. When we look at natural images — including trees, cats, and faces — the brain successively integrates local image patches, then small collections of patches, and then collections of collections of patches.

“The physical world is compositional — in other words, composed of many local physical interactions,” explains Qianli Liao, an author of the study, and a graduate student in the Department of Electrical Engineering and Computer Science and a member of the CBMM. “This goes beyond images. Language and our thoughts are compositional, and even our nervous system is compositional in terms of how neurons connect with each other. Our review explains theoretically why deep networks are so good at representing this complexity.”

The intuition is that a hierarchical neural network should be better at approximating a compositional function than a single “layer” of neurons, even if the total number of neurons is the same. The technical part of their work identifies what “better at approximating” means and proves that the intuition is correct.
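To give the flavor of those bounds (paraphrased from this line of theory; the review’s precise statements carry additional technical conditions): for a target function of $d$ variables built compositionally from two-variable constituents of smoothness $m$,

$$N_{\text{shallow}} = O\!\left(\varepsilon^{-d/m}\right), \qquad N_{\text{deep}} = O\!\left((d-1)\,\varepsilon^{-2/m}\right),$$

so a deep network whose architecture mirrors the compositional structure avoids the exponential dependence on the input dimension $d$ that a shallow network incurs.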

Generalization puzzle

There is a second puzzle about what is sometimes called the unreasonable effectiveness of deep networks. Deep network models often have far more parameters than data to fit them, despite the mountains of data we produce these days. This situation ought to lead to what is called “overfitting,” where your current data fit the model well, but any new data fit the model terribly. This is dubbed poor generalization in conventional models. The conventional solution is to constrain some aspect of the fitting procedure. However, deep networks do not seem to require this constraint. Poggio and his colleagues prove that, in many cases, the process of training a deep network implicitly “regularizes” the solution, providing constraints.
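The phenomenon is visible even in the simplest overparameterized model. A sketch, assuming plain gradient descent on linear least squares as a stand-in for the deep networks the authors analyze: started from zero, gradient descent converges to the minimum-norm solution that fits the data, a constraint no one explicitly imposed.

```python
# Implicit regularization in an overparameterized linear model.
import numpy as np

rng = np.random.default_rng(3)
n, d = 20, 100                        # more parameters than data points
X, y = rng.normal(size=(n, d)), rng.normal(size=n)

w = np.zeros(d)                       # start at zero
for _ in range(50_000):               # plain gradient descent, squared loss
    w -= 0.01 * X.T @ (X @ w - y) / n

w_min_norm = np.linalg.pinv(X) @ y    # minimum-norm interpolating solution
print(np.allclose(w, w_min_norm, atol=1e-3))   # True: GD found it on its own
```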

The work has a number of implications going forward. Though deep learning is actively being applied in the world, this has so far occurred without a comprehensive underlying theory. A theory of deep learning that explains why and how deep networks work, and what their limitations are, will likely allow the development of even more powerful learning approaches.

“In the long term, the ability to develop and build better intelligent machines will be essential to any technology-based economy,” explains Poggio. “After all, even in its current — still highly imperfect — state, deep learning is impacting, or about to impact, just about every aspect of our society and life.”

Mapping the brain’s sensory gatekeeper

Many people with autism experience sensory hypersensitivity, attention deficits, and sleep disruption. One brain region that has been implicated in these symptoms is the thalamic reticular nucleus (TRN), which is believed to act as a gatekeeper for sensory information flowing to the cortex.

A team of researchers from MIT and the Broad Institute of MIT and Harvard has now mapped the TRN in unprecedented detail, revealing that the region contains two distinct subnetworks of neurons with different functions. The findings could offer researchers more specific targets for designing drugs that could alleviate some of the sensory, sleep, and attention symptoms of autism, says Guoping Feng, one of the leaders of the research team.

These cross-sections of the thalamic reticular nucleus (TRN) show two distinct populations of neurons, labeled in purple and green.
Image: courtesy of the researchers

“The idea is that you could very specifically target one group of neurons, without affecting the whole brain and other cognitive functions,” says Feng, the James W. and Patricia Poitras Professor of Neuroscience at MIT and a member of MIT’s McGovern Institute for Brain Research.

Feng; Zhanyan Fu, associate director of neurobiology at the Broad Institute’s Stanley Center for Psychiatric Research; and Joshua Levin, a senior group leader at the Broad Institute, are the senior authors of the study, which appears today in Nature. The paper’s lead authors are former MIT postdoc Yinqing Li, former Broad Institute postdoc Violeta Lopez-Huerta, and Broad Institute research scientist Xian Adiconis.

Distinct populations

When sensory input from the eyes, ears, or other sensory organs arrives in our brains, it goes first to the thalamus, which then relays it to the cortex for higher-level processing. Impairments of these thalamo-cortical circuits can lead to attention deficits, hypersensitivity to noise and other stimuli, and sleep problems.

One of the major pathways that controls information flow between the thalamus and the cortex is the TRN, which is responsible for blocking out distracting sensory input. In 2016, Feng and MIT Assistant Professor Michael Halassa, who is also an author of the new Nature paper, discovered that loss of a gene called Ptchd1 significantly affects TRN function. In boys, loss of this gene, which is carried on the X chromosome, can lead to attention deficits, hyperactivity, aggression, intellectual disability, and autism spectrum disorders.

In that study, the researchers found that when the Ptchd1 gene was knocked out in mice, the animals showed many of the same behavioral defects seen in human patients. When it was knocked out only in the TRN, the mice showed only hyperactivity, attention deficits, and sleep disruption, suggesting that the TRN is responsible for those symptoms.

In the new study, the researchers wanted to try to learn more about the specific types of neurons found in the TRN, in hopes of finding new ways to treat hyperactivity and attention deficits. Currently, those symptoms are most often treated with stimulant drugs such as Ritalin, which have widespread effects throughout the brain.

“Our goal was to find some specific ways to modulate the function of thalamo-cortical output and relate it to neurodevelopmental disorders,” Feng says. “We decided to try using single-cell technology to dissect out what cell types are there, and what genes are expressed. Are there specific genes that are druggable as a target?”

To explore that possibility, the researchers sequenced the messenger RNA molecules found in neurons of the TRN, which reveals genes that are being expressed in those cells. This allowed them to identify hundreds of genes that could be used to differentiate the cells into two subpopulations, based on how strongly they express those particular genes.

They found that one of these cell populations is located in the core of the TRN, while the other forms a very thin layer surrounding the core. These two populations also form connections to different parts of the thalamus, the researchers found. Based on those connections, the researchers hypothesize that cells in the core are involved in relaying sensory information to the brain’s cortex, while cells in the outer layer appear to help coordinate information that comes in through different senses, such as vision and hearing.
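Conceptually, the analysis reduces to clustering cells by expression and asking which genes separate the clusters. A toy sketch with simulated counts (the study’s actual pipeline, cell numbers, and marker genes are in the paper):

```python
# Cluster simulated single-cell expression profiles into two subtypes.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
n_cells, n_genes = 1000, 500
counts = rng.poisson(1.0, size=(n_cells, n_genes)).astype(float)
counts[:500, :50] += 5        # simulated signature of a "core" subtype
counts[500:, 50:100] += 5     # simulated signature of a "shell" subtype

lognorm = np.log1p(counts)    # routine log-normalization
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(lognorm)

# Genes most different between clusters are candidate drug targets.
diff = lognorm[labels == 0].mean(0) - lognorm[labels == 1].mean(0)
print("top marker-gene indices:", np.argsort(np.abs(diff))[::-1][:5])
```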

“Druggable targets”

The researchers now plan to study the varying roles that these two populations of neurons may have in a variety of neurological symptoms, including attention deficits, hypersensitivity, and sleep disruption. Using genetic and optogenetic techniques, they hope to determine the effects of activating or inhibiting different TRN cell types, or genes expressed in those cells.

“That can help us in the future really develop specific druggable targets that can potentially modulate different functions,” Feng says. “Thalamo-cortical circuits control many different things, such as sensory perception, sleep, attention, and cognition, and it may be that these can be targeted more specifically.”

This approach could also be useful for treating attention or hypersensitivity disorders even when they aren’t caused by defects in TRN function, the researchers say.

“TRN is a target where if you enhance its function, you might be able to correct problems caused by impairments of the thalamo-cortical circuits,” Feng says. “Of course we are far away from the development of any kind of treatment, but the potential is that we can use single-cell technology to not only understand how the brain organizes itself, but also how brain functions can be segregated, allowing you to identify much more specific targets that modulate specific functions.”

The research was funded by the Simons Center for the Social Brain at MIT, the Hock E. Tan and K. Lisa Yang Center for Autism Research at MIT, the James and Patricia Poitras Center for Psychiatric Disorders Research at MIT, the Stanley Center for Psychiatric Research at the Broad Institute, the National Institutes of Health/National Institute for Mental Health, the Klarman Cell Observatory at the Broad Institute, the Pew Foundation, and the Human Frontiers Science Program.

A mechanical way to stimulate neurons

In addition to responding to electrical and chemical stimuli, many of the body’s neural cells can also respond to mechanical effects, such as pressure or vibration. But these responses have been more difficult for researchers to study, because there has been no easily controllable method for inducing such mechanical stimulation of the cells. Now, researchers at MIT and elsewhere have found a new method for doing just that.

The finding might offer a step toward new kinds of therapeutic treatments, similar to electrically based neurostimulation that has been used to treat Parkinson’s disease and other conditions. Unlike those systems, which require an external wire connection, the new system would be completely contact-free after an initial injection of particles, and could be reactivated at will through an externally applied magnetic field.

The finding is reported in the journal ACS Nano, in a paper by former MIT postdoc Danijela Gregurec, Alexander Senko PhD ’19, Associate Professor Polina Anikeeva, and nine others at MIT, at Boston’s Brigham and Women’s Hospital, and in Spain.

The new method opens a new pathway for the stimulation of nerve cells within the body, which has so far almost entirely relied on either chemical pathways, through the use of pharmaceuticals, or on electrical pathways, which require invasive wires to deliver voltage into the body. This mechanical stimulation, which activates entirely different signaling pathways within the neurons themselves, could provide a significant area of study, the researchers say.

“An interesting thing about the nervous system is that neurons can actually detect forces,” Senko says. “That’s how your sense of touch works, and also your sense of hearing and balance.” The team targeted a particular group of neurons within a structure known as the dorsal root ganglion, which forms an interface between the central and peripheral nervous systems, because these cells are particularly sensitive to mechanical forces.

The applications of the technique could be similar to those being developed in the field of bioelectronic medicines, Senko says, but those require electrodes that are typically much bigger and stiffer than the neurons being stimulated, limiting their precision and sometimes damaging cells.

The key to the new process was developing minuscule discs with an unusual magnetic property, which can cause them to start fluttering when subjected to a certain kind of varying magnetic field. Though the particles themselves are only 100 or so nanometers across, roughly a hundredth of the size of the neurons they are trying to stimulate, they can be made and injected in great quantities, so that collectively their effect is strong enough to activate the cell’s pressure receptors. “We made nanoparticles that actually produce forces that cells can detect and respond to,” Senko says.

Anikeeva says that conventional magnetic nanoparticles would have required impractically large magnetic fields to be activated, so finding materials that could provide sufficient force with just moderate magnetic activation was “a very hard problem.” The solution proved to be a new kind of magnetic nanodiscs.

These discs, which are hundreds of nanometers in diameter, contain a vortex configuration of atomic spins when there are no external magnetic fields applied. This makes the particles behave as if they were not magnetic at all, making them exceptionally stable in solutions. When these discs are subjected to a very weak varying magnetic field of a few millitesla, with a low frequency of just several hertz, they switch to a state where the internal spins are all aligned in the disc plane. This allows these nanodiscs to act as levers — wiggling up and down with the direction of the field.

Anikeeva, who is an associate professor in the departments of Materials Science and Engineering and Brain and Cognitive Sciences, says this work combines several disciplines, including new chemistry that led to development of these nanodiscs, along with electromagnetic effects and work on the biology of neurostimulation.

The team first considered using particles of a magnetic metal alloy that could provide the necessary forces, but these were not biocompatible materials, and they were prohibitively expensive. The researchers found a way to use particles made from hematite, a benign iron oxide, which can form the required disc shapes. The hematite was then converted into magnetite, which has the magnetic properties they needed and is known to be benign in the body. This chemical transformation from hematite to magnetite dramatically turns a blood-red tube of particles to jet black.

“We had to confirm that these particles indeed supported this really unusual spin state, this vortex,” Gregurec says. They first tried out the newly developed nanoparticles and proved, using holographic imaging systems provided by colleagues in Spain, that the particles really did react as expected, providing the necessary forces to elicit responses from neurons. The results came in late December and “everyone thought that was a Christmas present,” Anikeeva recalls, “when we got our first holograms, and we could really see that what we have theoretically predicted and chemically suspected actually was physically true.”

The work is still in its infancy, she says. “This is a very first demonstration that it is possible to use these particles to transduce large forces to membranes of neurons in order to stimulate them.”

She adds “that opens an entire field of possibilities. … This means that anywhere in the nervous system where cells are sensitive to mechanical forces, and that’s essentially any organ, we can now modulate the function of that organ.” That brings science a step closer, she says, to the goal of bioelectronic medicine that can provide stimulation at the level of individual organs or parts of the body, without the need for drugs or electrodes.

The work was supported by the U.S. Defense Advanced Research Projects Agency, the National Institute of Mental Health, the Department of Defense, the Air Force Office of Scientific Research, and the National Defense Science and Engineering Graduate Fellowship.

Full paper at ACS Nano

Signs of Covid-19 may be hidden in speech signals

It’s often easy to tell when colleagues are struggling with a cold — they sound sick. Maybe their voices are lower or have a nasally tone. Infections change the quality of our voices in various ways. But MIT Lincoln Laboratory researchers are detecting these changes in Covid-19 patients even when these changes are too subtle for people to hear or even notice in themselves.

By processing speech recordings of people infected with Covid-19 but not yet showing symptoms, these researchers found evidence of vocal biomarkers, or measurable indicators, of the disease. These biomarkers stem from disruptions the infection causes in the movement of muscles across the respiratory, laryngeal, and articulatory systems. A technology letter describing this research was recently published in IEEE Open Journal of Engineering in Medicine and Biology.

While this research is still in its early stages, the initial findings lay a framework for studying these vocal changes in greater detail. This work may also hold promise for using mobile apps to screen people for the disease, particularly those who are asymptomatic.

Talking heads

“I had this ‘aha’ moment while I was watching the news,” says Thomas Quatieri, a senior staff member in the laboratory’s Human Health and Performance Systems Group. Quatieri has been leading the group’s research in vocal biomarkers for the past decade; their focus has been on discovering vocal biomarkers of neurological disorders such as amyotrophic lateral sclerosis (ALS) and Parkinson’s disease. These diseases, and many others, change the brain’s ability to turn thoughts into words, and those changes can be detected by processing speech signals.

He and his team wondered whether vocal biomarkers might also exist for Covid-19. The symptoms led them to think so. When symptoms manifest, a person typically has difficulty breathing. Inflammation in the respiratory system affects the intensity with which air is exhaled when a person talks. This air interacts with hundreds of other potentially inflamed muscles on its journey to speech production. These interactions impact the loudness, pitch, steadiness, and resonance of the voice — measurable qualities that form the basis of their biomarkers.

While watching the news, Quatieri realized there were speech samples in front of him of people who had tested positive for Covid-19. He and his colleagues combed YouTube for clips of celebrities or TV hosts who had given interviews while they were Covid-19 positive but asymptomatic. They identified five subjects. Then, they downloaded interviews of those people from before they had Covid-19, matching audio conditions as best they could.

They then used algorithms to extract features from the vocal signals in each audio sample. “These vocal features serve as proxies for the underlying movements of the speech production systems,” says Tanya Talkar, a PhD candidate in the Speech and Hearing Bioscience and Technology program at Harvard University.

The signal’s amplitude, or loudness, was extracted as a proxy for movement in the respiratory system. For studying movements in the larynx, they measured pitch and the steadiness of pitch, two indicators of how stable the vocal cords are. As a proxy for articulator movements — like those of the tongue, lips, jaw, and more — they extracted speech formants. Speech formants are frequency measurements that correspond to how the mouth shapes sound waves to create a sequence of phonemes (vowels and consonants) and to contribute to a certain vocal quality (nasally versus warm, for example).
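A rough sketch of how such proxies can be computed from a recorded speech signal, using plain numpy (the study’s estimators are more careful; this only shows the idea):

```python
# Frame a mono speech signal x (sampled at sr Hz) and compute two proxies.
import numpy as np

def frames(x, sr, win=0.025, hop=0.010):
    """Split a signal into 25 ms windows every 10 ms."""
    w, h = int(win * sr), int(hop * sr)
    n = 1 + (len(x) - w) // h
    return np.stack([x[i * h : i * h + w] for i in range(n)])

def amplitude(f):
    """Frame RMS amplitude: proxy for respiratory drive."""
    return np.sqrt((f ** 2).mean(axis=1))

def pitch(f, sr, fmin=50, fmax=400):
    """Autocorrelation pitch estimate: proxy for laryngeal motion."""
    lags = np.arange(int(sr / fmax), int(sr / fmin))
    f0 = []
    for frame in f:
        ac = np.correlate(frame, frame, mode="full")[len(frame) - 1 :]
        f0.append(sr / lags[np.argmax(ac[lags])])
    return np.array(f0)

# Formant tracking (the articulator proxy) typically uses LPC; omitted here.
```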

They hypothesized that Covid-19 inflammation causes muscles across these systems to become overly coupled, resulting in a less complex movement. “Picture these speech subsystems as if they are the wrist and fingers of a skilled pianist; normally, the movements are independent and highly complex,” Quatieri says. Now, picture if the wrist and finger movements were to become stuck together, moving as one. This coupling would force the pianist to play a much simpler tune.

The researchers looked for evidence of coupling in their features, measuring how each feature changed in relation to another in 10 millisecond increments as the subject spoke. These values were then plotted on an eigenspectrum; the shape of this eigenspectrum plot indicates the complexity of the signals. “If the eigenspace of the values forms a sphere, the signals are complex. If there is less complexity, it might look more like a flat oval,” Talkar says.
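A minimal sketch of that computation, assuming a matrix of vocal features sampled every 10 milliseconds (the paper’s exact correlation and embedding procedure differs in detail):

```python
# Eigenspectrum of feature coupling as a complexity measure.
import numpy as np

def eigenspectrum(features):
    """features: (n_frames, n_features) array of vocal measures."""
    corr = np.corrcoef(features, rowvar=False)   # feature-by-feature coupling
    eig = np.linalg.eigvalsh(corr)[::-1]         # eigenvalues, largest first
    return eig / eig.sum()

def complexity(features):
    """Entropy of the normalized spectrum: lower = more tightly coupled."""
    p = eigenspectrum(features)
    p = p[p > 1e-12]
    return float(-(p * np.log(p)).sum())

# A spectrum spread across many eigenvalues (a "sphere") means complex,
# independent movements; one dominated by a few (a "flat oval") means the
# subsystems are moving together.
```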

In the end, they found a decreased complexity of movement in the Covid-19 interviews as compared to the pre-Covid-19 interviews. “The coupling was less prominent between larynx and articulator motion, but we’re seeing a reduction in complexity between respiratory and larynx motion,” Talkar says.

Early detections

These preliminary results hint that biomarkers derived from vocal system coordination can indicate the presence of Covid-19. However, the researchers note that it is too early to draw conclusions, and more data are needed to validate their findings. They’re working now with a publicly released dataset from Carnegie Mellon University that contains audio samples from individuals who have tested positive for Covid-19.

Beyond collecting more data to fuel this research, the team is looking at using mobile apps to implement it. A partnership is underway with Satra Ghosh at the MIT McGovern Institute for Brain Research to integrate vocal screening for Covid-19 into its VoiceUp app, which was initially developed to study the link between voice and depression. A follow-on effort could add this vocal screening into the How We Feel app. This app asks users questions about their daily health status and demographics, with the aim to use these data to pinpoint hotspots and predict the percentage of people who have the disease in different regions of the country. Asking users to also submit a daily voice memo to screen for biomarkers of Covid-19 could potentially help scientists catch on to an outbreak.

“A sensing system integrated into a mobile app could pick up on infections early, before people feel sick or, especially, for these subsets of people who don’t ever feel sick or show symptoms,” says Jeffrey Palmer, who leads the research group. “This is also something the U.S. Army is interested in as part of a holistic Covid-19 monitoring system.” Even after a diagnosis, this sensing ability could help doctors remotely monitor their patients’ progress or monitor the effects of a vaccine or drug treatment.

As the team continues their research, they plan to do more to address potential confounders that could cause inaccuracies in their results, such as different recording environments, the emotional status of the subjects, or other illnesses causing vocal changes. They’re also supporting similar research. The Mass General Brigham Center for COVID Innovation has connected them to international scientists who are following the team’s framework to analyze coughs.

“There are a lot of other interesting areas to look at. Here, we looked at the physiological impacts on the vocal tract. We’re also looking to expand our biomarkers to consider neurophysiological impacts linked to Covid-19, like the loss of taste and smell,” Quatieri says. “Those symptoms can affect speaking, too.”

Nine MIT School of Science professors receive tenure for 2020

Beginning July 1, nine faculty members in the MIT School of Science have been granted tenure by MIT. They are appointed in the departments of Brain and Cognitive Sciences, Chemistry, Mathematics, and Physics.

Physicist Ibrahim Cisse investigates living cells to reveal and study collective behaviors and biomolecular phase transitions at the resolution of single molecules. The results of his work help determine how disruptions in genes can cause diseases like cancer. Cisse joined the Department of Physics in 2014 and now holds a joint appointment with the Department of Biology. His education includes a bachelor’s degree in physics from North Carolina Central University, completed in 2004, and a doctoral degree in physics from the University of Illinois at Urbana-Champaign, earned in 2009. He followed his PhD with a postdoc at the École Normale Supérieure of Paris and a research specialist appointment at the Howard Hughes Medical Institute’s Janelia Research Campus.

Jörn Dunkel is a physical applied mathematician. His research focuses on the mathematical description of complex nonlinear phenomena in a variety of fields, especially biophysics. The models he develops help predict dynamical behaviors and structure formation processes in developmental biology, fluid dynamics, and even knot strengths for sailing, rock climbing and construction. He joined the Department of Mathematics in 2013 after completing postdoctoral appointments at Oxford University and Cambridge University. He received diplomas in physics and mathematics from Humboldt University of Berlin in 2004 and 2005, respectively. The University of Augsburg awarded Dunkel a PhD in statistical physics in 2008.

A cognitive neuroscientist, Mehrdad Jazayeri studies the neurobiological underpinnings of mental functions such as planning, inference, and learning by analyzing brain signals in the lab and using theoretical and computational models, including artificial neural networks. He joined the Department of Brain and Cognitive Sciences in 2013. He received a BS in electrical engineering from the Sharif University of Technology in 1994, an MS in physiology from the University of Toronto in 2001, and a PhD in neuroscience from New York University in 2007. Prior to joining MIT, he was a postdoc at the University of Washington. Jazayeri is also an investigator at the McGovern Institute for Brain Research.

Yen-Jie Lee is an experimental particle physicist in the field of proton-proton and heavy-ion physics. Utilizing the Large Hadron Collider, Lee explores matter in extreme conditions, providing new insight into strong interactions and what might have existed and occurred at the beginning of the universe and in distant star cores. His work on jets and heavy flavor particle production in collisions of nuclei improves understanding of the quark-gluon plasma, predicted by quantum chromodynamics (QCD) calculations, and the structure of heavy nuclei. He also pioneered studies of high-density QCD with electron-positron annihilation data. Lee joined the Department of Physics in 2013 after a fellowship at CERN and postdoc research at the Laboratory for Nuclear Science at MIT. His bachelor’s and master’s degrees were awarded by the National Taiwan University in 2002 and 2004, respectively, and his doctoral degree by MIT in 2011. Lee is a member of the Laboratory for Nuclear Science.

Josh McDermott investigates the sense of hearing. His research addresses both human and machine audition using tools from experimental psychology, engineering, and neuroscience. McDermott hopes to better understand the neural computation underlying human hearing, to improve devices that assist the hearing impaired, and to enhance machine interpretation of sounds. Prior to joining MIT’s Department of Brain and Cognitive Sciences, he was awarded a BA in 1998 in brain and cognitive sciences by Harvard University, a master’s degree in computational neuroscience in 2000 by University College London, and a PhD in brain and cognitive sciences in 2006 by MIT. Between his doctoral time at MIT and returning as a faculty member, he was a postdoc at the University of Minnesota and New York University, and a visiting scientist at Oxford University. McDermott is also an associate investigator at the McGovern Institute for Brain Research and an investigator in the Center for Brains, Minds and Machines.

Solving environmental challenges by studying and manipulating chemical reactions is the focus of Yogesh Surendranath’s research. Using chemistry, he works at the molecular level to understand how to efficiently interconvert chemical and electrical energy. His fundamental studies aim to improve energy storage technologies, such as batteries, fuel cells, and electrolyzers, that can be used to meet future energy demand with reduced carbon emissions. Surendranath joined the Department of Chemistry in 2013 after a postdoc at the University of California at Berkeley. His PhD was completed in 2011 at MIT, and his BS in 2006 at the University of Virginia. Surendranath is also a collaborator in the MIT Energy Initiative.

A theoretical astrophysicist, Mark Vogelsberger is interested in the large-scale structure of the universe, including galaxy formation. He combines observational data, theoretical models, and simulations that require high-performance supercomputers to improve and develop detailed models that simulate galaxy diversity, clustering, and their properties, including a plethora of physical effects like magnetic fields, cosmic dust, and thermal conduction. Vogelsberger also uses simulations to generate scenarios involving alternative forms of dark matter. He joined the Department of Physics in 2014 after a postdoc at the Harvard-Smithsonian Center for Astrophysics. Vogelsberger is a 2006 graduate of the University of Mainz undergraduate program in physics, and a 2010 doctoral graduate of the University of Munich and the Max Planck Institute for Astrophysics. He is also a principal investigator in the MIT Kavli Institute for Astrophysics and Space Research.

Adam Willard is a theoretical chemist with research interests that fall across molecular biology, renewable energy, and material science. He uses theory, modeling, and molecular simulation to study the disorder that is inherent to systems over nanometer-length scales. His recent work has highlighted the fundamental and unexpected role that such disorder plays in phenomena such as microscopic energy transport in semiconducting plastics, ion transport in batteries, and protein hydration. Joining the Department of Chemistry in 2013, Willard was formerly a postdoc at Lawrence Berkeley National Laboratory and then the University of Texas at Austin. He holds a PhD in chemistry from the University of California at Berkeley, achieved in 2009, and a BS in chemistry and mathematics from the University of Puget Sound, granted in 2003.

Lindley Winslow seeks to understand the fundamental particles that shaped the evolution of our universe. As an experimental particle and nuclear physicist, she develops novel detection technology to search for axion dark matter and a proposed nuclear decay that makes more matter than antimatter. She started her faculty position in the Department of Physics in 2015 following a postdoc at MIT and a subsequent faculty position at the University of California at Los Angeles. Winslow earned her BA in physics and astronomy in 2001 and her PhD in physics in 2008, both at the University of California at Berkeley. She is also a member of the Laboratory for Nuclear Science.

Producing a gaseous messenger molecule inside the body, on demand

Nitric oxide is an important signaling molecule in the body, with a role in building nervous system connections that contribute to learning and memory. It also functions as a messenger in the cardiovascular and immune systems.

But it has been difficult for researchers to study exactly what its role is in these systems and how it functions. Because it is a gas, there has been no practical way to direct it to specific individual cells in order to observe its effects. Now, a team of scientists and engineers at MIT and elsewhere has found a way of generating the gas at precisely targeted locations inside the body, potentially opening new lines of research on this essential molecule’s effects.

The findings are reported today in the journal Nature Nanotechnology, in a paper by MIT professors Polina Anikeeva, Karthish Manthiram, and Yoel Fink; graduate student Jimin Park; postdoc Kyoungsuk Jin; and 10 others at MIT and in Taiwan, Japan, and Israel.

“It’s a very important compound,” says Anikeeva, who is also an investigator at the McGovern Institute. But figuring out the relationships between the delivery of nitric oxide to particular cells and synapses, and the resulting higher-level effects on the learning process has been difficult. So far, most studies have resorted to looking at systemic effects, by knocking out genes responsible for the production of enzymes the body uses to produce nitric oxide where it’s needed as a messenger.

But that approach, she says, is “very brute force. This is a hammer to the system because you’re knocking it out not just from one specific region, let’s say in the brain, but you essentially knock it out from the entire organism, and this can have other side effects.”

Others have tried introducing compounds into the body that release nitric oxide as they decompose, which can produce somewhat more localized effects, but these still spread out, and it is a very slow and uncontrolled process.

The team’s solution uses an electric voltage to drive the reaction that produces nitric oxide. This is similar to what is happening on a much larger scale with some industrial electrochemical production processes, which are relatively modular and controllable, enabling local and on-demand chemical synthesis. “We’ve taken that concept and said, you know what? You can be so local and so modular with an electrochemical process that you can even do this at the level of the cell,” Manthiram says. “And I think what’s even more exciting about this is that if you use electric potential, you have the ability to start production and stop production in a heartbeat.”

The team’s key achievement was finding a way for this kind of electrochemically controlled reaction to be operated efficiently and selectively at the nanoscale. That required a suitable catalyst that could generate nitric oxide from a benign precursor, and the team identified nitrite as a promising one.
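The relevant chemistry is plausibly the standard electrochemical reduction of nitrite to nitric oxide (stated here as the textbook half-reaction; the paper may formulate the mechanism differently):

$$\mathrm{NO_2^-} + 2\,\mathrm{H^+} + \mathrm{e^-} \longrightarrow \mathrm{NO} + \mathrm{H_2O}$$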

“We came up with the idea of making a tailored nanoparticle to catalyze the reaction,” Jin says. They found that the enzymes that catalyze nitric oxide generation in nature contain iron-sulfur centers. Drawing inspiration from these enzymes, they devised a catalyst that consisted of nanoparticles of iron sulfide, which activates the nitric oxide-producing reaction in the presence of an electric field and nitrite. By further doping these nanoparticles with platinum, the team was able to enhance their electrocatalytic efficiency.

To miniaturize the electrocatalytic cell to the scale of biological cells, the team created custom fibers containing the positive and negative microelectrodes, which are coated with the iron sulfide nanoparticles, along with a microfluidic channel that delivers the precursor, sodium nitrite. When implanted in the brain, these fibers route the precursor to specific neurons. The reaction can then be activated at will through the electrodes in the same fiber, producing an instant burst of nitric oxide right at that spot so that its effects can be recorded in real time.
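To make the on-demand control concrete, here is a minimal sketch of what an activation sequence might look like in software. Everything here is hypothetical: the paper describes no programming interface, and `pump`, `potentiostat`, and all parameter values are illustrative placeholders.

```python
# Hypothetical control sequence for on-demand nitric oxide generation.
# The device objects and numbers below are illustrative placeholders,
# not an actual API from the study.
import time

def no_pulse(pump, potentiostat, voltage_v=-0.8, pulse_s=1.0):
    """Infuse the nitrite precursor, then apply a brief cathodic
    pulse so NO is generated only at the fiber tip, on command."""
    pump.start()                            # begin sodium nitrite flow
    time.sleep(5.0)                         # let precursor reach the tip
    potentiostat.apply_voltage(voltage_v)   # reduce nitrite -> NO
    time.sleep(pulse_s)                     # duration of the NO burst
    potentiostat.apply_voltage(0.0)         # production stops immediately
    pump.stop()                             # halt precursor flow
```

The point of the sketch is the sequencing: production begins only when the voltage is applied and, as Manthiram notes, can be stopped “in a heartbeat” by removing it.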

Device created by the Anikeeva lab. The tube at top is connected to a supply of the precursor material, sodium nitrite, which passes through a channel in the fiber at the bottom and into the body; the fiber also contains the electrodes that trigger the release of nitric oxide. The electrodes are connected through the four-pin connector at left.
Photo: Anikeeva Lab

As a test, they used the system in a rodent model to activate a brain region that is known to be a reward center for motivation and social interaction, and that plays a role in addiction. They showed that it did indeed provoke the expected signaling responses, demonstrating its effectiveness.

Anikeeva says this “would be a very useful biological research platform, because finally, people will have a way to study the role of nitric oxide at the level of single cells, in whole organisms that are performing tasks.” She points out that there are certain disorders that are associated with disruptions of the nitric oxide signaling pathway, so more detailed studies of how this pathway operates could help lead to treatments.

The method could be generalizable, Park says, as a way of producing other molecules of biological interest within an organism. “Essentially we can now have this really scalable and miniaturized way to generate many molecules, as long as we find the appropriate catalyst, and as long as we find an appropriate starting compound that is also safe.” This approach to generating signaling molecules in situ could have wide applications in biomedicine, he says.

“One of our reviewers for this manuscript pointed out that this has never been done — electrolysis in a biological system has never been leveraged to control biological function,” Anikeeva says. “So, this is essentially the beginning of a field that could potentially be very useful” to study molecules that can be delivered at precise locations and times, for studies in neurobiology or any other biological functions. That ability to make molecules on demand inside the body could be useful in fields such as immunology or cancer research, she says.

The project got started as a result of a chance conversation between Park and Jin, who were friends working in different fields — neurobiology and electrochemistry. Their initial casual discussions ended up leading to a full-blown collaboration between several departments. But in today’s locked-down world, Jin says, such chance encounters and conversations have become less likely. “In the context of how much the world has changed, if this were in this era in which we’re all apart from each other, and not in 2018, there is some chance that this collaboration may just not ever have happened.”

“This work is a milestone in bioelectronics,” says Bozhi Tian, an associate professor of chemistry at the University of Chicago, who was not connected to this work. “It integrates nanoenabled catalysis, microfluidics, and traditional bioelectronics … and it solves a longstanding challenge of precise neuromodulation in the brain, by in situ generation of signaling molecules. This approach can be widely adopted by the neuroscience community and can be generalized to other signaling systems, too.”

Besides MIT, the team included researchers at National Chiao Tung University in Taiwan, NEC Corporation in Japan, and the Weizmann Institute of Science in Israel. The work was supported by the National Institute of Neurological Disorders and Stroke, the National Institutes of Health, the National Science Foundation, and MIT’s Department of Chemical Engineering.

A focused approach to imaging neural activity in the brain

When neurons fire an electrical impulse, they also experience a surge of calcium ions. By measuring those surges, researchers can indirectly monitor neuron activity, helping them to study the role of individual neurons in many different brain functions.

One drawback of this technique is crosstalk from the axons and dendrites that extend from neighboring neurons, which makes it harder to get a distinct signal from the neuron being studied. MIT engineers have now developed a way to overcome that issue, by creating calcium indicators, or sensors, that accumulate only in the body of a neuron.

“People are using calcium indicators for monitoring neural activity in many parts of the brain,” says Edward Boyden, the Y. Eva Tan Professor in Neurotechnology and a professor of biological engineering and of brain and cognitive sciences at MIT. “Now they can get better results, obtaining more accurate neural recordings that are less contaminated by crosstalk.”

To achieve this, the researchers fused a commonly used calcium indicator called GCaMP to a short peptide that targets it to the cell body. The new molecule, which the researchers call SomaGCaMP, can be easily incorporated into existing workflows for calcium imaging, the researchers say.

Boyden is the senior author of the study, which appears today in Neuron. The paper’s lead authors are Research Scientist Or Shemesh, postdoc Changyang Linghu, and former postdoc Kiryl Piatkevich.

Molecular focus

The GCaMP calcium indicator consists of a fluorescent protein attached to a calcium-binding protein called calmodulin and a calmodulin-binding peptide called M13. GCaMP fluoresces when it binds to calcium ions in the brain, allowing researchers to indirectly measure neuron activity.

“Calcium is easy to image, because it goes from a very low concentration inside the cell to a very high concentration when a neuron is active,” says Boyden, who is also a member of MIT’s McGovern Institute for Brain Research, Media Lab, and Koch Institute for Integrative Cancer Research.

The simplest way to detect these fluorescent signals is with a type of imaging called one-photon microscopy. This is a relatively inexpensive technique that can image large brain samples at high speed, but the downside is that it picks up crosstalk between neighboring neurons. GCaMP goes into all parts of a neuron, so signals from the axons of one neuron can appear as if they are coming from the cell body of a neighbor, making the signal less accurate.
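As a rough illustration of the problem, here is a toy simulation (not data from the study; the 40 percent mixing fraction is an arbitrary assumption) showing how two genuinely independent neurons can appear correlated once each recorded trace picks up a fraction of its neighbor’s fluorescence:

```python
# Toy simulation of neuropil crosstalk in one-photon calcium imaging.
import numpy as np

rng = np.random.default_rng(0)
T = 2000  # number of time points

def toy_trace():
    """Sparse spikes convolved with a slow calcium decay kernel."""
    spikes = rng.random(T) < 0.01
    kernel = np.exp(-np.arange(50) / 10.0)
    return np.convolve(spikes, kernel)[:T]

cell_a, cell_b = toy_trace(), toy_trace()  # truly independent cells

mix = 0.4  # assumed fraction of neighbor fluorescence picked up
recorded_a = cell_a + mix * cell_b + 0.05 * rng.standard_normal(T)
recorded_b = cell_b + mix * cell_a + 0.05 * rng.standard_normal(T)

print(np.corrcoef(cell_a, cell_b)[0, 1])          # near zero
print(np.corrcoef(recorded_a, recorded_b)[0, 1])  # inflated by crosstalk
```

Restricting the indicator to the soma is, in effect, a way of driving the mixing fraction toward zero at the source, rather than correcting for it afterward.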

A more expensive technique called two-photon microscopy can partly overcome this by focusing light very narrowly onto individual neurons, but this approach requires specialized equipment and is also slower.

Boyden’s lab decided to take a different approach, by modifying the indicator itself, rather than the imaging equipment.

“We thought, rather than optically focusing light, what if we molecularly focused the indicator?” he says. “A lot of people use hardware, such as two-photon microscopes, to clean up the imaging. We’re trying to build a molecular version of what other people do with hardware.”

In a related paper that was published last year, Boyden and his colleagues used a similar approach to reduce crosstalk between fluorescent probes that directly image neurons’ membrane voltage. In parallel, they decided to try a similar approach with calcium imaging, which is a much more widely used technique.

To target GCaMP exclusively to the cell bodies of neurons, the researchers tried fusing it to many different proteins. Working with MIT biology professor Amy Keating, who is also an author of the paper, they explored two types of candidates: naturally occurring proteins that are known to accumulate in the cell body, and human-designed peptides. The synthetic candidates are coiled-coil peptides, which have a distinctive structure in which multiple helices wind around one another.

Less crosstalk

The researchers screened about 30 candidates in neurons grown in lab dishes, and then chose two — one artificial coiled-coil and one naturally occurring peptide — to test in animals. Working with Misha Ahrens, who studies zebrafish at the Janelia Research Campus, they found that both proteins offered significant improvements over the original version of GCaMP. The signal-to-noise ratio — a measure of the strength of the signal compared to background activity — went up, and activity between adjacent neurons showed reduced correlation.
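For readers unfamiliar with the metric, here is a minimal sketch of one common way to compute it from a fluorescence trace (the paper’s exact definition may differ): the peak normalized fluorescence change, ΔF/F, divided by the noise during quiet periods.

```python
import numpy as np

def dff(trace, baseline_percentile=10):
    """Normalized fluorescence change, dF/F, relative to a
    low-percentile estimate of baseline fluorescence."""
    f0 = np.percentile(trace, baseline_percentile)
    return (trace - f0) / f0

def snr(trace):
    """Peak dF/F divided by the noise in the quieter half of the trace."""
    d = dff(trace)
    quiet = d[d <= np.percentile(d, 50)]
    return d.max() / quiet.std()
```

By either measure, reducing crosstalk should help: contamination from neighbors both adds spurious shared signal (raising correlations) and adds background fluctuations (lowering SNR).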

In studies of mice, performed in the lab of Xue Han at Boston University, the researchers also found that the new indicators reduced the correlations between activity of neighboring neurons. Additional studies using a miniature microscope (called a microendoscope), performed in the lab of Kay Tye at the Salk Institute for Biological Studies, revealed a significant increase in signal-to-noise ratio with the new indicators.

“Our new indicator makes the signals more accurate. This suggests that the signals that people are measuring with regular GCaMP could include crosstalk,” Boyden says. “There’s the possibility of artifactual synchrony between the cells.”

In all of the animal studies, they found that the artificial, coiled-coil protein produced a brighter signal than the naturally occurring peptide that they tested. Boyden says it’s unclear why the coiled-coil proteins work so well, but one possibility is that they bind to each other, making them less likely to travel very far within the cell.

Boyden hopes to use the new molecules to try to image the entire brains of small animals such as worms and fish, and his lab is also making the new indicators available to any researchers who want to use them.

“It should be very easy to implement, and in fact many groups are already using it,” Boyden says. “They can just use the regular microscopes that they already are using for calcium imaging, but instead of using the regular GCaMP molecule, they can substitute our new version.”

The research was primarily funded by the National Institute of Mental Health and the National Institute on Drug Abuse, as well as a Director’s Pioneer Award from the National Institutes of Health, and by Lisa Yang, John Doerr, the HHMI-Simons Faculty Scholars Program, and the Human Frontier Science Program.