A mechanical way to stimulate neurons

In addition to responding to electrical and chemical stimuli, many of the body’s neural cells can also respond to mechanical effects, such as pressure or vibration. But these responses have been more difficult for researchers to study, because there has been no easily controllable method for inducing such mechanical stimulation of the cells. Now, researchers at MIT and elsewhere have found a new method for doing just that.

The finding might offer a step toward new kinds of therapeutic treatments, similar to electrically based neurostimulation that has been used to treat Parkinson’s disease and other conditions. Unlike those systems, which require an external wire connection, the new system would be completely contact-free after an initial injection of particles, and could be reactivated at will through an externally applied magnetic field.

The finding is reported in the journal ACS Nano, in a paper by former MIT postdoc Danijela Gregurec, Alexander Senko PhD ’19, Associate Professor Polina Anikeeva, and nine others at MIT, at Boston’s Brigham and Women’s Hospital, and in Spain.

The method offers a new pathway for stimulating nerve cells within the body, which until now has relied almost entirely on either chemical routes, through the use of pharmaceuticals, or electrical routes, which require invasive wires to deliver voltage into the body. Mechanical stimulation activates entirely different signaling pathways within the neurons themselves, and could open a significant new area of study, the researchers say.

“An interesting thing about the nervous system is that neurons can actually detect forces,” Senko says. “That’s how your sense of touch works, and also your sense of hearing and balance.” The team targeted a particular group of neurons within a structure known as the dorsal root ganglion, which forms an interface between the central and peripheral nervous systems, because these cells are particularly sensitive to mechanical forces.

The applications of the technique could be similar to those being developed in the field of bioelectronic medicines, Senko says, but those require electrodes that are typically much bigger and stiffer than the neurons being stimulated, limiting their precision and sometimes damaging cells.

The key to the new process was developing minuscule discs with an unusual magnetic property, which can cause them to start fluttering when subjected to a certain kind of varying magnetic field. Though the particles themselves are only 100 or so nanometers across, roughly a hundredth of the size of the neurons they are trying to stimulate, they can be made and injected in great quantities, so that collectively their effect is strong enough to activate the cell’s pressure receptors. “We made nanoparticles that actually produce forces that cells can detect and respond to,” Senko says.

Anikeeva says that conventional magnetic nanoparticles would have required impractically large magnetic fields to be activated, so finding materials that could provide sufficient force with just moderate magnetic activation was “a very hard problem.” The solution proved to be a new kind of magnetic nanodisc.

These discs, which are hundreds of nanometers in diameter, contain a vortex configuration of atomic spins when there are no external magnetic fields applied. This makes the particles behave as if they were not magnetic at all, making them exceptionally stable in solutions. When these discs are subjected to a very weak varying magnetic field of a few millitesla, with a low frequency of just several hertz, they switch to a state where the internal spins are all aligned in the disc plane. This allows these nanodiscs to act as levers — wiggling up and down with the direction of the field.
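
For a rough sense of why such a weak field can produce forces that cells can feel, here is a back-of-the-envelope estimate sketched in Python. The disc dimensions, magnetization value, and lever arm below are illustrative assumptions chosen only to show the order of magnitude; they are not figures reported in the paper.

```python
import math

# Illustrative (assumed) parameters -- not values taken from the paper.
saturation_magnetization = 4.8e5   # A/m, a typical literature value for magnetite
disc_diameter = 200e-9             # m, on the "hundreds of nanometers" scale described
disc_thickness = 30e-9             # m, assumed
applied_field = 5e-3               # T, "a few millitesla"

# Magnetic moment of one disc once its spins align in the disc plane.
volume = math.pi * (disc_diameter / 2) ** 2 * disc_thickness   # m^3
moment = saturation_magnetization * volume                     # A*m^2

# Maximum torque when the field is perpendicular to the moment, and the
# corresponding force if that torque acts over a lever arm of roughly the disc radius.
torque = moment * applied_field          # N*m
force = torque / (disc_diameter / 2)     # N

print(f"moment ~ {moment:.2e} A*m^2")
print(f"torque ~ {torque:.2e} N*m")
print(f"force  ~ {force * 1e12:.0f} pN")
```

Under these assumptions the force comes out in the tens of piconewtons, roughly the scale at which mechanosensitive ion channels are thought to respond.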

Anikeeva, who is an associate professor in the departments of Materials Science and Engineering and Brain and Cognitive Sciences, says this work combines several disciplines, including new chemistry that led to development of these nanodiscs, along with electromagnetic effects and work on the biology of neurostimulation.

The team first considered using particles of a magnetic metal alloy that could provide the necessary forces, but these were not biocompatible materials, and they were prohibitively expensive. The researchers found a way to use particles made from hematite, a benign iron oxide, which can form the required disc shapes. The hematite was then converted into magnetite, which has the magnetic properties they needed and is known to be benign in the body. This chemical transformation from hematite to magnetite dramatically turns a blood-red tube of particles to jet black.

“We had to confirm that these particles indeed supported this really unusual spin state, this vortex,” Gregurec says. They first tried out the newly developed nanoparticles and proved, using holographic imaging systems provided by colleagues in Spain, that the particles really did react as expected, providing the necessary forces to elicit responses from neurons. The results came in late December and “everyone thought that was a Christmas present,” Anikeeva recalls, “when we got our first holograms, and we could really see that what we have theoretically predicted and chemically suspected actually was physically true.”

The work is still in its infancy, she says. “This is a very first demonstration that it is possible to use these particles to transduce large forces to membranes of neurons in order to stimulate them.”

She adds that this “opens an entire field of possibilities. … This means that anywhere in the nervous system where cells are sensitive to mechanical forces, and that’s essentially any organ, we can now modulate the function of that organ.” That brings science a step closer, she says, to the goal of bioelectronic medicine that can provide stimulation at the level of individual organs or parts of the body, without the need for drugs or electrodes.

The work was supported by the U.S. Defense Advanced Research Projects Agency, the National Institute of Mental Health, the Department of Defense, the Air Force Office of Scientific Research, and the National Defense Science and Engineering Graduate Fellowship.

Full paper at ACS Nano

Producing a gaseous messenger molecule inside the body, on demand

Nitric oxide is an important signaling molecule in the body, with a role in building nervous system connections that contribute to learning and memory. It also functions as a messenger in the cardiovascular and immune systems.

But it has been difficult for researchers to study exactly what its role is in these systems and how it functions. Because it is a gas, there has been no practical way to direct it to specific individual cells in order to observe its effects. Now, a team of scientists and engineers at MIT and elsewhere has found a way of generating the gas at precisely targeted locations inside the body, potentially opening new lines of research on this essential molecule’s effects.

The findings are reported today in the journal Nature Nanotechnology, in a paper by MIT professors Polina Anikeeva, Karthish Manthiram, and Yoel Fink; graduate student Jimin Park; postdoc Kyoungsuk Jin; and 10 others at MIT and in Taiwan, Japan, and Israel.

“It’s a very important compound,” says Anikeeva, who is also an Investigator at the McGovern Institute. But figuring out the relationships between the delivery of nitric oxide to particular cells and synapses, and the resulting higher-level effects on the learning process has been difficult. So far, most studies have resorted to looking at systemic effects, by knocking out genes responsible for the production of enzymes the body uses to produce nitric oxide where it’s needed as a messenger.

But that approach, she says, is “very brute force. This is a hammer to the system because you’re knocking it out not just from one specific region, let’s say in the brain, but you essentially knock it out from the entire organism, and this can have other side effects.”

Others have tried introducing compounds into the body that release nitric oxide as they decompose, which can produce somewhat more localized effects, but these still spread out, and it is a very slow and uncontrolled process.

The team’s solution uses an electric voltage to drive the reaction that produces nitric oxide. This is similar to what is happening on a much larger scale with some industrial electrochemical production processes, which are relatively modular and controllable, enabling local and on-demand chemical synthesis. “We’ve taken that concept and said, you know what? You can be so local and so modular with an electrochemical process that you can even do this at the level of the cell,” Manthiram says. “And I think what’s even more exciting about this is that if you use electric potential, you have the ability to start production and stop production in a heartbeat.”

The team’s key achievement was finding a way for this kind of electrochemically controlled reaction to be operated efficiently and selectively at the nanoscale. That required finding a suitable catalyst material that could generate nitric oxide from a benign precursor material. They found that nitrite offered a promising precursor for electrochemical nitric oxide generation.
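
For background, the half-reaction usually written for the one-electron electrochemical reduction of nitrite to nitric oxide is shown below. The article does not spell out the reaction, so this is included only as generally known chemistry, not as the specific pathway confirmed in the device.

```latex
\mathrm{NO_2^-} + 2\,\mathrm{H^+} + \mathrm{e^-} \;\longrightarrow\; \mathrm{NO} + \mathrm{H_2O}
```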

“We came up with the idea of making a tailored nanoparticle to catalyze the reaction,” Jin says. They found that the enzymes that catalyze nitric oxide generation in nature contain iron-sulfur centers. Drawing inspiration from these enzymes, they devised a catalyst that consisted of nanoparticles of iron sulfide, which activates the nitric oxide-producing reaction in the presence of an electric field and nitrite. By further doping these nanoparticles with platinum, the team was able to enhance their electrocatalytic efficiency.

To miniaturize the electrocatalytic cell to the scale of biological cells, the team created custom fibers containing the positive and negative microelectrodes, which are coated with the iron sulfide nanoparticles, and a microfluidic channel for delivery of sodium nitrite, the precursor material. When implanted in the brain, these fibers direct the precursor to specific neurons. The reaction can then be activated at will electrochemically, through the electrodes in the same fiber, producing an instant burst of nitric oxide right at that spot so that its effects can be recorded in real time.

Device created by the Anikeeva lab. The tube at top is connected to a supply of the precursor material, sodium nitrite, which passes through a channel in the fiber at the bottom and into the body. The fiber also contains the electrodes that stimulate the release of nitric oxide; they are connected through the four-pin connector on the left.
Photo: Anikeeva Lab

As a test, they used the system in a rodent model to activate a brain region that is known to be a reward center for motivation and social interaction, and that plays a role in addiction. They showed that it did indeed provoke the expected signaling responses, demonstrating its effectiveness.

Anikeeva says this “would be a very useful biological research platform, because finally, people will have a way to study the role of nitric oxide at the level of single cells, in whole organisms that are performing tasks.” She points out that there are certain disorders that are associated with disruptions of the nitric oxide signaling pathway, so more detailed studies of how this pathway operates could help lead to treatments.

The method could be generalizable, Park says, as a way of producing other molecules of biological interest within an organism. “Essentially we can now have this really scalable and miniaturized way to generate many molecules, as long as we find the appropriate catalyst, and as long as we find an appropriate starting compound that is also safe.” This approach to generating signaling molecules in situ could have wide applications in biomedicine, he says.

“One of our reviewers for this manuscript pointed out that this has never been done — electrolysis in a biological system has never been leveraged to control biological function,” Anikeeva says. “So, this is essentially the beginning of a field that could potentially be very useful” to study molecules that can be delivered at precise locations and times, for studies in neurobiology or any other biological functions. That ability to make molecules on demand inside the body could be useful in fields such as immunology or cancer research, she says.

The project got started as a result of a chance conversation between Park and Jin, who were friends working in different fields — neurobiology and electrochemistry. Their initial casual discussions ended up leading to a full-blown collaboration between several departments. But in today’s locked-down world, Jin says, such chance encounters and conversations have become less likely. “In the context of how much the world has changed, if this were in this era in which we’re all apart from each other, and not in 2018, there is some chance that this collaboration may just not ever have happened.”

“This work is a milestone in bioelectronics,” says Bozhi Tian, an associate professor of chemistry at the University of Chicago, who was not connected to this work. “It integrates nanoenabled catalysis, microfluidics, and traditional bioelectronics … and it solves a longstanding challenge of precise neuromodulation in the brain, by in situ generation of signaling molecules. This approach can be widely adopted by the neuroscience community and can be generalized to other signaling systems, too.”

Besides MIT, the team included researchers at National Chiao Tung University in Taiwan, NEC Corporation in Japan, and the Weizmann Institute of Science in Israel. The work was supported by the National Institute of Neurological Disorders and Stroke, the National Institutes of Health, the National Science Foundation, and MIT’s Department of Chemical Engineering.

A focused approach to imaging neural activity in the brain

When neurons fire an electrical impulse, they also experience a surge of calcium ions. By measuring those surges, researchers can indirectly monitor neuron activity, helping them to study the role of individual neurons in many different brain functions.

One drawback to this technique is the crosstalk generated by the axons and dendrites that extend from neighboring neurons, which makes it harder to get a distinctive signal from the neuron being studied. MIT engineers have now developed a way to overcome that issue, by creating calcium indicators, or sensors, that accumulate only in the body of a neuron.

“People are using calcium indicators for monitoring neural activity in many parts of the brain,” says Edward Boyden, the Y. Eva Tan Professor in Neurotechnology and a professor of biological engineering and of brain and cognitive sciences at MIT. “Now they can get better results, obtaining more accurate neural recordings that are less contaminated by crosstalk.”

To achieve this, the researchers fused a commonly used calcium indicator called GCaMP to a short peptide that targets it to the cell body. The new molecule, which the researchers call SomaGCaMP, can be easily incorporated into existing workflows for calcium imaging, the researchers say.

Boyden is the senior author of the study, which appears today in Neuron. The paper’s lead authors are Research Scientist Or Shemesh, postdoc Changyang Linghu, and former postdoc Kiryl Piatkevich.

Molecular focus

The GCaMP calcium indicator consists of a fluorescent protein attached to a calcium-binding protein called calmodulin, and a calmodulin-binding protein called M13 peptide. GCaMP fluoresces when it binds to calcium ions in the brain, allowing researchers to indirectly measure neuron activity.

“Calcium is easy to image, because it goes from a very low concentration inside the cell to a very high concentration when a neuron is active,” says Boyden, who is also a member of MIT’s McGovern Institute for Brain Research, Media Lab, and Koch Institute for Integrative Cancer Research.

The simplest way to detect these fluorescent signals is with a type of imaging called one-photon microscopy. This is a relatively inexpensive technique that can image large brain samples at high speed, but the downside is that it picks up crosstalk between neighboring neurons. GCaMP goes into all parts of a neuron, so signals from the axons of one neuron can appear as if they are coming from the cell body of a neighbor, making the signal less accurate.

A more expensive technique called two-photon microscopy can partly overcome this by focusing light very narrowly onto individual neurons, but this approach requires specialized equipment and is also slower.

Boyden’s lab decided to take a different approach, by modifying the indicator itself, rather than the imaging equipment.

“We thought, rather than optically focusing light, what if we molecularly focused the indicator?” he says. “A lot of people use hardware, such as two-photon microscopes, to clean up the imaging. We’re trying to build a molecular version of what other people do with hardware.”

In a related paper that was published last year, Boyden and his colleagues used a similar approach to reduce crosstalk between fluorescent probes that directly image neurons’ membrane voltage. In parallel, they decided to try a similar approach with calcium imaging, which is a much more widely used technique.

To target GCaMP exclusively to cell bodies of neurons, the researchers tried fusing GCaMP to many different proteins. They explored two types of candidates — naturally occurring proteins that are known to accumulate in the cell body, and human-designed peptides — working with MIT biology Professor Amy Keating, who is also an author of the paper. These synthetic proteins are coiled-coil proteins, which have a distinctive structure in which multiple helices of the proteins coil together.

Less crosstalk

The researchers screened about 30 candidates in neurons grown in lab dishes, and then chose two — one artificial coiled-coil and one naturally occurring peptide — to test in animals. Working with Misha Ahrens, who studies zebrafish at the Janelia Research Campus, they found that both proteins offered significant improvements over the original version of GCaMP. The signal-to-noise ratio — a measure of the strength of the signal compared to background activity — went up, and activity between adjacent neurons showed reduced correlation.
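
To make these two measures concrete, here is a minimal sketch of how they are commonly computed from fluorescence traces. This is an illustration only, not the analysis pipeline used in the study; the baseline window and the use of a simple Pearson correlation are assumptions for the example.

```python
import numpy as np

def snr(trace, baseline_frames=100):
    """Simple SNR: peak response above baseline, in units of baseline noise."""
    baseline = trace[:baseline_frames]
    return (trace.max() - baseline.mean()) / baseline.std()

def neighbor_correlation(traces):
    """Mean pairwise Pearson correlation across all pairs of neurons.

    traces: array of shape (n_neurons, n_frames), e.g. dF/F per neuron.
    Lower values suggest less crosstalk between neighboring cells.
    """
    corr = np.corrcoef(traces)                        # (n_neurons, n_neurons)
    upper = corr[np.triu_indices_from(corr, k=1)]     # unique pairs only
    return upper.mean()

# Toy usage with random data standing in for real dF/F traces.
rng = np.random.default_rng(0)
toy_traces = rng.normal(size=(50, 2000))
print(snr(toy_traces[0]), neighbor_correlation(toy_traces))
```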

In studies of mice, performed in the lab of Xue Han at Boston University, the researchers also found that the new indicators reduced the correlations between activity of neighboring neurons. Additional studies using a miniature microscope (called a microendoscope), performed in the lab of Kay Tye at the Salk Institute for Biological Studies, revealed a significant increase in signal-to-noise ratio with the new indicators.

“Our new indicator makes the signals more accurate. This suggests that the signals that people are measuring with regular GCaMP could include crosstalk,” Boyden says. “There’s the possibility of artifactual synchrony between the cells.”

In all of the animal studies, they found that the artificial, coiled-coil protein produced a brighter signal than the naturally occurring peptide that they tested. Boyden says it’s unclear why the coiled-coil proteins work so well, but one possibility is that they bind to each other, making them less likely to travel very far within the cell.

Boyden hopes to use the new molecules to try to image the entire brains of small animals such as worms and fish, and his lab is also making the new indicators available to any researchers who want to use them.

“It should be very easy to implement, and in fact many groups are already using it,” Boyden says. “They can just use the regular microscopes that they already are using for calcium imaging, but instead of using the regular GCaMP molecule, they can substitute our new version.”

The research was primarily funded by the National Institute of Mental Health and the National Institute of Drug Abuse, as well as a Director’s Pioneer Award from the National Institutes of Health, and by Lisa Yang, John Doerr, the HHMI-Simons Faculty Scholars Program, and the Human Frontier Science Program.

COMMANDing drug delivery

While we are starting to get a handle on drugs and therapeutics that might help alleviate brain disorders, efficient delivery remains a roadblock to tackling these devastating diseases. Research from the Graybiel, Cima, and Langer labs now uses a computational approach, one that accounts for the irregular shape of the target brain region, to deliver drugs effectively and specifically.

“Identifying therapeutic molecules that can treat neural disorders is just the first step,” says McGovern Investigator Ann Graybiel.

“There is still a formidable challenge when it comes to precisely delivering the therapeutic to the cells most affected in the disorder,” explains Graybiel, an MIT Institute Professor and a senior author on the paper. “Because the brain is so structurally complex, and subregions are irregular in shape, new delivery approaches are urgently needed.”

Fine targeting

Brain disorders often arise from dysfunction in specific regions. Parkinson’s disease, for example, arises from the loss of neurons in a specific forebrain region, the striatum. Targeting such structures is a major therapeutic goal, and it demands both crossing the blood-brain barrier and delivering the drug specifically to the structures affected by the disorder.

Such targeted therapy can potentially be achieved using intracerebral catheters. While this is a more specific form of delivery compared to systemic administration of a drug through the bloodstream, many brain regions are irregular in shape. This means that delivery throughout a specific brain region using a single catheter, while also limiting the spread of a given drug beyond the targeted area, is difficult. Indeed, intracerebral delivery of promising therapeutics has not led to the desired long-term alleviation of disorders.

“Accurate delivery of drugs to reach these targets is really important to ensure optimal efficacy and avoid off-target adverse effects. Our new system, called COMMAND, determines how best to dose targets,” says Michael Cima, senior author on the study and the David H. Koch Professor of Engineering in the Department of Materials Science and Engineering and a member of MIT’s Koch Institute for Integrative Cancer Research.

3D renderings of simulated multi-bolus delivery to various brain structures (striatum, amygdala, substantia nigra, and hippocampus) with one to four boluses.

COMMAND response

In the case of Parkinson’s disease, implants are available that limit symptoms, but these are only effective in a subset of patients. There are, however, a number of promising potential therapeutic treatments, such as GDNF administration, where long-term, precise delivery is needed to move the therapy forward.

The Graybiel, Cima, and Langer labs developed COMMAND (computational mapping algorithms for neural drug delivery), which helps to target a drug to a specific brain region at multiple sites (multi-bolus delivery).

“Many clinical trials are believed to have failed due to poor drug distribution following intracerebral injection,” explains Khalil Ramadi, PhD ’19, one of the lead researchers on the paper and a postdoctoral fellow at the Koch and McGovern Institutes. “We rationalized that both research experiments and clinical therapies would benefit from computationally optimized infusion, to enable greater consistency across groups and studies, as well as more efficacious therapeutic delivery.”

The COMMAND system balances the twin demands of drug delivery by maximizing on-target and minimizing off-target delivery. COMMAND is essentially an algorithm that minimizes two error terms: one reflects leakage of drug beyond the bounds of a specific target area, in this case the striatum, and the other captures the need to cover the whole of this irregularly shaped region. The strategy for meeting both demands is to deliver multiple “boluses” to different areas of the striatum, targeting the region precisely yet completely.

“COMMAND applies a simple principle when determining where to place the drug: Maximize the amount of drug falling within the target brain structure and minimize tissues exposed beyond the target region,” explains Ashvin Bashyam, PhD ’19, co-lead author and a former graduate student with Michael Cima at MIT. “This balance is specified based on drug properties such as minimum effective therapeutic concentration, toxicity, and diffusivity within brain tissue.”
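
As a toy illustration of that principle, the sketch below scores a simulated drug distribution by rewarding concentration inside the target and penalizing leakage outside it. It is a hypothetical simplification: the actual COMMAND algorithm, its error terms, and its model of drug spread are more sophisticated than this, and the function and variable names here are invented for the example.

```python
import numpy as np

def bolus_score(drug_map, target_mask, leakage_weight=1.0):
    """Score a simulated drug distribution against a target region.

    drug_map:    3D array of predicted drug concentration after infusion
                 (e.g. from a diffusion simulation of one or more boluses).
    target_mask: boolean 3D array marking the target structure (e.g. striatum).
    Returns a score that rewards coverage of the target and penalizes drug
    that leaks beyond it -- the two competing errors described in the article.
    """
    on_target = drug_map[target_mask].sum()
    off_target = drug_map[~target_mask].sum()
    return on_target - leakage_weight * off_target

# A placement search would then compare candidate bolus locations and counts, e.g.:
# best = max(candidate_placements, key=lambda p: bolus_score(simulate(p), mask))
```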

The number of drug delivery sites is kept as low as possible, keeping surgery simple while still providing enough flexibility to cover the target region. In computational simulations, the researchers were able to deliver drugs not only to compact brain structures, such as the striatum and amygdala, but also to broader and more irregular regions, such as the hippocampus.

To examine the spatiotemporal dynamics of actual delivery, the researchers used positron emission tomography (PET) with a solution “labeled” with the radioactive tracer Cu-64, which allowed them to image and follow a bolus after infusion through a microprobe. Using this system, the researchers successfully validated with PET the accuracy and coverage of COMMAND-guided multi-bolus delivery to the rat striatum.

“We anticipate that COMMAND can improve researchers’ ability to precisely target brain structures to better understand their function, and become a platform to standardize methods across neuroscience experiments,” explains Graybiel. “Beyond the lab, we hope COMMAND will lay the foundation to help bring multifocal, chronic drug delivery to patients.”

Optogenetics with SOUL

Optogenetics has revolutionized neurobiology, allowing researchers to use light to activate or deactivate neurons that are genetically modified to express a light-sensitive channel. This ability to manipulate neuron activity has allowed causal testing of the function of specific neurons, and also has therapeutic potential to reduce symptoms in brain disorders. However, activating neurons deep within the brain, whether a large primate brain or even a small mouse brain, is challenging and currently requires implanted fibers that can cause damage or inflammation.

McGovern Investigator Guoping Feng and colleagues have now overcome this challenge, developing optogenetic tools that allow non-invasive stimulation of neurons in the deep brain.

“Neuroscientists have dreamed of methods to turn neurons on and off, to understand the function of different neurons, but also to repair brain malfunctions that lead to psychiatric disorders, and optogenetics made this possible,” explained Feng, the James W. (1963) and Patricia T. Poitras Professor in Brain and Cognitive Sciences. “We were trying to improve the light sensitivity of optogenetic tools to broaden applications.”

Engineering with light

In order to stimulate neurons with minimal invasiveness, Feng and colleagues engineered a new type of opsin. The original breakthrough optogenetics protocol used channelrhodopsin, a light-sensitive channel discovered in algae. By expressing this channel in neurons, light of the right wavelength can be used to activate the neuron in a dish or in vivo. In vivo, however, the light must be delivered close to the brain region being stimulated, and when the target lies in the deep brain this has required implanting optical fibers, invasive hardware that can damage tissue and affect the behavior of the animal.

Our study creates a method that can activate any mouse brain region, independent of its location, non-invasively.

“Prior to our study, a few studies have contributed in various ways to the development of optogenetic stimulation methods that would be minimally invasive to the brain. However, all of these studies had various limitations in the extent of brain regions they could activate,” said co-senior study author Robert Desimone, director of the McGovern Institute and the Doris and Don Berkey Professor of Neuroscience at MIT.

Probing the brain with SOUL

Feng and colleagues turned instead to a new opsin, SOUL, which is sensitive to even very low levels of light. The Feng group engineered SOUL from SSFO, a second-generation optogenetics tool, to have increased light sensitivity, and took advantage of a second property: SOUL is activated in multiple steps, and once activated, it stays active longer than other commonly used opsins. This means that a burst of a few seconds of low-level light can cause neurons to stay active for 10-30 minutes.

To put SOUL through its paces, the Feng lab expressed the channel in the lateral hypothalamus of the mouse brain. This is a deep region, challenging to reach with light, but its neurons have well-defined functions whose activation produces clear changes in behavior. Feng’s group was able to turn on this region non-invasively with light from outside the skull, causing changes in feeding behavior.

“We were really surprised that SOUL was able to activate one of the deepest areas in the mouse brain, the lateral hypothalamus, which is 6 mm deep,” explains Feng.

But there were more surprises. When the authors activated a region of the primate brain using SOUL, they saw oscillations, waves of synchronized neuronal activity coming together like a choir. Such waves are believed to be important for many brain functions, and this result suggests that the new opsin can manipulate these brain waves, allowing scientists to study their role in the brain.

The authors are planning to move the study in several directions, studying models of brain disorders to identify circuits that may be suitable targets for therapy, as well as moving the methodology so that it can be used beyond the superficial cortex in larger animals. While it is too early to discuss applying the system to humans, the research brings us one step closer to future treatment of neurological disorders.

Researchers achieve remote control of hormone release

Abnormal levels of stress hormones such as adrenaline and cortisol are linked to a variety of mental health disorders, including depression and posttraumatic stress disorder (PTSD). MIT researchers have now devised a way to remotely control the release of these hormones from the adrenal gland, using magnetic nanoparticles.

This approach could help scientists to learn more about how hormone release influences mental health, and could eventually offer a new way to treat hormone-linked disorders, the researchers say.

“We’re looking at how we can study and eventually treat stress disorders by modulating peripheral organ function, rather than doing something highly invasive in the central nervous system,” says Polina Anikeeva, an MIT professor of materials science and engineering and of brain and cognitive sciences.

To achieve control over hormone release, Dekel Rosenfeld, an MIT-Technion postdoc in Anikeeva’s group, has developed specialized magnetic nanoparticles that can be injected into the adrenal gland. When exposed to a weak magnetic field, the particles heat up slightly, activating heat-responsive channels that trigger hormone release. This technique can be used to stimulate an organ deep in the body with minimal invasiveness.

Anikeeva and Alik Widge, an assistant professor of psychiatry at the University of Minnesota and a former research fellow at MIT’s Picower Institute for Learning and Memory, are the senior authors of the study. Rosenfeld is the lead author of the paper, which appears today in Science Advances.

Controlling hormones

Anikeeva’s lab has previously devised several novel magnetic nanomaterials, including particles that can release drugs at precise times in specific locations in the body.

In the new study, the research team wanted to explore the idea of treating disorders of the brain by manipulating organs that are outside the central nervous system but influence it through hormone release. One well-known example is the hypothalamic-pituitary-adrenal (HPA) axis, which regulates stress response in mammals. Hormones secreted by the adrenal gland, including cortisol and adrenaline, play important roles in depression, stress, and anxiety.

“Some disorders that we consider neurological may be treatable from the periphery, if we can learn to modulate those local circuits rather than going back to the global circuits in the central nervous system,” says Anikeeva, who is a member of MIT’s Research Laboratory of Electronics and McGovern Institute for Brain Research.

As a target to stimulate hormone release, the researchers decided on ion channels that control the flow of calcium into adrenal cells. Those ion channels can be activated by a variety of stimuli, including heat. When calcium flows through the open channels into adrenal cells, the cells begin pumping out hormones. “If we want to modulate the release of those hormones, we need to be able to essentially modulate the influx of calcium into adrenal cells,” Rosenfeld says.

Unlike previous research in Anikeeva’s group, in this study magnetothermal stimulation was applied to modulate the function of cells without artificially introducing any genes.

To stimulate these heat-sensitive channels, which naturally occur in adrenal cells, the researchers designed nanoparticles made of magnetite, a type of iron oxide that forms tiny magnetic crystals about 1/5000 the thickness of a human hair. In rats, they found these particles could be injected directly into the adrenal glands and remain there for at least six months. When the rats were exposed to a weak magnetic field — about 50 millitesla, 100 times weaker than the fields used for magnetic resonance imaging (MRI) — the particles heated up by about 6 degrees Celsius, enough to trigger the calcium channels to open without damaging any surrounding tissue.

The heat-sensitive channel that they targeted, known as TRPV1, is found in many sensory neurons throughout the body, including pain receptors. TRPV1 channels can be activated by capsaicin, the organic compound that gives chili peppers their heat, as well as by temperature: they open at roughly 42 to 43 degrees Celsius, so a local rise of about 6 degrees above body temperature is enough to reach their activation threshold. They are found across mammalian species, and belong to a family of many other channels that are also sensitive to heat.

This stimulation triggered a hormone rush — doubling cortisol production and boosting noradrenaline by about 25 percent. That led to a measurable increase in the animals’ heart rates.

Treating stress and pain

The researchers now plan to use this approach to study how hormone release affects PTSD and other disorders, and they say that eventually it could be adapted for treating such disorders. This method would offer a much less invasive alternative to potential treatments that involve implanting a medical device to electrically stimulate hormone release, which is not feasible in organs such as the adrenal glands that are soft and highly vascularized, the researchers say.

Another area where this strategy could hold promise is in the treatment of pain, because heat-sensitive ion channels are often found in pain receptors.

“Being able to modulate pain receptors with this technique potentially will allow us to study pain, control pain, and have some clinical applications in the future, which hopefully may offer an alternative to medications or implants for chronic pain,” Anikeeva says. With further investigation of the existence of TRPV1 in other organs, the technique can potentially be extended to other peripheral organs such as the digestive system and the pancreas.

The research was funded by the U.S. Defense Advanced Research Projects Agency ElectRx Program, a Bose Research Grant, the National Institutes of Health BRAIN Initiative, and an MIT-Technion fellowship.

How We Feel app to track spread of COVID-19 symptoms

A major challenge in containing the spread of COVID-19 in many countries has been the inability to quickly detect infection. Feng Zhang, Pinterest CEO Ben Silbermann, and collaborators across scientific and medical disciplines have come together to launch an app called How We Feel that allows citizen scientists to self-report symptoms.

“It is so important to find a way to connect scientists to fight this pandemic,” explained Zhang. “We wanted to find a fast and agile way to ultimately build a dynamic picture of symptoms associated with the virus.”

Designed to help scientists track and stop the spread of the novel coronavirus by creating an exchange of information between citizens and scientists at scale, the new How We Feel app does just this. The app lets people self-report symptoms in 30 seconds or less and see how others in their area are feeling. To protect user privacy, the app does not require an account sign-in, and it does not ask for identifying information such as the user’s name, phone number, or email address before they donate their data. The data shared by users has the potential to reveal and even predict outbreak hotspots, potentially providing insight into the spread and progression of COVID-19. To further contribute to the fight against COVID-19, Ben and Divya Silbermann will donate a meal to Feeding America for every download of the How We Feel app, up to 10 million meals.

The app was created by the How We Feel Project, a nonprofit collaboration between Silbermann, doctors, and an interdisciplinary group of researchers including Feng Zhang, investigator at the McGovern Institute for Brain Research, Broad Institute, and the James and Patricia Poitras Professor of Neuroscience at MIT. Other institutions currently involved include Harvard University T.H. Chan School of Public Health and Faculty of Arts and Sciences, University of Pennsylvania, Stanford University, University of Maryland School of Medicine, and the Weizmann Institute of Science.

Silbermann partnered closely with Feng Zhang, best known for his work on CRISPR, a pioneering gene-editing technology with the potential to treat disease. Zhang and Silbermann first met in high school in Iowa. As the outbreak grew in the US, they called each other to figure out how the fields of biochemistry and technology could come together to find a solution for the lack of reliable health data from testing.

“Since high school, my friend Feng Zhang and I have been talking about the potential of the internet to connect regular people and scientists for the public good,” said Ben Silbermann, co-founder and CEO of Pinterest. “When we saw how quickly COVID-19 was spreading, it felt like a critical moment to finally build that bridge between citizens and scientists that we’ve always wanted. I believe we’ve done that with How We Feel.”

Silbermann and Zhang formed the new HWF nonprofit because they believed a fully independent organization with a keen understanding of the needs of doctors and researchers should develop and manage the app. Now, they’re looking for opportunities to collaborate globally. Zhang is working to organize an international consortium of researchers from 11 countries that have developed similar health status surveys. The consortium is called the Coronavirus Census Collective (CCC).

The How We Feel app is available for download today in the US on iOS and Android, and via the web at http://www.howwefeel.org.

Protecting healthcare workers during the COVID-19 pandemic

“When the COVID-19 crisis hit the US this March, my biggest concern was the shortage of face masks, which are a key weapon for healthcare providers, frontline service workers, and the public to protect against respiratory transmission of COVID-19. In mid-March I kicked off a GoFundMe campaign for simple masks to protect frontline service workers, but when it was first announced that frontline healthcare providers were running short, I completed the campaign and joined groups of scientists and physicians working on N95 mask reuse in Boston (MGB Center for COVID Innovation) and nationwide (N95DECON). The N95DECON team used Zoom to connect volunteer scientists, engineers, clinicians, and students from across the US to address this problem.

I personally love Zoom meetings from home for many reasons. For one thing, you can meet people instantaneously from all over the world, with no need to travel at all. Also, it is less hierarchical than a typical conference, because everyone has the same place at the table rather than some people being relegated to ‘the back of the room.’

McGovern research scientist Jill Crittenden (top left) in a Zoom meeting with the Boston-based COVID-19 Innovation Center N95 Reuse team. Photo: Jill Crittenden

For two weeks, we met online daily and exchanged information, suggestions, and ideas in a free, open, and transparent way. We reviewed a large body of the information on N95 decontamination and deliberated different methods based on evidence from the scientific literature and available data. Our discussions followed the same principles I use in my own work in the Graybiel lab: exploring whether data are convincing, definitive, complete, and reproducible. I am so proud of our resulting report, which provides a summary of this critical information.

I am deeply committed to helping conserve and decontaminate the N95 masks that are essential for our healthcare workers to most safely treat COVID-19 patients. I know physicians personally who are very grateful that teams of scientists are doing the in-depth data analysis so that they can feel confident in what is best for their own health.”


Jill Crittenden is a research scientist in Ann Graybiel‘s lab at the McGovern Institute. She studies neural microcircuits in the basal ganglia that are relevant to Huntington’s and Parkinson’s diseases, dystonia, drug addiction, and disorders involving repetitive behaviors, such as autism and obsessive-compulsive disorder. Read more about her N95DECON project on our news site.

Jill has also developed a set of helpful guidelines for face masks (either purchased or DIY). She discussed these guidelines, among other COVID-19 related topics on the podcast Dear Discreet Guide.

#WeAreMcGovern

How the brain encodes landmarks that help us navigate

When we move through the streets of our neighborhood, we often use familiar landmarks to help us navigate. And as we think to ourselves, “OK, now make a left at the coffee shop,” a part of the brain called the retrosplenial cortex (RSC) lights up.

While many studies have linked this brain region with landmark-based navigation, exactly how it helps us find our way is not well-understood. A new study from MIT neuroscientists now reveals how neurons in the RSC use both visual and spatial information to encode specific landmarks.

“There’s a synthesis of some of these signals — visual inputs and body motion — to represent concepts like landmarks,” says Mark Harnett, an assistant professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research. “What we went after in this study is the neuron-level and population-level representation of these different aspects of spatial navigation.”

In a study of mice, the researchers found that this brain region creates a “landmark code” by combining visual information about the surrounding environment with spatial feedback of the mice’s own position along a track. Integrating these two sources of information allowed the mice to learn where to find a reward, based on landmarks that they saw.

“We believe that this code that we found, which is really locked to the landmarks, and also gives the animals a way to discriminate between landmarks, contributes to the animals’ ability to use those landmarks to find rewards,” says Lukas Fischer, an MIT postdoc and the lead author of the study.

Harnett is the senior author of the study, which appears today in the journal eLife. Other authors are graduate student Raul Mojica Soto-Albors and recent MIT graduate Friederike Buck.

Encoding landmarks

Previous studies have found that people with damage to the RSC have trouble finding their way from one place to another, even though they can still recognize their surroundings. The RSC is also one of the first areas affected in Alzheimer’s patients, who often have trouble navigating.

The RSC is wedged between the primary visual cortex and the motor cortex, and it receives input from both of those areas. It also appears to be involved in combining two types of representations of space — allocentric, meaning the relationship of objects to each other, and egocentric, meaning the relationship of objects to the viewer.

“The evidence suggests that RSC is really a place where you have a fusion of these different frames of reference,” Harnett says. “Things look different when I move around in the room, but that’s because my vantage point has changed. They’re not changing with respect to one another.”

In this study, the MIT team set out to analyze the behavior of individual RSC neurons in mice, including how they integrate multiple inputs that help with navigation. To do that, they created a virtual reality environment for the mice by allowing them to run on a treadmill while they watch a video screen that makes it appear they are running along a track. The speed of the video is determined by how fast the mice run.

At specific points along the track, landmarks appear, signaling that there’s a reward available a certain distance beyond the landmark. The mice had to learn to distinguish between two different landmarks, and to learn how far beyond each one they had to run to get the reward.

Once the mice learned the task, the researchers recorded neural activity in the RSC as the animals ran along the virtual track. They were able to record from a few hundred neurons at a time, and found that most of them anchored their activity to a specific aspect of the task.

There were three primary anchoring points: the beginning of the trial, the landmark, and the reward point. The majority of the neurons were anchored to the landmarks, meaning that their activity would consistently peak at a specific point relative to the landmark, say 50 centimeters before it or 20 centimeters after it.

Most of those neurons responded to both of the landmarks, but a small subset responded to only one or the other. The researchers hypothesize that those strongly selective neurons help the mice to distinguish between the landmarks and run the correct distance to get the reward.

When the researchers used optogenetics (a tool that can turn off neuron activity) to block activity in the RSC, the mice’s performance on the task became much worse.

Combining inputs

The researchers also did an experiment in which the mice could choose to run or not while the video played at a constant speed, unrelated to the mice’s movement. The mice could still see the landmarks, but the location of the landmarks was no longer linked to a reward or to the animals’ own behavior. In that situation, RSC neurons did respond to the landmarks, but not as strongly as they did when the mice were using them for navigation.

Further experiments allowed the researchers to tease out just how much neuron activation is produced by visual input (seeing the landmarks) and by feedback on the mouse’s own movement. However, simply adding those two numbers yielded totals much lower than the neuron activity seen when the mice were actively navigating the track.

“We believe that is evidence for a mechanism of nonlinear integration of these inputs, where they get combined in a way that creates a larger response than what you would get if you just added up those two inputs in a linear fashion,” Fischer says.
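
A simple way to express that comparison, shown below as an illustrative sketch rather than the authors’ actual analysis, is to take the ratio of the response measured during active navigation to the sum of the responses measured in the visual-only and movement-only conditions; a ratio well above 1 indicates supralinear integration.

```python
import numpy as np

def nonlinearity_index(active, visual_only, motor_only):
    """Ratio of the measured response to the linear prediction.

    Each argument holds per-neuron mean responses (e.g. peak dF/F) under one
    condition. A ratio well above 1 indicates supralinear (nonlinear)
    integration of the visual and movement inputs.
    """
    active = np.asarray(active, dtype=float)
    linear_prediction = np.asarray(visual_only, dtype=float) + np.asarray(motor_only, dtype=float)
    return active / linear_prediction

# Toy example: measured responses exceed the sum of the component responses.
print(nonlinearity_index([1.2, 0.9], [0.3, 0.25], [0.4, 0.3]))  # -> values above 1
```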

The researchers now plan to analyze data that they have already collected on how neuron activity evolves over time as the mice learn the task. They also hope to perform further experiments in which they could try to separately measure visual and spatial inputs into different locations within RSC neurons.

The research was funded by the National Institutes of Health, the McGovern Institute, the NEC Corporation Fund for Research in Computers and Communications at MIT, and the Klingenstein-Simons Fellowship in Neuroscience.

2020 MacVicar Faculty Fellows named

The Office of the Vice Chancellor and the Registrar’s Office have announced this year’s Margaret MacVicar Faculty Fellows: materials science and engineering Professor Polina Anikeeva, literature Professor Mary Fuller, chemical engineering Professor William Tisdale, and electrical engineering and computer science Professor Jacob White.

Role models both in and out of the classroom, the new fellows have tirelessly sought to improve themselves, their students, and the Institute writ large. They have reimagined curricula, crossed disciplines, and pushed the boundaries of what education can be. They join a matchless academy of scholars committed to exceptional instruction and innovation.

Vice Chancellor Ian Waitz will honor the fellows at this year’s MacVicar Day symposium, “Learning through Experience: Education for a Fulfilling and Engaged Life.” In a series of lightning talks, student and faculty speakers will examine how MIT — through its many opportunities for experiential learning — supports students’ aspirations and encourages them to become engaged citizens and thoughtful leaders.

The event will be held on March 13 from 2:30-4 p.m. in Room 6-120. A reception will follow in Room 2-290. All in the MIT community are welcome to attend.

For nearly three decades, the MacVicar Faculty Fellows Program has been recognizing exemplary undergraduate teaching and advising around the Institute. The program was named after Margaret MacVicar, the first dean for undergraduate education and founder of the Undergraduate Research Opportunities Program (UROP). Nominations are made by departments and include letters of support from colleagues, students, and alumni. Fellows are appointed to 10-year terms in which they receive $10,000 per year of discretionary funds.

Polina Anikeeva

“I’m speechless,” Polina Anikeeva, associate professor of materials science and engineering and brain and cognitive sciences, says of becoming a MacVicar Fellow. “In my opinion, this is the greatest honor one could have at MIT.”

Anikeeva received her PhD from MIT in 2009 and became a professor in the Department of Materials Science and Engineering two years later. She attended St. Petersburg State Polytechnic University for her undergraduate education. Through her research — which combines materials science, electronics, and neurobiology — she works to better understand and treat brain disorders.

Anikeeva’s colleague Christopher Schuh says, “Her ability and willingness to work with students however and whenever they need help, her engaging classroom persona, and her creative solutions to real-time challenges all culminate in one of MIT’s most talented and beloved undergraduate professors.”

As an instructor, advisor, and marathon runner, Anikeeva has learned the importance of finding balance. Her colleague Lionel Kimerling reflects on this delicate equilibrium: “As a teacher, Professor Anikeeva is among the elite who instruct, inspire, and nurture at the same time. It is a difficult task to demand rigor with a gentle mentoring hand.”

Students call her classes “incredibly hard” but fun and exciting at the same time. She is “the consummate scientist, splitting her time evenly between honing her craft, sharing knowledge with students and colleagues, and mentoring aspiring researchers,” wrote one.

Her passion for her work and her devotion to her students are evident in the nomination letters. One student recounted their first conversation: “We spoke for 15 minutes, and after talking to her about her research and materials science, I had never been so viscerally excited about anything.” This same student described the guidance and support Anikeeva provided her throughout her time at MIT.

After working with Anikeeva to apply what she learned in the classroom to a real-world problem, this student recalled, “I honestly felt like an engineer and a scientist for the first time ever. I have never felt so fulfilled and capable. And I realize that’s what I want for the rest of my life — to feel the highs and lows of discovery.”

Anikeeva champions her students in faculty and committee meetings as well. She is a “reliable advocate for student issues,” says Caroline Ross, associate department head and professor in DMSE. “Professor Anikeeva is always engaged with students, committed to student well-being, and passionate about education.”

“Undergraduate teaching has always been a crucial part of my MIT career and life,” Anikeeva reflects. “I derive my enthusiasm and energy from the incredibly talented MIT students — every year they surprise me with their ability to rise to ever-expanding intellectual challenges. Watching them grow as scientists, engineers, and — most importantly — people is like nothing else.”

Mary Fuller

Experimentation is synonymous with education at MIT and it is a crucial part of literature Professor Mary Fuller’s classes. As her colleague Arthur Bahr notes, “Mary’s habit of starting with a discrete practical challenge can yield insights into much broader questions.”

Fuller attended Dartmouth College as an undergraduate, then received both her MA and PhD in English and American literature from The Johns Hopkins University. She began teaching at MIT in 1989. From 2013 to 2019, Fuller was head of the Literature Section. Her successor in the role, Shankar Raman, says that her nominators “found [themselves] repeatedly surprised by the different ways Mary has pushed the limits of her teaching here, going beyond her own comfort zones to experiment with new texts and techniques.”

“Probably the most significant thing I’ve learned in 30 years of teaching here is how to ask more and better questions,” says Fuller. As part of a series of discussions on ethics and computing, she has explored the possibilities of artificial intelligence from a literary perspective. She is also developing a tool for the edX platform called PoetryViz, which would allow MIT students and students around the world to practice close reading through poetry annotation in an entirely new way.

“We all innovate in our teaching. Every year. But, some of us innovate more than others,” Krishna Rajagopal, dean for digital learning, observes. “In addition to being an outstanding innovator, Mary is one of those colleagues who weaves the fabric of undergraduate education across the Institute.”

Lessons learned in Fuller’s class also underline the importance of a well-rounded education. As one alumna reflected, “Mary’s teaching carried a compassion and ethic which enabled non-humanities students to appreciate literature as a diverse, valuable, and rewarding resource for personal and social reflection.”

Professor Fuller, another student remarked, has created “an environment where learning is not merely the digestion of rote knowledge, but instead the broad-based exploration of ideas and the works connected to them.”

“Her imagination is capacious, her knowledge is deep, and students trust her — so that they follow her eagerly into new and exploratory territory,” says Professor of Literature Stephen Tapscott.

Fuller praises her students’ willingness to take that journey with her, saying, “None of my classes are required, and none are technical, so I feel that students have already shown a kind of intellectual generosity by putting themselves in the room to do the work.”

For students, the hard work is worth it. Mary Fuller, one nominator declared, is exactly “the type of deeply impactful professor that I attended MIT hoping to learn from.”

William Tisdale

William Tisdale is the ARCO Career Development Professor of chemical engineering and, according to his colleagues, a “true star” in the department.

A member of the faculty since 2012, he received his undergraduate degree from the University of Delaware and his PhD from the University of Minnesota. After a year as a postdoc at MIT, Tisdale became an assistant professor. His research interests include nanotechnology and energy transport.

Tisdale’s colleague Kristala Prather calls him a “curriculum fixer.” During an internal review of Course 10 subjects, the department discovered that 10.213 (Chemical and Biological Engineering) was the least popular subject in the major and needed to be revised. After carefully evaluating the coursework, and despite having never taught 10.213 himself, Tisdale envisioned a novel way of teaching it. With his suggestions, the class went from being “despised” to loved, with subject evaluations improving by 70 percent from one spring to the next. “I knew Will could make a difference, but I had no idea he could make that big of a difference in just one year,” remarks Prather.

One student nominator even went so far as to call 10.213, as taught by Tisdale, “one of my best experiences at MIT.”

Always patient, kind, and adaptable, Tisdale’s willingness to tackle difficult problems is reflected in his teaching. “While the class would occasionally start to mutiny when faced with a particularly confusing section, Prof. Tisdale would take our groans on with excitement,” wrote one student. “His attitude made us feel like we could all get through the class together.” Regardless of how they performed on a test, wrote another, Tisdale “clearly sent the message that we all always have so much more to learn, but that first and foremost he respected you as a person.”

“I don’t think I could teach the way I teach at many other universities,” Tisdale says. “MIT students show up on the first day of class with an innate desire to understand the world around them; all I have to do is pull back the curtain!”

“Professor Tisdale remains the best teacher, mentor, and role model that I have encountered,” one student remarked. “He has truly changed the course of my life.”

“I am extremely thankful to be at a university that values undergraduate education so highly,” Tisdale says. “Those of us who devote ourselves to undergraduate teaching and mentoring do so out of a strong sense of responsibility to the students as well as a genuine love of learning. There are few things more validating than being rewarded for doing something that already brings you joy.”

Jacob White

Jacob White is the Cecil H. Green Professor of Electrical Engineering and Computer Science (EECS) and chair of the Committee on Curricula. After completing his undergraduate degree at MIT, he received a master’s degree and doctorate from the University of California at Berkeley. He has been a member of the Course 6 faculty since 1987.

Colleagues and students alike observed White’s dedication not just to teaching, but to improving teaching throughout the Institute. As Luca Daniel and Asu Ozdaglar of the EECS department noted in their nomination letter, “Jacob completely understands that the most efficient way to make his passion and ideas for undergraduate education have a real lasting impact is to ‘teach it to the teachers!’”

One student wrote that White “has spent significant time and effort educating the lab assistants” of 6.302 (Feedback System Design). As one of these teaching assistants confirmed, White’s “enthusiastic spirit” inspired them to spend hours discussing how to best teach the subject. “Many people might think this is not how they want to spend their Thursday nights,” the student wrote. “I can speak for myself and the other TAs when I say that it was an incredibly fun and educational experience.”

His work to improve instruction has even expanded to other departments. A colleague describes White’s efforts to revamp 8.02 (Physics II) as “Herculean.” Working with a group of students and postdocs to develop experiments for this subject, “he seemed to be everywhere at once … while simultaneously teaching his own class.” Iterations took place over a year and a half, after which White trained the subject’s TAs as well. Hundreds of students are benefitting from these improved experiments.

White is, according to Daniel and Ozdaglar, “a colleague who sincerely, genuinely, and enormously cares about our undergraduate students and their education, not just in our EECS department, but also in our entire MIT home.”

When he’s not fine-tuning pedagogy or conducting teacher training, he is personally supporting his students. A visiting student described White’s attention: “He would regularly meet with us in groups of two to make sure we were learning. In a class of about 80 students in a huge lecture hall, it really felt like he cared for each of us.”

And his zeal has rubbed off: “He made me feel like being excited about the material was the most important thing,” one student wrote.

The significance of such a spark is not lost on White.

“As an MIT freshman in the late 1970s, I joined an undergraduate research program being pioneered by Professor Margaret MacVicar,” he says. “It was Professor MacVicar and UROP that put me on the academic’s path of looking for interesting problems with instructive solutions. It is a path I have walked for decades, with extraordinary colleagues and incredible students. So, being selected as a MacVicar Fellow? No honor could mean more to me.”