How Huntington’s disease affects different neurons

In patients with Huntington’s disease, neurons in a part of the brain called the striatum are among the hardest-hit. Degeneration of these neurons contributes to patients’ loss of motor control, which is one of the major hallmarks of the disease.

Neuroscientists at MIT have now shown that two distinct cell populations in the striatum are affected differently by Huntington’s disease. They believe that neurodegeneration of one of these populations leads to motor impairments, while damage to the other population, located in structures called striosomes, may account for the mood disorders that are often seen in the early stages of the disease.

“As many as 10 years ahead of the motor diagnosis, Huntington’s patients can experience mood disorders, and one possibility is that the striosomes might be involved in these,” says Ann Graybiel, an MIT Institute Professor, a member of MIT’s McGovern Institute for Brain Research, and one of the senior authors of the study.

Using single-cell RNA sequencing to analyze the genes expressed in mouse models of Huntington’s disease and postmortem brain samples from Huntington’s patients, the researchers found that cells of the striosomes and another structure, the matrix, begin to lose their distinguishing features as the disease progresses. The researchers hope that their mapping of the striatum and how it is affected by Huntington’s could help lead to new treatments that target specific cells within the brain.

This kind of analysis could also shed light on other brain disorders that affect the striatum, such as Parkinson’s disease and autism spectrum disorder, the researchers say.

Myriam Heiman, an associate professor in MIT’s Department of Brain and Cognitive Sciences and a member of the Picower Institute for Learning and Memory, and Manolis Kellis, a professor of computer science in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and a member of the Broad Institute of MIT and Harvard, are also senior authors of the study. Ayano Matsushima, a McGovern Institute research scientist, and Sergio Sebastian Pineda, an MIT graduate student, are the lead authors of the paper, which appears in Nature Communications.

Neuron vulnerability

Huntington’s disease leads to degeneration of brain structures called the basal ganglia, which are responsible for control of movement and also play roles in other behaviors, as well as in emotions. For many years, Graybiel has been studying the striatum, a part of the basal ganglia that is involved in making decisions that require evaluating the outcomes of a particular action.

Many years ago, Graybiel discovered that the striatum is divided into striosomes, which are clusters of neurons, and the matrix, which surrounds the striosomes. She has also shown that striosomes are necessary for making decisions that require an anxiety-provoking cost-benefit analysis.

In a 2007 study, Richard Faull of the University of Auckland discovered that in postmortem brain tissue from Huntington’s patients, the striosomes showed a great deal of degeneration. Faull also found that while those patients were alive, many of them had shown signs of mood disorders such as depression before their motor symptoms developed.

To further explore the connections between the striatum and the mood and motor effects of Huntington’s, Graybiel teamed up with Kellis and Heiman to study the gene expression patterns of striosomal and matrix cells. To do that, the researchers used single-cell RNA sequencing to analyze human brain samples and brain tissue from two mouse models of Huntington’s disease.

Within the striatum, neurons can be classified as either D1 or D2 neurons. D1 neurons are involved in the “go” pathway, which initiates an action, and D2 neurons are part of the “no-go” pathway, which suppresses an action. D1 and D2 neurons can both be found within either the striosomes or the matrix.

The analysis of RNA expression in each of these types of cells revealed that striosomal neurons are harder hit by Huntington’s than matrix neurons. Furthermore, within the striosomes, D2 neurons are more vulnerable than D1 neurons.

The researchers also found that these four major cell types begin to lose their distinguishing molecular identities and become more difficult to tell apart in Huntington’s disease. “Overall, the distinction between striosomes and matrix becomes really blurry,” Graybiel says.

Striosomal disorders

The findings suggest that damage to the striosomes, which are known to be involved in regulating mood, may be responsible for the mood disorders that strike Huntington’s patients in the early stages of the disease. Later on, degeneration of the matrix neurons likely contributes to the decline of motor function, the researchers say.

In future work, the researchers hope to explore how degeneration or abnormal gene expression in the striosomes may contribute to other brain disorders.

Previous research has shown that overactivity of striosomes can lead to the development of repetitive behaviors such as those seen in autism, obsessive compulsive disorder, and Tourette’s syndrome. In this study, at least one of the genes that the researchers found to be overexpressed in the striosomes of Huntington’s brains is also linked to autism.

Additionally, many striosome neurons project to the part of the brain that is most affected by Parkinson’s disease (the substantia nigra, which produces most of the brain’s dopamine).

“There are many, many disorders that probably involve the striatum, and now, partly through transcriptomics, we’re working to understand how all of this could fit together,” Graybiel says.

The research was funded by the Saks Kavanaugh Foundation, the CHDI Foundation, the National Institutes of Health, the Nancy Lurie Marks Family Foundation, the Simons Foundation, the JPB Foundation, the Kristin R. Pressman and Jessica J. Pourian ’13 Fund, and Robert Buxton.

Self-assembling proteins can store cellular “memories”

As cells perform their everyday functions, they turn on a variety of genes and cellular pathways. MIT engineers have now coaxed cells to inscribe the history of these events in a long protein chain that can be imaged using a light microscope.

Cells programmed to produce these chains continuously add building blocks that encode particular cellular events. Later, the ordered protein chains can be labeled with fluorescent molecules and read under a microscope, allowing researchers to reconstruct the timing of the events.

This technique could help shed light on the steps that underlie processes such as memory formation, response to drug treatment, and gene expression.

“There are a lot of changes that happen at organ or body scale, over hours to weeks, which cannot be tracked over time,” says Edward Boyden, the Y. Eva Tan Professor in Neurotechnology, a professor of biological engineering and brain and cognitive sciences at MIT, a Howard Hughes Medical Institute investigator, and a member of MIT’s McGovern Institute for Brain Research and Koch Institute for Integrative Cancer Research.

If the technique could be extended to work over longer time periods, it could also be used to study processes such as aging and disease progression, the researchers say.

Boyden is the senior author of the study, which appears today in Nature Biotechnology. Changyang Linghu, a former J. Douglas Tan Postdoctoral Fellow at the McGovern Institute, who is now an assistant professor at the University of Michigan, is the lead author of the paper.

Cellular history

Biological systems such as organs contain many different kinds of cells, all of which have distinctive functions. One way to study these functions is to image proteins, RNA, or other molecules inside the cells, which provide hints to what the cells are doing. However, most methods for doing this offer only a glimpse of a single moment in time, or don’t work well with very large populations of cells.

“Biological systems are often composed of a large number of different types of cells. For example, the human brain has 86 billion cells,” Linghu says. “To understand those kinds of biological systems, we need to observe physiological events over time in these large cell populations.”

To achieve that, the research team came up with the idea of recording cellular events as a series of protein subunits that are continuously added to a chain. To create their chains, the researchers used engineered protein subunits, not normally found in living cells, that can self-assemble into long filaments.

The researchers designed a genetically encoded system in which one of these subunits is continuously produced inside cells, while the other is generated only when a specific event occurs. Each subunit also contains a very short peptide called an epitope tag — in this case, the researchers chose tags called HA and V5. Each of these tags can bind to a different fluorescent antibody, making it easy to visualize the tags later on and determine the sequence of the protein subunits.

For this study, the researchers made production of the V5-containing subunit contingent on the activation of a gene called c-fos, which is involved in encoding new memories. HA-tagged subunits make up most of the chain, but whenever the V5 tag shows up in the chain, that means that c-fos was activated during that time.

“We’re hoping to use this kind of protein self-assembly to record activity in every single cell,” Linghu says. “It’s not only a snapshot in time, but also records past history, just like how tree rings can permanently store information over time as the wood grows.”
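Linghu’s tree-ring analogy suggests how such a chain might eventually be read out. The sketch below is purely illustrative and is not the authors’ analysis code: the tag names match the paper, but the decoding function and the assumption of a roughly constant growth rate are ours. If the chain grows at a steady rate, a subunit’s position along the chain maps to the time it was incorporated.

```python
# Illustrative sketch (not from the study): decoding event timing from
# an ordered chain of epitope tags, assuming the chain grows at a
# roughly constant, known rate so that position maps to time.

def decode_chain(tags, minutes_per_subunit):
    """Given the ordered tag sequence read from a chain (e.g. by
    fluorescent antibody labeling), return approximate times at which
    the event-driven tag ('V5') was incorporated."""
    return [i * minutes_per_subunit
            for i, tag in enumerate(tags) if tag == "V5"]

# 'HA' is the constitutively produced subunit; 'V5' marks c-fos activation.
chain = ["HA", "HA", "V5", "HA", "HA", "HA", "V5", "HA"]
print(decode_chain(chain, minutes_per_subunit=30))  # [60, 180]
```

Here two V5 subunits at positions 2 and 6 imply c-fos activity roughly 60 and 180 minutes after the chain began growing, under the constant-rate assumption.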

Recording events

In this study, the researchers first used their system to record activation of c-fos in neurons growing in a lab dish. The c-fos gene was activated by chemically induced activation of the neurons, which caused the V5 subunit to be added to the protein chain.

To explore whether this approach could work in the brains of animals, the researchers programmed brain cells of mice to generate protein chains that would reveal when the animals were exposed to a particular drug. Later, the researchers were able to detect that exposure by preserving the tissue and analyzing it with a light microscope.

The researchers designed their system to be modular, so that different epitope tags can be swapped in, or different types of cellular events can be detected, including, in principle, cell division or activation of enzymes called protein kinases, which help control many cellular pathways.

The researchers also hope to extend the recording period that they can achieve. In this study, they recorded events for several days before imaging the tissue. There is a tradeoff between the amount of time that can be recorded and the time resolution, or frequency of event recording, because the length of the protein chain is limited by the size of the cell.

“The total amount of information it could store is fixed, but we could in principle slow down or increase the speed of the growth of the chain,” Linghu says. “If we want to record for a longer time, we could slow down the synthesis so that it will reach the size of the cell within, let’s say two weeks. In that way we could record longer, but with less time resolution.”

The researchers are also working on engineering the system so that it can record multiple types of events in the same chain, by increasing the number of different subunits that can be incorporated.

The research was funded by the Hock E. Tan and K. Lisa Yang Center for Autism Research, John Doerr, the National Institutes of Health, the National Science Foundation, the U.S. Army Research Office, and the Howard Hughes Medical Institute.

New sensor uses MRI to detect light deep in the brain

Using a specialized MRI sensor, MIT researchers have shown that they can detect light deep within tissues such as the brain.

Imaging light in deep tissues is extremely difficult because as light travels into tissue, much of it is either absorbed or scattered. The MIT team overcame that obstacle by designing a sensor that converts light into a magnetic signal that can be detected by MRI (magnetic resonance imaging).

This type of sensor could be used to map light emitted by optical fibers implanted in the brain, such as the fibers used to stimulate neurons during optogenetic experiments. With further development, it could also prove useful for monitoring patients who receive light-based therapies for cancer, the researchers say.

“We can image the distribution of light in tissue, and that’s important because people who use light to stimulate tissue or to measure from tissue often don’t quite know where the light is going, where they’re stimulating, or where the light is coming from. Our tool can be used to address those unknowns,” says Alan Jasanoff, an MIT professor of biological engineering, brain and cognitive sciences, and nuclear science and engineering.

Jasanoff, who is also an associate investigator at MIT’s McGovern Institute for Brain Research, is the senior author of the study, which appears today in Nature Biomedical Engineering. Jacob Simon PhD ’21 and MIT postdoc Miriam Schwalm are the paper’s lead authors, and Johannes Morstein and Dirk Trauner of New York University are also authors of the paper.

A light-sensitive probe

Scientists have been using light to study living cells for hundreds of years, dating back to the late 1500s, when the light microscope was invented. This kind of microscopy allows researchers to peer inside cells and thin slices of tissue, but not deep inside an organism.

“One of the persistent problems in using light, especially in the life sciences, is that it doesn’t do a very good job penetrating many materials,” Jasanoff says. “Biological materials absorb light and scatter light, and the combination of those things prevents us from using most types of optical imaging for anything that involves focusing in deep tissue.”

To overcome that limitation, Jasanoff and his students decided to design a sensor that could transform light into a magnetic signal.

“We wanted to create a magnetic sensor that responds to light locally, and therefore is not subject to absorbance or scattering. Then this light detector can be imaged using MRI,” he says.

Jasanoff’s lab has previously developed MRI probes that can interact with a variety of molecules in the brain, including dopamine and calcium. When these probes bind to their targets, it affects the sensors’ magnetic interactions with the surrounding tissue, dimming or brightening the MRI signal.

To make a light-sensitive MRI probe, the researchers decided to encase magnetic particles in a nanoparticle called a liposome. The liposomes used in this study are made from specialized light-sensitive lipids that Trauner had previously developed. When these lipids are exposed to a certain wavelength of light, the liposomes become more permeable to water, or “leaky.” This allows the magnetic particles inside to interact with water and generate a signal detectable by MRI.

The particles, which the researchers called liposomal nanoparticle reporters (LisNR), can switch from permeable to impermeable depending on the type of light they’re exposed to. In this study, the researchers created particles that become leaky when exposed to ultraviolet light, and then become impermeable again when exposed to blue light. The researchers also showed that the particles could respond to other wavelengths of light.

“This paper shows a novel sensor to enable photon detection with MRI through the brain. This illuminating work introduces a new avenue to bridge photon and proton-driven neuroimaging studies,” says Xin Yu, an assistant professor of radiology at Harvard Medical School, who was not involved in the study.

Mapping light

The researchers tested the sensors in the brains of rats — specifically, in a part of the brain called the striatum, which is involved in planning movement and responding to reward. After injecting the particles throughout the striatum, the researchers were able to map the distribution of light from an optical fiber implanted nearby.

The fiber they used is similar to those used for optogenetic stimulation, so this kind of sensing could be useful to researchers who perform optogenetic experiments in the brain, Jasanoff says.

“We don’t expect that everybody doing optogenetics will use this for every experiment — it’s more something that you would do once in a while, to see whether a paradigm that you’re using is really producing the profile of light that you think it should be,” Jasanoff says.

In the future, this type of sensor could also be useful for monitoring patients receiving treatments that involve light, such as photodynamic therapy, which uses light from a laser or LED to kill cancer cells.

The researchers are now working on similar probes that could be used to detect light emitted by luciferases, a family of glowing proteins that are often used in biological experiments. These proteins can be used to reveal whether a particular gene is activated or not, but currently they can only be imaged in superficial tissue or cells grown in a lab dish.

Jasanoff also hopes to use the strategy used for the LisNR sensor to design MRI probes that can detect stimuli other than light, such as neurochemicals or other molecules found in the brain.

“We think that the principle that we use to construct these sensors is quite broad and can be used for other purposes too,” he says.

The research was funded by the National Institutes of Health, the G. Harold and Leila Y. Mathers Foundation, a Friends of the McGovern Fellowship from the McGovern Institute for Brain Research, the MIT Neurobiological Engineering Training Program, and a Marie Curie Individual Fellowship from the European Commission.

This is your brain. This is your brain on code

Functional magnetic resonance imaging (fMRI), which measures changes in blood flow throughout the brain, has been used over the past couple of decades for a variety of applications, including “functional anatomy” — a way of determining which brain areas are switched on when a person carries out a particular task. fMRI has been used to look at people’s brains while they’re doing all sorts of things — working out math problems, learning foreign languages, playing chess, improvising on the piano, doing crossword puzzles, and even watching TV shows like “Curb Your Enthusiasm.”

One pursuit that’s received little attention is computer programming — both the chore of writing code and the equally confounding task of trying to understand a piece of already-written code. “Given the importance that computer programs have assumed in our everyday lives,” says Shashank Srikant, a PhD student in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), “that’s surely worth looking into. So many people are dealing with code these days — reading, writing, designing, debugging — but no one really knows what’s going on in their heads when that happens.” Fortunately, he has made some “headway” in that direction in a paper — written with MIT colleagues Benjamin Lipkin (the paper’s other lead author, along with Srikant), Anna Ivanova, Evelina Fedorenko, and Una-May O’Reilly — that was presented earlier this month at the Neural Information Processing Systems Conference held in New Orleans.

The new paper built on a 2020 study, written by many of the same authors, which used fMRI to monitor the brains of programmers as they “comprehended” small pieces, or snippets, of code. (Comprehension, in this case, means looking at a snippet and correctly determining the result of the computation performed by the snippet.) The 2020 work showed that code comprehension did not consistently activate the language system, brain regions that handle language processing, explains Fedorenko, a brain and cognitive sciences (BCS) professor and a coauthor of the earlier study. “Instead, the multiple demand network — a brain system that is linked to general reasoning and supports domains like mathematical and logical thinking — was strongly active.” The current work, which also utilizes MRI scans of programmers, takes “a deeper dive,” she says, seeking to obtain more fine-grained information.

Whereas the previous study looked at 20 to 30 people to determine which brain systems, on average, are relied upon to comprehend code, the new research looks at the brain activity of individual programmers as they process specific elements of a computer program. Suppose, for instance, that there’s a one-line piece of code that involves word manipulation and a separate piece of code that entails a mathematical operation. “Can I go from the activity we see in the brains, the actual brain signals, to try to reverse-engineer and figure out what, specifically, the programmer was looking at?” Srikant asks. “This would reveal what information pertaining to programs is uniquely encoded in our brains.” To neuroscientists, he notes, a physical property is considered “encoded” if they can infer that property by looking at someone’s brain signals.

Take, for instance, a loop — an instruction within a program to repeat a specific operation until the desired result is achieved — or a branch, a different type of programming instruction that can cause the computer to switch from one operation to another. Based on the patterns of brain activity that were observed, the group could tell whether someone was evaluating a piece of code involving a loop or a branch. The researchers could also tell whether the code related to words or mathematical symbols, and whether someone was reading actual code or merely a written description of that code.
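Decoding analyses of this kind are typically framed as classification: if a held-out classifier can predict a code property from brain activity better than chance, that property counts as encoded. The sketch below is a hypothetical illustration, not the paper’s pipeline — the voxel counts, effect sizes, and synthetic data are all invented.

```python
# Hypothetical decoding sketch: predict, from a pattern of (synthetic)
# voxel responses, whether a programmer was reading a loop or a branch.
# Data layout and numbers are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 50

labels = rng.integers(0, 2, n_trials)        # 0 = loop, 1 = branch
signal = np.zeros((n_trials, n_voxels))
signal[:, :10] = labels[:, None] * 1.5       # 10 voxels carry the signal
data = signal + rng.normal(size=(n_trials, n_voxels))

# If the property is "encoded", a decoder beats chance on held-out trials.
scores = cross_val_score(LogisticRegression(max_iter=1000),
                         data, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.2f}")  # well above 0.5 chance
```

Real analyses add corrections for confounds and compare against carefully matched control conditions, but the logic is the same: above-chance held-out accuracy is the evidence that the brain signal carries the information.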

That addressed the first question an investigator might ask: is something, in fact, encoded? If the answer is yes, the next question might be: where is it encoded? In the above-cited cases — loops or branches, words or math, code or a description thereof — brain activation levels were found to be comparable in both the language system and the multiple demand network.

A noticeable difference was observed, however, when it came to code properties related to what’s called dynamic analysis.

Programs can have “static” properties — such as the number of numerals in a sequence — that do not change over time. “But programs can also have a dynamic aspect, such as the number of times a loop runs,” Srikant says. “I can’t always read a piece of code and know, in advance, what the run time of that program will be.” The MIT researchers found that for dynamic analysis, information is encoded much better in the multiple demand network than it is in the language processing center. That finding was one clue in their quest to see how code comprehension is distributed throughout the brain — which parts are involved and which ones assume a bigger role in certain aspects of that task.

The team carried out a second set of experiments, which incorporated machine learning models called neural networks that were specifically trained on computer programs. These models have been successful, in recent years, in helping programmers complete pieces of code. What the group wanted to find out was whether the brain signals seen in their study when participants were examining pieces of code resembled the patterns of activation observed when neural networks analyzed the same piece of code. And the answer they arrived at was a qualified yes.

“If you put a piece of code into the neural network, it produces a list of numbers that tells you, in some way, what the program is all about,” Srikant says. Brain scans of people studying computer programs similarly produce a list of numbers. When a program is dominated by branching, for example, “you see a distinct pattern of brain activity,” he adds, “and you see a similar pattern when the machine learning model tries to understand that same snippet.”
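Comparisons between “a list of numbers” from a brain scan and one from a neural network are often made with representational similarity analysis: compute each system’s snippet-by-snippet similarity matrix and correlate the two. The sketch below is illustrative only — the data are synthetic and this is not the study’s actual method or code.

```python
# Illustrative representational-similarity sketch (not the study's code):
# do brain patterns and model embeddings organize the same code snippets
# in the same way? Synthetic data stand in for both systems.
import numpy as np

def similarity_matrix(vectors):
    """Cosine similarity between every pair of snippet representations."""
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    return v @ v.T

rng = np.random.default_rng(1)
latent = rng.normal(size=(8, 5))             # shared "snippet content"
brain = latent @ rng.normal(size=(5, 40))    # 40-number brain pattern
model = latent @ rng.normal(size=(5, 30))    # 30-number model embedding

# Correlate the upper-triangle entries of the two similarity matrices.
iu = np.triu_indices(8, k=1)
r = np.corrcoef(similarity_matrix(brain)[iu],
                similarity_matrix(model)[iu])[0, 1]
print(f"representational similarity: {r:.2f}")
```

Because both synthetic systems are driven by the same underlying “content,” their similarity structures correlate; a qualified yes of the kind the authors report corresponds to a reliably positive, but imperfect, correlation.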

Mariya Toneva of the Max Planck Institute for Software Systems considers findings like this “particularly exciting. They raise the possibility of using computational models of code to better understand what happens in our brains as we read programs,” she says.

The MIT scientists are definitely intrigued by the connections they’ve uncovered, which shed light on how discrete pieces of computer programs are encoded in the brain. But they don’t yet know what these recently gleaned insights can tell us about how people carry out more elaborate plans in the real world. Completing tasks of this sort — such as going to the movies, which requires checking showtimes, arranging for transportation, purchasing tickets, and so forth — could not be handled by a single unit of code and just a single algorithm. Successful execution of such a plan would instead require “composition” — stringing together various snippets and algorithms into a sensible sequence that leads to something new, just like assembling individual bars of music in order to make a song or even a symphony. Creating models of code composition, says O’Reilly, a principal research scientist at CSAIL, “is beyond our grasp at the moment.”

Lipkin, a BCS PhD student, considers this the next logical step — figuring out how to “combine simple operations to build complex programs and use those strategies to effectively address general reasoning tasks.” He further believes that some of the progress toward that goal achieved by the team so far owes to its interdisciplinary makeup. “We were able to draw from individual experiences with program analysis and neural signal processing, as well as combined work on machine learning and natural language processing,” Lipkin says. “These types of collaborations are becoming increasingly common as neuro- and computer scientists join forces on the quest towards understanding and building general intelligence.”

This project was funded by grants from the MIT-IBM Watson AI lab, MIT Quest Initiative, National Science Foundation, National Institutes of Health, McGovern Institute for Brain Research, MIT Department of Brain and Cognitive Sciences, and the Simons Center for the Social Brain.

Silent synapses are abundant in the adult brain

MIT neuroscientists have discovered that the adult brain contains millions of “silent synapses” — immature connections between neurons that remain inactive until they’re recruited to help form new memories.

Until now, it was believed that silent synapses were present only during early development, when they help the brain learn the new information that it’s exposed to early in life. However, the new MIT study revealed that in adult mice, about 30 percent of all synapses in the brain’s cortex are silent.

The existence of these silent synapses may help to explain how the adult brain is able to continually form new memories and learn new things without having to modify existing conventional synapses, the researchers say.

“These silent synapses are looking for new connections, and when important new information is presented, connections between the relevant neurons are strengthened. This lets the brain create new memories without overwriting the important memories stored in mature synapses, which are harder to change,” says Dimitra Vardalaki, an MIT graduate student and the lead author of the new study.

Mark Harnett, an associate professor of brain and cognitive sciences and an investigator at the McGovern Institute for Brain Research, is the senior author of the paper, which appears today in Nature. Kwanghun Chung, an associate professor of chemical engineering at MIT, is also an author.

A surprising discovery

When scientists first discovered silent synapses decades ago, they were seen primarily in the brains of young mice and other animals. During early development, these synapses are believed to help the brain acquire the massive amounts of information that babies need to learn about their environment and how to interact with it. In mice, these synapses were believed to disappear by about 12 days of age (equivalent to the first months of human life).

However, some neuroscientists have proposed that silent synapses may persist into adulthood and help with the formation of new memories. Evidence for this has been seen in animal models of addiction, which is thought to be largely a disorder of aberrant learning.

Theoretical work in the field from Stefano Fusi and Larry Abbott of Columbia University has also proposed that neurons must display a wide range of different plasticity mechanisms to explain how brains can both efficiently learn new things and retain them in long-term memory. In this scenario, some synapses must be established or modified easily, to form the new memories, while others must remain much more stable, to preserve long-term memories.

In the new study, the MIT team did not set out specifically to look for silent synapses. Instead, they were following up on an intriguing finding from a previous study in Harnett’s lab. In that paper, the researchers showed that within a single neuron, dendrites — antenna-like extensions that protrude from neurons — can process synaptic input in different ways, depending on their location.

As part of that study, the researchers tried to measure neurotransmitter receptors in different dendritic branches, to see if that would help to account for the differences in their behavior. To do that, they used a technique called eMAP (epitope-preserving Magnified Analysis of the Proteome), developed by Chung. Using this technique, researchers can physically expand a tissue sample and then label specific proteins in the sample, making it possible to obtain super-high-resolution images.


While they were doing that imaging, they made a surprising discovery. “The first thing we saw, which was super bizarre and we didn’t expect, was that there were filopodia everywhere,” Harnett says.

Filopodia, thin membrane protrusions that extend from dendrites, have been seen before, but neuroscientists didn’t know exactly what they do. That’s partly because filopodia are so tiny that they are difficult to see using traditional imaging techniques.

After making this observation, the MIT team set out to try to find filopodia in other parts of the adult brain, using the eMAP technique. To their surprise, they found filopodia in the mouse visual cortex and other parts of the brain, at a level 10 times higher than previously seen. They also found that filopodia had neurotransmitter receptors called NMDA receptors, but no AMPA receptors.

A typical active synapse has both of these types of receptors, which bind the neurotransmitter glutamate. NMDA receptors normally require cooperation with AMPA receptors to pass signals because NMDA receptors are blocked by magnesium ions at the normal resting potential of neurons. Thus, when AMPA receptors are not present, synapses that have only NMDA receptors cannot pass along an electric current and are referred to as “silent.”

Unsilencing synapses

To investigate whether these filopodia might be silent synapses, the researchers used a modified version of an experimental technique known as patch clamping. This allowed them to monitor the electrical activity generated at individual filopodia as they tried to stimulate them by mimicking the release of the neurotransmitter glutamate from a neighboring neuron.

Using this technique, the researchers found that glutamate would not generate any electrical signal in the filopodium receiving the input, unless the NMDA receptors were experimentally unblocked. This offers strong support for the theory that filopodia represent silent synapses within the brain, the researchers say.

The researchers also showed that they could “unsilence” these synapses by combining glutamate release with an electrical current coming from the body of the neuron. This combined stimulation leads to accumulation of AMPA receptors in the silent synapse, allowing it to form a strong connection with the nearby axon that is releasing glutamate.

The researchers found that converting silent synapses into active synapses was much easier than altering mature synapses.

“If you start with an already functional synapse, that plasticity protocol doesn’t work,” Harnett says. “The synapses in the adult brain have a much higher threshold, presumably because you want those memories to be pretty resilient. You don’t want them constantly being overwritten. Filopodia, on the other hand, can be captured to form new memories.”

“Flexible and robust”

The findings offer support for the theory proposed by Abbott and Fusi that the adult brain includes highly plastic synapses that can be recruited to form new memories, the researchers say.

“This paper is, as far as I know, the first real evidence that this is how it actually works in a mammalian brain,” Harnett says. “Filopodia allow a memory system to be both flexible and robust. You need flexibility to acquire new information, but you also need stability to retain the important information.”

The researchers are now looking for evidence of these silent synapses in human brain tissue. They also hope to study whether the number or function of these synapses is affected by factors such as aging or neurodegenerative disease.

“It’s entirely possible that by changing the amount of flexibility you’ve got in a memory system, it could become much harder to change your behaviors and habits or incorporate new information,” Harnett says. “You could also imagine finding some of the molecular players that are involved in filopodia and trying to manipulate some of those things to try to restore flexible memory as we age.”

The research was funded by the Boehringer Ingelheim Fonds, the National Institutes of Health, the James W. and Patricia T. Poitras Fund at MIT, a Klingenstein-Simons Fellowship, a Vallee Foundation Scholarship, and a McKnight Scholarship.

New CRISPR-based tool inserts large DNA sequences at desired sites in cells

Building on the CRISPR gene-editing system, MIT researchers have designed a new tool that can snip out faulty genes and replace them with new ones, in a safer and more efficient way.

Using this system, the researchers showed that they could deliver genes as long as 36,000 DNA base pairs to several types of human cells, as well as to liver cells in mice. The new technique, known as PASTE, could hold promise for treating diseases that are caused by defective genes with a large number of mutations, such as cystic fibrosis.

“It’s a new genetic way of potentially targeting these really hard-to-treat diseases,” says Omar Abudayyeh, a McGovern Fellow at MIT’s McGovern Institute for Brain Research. “We wanted to work toward what gene therapy was supposed to do at its original inception, which is to replace genes, not just correct individual mutations.”

The new tool combines the precise targeting of CRISPR-Cas9, a set of molecules originally derived from bacterial defense systems, with enzymes called integrases, which viruses use to insert their own genetic material into a bacterial genome.

“Just like CRISPR, these integrases come from the ongoing battle between bacteria and the viruses that infect them,” says Jonathan Gootenberg, also a McGovern Fellow. “It speaks to how we can keep finding an abundance of interesting and useful new tools from these natural systems.”

Gootenberg and Abudayyeh are the senior authors of the new study, which appears today in Nature Biotechnology. The lead authors of the study are MIT technical associates Matthew Yarnall and Rohan Krajeski, former MIT graduate student Eleonora Ioannidi, and MIT graduate student Cian Schmitt-Ulms.

DNA insertion

The CRISPR-Cas9 gene editing system consists of a DNA-cutting enzyme called Cas9 and a short RNA strand that guides the enzyme to a specific area of the genome, directing Cas9 where to make its cut. When Cas9 and the guide RNA targeting a disease gene are delivered into cells, a specific cut is made in the genome, and the cells’ DNA repair processes glue the cut back together, often deleting a small portion of the genome.

If a DNA template is also delivered, the cells can incorporate a corrected copy into their genomes during the repair process. However, this process requires cells to make double-stranded breaks in their DNA, which can cause chromosomal deletions or rearrangements that are harmful to cells. Another limitation is that template-based correction only works in cells that are dividing, as the repair pathway that copies the template is largely inactive in nondividing cells.

The MIT team wanted to develop a tool that could cut out a defective gene and replace it with a new one without inducing any double-stranded DNA breaks. To achieve this goal, they turned to a family of enzymes called integrases, which viruses called bacteriophages use to insert themselves into bacterial genomes.

For this study, the researchers focused on serine integrases, which can insert huge chunks of DNA, as large as 50,000 base pairs. These enzymes target specific genome sequences known as attachment sites, which function as “landing pads.” When they find the correct landing pad in the host genome, they bind to it and integrate their DNA payload.

In past work, scientists have found it challenging to develop these enzymes for human therapy because the landing pads are very specific, and it’s difficult to reprogram integrases to target other sites. The MIT team realized that combining these enzymes with a CRISPR-Cas9 system that inserts the correct landing site would enable easy reprogramming of the powerful insertion system.

The new tool, PASTE (Programmable Addition via Site-specific Targeting Elements), includes a Cas9 enzyme that cuts at a specific genomic site, guided by a strand of RNA that binds to that site. This allows them to target any site in the genome for insertion of the landing site, which contains 46 DNA base pairs. This insertion can be done without introducing any double-stranded breaks by adding one DNA strand first via a fused reverse transcriptase, then its complementary strand.

Once the landing site is incorporated, the integrase can come along and insert its much larger DNA payload into the genome at that site.

“We think that this is a large step toward achieving the dream of programmable insertion of DNA,” Gootenberg says. “It’s a technique that can be easily tailored both to the site that we want to integrate as well as the cargo.”

Gene replacement

In this study, the researchers showed that they could use PASTE to insert genes into several types of human cells, including liver cells, T cells, and lymphoblasts (immature white blood cells). They tested the delivery system with 13 different payload genes, including some that could be therapeutically useful, and were able to insert them into nine different locations in the genome.

In these cells, the researchers were able to insert genes with a success rate ranging from 5 to 60 percent. This approach also yielded very few unwanted “indels” (insertions or deletions) at the sites of gene integration.

“We see very few indels, and because we’re not making double-stranded breaks, you don’t have to worry about chromosomal rearrangements or large-scale chromosome arm deletions,” Abudayyeh says.

The researchers also demonstrated that they could insert genes in “humanized” livers in mice. Livers in these mice consist of about 70 percent human hepatocytes, and PASTE successfully integrated new genes into about 2.5 percent of these cells.

The DNA sequences that the researchers inserted in this study were up to 36,000 base pairs long, but they believe even longer sequences could also be used. A human gene can range from a few hundred to more than 2 million base pairs, although for therapeutic purposes only the coding sequence of the protein needs to be used, drastically reducing the size of the DNA segment that needs to be inserted into the genome.

“The ability to site-specifically make large genomic integrations is of huge value to both basic science and biotechnology studies. This toolset will, I anticipate, be very enabling for the research community,” says Prashant Mali, a professor of bioengineering at the University of California at San Diego, who was not involved in the study.

The researchers are now exploring the possibility of using this tool to replace the defective cystic fibrosis gene. This technique could also be useful for treating blood diseases caused by faulty genes, such as hemophilia and G6PD deficiency, or Huntington’s disease, a neurological disorder caused by a gene that contains too many repeats of a short DNA sequence.

The researchers have also made their genetic constructs available online for other scientists to use.

“One of the fantastic things about engineering these molecular technologies is that people can build on them, develop and apply them in ways that maybe we didn’t think of or hadn’t considered,” Gootenberg says. “It’s really great to be part of that emerging community.”

The research was funded by a Swiss National Science Foundation Postdoc Mobility Fellowship, the U.S. National Institutes of Health, the McGovern Institute Neurotechnology Program, the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics in Neuroscience, the G. Harold and Leila Y. Mathers Charitable Foundation, the MIT John W. Jarve Seed Fund for Science Innovation, Impetus Grants, a Cystic Fibrosis Foundation Pioneer Grant, Google Ventures, Fast Grants, the Harvey Family Foundation, and the McGovern Institute.

Ila Fiete wins Swartz Prize for Theoretical and Computational Neuroscience

The Society for Neuroscience (SfN) has awarded the Swartz Prize for Theoretical and Computational Neuroscience to Ila Fiete, professor in the Department of Brain and Cognitive Sciences, associate member of the McGovern Institute for Brain Research, and director of the K. Lisa Yang Integrative Computational Neuroscience Center. The SfN, the world’s largest neuroscience organization, announced that Fiete received the prize for her breakthrough research modeling grid cells, a key component of the navigational system of the mammalian brain.

“Fiete’s body of work has already significantly shaped the field of neuroscience and will continue to do so for the foreseeable future,” states the announcement from SfN.

“Fiete is considered one of the strongest theorists of her generation who has conducted highly influential work demonstrating that grid cell networks have attractor-like dynamics,” says Hollis Cline, a professor at the Scripps Research Institute of California and head of the Swartz Prize selection committee.

Grid cells are found in the entorhinal cortex of all mammals. Their unique firing properties, creating a neural representation of our surroundings, allow us to navigate the world. Fiete and collaborators developed computational models showing how interactions between neurons can lead to the formation of periodic lattice-like firing patterns of grid cells and stabilize these patterns to create spatial memory. They showed that as we move around in space, these neural patterns can integrate velocity signals to provide a constantly updated estimate of our position, as well as detect and correct errors in the estimated position.

Fiete also proposed that multiple copies of these patterns at different spatial scales enable an efficient, high-capacity representation of position. She and her colleagues then worked with experimental collaborators to design tests of these ideas, establishing rare evidence that such pattern-forming mechanisms underlie memory dynamics in the brain.

“I’m truly honored to receive the Swartz Prize,” says Fiete. “This prize recognizes my group’s efforts to decipher the circuit-level mechanisms of cognitive functions involving navigation, integration, and memory. It also recognizes, in its focus, the bearing-of-fruit of dynamical circuit models from my group and others that explain how individually simple elements combine to generate the longer-lasting memory states and complex computations of the brain. I am proud to be able to represent, in some measure, the work of my incredible students, postdocs, collaborators, and intellectual mentors. I am indebted to them and grateful for the chance to work together.”

According to the SfN announcement, Fiete has contributed to the field in many other ways, including modeling “how entorhinal cortex could interact with the hippocampus to efficiently and robustly store large numbers of memories and developed a remarkable method to discern the structure of intrinsic dynamics in neuronal circuits.” This modeling led to the discovery of an internal compass that tracks the direction of one’s head, even in the absence of external sensory input.

“Recently, Fiete’s group has explored the emergence of modular organization, a line of work that elucidates how grid cell modularity and general cortical modules might self-organize from smooth genetic gradients,” states the SfN announcement. Fiete and her research group have shown that even if the biophysical properties underlying grid cells of different scale are mostly similar, continuous variations in these properties can result in discrete groupings of grid cells, each with a different function.

Fiete was recognized with the Swartz Prize, which includes a $30,000 award, during the SfN annual meeting in San Diego.

Other recent MIT winners of the Swartz Prize include Professor Emery Brown (2020) and Professor Tomaso Poggio (2014).

Not every reader’s struggle is the same

Many children struggle to learn to read, and studies have shown that students from a lower socioeconomic status (SES) background are more likely to have difficulty than those from a higher SES background.

MIT neuroscientists have now discovered that the types of difficulties that lower-SES students have with reading, and the underlying brain signatures, are, on average, different from those of higher-SES students who struggle with reading.

In a new study, which included brain scans of more than 150 children as they performed tasks related to reading, researchers found that when students from higher SES backgrounds struggled with reading, it could usually be explained by differences in their ability to piece sounds together into words, a skill known as phonological processing.

However, when students from lower SES backgrounds struggled, it was best explained by differences in their ability to rapidly name words or letters, a task associated with orthographic processing, or visual interpretation of words and letters. This pattern was further confirmed by brain activation during phonological and orthographic processing.

These differences suggest that different types of interventions may be needed for different groups of children, the researchers say. The study also highlights the importance of including a wide range of SES levels in studies of reading or other types of academic learning.

“Within the neuroscience realm, we tend to rely on convenience samples of participants, so a lot of our understanding of the neuroscience components of reading in general, and reading disabilities in particular, tends to be based on higher-SES families,” says Rachel Romeo, a former graduate student in the Harvard-MIT Program in Health Sciences and Technology and the lead author of the study. “If we only look at these nonrepresentative samples, we can come away with a relatively biased view of how the brain works.”

Romeo is now an assistant professor in the Department of Human Development and Quantitative Methodology at the University of Maryland. John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology and a professor of brain and cognitive sciences at MIT, is the senior author of the paper, which appears today in the journal Developmental Cognitive Neuroscience.

Components of reading

For many years, researchers have known that children’s scores on standardized assessments of reading are correlated with socioeconomic factors such as school spending per student or the number of children at the school who qualify for free or reduced-price lunches.

Studies of children who struggle with reading, mostly done in higher-SES environments, have shown that the aspect of reading they struggle with most is phonological awareness: the understanding of how sounds combine to make a word, and how sounds can be split up and swapped in or out to make new words.

“That’s a key component of reading, and difficulty with phonological processing is often one of the hallmarks of dyslexia or other reading disorders,” Romeo says.

In the new study, the MIT team wanted to explore how SES might affect phonological processing as well as another key aspect of reading, orthographic processing. This relates more to the visual components of reading, including the ability to identify letters and read words.

To do the study, the researchers recruited first and second grade students from the Boston area, making an effort to include a range of SES levels. For the purposes of this study, SES was assessed by parents’ total years of formal education, which is commonly used as a measure of the family’s SES.

“We went into this not necessarily with any hypothesis about how SES might relate to the two types of processing, but just trying to understand whether SES might be impacting one or the other more, or if it affects both types the same,” Romeo says.

The researchers first gave each child a series of standardized tests designed to measure either phonological processing or orthographic processing. Then, they performed fMRI scans of each child while they carried out additional phonological or orthographic tasks.

The initial series of tests allowed the researchers to determine each child’s abilities for both types of processing, and the brain scans allowed them to measure brain activity in parts of the brain linked with each type of processing.

The results showed that at the higher end of the SES spectrum, differences in phonological processing ability accounted for most of the differences between good readers and struggling readers. This is consistent with the findings of previous studies of reading difficulty. In those children, the researchers also found greater differences in activity in the parts of the brain responsible for phonological processing.

However, the outcomes were different when the researchers analyzed the lower end of the SES spectrum. There, the researchers found that variance in orthographic processing ability accounted for most of the differences between good readers and struggling readers. MRI scans of these children revealed greater differences in brain activity in parts of the brain that are involved in orthographic processing.

Optimizing interventions

There are many possible reasons why a lower SES background might lead to difficulties in orthographic processing, the researchers say, including less exposure to books at home or limited access to libraries and other resources that promote literacy. For children from this background who struggle with reading, different types of interventions might benefit them more than the ones typically used for children who have difficulty with phonological processing.

In a 2017 study, Gabrieli, Romeo, and others found that a summer reading intervention that focused on helping students develop the sensory and cognitive processing necessary for reading was more beneficial for students from lower-SES backgrounds than for children from higher-SES backgrounds. Those findings also support the idea that tailored interventions may be necessary for individual students, they say.

“There are two major reasons we understand that cause children to struggle as they learn to read in these early grades. One of them is learning differences, most prominently dyslexia, and the other one is socioeconomic disadvantage,” Gabrieli says. “In my mind, schools have to help all these kinds of kids become the best readers they can, so recognizing the source or sources of reading difficulty ought to inform practices and policies that are sensitive to these differences and optimize supportive interventions.”

Gabrieli and Romeo are now working with researchers at the Harvard University Graduate School of Education to evaluate language and reading interventions that could better prepare preschool children from lower SES backgrounds to learn to read. In her new lab at the University of Maryland, Romeo also plans to further delve into how different aspects of low SES contribute to different areas of language and literacy development.

“No matter why a child is struggling with reading, they need the education and the attention to support them. Studies that try to tease out the underlying factors can help us in tailoring educational interventions to what a child needs,” she says.

The research was funded by the Ellison Medical Foundation, the Halis Family Foundation, and the National Institutes of Health.

Study urges caution when comparing neural networks to the brain

Neural networks, a type of computing system loosely modeled on the organization of the human brain, form the basis of many artificial intelligence systems for applications such as speech recognition, computer vision, and medical image analysis.

In the field of neuroscience, researchers often use neural networks to try to model the same kind of tasks that the brain performs, in hopes that the models could suggest new hypotheses regarding how the brain itself performs those tasks. However, a group of researchers at MIT is urging that more caution should be taken when interpreting these models.

In an analysis of more than 11,000 neural networks that were trained to simulate the function of grid cells — key components of the brain’s navigation system — the researchers found that neural networks only produced grid-cell-like activity when they were given very specific constraints that are not found in biological systems.

“What this suggests is that in order to obtain a result with grid cells, the researchers training the models needed to bake in those results with specific, biologically implausible implementation choices,” says Rylan Schaeffer, a former senior research associate at MIT.

Without those constraints, the MIT team found that very few neural networks generated grid-cell-like activity, suggesting that these models do not necessarily generate useful predictions of how the brain works.

Schaeffer, who is now a graduate student in computer science at Stanford University, is the lead author of the new study, which will be presented at the 2022 Conference on Neural Information Processing Systems this month. Ila Fiete, a professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research, is the senior author of the paper. Mikail Khona, an MIT graduate student in physics, is also an author.

Modeling grid cells

Neural networks, which researchers have been using for decades to perform a variety of computational tasks, consist of thousands or millions of processing units connected to each other. Each node has connections of varying strengths to other nodes in the network. As the network analyzes huge amounts of data, the strengths of those connections change as the network learns to perform the desired task.

In this study, the researchers focused on neural networks that have been developed to mimic the function of the brain’s grid cells, which are found in the entorhinal cortex of the mammalian brain. Together with place cells, found in the hippocampus, grid cells form a brain circuit that helps animals know where they are and how to navigate to a different location.

Place cells have been shown to fire whenever an animal is in a specific location, and each place cell may respond to more than one location. Grid cells, on the other hand, work very differently. As an animal moves through a space such as a room, grid cells fire only when the animal is at one of the vertices of a triangular lattice. Different groups of grid cells create lattices of slightly different dimensions, which overlap each other. This allows grid cells to encode a large number of unique positions using a relatively small number of cells.
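The capacity gain from overlapping lattices of different scales can be illustrated with a toy modular code (a deliberately simplified analogy with made-up integer periods, not a model of real grid cells): each module reports only position modulo its own lattice period, yet the joint code stays unique out to the least common multiple of the periods.

```python
from math import lcm

periods = (4, 5)  # two hypothetical grid modules with different lattice scales

def grid_code(position):
    """Each module reports only the phase of the position within its
    own periodic lattice."""
    return tuple(position % p for p in periods)

# Either module alone repeats after 4 or 5 steps, but the joint code is
# unique across lcm(4, 5) = 20 positions.
codes = [grid_code(x) for x in range(lcm(*periods))]
print(len(set(codes)))  # 20 distinct codes from only 4 + 5 "cells"
```

With coprime periods, the number of distinguishable positions grows multiplicatively with the number of modules while the cell count grows only additively, which is the essence of the efficiency claim.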

This type of location encoding also makes it possible to predict an animal’s next location based on a given starting point and a velocity. In several recent studies, researchers have trained neural networks to perform this same task, which is known as path integration.

To train neural networks to perform this task, researchers feed into it a starting point and a velocity that varies over time. The model essentially mimics the activity of an animal roaming through a space, and calculates updated positions as it moves. As the model performs the task, the activity patterns of different units within the network can be measured. Each unit’s activity can be represented as a firing pattern, similar to the firing patterns of neurons in the brain.
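The supervision signal for this task is simply the running integral of velocity from the starting point. A minimal sketch of that ground truth (illustrative step size and random walk, not the authors' actual training setup):

```python
import numpy as np

def integrate_path(start, velocities, dt=0.1):
    """Ground-truth trajectory for path integration: the starting
    position plus the cumulative sum of velocity * dt at each step."""
    return start + np.cumsum(np.asarray(velocities) * dt, axis=0)

rng = np.random.default_rng(0)
velocities = rng.normal(size=(100, 2))   # 2-D velocity over 100 time steps
positions = integrate_path(np.zeros(2), velocities)
# A network trained on path integration maps (start, velocities) -> positions;
# the question is what internal representations it forms along the way.
```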

In several previous studies, researchers have reported that their models produced units with activity patterns that closely mimic the firing patterns of grid cells. These studies concluded that grid-cell-like representations would naturally emerge in any neural network trained to perform the path integration task.

However, the MIT researchers found very different results. In an analysis of more than 11,000 neural networks that they trained on path integration, they found that while nearly 90 percent of them learned the task successfully, only about 10 percent of those networks generated activity patterns that could be classified as grid-cell-like. That figure even includes networks in which only a single unit achieved a high grid score.

According to the MIT team, the earlier studies produced grid-cell-like activity only because of the constraints that researchers built into those models.

“Earlier studies have presented this story that if you train networks to path integrate, you’re going to get grid cells. What we found is that instead, you have to make this long sequence of choices of parameters, which we know are inconsistent with the biology, and then in a small sliver of those parameters, you will get the desired result,” Schaeffer says.

More biological models

One of the constraints found in earlier studies is that the researchers required the model to convert velocity into a unique position, reported by one network unit that corresponds to a place cell. For this to happen, the researchers also required that each place cell correspond to only one location, which is not how biological place cells work: Studies have shown that place cells in the hippocampus can respond to up to 20 different locations, not just one.

When the MIT team adjusted the models so that place cells were more like biological place cells, the models were still able to perform the path integration task, but they no longer produced grid-cell-like activity. Grid-cell-like activity also disappeared when the researchers instructed the models to generate different types of location output, such as location on a grid with X and Y axes, or location as a distance and angle relative to a home point.

“If the only thing that you ask this network to do is path integrate, and you impose a set of very specific, not physiological requirements on the readout unit, then it’s possible to obtain grid cells,” says Fiete, who is also the director of the K. Lisa Yang Integrative Computational Neuroscience Center at MIT. “But if you relax any of these aspects of this readout unit, that strongly degrades the ability of the network to produce grid cells. In fact, usually they don’t, even though they still solve the path integration task.”

Therefore, if the researchers hadn’t already known of the existence of grid cells, and guided the model to produce them, it would be very unlikely for them to appear as a natural consequence of the model training.

The researchers say that their findings suggest that more caution is warranted when interpreting neural network models of the brain.

“When you use deep learning models, they can be a powerful tool, but one has to be very circumspect in interpreting them and in determining whether they are truly making de novo predictions, or even shedding light on what it is that the brain is optimizing,” Fiete says.

Kenneth Harris, a professor of quantitative neuroscience at University College London, says he hopes the new study will encourage neuroscientists to be more careful when stating what can be shown by analogies between neural networks and the brain.

“Neural networks can be a useful source of predictions. If you want to learn how the brain solves a computation, you can train a network to perform it, then test the hypothesis that the brain works the same way. Whether the hypothesis is confirmed or not, you will learn something,” says Harris, who was not involved in the study. “This paper shows that ‘postdiction’ is less powerful: Neural networks have many parameters, so getting them to replicate an existing result is not as surprising.”

When using these models to make predictions about how the brain works, it’s important to take into account realistic, known biological constraints when building the models, the MIT researchers say. They are now working on models of grid cells that they hope will generate more accurate predictions of how grid cells in the brain work.

“Deep learning models will give us insight about the brain, but only after you inject a lot of biological knowledge into the model,” Khona says. “If you use the correct constraints, then the models can give you a brain-like solution.”

The research was funded by the Office of Naval Research, the National Science Foundation, the Simons Foundation through the Simons Collaboration on the Global Brain, and the Howard Hughes Medical Institute through the Faculty Scholars Program. Mikail Khona was supported by the MathWorks Science Fellowship.

Magnetic sensors track muscle length

Using a simple set of magnets, MIT researchers have come up with a sophisticated way to monitor muscle movements, which they hope will make it easier for people with amputations to control their prosthetic limbs.

In a new pair of papers, the researchers demonstrated the accuracy and safety of their magnet-based system, which can track the length of muscles during movement. The studies, performed in animals, offer hope that this strategy could be used to help people with prosthetic devices control them in a way that more closely mimics natural limb movement.

“These recent results demonstrate that this tool can be used outside the lab to track muscle movement during natural activity, and they also suggest that the magnetic implants are stable and biocompatible and that they don’t cause discomfort,” says Cameron Taylor, an MIT research scientist and co-lead author of both papers.

In one of the studies, the researchers showed that they could accurately measure the lengths of turkeys’ calf muscles as the birds ran, jumped, and performed other natural movements. In the other study, they showed that the small magnetic beads used for the measurements do not cause inflammation or other adverse effects when implanted in muscle.

“I am very excited for the clinical potential of this new technology to improve the control and efficacy of bionic limbs for persons with limb-loss,” says Hugh Herr, a professor of media arts and sciences, co-director of the K. Lisa Yang Center for Bionics at MIT, and an associate member of MIT’s McGovern Institute for Brain Research.

Herr is a senior author of both papers, which appear today in the journal Frontiers in Bioengineering and Biotechnology. Thomas Roberts, a professor of ecology, evolution, and organismal biology at Brown University, is a senior author of the measurement study.

Tracking movement

Currently, powered prosthetic limbs are usually controlled using an approach known as surface electromyography (EMG). Electrodes attached to the surface of the skin or surgically implanted in the residual muscle of the amputated limb measure electrical signals from a person’s muscles, which are fed into the prosthesis to help it move the way the person wearing the limb intends.

However, that approach does not capture any information about muscle length or velocity, which could help make prosthetic movements more accurate.

Several years ago, the MIT team began working on a novel way to perform those kinds of muscle measurements, using an approach they call magnetomicrometry. This strategy takes advantage of the permanent magnetic fields surrounding a pair of small beads implanted in a muscle. Using a credit-card-sized, compass-like sensor attached to the outside of the body, their system can track the distance between the two magnets. When a muscle contracts, the magnets move closer together, and when it stretches, they move farther apart.

The new muscle-measuring approach takes advantage of the magnetic attraction between two small beads implanted in a muscle. Using a small sensor attached to the outside of the body, the system can track the distance between the two magnets as the muscle contracts and stretches. Image: Hugh Herr
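The core physical principle can be sketched with a simple calculation. A magnetic dipole's on-axis field falls off with the cube of distance, so a field measurement can be inverted to recover how far away the magnet is. The snippet below is only an illustration of that inverse-cube relationship, not the researchers' actual algorithm (which tracks multiple beads with an array of magnetometers); the bead's dipole moment value here is a made-up placeholder.

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def axial_dipole_field(moment, r):
    """On-axis field magnitude (tesla) of a dipole with the given
    moment (A*m^2) at distance r (meters)."""
    return MU0 * moment / (2 * math.pi * r**3)

def distance_from_field(moment, b):
    """Invert the inverse-cube law: recover distance (meters) from a
    measured on-axis field magnitude b (tesla)."""
    return (MU0 * moment / (2 * math.pi * b)) ** (1 / 3)

# Hypothetical bead with a 0.01 A*m^2 dipole moment, 3 cm from the sensor.
m = 0.01
r_true = 0.03
b = axial_dipole_field(m, r_true)
r_est = distance_from_field(m, b)
```

Because the field changes so steeply with distance, small changes in muscle length produce large, easily detectable changes in the measured field, which is part of what makes sub-millimeter tracking feasible.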

In a study published last year, the researchers showed that this system could be used to accurately measure small ankle movements when the beads were implanted in the calf muscles of turkeys. In one of the new studies, the researchers set out to see if the system could make accurate measurements during more natural movements in a nonlaboratory setting.

To do that, they created an obstacle course of ramps for the turkeys to climb and boxes for them to jump on and off of. The researchers used their magnetic sensor to track muscle movements during these activities, and found that the system could calculate muscle lengths in less than a millisecond.

They also compared their data to measurements taken using a more traditional approach known as fluoromicrometry, a type of X-ray technology that requires much larger equipment than magnetomicrometry. The magnetomicrometry measurements varied from those generated by fluoromicrometry by less than a millimeter, on average.

“We’re able to provide the muscle-length tracking functionality of the room-sized X-ray equipment using a much smaller, portable package, and we’re able to collect the data continuously instead of being limited to the 10-second bursts that fluoromicrometry allows,” Taylor says.

Seong Ho Yeon, an MIT graduate student, is also a co-lead author of the measurement study. Other authors include MIT Research Support Associate Ellen Clarrissimeaux and former Brown University postdoc Mary Kate O’Donnell.

Biocompatibility

In the second paper, the researchers focused on the biocompatibility of the implants. They found that the magnets did not generate tissue scarring, inflammation, or other harmful effects. They also showed that the implanted magnets did not alter the turkeys’ gaits, suggesting they did not produce discomfort. William Clark, a postdoc at Brown, is the co-lead author of the biocompatibility study.

The researchers also showed that the implants remained stable for eight months, the length of the study, and did not migrate toward each other, as long as they were implanted at least 3 centimeters apart. The researchers envision that the beads, which consist of a magnetic core coated with gold and a polymer called Parylene, could remain in tissue indefinitely once implanted.

“Magnets don’t require an external power source, and after implanting them into the muscle, they can maintain the full strength of their magnetic field throughout the lifetime of the patient,” Taylor says.

The researchers are now planning to seek FDA approval to test the system in people with prosthetic limbs. They hope to use the sensor to control prostheses similar to the way surface EMG is used now: Measurements regarding the length of muscles will be fed into the control system of a prosthesis to help guide it to the position that the wearer intends.

“The place where this technology fills a need is in communicating those muscle lengths and velocities to a wearable robot, so that the robot can perform in a way that works in tandem with the human,” Taylor says. “We hope that magnetomicrometry will enable a person to control a wearable robot with the same comfort level and the same ease as someone would control their own limb.”

In addition to prosthetic limbs, those wearable robots could include robotic exoskeletons, which are worn outside the body to help people move their legs or arms more easily.

The research was funded by the Salah Foundation, the K. Lisa Yang Center for Bionics at MIT, the MIT Media Lab Consortia, the National Institutes of Health, and the National Science Foundation.