Engineers design magnetic cell sensors

MIT engineers have designed magnetic protein nanoparticles that can be used to track cells or to monitor interactions within cells. The particles, described today in Nature Communications, are an enhanced version of a naturally occurring, weakly magnetic protein called ferritin.

“Ferritin, which is as close as biology has given us to a naturally magnetic protein nanoparticle, is really not that magnetic. That’s what this paper is addressing,” says Alan Jasanoff, an MIT professor of biological engineering and the paper’s senior author. “We used the tools of protein engineering to try to boost the magnetic characteristics of this protein.”

The new “hypermagnetic” protein nanoparticles can be produced within cells, allowing the cells to be imaged or sorted using magnetic techniques. This eliminates the need to tag cells with synthetic particles and allows the particles to sense other molecules inside cells.

The paper’s lead author is former MIT graduate student Yuri Matsumoto. Other authors are graduate student Ritchie Chen and Polina Anikeeva, an assistant professor of materials science and engineering.

Magnetic pull

Previous research has yielded synthetic magnetic particles for imaging or tracking cells, but it can be difficult to deliver these particles into the target cells.

In the new study, Jasanoff and colleagues set out to create magnetic particles that are genetically encoded. With this approach, the researchers deliver a gene for a magnetic protein into the target cells, prompting them to start producing the protein on their own.

“Rather than actually making a nanoparticle in the lab and attaching it to cells or injecting it into cells, all we have to do is introduce a gene that encodes this protein,” says Jasanoff, who is also an associate member of MIT’s McGovern Institute for Brain Research.

As a starting point, the researchers used ferritin, which carries a supply of iron atoms that every cell needs as components of metabolic enzymes. In hopes of creating a more magnetic version of ferritin, the researchers created about 10 million variants and tested them in yeast cells.

After repeated rounds of screening, the researchers used one of the most promising candidates to create a magnetic sensor consisting of enhanced ferritin modified with a protein tag that binds with another protein called streptavidin. This allowed them to detect whether streptavidin was present in yeast cells; the approach could also be tailored to detect other interactions.

The mutated protein appears to successfully overcome one of the key shortcomings of natural ferritin, which is that it is difficult to load with iron, says Alan Koretsky, a senior investigator at the National Institute of Neurological Disorders and Stroke.

“To be able to make more magnetic indicators for MRI would be fabulous, and this is an important step toward making that type of indicator more robust,” says Koretsky, who was not part of the research team.

Sensing cell signals

Because the engineered ferritins are genetically encoded, they can be produced within cells that are programmed to make them only under certain circumstances, such as when the cell receives an external signal, when it divides, or when it differentiates into another type of cell. Researchers could track this activity using magnetic resonance imaging (MRI), potentially allowing them to observe communication between neurons, activation of immune cells, or stem cell differentiation, among other phenomena.

Such sensors could also be used to monitor the effectiveness of stem cell therapies, Jasanoff says.

“As stem cell therapies are developed, it’s going to be necessary to have noninvasive tools that enable you to measure them,” he says. Without this kind of monitoring, it would be difficult to determine what effect the treatment is having, or why it might not be working.

The researchers are now working on adapting the magnetic sensors to work in mammalian cells. They are also trying to make the engineered ferritin even more strongly magnetic.

To locate objects, brain relies on memory

Imagine you are looking for your wallet on a cluttered desk. As you scan the area, you hold in your mind a mental picture of what your wallet looks like.

MIT neuroscientists have now identified a brain region that stores this type of visual representation during a search. The researchers also found that this region sends signals to the parts of the brain that control eye movements, telling individuals where to look next.

This region, known as the ventral pre-arcuate (VPA), is critical for what the researchers call “feature attention,” which allows the brain to seek objects based on their specific properties. Most previous studies of how the brain pays attention have investigated a different type of attention known as spatial attention — that is, what happens when the brain focuses on a certain location.

“The way that people go about their lives most of the time, they don’t know where things are in advance. They’re paying attention to things based on their features,” says Robert Desimone, director of MIT’s McGovern Institute for Brain Research. “In the morning you’re trying to find your car keys so you can go to work. How do you do that? You don’t look at every pixel in your house. You have to use your knowledge of what your car keys look like.”

Desimone, also the Doris and Don Berkey Professor in MIT’s Department of Brain and Cognitive Sciences, is the senior author of a paper describing the findings in the Oct. 29 online edition of Neuron. The paper’s lead author is Narcisse Bichot, a research scientist at the McGovern Institute. Other authors are Matthew Heard, a former research technician, and Ellen DeGennaro, a graduate student in the Harvard-MIT Division of Health Sciences and Technology.

Visual targets

The researchers focused on the VPA in part because of its extensive connections with the brain’s frontal eye fields, which control eye movements. Located in the prefrontal cortex, the VPA has previously been linked with working memory — a cognitive ability that helps us to gather and coordinate information while performing tasks such as solving a math problem or participating in a conversation.

“There have been a lot of studies showing that this region of the cortex is heavily involved in working memory,” Bichot says. “If you have to remember something, cells in these areas are involved in holding the memory of that object for the purpose of identifying it later.”

In the new study, the researchers found that the VPA also holds what they call an “attentional template” — that is, a memory of the item being sought.

In this study, the researchers first showed monkeys a target object, such as a human face, a banana, or a butterfly. After a delay, they showed an array of objects that included the target. When the animal fixed its gaze on the target object, it received a reward. “The animals can look around as long as they want until they find what they’re looking for,” Bichot says.

As the animals performed the task, the researchers recorded electrical activity from neurons in the VPA. Each object produced a distinctive pattern of neural activity, and the neurons encoding a representation of the target object stayed active throughout the search; when a match was found, those neurons fired even more strongly.

“When the target object finally enters their receptive fields, they give enhanced responses,” Desimone says. “That’s the signal that the thing they’re looking for is actually there.”
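
One way to make this mechanism concrete is to think of the template as a stored population pattern that is compared against each fixated object, with a boosted response signaling a match. The short Python sketch below is a toy illustration under assumed values: the 50-neuron population, the object names, the similarity threshold, and the response gain are invented for the example, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population codes: each object evokes a distinct pattern
# of firing rates across a small population of "VPA" neurons.
objects = {name: rng.random(50) for name in ["face", "banana", "butterfly"]}

def visual_search(target, scan_order, gain=2.0, threshold=0.9):
    """Hold the target's pattern as an attentional template, scan objects
    one at a time, and return an enhanced response on a match."""
    template = objects[target]  # kept active in working memory during the search
    for name in scan_order:
        stim = objects[name]
        # Cosine similarity between the template and the current input
        match = stim @ template / (np.linalg.norm(stim) * np.linalg.norm(template))
        if match > threshold:
            # Enhanced firing: the signal that would tell the frontal
            # eye fields to lock the gaze onto this object
            return name, gain * match
    return None, 0.0

print(visual_search("banana", ["face", "butterfly", "banana"]))
```

In this cartoon, removing the matching step leaves the eyes free to move but eliminates the signal that says where the likely target is, loosely mirroring the inactivation result described below.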

About 20 to 30 milliseconds after the VPA cells respond to the target object, they send a signal to the frontal eye fields, which direct the eyes to lock onto the target.

When the researchers blocked VPA activity, they found that although the animals could still move their eyes around in search of the target object, they could not find it. “Presumably it’s because they’ve lost this mechanism for telling them where the likely target is,” Desimone says.

Focused attention

The researchers believe the VPA may be the equivalent in nonhuman primates of a human brain region called the inferior frontal junction (IFJ). Last year Desimone and postdoc Daniel Baldauf found that the IFJ holds onto the idea of a target object — in that study, either faces or houses — and then directs the correct part of the brain to look for the target.

The researchers are now studying how the VPA interacts with a nearby region called the VPS, which appears to be more important for tasks in which attention must be switched quickly from one object to another. They are also performing additional studies of human attention, in hopes of learning more about attention deficit hyperactivity disorder (ADHD) and other attention disorders.

“There’s really an opportunity there to understand something important about the role of the prefrontal cortex in both normal behavior and in brain disorders,” Desimone says.

Stanley Center & Poitras Center Joint Translational Neuroscience Seminar Series: Dr. Steven Hyman

Abstract:

The genetic analysis of schizophrenia, bipolar disorder, and autism spectrum disorders has achieved early success. Much work remains: increasing the size and diversity of cohorts, fine mapping GWAS loci, and improving tools to implicate variants too rare to allow statistical certainty. However, the greater challenges lie in transforming gene lists into biological insights and therapies. The genetic architecture of neuropsychiatric disorders creates special difficulties for biology, including polygenicity, low-penetrance alleles, and sharing across multiple disorders. These difficulties are heightened by the challenges posed by the human brain, with its diversity of cells and circuits, its inaccessibility in life, and its recent evolutionary changes, which often limit the utility of animal models. I will review progress in genetics and discuss why the Stanley Center is pursuing genetic analysis to “diminishing returns.” I will then argue that to exploit genetics we must (1) significantly humanize our model systems and commit to using the “right” cell types; (2) enhance molecular tools to interrogate human neurons and glia at the single-cell level; (3) eschew overreliance on approaches that have worked for the investigation of highly penetrant alleles; and (4) develop ethical and practical frameworks so that compounds, once shown to be safe, can be studied in patients without attempting to gain false reassurance of efficacy from animal behavior.

How the brain keeps time

Keeping track of time is critical for many tasks, such as playing the piano, swinging a tennis racket, or holding a conversation. Neuroscientists at MIT and Columbia University have now figured out how neurons in one part of the brain measure time intervals and accurately reproduce them.

The researchers found that the lateral intraparietal cortex (LIP), which plays a role in sensorimotor function, represents elapsed time as animals measure and then reproduce a time interval. They also demonstrated how the firing patterns of a population of LIP neurons could coordinate the sensory and motor aspects of timing.

LIP is likely just one node in a circuit that measures time, says Mehrdad Jazayeri, the lead author of a paper describing the work in the Oct. 8 issue of Current Biology.

“I would not conclude that the parietal cortex is the timer,” says Jazayeri, an assistant professor of brain and cognitive sciences at MIT and a member of the McGovern Institute for Brain Research. “What we are doing is discovering computational principles that explain how neurons’ firing rates evolve with time, and how that relates to the animals’ behavior in single trials. We can explain mathematically what’s going on.”

The paper’s senior author is Michael Shadlen, a professor of neuroscience and member of the Mortimer B. Zuckerman Mind Brain Behavior Institute at Columbia University.

As time goes by

Jazayeri, who joined the MIT faculty in 2013, began studying timing in the brain several years ago while a postdoc at the University of Washington. He began by testing humans’ ability to measure and reproduce time using a task called “ready, set, go.” In this experiment, the subject measures the time between two flashes (“ready” and “set”) and then presses a button (“go”) at the appropriate time — that is, after the same amount of time that separated the “ready” and “set.”

From these studies, he discovered that people do not simply measure an interval and then reproduce it. Rather, after measuring an interval they combine that measurement, which is imprecise, with their prior knowledge of what the interval could have been. This prior knowledge, which builds up as they repeat the task many times, allows people to reproduce the interval more accurately.
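
In Bayesian terms, the reproduced interval behaves like a posterior estimate that weighs the noisy measurement against the prior. The brief Python sketch below illustrates the computation with a Gaussian prior and Gaussian measurement noise; the numerical values are assumptions chosen for the example, not parameters from the research.

```python
# Illustrative values only: a prior over intervals built up across trials,
# and the imprecision of a single measurement.
PRIOR_MEAN, PRIOR_SD = 0.80, 0.15   # seconds
NOISE_SD = 0.10                     # seconds

def reproduce(measured):
    """Posterior mean under a Gaussian prior and Gaussian likelihood:
    a precision-weighted average of the measurement and the prior."""
    w = PRIOR_SD**2 / (PRIOR_SD**2 + NOISE_SD**2)  # weight on the measurement
    return w * measured + (1 - w) * PRIOR_MEAN

for t in (0.60, 0.80, 1.00):
    print(f"measured {t:.2f} s -> reproduced {reproduce(t):.2f} s")
```

Because the estimate is a precision-weighted average, reproduced intervals are pulled toward the prior mean, which is how prior knowledge can make reproduction more accurate on average despite an imprecise measurement.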

“When people reproduce time, they don’t seem to use a timer,” Jazayeri says. “It’s an active act of probabilistic inference that goes on.”

To find out what happens in the brain during this process, Jazayeri recorded neuronal activity in the LIP of monkeys trained to perform the same task. In these recordings, he found distinctive patterns of activity in the measurement phase (the interval between “ready” and “set”) and in the production phase (the interval between “set” and “go”).

During the measurement phase, neural activity increases, but not linearly: it climbs steeply at first and then gradually flattens out as time goes by, until the “set” signal is given. This is key because the slope of activity at the end of the measurement interval predicts the slope of activity in the production phase.

When the interval is short, the slope during the second phase is steep. This allows the activity to increase quickly so that the animal can produce a short interval. When the interval is longer, the slope is gentler and it takes longer to reach the time of response.

“As time goes by during the measurement, the animal knows that the interval that it has to produce is longer and therefore requires a shallower slope,” Jazayeri says.

Using these data, the researchers could correctly predict, based on the slope at the end of the measurement phase, when the animal would produce the “go” response.
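
This relationship lends itself to a simple ramp-to-threshold picture: if the production-phase slope is inversely proportional to the measured interval, a fixed firing-rate threshold is crossed at just the right moment. The Python sketch below runs that arithmetic; the threshold and time step are arbitrary assumptions, and the code is a cartoon of the principle rather than the study’s fitted model.

```python
THRESHOLD = 1.0  # arbitrary firing-rate threshold for triggering "go"
DT = 0.001       # simulation time step, seconds

def produce(measured_interval):
    """Ramp activity linearly with a slope set by the measured interval;
    respond when the ramp reaches the threshold."""
    slope = THRESHOLD / measured_interval  # short interval -> steep ramp
    rate, t = 0.0, 0.0
    while rate < THRESHOLD:
        rate += slope * DT
        t += DT
    return t

for interval in (0.5, 0.8, 1.2):
    print(f"measured {interval:.1f} s -> produced {produce(interval):.3f} s")
```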

“Previous research has shown that some neurons exhibit a ramping up of their firing rate that culminates with the onset of a timed motor response. This research is exciting because it provides the first hint as to what may control the slope of this ‘neural ramping,’ specifically that the slope of the ramp may be determined by the firing rate at the beginning of the timed interval,” says Dean Buonomano, a professor of behavioral neuroscience at the University of California at Los Angeles who was not involved in the research.

“A highly distributed problem”

All cognitive and motor functions rely on time to some extent. While LIP represents time during interval reproduction, Jazayeri believes that tracking time occurs throughout brain circuits that connect subcortical structures such as the thalamus, basal ganglia, and cerebellum to the cortex.

“Timing is going to be a highly distributed problem for the brain. There’s not going to be one place in the brain that does timing,” he says.

His lab is now pursuing several questions raised by this study. In one follow-up, the researchers are investigating how animals’ behavior and brain activity change based on their expectations for how long the first interval will last.

In another experiment, they are training animals to reproduce an interval that they get to measure twice. Preliminary results suggest that during the second interval, the animals refine the measurement they took during the first interval, allowing them to perform better than when they make just one measurement.

How the brain recognizes objects

When the eyes are open, visual information flows from the retina through the optic nerve and into the brain, which assembles this raw information into objects and scenes.

Scientists have previously hypothesized that objects are distinguished in the inferior temporal (IT) cortex, which is near the end of this flow of information, also called the ventral stream. A new study from MIT neuroscientists offers evidence that this is indeed the case.

Using data from both humans and nonhuman primates, the researchers found that neuron firing patterns in the IT cortex correlate strongly with success in object-recognition tasks.

“While we knew from prior work that neuronal population activity in inferior temporal cortex was likely to underlie visual object recognition, we did not have a predictive map that could accurately link that neural activity to object perception and behavior. The results from this study demonstrate that a particular map from particular aspects of IT population activity to behavior is highly accurate over all types of objects that were tested,” says James DiCarlo, head of MIT’s Department of Brain and Cognitive Sciences, a member of the McGovern Institute for Brain Research, and senior author of the study, which appears in the Journal of Neuroscience.

The paper’s lead author is Najib Majaj, a former postdoc in DiCarlo’s lab who is now at New York University. Other authors are former MIT graduate student Ha Hong and former MIT undergraduate Ethan Solomon.

Distinguishing objects

Earlier stops along the ventral stream are believed to process basic visual elements such as brightness and orientation. More complex functions take place farther along the stream, with object recognition believed to occur in the IT cortex.

To investigate this theory, the researchers first asked human subjects to perform 64 object-recognition tasks. Some of these tasks were “trivially easy,” Majaj says, such as distinguishing an apple from a car. Others — such as discriminating between two very similar faces — were so difficult that the subjects were correct only about 50 percent of the time.

After measuring human performance on these tasks, the researchers then showed the same set of nearly 6,000 images to nonhuman primates as they recorded electrical activity in neurons of the inferior temporal cortex and another visual region known as V4.

Each of the 168 IT neurons and 128 V4 neurons fired in response to some objects but not others, creating a firing pattern that served as a distinctive signature for each object. By comparing these signatures, the researchers could analyze whether they correlated with humans’ ability to distinguish between two objects.

The researchers found that the firing patterns of IT neurons, but not V4 neurons, perfectly predicted the human performances they had seen. That is, when humans had trouble distinguishing two objects, the neural signatures for those objects were so similar as to be indistinguishable, and for pairs where humans succeeded, the patterns were very different.
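
A standard way to quantify how distinguishable two population signatures are is to ask how well a simple decoder separates noisy single-trial responses to the two objects. The Python sketch below illustrates the logic with a nearest-mean decoder; the trial count, noise level, and decoder choice are assumptions for the example, not the paper’s specific analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
N_NEURONS, N_TRIALS = 168, 50  # 168 recorded IT sites, as in the study; trial count is invented

def signature(seed):
    """Mean firing pattern of the population for one object."""
    return np.random.default_rng(seed).random(N_NEURONS)

def decoding_accuracy(sig_a, sig_b, noise=0.3):
    """How well a nearest-mean decoder separates noisy single-trial
    responses to two objects; 0.5 is chance, 1.0 is perfect."""
    trials_a = sig_a + noise * rng.standard_normal((N_TRIALS, N_NEURONS))
    trials_b = sig_b + noise * rng.standard_normal((N_TRIALS, N_NEURONS))
    correct = 0
    for trials, own, other in ((trials_a, sig_a, sig_b), (trials_b, sig_b, sig_a)):
        closer = np.linalg.norm(trials - own, axis=1) < np.linalg.norm(trials - other, axis=1)
        correct += int(closer.sum())
    return correct / (2 * N_TRIALS)

easy = decoding_accuracy(signature(2), signature(3))          # clearly different objects
hard = decoding_accuracy(signature(4), signature(4) + 0.02)   # near-identical objects
print(f"easy pair: {easy:.2f}  hard pair: {hard:.2f}")
```

In this toy version, a pair of clearly different objects decodes nearly perfectly while a near-identical pair hovers close to chance, mirroring the easy and hard ends of the human tasks.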

“On the easy stimuli, IT did as well as humans, and on the difficult stimuli, IT also failed,” Majaj says. “We had a nice correlation between behavior and neural responses.”

The findings support the hypothesis that patterns of neural activity in the IT cortex can encode object representations detailed enough to allow the brain to distinguish different objects, the researchers say.

Nikolaus Kriegeskorte, a principal investigator at the Medical Research Council Cognition and Brain Sciences Unit in Cambridge, U.K., agrees that the study offers “crucial evidence supporting the idea that inferior temporal cortex contains the neuronal representations underlying human visual object recognition.”

“This study is exemplary for its original and rigorous method of establishing links between brain representations and human behavioral performance,” adds Kriegeskorte, who was not part of the research team.

Model performance

The researchers also tested more than 10,000 other possible models for how the brain might encode object representations. These models varied based on location in the brain, the number of neurons required, and the time window for neural activity.

Some of these models, including some that relied on V4, were eliminated because they performed better than humans on some tasks and worse on others.

“We wanted the performance of the neurons to perfectly match the performance of the humans in terms of the pattern, so the easy tasks would be easy for the neural population and the hard tasks would be hard for the neural population,” Majaj says.
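
That elimination criterion can be pictured as a consistency test: compare each candidate’s accuracy pattern across tasks with the human pattern, and discard candidates whose difficulty profile diverges even if their average accuracy is high. Here is a minimal Python sketch; the accuracy numbers and the correlation cutoff are invented for illustration.

```python
import numpy as np

human = np.array([0.99, 0.95, 0.80, 0.62, 0.51])  # accuracy on five tasks (invented)

candidates = {
    "IT-like": np.array([0.98, 0.93, 0.79, 0.65, 0.52]),  # easy stays easy, hard stays hard
    "V4-like": np.array([0.72, 0.99, 0.55, 0.88, 0.60]),  # better on some tasks, worse on others
}

for name, acc in candidates.items():
    r = np.corrcoef(human, acc)[0, 1]  # consistency of the difficulty pattern
    verdict = "keep" if r > 0.9 else "eliminate"
    print(f"{name}: r = {r:.2f} -> {verdict}")
```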

The research team now aims to gather even more data to ask if this model or similar models can predict the behavioral difficulty of object recognition on each and every visual image — an even higher bar than the one tested thus far. That might require additional factors to be included in the model that were not needed in this study, and thus could expose important gaps in scientists’ current understanding of neural representations of objects.

They also plan to expand the model so they can predict responses in IT based on input from earlier parts of the visual stream.

“We can start building a cascade of computational operations that take you from an image on the retina slowly through V1, V2, V4, until we’re able to predict the population in IT,” Majaj says.