Team invents method to shrink objects to the nanoscale

MIT researchers have invented a way to fabricate nanoscale 3-D objects of nearly any shape. They can also pattern the objects with a variety of useful materials, including metals, quantum dots, and DNA.

“It’s a way of putting nearly any kind of material into a 3-D pattern with nanoscale precision,” says Edward Boyden, the Y. Eva Tan Professor in Neurotechnology and an associate professor of biological engineering and of brain and cognitive sciences at MIT.

Using the new technique, the researchers can create any shape and structure they want by patterning a polymer scaffold with a laser. After attaching other useful materials to the scaffold, they shrink it, generating structures one thousandth the volume of the original.

These tiny structures could have applications in many fields, from optics to medicine to robotics, the researchers say. The technique uses equipment that many biology and materials science labs already have, making it widely accessible for researchers who want to try it.

Boyden, who is also a member of MIT’s Media Lab, McGovern Institute for Brain Research, and Koch Institute for Integrative Cancer Research, is one of the senior authors of the paper, which appears in the Dec. 13 issue of Science. The other senior author is Adam Marblestone, a Media Lab research affiliate, and the paper’s lead authors are graduate students Daniel Oran and Samuel Rodriques.

Implosion fabrication

Existing techniques for creating nanostructures are limited in what they can accomplish. Etching patterns onto a surface with light can produce 2-D nanostructures but doesn’t work for 3-D structures. It is possible to make 3-D nanostructures by gradually adding layers on top of each other, but this process is slow and challenging. And, while methods exist that can directly 3-D print nanoscale objects, they are restricted to specialized materials like polymers and plastics, which lack the functional properties necessary for many applications. Furthermore, they can only generate self-supporting structures. (The technique can yield a solid pyramid, for example, but not a linked chain or a hollow sphere.)

To overcome these limitations, Boyden and his students decided to adapt a technique that his lab developed a few years ago for high-resolution imaging of brain tissue. This technique, known as expansion microscopy, involves embedding tissue into a hydrogel and then expanding it, allowing for high resolution imaging with a regular microscope. Hundreds of research groups in biology and medicine are now using expansion microscopy, since it enables 3-D visualization of cells and tissues with ordinary hardware.

By reversing this process, the researchers found that they could create large-scale objects embedded in expanded hydrogels and then shrink them to the nanoscale, an approach that they call “implosion fabrication.”

As they did for expansion microscopy, the researchers used a very absorbent material made of polyacrylate, commonly found in diapers, as the scaffold for their nanofabrication process. The scaffold is bathed in a solution that contains molecules of fluorescein, which attach to the scaffold when they are activated by laser light.

Using two-photon microscopy, which allows for precise targeting of points deep within a structure, the researchers attach fluorescein molecules to specific locations within the gel. The fluorescein molecules act as anchors that can bind to other types of molecules that the researchers add.

“You attach the anchors where you want with light, and later you can attach whatever you want to the anchors,” Boyden says. “It could be a quantum dot, it could be a piece of DNA, it could be a gold nanoparticle.”

“It’s a bit like film photography — a latent image is formed by exposing a sensitive material in a gel to light. Then, you can develop that latent image into a real image by attaching another material, silver, afterwards. In this way implosion fabrication can create all sorts of structures, including gradients, unconnected structures, and multimaterial patterns,” Oran says.

Once the desired molecules are attached in the right locations, the researchers shrink the entire structure by adding an acid. The acid blocks the negative charges in the polyacrylate gel so that they no longer repel each other, causing the gel to contract. Using this technique, the researchers can shrink the objects 10-fold in each dimension (for an overall 1,000-fold reduction in volume). This ability to shrink not only allows for increased resolution, but also makes it possible to assemble materials in a low-density scaffold. This enables easy access for modification, and later the material becomes a dense solid when it is shrunk.
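
The volume figure follows directly from the linear shrinkage: reducing each of the three dimensions by a factor of 10 scales the volume by

$$\left(\tfrac{1}{10}\right)^3 = \tfrac{1}{1000},$$

which is the 1,000-fold reduction quoted above.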

“People have been trying to invent better equipment to make smaller nanomaterials for years, but we realized that if you just use existing systems and embed your materials in this gel, you can shrink them down to the nanoscale, without distorting the patterns,” Rodriques says.

Currently, the researchers can create objects that are around 1 cubic millimeter, patterned with a resolution of 50 nanometers. There is a tradeoff between size and resolution: If the researchers want to make larger objects, about 1 cubic centimeter, they can achieve a resolution of about 500 nanometers. However, that resolution could be improved with further refinement of the process, the researchers say.

Better optics

The MIT team is now exploring potential applications for this technology, and they anticipate that some of the earliest applications might be in optics — for example, making specialized lenses that could be used to study the fundamental properties of light. This technique might also allow for the fabrication of smaller, better lenses for applications such as cell phone cameras, microscopes, or endoscopes, the researchers say. Farther in the future, the researchers say that this approach could be used to build nanoscale electronics or robots.

“There are all kinds of things you can do with this,” Boyden says. “Democratizing nanofabrication could open up frontiers we can’t yet imagine.”

Many research labs are already stocked with the equipment required for this kind of fabrication. “With a laser you can already find in many biology labs, you can scan a pattern, then deposit metals, semiconductors, or DNA, and then shrink it down,” Boyden says.

The research was funded by the Kavli Dream Team Program, the HHMI-Simons Faculty Scholars Program, the Open Philanthropy Project, John Doerr, the Office of Naval Research, the National Institutes of Health, the New York Stem Cell Foundation-Robertson Award, the U.S. Army Research Office, K. Lisa Yang and Y. Eva Tan, and the MIT Media Lab.

How the brain switches between different sets of rules

Cognitive flexibility — the brain’s ability to switch between different rules or action plans depending on the context — is key to many of our everyday activities. For example, imagine you’re driving on a highway at 65 miles per hour. When you exit onto a local street, you realize that the situation has changed and you need to slow down.

When we move between different contexts like this, our brain holds multiple sets of rules in mind so that it can switch to the appropriate one when necessary. These neural representations of task rules are maintained in the prefrontal cortex, the part of the brain responsible for planning action.

A new study from MIT has found that a region of the thalamus is key to the process of switching between the rules required for different contexts. This region, called the mediodorsal thalamus, suppresses representations that are not currently needed. That suppression also protects the representations as a short-term memory that can be reactivated when needed.

“It seems like a way to toggle between irrelevant and relevant contexts, and one advantage is that it protects the currently irrelevant representations from being overwritten,” says Michael Halassa, an assistant professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research.

Halassa is the senior author of the paper, which appears in the Nov. 19 issue of Nature Neuroscience. The paper’s first author is former MIT graduate student Rajeev Rikhye, who is now a postdoc in Halassa’s lab. Aditya Gilra, a postdoc at the University of Bonn, is also an author.

Changing the rules

Previous studies have found that the prefrontal cortex is essential for cognitive flexibility, and that a part of the thalamus called the mediodorsal thalamus also contributes to this ability. In a 2017 study published in Nature, Halassa and his colleagues showed that the mediodorsal thalamus helps the prefrontal cortex to keep a thought in mind by temporarily strengthening the neuronal connections in the prefrontal cortex that encode that particular thought.

In the new study, Halassa wanted to further investigate the relationship between the mediodorsal thalamus and the prefrontal cortex. To do that, he created a task in which mice learn to switch back and forth between two different contexts — one in which they must follow visual instructions and one in which they must follow auditory instructions.

In each trial, the mice are given both a visual target (flash of light to the right or left) and an auditory target (a tone that sweeps from high to low pitch, or vice versa). These targets offer conflicting instructions. One tells the mouse to go to the right to get a reward; the other tells it to go left. Before each trial begins, the mice are given a cue that tells them whether to follow the visual or auditory target.

“The only way for the animal to solve the task is to keep the cue in mind over the entire delay, until the targets are given,” Halassa says.

The researchers found that thalamic input is necessary for the mice to successfully switch from one context to another. When they suppressed the mediodorsal thalamus during the cuing period of a series of trials in which the context did not change, there was no effect on performance. However, if they suppressed the mediodorsal thalamus during the switch to a different context, it took the mice much longer to switch.

By recording from neurons of the prefrontal cortex, the researchers found that when the mediodorsal thalamus was suppressed, the representation of the old context in the prefrontal cortex could not be turned off, making it much harder to switch to the new context.

In addition to helping the brain switch between contexts, this process also appears to help maintain the neural representation of the context that is not currently being used, so that it doesn’t get overwritten, Halassa says. This allows it to be activated again when needed. The mice could maintain these representations over hundreds of trials, but the next day, they had to relearn the rules associated with each context.

Sabine Kastner, a professor of psychology at the Princeton Neuroscience Institute, described the study as a major leap forward in the field of cognitive neuroscience.

“This is a tour-de-force from beginning to end, starting with a sophisticated behavioral design, state-of-the-art methods including causal manipulations, exciting empirical results that point to cell-type specific differences and interactions in functionality between thalamus and cortex, and a computational approach that links the neuroscience results to the field of artificial intelligence,” says Kastner, who was not involved in the research.

Multitasking AI

The findings could help guide the development of better artificial intelligence algorithms, Halassa says. The human brain is very good at learning many different kinds of tasks — singing, walking, talking, etc. However, neural networks (a type of artificial intelligence based on interconnected nodes similar to neurons) are usually good at learning only one thing. These networks are subject to a phenomenon called “catastrophic forgetting” — when they try to learn a new task, previous tasks become overwritten.
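
Catastrophic forgetting is easy to reproduce in even the simplest learner. The sketch below is illustrative only and is not from the study; the two tasks, the learning rate, and the epoch counts are all invented. It trains a single linear classifier on one context’s rule and then on a conflicting one, and accuracy on the first context drops sharply because the second task’s gradients overwrite the weights the first task relied on:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(direction, n=500):
    """A linearly separable 2-D task: label is 1 when x projects positively onto `direction`."""
    X = rng.normal(size=(n, 2))
    y = (X @ direction > 0).astype(float)
    return X, y

def train(w, X, y, epochs=500, lr=0.5):
    """Plain gradient descent on the logistic (cross-entropy) loss."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w = w - lr * (X.T @ (p - y)) / len(y)
    return w

def accuracy(w, X, y):
    return float(np.mean(((X @ w) > 0) == y))

# Two "contexts" whose rules conflict: task A rewards x1 + x2, task B rewards x1 - x2.
XA, yA = make_task(np.array([1.0, 1.0]))
XB, yB = make_task(np.array([1.0, -1.0]))

w = np.zeros(2)
w = train(w, XA, yA)
print("accuracy on task A after learning A:", accuracy(w, XA, yA))   # close to 1.0

w = train(w, XB, yB)
print("accuracy on task A after learning B:", accuracy(w, XA, yA))   # falls toward chance
print("accuracy on task B after learning B:", accuracy(w, XB, yB))   # close to 1.0
```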

Halassa and his colleagues now hope to apply their findings to improve neural networks’ ability to store previously learned tasks while learning to perform new ones.

The research was funded by the National Institutes of Health, the Brain and Behavior Foundation, the Klingenstein Foundation, the Pew Foundation, the Simons Foundation, the Human Frontiers Science Program, and the German Ministry of Education.

Brain activity pattern may be early sign of schizophrenia

Schizophrenia, a brain disorder that produces hallucinations, delusions, and cognitive impairments, usually strikes during adolescence or young adulthood. While some signs can suggest that a person is at high risk for developing the disorder, there is no way to definitively diagnose it until the first psychotic episode occurs.

MIT neuroscientists working with researchers at Beth Israel Deaconess Medical Center, Brigham and Women’s Hospital, and the Shanghai Mental Health Center have now identified a pattern of brain activity correlated with development of schizophrenia, which they say could be used as a marker to diagnose the disease earlier.

“You can consider this pattern to be a risk factor. If we use these types of brain measurements, then maybe we can predict a little bit better who will end up developing psychosis, and that may also help tailor interventions,” says Guusje Collin, a visiting scientist at MIT’s McGovern Institute for Brain Research and the lead author of the paper.

The study, which appeared in the journal Molecular Psychiatry on Nov. 8, was performed at the Shanghai Mental Health Center. Susan Whitfield-Gabrieli, a visiting scientist at the McGovern Institute and a professor of psychology at Northeastern University, is one of the principal investigators for the study, along with Jijun Wang of the Shanghai Mental Health Center, William Stone of Beth Israel Deaconess Medical Center, the late Larry Seidman of Beth Israel Deaconess Medical Center, and Martha Shenton of Brigham and Women’s Hospital.

Abnormal connections

Before they experience a psychotic episode, characterized by sudden changes in behavior and a loss of touch with reality, patients can experience milder symptoms such as disordered thinking. This kind of thinking can lead to behaviors such as jumping from topic to topic at random, or giving answers unrelated to the original question. Previous studies have shown that about 25 percent of people who experience these early symptoms go on to develop schizophrenia.

The research team performed the study at the Shanghai Mental Health Center because the huge volume of patients who visit the hospital annually gave them a large enough sample of people at high risk of developing schizophrenia.

The researchers followed 158 people between the ages of 13 and 34 who were identified as high-risk because they had experienced early symptoms. The team also included 93 control subjects, who did not have any risk factors. At the beginning of the study, the researchers used functional magnetic resonance imaging (fMRI) to measure a type of brain activity involving “resting state networks.” Resting state networks consist of brain regions that preferentially connect with and communicate with each other when the brain is not performing any particular cognitive task.

“We were interested in looking at the intrinsic functional architecture of the brain to see if we could detect early aberrant brain connectivity or networks in individuals who are in the clinically high-risk phase of the disorder,” Whitfield-Gabrieli says.

One year after the initial scans, 23 of the high-risk patients had experienced a psychotic episode and were diagnosed with schizophrenia. In those patients’ scans, taken before their diagnosis, the researchers found a distinctive pattern of activity that was different from the healthy control subjects and the at-risk subjects who had not developed psychosis.

For example, in most people, a part of the brain known as the superior temporal gyrus, which is involved in auditory processing, is highly connected to brain regions involved in sensory perception and motor control. However, in patients who developed psychosis, the superior temporal gyrus became more connected to limbic regions, which are involved in processing emotions. This could help explain why patients with schizophrenia usually experience auditory hallucinations, the researchers say.
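
Functional connectivity of this kind is commonly quantified as the correlation between pairs of regional activity time courses. Here is a minimal sketch with synthetic data; the region labels, the way the signals are constructed, and all the numbers are purely illustrative and are not drawn from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "resting-state" time courses for four regions (labels are illustrative).
# The superior temporal gyrus shares a common fluctuation with sensory and motor
# regions, mimicking the typical pattern described above, while the limbic region
# fluctuates independently.
n_timepoints = 200
shared = rng.normal(size=n_timepoints)
regions = {
    "superior_temporal_gyrus": shared + 0.5 * rng.normal(size=n_timepoints),
    "sensory_region":          shared + 0.5 * rng.normal(size=n_timepoints),
    "motor_region":            shared + 0.5 * rng.normal(size=n_timepoints),
    "limbic_region":           rng.normal(size=n_timepoints),
}

# Functional connectivity: correlation of each region's time course with every other's.
names = list(regions)
connectivity = np.corrcoef(np.vstack([regions[n] for n in names]))

for i, a in enumerate(names):
    for j in range(i + 1, len(names)):
        print(f"{a:>24} <-> {names[j]:<24} r = {connectivity[i, j]:+.2f}")
```

In this toy, the superior temporal gyrus co-fluctuates with the sensory and motor regions but not with the limbic region, mirroring the typical pattern described above.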

Meanwhile, the high-risk subjects who did not develop psychosis showed network connectivity nearly identical to that of the healthy subjects.

Early intervention

This type of distinctive brain activity could be useful as an early indicator of schizophrenia, especially since it is possible that it could be seen in even younger patients. The researchers are now performing similar studies with younger at-risk populations, including children with a family history of schizophrenia.

“That really gets at the heart of how we can translate this clinically, because we can get in earlier and earlier to identify aberrant networks in the hopes that we can do earlier interventions, and possibly even prevent psychiatric disorders,” Whitfield-Gabrieli says.

She and her colleagues are now testing early interventions that could help to combat the symptoms of schizophrenia, including cognitive behavioral therapy and neural feedback. The neural feedback approach involves training patients to use mindfulness meditation to reduce activity in the superior temporal gyrus, which tends to increase before and during auditory hallucinations.

The researchers also plan to continue following the patients in the current study, and they are now analyzing some additional data on the white matter connections in the brains of these patients, to see if those connections might yield additional differences that could also serve as early indicators of disease.

The research was funded by the National Institutes of Health, the Ministry of Science and Technology of China, and the Poitras Center for Psychiatric Disorders Research at MIT. Collin was supported by a Marie Curie Global Fellowship grant from the European Commission.

Machines that learn language more like kids do

Children learn language by observing their environment, listening to the people around them, and connecting the dots between what they see and hear. Among other things, this helps children establish their language’s word order, such as where subjects and verbs fall in a sentence.

In computing, learning language is the task of syntactic and semantic parsers. These systems are trained on sentences annotated by humans that describe the structure and meaning behind words. Parsers are becoming increasingly important for web searches, natural-language database querying, and voice-recognition systems such as Alexa and Siri. Soon, they may also be used for home robotics.

But gathering the annotation data can be time-consuming and difficult for less common languages. Additionally, humans don’t always agree on the annotations, and the annotations themselves may not accurately reflect how people naturally speak.

In a paper being presented at this week’s Empirical Methods in Natural Language Processing conference, MIT researchers describe a parser that learns through observation to more closely mimic a child’s language-acquisition process, which could greatly extend the parser’s capabilities. To learn the structure of language, the parser observes captioned videos, with no other information, and associates the words with recorded objects and actions. Given a new sentence, the parser can then use what it’s learned about the structure of the language to accurately predict a sentence’s meaning, without the video.

This “weakly supervised” approach — meaning it requires limited training data — mimics how children can observe the world around them and learn language, without anyone providing direct context. The approach could expand the types of data and reduce the effort needed for training parsers, according to the researchers. A few directly annotated sentences, for instance, could be combined with many captioned videos, which are easier to come by, to improve performance.

In the future, the parser could be used to improve natural interaction between humans and personal robots. A robot equipped with the parser, for instance, could constantly observe its environment to reinforce its understanding of spoken commands, including when the spoken sentences aren’t fully grammatical or clear. “People talk to each other in partial sentences, run-on thoughts, and jumbled language. You want a robot in your home that will adapt to their particular way of speaking … and still figure out what they mean,” says co-author Andrei Barbu, a researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Center for Brains, Minds, and Machines (CBMM) within MIT’s McGovern Institute.

The parser could also help researchers better understand how young children learn language. “A child has access to redundant, complementary information from different modalities, including hearing parents and siblings talk about the world, as well as tactile information and visual information, [which help him or her] to understand the world,” says co-author Boris Katz, a principal research scientist and head of the InfoLab Group at CSAIL. “It’s an amazing puzzle, to process all this simultaneous sensory input. This work is part of a bigger piece to understand how this kind of learning happens in the world.”

Co-authors on the paper are: first author Candace Ross, a graduate student in the Department of Electrical Engineering and Computer Science and CSAIL, and a researcher in CBMM; Yevgeni Berzak PhD ’17, a postdoc in the Computational Psycholinguistics Group in the Department of Brain and Cognitive Sciences; and CSAIL graduate student Battushig Myanganbayar.

Visual learner

For their work, the researchers combined a semantic parser with a computer-vision component trained in object, human, and activity recognition in video. Semantic parsers are generally trained on sentences annotated with code that ascribes meaning to each word and the relationships between the words. Some have been trained on still images or computer simulations.

The new parser is the first to be trained using video, Ross says. In part, videos are more useful in reducing ambiguity. If the parser is unsure about, say, an action or object in a sentence, it can reference the video to clear things up. “There are temporal components — objects interacting with each other and with people — and high-level properties you wouldn’t see in a still image or just in language,” Ross says.

The researchers compiled a dataset of about 400 videos depicting people carrying out a number of actions, including picking up an object or putting it down, and walking toward an object. Participants on the crowdsourcing platform Mechanical Turk then provided 1,200 captions for those videos. They set aside 840 video-caption examples for training and tuning, and used 360 for testing. One advantage of using vision-based parsing is “you don’t need nearly as much data — although if you had [the data], you could scale up to huge datasets,” Barbu says.

In training, the researchers gave the parser the objective of determining whether a sentence accurately describes a given video. They fed the parser a video and matching caption. The parser extracts possible meanings of the caption as logical mathematical expressions. The sentence, “The woman is picking up an apple,” for instance, may be expressed as: λxy. woman x, pick_up x y, apple y.
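
As a toy picture of what such an expression is, and of what it means to check it against a video, the snippet below writes the caption’s predicates out explicitly and searches for a binding of the variables to detected entities that makes every predicate true. This is only an illustration of the logical form; it is not the authors’ Sentence Tracker, and the detections are invented:

```python
from itertools import permutations

# The caption "The woman is picking up an apple" as predicates over variables x and y.
logical_form = [("woman", "x"), ("pick_up", "x", "y"), ("apple", "y")]

# Hypothetical outputs of a vision component: per-entity object labels and one
# detected action relation spanning the video's frames.
detections = {
    "e1": {"woman"},
    "e2": {"apple"},
    "e3": {"table"},
}
relations = {("pick_up", "e1", "e2")}

def satisfied(binding):
    """True if binding the variables to entities makes every predicate hold."""
    for pred in logical_form:
        if len(pred) == 2:                      # unary predicate, e.g. woman(x)
            name, var = pred
            if name not in detections[binding[var]]:
                return False
        else:                                   # binary predicate, e.g. pick_up(x, y)
            name, v1, v2 = pred
            if (name, binding[v1], binding[v2]) not in relations:
                return False
    return True

# Ground the sentence in the video by searching over variable-to-entity bindings.
for e_x, e_y in permutations(detections, 2):
    binding = {"x": e_x, "y": e_y}
    if satisfied(binding):
        print("caption is true of the video under binding:", binding)
```

The parser’s real job is harder: it must also learn which logical forms a sentence can have in the first place, using only the video as a training signal.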

Those expressions and the video are fed into the computer-vision algorithm, called “Sentence Tracker,” developed by Barbu and other researchers. The algorithm looks at each video frame to track how objects and people transform over time, to determine if actions are playing out as described. In this way, it determines if the meaning is possibly true of the video.

Connecting the dots

The expression with the most closely matching representations for objects, humans, and actions becomes the most likely meaning of the caption. The expression, initially, may refer to many different objects and actions in the video, but the set of possible meanings serves as a training signal that helps the parser continuously winnow down possibilities. “By assuming that all of the sentences must follow the same rules, that they all come from the same language, and seeing many captioned videos, you can narrow down the meanings further,” Barbu says.

In short, the parser learns through passive observation: To determine if a caption is true of a video, the parser by necessity must identify the highest probability meaning of the caption. “The only way to figure out if the sentence is true of a video [is] to go through this intermediate step of, ‘What does the sentence mean?’ Otherwise, you have no idea how to connect the two,” Barbu explains. “We don’t give the system the meaning for the sentence. We say, ‘There’s a sentence and a video. The sentence has to be true of the video. Figure out some intermediate representation that makes it true of the video.’”

The training produces a syntactic and semantic grammar for the words it’s learned. Given a new sentence, the parser no longer requires videos, but leverages its grammar and lexicon to determine sentence structure and meaning.

Ultimately, this process is learning “as if you’re a kid,” Barbu says. “You see the world around you and hear people speaking to learn meaning. One day, I can give you a sentence and ask what it means and, even without a visual, you know the meaning.”

“This research is exactly the right direction for natural language processing,” says Stefanie Tellex, a professor of computer science at Brown University who focuses on helping robots use natural language to communicate with humans. “To interpret grounded language, we need semantic representations, but it is not practicable to make it available at training time. Instead, this work captures representations of compositional structure using context from captioned videos. This is the paper I have been waiting for!”

In future work, the researchers are interested in modeling interactions, not just passive observations. “Children interact with the environment as they’re learning. Our idea is to have a model that would also use perception to learn,” Ross says.

This work was supported, in part, by the CBMM, the National Science Foundation, a Ford Foundation Graduate Research Fellowship, the Toyota Research Institute, and the MIT-IBM Brain-Inspired Multimedia Comprehension project.

Study reveals how the brain overcomes its own limitations

Imagine trying to write your name so that it can be read in a mirror. Your brain has all of the visual information you need, and you’re a pro at writing your own name. Still, this task is very difficult for most people. That’s because it requires the brain to perform a mental transformation that it’s not familiar with: using what it sees in the mirror to accurately guide your hand to write backward.

MIT neuroscientists have now discovered how the brain tries to compensate for its poor performance in tasks that require this kind of complicated transformation. As it also does in other types of situations where it has little confidence in its own judgments, the brain attempts to overcome its difficulties by relying on previous experiences.

“If you’re doing something that requires a harder mental transformation, and therefore creates more uncertainty and more variability, you rely on your prior beliefs and bias yourself toward what you know how to do well, in order to compensate for that variability,” says Mehrdad Jazayeri, the Robert A. Swanson Career Development Professor of Life Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

This strategy actually improves overall performance, the researchers report in their study, which appears in the Oct. 24 issue of the journal Nature Communications. Evan Remington, a McGovern Institute postdoc, is the paper’s lead author, and technical assistant Tiffany Parks is also an author on the paper.

Noisy computations

Neuroscientists have known for many decades that the brain does not faithfully reproduce exactly what the eyes see or what the ears hear. Instead, there is a great deal of “noise” — random fluctuations of electrical activity in the brain, which can come from uncertainty or ambiguity about what we are seeing or hearing. This uncertainty also comes into play in social interactions, as we try to interpret the motivations of other people, or when recalling memories of past events.

Previous research has revealed many strategies that help the brain to compensate for this uncertainty. Using a framework known as Bayesian integration, the brain combines multiple, potentially conflicting pieces of information and values them according to their reliability. For example, if given information by two sources, we’ll rely more on the one that we believe to be more credible.
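
In its standard textbook form (a generic formulation, not one spelled out in the article), this kind of integration combines two noisy estimates by weighting each according to its reliability, meaning the inverse of its variance. For two estimates x₁ and x₂ with noise levels σ₁ and σ₂, the combined estimate is

$$\hat{x} = \frac{\sigma_2^2}{\sigma_1^2 + \sigma_2^2}\,x_1 + \frac{\sigma_1^2}{\sigma_1^2 + \sigma_2^2}\,x_2,$$

so the less reliable source automatically receives the smaller weight.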

In other cases, such as making movements when we’re uncertain exactly how to proceed, the brain will rely on an average of its past experiences. For example, when reaching for a light switch in a dark, unfamiliar room, we’ll move our hand toward a certain height and close to the doorframe, where past experience suggests a light switch might be located.

All of these strategies have been previously shown to work together to increase bias toward a particular outcome, which makes our overall performance better because it reduces variability, Jazayeri says.

Noise can also occur in the mental conversion of sensory information into a motor plan. In many cases, this is a straightforward task in which noise plays a minimal role — for example, reaching for a mug that you can see on your desk. However, for other tasks, such as the mirror-writing exercise, this conversion is much more complicated.

“Your performance will be variable, and it’s not because you don’t know where your hand is, and it’s not because you don’t know where the image is,” Jazayeri says. “It involves an entirely different form of uncertainty, which has to do with processing information. The act of performing mental transformations of information clearly induces variability.”

That type of mental conversion is what the researchers set out to explore in the new study. To do that, they asked subjects to perform three different tasks. For each one, they compared subjects’ performance in a version of the task where mapping sensory information to motor commands was easy, and a version where an extra mental transformation was required.

In one example, the researchers first asked participants to draw a line the same length as a line they were shown, which was always between 5 and 10 centimeters. In the more difficult version, they were asked to draw a line 1.5 times longer than the original line.

The results from this set of experiments, as well as the other two tasks, showed that in the version that required difficult mental transformations, people altered their performance using the same strategies that they use to overcome noise in sensory perception and other realms. For example, in the line-drawing task, in which the participants had to draw lines ranging from 7.5 to 15 centimeters, depending on the length of the original line, they tended to draw lines that were closer to the average length of all the lines they had previously drawn. This made their responses overall less variable and also more accurate.
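
A minimal sketch of that regression to the mean, assuming a simple Bayesian observer; the prior, noise levels, and trial counts below are assumptions chosen for illustration, not parameters estimated in the study:

```python
import numpy as np

rng = np.random.default_rng(2)

prior_mean = 11.25    # midpoint of the 7.5-15 cm response range mentioned above
prior_var = 4.0       # assumed spread of past responses (cm^2)

def respond(target_cm, noise_sd):
    """One trial: a noisy internal estimate of the required length, pulled toward
    the prior mean in proportion to how unreliable the estimate is."""
    estimate = target_cm + rng.normal(scale=noise_sd)
    w = prior_var / (prior_var + noise_sd**2)      # weight on the noisy estimate
    return w * estimate + (1 - w) * prior_mean

def mean_response(target_cm, noise_sd, n_trials=500):
    return np.mean([respond(target_cm, noise_sd) for _ in range(n_trials)])

targets = np.linspace(7.5, 15.0, 4)
easy = [mean_response(t, noise_sd=0.5) for t in targets]   # simple copying task
hard = [mean_response(t, noise_sd=2.5) for t in targets]   # harder 1.5x transformation

print("required lengths:", np.round(targets, 1))
print("easy condition  :", np.round(easy, 1))   # responses track the targets closely
print("hard condition  :", np.round(hard, 1))   # responses compressed toward ~11 cm
```

Because the harder transformation makes the internal estimate noisier, this observer leans more heavily on the prior, compressing its responses toward the average length.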

“This regression to the mean is a very common strategy for making performance better when there is uncertainty,” Jazayeri says.

Noise reduction

The new findings led the researchers to hypothesize that when people get very good at a task that requires complex computation, the noise will become smaller and less detrimental to overall performance. That is, people will trust their computations more and stop relying on averages.

“As it gets easier, our prediction is the bias will go away, because that computation is no longer a noisy computation,” Jazayeri says. “You believe in the computation; you know the computation is working well.”

The researchers now plan to further study whether people’s biases decrease as they learn to perform a complicated task better. In the experiments they performed for the Nature Communications study, they found some preliminary evidence that trained musicians performed better in a task that involved producing time intervals of a specific duration.

The research was funded by the Alfred P. Sloan Foundation, the Esther A. and Joseph Klingenstein Fund, the Simons Foundation, the McKnight Endowment Fund for Neuroscience, and the McGovern Institute.

Monitoring electromagnetic signals in the brain with MRI

Researchers commonly study brain function by monitoring two types of electromagnetism — electric fields and light. However, most methods for measuring these phenomena in the brain are very invasive.

MIT engineers have now devised a new technique to detect either electrical activity or optical signals in the brain using a minimally invasive sensor for magnetic resonance imaging (MRI).

MRI is often used to measure changes in blood flow that indirectly represent brain activity, but the MIT team has devised a new type of MRI sensor that can detect tiny electrical currents, as well as light produced by luminescent proteins. (Electrical impulses arise from the brain’s internal communications, and optical signals can be produced by a variety of molecules developed by chemists and bioengineers.)

“MRI offers a way to sense things from the outside of the body in a minimally invasive fashion,” says Aviad Hai, an MIT postdoc and the lead author of the study. “It does not require a wired connection into the brain. We can implant the sensor and just leave it there.”

This kind of sensor could give neuroscientists a spatially accurate way to pinpoint electrical activity in the brain. It can also be used to measure light, and could be adapted to measure chemicals such as glucose, the researchers say.

Alan Jasanoff, an MIT professor of biological engineering, brain and cognitive sciences, and nuclear science and engineering, and an associate member of MIT’s McGovern Institute for Brain Research, is the senior author of the paper, which appears in the Oct. 22 issue of Nature Biomedical Engineering. Postdocs Virginia Spanoudaki and Benjamin Bartelle are also authors of the paper.

Detecting electric fields

Jasanoff’s lab has previously developed MRI sensors that can detect calcium and neurotransmitters such as serotonin and dopamine. In this paper, they wanted to expand their approach to detecting biophysical phenomena such as electricity and light. Currently, the most accurate way to monitor electrical activity in the brain is by inserting an electrode, which is very invasive and can cause tissue damage. Electroencephalography (EEG) is a noninvasive way to measure electrical activity in the brain, but this method cannot pinpoint the origin of the activity.

To create a sensor that could detect electromagnetic fields with spatial precision, the researchers realized they could use an electronic device — specifically, a tiny radio antenna.

MRI works by detecting radio waves emitted by the nuclei of hydrogen atoms in water. These signals are usually detected by a large radio antenna within an MRI scanner. For this study, the MIT team shrank the radio antenna down to just a few millimeters in size so that it could be implanted directly into the brain to receive the radio waves generated by water in the brain tissue.

The sensor is initially tuned to the same frequency as the radio waves emitted by the hydrogen atoms. When the sensor picks up an electromagnetic signal from the tissue, its tuning changes and the sensor no longer matches the frequency of the hydrogen atoms. When this happens, a weaker image arises when the sensor is scanned by an external MRI machine.
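
A back-of-the-envelope way to picture that detuning is to treat the implant as a generic resonator and ask how much signal it passes at the scanner’s frequency as its tuning shifts. The field strength, quality factor, and tuning shifts below are assumptions for illustration, not the paper’s device parameters:

```python
larmor_mhz = 400.0    # proton frequency at roughly 9.4 tesla (assumed scanner)
q_factor = 50.0       # assumed quality factor of the implanted resonator

def relative_gain(resonant_mhz):
    """Lorentzian response of the resonator, evaluated at the scanner's Larmor frequency."""
    bandwidth = resonant_mhz / q_factor
    detuning = larmor_mhz - resonant_mhz
    return 1.0 / (1.0 + (2.0 * detuning / bandwidth) ** 2)

# A local signal that shifts the tuning even slightly weakens the sensor's response,
# and with it the brightness of the surrounding region in the MRI image.
for shift_percent in (0.0, 0.25, 0.5, 1.0):
    f = larmor_mhz * (1.0 + shift_percent / 100.0)
    print(f"tuning shifted by {shift_percent:.2f}% -> relative signal {relative_gain(f):.2f}")
```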

The researchers demonstrated that the sensors can pick up electrical signals similar to those produced by action potentials (the electrical impulses fired by single neurons), or local field potentials (the sum of electrical currents produced by a group of neurons).

“We showed that these devices are sensitive to biological-scale potentials, on the order of millivolts, which are comparable to what biological tissue generates, especially in the brain,” Jasanoff says.

The researchers performed additional tests in rats to study whether the sensors could pick up signals in living brain tissue. For those experiments, they designed the sensors to detect light emitted by cells engineered to express the protein luciferase.

Normally, luciferase’s exact location cannot be determined when it is deep within the brain or other tissues, so the new sensor offers a way to expand the usefulness of luciferase and more precisely pinpoint the cells that are emitting light, the researchers say. Luciferase is commonly engineered into cells along with another gene of interest, allowing researchers to determine whether the genes have been successfully incorporated by measuring the light produced.

Smaller sensors

One major advantage of this sensor is that it does not need to carry any kind of power supply, because the radio signals that the external MRI scanner emits are enough to power the sensor.

Hai, who will be joining the faculty at the University of Wisconsin at Madison in January, plans to further miniaturize the sensors so that more of them can be injected, enabling the imaging of light or electrical fields over a larger brain area. In this paper, the researchers performed modeling that showed that a 250-micron sensor (a few tenths of a millimeter) should be able to detect electrical activity on the order of 100 millivolts, comparable in size to the voltage of a neural action potential.

Jasanoff’s lab is interested in using this type of sensor to detect neural signals in the brain, and they envision that it could also be used to monitor electromagnetic phenomena elsewhere in the body, including muscle contractions or cardiac activity.

“If the sensors were on the order of hundreds of microns, which is what the modeling suggests is in the future for this technology, then you could imagine taking a syringe and distributing a whole bunch of them and just leaving them there,” Jasanoff says. “What this would do is provide many local readouts by having sensors distributed all over the tissue.”

The research was funded by the National Institutes of Health.

Electrical properties of dendrites help explain our brain’s unique computing power

Neurons in the human brain receive electrical signals from thousands of other cells, and long neural extensions called dendrites play a critical role in incorporating all of that information so the cells can respond appropriately.

Using hard-to-obtain samples of human brain tissue, MIT neuroscientists have now discovered that human dendrites have different electrical properties from those of other species. Their studies reveal that electrical signals weaken more as they flow along human dendrites, resulting in a higher degree of electrical compartmentalization, meaning that small sections of dendrites can behave independently from the rest of the neuron.

These differences may contribute to the enhanced computing power of the human brain, the researchers say.

“It’s not just that humans are smart because we have more neurons and a larger cortex. From the bottom up, neurons behave differently,” says Mark Harnett, the Fred and Carole Middleton Career Development Assistant Professor of Brain and Cognitive Sciences. “In human neurons, there is more electrical compartmentalization, and that allows these units to be a little bit more independent, potentially leading to increased computational capabilities of single neurons.”

Harnett, who is also a member of MIT’s McGovern Institute for Brain Research, and Sydney Cash, an assistant professor of neurology at Harvard Medical School and Massachusetts General Hospital, are the senior authors of the study, which appears in the Oct. 18 issue of Cell. The paper’s lead author is Lou Beaulieu-Laroche, a graduate student in MIT’s Department of Brain and Cognitive Sciences.

Neural computation

Dendrites can be thought of as analogous to transistors in a computer, performing simple operations using electrical signals. Dendrites receive input from many other neurons and carry those signals to the cell body. If stimulated enough, a neuron fires an action potential — an electrical impulse that then stimulates other neurons. Large networks of these neurons communicate with each other to generate thoughts and behavior.

The structure of a single neuron often resembles a tree, with many branches bringing in information that arrives far from the cell body. Previous research has found that the strength of electrical signals arriving at the cell body depends, in part, on how far they travel along the dendrite to get there. As the signals propagate, they become weaker, so a signal that arrives far from the cell body has less of an impact than one that arrives near the cell body.

Dendrites in the cortex of the human brain are much longer than those in rats and most other species, because the human cortex has evolved to be much thicker than that of other species. In humans, the cortex makes up about 75 percent of the total brain volume, compared to about 30 percent in the rat brain.

Although the human cortex is two to three times thicker than that of rats, it maintains the same overall organization, consisting of six distinctive layers of neurons. Neurons from layer 5 have dendrites long enough to reach all the way to layer 1, meaning that human dendrites have had to elongate as the human brain has evolved, and electrical signals have to travel that much farther.

In the new study, the MIT team wanted to investigate how these length differences might affect dendrites’ electrical properties. They were able to compare electrical activity in rat and human dendrites, using small pieces of brain tissue removed from epilepsy patients undergoing surgical removal of part of the temporal lobe. In order to reach the diseased part of the brain, surgeons also have to take out a small chunk of the anterior temporal lobe.

With the help of MGH collaborators Cash, Matthew Frosch, Ziv Williams, and Emad Eskandar, Harnett’s lab was able to obtain samples of the anterior temporal lobe, each about the size of a fingernail.

Evidence suggests that the anterior temporal lobe is not affected by epilepsy, and the tissue appears normal when examined with neuropathological techniques, Harnett says. This part of the brain appears to be involved in a variety of functions, including language and visual processing, but is not critical to any one function; patients are able to function normally after it is removed.

Once the tissue was removed, the researchers placed it in a solution very similar to cerebrospinal fluid, with oxygen flowing through it. This allowed them to keep the tissue alive for up to 48 hours. During that time, they used a technique known as patch-clamp electrophysiology to measure how electrical signals travel along dendrites of pyramidal neurons, which are the most common type of excitatory neurons in the cortex.

These experiments were performed primarily by Beaulieu-Laroche. Harnett’s lab, along with others, has previously done this kind of experiment in rodent dendrites, but his team is the first to analyze the electrical properties of human dendrites.

Unique features

The researchers found that because human dendrites cover longer distances, a signal flowing along a human dendrite from layer 1 to the cell body in layer 5 is much weaker when it arrives than a signal flowing along a rat dendrite from layer 1 to layer 5.
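
A rough illustration of why the extra length matters, using textbook passive-cable attenuation rather than the team’s detailed biophysical model; the length constant and path lengths below are assumptions chosen only to show the trend:

```python
import math

def remaining_fraction(distance_um, length_constant_um):
    """Steady-state passive cable: voltage decays exponentially with distance."""
    return math.exp(-distance_um / length_constant_um)

length_constant_um = 500.0   # assumed space constant; real dendrites vary widely

for label, path_um in (("rat, layer 1 to layer 5 soma", 700.0),
                       ("human, layer 1 to layer 5 soma", 1700.0)):   # illustrative lengths
    frac = remaining_fraction(path_um, length_constant_um)
    print(f"{label}: {frac:.2f} of the input signal remains")
```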

They also showed that human and rat dendrites have the same number of ion channels, which regulate the current flow, but these channels occur at a lower density in human dendrites as a result of the dendrite elongation. The researchers also developed a detailed biophysical model showing that this change in channel density can account for some of the differences in electrical activity seen between human and rat dendrites, Harnett says.

Nelson Spruston, senior director of scientific programs at the Howard Hughes Medical Institute Janelia Research Campus, described the researchers’ analysis of human dendrites as “a remarkable accomplishment.”

“These are the most carefully detailed measurements to date of the physiological properties of human neurons,” says Spruston, who was not involved in the research. “These kinds of experiments are very technically demanding, even in mice and rats, so from a technical perspective, it’s pretty amazing that they’ve done this in humans.”

The question remains: How do these differences affect human brainpower? Harnett’s hypothesis is that because of these differences, which allow more regions of a dendrite to influence the strength of an incoming signal, individual neurons can perform more complex computations on the information.

“If you have a cortical column that has a chunk of human or rodent cortex, you’re going to be able to accomplish more computations faster with the human architecture versus the rodent architecture,” he says.

There are many other differences between human neurons and those of other species, Harnett adds, making it difficult to tease out the effects of dendritic electrical properties. In future studies, he hopes to explore further the precise impact of these electrical properties, and how they interact with other unique features of human neurons to produce more computing power.

The research was funded by the Natural Sciences and Engineering Research Council of Canada, the Dana Foundation David Mahoney Neuroimaging Grant Program, and the National Institutes of Health.

Model helps robots navigate more like humans do

When moving through a crowd to reach some end goal, humans can usually navigate the space safely without thinking too much. They can learn from the behavior of others and note any obstacles to avoid. Robots, on the other hand, struggle with such navigational concepts.

MIT researchers have now devised a way to help robots navigate environments more like humans do. Their novel motion-planning model lets robots determine how to reach a goal by exploring the environment, observing other agents, and exploiting what they’ve learned before in similar situations. A paper describing the model was presented at this week’s IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

Popular motion-planning algorithms will create a tree of possible decisions that branches out until it finds good paths for navigation. A robot that needs to navigate a room to reach a door, for instance, will create a step-by-step search tree of possible movements and then execute the best path to the door, considering various constraints. One drawback, however, is these algorithms rarely learn: Robots can’t leverage information about how they or other agents acted previously in similar environments.

“Just like when playing chess, these decisions branch out until [the robots] find a good way to navigate. But unlike chess players, [the robots] explore what the future looks like without learning much about their environment and other agents,” says co-author Andrei Barbu, a researcher at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Center for Brains, Minds, and Machines (CBMM) within MIT’s McGovern Institute. “The thousandth time they go through the same crowd is as complicated as the first time. They’re always exploring, rarely observing, and never using what’s happened in the past.”

The researchers developed a model that combines a planning algorithm with a neural network that learns to recognize paths that could lead to the best outcome, and uses that knowledge to guide the robot’s movement in an environment.

In their paper, “Deep sequential models for sampling-based planning,” the researchers demonstrate the advantages of their model in two settings: navigating through challenging rooms with traps and narrow passages, and navigating areas while avoiding collisions with other agents. A promising real-world application is helping autonomous cars navigate intersections, where they have to quickly evaluate what others will do before merging into traffic. The researchers are currently pursuing such applications through the Toyota-CSAIL Joint Research Center.

“When humans interact with the world, we see an object we’ve interacted with before, or are in some location we’ve been to before, so we know how we’re going to act,” says Yen-Ling Kuo, a PhD student in CSAIL and first author on the paper. “The idea behind this work is to add to the search space a machine-learning model that knows from past experience how to make planning more efficient.”

Boris Katz, a principal research scientist and head of the InfoLab Group at CSAIL, is also a co-author on the paper.

Trading off exploration and exploitation

Traditional motion planners explore an environment by rapidly expanding a tree of decisions that eventually blankets an entire space. The robot then looks at the tree to find a way to reach the goal, such as a door. The researchers’ model, however, offers “a tradeoff between exploring the world and exploiting past knowledge,” Kuo says.

The learning process starts with a few examples. A robot using the model is trained on a few ways to navigate similar environments. The neural network learns what makes these examples succeed by interpreting the environment around the robot, such as the shape of the walls, the actions of other agents, and features of the goals. In short, the model “learns that when you’re stuck in an environment, and you see a doorway, it’s probably a good idea to go through the door to get out,” Barbu says.

The model combines the exploration behavior from earlier methods with this learned information. The underlying planner, called RRT*, was developed by MIT professors Sertac Karaman and Emilio Frazzoli. (It’s a variant of a widely used motion-planning algorithm known as Rapidly-exploring Random Trees, or RRT.) The planner creates a search tree while the neural network mirrors each step and makes probabilistic predictions about where the robot should go next. When the network makes a prediction with high confidence, based on learned information, it guides the robot on a new path. If the network doesn’t have high confidence, it lets the robot explore the environment instead, like a traditional planner.
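
The sketch below gives the flavor of that loop in a self-contained toy: a 2-D world with a wall and a doorway, a plain RRT-style tree, and a stand-in “learned” sampler that suggests the doorway with high confidence. It is a schematic of the idea described above, not the authors’ implementation (their planner is RRT* and their sampler is a trained neural network), and every constant here is invented:

```python
import math
import random

random.seed(0)

WORLD = (0.0, 10.0)                    # square world bounds
WALL_X, DOOR = 5.0, (4.5, 5.5)         # a wall at x = 5 with a narrow doorway

def collision_free(p, q, steps=20):
    """The segment p -> q may only cross the wall through the doorway."""
    for i in range(steps + 1):
        t = i / steps
        x = p[0] + t * (q[0] - p[0])
        y = p[1] + t * (q[1] - p[1])
        if abs(x - WALL_X) < 0.1 and not (DOOR[0] < y < DOOR[1]):
            return False
    return True

def learned_suggestion():
    """Stand-in for the trained network: past experience says doorways lead onward,
    so suggest sampling near the doorway, with high confidence."""
    return (WALL_X, sum(DOOR) / 2 + random.uniform(-0.4, 0.4)), 0.9

def plan(start, goal, iterations=3000, step=0.5, threshold=0.8):
    nodes, parents = [start], {0: None}
    for _ in range(iterations):
        suggestion, confidence = learned_suggestion()
        if confidence > threshold and random.random() < 0.5:
            sample = suggestion                                        # exploit what was learned
        else:
            sample = (random.uniform(*WORLD), random.uniform(*WORLD))  # explore like plain RRT
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        d = math.dist(nodes[i], sample)
        new = nodes[i] if d < 1e-9 else (
            nodes[i][0] + min(step, d) * (sample[0] - nodes[i][0]) / d,
            nodes[i][1] + min(step, d) * (sample[1] - nodes[i][1]) / d)
        if collision_free(nodes[i], new):
            nodes.append(new)
            parents[len(nodes) - 1] = i
            if math.dist(new, goal) < step and collision_free(new, goal):
                path, j = [goal], len(nodes) - 1
                while j is not None:
                    path.append(nodes[j])
                    j = parents[j]
                return path[::-1]
    return None

path = plan(start=(1.0, 2.0), goal=(9.0, 8.0))
print("no path found" if path is None else f"found a path with {len(path)} waypoints")
```

When the stand-in sampler is confident, many samples land near the doorway, so the tree threads the narrow passage sooner than uniform sampling alone would.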

For example, the researchers demonstrated the model in a simulation known as a “bug trap,” where a 2-D robot must escape from an inner chamber through a central narrow channel and reach a location in a surrounding larger room. Blind alleys on either side of the channel can get robots stuck. In this simulation, the robot was trained on a few examples of how to escape different bug traps. When faced with a new trap, it recognizes features of the trap, escapes, and continues to search for its goal in the larger room. The neural network helps the robot find the exit to the trap and identify the dead ends, and it gives the robot a sense of its surroundings so it can quickly find the goal.

Results in the paper are based on how often a path was found within a given time, the total length of the path that reached the goal, and how consistent the paths were. In both simulations, the researchers’ model plotted far shorter and more consistent paths than a traditional planner, and did so more quickly.

“This model is interesting because it allows a motion planner to adapt to what it sees in the environment,” says Stefanie Tellex, an assistant professor of computer science at Brown University, who was not involved in the research. “This can enable dramatic improvements in planning speed by customizing the planner to what the robot knows. Most planners don’t adapt to the environment at all. Being able to traverse long, narrow passages is notoriously difficult for a conventional planner, but they can solve it. We need more ways that bridge this gap.”

Working with multiple agents

In another experiment, the researchers trained and tested the model in navigating environments with multiple moving agents, which is a useful test for autonomous cars, especially navigating intersections and roundabouts. In the simulation, several agents are circling an obstacle. A robot agent must successfully navigate around the other agents, avoid collisions, and reach a goal location, such as an exit on a roundabout.

“Situations like roundabouts are hard, because they require reasoning about how others will respond to your actions, how you will then respond to theirs, what they will do next, and so on,” Barbu says. “You eventually discover your first action was wrong, because later on it will lead to a likely accident. This problem gets exponentially worse the more cars you have to contend with.”

Results indicate that the researchers’ model can capture enough information about the future behavior of the other agents (cars) to cut off the process early, while still making good decisions in navigation. This makes planning more efficient. Moreover, they only needed to train the model on a few examples of roundabouts with only a few cars. “The plans the robots make take into account what the other cars are going to do, as any human would,” Barbu says.

Going through intersections or roundabouts is one of the most challenging scenarios facing autonomous cars. This work might one day let cars learn how humans behave and how to adapt to drivers in different environments, according to the researchers. This is the focus of the Toyota-CSAIL Joint Research Center work.

“Not everybody behaves the same way, but people are very stereotypical. There are people who are shy, people who are aggressive. The model recognizes that quickly and that’s why it can plan efficiently,” Barbu says.

More recently, the researchers have been applying this work to robots with manipulators that face similarly daunting challenges when reaching for objects in ever-changing environments.

Recognizing the partially seen

When we open our eyes in the morning and take in that first scene of the day, we don’t give much thought to the fact that our brain is processing the objects within our field of view with great efficiency and that it is compensating for a lack of information about our surroundings — all in order to allow us to go about our daily functions. The glass of water you left on the nightstand when preparing for bed is now partially blocked from your line of sight by your alarm clock, yet you know that it is a glass.

This seemingly simple ability of humans to recognize partially occluded objects — defined in this situation as the effect of one object in a 3-D space blocking another object from view — has been a complicated problem for the computer vision community. Martin Schrimpf, a graduate student in the DiCarlo lab in the Department of Brain and Cognitive Sciences at MIT, explains that machines have become increasingly adept at recognizing whole items quickly and confidently, but when something covers part of an item from view, models have a much harder time recognizing it accurately.

“For models from computer vision to function in everyday life, they need to be able to digest occluded objects just as well as whole ones — after all, when you look around, most objects are partially hidden behind another object,” says Schrimpf, co-author of a paper on the subject that was recently published in the Proceedings of the National Academy of Sciences (PNAS).

In the new study, he says, “we dug into the underlying computations in the brain and then used our findings to build computational models. By recapitulating visual processing in the human brain, we are thus hoping to also improve models in computer vision.”

How are we as humans able to perform this everyday task repeatedly, without much thought or effort, identifying whole scenes quickly and accurately after taking in just pieces of them? Researchers in the study started with the human visual cortex as a model for how to improve the performance of machines in this setting, says Gabriel Kreiman, an affiliate of the MIT Center for Brains, Minds, and Machines. Kreiman is a professor of ophthalmology at Boston Children’s Hospital and Harvard Medical School and was the lead principal investigator for the study.

In their paper, “Recurrent computations for visual pattern completion,” the team showed how they developed a computational model, inspired by physiological and anatomical constraints, that was able to capture the behavioral and neurophysiological observations during pattern completion. In the end, the model provided useful insights towards understanding how to make inferences from minimal information.
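
For a flavor of what recurrent computation buys during pattern completion, here is a classic Hopfield-style toy, a textbook model rather than the one developed in the paper: whole patterns are stored in recurrent connections, and repeatedly feeding the network’s output back as its input fills in an occluded half of a pattern.

```python
import numpy as np

rng = np.random.default_rng(3)

# Store a few random binary (+1/-1) "objects" of 100 features each.
patterns = rng.choice([-1, 1], size=(3, 100))
W = (patterns.T @ patterns) / patterns.shape[1]      # Hebbian weights
np.fill_diagonal(W, 0)

# Occlude half of the first pattern (zero it out, as if blocked from view).
occluded = patterns[0].copy()
occluded[50:] = 0

# Recurrent updates: each step feeds the network's own output back as input.
state = occluded.astype(float)
for _ in range(10):
    state = np.sign(W @ state)
    state[state == 0] = 1

overlap = np.mean(state == patterns[0])
print(f"fraction of the original pattern recovered: {overlap:.2f}")
```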

Work for this study was conducted at the Center for Brains, Minds and Machines within the McGovern Institute for Brain Research at MIT.

Feng Zhang wins 2018 Keio Medical Science Prize

Molecular biologist Feng Zhang has been named a winner of the prestigious Keio Medical Science Prize. He is being recognized for the groundbreaking development of CRISPR-Cas9-mediated genome engineering in cells and its application for medical science.

Zhang is the James and Patricia Poitras Professor of Neuroscience at MIT, an associate professor in the departments of Brain and Cognitive Sciences and Biological Engineering, a Howard Hughes Medical Institute investigator, an investigator at the McGovern Institute for Brain Research, and a core member of the Broad Institute of MIT and Harvard.

“We are delighted that Feng is now a Keio Prize laureate,” says McGovern Institute Director Robert Desimone. “This truly recognizes the remarkable achievements that he has made at such a young age.”

Zhang is a molecular biologist who has contributed to the development of multiple molecular tools to accelerate the understanding of human disease and create new therapeutic modalities. During his graduate work, Zhang contributed to the development of optogenetics, a system for activating neurons using light, which has advanced our understanding of brain connectivity.

Zhang went on to pioneer the deployment of the microbial CRISPR-Cas9 system for genome engineering in eukaryotic cells. The ease and specificity of the system has led to its widespread use across the life sciences and it has groundbreaking implications for disease therapeutics, biotechnology, and agriculture. He has continued to mine bacterial CRISPR systems for additional enzymes with useful properties, leading to the discovery of Cas13, which targets RNA, rather than DNA, and may potentially be a way to treat genetic diseases without altering the genome. Zhang has also developed a molecular detection system called SHERLOCK based on the Cas13 family, which can sense trace amounts of genetic material, including viruses and alterations in genes that might be linked to cancer.

“I am tremendously honored to have our work recognized by the Keio Medical Prize,” says Zhang. “It is an inspiration to us to continue our work to improve human health.”

Now in its 23rd year, the Keio Medical Science Prize is awarded to a maximum of two scientists each year. The other 2018 laureate, Masashi Yanagisawa, director of the International Institute for Integrative Sleep Medicine at the University of Tsukuba, is being recognized for his seminal work on sleep control mechanisms.

The prize is offered by Keio University, and the selection committee specifically looks for laureates who have made an outstanding contribution to medicine or the life sciences. The prize was initially endowed by Mitsunada Sakaguchi in 1994, with the express condition that it be used to commend outstanding science, promote advances in medicine and the life sciences, expand researcher networks, and contribute to the wellbeing of humankind. The winners receive a certificate of merit, a medal, and a monetary award of approximately $90,000.

The prize ceremony will be held on Dec. 18 at Keio University in Tokyo.