The quest to understand intelligence

McGovern investigators study intelligence to answer a question of practical importance to both educators and computer scientists: Can intelligence be improved?

A nine-year-old girl, a contestant on a game show, is standing on stage. On a screen in front of her, there appears a twelve-digit number followed by a six-digit number. Her challenge is to divide the first number by the second as fast as possible.

The timer begins. She is racing against three other contestants, two from China and one, like her, from Japan. Whoever answers first wins, but only if the answer is correct.

The show, called “The Brain,” is wildly popular in China and attracts players who display their memory and concentration skills much the way American athletes demonstrate their physical skills on shows like “American Ninja Warrior.” After a few seconds, the girl slams the timer and gives the correct answer, faster than most people could have entered the numbers on a calculator.

The camera pans to a team of expert judges, including McGovern Director Robert Desimone, who had arrived in Nanjing just a few hours earlier. Desimone shakes his head in disbelief. The task appears to make extraordinary demands on working memory and rapid processing, but the girl explains that she solves it by visualizing an abacus in her mind—something she has practiced intensively.

The show raises an age-old question: What is intelligence, exactly?

The study of intelligence has a long and sometimes contentious history, but recently, neuroscientists have begun to dissect intelligence to understand the neural roots of the distinct cognitive skills that contribute to it. One key question is whether these skills can be improved individually with training and, if so, whether those improvements translate into overall intelligence gains. This research has practical implications for multiple domains, from brain science to education to artificial intelligence.

“The problem of intelligence is one of the great problems in science,” says Tomaso Poggio, a McGovern investigator and an expert on machine learning. “If we make progress in understanding intelligence, and if that helps us make progress in making ourselves smarter or in making machines that help us think better, we can solve all other problems more easily.”

Brain training 101

Many studies have reported positive results from brain training, and there is now a thriving industry devoted to selling tools and games such as Lumosity and BrainHQ. Yet the science behind brain training to improve intelligence remains controversial.

A case in point is the “n-back” working memory task, in which subjects are presented with a rapid sequence of letters or visual patterns and must report whether the current item matches the last, the last-but-one, the last-but-two, and so on. The field of brain training received a boost in 2008, when a widely discussed study claimed that a few weeks of training on a challenging version of this task could raise fluid intelligence, the ability to solve novel problems. The report generated excitement and optimism when it first appeared, but several subsequent attempts to reproduce the findings have been unsuccessful.
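
For readers unfamiliar with the task, the matching rule is easy to state in code. The sketch below is purely illustrative (it is not drawn from any of the studies described here); it generates a random letter stream and finds the 2-back targets a participant would have to detect.

```python
import random

def nback_targets(stimuli, n):
    """Return the positions where the current item matches the item
    presented n steps earlier -- the 'targets' in an n-back task."""
    return [i for i in range(n, len(stimuli)) if stimuli[i] == stimuli[i - n]]

random.seed(0)
stream = [random.choice("ABC") for _ in range(12)]
print(stream)
print("2-back targets at positions:", nback_targets(stream, 2))
```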

Among those unable to confirm the result was McGovern Investigator John Gabrieli, who recruited 60 young adults and trained them forty minutes a day for four weeks on an n-back task similar to that of the original study.

Six months later, Gabrieli re-evaluated the participants. “They got amazingly better at the difficult task they practiced. We have great imaging data showing changes in brain activation, from before to after training, as they performed the task,” says Gabrieli. “And yet, that didn’t help them do better on any other cognitive abilities we could measure, and we measured a lot of things.”

The results don’t completely rule out the value of n-back training, says Gabrieli. It may be more effective in children, or in populations with a lower average intelligence than the individuals (mostly college students) who were recruited for Gabrieli’s study. The prospect that training might help disadvantaged individuals holds strong appeal. “If you could raise the cognitive abilities of a child with autism, or a child who is struggling in school, the data tells us that their life would be a step better,” says Gabrieli. “It’s something you would wish for people, especially for those where something is holding them back from the expression of their other abilities.”

Music for the brain

The concept of early intervention is now being tested by Desimone, who has teamed with Chinese colleagues at the recently established IDG/McGovern Institute at Beijing Normal University to explore the effect of music training on the cognitive abilities of young children.

The researchers recruited 100 children at a neighborhood kindergarten in Beijing and provided them with a semester-long intervention, randomly assigning children either to music training or (as a control) to additional reading instruction. Unlike the so-called “Mozart Effect,” a scientifically unsubstantiated claim that passive listening to music increases intelligence, the new study requires active learning through daily practice.

Several smaller studies have reported cognitive benefits from music training, and Desimone finds the idea plausible given that musical cognition involves several mental functions that are also implicated in intelligence. The study is nearly complete, and results are expected to emerge within a few months. “We’re also collecting data on brain activity, so if we see improvements in the kids who had music training, we’ll also be able to ask about its neural basis,” says Desimone.

The results may also have immediate practical implications, since the study design reflects decisions that schools must make in determining how children spend their time. “Many schools are deciding to cut their arts and music programs to make room for more instruction in academic core subjects, so our study is relevant to real questions schools are facing.”

Intelligent classrooms

In another school-based study, Gabrieli’s group recently raised questions about the benefits of “teaching to the test.” In this study, postdoc Amy Finn evaluated more than 1,300 eighth-graders in the Boston public schools, some enrolled at traditional schools and others at charter schools that emphasize improving standardized test scores. The researchers wanted to find out whether raised test scores were accompanied by improvement in cognitive skills that are linked to intelligence. (Charter school students are selected by lottery, meaning that any results are unlikely to reflect preexisting differences between the two groups of students.)

As expected, charter school students showed larger improvements in test scores (relative to their scores from four years earlier). But when Finn and her colleagues measured key aspects of intelligence, such as working memory, processing speed, and reasoning, they found no difference between the students who enrolled in charter schools and those who did not. “You can look at these skills as the building blocks of cognition. They are useful for reasoning in a novel situation, an ability that is really important for learning,” says Finn. “It’s surprising that school practices that increase achievement don’t also increase these building blocks.”

Gabrieli remains optimistic that it will eventually be possible to design scientifically based interventions that can raise children’s abilities. Allyson Mackey, a postdoc in his lab, is studying the use of games to exercise cognitive skills in a classroom setting. As a graduate student at the University of California, Berkeley, Mackey had studied the effects of games such as “Chocolate Fix,” in which players match shapes and flavors, represented by color, to positions in a grid based on hints, such as, “the upper left position is strawberry.”

These games gave children practice at thinking through and solving novel problems, and at the end of Mackey’s study, the students—from second through fourth grades—showed improved measures of skills associated with intelligence. “Our results suggest that these cognitive skills are specifically malleable, although we don’t yet know what the active ingredients were in this program,” says Mackey, who speaks of the interventions as if they were drugs, with dosages, efficacies, and potentially synergistic combinations to be explored. Mackey is now working to identify the most promising interventions—those that boost cognitive abilities, work well in the classroom, and are engaging for kids—to try in Boston charter schools. “It’s just the beginning of a three-year process to methodically test interventions to see if they work,” she says.

Brain training…for machines

While Desimone, Gabrieli and their colleagues look for ways to raise human intelligence, Poggio, who directs the MIT-based Center for Brains, Minds and Machines, is trying to endow computers with more human-like intelligence. Computers can already match human performance on some specific tasks such as chess. Programs such as Apple’s “Siri” can interpret human speech, not perfectly but well enough to be useful. Computer vision programs are approaching human performance at rapid object recognition, and one such system, developed by one of Poggio’s former postdocs, is now being used to assist car drivers. “The last decade has been pretty magical for intelligent computer systems,” says Poggio.

Like children, these intelligent systems learn from past experience. But compared to humans or other animals, machines tend to be very slow learners. For example, the visual system for automobiles was trained by presenting it with millions of images—traffic lights, pedestrians, and so on—that had already been labeled by humans. “You would never present so many examples to a child,” says Poggio. “One of our big challenges is to understand how to make algorithms in computers learn with many fewer examples, to make them learn more like children do.”

To accomplish this and other goals of machine intelligence, Poggio suspects that the work being done by Desimone, Gabrieli and others to understand the neural basis of intelligence will be critical. But he is not expecting any single breakthrough that will make everything fall into place. “A century ago,” he says, “scientists pondered the problem of life, as if ‘life’—what we now call biology—were just one problem. The science of intelligence is like biology. It’s a lot of problems, and a lot of breakthroughs will have to come before a machine appears that is as intelligent as we are.”

Study finds early signatures of the social brain

Humans use an ability known as theory of mind every time they make inferences about someone else’s mental state — what the other person believes, what they want, or why they are feeling happy, angry, or scared.

Behavioral studies have suggested that children begin succeeding at a key measure of this ability, known as the false-belief task, around age 4. However, a new study from MIT has found that the brain network that controls theory of mind has already formed in children as young as 3.

The MIT study is the first to use functional magnetic resonance imaging (fMRI) to scan the brains of children as young as age 3 as they perform a task requiring theory of mind — in this case, watching a short animated movie involving social interactions between two characters.

“The brain regions involved in theory-of-mind reasoning are behaving like a cohesive network, with similar responses to the movie, by age 3, which is before kids tend to pass explicit false-belief tasks,” says Hilary Richardson, an MIT graduate student and the lead author of the study.

Rebecca Saxe, an MIT professor of brain and cognitive sciences and an associate member of MIT’s McGovern Institute for Brain Research, is the senior author of the paper, which appears in the March 12 issue of Nature Communications. Other authors are Indiana University graduate student Grace Lisandrelli and Wellesley College undergraduate Alexa Riobueno-Naylor.

Thinking about others

In 2003, Saxe first showed that theory of mind is seated in a brain region known as the right temporo-parietal junction (TPJ). The TPJ coordinates with other regions, including several parts of the prefrontal cortex, to form a network that is active when people think about the mental states of others.

The most commonly used test of theory of mind is the false-belief test, which probes whether the subject understands that other people may have beliefs that are not true. A classic example is the Sally-Anne test, in which a child is asked where Sally will look for a marble that she believes is in her own basket, but that Anne has moved to a different spot while Sally wasn’t looking. To pass, the subject must reply that Sally will look where she thinks the marble is (in her basket), not where it actually is.

Until now, neuroscientists had assumed that theory-of-mind studies involving fMRI brain scans could only be done with children at least 5 years of age, because the children need to be able to lie still in a scanner for about 20 minutes, listen to a series of stories, and answer questions about them.

Richardson wanted to study children younger than that, so that she could delve into what happens in the brain’s theory-of-mind network before the age of 5. To do that, she and Saxe came up with a new experimental protocol, which calls for scanning children while they watch a short movie that includes simple social interactions between two characters.

The animated movie they chose, called “Partly Cloudy,” has a plot that lends itself well to the experiment. It features Gus, a cloud who produces baby animals, and Peck, a stork whose job is to deliver the babies. Gus and Peck have some tense moments in their friendship because Gus produces baby alligators and porcupines, which are difficult to deliver, while other clouds create kittens and puppies. Peck is attacked by some of the fierce baby animals, and he isn’t sure if he wants to keep working for Gus.

“It has events that make you think about the characters’ mental states and events that make you think about their bodily states,” Richardson says.

The researchers spent about four years gathering data from 122 children ranging in age from 3 to 12 years. They scanned the entire brain, focusing on two distinct networks that have been well-characterized in adults: the theory-of-mind network and another network known as the pain matrix, which is active when thinking about another person’s physical state.

They also scanned 33 adults as they watched the movie so that they could identify scenes that provoke responses in either of those two networks. These scenes were dubbed theory-of-mind events and pain events. Scans of children revealed that even in 3-year-olds, the theory-of-mind and pain networks responded preferentially to the same events that the adult brains did.

“We see early signatures of this theory-of-mind network being wired up, so the theory-of-mind brain regions which we studied in adults are already really highly correlated with one another in 3-year-olds,” Richardson says.

The researchers also found that the responses in 3-year-olds were not as strong as in adults but gradually became stronger in the older children they scanned.

Patterns of development

The findings offer support for an existing hypothesis that says children develop theory of mind even before they can pass explicit false-belief tests, and that it continues to develop as they get older. Theory of mind encompasses many abilities, including more difficult skills such as understanding irony and assigning blame, which tend to develop later.

Another hypothesis is that children undergo a fairly sudden development of theory of mind around the age of 4 or 5, reflected by their success in the false-belief test. The MIT data, which do not show any dramatic changes in brain activity when children begin to succeed at the false-belief test, do not support that theory.

“Scientists have focused really intensely on the changes in children’s theory of mind that happen around age 4, when children get a better understanding of how people can have wrong or biased or misinformed beliefs,” Saxe says. “But really important changes in how we think about other minds happen long before, and long after, this famous landmark. Even 2-year-olds try to figure out why different people like different things — this might be why they get so interested in talking about everybody’s favorite colors. And even 9-year-olds are still learning about irony and negligence. Theory of mind seems to undergo a very long continuous developmental process, both in kids’ behaviors and in their brains.”

Now that the researchers have data on the typical trajectory of theory of mind development, they hope to scan the brains of autistic children to see whether there are any differences in how their theory-of-mind networks develop. Saxe’s lab is also studying children whose first exposure to language was delayed, to test the effects of early language on the development of theory of mind.

The research was funded by the National Science Foundation, the National Institutes of Health, and the David and Lucile Packard Foundation.

Seeing the brain’s electrical activity

Neurons in the brain communicate via rapid electrical impulses that allow the brain to coordinate behavior, sensation, thoughts, and emotion. Scientists who want to study this electrical activity usually measure these signals with electrodes inserted into the brain, a task that is notoriously difficult and time-consuming.

MIT researchers have now come up with a completely different approach to measuring electrical activity in the brain, which they believe will prove much easier and more informative. They have developed a light-sensitive protein that can be embedded into neuron membranes, where it emits a fluorescent signal that indicates how much voltage a particular cell is experiencing. This could allow scientists to study how neurons behave, millisecond by millisecond, as the brain performs a particular function.

“If you put an electrode in the brain, it’s like trying to understand a phone conversation by hearing only one person talk,” says Edward Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT. “Now we can record the neural activity of many cells in a neural circuit and hear them as they talk to each other.”

Boyden, who is also a member of MIT’s Media Lab, McGovern Institute for Brain Research, and Koch Institute for Integrative Cancer Research, and an HHMI-Simons Faculty Scholar, is the senior author of the study, which appears in the Feb. 26 issue of Nature Chemical Biology. The paper’s lead authors are MIT postdocs Kiryl Piatkevich and Erica Jung.

Imaging voltage

For the past two decades, scientists have sought a way to monitor electrical activity in the brain through imaging instead of recording with electrodes. Finding fluorescent molecules that can be used for this kind of imaging has been difficult; not only do the proteins have to be very sensitive to changes in voltage, they must also respond quickly and be resistant to photobleaching (fading that can be caused by exposure to light).

Boyden and his colleagues came up with a new strategy for finding a molecule that would fulfill everything on this wish list: They built a robot that could screen millions of proteins, generated through a process called directed protein evolution, for the traits they wanted.

“You take a gene, then you make millions and millions of mutant genes, and finally you pick the ones that work the best,” Boyden says. “That’s the way that evolution works in nature, but now we’re doing it in the lab with robots so we can pick out the genes with the properties we want.”

The researchers made 1.5 million mutated versions of a light-sensitive protein called QuasAr2, which was previously engineered by Adam Cohen’s lab at Harvard University. (That work, in turn, was based on the molecule Arch, which the Boyden lab reported in 2010.) The researchers put each of those genes into mammalian cells (one mutant per cell), then grew the cells in lab dishes and used an automated microscope to take pictures of the cells. The robot was able to identify cells with proteins that met the criteria the researchers were looking for, the most important being the protein’s location within the cell and its brightness.

The research team then selected five of the best candidates and did another round of mutation, generating 8 million new candidates. The robot picked out the seven best of these, which the researchers then narrowed down to one top performer, which they called Archon1.
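
The selection logic driving this process is simple to state, even if the robotics and molecular biology behind it are not. Here is a toy sketch of the mutate-screen-select loop; the `mutate` and `score` functions are stand-ins for the real mutagenesis and imaging-based readout, and the pool sizes are scaled down from the millions used in the actual screens.

```python
import random

def mutate(gene):
    # Stand-in for mutagenesis: change one random base.
    i = random.randrange(len(gene))
    return gene[:i] + random.choice("ACGT") + gene[i + 1:]

def score(gene):
    # Stand-in for the imaging readout (membrane localization,
    # brightness); here we just count one base as a dummy fitness.
    return gene.count("A")

def screening_round(parents, n_mutants, n_keep):
    """One round of directed evolution: generate many mutants from the
    parent genes, then keep only the best-scoring candidates."""
    pool = [mutate(random.choice(parents)) for _ in range(n_mutants)]
    return sorted(pool, key=score, reverse=True)[:n_keep]

random.seed(1)
round1 = screening_round(["GCGTGCGTGCGT"], n_mutants=50_000, n_keep=5)  # cf. 1.5M -> 5
round2 = screening_round(round1, n_mutants=50_000, n_keep=1)            # cf. 8M -> 1
print("top performer:", round2[0])
```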

Mapping the brain

A key feature of Archon1 is that once the gene is delivered into a cell, the Archon1 protein embeds itself into the cell membrane, which is the best place to obtain an accurate measurement of a cell’s voltage.

Using this protein, the researchers were able to measure electrical activity in mouse brain tissue, as well as in brain cells of zebrafish larvae and the worm Caenorhabditis elegans. The latter two organisms are transparent, so it is easy to expose them to light and image the resulting fluorescence. When the cells are exposed to a certain wavelength of reddish-orange light, the protein sensor emits a longer wavelength of red light, and the brightness of the light corresponds to the voltage of that cell at a given moment in time.

The researchers also showed that Archon1 can be used in conjunction with light-sensitive proteins that are commonly used to silence or stimulate neuron activity — these are known as optogenetic proteins — as long as those proteins respond to colors other than red. In experiments with C. elegans, the researchers demonstrated that they could stimulate one neuron using blue light and then use Archon1 to measure the resulting effect in neurons that receive input from that cell.

Cohen, the Harvard professor who developed the predecessor to Archon1, says the new MIT protein brings scientists closer to the goal of imaging millisecond-timescale electrical activity in live brains.

“Traditionally, it has been excruciatingly labor-intensive to engineer fluorescent voltage indicators, because each mutant had to be cloned individually and then tested through a slow, manual patch-clamp electrophysiology measurement. The Boyden lab developed a very clever high-throughput screening approach to this problem,” says Cohen, who was not involved in this study. “Their new reporter looks really great in fish and worms and in brain slices. I’m eager to try it in my lab.”

The researchers are now working on using this technology to measure brain activity in mice as they perform various tasks, which Boyden believes should allow them to map neural circuits and discover how they produce specific behaviors.

“We will be able to watch a neural computation happen,” he says. “Over the next five years or so we’re going to try to solve some small brain circuits completely. Such results might take a step toward understanding what a thought or a feeling actually is.”

The research was funded by the HHMI-Simons Faculty Scholars Program, the IET Harvey Prize, the MIT Media Lab, the New York Stem Cell Foundation Robertson Award, the Open Philanthropy Project, John Doerr, the Human Frontier Science Program, the Department of Defense, the National Science Foundation, and the National Institutes of Health, including an NIH Director’s Pioneer Award.

Back-and-forth exchanges boost children’s brain response to language

A landmark 1995 study found that children from higher-income families hear about 30 million more words during their first three years of life than children from lower-income families. This “30-million-word gap” correlates with significant differences in tests of vocabulary, language development, and reading comprehension.

MIT cognitive scientists have now found that conversation between an adult and a child appears to change the child’s brain, and that this back-and-forth exchange is more critical to language development than the sheer number of words a child hears. In a study of children between the ages of 4 and 6, they found that differences in the number of “conversational turns” accounted for a large portion of the differences in brain physiology and language skills that they found among the children. This finding applied to children regardless of parental income or education.

The findings suggest that parents can have considerable influence over their children’s language and brain development by simply engaging them in conversation, the researchers say.

“The important thing is not just to talk to your child, but to talk with your child. It’s not just about dumping language into your child’s brain, but to actually carry on a conversation with them,” says Rachel Romeo, a graduate student at Harvard and MIT and the lead author of the paper, which appears in the Feb. 14 online edition of Psychological Science.

Using functional magnetic resonance imaging (fMRI), the researchers identified differences in the brain’s response to language that correlated with the number of conversational turns. In children who experienced more conversation, Broca’s area, a part of the brain involved in speech production and language processing, was much more active while they listened to stories. This brain activation then predicted children’s scores on language assessments, fully explaining the income-related differences in children’s language skills.

“The really novel thing about our paper is that it provides the first evidence that family conversation at home is associated with brain development in children. It’s almost magical how parental conversation appears to influence the biological growth of the brain,” says John Gabrieli, the Grover M. Hermann Professor in Health Sciences and Technology, a professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Beyond the word gap

Before this study, little was known about how the “word gap” might translate into differences in the brain. The MIT team set out to find these differences by comparing the brain scans of children from different socioeconomic backgrounds.

As part of the study, the researchers used a system called Language Environment Analysis (LENA) to record every word spoken or heard by each child. Parents who agreed to participate in the study were told to have their children wear the recorder for two days, from the time they woke up until they went to bed.

The recordings were then analyzed by a computer program that yielded three measurements: the number of words spoken by the child, the number of words spoken to the child, and the number of times that the child and an adult took a “conversational turn” — a back-and-forth exchange initiated by either one.
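
LENA’s actual speech-processing algorithms are proprietary and considerably more sophisticated, but the measures themselves are easy to sketch. In the toy version below, each utterance is a (timestamp, speaker) pair counted as a single word, and the five-second gap used to delimit a conversational turn is an illustrative choice, not a figure from the study.

```python
def summarize(utterances, max_gap=5.0):
    """Compute toy versions of the three measures from a time-ordered
    list of (time_in_seconds, speaker) events, speaker in {'adult', 'child'}."""
    child_words = sum(1 for _, s in utterances if s == "child")
    adult_words = sum(1 for _, s in utterances if s == "adult")
    turns = 0
    for (t0, s0), (t1, s1) in zip(utterances, utterances[1:]):
        if s0 != s1 and (t1 - t0) <= max_gap:  # a back-and-forth exchange
            turns += 1
    return {"child_words": child_words, "adult_words": adult_words,
            "conversational_turns": turns}

events = [(0.0, "adult"), (1.2, "child"), (2.0, "adult"),
          (30.0, "adult"), (31.0, "child")]
print(summarize(events))
```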

The researchers found that the number of conversational turns correlated strongly with the children’s scores on standardized tests of language skill, including vocabulary, grammar, and verbal reasoning. The number of conversational turns also correlated with more activity in Broca’s area when the children listened to stories while inside an fMRI scanner.

These correlations were much stronger than those between the number of words heard and language scores, and between the number of words heard and activity in Broca’s area.

This result aligns with other recent findings, Romeo says, “but there’s still a popular notion that there’s this 30-million-word gap, and we need to dump words into these kids — just talk to them all day long, or maybe sit them in front of a TV that will talk to them. However, the brain data show that it really seems to be this interactive dialogue that is more strongly related to neural processing.”

The researchers believe interactive conversation gives children more of an opportunity to practice their communication skills, including the ability to understand what another person is trying to say and to respond in an appropriate way.

While children from higher-income families were exposed to more language on average, children from lower-income families who experienced a high number of conversational turns had language skills and Broca’s area brain activity similar to those of children who came from higher-income families.

“In our analysis, the conversational turn-taking seems like the thing that makes a difference, regardless of socioeconomic status. Such turn-taking occurs more often in families from a higher socioeconomic status, but children coming from families with lesser income or parental education showed the same benefits from conversational turn-taking,” Gabrieli says.

Taking action

The researchers hope their findings will encourage parents to engage their young children in more conversation. Although this study was done in children ages 4 to 6, this type of turn-taking can also be done with much younger children, by making sounds back and forth or making faces, the researchers say.

“One of the things we’re excited about is that it feels like a relatively actionable thing because it’s specific. That doesn’t mean it’s easy for less educated families, under greater economic stress, to have more conversation with their child. But at the same time, it’s a targeted, specific action, and there may be ways to promote or encourage that,” Gabrieli says.

Roberta Golinkoff, a professor of education at the University of Delaware School of Education, says the new study presents an important finding that adds to the evidence that it’s not just the number of words children hear that is significant for their language development.

“You can talk to a child until you’re blue in the face, but if you’re not engaging with the child and having a conversational duet about what the child is interested in, you’re not going to give the child the language processing skills that they need,” says Golinkoff, who was not involved in the study. “If you can get the child to participate, not just listen, that will allow the child to have a better language outcome.”

The MIT researchers now hope to study the effects of possible interventions that incorporate more conversation into young children’s lives. These could include technological assistance, such as computer programs that can converse or electronic reminders to parents to engage their children in conversation.

The research was funded by the Walton Family Foundation, the National Institute of Child Health and Human Development, a Harvard Mind Brain Behavior Grant, and a gift from David Pun Chan.

Microscopy technique could enable more informative biopsies

MIT and Harvard Medical School researchers have devised a way to image biopsy samples with much higher resolution — an advance that could help doctors develop more accurate and inexpensive diagnostic tests.

For more than 100 years, conventional light microscopes have been vital tools for pathology. However, fine-scale details of cells cannot be seen with these scopes. The new technique relies on an approach known as expansion microscopy, developed originally in Edward Boyden’s lab at MIT, in which the researchers expand a tissue sample to 100 times its original volume before imaging it.

This expansion allows researchers to see features with a conventional light microscope that ordinarily could be seen only with an expensive, high-resolution electron microscope. It also reveals additional molecular information that the electron microscope cannot provide.

“It’s a technique that could have very broad application,” says Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT. He is also a member of MIT’s Media Lab and McGovern Institute for Brain Research, and an HHMI-Simons Faculty Scholar.

In a paper appearing in the July 17 issue of Nature Biotechnology, Boyden and his colleagues used this technique to distinguish early-stage breast lesions with high or low risk of progressing to cancer — a task that is challenging for human observers. This approach can also be applied to other diseases: In an analysis of kidney tissue, the researchers found that images of expanded samples revealed signs of kidney disease that can normally only be seen with an electron microscope.

“Using expansion microscopy, we are able to diagnose diseases that were previously impossible to diagnose with a conventional light microscope,” says Octavian Bucur, an instructor at Harvard Medical School, Beth Israel Deaconess Medical Center (BIDMC), and the Ludwig Center at Harvard, and one of the paper’s lead authors.

MIT postdoc Yongxin Zhao is the paper’s co-lead author. Boyden and Andrew Beck, a former associate professor at Harvard Medical School and BIDMC, are the paper’s senior authors.

“A few chemicals and a light microscope”

Boyden’s original expansion microscopy technique is based on embedding tissue samples in a dense, evenly generated polymer that swells when water is added. Before the swelling occurs, the researchers anchor to the polymer gel the molecules that they want to image, and they digest other proteins that normally hold tissue together.

This tissue enlargement allows researchers to obtain images with a resolution of around 70 nanometers, which was previously possible only with very specialized and expensive microscopes.

In the new study, the researchers set out to adapt the expansion process for biopsy tissue samples, which are usually embedded in paraffin wax, flash frozen, or stained with a chemical that makes cellular structures more visible.

The MIT/Harvard team devised a process to convert these samples into a state suitable for expansion. For example, they remove the chemical stain or paraffin by exposing the tissues to a chemical solvent called xylene. Then, they heat up the sample in another chemical called citrate. After that, the tissues go through an expansion process similar to the original version of the technique, but with stronger digestion steps to compensate for the strong chemical fixation of the samples.

During this procedure, the researchers can also add fluorescent labels for molecules of interest, including proteins that mark particular types of cells, or DNA or RNA with a specific sequence.

“The work of Zhao et al. describes a very clever way of extending the resolution of light microscopy to resolve detail beyond that seen with conventional methods,” says David Rimm, a professor of pathology at the Yale University School of Medicine, who was not involved in the research.

The researchers tested this approach on tissue samples from patients with early-stage breast lesions. One way to predict whether these lesions will become malignant is to evaluate the appearance of the cells’ nuclei. Benign lesions with atypical nuclei have about a fivefold higher probability of progressing to cancer than those with typical nuclei.

However, studies have revealed significant discrepancies between the assessments of nuclear atypia performed by different pathologists, which can potentially lead to inaccurate diagnoses and unnecessary surgery. An improved system for differentiating benign lesions with atypical and typical nuclei could potentially prevent 400,000 misdiagnoses and save hundreds of millions of dollars every year in the United States, according to the researchers.

After expanding the tissue samples, the MIT/Harvard team analyzed them with a machine learning algorithm that can rate the nuclei based on dozens of features, including orientation, diameter, and how much they deviate from true circularity. This algorithm was able to distinguish between lesions that were likely to become invasive and those that were not, with an accuracy of 93 percent on expanded samples compared to only 71 percent on the pre-expanded tissue.
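
The study’s exact model and feature set are not detailed here, but the general recipe (extract morphological features for each nucleus, then train a supervised classifier to predict outcomes) can be sketched with scikit-learn. Everything below is placeholder data for illustration; real inputs would be features measured from segmented nuclei, labeled by whether the lesion progressed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Placeholder features: one row per nucleus (e.g., diameter, orientation,
# deviation from circularity); the study used dozens of such features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
# Placeholder labels: 1 = lesion progressed to invasive cancer, 0 = benign.
y = (X[:, 2] + 0.5 * rng.normal(size=200) > 0).astype(int)

clf = LogisticRegression()
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```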

“These two types of lesions look highly similar to the naked eye, but one has much less risk of cancer,” Zhao says.

The researchers also analyzed kidney tissue samples from patients with nephrotic syndrome, which impairs the kidneys’ ability to filter blood. In these patients, tiny finger-like projections that filter the blood are lost or damaged. These structures are spaced about 200 nanometers apart and therefore can usually be seen only with an electron microscope or expensive super-resolution microscopes.

When the researchers showed the images of the expanded tissue samples to a group of scientists that included pathologists and nonpathologists, the group was able to identify the diseased tissue with 90 percent accuracy overall, compared to only 65 percent accuracy with unexpanded tissue samples.

“Now you can diagnose nephrotic kidney disease without needing an electron microscope, a very expensive machine,” Boyden says. “You can do it with a few chemicals and a light microscope.”

Uncovering patterns

The researchers anticipate that scientists could use this approach to develop more precise diagnostics for many other diseases. To do that, scientists and doctors will need to analyze many more patient samples, allowing them to discover patterns that would be impossible to see otherwise.

“If you can expand a tissue by one-hundredfold in volume, all other things being equal, you’re getting 100 times the information,” Boyden says.

For example, researchers could distinguish cancer cells based on how many copies of a particular gene they have. Extra copies of genes such as HER2, which the researchers imaged in one part of this study, indicate a subtype of breast cancer that is eligible for specific treatments.

Scientists could also look at the architecture of the genome, or at how cell shapes change as they become cancerous and interact with other cells of the body. Another possible application is identifying proteins that are expressed specifically on the surface of cancer cells, allowing researchers to design immunotherapies that mark those cells for destruction by the patient’s immune system.

Boyden and his colleagues run training courses several times a month at MIT, where visitors can come and watch expansion microscopy techniques, and they have made their protocols available on their website. They hope that many more people will begin using this approach to study a variety of diseases.

“Cancer biopsies are just the beginning,” Boyden says. “We have a new pipeline for taking clinical samples and expanding them, and we are finding that we can apply expansion to many different diseases. Expansion will enable computational pathology to take advantage of more information in a specimen than previously possible.”

Humayun Irshad, a research fellow at Harvard/BIDMC and an author of the study, agrees: “Expanded images result in more informative features, which in turn result in higher-performing classification models.”

Other authors include Harvard pathologist Astrid Weins, who helped oversee the kidney study, as well as Fei Chen of MIT and Andreea Stancu, Eun-Young Oh, Marcello DiStasio, Vanda Torous, Benjamin Glass, Isaac E. Stillman, and Stuart J. Schnitt of BIDMC/Harvard.

The research was funded, in part, by the New York Stem Cell Foundation Robertson Investigator Award, the National Institutes of Health Director’s Pioneer Award, the Department of Defense Multidisciplinary University Research Initiative, the Open Philanthropy Project, the Ludwig Center at Harvard, and Harvard Catalyst.

Socioeconomic background linked to reading improvement

About 20 percent of children in the United States have difficulty learning to read, and educators have devised a variety of interventions to try to help them. Not every program helps every student, however, in part because the origins of their struggles are not identical.

MIT neuroscientist John Gabrieli is trying to identify factors that may help to predict individual children’s responses to different types of reading interventions. As part of that effort, he recently found that children from lower-income families responded much better to a summer reading program than children from a higher socioeconomic background.

Using magnetic resonance imaging (MRI), the research team also found anatomical changes in the brains of children whose reading abilities improved — in particular, a thickening of the cortex in parts of the brain known to be involved in reading.

“If you just left these children [with reading difficulties] alone on the developmental path they’re on, they would have terrible troubles reading in school. We’re taking them on a neuroanatomical detour that seems to go with real gains in reading ability,” says Gabrieli, the Grover M. Hermann Professor in Health Sciences and Technology, a professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Rachel Romeo, a graduate student in the Harvard-MIT Program in Health Sciences and Technology, and Joanna Christodoulou, an assistant professor of communication sciences and disorders at the Massachusetts General Hospital Institute of Health Professions, are the lead authors of the paper, which appears in the June 7 issue of the journal Cerebral Cortex.

Predicting improvement

In hopes of identifying factors that influence children’s responses to reading interventions, the MIT team set up two summer schools based on a program known as Lindamood-Bell. The researchers recruited students from a wide income range, although socioeconomic status was not the original focus of their study.

The Lindamood-Bell program focuses on helping students develop the sensory and cognitive processing necessary for reading, such as thinking about words as units of sound, and translating printed letters into word meanings.

Children participating in the study, who ranged from 6 to 9 years old, spent four hours a day, five days a week in the program, for six weeks. Before and after the program, their brains were scanned with MRI and they were given some commonly used tests of reading proficiency.

In tests taken before the program started, children from higher and lower socioeconomic status (SES) backgrounds fared equally poorly in most areas, with one exception. Children from higher SES backgrounds had higher vocabulary scores, a difference that has also been seen in studies comparing nondyslexic readers from different SES backgrounds.

“There’s a strong trend in these studies that higher SES families tend to talk more with their kids and also use more complex and diverse language. That tends to be where the vocabulary correlation comes from,” Romeo says.

The researchers also found differences in brain anatomy before the reading program started. Children from higher socioeconomic backgrounds had thicker cortex in a part of the brain known as Broca’s area, which is necessary for language production and comprehension. The researchers also found that these differences could account for the differences in vocabulary levels between the two groups.

Based on a limited number of previous studies, the researchers hypothesized that the reading program would have more of an impact on the students from higher socioeconomic backgrounds. But in fact, they found the opposite. About half of the students improved their scores, while the other half worsened or stayed the same. When analyzing the data for possible explanations, family income level was the one factor that proved significant.

“Socioeconomic status just showed up as the piece that was most predictive of treatment response,” Romeo says.

The same children whose reading scores improved also displayed changes in their brain anatomy. Specifically, the researchers found that they had a thickening of the cortex in a part of the brain known as the temporal occipital region, which comprises a large network of structures involved in reading.

“Mix of causes”

The researchers believe that their results may have differed from those of previous studies of reading intervention in low SES students because their program was run during the summer, rather than during the school year.

“Summer is when socioeconomic status takes its biggest toll. Low SES kids typically have less academic content in their summer activities compared to high SES, and that results in a slump in their skills,” Romeo says. “This may have been particularly beneficial for them because it may have been out of the realm of their typical summer.”

The researchers also hypothesize that reading difficulties may arise in slightly different ways among children of different SES backgrounds.

“There could be a different mix of causes,” Gabrieli says. “Reading is a complicated skill, so there could be a number of different factors that would make you do better or do worse. It could be that those factors are a little bit different in children with more enriched or less enriched environments.”

The researchers are hoping to identify more precisely the factors related to socioeconomic status, other environmental factors, or genetic components that could predict which types of reading interventions will be successful for individual students.

“In medicine, people call it personalized medicine: this idea that some people will really benefit from one intervention and not so much from another,” Gabrieli says. “We’re interested in understanding the match between the student and the kind of educational support that would be helpful for that particular student.”

The research was funded by the Ellison Medical Foundation, the Halis Family Foundation, Lindamood-Bell Learning Processes, and the National Institutes of Health.

Making brain implants smaller could prolong their lifespan

Many diseases, including Parkinson’s disease, can be treated with electrical stimulation from an electrode implanted in the brain. However, the electrodes can produce scarring, which diminishes their effectiveness and can necessitate additional surgeries to replace them.

MIT researchers have now demonstrated that making these electrodes much smaller can essentially eliminate this scarring, potentially allowing the devices to remain in the brain for much longer.

“What we’re doing is changing the scale and making the procedure less invasive,” says Michael Cima, the David H. Koch Professor of Engineering in the Department of Materials Science and Engineering, a member of MIT’s Koch Institute for Integrative Cancer Research, and the senior author of the study, which appears in the May 16 issue of Scientific Reports.

Cima and his colleagues are now designing brain implants that can not only deliver electrical stimulation but also record brain activity or deliver drugs to very targeted locations.

The paper’s lead author is former MIT graduate student Kevin Spencer. Other authors are former postdoc Jay Sy, graduate student Khalil Ramadi, Institute Professor Ann Graybiel, and David H. Koch Institute Professor Robert Langer.

Effects of size

Many Parkinson’s patients have benefited from treatment with high-frequency electrical current delivered to a part of the brain involved in movement control. The electrodes used for this deep brain stimulation are a few millimeters in diameter. After being implanted, they gradually generate scar tissue through the constant rubbing of the electrode against the surrounding brain tissue. This process, known as gliosis, contributes to the high failure rate of such devices: About half stop working within the first six months.

Previous studies have suggested that making the implants smaller or softer could reduce the amount of scarring, so the MIT team set out to measure the effects of both reducing the size of the implants and coating them with a soft polyethylene glycol (PEG) hydrogel.

The hydrogel coating was designed to have an elasticity very similar to that of the brain. The researchers could also control the thickness of the coating. They found that when coated electrodes were pushed into the brain, the soft coating would fall off, so they devised a way to apply the hydrogel and then dry it, so that it becomes a hard, thin film. After the electrode is inserted, the film soaks up water and becomes soft again.

In mice, the researchers tested both coated and uncoated glass fibers with varying diameters and found that there is a tradeoff between size and softness. Coated fibers produced much less scarring than uncoated fibers of the same diameter. However, as the electrode fibers became smaller, down to about 30 microns (0.03 millimeters) in diameter, the uncoated versions produced less scarring, because the coatings increase the diameter.

This suggests that a 30-micron, uncoated fiber is the optimal design for implantable devices in the brain.

“Before this paper, no one really knew the effects of size,” Cima says. “Softer is better, but not if it makes the electrode larger.”

New devices

The question now is whether fibers that are only 30 microns in diameter can be adapted for electrical stimulation, drug delivery, and recording electrical activity in the brain. Cima and his colleagues have had some initial success developing such devices.

“It’s one of those things that at first glance seems impossible. If you have 30-micron glass fibers, that’s slightly thinner than a piece of hair. But it is possible to do,” Cima says.

Such devices could potentially be useful for treating Parkinson’s disease or other neurological disorders. They could also be used to remove fluid from the brain to monitor whether treatments are having the intended effect, or to measure brain activity that might indicate when an epileptic seizure is about to occur.

The research was funded by the National Institutes of Health and MIT’s Institute for Soldier Nanotechnologies.

High-resolution imaging with conventional microscopes

MIT researchers have developed a way to make extremely high-resolution images of tissue samples, at a fraction of the cost of other techniques that offer similar resolution.

The new technique relies on expanding tissue before imaging it with a conventional light microscope. Two years ago, the MIT team showed that it was possible to expand tissue volumes 100-fold, resulting in an image resolution of about 60 nanometers. Now, the researchers have shown that expanding the tissue a second time before imaging can boost the resolution to about 25 nanometers.
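
The arithmetic connecting these figures is worth spelling out: expansion factors are quoted by volume, while resolution improves with the linear (cube-root) factor. A quick back-of-the-envelope check, assuming a diffraction-limited resolution of roughly 300 nanometers for a conventional light microscope:

```python
volume_expansion = 100
linear_expansion = volume_expansion ** (1 / 3)  # ~4.6x in each dimension
diffraction_limit_nm = 300                      # rough figure for light microscopy

print(f"one round:  ~{diffraction_limit_nm / linear_expansion:.0f} nm")      # ~65 nm
# Two rounds multiply the linear factors. The naive estimate comes out
# finer than the ~25 nm actually achieved, which is limited in practice
# by gel distortions and the size of the fluorescent labels.
print(f"two rounds: ~{diffraction_limit_nm / linear_expansion ** 2:.0f} nm")  # ~14 nm
```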

This level of resolution allows scientists to see, for example, the proteins that cluster together in complex patterns at brain synapses, helping neurons to communicate with each other. It could also help researchers to map neural circuits, says Ed Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT.

“We want to be able to trace the wiring of complete brain circuits,” says Boyden, the study’s senior author. “If you could reconstruct a complete brain circuit, maybe you could make a computational model of how it generates complex phenomena like decisions and emotions. Since you can map out the biomolecules that generate electrical pulses within cells and that exchange chemicals between cells, you could potentially model the dynamics of the brain.”

This approach could also be used to image other phenomena such as the interactions between cancer cells and immune cells, to detect pathogens without expensive equipment, and to map the cell types of the body.

Former MIT postdoc Jae-Byum Chang is the first author of the paper, which appears in the April 17 issue of Nature Methods.

Double expansion

To expand tissue samples, the researchers embed them in a dense, evenly generated gel made of polyacrylate, a very absorbent material that’s also used in diapers. Before the gel is formed, the researchers label the cell proteins they want to image, using antibodies that bind to specific targets. These antibodies bear “barcodes” made of DNA, which in turn are attached to cross-linking molecules that bind to the polymers that make up the expandable gel. The researchers then break down the proteins that normally hold the tissue together, allowing the DNA barcodes to expand away from each other as the gel swells.

These enlarged samples can then be labeled with fluorescent probes that bind the DNA barcodes, and imaged with commercially available confocal microscopes, whose resolution is usually limited to hundreds of nanometers.

Using that approach, the researchers were previously able to achieve a resolution of about 60 nanometers. However, “individual biomolecules are much smaller than that, say 5 nanometers or even smaller,” Boyden says. “The original versions of expansion microscopy were useful for many scientific questions but couldn’t equal the performance of the highest-resolution imaging methods such as electron microscopy.”

In their original expansion microscopy study, the researchers found that they could expand the tissue more than 100-fold in volume by reducing the number of cross-linking molecules that hold the polymer in an orderly pattern. However, this made the tissue unstable.

“If you reduce the cross-linker density, the polymers no longer retain their organization during the expansion process,” says Boyden, who is a member of MIT’s Media Lab and McGovern Institute for Brain Research. “You lose the information.”

Instead, in their latest study, the researchers modified their technique so that after the first tissue expansion, they can create a new gel that swells the tissue a second time — an approach they call “iterative expansion.”

Mapping circuits

Using iterative expansion, the researchers were able to image tissues with a resolution of about 25 nanometers, which is similar to that achieved by high-resolution techniques such as stochastic optical reconstruction microscopy (STORM). However, expansion microscopy is much cheaper and simpler to perform because no specialized equipment or chemicals are required, Boyden says. The method is also much faster and thus compatible with large-scale, 3-D imaging.

The resolution of expansion microscopy does not yet match that of scanning electron microscopy (about 5 nanometers) or transmission electron microscopy (about 1 nanometer). However, electron microscopes are very expensive and not widely available, and with those microscopes, it is difficult for researchers to label specific proteins.

In the Nature Methods paper, the MIT team used iterative expansion to image synapses — the connections between neurons that allow them to communicate with each other. In their original expansion microscopy study, the researchers were able to image scaffolding proteins, which help to organize the hundreds of other proteins found in synapses. With the new, enhanced resolution, the researchers were also able to see finer-scale structures, such as the locations of neurotransmitter receptors on the surfaces of the “postsynaptic” cells on the receiving side of the synapse.

“My hope is that we can, in the coming years, really start to map out the organization of these scaffolding and signaling proteins at the synapse,” Boyden says.

Combining expansion microscopy with a new tool called temporal multiplexing should help to achieve that, he believes. Currently, only a limited number of colored probes can be used to image different molecules in a tissue sample. With temporal multiplexing, researchers can label one molecule with a fluorescent probe, take an image, and then wash the probe away. This can then be repeated many times, each time using the same colors to label different molecules.

“By combining iterative expansion with temporal multiplexing, we could in principle have essentially infinite-color, nanoscale-resolution imaging over large 3-D volumes,” Boyden says. “Things are getting really exciting now that these different technologies may soon connect with each other.”

The researchers also hope to achieve a third round of expansion, which they believe could, in principle, enable resolution of about 5 nanometers. However, right now the resolution is limited by the size of the antibodies used to label molecules in the cell. These antibodies are about 10 to 20 nanometers long, so to get resolution below that, researchers would need to create smaller tags or expand the proteins away from each other first and then deliver the antibodies after expansion.

This study was funded by the National Institutes of Health Director’s Pioneer Award, the New York Stem Cell Foundation Robertson Award, the HHMI-Simons Faculty Scholars Award, and the Open Philanthropy Project.

Precise technique tracks dopamine in the brain

MIT researchers have devised a way to measure dopamine in the brain much more precisely than previously possible, which should allow scientists to gain insight into dopamine’s roles in learning, memory, and emotion.

Dopamine is one of the many neurotransmitters that neurons in the brain use to communicate with each other. Previous systems for measuring these neurotransmitters have been limited in how long they provide accurate readings and how much of the brain they can cover. The new MIT device, an array of tiny carbon electrodes, overcomes both of those obstacles.

“Nobody has really measured neurotransmitter behavior at this spatial scale and timescale. Having a tool like this will allow us to explore potentially any neurotransmitter-related disease,” says Michael Cima, the David H. Koch Professor of Engineering in the Department of Materials Science and Engineering, a member of MIT’s Koch Institute for Integrative Cancer Research, and the senior author of the study.

Furthermore, because the array is so tiny, it has the potential to eventually be adapted for use in humans, to monitor whether therapies aimed at boosting dopamine levels are succeeding. Many human brain disorders, most notably Parkinson’s disease, are linked to dysregulation of dopamine.

“Right now deep brain stimulation is being used to treat Parkinson’s disease, and we assume that that stimulation is somehow resupplying the brain with dopamine, but no one’s really measured that,” says Helen Schwerdt, a Koch Institute postdoc and the lead author of the paper, which appears in the journal Lab on a Chip.

Studying the striatum

For this project, Cima’s lab teamed up with David H. Koch Institute Professor Robert Langer, who has a long history of drug delivery research, and Institute Professor Ann Graybiel, who has been studying dopamine’s role in the brain for decades with a particular focus on a brain region called the striatum. Dopamine-producing cells within the striatum are critical for habit formation and reward-reinforced learning.

Until now, neuroscientists have used carbon electrodes with a shaft diameter of about 100 microns to measure dopamine in the brain. However, these can only be used reliably for about a day because they produce scar tissue that interferes with the electrodes’ ability to interact with dopamine, and other types of interfering films can also form on the electrode surface over time. Furthermore, there is only about a 50 percent chance that a single electrode will end up in a spot where there is any measurable dopamine, Schwerdt says.

The MIT team designed electrodes that are only 10 microns in diameter and combined them into arrays of eight electrodes. These delicate electrodes are wrapped in a rigid polymer called PEG, which protects them and keeps them from deflecting as they enter the brain tissue. The PEG dissolves during insertion, however, so it never enters the brain.

These tiny electrodes measure dopamine in the same way that the larger versions do. The researchers apply an oscillating voltage through the electrodes, and when the voltage is at a certain point, any dopamine in the vicinity undergoes an electrochemical reaction that produces a measurable electric current. Using this technique, dopamine’s presence can be monitored at millisecond timescales.
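
The voltage-sweep readout described here resembles fast-scan cyclic voltammetry, and the minimal simulation below sketches the idea: sweep the electrode potential and read the current near dopamine's oxidation potential, where the peak height tracks concentration. The sweep range, the ~+0.6 V oxidation potential, and the linear response are textbook-style assumptions, not the device's actual parameters.

```python
# Minimal sketch of a voltammetry-style dopamine readout: sweep the
# electrode voltage and read the current near the oxidation potential.
# All parameters here are illustrative assumptions, not device values.

import numpy as np

sweep = np.concatenate([np.linspace(-0.4, 1.3, 100),   # ramp up (volts)
                        np.linspace(1.3, -0.4, 100)])  # ramp back down
OXIDATION_V = 0.6   # approximate dopamine oxidation potential (assumed)

def simulated_current(voltage, dopamine_uM):
    """Toy current trace: a flat background plus an oxidation peak whose
    height scales linearly with dopamine concentration (an assumption)."""
    background = 0.05 * voltage
    peak = dopamine_uM * np.exp(-((voltage - OXIDATION_V) / 0.1) ** 2)
    return background + peak

trace = simulated_current(sweep, dopamine_uM=0.5)
peak_i = trace[np.abs(sweep - OXIDATION_V).argmin()]
print(f"current near +{OXIDATION_V} V: {peak_i:.3f} (arbitrary units)")
```

Repeating such sweeps many times per second is what makes millisecond-scale monitoring possible.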

Using these arrays, the researchers demonstrated that they could monitor dopamine levels in many parts of the striatum at once.

“What motivated us to pursue this high-density array was the fact that now we have a better chance to measure dopamine in the striatum, because now we have eight or 16 probes in the striatum, rather than just one,” Schwerdt says.

The researchers found that dopamine levels vary greatly across the striatum. This was not surprising, because they did not expect the entire region to be continuously bathed in dopamine, but this variation has been difficult to demonstrate because previous methods measured only one area at a time.

How learning happens

The researchers are now conducting tests to see how long these electrodes can continue giving a measurable signal, and so far the device has kept working for up to two months. With this kind of long-term sensing, scientists should be able to track dopamine changes over long periods of time, as habits are formed or new skills are learned.

“We and other people have struggled with getting good long-term readings,” says Graybiel, who is a member of MIT’s McGovern Institute for Brain Research. “We need to be able to find out what happens to dopamine in mouse models of brain disorders, for example, or what happens to dopamine when animals learn something.”

She also hopes to learn more about the roles of structures in the striatum known as striosomes. These clusters of cells, discovered by Graybiel many years ago, are distributed throughout the striatum. Recent work from her lab suggests that striosomes are involved in making decisions that induce anxiety.

This study is part of a larger collaboration between Cima’s and Graybiel’s labs that also includes efforts to develop injectable drug-delivery devices to treat brain disorders.

“What links all these studies together is we’re trying to find a way to chemically interface with the brain,” Schwerdt says. “If we can communicate chemically with the brain, it makes our treatment or our measurement a lot more focused and selective, and we can better understand what’s going on.”

Other authors of the paper are McGovern Institute research scientists Minjung Kim, Satoko Amemori, and Hideki Shimazu; McGovern Institute postdoc Daigo Homma; McGovern Institute technical associate Tomoko Yoshida; and undergraduates Harshita Yerramreddy and Ekin Karasan.

The research was funded by the National Institutes of Health, the National Institute of Biomedical Imaging and Bioengineering, and the National Institute of Neurological Disorders and Stroke.

Sensor traces dopamine released by single cells

MIT chemical engineers have developed an extremely sensitive detector that can track single cells’ secretion of dopamine, a brain chemical responsible for carrying messages involved in reward-motivated behavior, learning, and memory.

Using arrays of up to 20,000 tiny sensors, the researchers can monitor dopamine secretion of single neurons, allowing them to explore critical questions about dopamine dynamics. Until now, that has been very difficult to do.

“Now, in real-time, and with good spatial resolution, we can see exactly where dopamine is being released,” says Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering and the senior author of a paper describing the research, which appears in the Proceedings of the National Academy of Sciences the week of Feb. 6.

Strano and his colleagues have already demonstrated that dopamine release occurs differently than scientists expected in a type of neural progenitor cell, helping to shed light on how dopamine may exert its effects in the brain.

The paper’s lead author is Sebastian Kruss, a former MIT postdoc who is now at Göttingen University, in Germany. Other authors are Daniel Salem and Barbara Lima, both MIT graduate students; Edward Boyden, an associate professor of biological engineering and brain and cognitive sciences, as well as a member of the MIT Media Lab and the McGovern Institute for Brain Research; Lela Vukovic, an assistant professor of chemistry at the University of Texas at El Paso; and Emma Vander Ende, a graduate student at Northwestern University.

“A global effect”

Dopamine is a neurotransmitter that plays important roles in learning, memory, and feelings of reward, which reinforce positive experiences.

Neurotransmitters allow neurons to relay messages to nearby neurons through connections known as synapses. However, unlike most other neurotransmitters, dopamine can exert its effects beyond the synapse: Not all dopamine released into a synapse is taken up by the target cell, allowing some of the chemical to diffuse away and affect other nearby cells.

“It has a local effect, which controls the signaling through the neurons, but also it has a global effect,” Strano says. “If dopamine is in the region, it influences all the neurons nearby.”

Tracking this dopamine diffusion in the brain has proven difficult. Neuroscientists have tried using electrodes that are specialized to detect dopamine, but even using the smallest electrodes available, they can place only about 20 near any given cell.

“We’re at the infancy of really understanding how these packets of chemicals move and their directionality,” says Strano, who decided to take a different approach.

Strano’s lab has previously developed sensors made from arrays of carbon nanotubes — hollow, nanometer-thick cylinders made of carbon, which naturally fluoresce when exposed to laser light. By wrapping these tubes in different proteins or DNA strands, scientists can customize them to bind to different types of molecules.

The carbon nanotube sensors used in this study are coated with a DNA sequence that makes them interact with dopamine. When dopamine binds to the nanotubes, they fluoresce more brightly, allowing the researchers to see exactly where the dopamine is released. The researchers deposited more than 20,000 of these nanotubes on a glass slide, creating an array that detects any dopamine secreted by a cell placed on the slide.
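
One way to picture how such an array localizes release: treat each nanotube sensor as a pixel, compute its fractional brightening over baseline (ΔF/F0), and flag pixels that exceed a threshold. The sketch below does this on synthetic data; the array dimensions, noise level, hotspot, and threshold are assumptions, not the study's analysis.

```python
# Sketch of localizing a release event on a fluorescent nanosensor array:
# sensors that brighten sharply relative to baseline mark the release site.
# Array size, noise, hotspot, and threshold are all synthetic assumptions.

import numpy as np

rng = np.random.default_rng(0)
baseline = rng.normal(1.0, 0.02, size=(100, 200))   # baseline F0 for 20,000 sensors
response = baseline.copy()
response[40:45, 80:85] *= 1.5                       # simulated release hotspot

df_over_f = (response - baseline) / baseline        # fractional brightening
hotspots = np.argwhere(df_over_f > 0.2)             # threshold is an assumption
print(f"{len(hotspots)} sensors flagged, centered near row "
      f"{hotspots[:, 0].mean():.0f}, column {hotspots[:, 1].mean():.0f}")
```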

Dopamine diffusion

In the new PNAS study, the researchers used these dopamine sensors to explore a longstanding question about dopamine release in the brain: From which part of the cell is dopamine secreted?

To help answer that question, the researchers placed individual neural progenitor cells known as PC-12 cells onto the sensor arrays. PC-12 cells, which develop into neuron-like cells under the right conditions, have a starfish-like shape with several protrusions that resemble axons, which form synapses with other cells.

After stimulating the cells to release dopamine, the researchers found that certain dopamine sensors near the cells lit up immediately, while those farther away turned on later as the dopamine diffused away. Tracking those patterns over many seconds allowed the researchers to trace how dopamine spreads away from the cells.
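
In a simple two-dimensional diffusion picture, the time for dopamine to reach a sensor grows with the square of its distance from the source (t ≈ r²/4D), so arrival times across the array can, in principle, be turned into an estimate of the diffusion coefficient. The sketch below fits that textbook model to made-up numbers; the distances, times, and resulting value are synthetic, not the study's measurements.

```python
# Sketch of inferring a diffusion coefficient from sensor turn-on times,
# assuming a simple 2-D diffusion model: t_peak ~ r^2 / (4 D).
# Distances and times below are synthetic, not the study's data.

import numpy as np

r_um = np.array([5.0, 10.0, 20.0, 40.0])   # sensor distances from the cell (microns)
t_s = np.array([0.016, 0.062, 0.25, 1.0])  # synthetic times to peak signal (seconds)

# Least-squares fit of t = r^2 / (4 D)  =>  D = sum(r^4) / (4 * sum(r^2 * t))
D = (r_um ** 4).sum() / (4.0 * ((r_um ** 2) * t_s).sum())
print(f"estimated D ~ {D:.0f} um^2/s for these made-up numbers")
```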

One might expect most of the dopamine to be released from the tips of the arms extending out from the cells, Strano says. Instead, the researchers found that more dopamine came from the sides of the arms.

“We have falsified the notion that dopamine should only be released at these regions that will eventually become the synapses,” Strano says. “This observation is counterintuitive, and it’s a new piece of information you can only obtain with a nanosensor array like this one.”

The team also showed that most of the dopamine traveled away from the cell, through protrusions extending in opposite directions. “Even though dopamine is not necessarily being released only at the tip of these protrusions, the direction of release is associated with them,” Salem says.

Other questions that could be explored using these sensors include how dopamine release is affected by the direction of input to the cell, and how the presence of nearby cells influences each cell’s dopamine release.

The research was funded by the National Science Foundation, the National Institutes of Health, a University of Illinois Center for the Physics of Living Cells Postdoctoral Fellowship, the German Research Foundation, and a Liebig Fellowship.