Robotic system offers easier monitoring of single neurons

Recording electrical signals from inside a neuron in the living brain can reveal a great deal of information about that neuron’s function and how it coordinates with other cells in the brain. However, performing this kind of recording is extremely difficult, so only a handful of neuroscience labs around the world do it.

To make this technique more widely available, MIT engineers have now devised a way to automate the process, using a computer algorithm that analyzes microscope images and guides a robotic arm to the target cell.

This technology could allow more scientists to study single neurons and learn how they interact with other cells to enable cognition, sensory perception, and other brain functions. Researchers could also use it to learn more about how neural circuits are affected by brain disorders.

“Knowing how neurons communicate is fundamental to basic and clinical neuroscience. Our hope is this technology will allow you to look at what’s happening inside a cell, in terms of neural computation, or in a disease state,” says Ed Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT, and a member of MIT’s Media Lab and McGovern Institute for Brain Research.

Boyden is the senior author of the paper, which appears in the Aug. 30 issue of Neuron. The paper’s lead author is MIT graduate student Ho-Jun Suk.

Precision guidance

For more than 30 years, neuroscientists have been using a technique known as patch clamping to record the electrical activity of cells. This method, which involves bringing a tiny, hollow glass pipette in contact with the cell membrane of a neuron, then opening up a small pore in the membrane, usually takes a graduate student or postdoc several months to learn. Learning to perform this on neurons in the living mammalian brain is even more difficult.

There are two types of patch clamping: a “blind” (not image-guided) method, which is limited because researchers cannot see where the cells are and can only record from whatever cell the pipette encounters first, and an image-guided version that allows a specific cell to be targeted.

Five years ago, Boyden and colleagues at MIT and Georgia Tech, including co-author Craig Forest, devised a way to automate the blind version of patch clamping. They created a computer algorithm that could guide the pipette to a cell based on measurements of a property called electrical impedance — which reflects how difficult it is for electricity to flow out of the pipette. If there are no cells around, electricity flows and impedance is low. When the tip hits a cell, electricity can’t flow as well and impedance goes up.

Once the pipette detects a cell, it can stop moving instantly, preventing it from poking through the membrane. A vacuum pump then applies suction to form a seal with the cell’s membrane. Then, the electrode can break through the membrane to record the cell’s internal electrical activity.
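The stop-on-contact logic described above can be sketched in a few lines of Python. This is a minimal illustration of the idea, not the authors' control code: the hardware calls (`read_impedance`, `step_pipette`) and the 10 percent jump threshold are hypothetical stand-ins.

```python
# Sketch of the "blind" autopatcher's stop condition: advance the pipette
# in small steps and halt when impedance jumps, signaling cell contact.
# Hardware interfaces and thresholds here are illustrative assumptions.

def detect_cell(read_impedance, step_pipette, max_steps=500,
                jump_fraction=0.10):
    """Advance until impedance rises by `jump_fraction` over baseline."""
    baseline = read_impedance()
    for _ in range(max_steps):
        step_pipette(microns=2)                  # small descent step
        z = read_impedance()
        if z > baseline * (1 + jump_fraction):
            return True                          # contact: stop before puncturing
        baseline = 0.9 * baseline + 0.1 * z      # track slow drift in baseline
    return False
```

In a real system the suction and break-in steps would follow a `True` return; here the function simply reports whether contact was detected before the step budget ran out.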

The researchers achieved very high accuracy using this technique, but it still could not be used to target a specific cell. For most studies, neuroscientists have a particular cell type they would like to learn about, Boyden says.

“It might be a cell that is compromised in autism, or is altered in schizophrenia, or a cell that is active when a memory is stored. That’s the cell that you want to know about,” he says. “You don’t want to patch a thousand cells until you find the one that is interesting.”

To enable this kind of precise targeting, the researchers set out to automate image-guided patch clamping. This technique is difficult to perform manually because, although the scientist can see the target neuron and the pipette through a microscope, he or she must compensate for the fact that nearby cells will move as the pipette enters the brain.

“It’s almost like trying to hit a moving target inside the brain, which is a delicate tissue,” Suk says. “For machines it’s easier because they can keep track of where the cell is, they can automatically move the focus of the microscope, and they can automatically move the pipette.”

By combining several image processing techniques, the researchers developed an algorithm that guides the pipette to within about 25 microns of the target cell. At that point, the system begins to rely on a combination of imagery and impedance, which detects contact between the pipette and the target cell more accurately than either signal alone.

The researchers imaged the cells with two-photon microscopy, a commonly used technique that uses a pulsed laser to send infrared light into the brain, lighting up cells that have been engineered to express a fluorescent protein.

Using this automated approach, the researchers were able to successfully target and record from two types of cells — a class of interneurons, which relay messages between other neurons, and a set of excitatory neurons known as pyramidal cells. They achieved a success rate of about 20 percent, which is comparable to the performance of highly trained scientists performing the process manually.

Unraveling circuits

This technology paves the way for in-depth studies of the behavior of specific neurons, which could shed light on both their normal functions and how they go awry in diseases such as Alzheimer’s or schizophrenia. For example, the interneurons that the researchers studied in this paper have been previously linked with Alzheimer’s. A recent study of mice, led by Li-Huei Tsai, director of MIT’s Picower Institute for Learning and Memory, and conducted in collaboration with Boyden, reported that inducing a specific frequency of brain-wave oscillation in hippocampal interneurons could help clear amyloid plaques similar to those found in Alzheimer’s patients.

“You really would love to know what’s happening in those cells,” Boyden says. “Are they signaling to specific downstream cells, which then contribute to the therapeutic result? The brain is a circuit, and to understand how a circuit works, you have to be able to monitor the components of the circuit while they are in action.”

This technique could also enable studies of fundamental questions in neuroscience, such as how individual neurons interact with each other as the brain makes a decision or recalls a memory.

Bernardo Sabatini, a professor of neurobiology at Harvard Medical School, says he is interested in adapting this technique to use in his lab, where students spend a great deal of time recording electrical activity from neurons growing in a lab dish.

“It’s silly to have amazingly intelligent students doing tedious tasks that could be done by robots,” says Sabatini, who was not involved in this study. “I would be happy to have robots do more of the experimentation so we can focus on the design and interpretation of the experiments.”

To help other labs adopt the new technology, the researchers plan to put the details of their approach on their website, autopatcher.org.

Other co-authors include Ingrid van Welie, Suhasa Kodandaramaiah, and Brian Allen. The research was funded by Jeremy and Joyce Wertheimer, the National Institutes of Health (including the NIH Single Cell Initiative and the NIH Director’s Pioneer Award), the HHMI-Simons Faculty Scholars Program, and the New York Stem Cell Foundation-Robertson Award.

How Biological Memory Really Works: Insights from the Man with the World’s Greatest Memory

 

Jim Karol exhibited no particular talent for memorizing anything early in his life. Far from being a savant, his grades in school were actually pretty bad and, after failing to graduate from college, he spent his 20s working in a factory. He only started playing around with mnemonic techniques at the age of 49, merely as a means to amuse himself while he worked out on the treadmill. Then, in one of the most remarkable cognitive transformations in human history, he turned himself into the man with the world’s greatest memory. Whatever vast body of information is put before him — US zip codes, the day of the week of every date in history, the first few thousand digits of pi, etc. — he voraciously commits to memory using his own inimitable mnemonic techniques. Moreover, unlike most other professional mnemonists, Jim has mastered the mental skill of permanently storing that information in long-term memory, as opposed to only short- or medium-term memory. How does he do it?

To be sure, Jim has taken standard mnemonic techniques to the next level. That said, it has been well documented for over 2,500 years that mnemonic techniques — such as the “Method of Loci” or the “Memory Palace” — dramatically enhance the memory capacity of anyone who uses them regularly. But is there any point to improving one’s memory in the age of the computer? Tony Dottino, the founder and executive director of the USA Memory Championship and a world-renowned memory coach, will describe his experiences of teaching these techniques to all age groups.

Finally, does any of this have anything to do with the neuroscience of memory? McGovern Institute neuroscientist Robert Ajemian argues that it does and that one of the great intellectual misunderstandings in scientific history is that modern-day neuroscientists largely base their conceptualization of human memory on the computer metaphor. For this reason, neuroscientists usually talk of read/write operations, traces, engrams, storage/retrieval distinctions, etc. Ajemian argues that all of this is wrong for the brain, a highly distributed system which processes in parallel. The correct conceptualization of human memory is that of content-addressable memory implemented by attractor networks, and the success of mnemonic techniques, though largely ignored in current theories of memory, constitutes the ultimate proof. Ajemian will briefly outline these arguments.

Tan-Yang Center for Autism Research: Opening Remarks

June 12, 2017
Tan-Yang Center for Autism Research: Opening Remarks
Bob Desimone, Director of the McGovern Institute for Brain Research at MIT
Bob Millard, Chair of MIT Corporation
Lore Harp McGovern, Co-founder of the McGovern Institute for Brain Research at MIT
Hock E. Tan and K. Lisa Yang, Founders of the Tan-Yang Center for Autism Research

On June 12, 2017, the McGovern Institute hosted the launch celebration for the Hock E. Tan and K. Lisa Yang Center for Autism Research. The center is made possible by a kick-off commitment of $20 million, made by Lisa Yang and MIT alumnus Hock Tan ’75.

The Tan-Yang Center for Autism Research will support research on the genetic, biological and neural bases of autism spectrum disorders, which are estimated to affect 1 in 68 individuals in the United States. Tan and Yang hope their initial investment will stimulate additional support and help foster collaborative research efforts to erase the devastating effects of this disorder on individuals, their families and the broader autism community.

Microscopy technique could enable more informative biopsies

MIT and Harvard Medical School researchers have devised a way to image biopsy samples with much higher resolution — an advance that could help doctors develop more accurate and inexpensive diagnostic tests.

For more than 100 years, conventional light microscopes have been vital tools for pathology. However, fine-scale details of cells cannot be seen with these scopes. The new technique relies on an approach known as expansion microscopy, developed originally in Edward Boyden’s lab at MIT, in which the researchers expand a tissue sample to 100 times its original volume before imaging it.

This expansion allows researchers to see features with a conventional light microscope that ordinarily could be seen only with an expensive, high-resolution electron microscope. It also reveals additional molecular information that the electron microscope cannot provide.

“It’s a technique that could have very broad application,” says Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT. He is also a member of MIT’s Media Lab and McGovern Institute for Brain Research, and an HHMI-Simons Faculty Scholar.

In a paper appearing in the 17 July issue of Nature Biotechnology, Boyden and his colleagues used this technique to distinguish early-stage breast lesions with high or low risk of progressing to cancer — a task that is challenging for human observers. This approach can also be applied to other diseases: In an analysis of kidney tissue, the researchers found that images of expanded samples revealed signs of kidney disease that can normally only be seen with an electron microscope.

“Using expansion microscopy, we are able to diagnose diseases that were previously impossible to diagnose with a conventional light microscope,” says Octavian Bucur, an instructor at Harvard Medical School, Beth Israel Deaconess Medical Center (BIDMC), and the Ludwig Center at Harvard, and one of the paper’s lead authors.

MIT postdoc Yongxin Zhao is the paper’s co-lead author. Boyden and Andrew Beck, a former associate professor at Harvard Medical School and BIDMC, are the paper’s senior authors.


“A few chemicals and a light microscope”

Boyden’s original expansion microscopy technique is based on embedding tissue samples in a dense, evenly generated polymer that swells when water is added. Before the swelling occurs, the researchers anchor to the polymer gel the molecules that they want to image, and they digest other proteins that normally hold tissue together.

This tissue enlargement allows researchers to obtain images with a resolution of around 70 nanometers, which was previously possible only with very specialized and expensive microscopes.

In the new study, the researchers set out to adapt the expansion process for biopsy tissue samples, which are usually embedded in paraffin wax, flash frozen, or stained with a chemical that makes cellular structures more visible.

The MIT/Harvard team devised a process to convert these samples into a state suitable for expansion. For example, they remove the chemical stain or paraffin by exposing the tissues to a chemical solvent called xylene. Then, they heat up the sample in another chemical called citrate. After that, the tissues go through an expansion process similar to the original version of the technique, but with stronger digestion steps to compensate for the strong chemical fixation of the samples.

During this procedure, the researchers can also add fluorescent labels for molecules of interest, including proteins that mark particular types of cells, or DNA or RNA with a specific sequence.

“The work of Zhao et al. describes a very clever way of extending the resolution of light microscopy to resolve detail beyond that seen with conventional methods,” says David Rimm, a professor of pathology at the Yale University School of Medicine, who was not involved in the research.

The researchers tested this approach on tissue samples from patients with early-stage breast lesions. One way to predict whether these lesions will become malignant is to evaluate the appearance of the cells’ nuclei. Benign lesions with atypical nuclei have about a fivefold higher probability of progressing to cancer than those with typical nuclei.

However, studies have revealed significant discrepancies between the assessments of nuclear atypia performed by different pathologists, which can potentially lead to an inaccurate diagnosis and unnecessary surgery. An improved system for differentiating benign lesions with atypical and typical nuclei could potentially prevent 400,000 misdiagnoses and hundreds of millions of dollars every year in the United States, according to the researchers.

After expanding the tissue samples, the MIT/Harvard team analyzed them with a machine learning algorithm that can rate the nuclei based on dozens of features, including orientation, diameter, and how much they deviate from true circularity. This algorithm was able to distinguish between lesions that were likely to become invasive and those that were not, with an accuracy of 93 percent on expanded samples compared to only 71 percent on the pre-expanded tissue.
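The idea of rating nuclei from shape features can be illustrated with a toy model. The feature values and the nearest-centroid classifier below are illustrative assumptions, not the study's actual algorithm or data:

```python
# Toy sketch: classify nuclei as "typical" or "atypical" from two shape
# features (diameter in microns, deviation from circularity) using a
# nearest-centroid rule. All numbers are hypothetical.

def centroid(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def nearest_centroid_fit(X, y):
    """Compute one centroid per class label."""
    labels = sorted(set(y))
    return {lbl: centroid([x for x, yy in zip(X, y) if yy == lbl])
            for lbl in labels}

def predict(model, x):
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda lbl: dist2(model[lbl], x))

# Hypothetical training data: [diameter_um, circularity_deviation]
X = [[6.0, 0.05], [6.5, 0.08], [9.0, 0.30], [9.5, 0.35]]
y = ["typical", "typical", "atypical", "atypical"]
model = nearest_centroid_fit(X, y)
```

A production classifier would use dozens of features and a far richer model, but the principle of mapping measured nuclear features to a risk category is the same.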

“These two types of lesions look highly similar to the naked eye, but one has much less risk of cancer,” Zhao says.

The researchers also analyzed kidney tissue samples from patients with nephrotic syndrome, which impairs the kidneys’ ability to filter blood. In these patients, tiny finger-like projections that filter the blood are lost or damaged. These structures are spaced about 200 nanometers apart and therefore can usually be seen only with an electron microscope or expensive super-resolution microscopes.

When the researchers showed the images of the expanded tissue samples to a group of scientists that included pathologists and nonpathologists, the group was able to identify the diseased tissue with 90 percent accuracy overall, compared to only 65 percent accuracy with unexpanded tissue samples.

“Now you can diagnose nephrotic kidney disease without needing an electron microscope, a very expensive machine,” Boyden says. “You can do it with a few chemicals and a light microscope.”

Uncovering patterns

Using this approach, the researchers anticipate that scientists could develop more precise diagnostics for many other diseases. To do that, scientists and doctors will need to analyze many more patient samples, allowing them to discover patterns that would be impossible to see otherwise.

“If you can expand a tissue by one-hundredfold in volume, all other things being equal, you’re getting 100 times the information,” Boyden says.

For example, researchers could distinguish cancer cells based on how many copies of a particular gene they have. Extra copies of genes such as HER2, which the researchers imaged in one part of this study, indicate a subtype of breast cancer that is eligible for specific treatments.

Scientists could also look at the architecture of the genome, or at how cell shapes change as they become cancerous and interact with other cells of the body. Another possible application is identifying proteins that are expressed specifically on the surface of cancer cells, allowing researchers to design immunotherapies that mark those cells for destruction by the patient’s immune system.

Boyden and his colleagues run training courses several times a month at MIT, where visitors can come and watch expansion microscopy techniques, and they have made their protocols available on their website. They hope that many more people will begin using this approach to study a variety of diseases.

“Cancer biopsies are just the beginning,” Boyden says. “We have a new pipeline for taking clinical samples and expanding them, and we are finding that we can apply expansion to many different diseases. Expansion will enable computational pathology to take advantage of more information in a specimen than previously possible.”

Humayun Irshad, a research fellow at Harvard/BIDMC and an author of the study, agrees: “Expanded images result in more informative features, which in turn result in higher-performing classification models.”

Other authors include Harvard pathologist Astrid Weins, who helped oversee the kidney study. Other authors from MIT (Fei Chen) and BIDMC/Harvard (Andreea Stancu, Eun-Young Oh, Marcello DiStasio, Vanda Torous, Benjamin Glass, Isaac E. Stillman, and Stuart J. Schnitt) also contributed to this study.

The research was funded, in part, by the New York Stem Cell Foundation Robertson Investigator Award, the National Institutes of Health Director’s Pioneer Award, the Department of Defense Multidisciplinary University Research Initiative, the Open Philanthropy Project, the Ludwig Center at Harvard, and Harvard Catalyst.

Feng Zhang Wins the 2017 Blavatnik National Award for Young Scientists

The Blavatnik Family Foundation and the New York Academy of Sciences today announced the 2017 Laureates of the Blavatnik National Awards for Young Scientists. Starting with a pool of 308 nominees – the most promising scientific researchers aged 42 years and younger nominated by America’s top academic and research institutions – a distinguished jury first narrowed their selections to 30 Finalists, and then to three outstanding Laureates, one each from the disciplines of Life Sciences, Chemistry and Physical Sciences & Engineering. Each Laureate will receive $250,000 – the largest unrestricted award of its kind for early career scientists and engineers. This year’s Blavatnik National Laureates are:

Feng Zhang, PhD, Core Member, Broad Institute of MIT and Harvard; Associate Professor of Brain and Cognitive Sciences and Biomedical Engineering, MIT; Robertson Investigator, New York Stem Cell Foundation; James and Patricia Poitras ’63 Professor in Neuroscience, McGovern Institute for Brain Research at MIT. Dr. Zhang is being recognized for his role in developing the CRISPR-Cas9 gene-editing system and demonstrating pioneering uses in mammalian cells, and for his development of revolutionary technologies in neuroscience.

Melanie S. Sanford, PhD, Moses Gomberg Distinguished University Professor and Arthur F. Thurnau Professor of Chemistry, University of Michigan. Dr. Sanford is being celebrated for developing simpler chemical approaches – with less environmental impact – to the synthesis of molecules that have applications ranging from carbon dioxide recycling to drug discovery.

Yi Cui, PhD, Professor of Materials Science and Engineering, Photon Science and Chemistry, Stanford University and SLAC National Accelerator Laboratory. Dr. Cui is being honored for his technological innovations in the use of nanomaterials for environmental protection and the development of sustainable energy sources.

“The work of these three brilliant Laureates demonstrates the exceptional science being performed at America’s premier research institutions and the discoveries that will make the lives of future generations immeasurably better,” said Len Blavatnik, Founder and Chairman of Access Industries, head of the Blavatnik Family Foundation, and an Academy Board Governor.

“Each of our 2017 National Laureates is shifting paradigms in areas that profoundly affect the way we tackle the health of our population and our planet — improved ways to store energy, “greener” drug and fuel production, and novel tools to correct disease-causing genetic mutations,” said Ellis Rubinstein, President and CEO of the Academy and Chair of the Awards’ Scientific Advisory Council. “Recognition programs like the Blavatnik Awards provide incentives and resources for rising stars, and help them to continue their important work. We look forward to learning where their innovations and future discoveries will take us in the years ahead.”

The annual Blavatnik Awards, established in 2007 by the Blavatnik Family Foundation and administered by the New York Academy of Sciences, recognize exceptional young researchers who will drive the next generation of innovation by answering today’s most complex and intriguing scientific questions.

A Google map of the brain

At the start of the twentieth century, Santiago Ramón y Cajal’s drawings of brain cells under the microscope revealed a remarkable diversity of cell types within the brain. Through sketch after sketch, Cajal showed that the brain was not, as many believed, a web of self-similar material, but rather that it is composed of billions of cells of many different sizes, shapes, and interconnections.

Yet more than a hundred years later, we still do not know how many cell types make up the human brain. Despite decades of study, the challenge remains daunting, as the brain’s complexity has overwhelmed attempts to describe it systematically or to catalog its parts.

Now, however, this appears about to change, thanks to an explosion of new technical advances in areas ranging from DNA sequencing to microfluidics to computing and microscopy. For the first time, a parts list for the human brain appears to be within reach.

Why is this important? “Until we know all the cell types, we won’t fully understand how they are connected together,” explains McGovern Investigator Guoping Feng. “We know that the brain’s wiring is incredibly complicated, and that the connections are key to understanding how it works, but we don’t yet have the full picture. That’s what we are aiming for. It’s like making a Google map of the brain.”

Identifying the cell types is also important for understanding disease. As genetic risk factors for different disorders are identified, researchers need to know where they act within the brain, and which cell types and connections are disrupted as a result. “Once we know that, we can start to think about new therapeutic approaches,” says Feng, who is also an institute member of the Broad Institute, where he leads the neurobiology program at the Stanley Center for Psychiatric Disorders Research.

Drop by drop

In 2012, computational biologist Naomi Habib arrived from the Hebrew University of Jerusalem to join the labs of McGovern Investigator Feng Zhang and his collaborator Aviv Regev at the Broad Institute. Habib’s plan was to learn new RNA methods as they were emerging. “I wanted to use these powerful tools to understand this fascinating system that is our brain,” she says.

Her rationale was simple, at least in theory. All cells of an organism carry the same DNA instructions, but the instructions are read out differently in each cell type. Stretches of DNA corresponding to individual genes are copied, sometimes thousands of times, into RNA molecules that in turn direct the synthesis of proteins. Differences in which sequences get copied are what give cells their identities: brain cells express RNAs that encode brain proteins, while blood cells express different RNAs, and so on. A given cell can express thousands of genes, providing a molecular “fingerprint” for each cell type.

Analyzing these RNAs can provide a great deal of information about the brain, including potentially the identities of its constituent cell types. But doing this is not easy, because the different cell types are mixed together like salt and pepper within the brain. For many years, studying brain RNA meant grinding up the tissue—an approach that has been compared to studying smoothies to learn about fruit salad.

As methods improved, it became possible to study the tiny quantities of RNA contained within single cells. This opened the door to studying the difference between individual cells, but this required painstaking manipulation of many samples, a slow and laborious process.

A breakthrough came in 2015, with the development of automated methods based on microfluidics. One of these, known as Drop-seq (droplet-based sequencing), was pioneered by Steve McCarroll at Harvard, in collaboration with Regev’s lab at Broad. In this method, individual cells are captured in tiny water droplets suspended in oil. Vast numbers of droplets are automatically pumped through tiny channels, where each undergoes its own separate sequencing reactions. By running multiple samples in parallel, the machines can process tens of thousands of cells and billions of sequences, within hours rather than weeks or months. The power of the method became clear when, in an experiment on the mouse retina, the researchers were able to identify almost every cell type that had ever been described in the retina, effectively recapitulating decades of work in a single experiment.
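The core bookkeeping behind droplet-based sequencing can be sketched simply: each sequencing read carries a cell barcode and a gene tag, and grouping reads by barcode reconstructs a per-cell expression table. The read format below is a simplification for illustration:

```python
# Group sequencing reads by cell barcode to build a per-cell gene-count
# table, the basic output of droplet-based single-cell sequencing.
# Barcodes and gene names below are made up for illustration.

from collections import defaultdict

def count_expression(reads):
    """reads: iterable of (cell_barcode, gene) pairs -> nested count dict."""
    table = defaultdict(lambda: defaultdict(int))
    for barcode, gene in reads:
        table[barcode][gene] += 1
    return {bc: dict(genes) for bc, genes in table.items()}

reads = [("AAC", "Rho"), ("AAC", "Rho"), ("AAC", "Gad1"),
         ("TTG", "Rho"), ("TTG", "Slc17a7")]
counts = count_expression(reads)
```

Real pipelines add error correction on barcodes and deduplication of amplified molecules, but the grouping step is the heart of turning millions of reads into per-cell profiles.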

Drop-seq works well for many tissues, but Habib wanted to apply it to the adult brain, which posed a unique challenge. Mature neurons often bear elaborate branches that become intertwined like tree roots in a forest, making it impossible to separate individual cells without damage.

Nuclear option

So Habib turned to another idea. RNA is made in the nucleus before moving to the cytoplasm, and because nuclei are compact and robust it is easy to recover them intact in large numbers, even from difficult tissues such as brain. The amount of RNA contained in a single nucleus is tiny, and Habib didn’t know if it would be enough to be informative, but Zhang and Regev encouraged her to keep going. “You have to be optimistic,” she says. “You have to try.”

Fortunately, the experiment worked. In a paper with Zhang and Regev, she was able to isolate nuclei from newly formed neurons in the adult mouse hippocampus (a brain structure involved in memory), and by analyzing their RNA profiles individually she could order them in a series according to their age, revealing their developmental history from birth to maturity.

Now, after much further experimentation, Habib and her colleagues have managed to apply the droplet method to nuclei, making it possible for the first time to analyze huge numbers of cells from adult brain—at least ten times more than with previous methods.

This opens up many new avenues, including the study of human postmortem tissue, given that RNA in nuclei can survive for years in frozen samples. Habib is already starting to examine tissue taken at autopsy from patients with Alzheimer’s and other neurodegenerative diseases. “The neurons are degenerating, but the other cells around them could also be contributing to the degenerative process,” she says. “Now we have these tools, we can look at what happens during the progression of the disease.”

Computing cells

Once the sequencing is completed, the results are analyzed using sophisticated computational methods. When the results emerge, data from individual cells are visualized as colored dots, clustered on a graph according to their statistical similarities. But because the cells were dissociated at the start of the experiment, information about their appearance and origin within the brain is lost.
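One common way to measure the statistical similarity that drives this clustering is cosine similarity between expression vectors, shown here as a minimal sketch with made-up gene counts (the study's actual pipeline is more sophisticated):

```python
# Minimal sketch: cosine similarity between per-cell expression vectors
# (gene counts). Values near 1 indicate similar molecular fingerprints.
# The counts below are invented for illustration.

import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = (math.sqrt(sum(x * x for x in a)) *
            math.sqrt(sum(y * y for y in b)))
    return dot / norm

neuron_a = [10, 0, 5, 1]   # hypothetical counts over four genes
neuron_b = [9, 1, 6, 0]    # similar profile: likely the same cell type
glial_c = [0, 12, 1, 8]    # different profile: a different cell type
```

Cells whose pairwise similarities are high end up in the same cluster of dots on the graph; the hard part, as the article notes, is mapping those abstract clusters back onto real locations in the brain.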

To find out how these abstract displays correspond to the visible cells of the brain, Habib teamed up with Yinqing Li, a former graduate student with Zhang who is now a postdoc in the lab of Guoping Feng. Li began with existing maps from the Allen Institute, a public repository with thousands of images showing expression patterns for individual genes within mouse brain. By comparing these maps with the molecular fingerprints from Habib’s nuclear RNA sequencing experiments, Li was able to make a map of where in the brain each cell was likely to have come from.

It was a good first step, but still not perfect. “What we really need,” he says, “is a method that allows us to see every RNA in individual cells. If we are studying a brain disease, we want to know which neurons are involved in the disease process, where they are, what they are connected to, and which special genes might be involved so that we can start thinking about how to design a drug that could alter the disease.”

Expanding horizons

So Li partnered with Asmamaw (Oz) Wassie, a graduate student in the lab of McGovern Investigator Ed Boyden, to tackle the problem. Wassie had previously studied bioengineering as an MIT undergraduate, where he had helped build an electronic “artificial nose” for detecting trace chemicals in air. With support from a prestigious Hertz Fellowship, he joined Boyden’s lab, where he is now working on the development of a method known as expansion microscopy.

In this method, a sample of tissue is embedded with a polymer that swells when water is added. The entire sample expands in all directions, allowing scientists to see fine details such as connections between neurons, using an ordinary microscope. Wassie recently helped develop a way to anchor RNA molecules to the polymer matrix, allowing them to be physically secured during the expansion process. Now, within the expanded samples he can see the individual molecules using a method called fluorescent in situ hybridization (FISH), in which each RNA appears as a glowing dot under the microscope. Currently, he can label only a handful of RNA types at once, but by using special sets of probes, applied sequentially, he thinks it will soon be possible to distinguish thousands of different RNA sequences.
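The combinatorial power of sequential probing is easy to see with a little arithmetic. The article does not specify the coding scheme, but if each round assigns one of k distinguishable colors to an RNA species, n rounds yield k to the power n distinct codewords; the sketch below illustrates that counting argument with assumed color names:

```python
# Sequential FISH rounds give each RNA species a color "codeword".
# With k distinguishable colors and n hybridization rounds, up to k**n
# species can be told apart. Color labels here are assumptions.

from itertools import product

def num_codes(colors, rounds):
    """Maximum number of distinguishable RNA species."""
    return colors ** rounds

def codebook(colors, rounds):
    """Enumerate all color sequences, e.g. ('R', 'G') for two rounds."""
    palette = ["R", "G", "B", "Y"][:colors]
    return list(product(palette, repeat=rounds))
```

For instance, four colors over six rounds already give 4,096 codewords, which is how a handful of labels per round can scale to thousands of distinguishable RNA sequences.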

“That will help us to see what each cell looks like, how they are connected to each other, and what RNAs they contain,” says Wassie. By combining this information with the RNA expression data generated by Li and Habib, it will be possible to reveal the organization and fine structure of complex brain areas and perhaps to identify new cell types that have not yet been recognized.

Looking ahead

Li plans to apply these methods to a brain structure known as the thalamic reticular nucleus (TRN) – a sheet of tissue, about ten neurons thick in mice, that sits on top of the thalamus and close to the cortex. The TRN is not well understood, but it is important for controlling sleep, attention and sensory processing, and it has caught the interest of Feng and other neuroscientists because it expresses a disproportionate number of genes implicated in disorders such as autism, attention deficit hyperactivity disorder, and intelligence deficits. Together with Joshua Levin’s group at Broad, Li has already used nuclear RNA sequencing to identify the cell types in the TRN, and he has begun to examine them within the intact brain using the expansion techniques. “When you map these precise cell types back to the tissue, you can integrate the gene expression information with everything else, like electrophysiology, connectivity, morphology,” says Li. “Then we can start to ask what’s going wrong in disease.”

Meanwhile, Feng is already looking beyond the TRN, and planning how to scale the approach to other structures and eventually to the entire brain. He returns to the metaphor of a Google map. “Microscopic images are like satellite photos,” he says. “Now with expansion microscopy we can add another layer of information, like property boundaries and individual buildings. And knowing which RNAs are in each cell will be like seeing who lives in those buildings. I think this will completely change how we view the brain.”

Tan-Yang Center for Autism Research: Feng Zhang

June 12, 2017
Tan-Yang Center for Autism Research: Launch Celebration
Feng Zhang, McGovern Institute for Brain Research
“Gene Therapy for the Brain”

Socioeconomic background linked to reading improvement

About 20 percent of children in the United States have difficulty learning to read, and educators have devised a variety of interventions to try to help them. Not every program helps every student, however, in part because the origins of their struggles are not identical.

MIT neuroscientist John Gabrieli is trying to identify factors that may help to predict individual children’s responses to different types of reading interventions. As part of that effort, he recently found that children from lower-income families responded much better to a summer reading program than children from a higher socioeconomic background.

Using magnetic resonance imaging (MRI), the research team also found anatomical changes in the brains of children whose reading abilities improved — in particular, a thickening of the cortex in parts of the brain known to be involved in reading.

“If you just left these children [with reading difficulties] alone on the developmental path they’re on, they would have terrible troubles reading in school. We’re taking them on a neuroanatomical detour that seems to go with real gains in reading ability,” says Gabrieli, the Grover M. Hermann Professor in Health Sciences and Technology, a professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Rachel Romeo, a graduate student in the Harvard-MIT Program in Health Sciences and Technology, and Joanna Christodoulou, an assistant professor of communication sciences and disorders at the Massachusetts General Hospital Institute of Health Professions, are the lead authors of the paper, which appears in the June 7 issue of the journal Cerebral Cortex.

Predicting improvement

In hopes of identifying factors that influence children’s responses to reading interventions, the MIT team set up two summer schools based on a program known as Lindamood-Bell. The researchers recruited students from a wide income range, although socioeconomic status was not the original focus of their study.

The Lindamood-Bell program focuses on helping students develop the sensory and cognitive processing necessary for reading, such as thinking about words as units of sound, and translating printed letters into word meanings.

Children participating in the study, who ranged from 6 to 9 years old, spent four hours a day, five days a week in the program, for six weeks. Before and after the program, their brains were scanned with MRI and they were given some commonly used tests of reading proficiency.

In tests taken before the program started, children from higher and lower socioeconomic status (SES) backgrounds fared equally poorly in most areas, with one exception: children from higher SES backgrounds had higher vocabulary scores, a difference that has also been seen in studies comparing nondyslexic readers from different SES backgrounds.

“There’s a strong trend in these studies that higher SES families tend to talk more with their kids and also use more complex and diverse language. That tends to be where the vocabulary correlation comes from,” Romeo says.

The researchers also found differences in brain anatomy before the reading program started. Children from higher socioeconomic backgrounds had thicker cortex in a part of the brain known as Broca’s area, which is necessary for language production and comprehension. These anatomical differences could account for the gap in vocabulary scores between the two groups.

Based on a limited number of previous studies, the researchers hypothesized that the reading program would have more of an impact on the students from higher socioeconomic backgrounds. But in fact, they found the opposite. About half of the students improved their scores, while the other half worsened or stayed the same. When the researchers analyzed the data for possible explanations, family income level was the one factor that proved significant.

“Socioeconomic status just showed up as the piece that was most predictive of treatment response,” Romeo says.

The same children whose reading scores improved also displayed changes in their brain anatomy. Specifically, the researchers found that they had a thickening of the cortex in a part of the brain known as the temporal occipital region, which comprises a large network of structures involved in reading.

“Mix of causes”

The researchers believe that their results may have differed from previous studies of reading intervention in low-SES students because their program was run during the summer, rather than during the school year.

“Summer is when socioeconomic status takes its biggest toll. Low SES kids typically have less academic content in their summer activities compared to high SES, and that results in a slump in their skills,” Romeo says. “This may have been particularly beneficial for them because it may have been out of the realm of their typical summer.”

The researchers also hypothesize that reading difficulties may arise in slightly different ways among children of different SES backgrounds.

“There could be a different mix of causes,” Gabrieli says. “Reading is a complicated skill, so there could be a number of different factors that would make you do better or do worse. It could be that those factors are a little bit different in children with more enriched or less enriched environments.”

The researchers are hoping to identify more precisely which aspects of socioeconomic status, other environmental influences, or genetic components could predict which types of reading interventions will be successful for individual students.

“In medicine, people call it personalized medicine: this idea that some people will really benefit from one intervention and not so much from another,” Gabrieli says. “We’re interested in understanding the match between the student and the kind of educational support that would be helpful for that particular student.”

The research was funded by the Ellison Medical Foundation, the Halis Family Foundation, Lindamood-Bell Learning Processes, and the National Institutes of Health.

McGovern Institute 2017 Retreat

On June 5-6, McGovern researchers and staff gathered in Newport, Rhode Island, for the annual McGovern Institute retreat. The overnight retreat featured talks, a poster session, a Newport Harbor cruise (for those willing to brave the cool, wet weather) and a dance party.