A sense of timing

The ability to measure time and to control the timing of actions is critical for almost every aspect of behavior. Yet the mechanisms by which our brains process time are still largely mysterious.

We experience time on many different scales—from milliseconds to years—but of particular interest is the middle range, the scale of seconds over which we perceive time directly, and over which many of our actions and thoughts unfold.

“We speak of a sense of time, yet unlike our other senses there is no sensory organ for time,” says McGovern Investigator Mehrdad Jazayeri. “It seems to come entirely from within. So if we understand time, we should be getting close to understanding mental processes.”

Singing in the brain

Emily Mackevicius comes to work in the early morning because that’s when her birds are most likely to sing. A graduate student in the lab of McGovern Investigator Michale Fee, she is studying zebra finches, songbirds that learn to sing by copying their fathers. Bird song involves a complex and precisely timed set of movements, and Mackevicius, who plays the cello in her spare time, likens it to musical performance. “With every phrase, you have to learn a sequence of finger movements and bowing movements, and put it all together with exact timing. The birds are doing something very similar with their vocal muscles.”

A typical zebra finch song lasts about one second, and consists of several syllables, produced at a rate similar to the syllables in human speech. Each song syllable involves a precisely timed sequence of muscle commands, and understanding how the bird’s brain generates this sequence is a central goal for Fee’s lab. Birds learn it naturally without any need for training, making it an ideal model for understanding the complex action sequences that represent the fundamental “building blocks” of behavior.

Some years ago Fee and colleagues made a surprising discovery that has shaped their thinking ever since. Within a part of the bird brain called HVC, they found neurons that fire a single short burst of pulses at exactly the same point on every repetition of the song. Each burst lasts about a hundredth of a second, and different neurons fire at different times within the song. With about 20,000 neurons in HVC, it was easy to imagine that there would be specific neurons active at every point in the song, meaning that each time point could be represented by the activity of a handful of individual neurons.

Proving this was not easy—“we had to wait about ten years for the technology to catch up,” says Fee—but they finally succeeded last year, when students Tatsuo Okubo and Galen Lynch analyzed recordings from hundreds of individual HVC neurons, and found that they do indeed fire in a fixed sequence, covering the entire song period.

“We think it’s like a row of falling dominoes,” says Fee. “The neurons are connected to each other so that when one fires it triggers the next one in the chain.” It’s an appealing model, because it’s easy to see how a chain of activity could control complex action sequences, simply by connecting individual time-stamp neurons to downstream motor neurons. With the correct connections, each movement is triggered at the right time in the sequence. Fee believes these motor connections are learned through trial and error—like babies babbling as they learn to speak—and a separate project in his lab aims to understand how this learning occurs.

But the domino metaphor also raises another question: who sets up the dominoes in the first place? Mackevicius and Okubo, along with summer student Hannah Payne, set out to answer this question, asking how HVC becomes wired to produce these precisely timed chain reactions.

Mackevicius, who studied math as an undergraduate before turning to neuroscience, developed computer simulations of the HVC neuronal network, and Okubo ran experiments to test the predictions, recording from young birds at different stages in the learning process. “We found that setting up a chain is surprisingly easy,” says Mackevicius. “If we start with a randomly connected network, and make some realistic assumptions about the ‘plasticity rules’ by which synapses change with repeated use, these chains emerge spontaneously. All you need is to give them a push—like knocking over the first domino.”
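
The gist of this result can be illustrated with a toy simulation. The sketch below is a minimal, illustrative model written for this article, not the Fee lab's actual code: a randomly connected network with an asymmetric Hebbian rule (a synapse is strengthened when its presynaptic neuron fires one step before the postsynaptic neuron), synaptic normalization, and a simple global-inhibition limit on how many neurons may fire per time step. Repeatedly driving a small seed group, the "push" that knocks over the first domino, lets a feedforward chain emerge from the initially random wiring.

```python
import numpy as np

# Toy model of chain formation in a randomly connected network (illustrative
# only; parameters and rules are simplified assumptions, not the published model).
rng = np.random.default_rng(0)
N = 120                        # number of model HVC neurons
K = 5                          # neurons allowed to fire per step (global inhibition)
W = rng.random((N, N)) * 0.1   # W[i, j] is the weight of the synapse j -> i
np.fill_diagonal(W, 0.0)
SEED = np.arange(K)            # neurons driven by the external "push"
W_MAX_IN = 2.0                 # cap on each neuron's summed input weight
ETA = 0.05                     # learning rate

def run_trial(W, steps=20):
    """One rendition: push the seed group, let activity propagate, apply plasticity."""
    x = np.zeros(N)
    x[SEED] = 1.0
    sequence = [SEED.tolist()]
    for _ in range(steps):
        drive = W @ x
        drive[SEED] = 0.0                      # the seed fires only at the start
        winners = np.argsort(drive)[-K:]       # the K most strongly driven neurons
        x_new = np.zeros(N)
        x_new[winners] = (drive[winners] > 0).astype(float)
        if x_new.sum() == 0:
            break
        W += ETA * np.outer(x_new, x)          # pre-at-t, post-at-(t+1) potentiation
        totals = W.sum(axis=1, keepdims=True)  # normalize total input per neuron
        W *= np.minimum(1.0, W_MAX_IN / np.maximum(totals, 1e-9))
        sequence.append(np.flatnonzero(x_new).tolist())
        x = x_new
    return sequence

for _ in range(300):                           # repeated renditions sculpt the chain
    sequence = run_trial(W)

# After training, activity tends to sweep through the network as a sparse,
# repeatable sequence: different small groups of neurons fire at different steps.
print(sequence[:6])
```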

Their results also suggested how a young bird learns to produce different syllables, as it progresses from repetitive babbling to a more adult-like song. “At first, there’s just one big burst of neural activity, but as the song becomes more complex, the activity gradually spreads out in time and splits into different sequences, each controlling a different syllable. It’s as if you started with lots of dominoes all clumped together, and then gradually they become sorted into different rows.”

Does something similar happen in the human brain? “It seems very likely,” says Fee. “Many of our movements are precisely timed—think about speaking a sentence, playing a musical instrument, or delivering a tennis serve. Even our thoughts often happen in sequences. Things happen faster in birds than in mammals, but we suspect the underlying mechanisms will be very similar.”

Speed control

One floor above the Fee lab, Mehrdad Jazayeri is also studying how time controls actions, using humans and monkeys rather than birds. Like Fee, Jazayeri comes from an engineering background, and his goal is to understand, with an engineer’s level of detail, how we perceive time and use it flexibly to control our actions.

To begin to answer this question, Jazayeri trained monkeys to remember time intervals of a few seconds or less, and to reproduce them by pressing a button or making an eye movement at the correct time after a visual cue appears on a screen. He then recorded brain activity as the monkeys performed this task, to find out how the brain measures elapsed time. “There were two prominent ideas in the field,” he explains. “One idea was that there is an internal clock, and that the brain can somehow count the accumulating ticks. Another class of models had proposed that there are multiple oscillators that come in and out of phase at different times.”

When they examined the recordings, however, the results did not fit either model. Despite searching across multiple brain areas, Jazayeri and his colleagues found no sign of ticking or oscillations. Instead, their recordings revealed complex patterns of activity, distributed across populations of neurons; moreover, as the monkey produced longer or shorter intervals, these activity patterns were stretched or compressed in time, to fit the overall duration of each interval. In other words, says Jazayeri, the brain circuits were able to adjust the speed with which neural signals evolve over time. He compares it to a group of musicians performing a complex piece of music. “Each player has their own part, which they can play faster or slower depending on the overall tempo of the music.”

Ready-set-go

Jazayeri is also using time as a window onto a broader question—how our perceptions and decisions are shaped by past experience. “It’s one of the great questions in neuroscience, but it’s not easy to study. One of the great advantages of studying timing is that it’s easy to measure precisely, so we can frame our questions in precise mathematical ways.”

The starting point for this work was a deceptively simple task, which Jazayeri calls “Ready-Set-Go.” In this task, the subject is given the first two beats of a regular rhythm (“Ready, Set”) and must then generate the third beat (“Go”) at the correct time. To perform this task, the brain must measure the duration between Ready and Set and then immediately reproduce it.

Humans can do this fairly accurately, but not perfectly—their response times are imprecise, presumably because there is some “noise” in the neural signals that convey timing information within the brain. In the face of this uncertainty, the optimal strategy (known mathematically as Bayesian inference) is to bias the time estimates based on prior expectations, and this is exactly what happened in Jazayeri’s experiments. If the intervals in previous trials were shorter, people tend to underestimate the next interval; if the previous intervals were longer, they tend to overestimate it. In other words, people use their memory to improve their time estimates.
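
A small numerical example shows why this bias is the statistically optimal strategy. The sketch below is illustrative only; the prior range, noise level, and estimator are assumptions made for this example, not the study's fitted model. An observer measures an interval with noise proportional to its duration, combines the measurement with a prior over the intervals used in the experiment, and reports the posterior mean, which is pulled toward the middle of the prior; a prior built from shorter intervals pulls estimates down, one built from longer intervals pulls them up.

```python
import numpy as np

# Illustrative Bayesian observer for interval reproduction (assumed parameters,
# not values from the study).
rng = np.random.default_rng(1)
PRIOR = np.arange(0.60, 1.01, 0.05)   # possible sample intervals (s), uniform prior
WEBER = 0.10                          # measurement noise is ~10% of the interval

def bayes_estimate(measurement):
    """Posterior-mean estimate of the interval given one noisy measurement."""
    sigma = WEBER * PRIOR
    likelihood = np.exp(-0.5 * ((measurement - PRIOR) / sigma) ** 2) / sigma
    posterior = likelihood / likelihood.sum()
    return float(posterior @ PRIOR)

# Short intervals are overestimated and long ones underestimated: both are pulled
# toward the mean of the prior, which reduces the average error under noise.
for true_interval in (0.6, 0.8, 1.0):
    m = true_interval * (1 + WEBER * rng.standard_normal())
    print(f"true {true_interval:.2f} s -> estimate {bayes_estimate(m):.2f} s")
```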

Monkeys can also learn this task and show similar biases, providing an opportunity to study how the brain establishes and stores these prior expectations, and how these expectations influence subsequent behavior. Again, Jazayeri and colleagues recorded from large numbers of neurons during the task. The resulting activity patterns are complex and not easily described in words, but in mathematical terms, the activity forms a geometric structure known as a manifold. “Think of it as a curved surface, analogous to a cylinder,” he says. “In the past, people could not see it because they could only record from one or a few neurons at a time. We have to measure activity across large numbers of neurons simultaneously if we want to understand the workings of the system.”

Computing time

To interpret their data, Jazayeri and his team often turn to computer models based on artificial neural networks. “These models are a powerful tool in our work because we can fully reverse-engineer them and gain insight into the underlying mechanisms,” he explains. His lab has now succeeded in training a recurrent neural network that can perform the Ready-Set-Go task, and they have found that the model develops a manifold similar to the real brain data. This has led to the intriguing conjecture that memory of past experiences can be embedded in the structure of the manifold.
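
As an illustration of this general approach, the sketch below trains a small recurrent network on a simplified Ready-Set-Go task. It is a generic, assumed setup; the task timing, network size, and training details are placeholders, not the lab's published model. The input carries brief Ready and Set pulses separated by a sample interval, and the target output ramps up to reach one at the time the Go response is due.

```python
import torch
import torch.nn as nn

# Simplified Ready-Set-Go task for a small recurrent network (illustrative
# parameters; not the published model).
DT = 10                                  # time step (ms)
INTERVALS = [500, 600, 700, 800]         # sample intervals (ms)
T = 250                                  # time steps per trial (2.5 s)

def make_batch(batch_size=32):
    x = torch.zeros(batch_size, T, 2)    # channel 0: Ready pulse, channel 1: Set pulse
    y = torch.zeros(batch_size, T, 1)    # target output: ramp ending at the Go time
    for b in range(batch_size):
        interval = INTERVALS[torch.randint(len(INTERVALS), (1,)).item()]
        ready_t = 10                                  # Ready at 100 ms
        set_t = ready_t + interval // DT
        go_t = set_t + interval // DT
        x[b, ready_t, 0] = 1.0
        x[b, set_t, 1] = 1.0
        y[b, set_t:go_t + 1, 0] = torch.linspace(0.0, 1.0, go_t - set_t + 1)
    return x, y

class ReadySetGoNet(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(2, hidden, batch_first=True)
        self.readout = nn.Linear(hidden, 1)

    def forward(self, x):
        h, _ = self.rnn(x)               # hidden-state trajectory, shape (B, T, hidden)
        return self.readout(h)

model = ReadySetGoNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    x, y = make_batch()
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The trained network's hidden trajectories for different intervals can then be
# collected (model.rnn(x)[0]) and reduced with PCA to look for the kind of
# low-dimensional geometric structure described in the text.
```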

Jazayeri concludes: “We haven’t connected all the dots, but I suspect that many questions about brain and behavior will find their answers in the geometry and dynamics of neural activity.” His long-term ambition is to develop predictive models of brain function. As an analogy, he says, think of a pendulum. “If we know its current state—its position and speed—we can predict with complete confidence what it will do next, and how it will respond to a perturbation. We don’t have anything like that for the brain—nobody has been able to do that, not even for the simplest brain functions. But that’s where we’d eventually like to be.”

A clock within the brain?

It is not yet clear how the mechanisms studied by Fee and Jazayeri are related. “We talk together often, but we are still guessing how the pieces fit together,” says Fee. But one thing they both agree on is the lack of evidence for any central clock within the brain. “Most people have this intuitive feeling that time is a unitary thing, and that there must be some central clock inside our head, coordinating everything like the conductor of the orchestra or the clock inside your computer,” says Jazayeri. “Even many experts in the field believe this, but we don’t think it’s right.” Rather, his work and Fee’s both point to the existence of separate circuits for different time-related behaviors, such as singing. If there is no clock, how do the different systems work together to create our apparently seamless perception of time? “It’s still a big mystery,” says Jazayeri. “Questions like that are what make neuroscience so interesting.”

 

Ten researchers from MIT and Broad receive NIH Director’s Awards

The High-Risk, High-Reward Research (HRHR) program, supported by the National Institutes of Health (NIH) Common Fund, has awarded 86 grants to scientists with unconventional approaches to major challenges in biomedical and behavioral research. Ten of the awardees are affiliated with MIT and the Broad Institute of MIT and Harvard.

The NIH typically supports research projects, not individual scientists, but the HRHR program identifies specific researchers with innovative ideas to address gaps in biomedical research. The program issues four types of awards annually — the Pioneer Award, the New Innovator Award, the Transformative Research Award and the Early Independence Award — to “high-caliber investigators whose ideas stretch the boundaries of our scientific knowledge.”

Four researchers who are affiliated with either MIT or the Broad Institute received this year’s New Innovator Awards, which support “unusually innovative research” from early career investigators. They are:

  • Paul Blainey, an MIT assistant professor of biological engineering and a core member of the Broad Institute, is an expert in microanalysis systems for studies of individual molecules and cells. The award will fund the establishment of a new technology that enables advanced readout from living cells.
  • Kevin Esvelt, an associate professor of media arts and sciences at MIT’s Media Lab, invents new ways to study and influence the evolution of ecosystems. Esvelt plans to use the NIH grant to develop powerful “daisy drive” systems for more precise genetic alterations of wild organisms. Such an intervention has the potential to serve as a powerful weapon against malaria, Zika, Lyme disease, and many other infectious diseases.
  • Evan Macosko is an associate member of the Broad Institute who develops molecular techniques to more deeply understand the function of cellular specialization in the nervous system. Macosko’s award will fund a novel technology, Slide-seq, which enables genome-wide expression analysis of brain tissue sections at single-cell resolution.
  • Gabriela Schlau-Cohen, an MIT assistant professor of chemistry, combines tools from chemistry, optics, biology, and microscopy to develop new approaches to study the dynamics of biological systems. Her award will be used to fund the development of a new nanometer-distance assay that directly accesses protein motion with unprecedented spatiotemporal resolution under physiological conditions.

Recipients of the Early Independence Award include three Broad Institute Fellows. The award gives “exceptional junior scientists” an opportunity to skip traditional postdoctoral training and move immediately into independent research positions.

  • Ahmed Badran is a Broad Institute Fellow who studies the function of ribosomes and the control of protein synthesis. Ribosomes are important targets for antibiotics, and the NIH award will support the development of a new technology platform for probing ribosome function within living cells.
  • Fei Chen, a Broad Institute Fellow who is also a research affiliate at MIT’s McGovern Institute for Brain Research, has pioneered novel molecular and microscopy tools to illuminate biological pathways and function. He will use one of these tools, expansion microscopy, to explore the molecular basis of glioblastomas, an aggressive form of brain cancer.
  • Hilary Finucane, a Broad Institute Fellow who recently received her PhD from MIT’s Department of Mathematics, develops computational methods for analyzing biological data. She plans to develop methods to analyze large-scale genomic data to identify disease-relevant cell types and tissues, a necessary first step for understanding molecular mechanisms of disease.

Among the recipients of the NIH’s Pioneer Awards are Kay Tye, an assistant professor of brain and cognitive sciences at MIT and a member of MIT’s Picower Institute for Learning and Memory, and Feng Zhang, the James and Patricia Poitras ’63 Professor in Neuroscience, an associate professor of brain and cognitive sciences and biological engineering at MIT, a core member of the Broad Institute, and an investigator at MIT’s McGovern Institute for Brain Research. Recipients of this award are challenged to pursue “groundbreaking, high-impact approaches to a broad area of biomedical or behavioral science.” Tye, who studies the brain mechanisms underlying emotion and behavior, will use her award to look at the neural representation of social homeostasis and social rank. Zhang, who pioneered the gene-editing technology known as CRISPR, plans to develop a suite of tools designed to achieve precise genome surgery for repairing disease-causing changes in DNA.

Ed Boyden, an associate professor of brain and cognitive sciences and biological engineering at MIT, and a member of MIT’s Media Lab and McGovern Institute for Brain Research, is a recipient of the Transformative Research Award. This award promotes “cross-cutting, interdisciplinary approaches that could potentially create or challenge existing paradigms.” Boyden, who develops new strategies for understanding and engineering brain circuits, will use the grant to develop high-speed 3-D imaging of neural activity.

This year, the NIH issued a total of 12 Pioneer Awards, 55 New Innovator Awards, 8 Transformative Research Awards, and 11 Early Independence Awards. The awards total $263 million and represent contributions from the NIH Common Fund; National Institute of General Medical Sciences; National Institute of Mental Health; National Center for Complementary and Integrative Health; and National Institute of Dental and Craniofacial Research.

“I continually point to this program as an example of the creative and revolutionary research NIH supports,” said NIH Director Francis S. Collins. “The quality of the investigators and the impact their research has on the biomedical field is extraordinary.”

Gene-editing technology developer Feng Zhang awarded $500,000 Lemelson-MIT Prize

Feng Zhang, a pioneer of the revolutionary CRISPR gene-editing technology, TAL effector proteins, and optogenetics, is the recipient of the 2017 $500,000 Lemelson-MIT Prize, the largest cash prize for invention in the United States. Zhang is a core member of the Broad Institute of MIT and Harvard, an investigator at the McGovern Institute for Brain Research, the James and Patricia Poitras Professor in Neuroscience at MIT, and associate professor in the departments of Brain and Cognitive Sciences and Biological Engineering at MIT.

Zhang and his team were first to develop and demonstrate successful methods for using an engineered CRISPR-Cas9 system to edit genomes in living mouse and human cells and have turned CRISPR technology into a practical and shareable collection of tools for robust gene editing and epigenomic manipulation. CRISPR, short for Clustered Regularly Interspaced Short Palindromic Repeats, has been harnessed by Zhang and his team as a groundbreaking gene-editing tool that is simple and versatile to use. A key tenet of Zhang’s is to encourage further development and research through open sharing of tools and scientific collaboration. Zhang believes that wide use of CRISPR-based tools will further our understanding of biology, allowing scientists to identify genetic differences that contribute to diseases and, eventually, provide the basis for new therapeutic techniques.

Zhang’s lab has trained thousands of researchers to use CRISPR technology, and since 2013 he has shared over 40,000 plasmid samples with labs around the world both directly and through the nonprofit Addgene, enabling wide use of his CRISPR tools in their research.

Zhang began working in a gene therapy laboratory at the age of 16 and has played key roles in the development of multiple technologies. Prior to harnessing CRISPR-Cas9, Zhang engineered microbial TAL effectors (TALEs) for use in mammalian cells, working with colleagues at Harvard University, authoring multiple publications on the subject and becoming a co-inventor on several patents on TALE-based technologies. Zhang was also a key member of the team at Stanford University that harnessed microbial opsins for developing optogenetics, which uses light signals and light-sensitive proteins to monitor and control activity in brain cells. This technology can help scientists understand how cells in the brain affect mental and neurological illnesses. Zhang has co-authored multiple publications on optogenetics and is a co-inventor on several patents related to this technology.

Zhang’s numerous scientific discoveries and inventions, as well as his commitment to mentorship and collaboration, earned him the Lemelson-MIT Prize, which honors outstanding mid-career inventors who improve the world through technological invention and demonstrate a commitment to mentorship in science, technology, engineering and mathematics (STEM).

“Feng’s creativity and dedication to problem-solving impressed us,” says Stephanie Couch, executive director of the Lemelson-MIT Program. “Beyond the breadth of his own accomplishments, Feng and his lab have also helped thousands of scientists across the world access the new technology to advance their own scientific discoveries.”

“It is a tremendous honor to receive the Lemelson-MIT Prize and to join the company of so many incredibly impactful inventors who have won this prize in years past,” says Zhang. “Invention has always been a part of my life; I think about new problems every day and work to solve them creatively. This prize is a testament to the passionate work of my team and the support of my family, teachers, colleagues and counterparts around the world.”

The $500,000 prize, which carries no restrictions on how it can be used, is made possible through the support of The Lemelson Foundation, the world’s leading funder of invention in service of social and economic change.

“We are thrilled to honor Dr. Zhang, who we commend for his advancements in genetics, and more importantly, his willingness to share his discoveries to advance the work of others around the world,” says Dorothy Lemelson, chair of The Lemelson Foundation. “Zhang’s work is inspiring a new generation of inventors to tackle the biggest problems of our time.”

Zhang will speak at EmTech MIT, the annual conference on emerging technologies hosted by MIT Technology Review at the MIT Media Lab on Tuesday, Nov. 7.

The Lemelson-MIT Program is now seeking nominations for the 2018 $500,000 Lemelson-MIT Prize. Please contact the Lemelson-MIT Program at awards-lemelson@mit.edu for more information or visit the Lemelson-MIT Prize website.

Studies help explain link between autism, severe infection during pregnancy

Mothers who experience an infection severe enough to require hospitalization during pregnancy are at higher risk of having a child with autism. Two new studies from MIT and the University of Massachusetts Medical School shed more light on this phenomenon and identify possible approaches to preventing it.

In research on mice, the researchers found that the composition of bacterial populations in the mother’s digestive tract can influence whether maternal infection leads to autistic-like behaviors in offspring. They also discovered the specific brain changes that produce these behaviors.

“We identified a very discrete brain region that seems to be modulating all the behaviors associated with this particular model of neurodevelopmental disorder,” says Gloria Choi, the Samuel A. Goldblith Career Development Assistant Professor of Brain and Cognitive Sciences and a member of MIT’s McGovern Institute for Brain Research.

If further validated in human studies, the findings could offer a possible way to reduce the risk of autism, which would involve blocking the function of certain strains of bacteria found in the maternal gut, the researchers say.

Choi and Jun Huh, formerly an assistant professor at UMass Medical School who is now a faculty member at Harvard Medical School, are the senior authors of both papers, which appear in Nature on Sept. 13. MIT postdoc Yeong Shin Yim is the first author of one paper, and UMass Medical School visiting scholars Sangdoo Kim and Hyunju Kim are the lead authors of the other.

Reversing symptoms

A 2010 study that included all children born in Denmark between 1980 and 2005 found that severe viral infections during the first trimester of pregnancy translated to a threefold increase in autism risk, and serious bacterial infections during the second trimester were linked with a 1.42-fold increase in risk. These infections included influenza, viral gastroenteritis, and severe urinary tract infections.

Similar effects have been described in mouse models of maternal inflammation, and in a 2016 Science paper, Choi and Huh found that a type of immune cells known as Th17 cells, and their effector molecule, called IL-17, are responsible for this effect in mice. IL-17 then interacts with receptors found on brain cells in the developing fetus, leading to irregularities that the researchers call “patches” in certain parts of the cortex.

In one of the new papers, the researchers set out to learn more about these patches and to determine if they were responsible for the behavioral abnormalities seen in those mice, which include repetitive behavior and impaired sociability.

The researchers found that the patches are most common in a part of the brain known as S1DZ. Part of the somatosensory cortex, this region is believed to be responsible for proprioception, or sensing where the body is in space. In these patches, populations of cells called interneurons, which express a protein called parvalbumin, are reduced. Interneurons are responsible for controlling the balance of excitation and inhibition in the brain, and the researchers found that the changes they observed in the cortical patches were associated with overexcitement in S1DZ.

When the researchers restored normal levels of brain activity in this area, they were able to reverse the behavioral abnormalities. They were also able to induce the behaviors in otherwise normal mice by overstimulating neurons in S1DZ.

The researchers also discovered that S1DZ sends messages to two other brain regions: the temporal association area of the cortex and the striatum. When the researchers inhibited the neurons connected to the temporal association area, they were able to reverse the sociability deficits. When they inhibited the neurons connected to the striatum, they were able to halt the repetitive behaviors.

Microbial factors

In the second Nature paper, the researchers delved into some of the additional factors that influence whether or not a severe infection leads to autism. Not all mothers who experience severe infection end up having a child with autism, and similarly not all the mice in the maternal inflammation model develop behavioral abnormalities.

“This suggests that inflammation during pregnancy is just one of the factors. It needs to work with additional factors to lead all the way to that outcome,” Choi says.

A key clue was that when the immune systems of some of the pregnant mice were stimulated, the mice began producing IL-17 within a day. “Normally it takes three to five days, because IL-17 is produced by specialized immune cells and they require time to differentiate,” Huh says. “We thought that perhaps this cytokine is being produced not from differentiating immune cells, but rather from pre-existing immune cells.”

Previous studies in mice and humans have found populations of Th17 cells in the intestines of healthy individuals. These cells, which help to protect the host from harmful microbes, are thought to be produced after exposure to particular types of harmless bacteria that associate with the epithelium.

The researchers found that only the offspring of mice with one specific type of harmless bacteria, known as segmented filamentous bacteria, had behavioral abnormalities and cortical patches. When the researchers killed those bacteria with antibiotics, the mice produced normal offspring.

“This data strongly suggests that perhaps certain mothers who happen to carry these types of Th17 cell-inducing bacteria in their gut may be susceptible to this inflammation-induced condition,” Huh says.

Humans can also carry strains of gut bacteria known to drive production of Th17 cells, and the researchers plan to investigate whether the presence of these bacteria is associated with autism.

Sarah Gaffen, a professor of rheumatology and clinical immunology at the University of Pittsburgh, says the study clearly demonstrates the link between IL-17 and the neurological effects seen in the mouse offspring. “It’s rare for things to fit into such a clear model, where you can identify a single molecule that does what you predicted,” says Gaffen, who was not involved in the study.

The research was funded by the Simons Foundation Autism Research Initiative, the Simons Center for the Social Brain at MIT, the Howard Hughes Medical Institute, Robert Buxton, the National Research Foundation of Korea, the Searle Scholars Program, a Pew Scholarship for Biomedical Sciences, the Kenneth Rainin Foundation, the National Institutes of Health, and the Hock E. Tan and K. Lisa Yang Center for Autism Research.

Robotic system monitors specific neurons

Recording electrical signals from inside a neuron in the living brain can reveal a great deal of information about that neuron’s function and how it coordinates with other cells in the brain. However, performing this kind of recording is extremely difficult, so only a handful of neuroscience labs around the world do it.

To make this technique more widely available, MIT engineers have now devised a way to automate the process, using a computer algorithm that analyzes microscope images and guides a robotic arm to the target cell.

This technology could allow more scientists to study single neurons and learn how they interact with other cells to enable cognition, sensory perception, and other brain functions. Researchers could also use it to learn more about how neural circuits are affected by brain disorders.

“Knowing how neurons communicate is fundamental to basic and clinical neuroscience. Our hope is this technology will allow you to look at what’s happening inside a cell, in terms of neural computation, or in a disease state,” says Ed Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT, and a member of MIT’s Media Lab and McGovern Institute for Brain Research.

Boyden is the senior author of the paper, which appears in the Aug. 30 issue of Neuron. The paper’s lead author is MIT graduate student Ho-Jun Suk.

Precision guidance

For more than 30 years, neuroscientists have been using a technique known as patch clamping to record the electrical activity of cells. This method, which involves bringing a tiny, hollow glass pipette into contact with the cell membrane of a neuron, then opening up a small pore in the membrane, usually takes a graduate student or postdoc several months to learn. Learning to perform this on neurons in the living mammalian brain is even more difficult.

There are two types of patch clamping: a “blind” (not image-guided) method, which is limited because researchers cannot see where the cells are and can only record from whatever cell the pipette encounters first, and an image-guided version that allows a specific cell to be targeted.

Five years ago, Boyden and colleagues at MIT and Georgia Tech, including co-author Craig Forest, devised a way to automate the blind version of patch clamping. They created a computer algorithm that could guide the pipette to a cell based on measurements of a property called electrical impedance — which reflects how difficult it is for electricity to flow out of the pipette. If there are no cells around, electricity flows and impedance is low. When the tip hits a cell, electricity can’t flow as well and impedance goes up.

Once the pipette detects a cell, it can stop moving instantly, preventing it from poking through the membrane. A vacuum pump then applies suction to form a seal with the cell’s membrane. Then, the electrode can break through the membrane to record the cell’s internal electrical activity.
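
In outline, the automated sequence reads like a simple control loop. The code below is only a schematic of that logic, written against a hypothetical `rig` interface; none of these function names come from the actual autopatcher software. It advances the pipette in small steps, watches for a jump in impedance, stops, applies suction to form a seal, and then breaks in to record.

```python
# Schematic of the impedance-based autopatching sequence described above.
# The `rig` object and all of its methods are hypothetical placeholders.
def blind_autopatch(rig, step_um=2.0, max_depth_um=500.0, jump_fraction=0.25):
    baseline = rig.measure_impedance()          # impedance with no cell at the tip
    depth = 0.0
    while depth < max_depth_um:
        rig.move_pipette_down(step_um)          # advance in small steps
        depth += step_um
        impedance = rig.measure_impedance()
        if impedance > baseline * (1 + jump_fraction):
            rig.stop_pipette()                  # impedance jump: the tip has met a cell
            rig.apply_suction()                 # suction pulls the membrane into a seal
            if rig.seal_resistance() > 1e9:     # a tight "gigaseal" has formed
                rig.break_in()                  # rupture the patch to record inside the cell
                return True
            return False                        # seal failed; retract and try elsewhere
    return False                                # no cell encountered within the depth limit
```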

The researchers achieved very high accuracy using this technique, but it still could not be used to target a specific cell. For most studies, neuroscientists have a particular cell type they would like to learn about, Boyden says.

“It might be a cell that is compromised in autism, or is altered in schizophrenia, or a cell that is active when a memory is stored. That’s the cell that you want to know about,” he says. “You don’t want to patch a thousand cells until you find the one that is interesting.”

To enable this kind of precise targeting, the researchers set out to automate image-guided patch clamping. This technique is difficult to perform manually because, although the scientist can see the target neuron and the pipette through a microscope, he or she must compensate for the fact that nearby cells will move as the pipette enters the brain.

“It’s almost like trying to hit a moving target inside the brain, which is a delicate tissue,” Suk says. “For machines it’s easier because they can keep track of where the cell is, they can automatically move the focus of the microscope, and they can automatically move the pipette.”

By combining several image-processing techniques, the researchers came up with an algorithm that guides the pipette to within about 25 microns of the target cell. At that point, the system begins to rely on a combination of imagery and impedance, which is more accurate at detecting contact between the pipette and the target cell than either signal alone.

The researchers imaged the cells with two-photon microscopy, a commonly used technique that uses a pulsed laser to send infrared light into the brain, lighting up cells that have been engineered to express a fluorescent protein.

Using this automated approach, the researchers were able to successfully target and record from two types of cells — a class of interneurons, which relay messages between other neurons, and a set of excitatory neurons known as pyramidal cells. They achieved a success rate of about 20 percent, which is comparable to the performance of highly trained scientists performing the process manually.

Unraveling circuits

This technology paves the way for in-depth studies of the behavior of specific neurons, which could shed light on both their normal functions and how they go awry in diseases such as Alzheimer’s or schizophrenia. For example, the interneurons that the researchers studied in this paper have previously been linked with Alzheimer’s. A recent study in mice, led by Li-Huei Tsai, director of MIT’s Picower Institute for Learning and Memory, and conducted in collaboration with Boyden, reported that inducing a specific frequency of brain wave oscillation in interneurons in the hippocampus could help to clear amyloid plaques similar to those found in Alzheimer’s patients.

“You really would love to know what’s happening in those cells,” Boyden says. “Are they signaling to specific downstream cells, which then contribute to the therapeutic result? The brain is a circuit, and to understand how a circuit works, you have to be able to monitor the components of the circuit while they are in action.”

This technique could also enable studies of fundamental questions in neuroscience, such as how individual neurons interact with each other as the brain makes a decision or recalls a memory.

Bernardo Sabatini, a professor of neurobiology at Harvard Medical School, says he is interested in adapting this technique to use in his lab, where students spend a great deal of time recording electrical activity from neurons growing in a lab dish.

“It’s silly to have amazingly intelligent students doing tedious tasks that could be done by robots,” says Sabatini, who was not involved in this study. “I would be happy to have robots do more of the experimentation so we can focus on the design and interpretation of the experiments.”

To help other labs adopt the new technology, the researchers plan to put the details of their approach on their web site, autopatcher.org.

Other co-authors include Ingrid van Welie, Suhasa Kodandaramaiah, and Brian Allen. The research was funded by Jeremy and Joyce Wertheimer, the National Institutes of Health (including the NIH Single Cell Initiative and the NIH Director’s Pioneer Award), the HHMI-Simons Faculty Scholars Program, and the New York Stem Cell Foundation-Robertson Award.

How Biological Memory Really Works: Insights from the Man with the World’s Greatest Memory

 

Jim Karol exhibited no particular talent for memorizing anything early in his life. Far from being a savant, his grades in school were actually pretty bad and, after failing to graduate from college, he spent his 20s working in a factory. He only started playing around with mnemonic techniques at the age of 49, merely as a means to amuse himself while he worked out on the treadmill. Then, in one of the most remarkable cognitive transformations in human history, he turned himself into the man with the world’s greatest memory. Whatever vast body of information is put before him — US zip codes, the day of the week of every date in history, the first few thousand digits of pi, etc. — he voraciously commits to memory using his own inimitable mnemonic techniques. Moreover, unlike most other professional mnemonists, Jim has mastered the mental skill of permanently storing that information in long-term memory, as opposed to only short- or medium-term memory. How does he do it?

To be sure, Jim has taken standard mnemonic techniques to the next level. That said, it has been well documented for over 2,500 years that mnemonic techniques — such as the “Method of Loci” or the “Memory Palace” — dramatically enhance the memory capacity of anyone who uses them regularly. But is there any point to improving one’s memory in the age of the computer? Tony Dottino, the founder and executive director of the USA Memory Championship and a world-renowned memory coach, will describe his experiences of teaching these techniques to all age groups.

Finally, does any of this have anything to do with the neuroscience of memory? McGovern Institute neuroscientist Robert Ajemian argues that it does, and that one of the great intellectual misunderstandings in scientific history is that modern-day neuroscientists largely base their conceptualization of human memory on the computer metaphor. For this reason, neuroscientists usually talk of read/write operations, traces, engrams, storage/retrieval distinctions, etc. Ajemian argues that all of this is wrong for the brain, a highly distributed system that processes in parallel. The correct conceptualization of human memory, he contends, is content-addressable memory implemented by attractor networks, and the success of mnemonic techniques, though largely ignored in current theories of memory, constitutes the ultimate proof. Ajemian will briefly outline these arguments.

Tan-Yang Center for Autism Research: Opening Remarks

June 12, 2017
Bob Desimone, Director of the McGovern Institute for Brain Research at MIT
Bob Millard, Chair of MIT Corporation
Lore Harp McGovern, Co-founder of the McGovern Institute for Brain Research at MIT
Hock E. Tan and K. Lisa Yang, Founders of the Tan-Yang Center for Autism Research

On June 12, 2017, the McGovern Institute hosted the launch celebration for the Hock E. Tan and K. Lisa Yang Center for Autism Research. The center is made possible by a kick-off commitment of $20 million, made by Lisa Yang and MIT alumnus Hock Tan ’75.

The Tan-Yang Center for Autism Research will support research on the genetic, biological and neural bases of autism spectrum disorders, developmental disabilities estimated to affect 1 in 68 individuals in the United States. Tan and Yang hope their initial investment will stimulate additional support and help foster collaborative research efforts to erase the devastating effects of these disorders on individuals, their families and the broader autism community.

Microscopy technique could enable more informative biopsies

MIT and Harvard Medical School researchers have devised a way to image biopsy samples with much higher resolution — an advance that could help doctors develop more accurate and inexpensive diagnostic tests.

For more than 100 years, conventional light microscopes have been vital tools for pathology. However, fine-scale details of cells cannot be seen with these scopes. The new technique relies on an approach known as expansion microscopy, developed originally in Edward Boyden’s lab at MIT, in which the researchers expand a tissue sample to 100 times its original volume before imaging it.

This expansion allows researchers to see features with a conventional light microscope that ordinarily could be seen only with an expensive, high-resolution electron microscope. It also reveals additional molecular information that the electron microscope cannot provide.

“It’s a technique that could have very broad application,” says Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT. He is also a member of MIT’s Media Lab and McGovern Institute for Brain Research, and an HHMI-Simons Faculty Scholar.

In a paper appearing in the July 17 issue of Nature Biotechnology, Boyden and his colleagues used this technique to distinguish early-stage breast lesions with high or low risk of progressing to cancer — a task that is challenging for human observers. This approach can also be applied to other diseases: In an analysis of kidney tissue, the researchers found that images of expanded samples revealed signs of kidney disease that can normally only be seen with an electron microscope.

“Using expansion microscopy, we are able to diagnose diseases that were previously impossible to diagnose with a conventional light microscope,” says Octavian Bucur, an instructor at Harvard Medical School, Beth Israel Deaconess Medical Center (BIDMC), and the Ludwig Center at Harvard, and one of the paper’s lead authors.

MIT postdoc Yongxin Zhao is the paper’s co-lead author. Boyden and Andrew Beck, a former associate professor at Harvard Medical School and BIDMC, are the paper’s senior authors.


“A few chemicals and a light microscope”

Boyden’s original expansion microscopy technique is based on embedding tissue samples in a dense, evenly generated polymer that swells when water is added. Before the swelling occurs, the researchers anchor the molecules that they want to image to the polymer gel, and they digest other proteins that normally hold tissue together.

This tissue enlargement allows researchers to obtain images with a resolution of around 70 nanometers, which was previously possible only with very specialized and expensive microscopes.

In the new study, the researchers set out to adapt the expansion process for biopsy tissue samples, which are usually embedded in paraffin wax, flash frozen, or stained with a chemical that makes cellular structures more visible.

The MIT/Harvard team devised a process to convert these samples into a state suitable for expansion. For example, they remove the chemical stain or paraffin by exposing the tissues to a chemical solvent called xylene. Then, they heat up the sample in another chemical called citrate. After that, the tissues go through an expansion process similar to the original version of the technique, but with stronger digestion steps to compensate for the strong chemical fixation of the samples.

During this procedure, the researchers can also add fluorescent labels for molecules of interest, including proteins that mark particular types of cells, or DNA or RNA with a specific sequence.

“The work of Zhao et al. describes a very clever way of extending the resolution of light microscopy to resolve detail beyond that seen with conventional methods,” says David Rimm, a professor of pathology at the Yale University School of Medicine, who was not involved in the research.

The researchers tested this approach on tissue samples from patients with early-stage breast lesions. One way to predict whether these lesions will become malignant is to evaluate the appearance of the cells’ nuclei. Benign lesions with atypical nuclei have about a fivefold higher probability of progressing to cancer than those with typical nuclei.

However, studies have revealed significant discrepancies between the assessments of nuclear atypia performed by different pathologists, which can potentially lead to an inaccurate diagnosis and unnecessary surgery. An improved system for differentiating benign lesions with atypical and typical nuclei could potentially prevent 400,000 misdiagnoses and hundreds of millions of dollars every year in the United States, according to the researchers.

After expanding the tissue samples, the MIT/Harvard team analyzed them with a machine learning algorithm that can rate the nuclei based on dozens of features, including orientation, diameter, and how much they deviate from true circularity. This algorithm was able to distinguish between lesions that were likely to become invasive and those that were not, with an accuracy of 93 percent on expanded samples compared to only 71 percent on the pre-expanded tissue.
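
To make the analysis pipeline concrete, the sketch below shows the general shape of such a nuclear-feature classifier using placeholder data. The features, labels, and model here are assumptions made for illustration; the study's actual feature set, algorithm, and results are not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Placeholder nuclear-shape features and labels (synthetic values for illustration
# only; not the study's data).
rng = np.random.default_rng(0)
n_nuclei = 400
X = np.column_stack([
    rng.normal(8.0, 2.0, n_nuclei),     # nuclear diameter (microns, made up)
    rng.uniform(0.0, 180.0, n_nuclei),  # orientation (degrees)
    rng.beta(2, 8, n_nuclei),           # deviation from true circularity (0 = perfect circle)
])
y = rng.integers(0, 2, n_nuclei)        # 0 = typical nuclei, 1 = atypical (placeholder labels)

# Score a classifier on these features by cross-validation; with real measurements
# from expanded tissue, the same pipeline would yield a meaningful accuracy.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```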

“These two types of lesions look highly similar to the naked eye, but one has much less risk of cancer,” Zhao says.

The researchers also analyzed kidney tissue samples from patients with nephrotic syndrome, which impairs the kidneys’ ability to filter blood. In these patients, tiny finger-like projections that filter the blood are lost or damaged. These structures are spaced about 200 nanometers apart and therefore can usually be seen only with an electron microscope or expensive super-resolution microscopes.

When the researchers showed the images of the expanded tissue samples to a group of scientists that included pathologists and nonpathologists, the group was able to identify the diseased tissue with 90 percent accuracy overall, compared to only 65 percent accuracy with unexpanded tissue samples.

“Now you can diagnose nephrotic kidney disease without needing an electron microscope, a very expensive machine,” Boyden says. “You can do it with a few chemicals and a light microscope.”

Uncovering patterns

The researchers anticipate that scientists could use this approach to develop more precise diagnostics for many other diseases. To do that, scientists and doctors will need to analyze many more patient samples, allowing them to discover patterns that would be impossible to see otherwise.

“If you can expand a tissue by one-hundredfold in volume, all other things being equal, you’re getting 100 times the information,” Boyden says.

For example, researchers could distinguish cancer cells based on how many copies of a particular gene they have. Extra copies of genes such as HER2, which the researchers imaged in one part of this study, indicate a subtype of breast cancer that is eligible for specific treatments.

Scientists could also look at the architecture of the genome, or at how cell shapes change as they become cancerous and interact with other cells of the body. Another possible application is identifying proteins that are expressed specifically on the surface of cancer cells, allowing researchers to design immunotherapies that mark those cells for destruction by the patient’s immune system.

Boyden and his colleagues run training courses several times a month at MIT, where visitors can come and watch expansion microscopy techniques, and they have made their protocols available on their website. They hope that many more people will begin using this approach to study a variety of diseases.

“Cancer biopsies are just the beginning,” Boyden says. “We have a new pipeline for taking clinical samples and expanding them, and we are finding that we can apply expansion to many different diseases. Expansion will enable computational pathology to take advantage of more information in a specimen than previously possible.”

Humayun Irshad, a research fellow at Harvard/BIDMC and an author of the study, agrees: “Expanded images result in more informative features, which in turn result in higher-performing classification models.”

Other authors include Harvard pathologist Astrid Weins, who helped oversee the kidney study. Other authors from MIT (Fei Chen) and BIDMC/Harvard (Andreea Stancu, Eun-Young Oh, Marcello DiStasio, Vanda Torous, Benjamin Glass, Isaac E. Stillman, and Stuart J. Schnitt) also contributed to this study.

The research was funded, in part, by the New York Stem Cell Foundation Robertson Investigator Award, the National Institutes of Health Director’s Pioneer Award, the Department of Defense Multidisciplinary University Research Initiative, the Open Philanthropy Project, the Ludwig Center at Harvard, and Harvard Catalyst.

Feng Zhang Wins the 2017 Blavatnik National Award for Young Scientists

The Blavatnik Family Foundation and the New York Academy of Sciences today announced the 2017 Laureates of the Blavatnik National Awards for Young Scientists. Starting with a pool of 308 nominees – the most promising scientific researchers aged 42 years and younger nominated by America’s top academic and research institutions – a distinguished jury first narrowed their selections to 30 Finalists, and then to three outstanding Laureates, one each from the disciplines of Life Sciences, Chemistry and Physical Sciences & Engineering. Each Laureate will receive $250,000 – the largest unrestricted award of its kind for early career scientists and engineers. This year’s Blavatnik National Laureates are:

Feng Zhang, PhD, Core Member, Broad Institute of MIT and Harvard; Associate Professor of Brain and Cognitive Sciences and Biological Engineering, MIT; Robertson Investigator, New York Stem Cell Foundation; James and Patricia Poitras ’63 Professor in Neuroscience, McGovern Institute for Brain Research at MIT. Dr. Zhang is being recognized for his role in developing the CRISPR-Cas9 gene-editing system and demonstrating pioneering uses in mammalian cells, and for his development of revolutionary technologies in neuroscience.

Melanie S. Sanford, PhD, Moses Gomberg Distinguished University Professor and Arthur F. Thurnau Professor of Chemistry, University of Michigan. Dr. Sanford is being celebrated for developing simpler chemical approaches – with less environmental impact – to the synthesis of molecules that have applications ranging from carbon dioxide recycling to drug discovery.

Yi Cui, PhD, Professor of Materials Science and Engineering, Photon Science and Chemistry, Stanford University and SLAC National Accelerator Laboratory. Dr. Cui is being honored for his technological innovations in the use of nanomaterials for environmental protection and the development of sustainable energy sources.

“The work of these three brilliant Laureates demonstrates the exceptional science being performed at America’s premier research institutions and the discoveries that will make the lives of future generations immeasurably better,” said Len Blavatnik, Founder and Chairman of Access Industries, head of the Blavatnik Family Foundation, and an Academy Board Governor.

“Each of our 2017 National Laureates is shifting paradigms in areas that profoundly affect the way we tackle the health of our population and our planet — improved ways to store energy, ‘greener’ drug and fuel production, and novel tools to correct disease-causing genetic mutations,” said Ellis Rubinstein, President and CEO of the Academy and Chair of the Awards’ Scientific Advisory Council. “Recognition programs like the Blavatnik Awards provide incentives and resources for rising stars, and help them to continue their important work. We look forward to learning where their innovations and future discoveries will take us in the years ahead.”

The annual Blavatnik Awards, established in 2007 by the Blavatnik Family Foundation and administered by the New York Academy of Sciences, recognize exceptional young researchers who will drive the next generation of innovation by answering today’s most complex and intriguing scientific questions.