CRISPR: From toolkit to therapy

Think of the human body as a community of cells with specialized roles. Each cell carries the same blueprint, an array of genes comprising the genome, but different cell types have unique functions — immune cells fight invading bacteria, while neurons transmit information.

But when something goes awry, the specialization of these cells becomes a challenge for treatment. For example, neurons lack active cell repair systems required for promising gene editing techniques like CRISPR.

Can current gene editing tools be modified to work in neurons? Can we reach neurons without impacting healthy cells nearby? McGovern Institute researchers are trying to answer these questions by developing gene editing tools and delivery systems that can target — and repair — faulty brain cells.

Expanding the toolkit

Feng Zhang with folded arms in lab
McGovern Investigator Feng Zhang in his lab.

Natural CRISPR systems help bacteria fend off would-be attackers. Our first glimpse of the impact such systems could have came with the use of CRISPR-Cas9 to edit human cells.

“Harnessing Cas9 was a major game-changer in the life sciences,” explains Feng Zhang, an investigator at the McGovern Institute and the James and Patricia Poitras Professor of Neuroscience at MIT. “But Cas9 is just one flavor of one kind of bacterial defense system — there is a treasure trove of natural systems that may have enormous potential, just waiting to be unlocked.”

By finding and optimizing new molecular tools, the Zhang lab and others have developed CRISPR tools that can now potentially target neurons and fix diverse mutation types, bringing gene therapy within reach.

Precise in space and time

A change to a single letter of a gene can be devastating. Some such genes function only briefly during development, so a temporary “fix” during this window could be beneficial. For such cases, the Zhang lab and others have engineered tools that target short-lived RNAs. These molecules act as messengers, carrying information from DNA to be converted into functional factors in the cell.

“RNA editing is powerful from an ethical and safety standpoint,” explains Soumya Kannan, a graduate student in the Zhang lab working on these tools. “By targeting RNA molecules, which are only present for a short time, we can avoid permanent changes to the genetic material, and we can make these changes in any type of cell.”

Soumya Kannan in the lab
Graduate student Soumya Kannan is developing smaller CRISPR tools that can be more easily packaged into viral vectors for delivery. Photo: Caitlin Cunningham

Zhang’s team has developed twin RNA-editing tools, REPAIR and RESCUE, which can fix single RNA bases by bringing together a base editor with the CRISPR protein Cas13. These RNA-editing tools can be used in neurons because they do not rely on cellular machinery to make the targeted changes. They also have the potential to tackle a wide array of diseases in other tissue types.

CAST addition

If a gene is severely disrupted, more radical help may be needed: insertion of a normal gene. For this situation, Zhang’s lab recently identified CRISPR-associated transposases (CASTs) from cyanobacteria. CASTs combine Cas12k, which is targeted by a guide RNA to a precise genome location, with an enzyme that can insert gene-sized pieces of DNA.

“With traditional CRISPR you can make simple changes, similar to changing a few letters or words in a Word document. The new system can ‘copy and paste’ entire genes.” – Alim Ladha

Transposases were originally identified as enzymes that help rogue genes “jump” from one place to another in the genome. CAST uses a similar activity to insert entire genes self-sufficiently, without help from the target cell, so, like REPAIR and RESCUE, it can potentially be used in neurons.

“Our initial work was to fully characterize how this new system works, and test whether it can actually insert genes,” explains Alim Ladha, a graduate fellow in the Tan-Yang Center for Autism Research, who worked on CAST with Jonathan Strecker, a postdoctoral fellow in the Zhang lab.

The goal is now to use CAST to precisely target neurons and other specific cell types affected by disease.

Toward delivery

As the gene-editing toolbox expands, McGovern labs are working on precise delivery systems. Adeno-associated virus (AAV) is an FDA-approved virus for delivering genes, but it has limited room to carry the necessary cargo — CRISPR machinery plus templates — to fix genes.

To tackle this problem, McGovern Investigators Guoping Feng and Feng Zhang are working on reducing the cargo needed for therapy. In addition, the Zhang, Gootenberg, and Abudayyeh labs are working on methods to precisely deliver the therapeutic packages to neurons, such as new tissue-specific viruses that can carry bigger payloads. Finally, entirely new delivery modalities are being explored in the effort to develop gene therapy to a point where it can be safely administered to patients.

“Cas9 has been a very useful tool for the life sciences,” says Zhang. “And it’ll be exciting to see continued progress with the broadening toolkit and delivery systems, as we make further progress toward safe gene therapies.”

Controlling attention with brain waves

Having trouble paying attention? MIT neuroscientists may have a solution for you: Turn down your alpha brain waves. In a new study, the researchers found that people can enhance their attention by controlling their own alpha brain waves based on neurofeedback they receive as they perform a particular task.

The study found that when subjects learned to suppress alpha waves in one hemisphere of their parietal cortex, they were able to pay better attention to objects that appeared on the opposite side of their visual field. This is the first time that this cause-and-effect relationship has been seen, and it suggests that it may be possible for people to learn to improve their attention through neurofeedback.

Desimone lab study shows that people can boost attention by manipulating their own alpha brain waves with neurofeedback training.

“There’s a lot of interest in using neurofeedback to try to help people with various brain disorders and behavioral problems,” says Robert Desimone, director of MIT’s McGovern Institute for Brain Research. “It’s a completely noninvasive way of controlling and testing the role of different types of brain activity.”

It’s unknown how long these effects might last and whether this kind of control could be achieved with other types of brain waves, such as beta waves, which are linked to Parkinson’s disease. The researchers are now planning additional studies of whether this type of neurofeedback training might help people suffering from attentional or other neurological disorders.

Desimone is the senior author of the paper, which appears in Neuron on Dec. 4. McGovern Institute postdoc Yasaman Bagherzadeh is the lead author of the study. Daniel Baldauf, a former McGovern Institute research scientist, and Dimitrios Pantazis, a McGovern Institute principal research scientist, are also authors of the paper.

Alpha and attention

There are billions of neurons in the brain, and their combined electrical signals generate oscillations known as brain waves. Alpha waves, which oscillate at frequencies of 8 to 12 hertz, are believed to play a role in filtering out distracting sensory information.
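For readers curious what "alpha power" means computationally: it is typically estimated by band-pass filtering a recorded signal around the 8–12 Hz band and measuring the remaining signal's energy. The sketch below is a generic illustration using SciPy, not the analysis code from the study described here; the function name and toy signals are assumptions for demonstration.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def alpha_power(signal, fs):
    """Estimate alpha-band (8-12 Hz) power of a 1-D neural signal.

    Minimal sketch: band-pass filter around the alpha band, then
    take the mean squared amplitude as a power estimate.
    """
    nyq = fs / 2.0
    # 4th-order Butterworth band-pass for 8-12 Hz
    b, a = butter(4, [8 / nyq, 12 / nyq], btype="band")
    filtered = filtfilt(b, a, signal)
    return np.mean(filtered ** 2)

# Toy check: a 10 Hz oscillation carries far more alpha power than
# a 40 Hz (gamma-band) oscillation of the same amplitude.
fs = 250  # sampling rate in Hz
t = np.arange(0, 2, 1 / fs)
alpha_sig = np.sin(2 * np.pi * 10 * t)
gamma_sig = np.sin(2 * np.pi * 40 * t)
print(alpha_power(alpha_sig, fs) > alpha_power(gamma_sig, fs))
```

In practice, MEG analyses use more sophisticated spectral estimators, but the band-pass-then-measure idea is the same.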

Previous studies have shown a strong correlation between attention and alpha brain waves, particularly in the parietal cortex. In humans and in animal studies, a decrease in alpha waves has been linked to enhanced attention. However, it was unclear if alpha waves control attention or are just a byproduct of some other process that governs attention, Desimone says.

To test whether alpha waves actually regulate attention, the researchers designed an experiment in which people were given real-time feedback on their alpha waves as they performed a task. Subjects were asked to look at a grating pattern in the center of a screen, and told to use mental effort to increase the contrast of the pattern as they looked at it, making it more visible.

During the task, subjects were scanned using magnetoencephalography (MEG), which reveals brain activity with millisecond precision. The researchers measured alpha levels in both the left and right hemispheres of the parietal cortex and calculated the degree of asymmetry between the two levels. As the asymmetry between the two hemispheres grew, the grating pattern became more visible, offering the participants real-time feedback.
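In essence, the closed loop described above computes a normalized left-right asymmetry and maps it onto the contrast of the on-screen grating. Everything in the sketch below (function names, gain, baseline) is a hypothetical simplification for illustration, not the study's actual pipeline.

```python
def alpha_asymmetry(left_alpha, right_alpha):
    """Normalized asymmetry between left- and right-hemisphere alpha
    power; ranges from -1 to 1 and is 0 when the hemispheres match."""
    return (right_alpha - left_alpha) / (right_alpha + left_alpha)

def feedback_contrast(asymmetry, gain=0.5, baseline=0.1):
    """Map asymmetry to grating contrast: the larger the asymmetry,
    the more visible the pattern. Clamped to the valid range [0, 1]."""
    contrast = baseline + gain * abs(asymmetry)
    return max(0.0, min(1.0, contrast))

# A subject suppressing left-hemisphere alpha (power 2.0 vs. 6.0)
# drives the asymmetry up and sees a sharper grating.
print(feedback_contrast(alpha_asymmetry(2.0, 6.0)))
```

The key design property is that the feedback is immediate and graded, which is what lets subjects learn the association without conscious insight into what they are doing.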

McGovern postdoc Yasaman Bagherzadeh sits in a magnetoencephalography (MEG) scanner. Photo: Justin Knight

Although subjects were not told anything about what was happening, after about 20 trials (which took about 10 minutes), they were able to increase the contrast of the pattern. The MEG results indicated they had done so by controlling the asymmetry of their alpha waves.

“After the experiment, the subjects said they knew that they were controlling the contrast, but they didn’t know how they did it,” Bagherzadeh says. “We think the basis is conditional learning — whenever you do a behavior and you receive a reward, you’re reinforcing that behavior. People usually don’t have any feedback on their brain activity, but when we provide it to them and reward them, they learn by practicing.”

Although the subjects were not consciously aware of how they were manipulating their brain waves, they were able to do it, and this success translated into enhanced attention on the opposite side of the visual field. As the subjects looked at the pattern in the center of the screen, the researchers flashed dots of light on either side of the screen. The participants had been told to ignore these flashes, but the researchers measured how their visual cortex responded to them.

One group of participants was trained to suppress alpha waves in the left side of the brain, while the other was trained to suppress the right side. In those who had reduced alpha on the left side, their visual cortex showed a larger response to flashes of light on the right side of the screen, while those with reduced alpha on the right side responded more to flashes seen on the left side.

“Alpha manipulation really was controlling people’s attention, even though they didn’t have any clear understanding of how they were doing it,” Desimone says.

Persistent effect

After the neurofeedback training session ended, the researchers asked subjects to perform two additional tasks that involve attention, and found that the enhanced attention persisted. In one experiment, subjects were asked to watch for a grating pattern, similar to what they had seen during the neurofeedback task, to appear. In some of the trials, they were told in advance to pay attention to one side of the visual field, but in others, they were not given any direction.

When the subjects were told to pay attention to one side, that instruction was the dominant factor in where they looked. But if they were not given any cue in advance, they tended to pay more attention to the side that had been favored during their neurofeedback training.

In another task, participants were asked to look at an image such as a natural outdoor scene, urban scene, or computer-generated fractal shape. By tracking subjects’ eye movements, the researchers found that people spent more time looking at the side their neurofeedback training had conditioned them to attend to.

“It is promising that the effects did seem to persist afterwards,” says Desimone, though more study is needed to determine how long these effects might last.

The research was funded by the McGovern Institute.

McGovern scientists named STAT Wunderkinds

McGovern researchers Sam Rodriques and Jonathan Strecker have been named to the class of 2019 STAT Wunderkinds. This group of 22 researchers was selected from a national pool of hundreds of nominees; the award aims to recognize trail-blazing scientists who are on the cusp of launching their careers but not yet fully independent.

“We were thrilled to receive this news,” said Robert Desimone, director of the McGovern Institute. “It’s great to see the remarkable progress being made by young scientists in McGovern labs be recognized in this way.”

Finding context

Sam Rodriques works in Ed Boyden’s lab at the McGovern Institute, where he develops new technologies that enable researchers to understand the behaviors of cells within their native spatial and temporal context.

“Psychiatric disease is a huge problem, but only a handful of first-in-class drugs for psychiatric diseases have been approved since the 1960s,” explains Rodriques, who is also affiliated with the MIT Media Lab and the Broad Institute. “Coming up with novel cures is going to require new ways to generate hypotheses about the biological processes that underpin disease.”

Rodriques also works on several technologies within the Boyden lab, including preserving spatial information in molecular mapping technologies, finding ways of following neural connectivity in the brain, and Implosion Fabrication, or “Imp Fab.” This nanofabrication technology allows objects to be evenly shrunk to the nanoscale and has a wide range of potential applications, including building new miniature devices for examining neural function.

“I was very surprised, not expecting it at all!” says Rodriques of being named a STAT Wunderkind. “I’m sure that all of the hundreds of applicants are very accomplished scientists, and so to be chosen like this is really an honor.”

New tools for gene editing

Jonathan Strecker is currently a postdoc in Feng Zhang’s lab, associated with both the McGovern Institute and the Broad Institute. While CRISPR-Cas9 continues to have a profound effect, and huge potential, for research, biomedical, and agricultural applications, the ability to move entire genes into specific target locations had remained out of reach.

“Genome editing with CRISPR-Cas enzymes typically involves cutting and disrupting genes, or making certain base edits,” explains Strecker, “however, inserting large pieces of DNA is still hard to accomplish.”

As a postdoctoral researcher in the lab of CRISPR pioneer Feng Zhang, Strecker led research that showed how large sequences could be inserted into a genome at a given location.

“Nature often has interesting solutions to these problems and we were fortunate to identify and characterize a remarkable CRISPR system from cyanobacteria that functions as a programmable transposase.”

Importantly, the system he discovered, called CAST, doesn’t require cellular machinery to insert DNA. This means that CAST could work in many cell types, including those that have stopped dividing, such as neurons, a possibility the lab is now pursuing.

By finding new sources of inspiration, whether in nature or art, both Rodriques and Strecker join a stellar lineup of young investigators being recognized for creativity and innovation.


MIT appoints 14 faculty members to named professorships

The School of Science has announced that 14 of its faculty members have been appointed to named professorships. The faculty members selected for these positions receive additional support to pursue their research and develop their careers.

Riccardo Comin is an assistant professor in the Department of Physics. He has been named a Class of 1947 Career Development Professor. This three-year professorship is granted in recognition of the recipient’s outstanding work in both research and teaching. Comin is interested in condensed matter physics. He uses experimental methods to synthesize new materials, as well as analysis through spectroscopy and scattering to investigate solid state physics. Specifically, the Comin lab attempts to discover and characterize electronic phases of quantum materials. Recently, his lab, in collaboration with colleagues, discovered that weaving a conductive material into a particular pattern known as the “kagome” pattern can result in quantum behavior when electricity is passed through it.

Joseph Davis, assistant professor in the Department of Biology, has been named a Whitehead Career Development Professor. He looks at how cells build and deconstruct complex molecular machinery. The work of his lab group relies on biochemistry, biophysics, and structural approaches that include spectrometry and microscopy. A current project investigates the formation of the ribosome, an essential component in all cells. His work has implications for metabolic engineering, drug delivery, and materials science.

Lawrence Guth is now the Claude E. Shannon (1940) Professor of Mathematics. Guth explores harmonic analysis and combinatorics, and he is also interested in metric geometry and identifying connections between geometric inequalities and topology. The subject of metric geometry revolves around being able to estimate measurements, including length, area, volume, and distance, and combinatorial geometry is essentially the estimation of the intersection of patterns in simple shapes, including lines and circles.

Michael Halassa, an assistant professor in the Department of Brain and Cognitive Sciences, will hold the three-year Class of 1958 Career Development Professorship. His area of interest is brain circuitry. By investigating the networks and connections in the brain, he hopes to understand how they operate — and identify any ways in which they might deviate from normal operations, causing neurological and psychiatric disorders. Several publications from his lab discuss improvements in the treatment of the deleterious symptoms of autism spectrum disorder and schizophrenia, and his latest news provides insights on how the brain filters out distractions, particularly noise. Halassa is an associate investigator at the McGovern Institute for Brain Research and an affiliate member of the Picower Institute for Learning and Memory.

Sebastian Lourido, an assistant professor and the new Latham Family Career Development Professor in the Department of Biology for the next three years, works on treatments for infectious disease by learning about parasitic vulnerabilities. Focusing on human pathogens, Lourido and his lab study, at the molecular level, what allows parasites to be so widespread and deadly. This includes exploring how calcium regulates eukaryotic cells, which, in turn, affects processes such as muscle contraction and membrane repair, in addition to kinase responses.

Brent Minchew is named a Cecil and Ida Green Career Development Professor for a three-year term. Minchew, a faculty member in the Department of Earth, Atmospheric and Planetary Sciences, studies glaciers using remote sensing methods, such as interferometric synthetic aperture radar. His research into glaciers, including their mechanics, rheology, and interactions with their surrounding environment, extends as far as observing their responses to climate change. His group recently determined that Antarctica, in a worst-case scenario climate projection, would not contribute as much as predicted to rising sea level.

Elly Nedivi, a professor in the departments of Brain and Cognitive Sciences and Biology, has been named the inaugural William R. (1964) And Linda R. Young Professor. She works on brain plasticity, defined as the brain’s ability to adapt with experience, by identifying genes that play a role in plasticity and their neuronal and synaptic functions. In one of her lab’s recent publications, they suggest that variants of a particular gene may undermine expression or production of a protein, increasing the risk of bipolar disorder. In addition, she collaborates with others at MIT to develop new microscopy tools that allow better analysis of brain connectivity. Nedivi is also a member of the Picower Institute for Learning and Memory.

Andrei Negut has been named a Class of 1947 Career Development Professor for a three-year term. Negut, a member of the Department of Mathematics, focuses on problems in geometric representation theory. This topic requires simultaneous investigation within algebraic geometry and representation theory, with implications for mathematical physics, symplectic geometry, combinatorics, and probability theory.

Matěj Peč, the Victor P. Starr Career Development Professor in the Department of Earth, Atmospheric and Planetary Sciences until 2021, studies how the movement of the Earth’s tectonic plates affects rocks, mechanically and microstructurally. To investigate such a large-scale topic, he utilizes high-pressure, high-temperature experiments in a lab to simulate the driving forces associated with plate motion, and compares results with natural observations and theoretical modeling. His lab has identified a particular boundary beneath the Earth’s crust where rock properties shift from brittle, like peanut brittle, to viscous, like honey, and determined how that layer accommodates building strain between the two. In his investigations, he also considers the effect on melt generation miles underground.

Kerstin Perez has been named the three-year Class of 1948 Career Development Professor in the Department of Physics. Her research interest is dark matter. She uses novel analytical tools, such as those affixed to a balloon-borne instrument that can carry out processes similar to those of a particle collider (like the Large Hadron Collider), to detect new particle interactions in space with the help of cosmic rays. In another research project, Perez uses a satellite telescope array on Earth to search for X-ray signatures of mysterious particles. Her work requires heavy involvement with collaborative observatories, instruments, and telescopes. Perez is affiliated with the Kavli Institute for Astrophysics and Space Research.

Bjorn Poonen, named a Distinguished Professor of Science in the Department of Mathematics, studies number theory and algebraic geometry. He, his colleagues, and his lab members generate algorithms that can solve polynomial equations with the particular requirement that the solutions be rational numbers. These types of problems can be useful in encoding data. He also works to determine what is undecidable, that is, to explore the limits of computing.

Daniel Suess, named a Class of 1948 Career Development Professor in the Department of Chemistry, uses molecular chemistry to explain global biogeochemical cycles. In the fields of inorganic and biological chemistry, Suess and his lab look into understanding complex and challenging reactions and clustering of particular chemical elements and their catalysts. Most notably, these reactions include those that are essential to solar fuels. Suess’s efforts to investigate both biological and synthetic systems have broad aims of both improving human health and decreasing environmental impacts.

Alison Wendlandt is the new holder of the five-year Cecil and Ida Green Career Development Professorship. In the Department of Chemistry, the Wendlandt research group focuses on physical organic chemistry and organic and organometallic synthesis to develop reaction catalysts. Her team concentrates on designing new catalysts, identifying processes to which these catalysts can be applied, and determining principles that can expand preexisting reactions. Her team’s efforts delve into the fields of synthetic organic chemistry, reaction kinetics, and mechanics.

Julien de Wit, a Department of Earth, Atmospheric and Planetary Sciences assistant professor, has been named a Class of 1954 Career Development Professor. He combines math and science to answer big-picture planetary questions. Using data science, de Wit develops new analytical techniques for mapping exoplanetary atmospheres, studies planet-star interactions of planetary systems, and determines atmospheric and planetary properties of exoplanets from spectroscopic information. He is a member of the scientific teams of the Search for habitable Planets EClipsing ULtra-cOOl Stars (SPECULOOS) project and the TRAnsiting Planets and PlanetesImals Small Telescope (TRAPPIST), made up of an international collection of observatories. He is affiliated with the Kavli Institute.

Drug combination reverses hypersensitivity to noise

People with autism often experience hypersensitivity to noise and other sensory input. MIT neuroscientists have now identified two brain circuits that help tune out distracting sensory information, and they have found a way to reverse noise hypersensitivity in mice by boosting the activity of those circuits.

One of the circuits the researchers identified is involved in filtering noise, while the other exerts top-down control by allowing the brain to switch its attention between different sensory inputs.

The researchers showed that restoring the function of both circuits worked much better than treating either circuit alone. This demonstrates the benefits of mapping and targeting multiple circuits involved in neurological disorders, says Michael Halassa, an assistant professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research.

“We think this work has the potential to transform how we think about neurological and psychiatric disorders, [so that we see them] as a combination of circuit deficits,” says Halassa, the senior author of the study. “The way we should approach these brain disorders is to map, to the best of our ability, what combination of deficits are there, and then go after that combination.”

MIT postdoc Miho Nakajima and research scientist L. Ian Schmitt are the lead authors of the paper, which appears in Neuron on Oct. 21. Guoping Feng, the James W. and Patricia Poitras Professor of Neuroscience and a member of the McGovern Institute, is also an author of the paper.

Hypersensitivity

Many gene variants have been linked with autism, but most patients have very few, if any, of those variants. One of those genes is ptchd1, which is mutated in about 1 percent of people with autism. In a 2016 study, Halassa and Feng found that during development this gene is primarily expressed in a part of the thalamus called the thalamic reticular nucleus (TRN).

That study revealed that neurons of the TRN help the brain to adjust to changes in sensory input, such as noise level or brightness. In mice with ptchd1 missing, TRN neurons fire too fast, and they can’t adjust when noise levels change. This prevents the TRN from performing its usual sensory filtering function, Halassa says.

“Neurons that are there to filter out noise, or adjust the overall level of activity, are not adapting. Without the ability to fine-tune the overall level of activity, you can get overwhelmed very easily,” he says.

In the 2016 study, the researchers also found that they could restore some of the mice’s noise filtering ability by treating them with a drug called EBIO that activates neurons’ potassium channels. EBIO has harmful cardiac side effects so likely could not be used in human patients, but other drugs that boost TRN activity may have a similar beneficial effect on hypersensitivity, Halassa says.

In the new Neuron paper, the researchers delved more deeply into the effects of ptchd1, which is also expressed in the prefrontal cortex. To explore whether the prefrontal cortex might play a role in the animals’ hypersensitivity, the researchers used a task in which mice have to distinguish between three different tones, presented with varying amounts of background noise.

Normal mice can learn to use a cue that alerts them whenever the noise level is going to be higher, improving their overall performance on the task. A similar phenomenon is seen in humans, who can adjust better to noisier environments when they have some advance warning, Halassa says. However, mice with the ptchd1 mutation were unable to use these cues to improve their performance, even when their TRN deficit was treated with EBIO.

This suggested that another brain circuit must be playing a role in the animals’ ability to filter out distracting noise. To test the possibility that this circuit is located in the prefrontal cortex, the researchers recorded from neurons in that region while mice lacking ptchd1 performed the task. They found that neuronal activity died out much faster in these mice than in the prefrontal cortex of normal mice. That led the researchers to test another drug, known as modafinil, which is FDA-approved to treat narcolepsy and is sometimes prescribed to improve memory and attention.

The researchers found that when they treated mice missing ptchd1 with both modafinil and EBIO, their hypersensitivity disappeared, and their performance on the task was the same as that of normal mice.

Targeting circuits

This successful reversal of symptoms suggests that the mice missing ptchd1 experience a combination of circuit deficits that each contribute differently to noise hypersensitivity. One circuit filters noise, while the other helps to control noise filtering based on external cues. Ptchd1 mutations affect both circuits, in different ways that can be treated with different drugs.

Both of those circuits could also be affected by other genetic mutations that have been linked to autism and other neurological disorders, Halassa says. Targeting those circuits, rather than specific genetic mutations, may offer a more effective way to treat such disorders, he says.

“These circuits are important for moving things around the brain — sensory information, cognitive information, working memory,” he says. “We’re trying to reverse-engineer circuit operations in the service of figuring out what to do about a real human disease.”

He now plans to study circuit-level disturbances that arise in schizophrenia. That disorder affects circuits involving cognitive processes such as inference — the ability to draw conclusions from available information.

The research was funded by the Simons Center for the Social Brain at MIT, the Stanley Center for Psychiatric Research at the Broad Institute, the McGovern Institute for Brain Research at MIT, the Pew Foundation, the Human Frontiers Science Program, the National Institutes of Health, the James and Patricia Poitras Center for Psychiatric Disorders Research at MIT, a Japan Society for the Promotion of Science Fellowship, and a National Alliance for the Research of Schizophrenia and Depression Young Investigator Award.

Word Play

Ev Fedorenko uses the widely translated book “Alice in Wonderland” to test brain responses to different languages.

Language is a uniquely human ability that allows us to build vibrant pictures of non-existent places (think Wonderland or Westeros). How does the brain build mental worlds from words? Can machines do the same? Can we recover this ability after brain injury? These questions require an understanding of how the brain processes language, a fascination for Ev Fedorenko.

“I’ve always been interested in language. Early on, I wanted to found a company that teaches kids languages that share structure — Spanish, French, Italian — in one go,” says Fedorenko, an associate investigator at the McGovern Institute and an assistant professor in brain and cognitive sciences at MIT.

Her road to understanding how thoughts, ideas, emotions, and meaning can be delivered through sound and words became clear when she realized that language was accessible through cognitive neuroscience.

Early on, Fedorenko made a seminal finding that undermined dominant theories of the time. Scientists believed a single network was extracting meaning from all we experience: language, music, math, etc. Evolving separate networks for these functions seemed unlikely, as these capabilities arose recently in human evolution.

Language Regions
Ev Fedorenko has found that language regions of the brain (shown in teal) are sensitive to both word meaning and sentence structure. Image: Ev Fedorenko

But when Fedorenko examined brain activity in subjects while they read or heard sentences in an MRI scanner, she found a network of brain regions that is indeed specialized for language.

“A lot of brain areas, like motor and social systems, were already in place when language emerged during human evolution,” explains Fedorenko. “In some sense, the brain seemed fully occupied. But rather than co-opt these existing systems, the evolution of language in humans involved language carving out specific brain regions.”

Different aspects of language recruit brain regions across the left hemisphere, including Broca’s area and portions of the temporal lobe. Many believe that certain regions process word meaning while others unpack the rules of language. Fedorenko and colleagues have shown, however, that the entire language network is selectively engaged in linguistic tasks, processing both the rules (syntax) and the meaning (semantics) of language in the same brain areas.

Semantic Argument

Fedorenko’s lab even challenges the prevailing view that syntax is core to language processing. By gradually degrading sentence structure through local word swaps (see figure), they found that language regions still respond strongly to these degraded sentences, deciphering meaning from them, even as syntax, or combinatorial rules, disappear.

The Fedorenko lab has shown that the brain finds meaning in a sentence even when “local” words are swapped (2, 3). But when clusters of neighboring words are scrambled (4), the brain struggles to extract meaning.

“A lot of focus in language research has been on structure-building, or building a type of hierarchical graph of the words in a sentence. But actually the language system seems optimized and driven to find rich, representational meaning in a string of words processed together,” explains Fedorenko.
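The two kinds of degradation can be caricatured in a few lines of code. This is a toy sketch, not the lab’s actual stimulus-generation code — the function names and the cluster size are invented for illustration. Swapping adjacent words keeps every word near its original position and leaves meaning largely recoverable, while shuffling whole clusters of neighbors destroys it:

```python
import random

def local_swap(sentence: str) -> str:
    """Mild degradation: swap each adjacent pair of words, so every
    word stays within one position of where it started."""
    words = sentence.split()
    for i in range(0, len(words) - 1, 2):
        words[i], words[i + 1] = words[i + 1], words[i]
    return " ".join(words)

def cluster_scramble(sentence: str, cluster_size: int = 3, seed: int = 0) -> str:
    """Severe degradation: shuffle whole clusters of neighboring words,
    scattering local context across the sentence."""
    words = sentence.split()
    clusters = [words[i:i + cluster_size]
                for i in range(0, len(words), cluster_size)]
    random.Random(seed).shuffle(clusters)
    return " ".join(w for c in clusters for w in c)

s = "the cat sat on the warm mat near the door"
print(local_swap(s))        # still readable despite the swaps
print(cluster_scramble(s))  # much harder to recover the meaning
```

Both manipulations preserve the exact multiset of words — only word order changes — which is what lets the experiment isolate the contribution of syntax.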

Computing Language

When asked about emerging areas of research, Fedorenko points to the data structures and algorithms underlying linguistic processing. Modern computational models can perform sophisticated tasks, including translation, ever more effectively. Consider Google Translate. A decade ago, the system translated one word at a time, with laughable results. Now, by treating words as context for one another, the latest artificial translation systems perform far more accurately. Understanding how they resolve meaning could be very revealing.

“Maybe we can link these models to human neural data to both get insights about linguistic computations in the human brain, and maybe help improve artificial systems by making them more human-like,” says Fedorenko.

She is also trying to understand how the system breaks down, how it can over-perform, and even more philosophical questions. Can a person who loses language abilities (through aphasia, for example) recover them? This question is especially pressing given that the language-processing network occupies such specific brain regions. How are some unique people able to understand 10, 15, or even more languages? Do we need words to have thoughts?

Using a battery of approaches, Fedorenko seems poised to answer some of these questions.

New method visualizes groups of neurons as they compute

Using a fluorescent probe that lights up when brain cells are electrically active, MIT and Boston University researchers have shown that they can image the activity of many neurons at once, in the brains of mice.

McGovern Investigator Ed Boyden has developed a technology that allows neuroscientists to visualize the activity of circuits within the brain and link them to specific behaviors.

This technique, which can be performed using a simple light microscope, could allow neuroscientists to visualize the activity of circuits within the brain and link them to specific behaviors, says Edward Boyden, the Y. Eva Tan Professor in Neurotechnology and a professor of biological engineering and of brain and cognitive sciences at MIT.

“If you want to study a behavior, or a disease, you need to image the activity of populations of neurons because they work together in a network,” says Boyden, who is also a member of MIT’s McGovern Institute for Brain Research, Media Lab, and Koch Institute for Integrative Cancer Research.

Using this voltage-sensing molecule, the researchers showed that they could record electrical activity from many more neurons than has been possible with any existing, fully genetically encoded, fluorescent voltage probe.

Boyden and Xue Han, an associate professor of biomedical engineering at Boston University, are the senior authors of the study, which appears in the Oct. 9 online edition of Nature. The lead authors of the paper are MIT postdoc Kiryl Piatkevich, BU graduate student Seth Bensussen, and BU research scientist Hua-an Tseng.

Seeing connections

Neurons compute using rapid electrical impulses, which underlie our thoughts, behavior, and perception of the world. Traditional methods for measuring this electrical activity require inserting an electrode into the brain, a process that is labor-intensive and usually allows researchers to record from only one neuron at a time. Multielectrode arrays allow the monitoring of electrical activity from many neurons at once, but they don’t sample densely enough to get all the neurons within a given volume. Calcium imaging does allow such dense sampling, but it measures calcium, an indirect and slow measure of neural electrical activity.

In 2018, MIT researchers developed a light-sensitive protein that can be embedded into neuron membranes, where it emits a fluorescent signal that indicates how much voltage a particular cell is experiencing. Image courtesy of the researchers

In 2018, Boyden’s team developed an alternative way to monitor electrical activity by labeling neurons with a fluorescent probe. Using a technique known as directed protein evolution, his group engineered a molecule called Archon1 that can be genetically inserted into neurons, where it becomes embedded in the cell membrane. When a neuron’s electrical activity increases, the molecule becomes brighter, and this fluorescence can be seen with a standard light microscope.

In the 2018 paper, Boyden and his colleagues showed that they could use the molecule to image electrical activity in the brains of transparent worms and zebrafish embryos, and also in mouse brain slices. In the new study, they wanted to try to use it in living, awake mice as they engaged in a specific behavior.

To do that, the researchers had to modify the probe so that it would go to a subregion of the neuron membrane. They found that when the molecule inserts itself throughout the entire cell membrane, the resulting images are blurry because the axons and dendrites that extend from neurons also fluoresce. To overcome that, the researchers attached a small peptide that guides the probe specifically to membranes of the cell bodies of neurons. They called this modified protein SomArchon.

“With SomArchon, you can see each cell as a distinct sphere,” Boyden says. “Rather than having one cell’s light blurring all its neighbors, each cell can speak by itself loudly and clearly, uncontaminated by its neighbors.”

The researchers used this probe to image activity in a part of the brain called the striatum, which is involved in planning movement, as mice ran on a ball. They were able to monitor activity in several neurons simultaneously and correlate each one’s activity with the mice’s movement. Some neurons’ activity went up when the mice were running, some went down, and others showed no significant change.

“Over the years, my lab has tried many different versions of voltage sensors, and none of them have worked in living mammalian brains until this one,” Han says.

Using this fluorescent probe, the researchers were able to obtain measurements similar to those recorded by an electrical probe, which can pick up activity on a very rapid timescale. This makes the measurements more informative than existing techniques such as imaging calcium, which neuroscientists often use as a proxy for electrical activity.

“We want to record electrical activity on a millisecond timescale,” Han says. “The timescale and activity patterns that we get from calcium imaging are very different. We really don’t know exactly how these calcium changes are related to electrical dynamics.”

With the new voltage sensor, it is also possible to measure very small fluctuations in activity that occur even when a neuron is not firing a spike. This could help neuroscientists study how small fluctuations impact a neuron’s overall behavior, which has previously been very difficult in living brains, Han says.

Mapping circuits

The researchers also showed that this imaging technique can be combined with optogenetics — a technique developed by the Boyden lab and collaborators that allows researchers to turn neurons on and off with light by engineering them to express light-sensitive proteins. In this case, the researchers activated certain neurons with light and then measured the resulting electrical activity in these neurons.

This imaging technology could also be combined with expansion microscopy, a technique that Boyden’s lab developed to expand brain tissue before imaging it, making it easier to see the anatomical connections between neurons in high resolution.

“One of my dream experiments is to image all the activity in a brain, and then use expansion microscopy to find the wiring between those neurons,” Boyden says. “Then can we predict how neural computations emerge from the wiring?”

Such wiring diagrams could allow researchers to pinpoint circuit abnormalities that underlie brain disorders, and may also help researchers to design artificial intelligence that more closely mimics the human brain, Boyden says.

The MIT portion of the research was funded by Edward and Kay Poitras, the National Institutes of Health, including a Director’s Pioneer Award, Charles Hieken, John Doerr, the National Science Foundation, the HHMI-Simons Faculty Scholars Program, the Human Frontier Science Program, and the U.S. Army Research Office.

Controlling our internal world

Olympic skaters can launch, perform multiple aerial turns, and land gracefully, anticipating imperfections and reacting quickly to correct course. To make such elegant movements, the brain must have an internal model of the body to control, predict, and make almost instantaneous adjustments to motor commands. So-called “internal models” are a fundamental concept in engineering and have long been suggested to underlie control of movement by the brain, but what about processes that occur in the absence of movement, such as contemplation, anticipation, planning?

Using a novel combination of task design, data analysis, and modeling, MIT neuroscientist Mehrdad Jazayeri and colleagues now provide compelling evidence that the core elements of an internal model also control purely mental processes in a study published in Nature Neuroscience.

“During my thesis I realized that I’m interested, not so much in how our senses react to sensory inputs, but instead in how my internal model of the world helps me make sense of those inputs,” says Jazayeri, the Robert A. Swanson Career Development Professor of Life Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Indeed, understanding the building blocks exerting control of such mental processes could help to paint a better picture of disruptions in mental disorders, such as schizophrenia.

Internal models for mental processes

Scientists working on the motor system have long theorized that the brain overcomes noisy and slow signals using an accurate internal model of the body. This internal model serves three critical functions: it provides motor commands to control movement, simulates upcoming movement to overcome delays, and uses feedback to make real-time adjustments.

“The framework that we currently use to think about how the brain controls our actions is one that we have borrowed from robotics: we use controllers, simulators, and sensory measurements to control machines and train operators,” explains Reza Shadmehr, a professor at the Johns Hopkins School of Medicine who was not involved with the study. “That framework has largely influenced how we imagine our brain controlling our movements.”

Jazayeri and colleagues wondered whether the same framework might explain the control principles governing mental states in the absence of any movement.

“When we’re simply sitting, thoughts and images run through our heads and, fundamental to intellect, we can control them,” explains lead author Seth Egger, a former postdoctoral associate in the Jazayeri lab and now at Duke University.

“We wanted to find out what’s happening between our ears when we are engaged in thinking,” says Egger.

Imagine, for example, a sign language interpreter keeping up with a fast speaker. To track the speech accurately, the interpreter continuously anticipates where it is going, rapidly adjusting when the actual words deviate from the prediction. The interpreter could be using an internal model to anticipate upcoming words, and feedback to make adjustments on the fly.

1-2-3…Go

Hypothesizing about how the components of an internal model function in scenarios such as translation is one thing. Cleanly measuring and proving the existence of these elements is much more complicated as the activity of the controller, simulator, and feedback are intertwined. To tackle this problem, Jazayeri and colleagues devised a clever task with primate models in which the controller, simulator, and feedback act at distinct times.

In this task, called “1-2-3-Go,” the animal sees three consecutive flashes (1, 2, and 3) that form a regular beat, and learns to make an eye movement (Go) when it anticipates the fourth flash should occur. During the task, researchers measured neural activity in a region of the frontal cortex they had previously linked to the timing of movement.

Jazayeri and colleagues had clear predictions about when the controller would act (between the third flash and “Go”) and when feedback would be engaged (with each flash of light). The key surprise came when the researchers saw evidence for the simulator anticipating the third flash. This unexpected neural activity had dynamics resembling the controller’s, but was not associated with a response. In other words, the researchers uncovered a covert plan that functions as the simulator, thus revealing all three elements of an internal model for a mental process: the planning and anticipation of “Go” in the “1-2-3-Go” sequence.
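The logic of the three components can be caricatured in a few lines of code. This is a deliberately simplified sketch, not the study’s model — the function names, the feedback gain, and the prior are all invented for illustration. Feedback corrects the internal beat estimate after each flash, and the simulator extrapolates that beat one interval forward to anticipate “Go”:

```python
def track_beat(flash_times, gain=0.5, prior=0.6):
    """Feedback: after each flash, nudge the internal estimate of the
    beat toward the observed interval (a crude error-correcting update)."""
    estimate = prior
    for last, now in zip(flash_times, flash_times[1:]):
        observed = now - last
        estimate += gain * (observed - estimate)  # correct the prediction error
    return estimate

def predict_go_time(flash_times, **kwargs):
    """Simulator: extrapolate the estimated beat one interval past the
    third flash to anticipate when the fourth flash ('Go') should occur."""
    return flash_times[-1] + track_beat(flash_times, **kwargs)

# Three flashes half a second apart: each flash pulls the estimate
# toward the true 0.5 s beat, so the prediction converges toward t = 1.5 s.
print(predict_go_time([0.0, 0.5, 1.0]))
```

The controller would be the separate process that, between the third flash and “Go,” converts this prediction into an actual eye-movement command.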

“Jazayeri’s work is important because it demonstrates how to study mental simulation in animals,” explains Shadmehr, “and where in the brain that simulation is taking place.”

Having found how and where to measure an internal model in action, Jazayeri and colleagues now plan to ask whether these control strategies can explain how primates effortlessly generalize their knowledge from one behavioral context to another. For example, how does an interpreter rapidly adjust when someone with widely different speech habits takes the podium? This line of investigation promises to shed light on high-level mental capacities of the primate brain that simpler animals seem to lack, that go awry in mental disorders, and that designers of artificial intelligence systems so fondly seek.

What is the social brain?

As part of our Ask the Brain series, Anila D’Mello, a postdoctoral fellow in John Gabrieli’s lab, answers the question, “What is the social brain?”

_____

Anila D'Mello portrait
Anila D’Mello is the Simons Center for the Social Brain Postdoctoral Fellow in John Gabrieli’s lab at the McGovern Institute.

“Knock Knock.”
“Who’s there?”
“The Social Brain.”
“The Social Brain, who?”

Call and response jokes, like the “Knock Knock” joke above, leverage our common understanding of how a social interaction typically proceeds. Joke telling allows us to interact socially with others based on our shared experiences and understanding of the world. But where do these abilities “live” in the brain and how does the social brain develop?

Neuroimaging and lesion studies have identified a network of brain regions that support social interaction, including the ability to understand and partake in jokes – we refer to this as the “social brain.” This social brain network is made up of multiple regions throughout the brain that together support complex social interactions. Within this network, each region likely contributes to a specific type of social processing. The right temporo-parietal junction, for instance, is important for thinking about another person’s mental state, whereas the amygdala is important for the interpretation of emotional facial expressions and fear processing. Damage to these brain regions can have striking effects on social behaviors. One recent study even found that individuals with bigger amygdala volumes had larger and more complex social networks!

Though social interaction is such a fundamental human trait, we aren’t born with a prewired social brain.

Much of our social ability is grown and honed over time through repeated social interactions. Brain networks that support social interaction continue to specialize into adulthood. Neuroimaging work suggests that though newborn infants may have all the right brain parts to support social interaction, these regions may not yet be specialized or connected in the right way. This means that early experiences and environments can have large influences on the social brain. For instance, social neglect, especially very early in development, can have negative impacts on social behaviors and on how the social brain is wired. One prominent example is that of children raised in orphanages or institutions, who are sometimes faced with limited adult interaction or access to language. Children raised in these conditions are more likely to have social challenges including difficulties forming attachments. Prolonged lack of social stimulation also alters the social brain in these children resulting in changes in amygdala size and connections between social brain regions.

The social brain is not just a result of our environment. Genetics and biology also contribute to the social brain in ways we don’t yet fully understand. For example, autistic individuals may experience difficulties with social interaction and communication. This may include challenges with things like understanding the punchline of a joke. These challenges have led to the hypothesis that there may be differences in the social brain network in autism. However, despite documented behavioral differences in social tasks, there is conflicting brain imaging evidence for whether differences exist between people with and without autism in the social brain network.

Examples such as that of autism imply that the reality of the social brain is probably much more complex than the story painted here. It is likely that social interaction calls upon many different parts of the brain, even beyond those that we have termed the “social brain,” that must work in concert to support this highly complex set of behaviors. These include regions of the brain important for listening, seeing, speaking, and moving. In addition, it’s important to remember that the social brain and regions that make it up do not stand alone. Regions of the social brain also play an intimate role in language, humor, and other cognitive processes.

“Knock Knock”
“Who’s there?”
“The Social Brain”
“The Social Brain, who?”
“I just told you…didn’t you read what I wrote?”

Anila D’Mello earned her bachelor’s degree in psychology from Georgetown University in 2012, and went on to receive her PhD in Behavior, Cognition, and Neuroscience from American University in 2017. She joined the Gabrieli lab as a postdoc in 2017 and studies the neural correlates of social communication in autism.

_____

Do you have a question for The Brain? Ask it here.

Better sleep habits lead to better college grades

Two MIT professors have found a strong relationship between students’ grades and how much sleep they’re getting. What time students go to bed and the consistency of their sleep habits also make a big difference. And no, getting a good night’s sleep just before a big test is not good enough — it takes several nights in a row of good sleep to make a difference.

Those are among the conclusions from an experiment in which 100 students in an MIT engineering class were given Fitbits, the popular wrist-worn devices that track a person’s activity 24/7, in exchange for the researchers’ access to a semester’s worth of their activity data. The findings — some unsurprising, but some quite unexpected — are reported today in the journal npj Science of Learning in a paper by former MIT postdoc Kana Okano, professors Jeffrey Grossman and John Gabrieli, and two others.

One of the surprises was that individuals who went to bed after some particular threshold time — for these students, that tended to be 2 a.m., but it varied from one person to another — tended to perform less well on their tests no matter how much total sleep they ended up getting.

The study didn’t start out as research on sleep at all. Instead, Grossman was trying to find a correlation between physical exercise and the academic performance of students in his class 3.091 (Introduction to Solid-State Chemistry). In addition to having 100 of the students wear Fitbits for the semester, he also enrolled about one-fourth of them in an intense fitness class in MIT’s Department of Athletics, Physical Education, and Recreation, with the help of assistant professors Carrie Moore and Matthew Breen, who created the class specifically for this study. The thinking was that there might be measurable differences in test performance between the two groups.

There wasn’t. Those without the fitness classes performed just as well as those who did take them. “What we found at the end of the day was zero correlation with fitness, which I must say was disappointing since I believed, and still believe, there is a tremendous positive impact of exercise on cognitive performance,” Grossman says.

He speculates that the intervals between the fitness program and the classes may have been too long to show an effect. But meanwhile, in the vast amount of data collected during the semester, some other correlations did become obvious. While the devices weren’t explicitly monitoring sleep, the Fitbit program’s proprietary algorithms did detect periods of sleep and changes in sleep quality, primarily based on lack of activity.

These correlations were not at all subtle, Grossman says. There was essentially a straight-line relationship between the average amount of sleep a student got and their grades on the 11 quizzes, three midterms, and final exam, with the grades ranging from A’s to C’s. “There’s lots of scatter, it’s a noisy plot, but it’s a straight line,” he says. The fact that there was a correlation between sleep and performance wasn’t surprising, but the extent of it was, he says. Of course, this correlation can’t absolutely prove that sleep was the determining factor in the students’ performance, as opposed to some other influence that might have affected both sleep and grades. But the results are a strong indication, Grossman says, that sleep “really, really matters.”

“Of course, we knew already that more sleep would be beneficial to classroom performance, from a number of previous studies that relied on subjective measures like self-report surveys,” Grossman says. “But in this study the benefits of sleep are correlated to performance in the context of a real-life college course, and driven by large amounts of objective data collection.”

The study also revealed no improvement in scores for those who made sure to get a good night’s sleep right before a big test. According to the data, “the night before doesn’t matter,” Grossman says. “We’ve heard the phrase ‘Get a good night’s sleep, you’ve got a big day tomorrow.’ It turns out this does not correlate at all with test performance. Instead, it’s the sleep you get during the days when learning is happening that matters most.”

Another surprising finding is that there appears to be a certain cutoff for bedtimes, such that going to bed later results in poorer performance, even if the total amount of sleep is the same. “When you go to bed matters,” Grossman says. “If you get a certain amount of sleep — let’s say seven hours — no matter when you get that sleep, as long as it’s before certain times, say you go to bed at 10, or at 12, or at 1, your performance is the same. But if you go to bed after 2, your performance starts to go down even if you get the same seven hours. So, quantity isn’t everything.”

Quality of sleep also mattered, not just quantity. For example, those who got relatively consistent amounts of sleep each night did better than those who had greater variations from one night to the next, even if they ended up with the same average amount.
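The two analyses described above — a straight-line relationship between average sleep and grades, and a penalty for night-to-night variability — boil down to simple statistics. Here is a minimal sketch with invented numbers (these are not the study’s data, and the function names are ours): Pearson correlation captures how close the sleep–grade scatter is to a straight line, and a per-student standard deviation captures consistency:

```python
def mean(xs):
    return sum(xs) / len(xs)

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def nightly_std(hours):
    """Night-to-night variability: standard deviation of one student's sleep."""
    m = mean(hours)
    return (sum((h - m) ** 2 for h in hours) / len(hours)) ** 0.5

# Invented per-student averages: nightly sleep (hours) vs. course grade (%).
sleep = [6.0, 6.4, 6.9, 7.3, 7.8, 8.2]
grade = [72, 74, 79, 82, 87, 89]
print(round(pearson_r(sleep, grade), 2))  # strongly positive on this toy data

# Two hypothetical students with the same average sleep, different consistency.
steady  = [7.0, 7.1, 6.9, 7.0, 7.0]
erratic = [5.0, 9.0, 6.0, 8.5, 6.5]
print(nightly_std(steady) < nightly_std(erratic))  # -> True
```

A high correlation on real data, as here, still cannot settle causation — the caveat Grossman and Stickgold both raise below.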

This research also helped to provide an explanation for something that Grossman says he had noticed and wondered about for years, which is that on average, the women in his class have consistently gotten better grades than the men. Now, he has a possible answer: The data show that the differences in quantity and quality of sleep can fully account for the differences in grades. “If we correct for sleep, men and women do the same in class. So sleep could be the explanation for the gender difference in our class,” he says.

More research will be needed to understand the reasons why women tend to have better sleep habits than men. “There are so many factors out there that it could be,” Grossman says. “I can envision a lot of exciting follow-on studies to try to understand this result more deeply.”

“The results of this study are very gratifying to me as a sleep researcher, but are terrifying to me as a parent,” says Robert Stickgold, a professor of psychiatry and director of the Center for Sleep and Cognition at Harvard Medical School, who was not connected with this study. He adds, “The overall course grades for students averaging six and a half hours of sleep were down 50 percent from other students who averaged just one hour more sleep. Similarly, those who had just a half-hour more night-to-night variation in their total sleep time had grades that dropped 45 percent below others with less variation. This is huge!”

Stickgold says “a full quarter of the variation in grades was explained by these sleep parameters (including bedtime). All students need to not only be aware of these results, but to understand their implication for success in college. I can’t help but believe the same is true for high school students.” But he adds one caution: “That said, correlation is not the same as causation. While I have no doubt that less and more variable sleep will hurt a student’s grades, it’s also possible that doing poorly in classes leads to less and more variable sleep, not the other way around, or that some third factor, such as ADHD, could independently lead to poorer grades and poorer sleep.”

The team also included technical assistant Jakub Kaczmarzyk and Harvard Business School researcher Neha Dave. The study was supported by MIT’s Department of Materials Science and Engineering, the Lubin Fund, and the MIT Integrated Learning Initiative.