Delving deep into the brain

Launched in 2013, the national BRAIN Initiative aims to revolutionize our understanding of cognition by mapping the activity of every neuron in the human brain, revealing how brain circuits interact to create memories, learn new skills, and interpret the world around us.

Before that can happen, neuroscientists need new tools that will let them probe the brain more deeply and in greater detail, says Alan Jasanoff, an MIT associate professor of biological engineering. “There’s a general recognition that in order to understand the brain’s processes in comprehensive detail, we need ways to monitor neural function deep in the brain with spatial, temporal, and functional precision,” he says.

Jasanoff and colleagues have now taken a step toward that goal: They have established a technique that allows them to track neural communication in the brain over time, using magnetic resonance imaging (MRI) along with a specialized molecular sensor. This is the first time anyone has been able to map neural signals with high precision over large brain regions in living animals, offering a new window on brain function, says Jasanoff, who is also an associate member of MIT’s McGovern Institute for Brain Research.

His team used this molecular imaging approach, described in the May 1 online edition of Science, to study the neurotransmitter dopamine in a region called the ventral striatum, which is involved in motivation, reward, and reinforcement of behavior. In future studies, Jasanoff plans to combine dopamine imaging with functional MRI techniques that measure overall brain activity to gain a better understanding of how dopamine levels influence neural circuitry.

“We want to be able to relate dopamine signaling to other neural processes that are going on,” Jasanoff says. “We can look at different types of stimuli and try to understand what dopamine is doing in different brain regions and relate it to other measures of brain function.”

Tracking dopamine

Dopamine is one of many neurotransmitters that help neurons to communicate with each other over short distances. Much of the brain’s dopamine is produced by a structure called the ventral tegmental area (VTA). This dopamine travels through the mesolimbic pathway to the ventral striatum, where it combines with sensory information from other parts of the brain to reinforce behavior and help the brain learn new tasks and motor functions. This circuit also plays a major role in addiction.

To track dopamine’s role in neural communication, the researchers used an MRI sensor they had previously designed, consisting of an iron-containing protein that acts as a weak magnet. When the sensor binds to dopamine, its magnetic interactions with the surrounding tissue weaken, which dims the tissue’s MRI signal. This allows the researchers to see where in the brain dopamine is being released. The researchers also developed an algorithm that lets them calculate the precise amount of dopamine present in each fraction of a cubic millimeter of the ventral striatum.
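The article does not give the calibration itself, but the logic of converting signal dimming into concentration can be sketched with a toy saturating-binding model. Everything here — the constants K_D and ALPHA, and the assumption that dimming scales linearly with the bound fraction of sensor — is invented for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical calibration: assume the sensor dims the MRI signal in
# proportion to the fraction of sensor molecules bound by dopamine
# (simple Langmuir binding). K_D and ALPHA are illustrative values.
K_D = 1.0     # dopamine concentration (uM) at half-saturation
ALPHA = 0.3   # maximal fractional signal loss at full saturation

def signal_from_dopamine(da_uM, s0=1.0):
    """Forward model: MRI signal given dopamine concentration."""
    bound = da_uM / (da_uM + K_D)
    return s0 * (1.0 - ALPHA * bound)

def dopamine_from_signal(s, s0=1.0):
    """Invert the forward model to estimate dopamine per voxel."""
    x = (1.0 - s / s0) / ALPHA      # estimated bound fraction
    x = np.clip(x, 0.0, 0.999)      # keep the inversion well-defined
    return K_D * x / (1.0 - x)

# A toy 2x2 "voxel" patch of MRI signal values: dimmer voxels map to
# higher estimated dopamine concentrations.
signal = np.array([[1.00, 0.95], [0.90, 0.85]])
print(np.round(dopamine_from_signal(signal), 2))
```

Applied voxel by voxel across the ventral striatum, an inversion of this kind is what turns a map of signal dimming into a map of dopamine concentration.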

After delivering the MRI sensor to the ventral striatum of rats, Jasanoff’s team electrically stimulated the mesolimbic pathway and was able to detect exactly where in the ventral striatum dopamine was released. An area known as the nucleus accumbens core, known to be one of the main targets of dopamine from the VTA, showed the highest levels. The researchers also saw that some dopamine is released in neighboring regions such as the ventral pallidum, which regulates motivation and emotions, and parts of the thalamus, which relays sensory and motor signals in the brain.

Each dopamine stimulation lasted for 16 seconds and the researchers took an MRI image every eight seconds, allowing them to track how dopamine levels changed as the neurotransmitter was released from cells and then disappeared. “We could divide up the map into different regions of interest and determine dynamics separately for each of those regions,” Jasanoff says.

He and his colleagues plan to build on this work by expanding their studies to other parts of the brain, including the areas most affected by Parkinson’s disease, which is caused by the death of dopamine-generating cells. Jasanoff’s lab is also working on sensors to track other neurotransmitters, allowing them to study interactions between neurotransmitters during different tasks.

The paper’s lead author is postdoc Taekwan Lee. Technical assistant Lili Cai and postdocs Victor Lelyveld and Aviad Hai also contributed to the research, which was funded by the National Institutes of Health and the Defense Advanced Research Projects Agency.

How the brain pays attention

Picking out a face in the crowd is a complicated task: Your brain has to retrieve the memory of the face you’re seeking, then hold it in place while scanning the crowd, paying special attention to finding a match.

A new study by MIT neuroscientists reveals how the brain achieves this type of focused attention on faces or other objects: A part of the prefrontal cortex known as the inferior frontal junction (IFJ) controls visual processing areas that are tuned to recognize a specific category of objects, the researchers report in the April 10 online edition of Science.

Scientists know much less about this type of attention, known as object-based attention, than spatial attention, which involves focusing on what’s happening in a particular location. However, the new findings suggest that these two types of attention have similar mechanisms involving related brain regions, says Robert Desimone, the Doris and Don Berkey Professor of Neuroscience, director of MIT’s McGovern Institute for Brain Research, and senior author of the paper.

“The interactions are surprisingly similar to those seen in spatial attention,” Desimone says. “It seems like it’s a parallel process involving different areas.”

In both cases, the prefrontal cortex — the control center for most cognitive functions — appears to take charge of the brain’s attention and control relevant parts of the visual cortex, which receives sensory input. For spatial attention, that involves regions of the visual cortex that map to a particular area within the visual field.

In the new study, the researchers found that IFJ coordinates with a brain region that processes faces, known as the fusiform face area (FFA), and a region that interprets information about places, known as the parahippocampal place area (PPA). The FFA and PPA were first identified in the human cortex by Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience at MIT.

The IFJ has previously been implicated in a cognitive ability known as working memory, which is what allows us to gather and coordinate information while performing a task — such as remembering and dialing a phone number, or doing a math problem.

For this study, the researchers used magnetoencephalography (MEG) to scan human subjects as they viewed a series of overlapping images of faces and houses. Unlike functional magnetic resonance imaging (fMRI), which is commonly used to measure brain activity, MEG can reveal the precise timing of neural activity, down to the millisecond. The researchers presented the overlapping streams at two different rhythms — two images per second and 1.5 images per second — allowing them to identify brain regions responding to those stimuli.

“We wanted to frequency-tag each stimulus with different rhythms. When you look at all of the brain activity, you can tell apart signals that are engaged in processing each stimulus,” says Daniel Baldauf, a postdoc at the McGovern Institute and the lead author of the paper.
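A minimal sketch of frequency tagging on synthetic data, assuming sinusoidal responses at the two stimulus rates (the sampling rate, amplitudes, and noise level are all made up): two streams flicker at 2 Hz and 1.5 Hz, and a Fourier transform of the mixed recording separates the response to each stream.

```python
import numpy as np

# Two stimulus streams tagged at 2 Hz ("faces") and 1.5 Hz ("houses"),
# mixed into one noisy "sensor" recording.
fs = 100.0                      # sampling rate (Hz), illustrative
t = np.arange(0, 20, 1 / fs)    # 20 s of data -> 0.05 Hz resolution

face_stream = 1.0 * np.sin(2 * np.pi * 2.0 * t)
house_stream = 0.6 * np.sin(2 * np.pi * 1.5 * t)
rng = np.random.default_rng(0)
recording = face_stream + house_stream + 0.2 * rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(recording))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def power_at(f_hz):
    """Spectral magnitude at the bin closest to f_hz."""
    return spectrum[np.argmin(np.abs(freqs - f_hz))]

# Each tag frequency carries far more power than neighboring bins, so
# the two streams can be read out independently from one recording.
print(power_at(2.0) > 5 * power_at(2.3), power_at(1.5) > 5 * power_at(1.2))
```

The same principle lets MEG sensors attribute activity in a given brain region to one stimulus stream or the other, even though the two streams occupy the same spot in the visual field.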

Each subject was told to pay attention to either faces or houses; because the houses and faces were in the same spot, the brain could not use spatial information to distinguish them. When the subjects were told to look for faces, activity in the FFA and the IFJ became synchronized, suggesting that they were communicating with each other. When the subjects paid attention to houses, the IFJ synchronized instead with the PPA.

The researchers also found that the communication was initiated by the IFJ and the activity was staggered by 20 milliseconds — about the amount of time it would take for neurons to electrically convey information from the IFJ to either the FFA or PPA. The researchers believe that the IFJ holds onto the idea of the object that the brain is looking for and directs the correct part of the brain to look for it.

Further bolstering this idea, the researchers used an MRI-based method to measure the white matter that connects different brain regions and found that the IFJ is highly connected with both the FFA and PPA.
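A lag of this kind can be estimated by cross-correlating the two activity traces. The toy sketch below fabricates an "IFJ" trace, delays a noisy copy of it by 20 ms to stand in for "FFA" activity, and recovers the delay; the sampling rate and noise level are illustrative, not the study's:

```python
import numpy as np

# Estimate the lead-lag relationship between two signals by finding the
# shift that maximizes their cross-correlation.
fs = 1000                          # 1 kHz sampling -> 1 ms resolution
rng = np.random.default_rng(1)
ifj = rng.standard_normal(2000)    # 2 s of synthetic "IFJ" activity
lag_ms = 20
ffa = np.roll(ifj, lag_ms) + 0.5 * rng.standard_normal(ifj.size)

xcorr = np.correlate(ffa, ifj, mode="full")
lags = np.arange(-ifj.size + 1, ifj.size)
estimated_lag = lags[np.argmax(xcorr)]    # in samples = ms at 1 kHz
print(estimated_lag)   # expect 20: "FFA" follows "IFJ" by ~20 ms
```

A positive lag at the correlation peak indicates that the second trace follows the first, which is how staggered timing supports the claim that the IFJ initiates the communication.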

Members of Desimone’s lab are now studying how the brain shifts its focus between different types of sensory input, such as vision and hearing. They are also investigating whether it might be possible to train people to better focus their attention by controlling the brain interactions involved in this process.

“You have to identify the basic neural mechanisms and do basic research studies, which sometimes generate ideas for things that could be of practical benefit,” Desimone says. “It’s too early to say whether this training is even going to work at all, but it’s something that we’re actively pursuing.”

The research was funded by the National Institutes of Health and the National Science Foundation.

MRI reveals genetic activity

Doctors commonly use magnetic resonance imaging (MRI) to diagnose tumors, damage from stroke, and many other medical conditions. Neuroscientists also rely on it as a research tool for identifying parts of the brain that carry out different cognitive functions.

Now, a team of biological engineers at MIT is trying to adapt MRI to a much smaller scale, allowing researchers to visualize gene activity inside the brains of living animals. Tracking these genes with MRI would enable scientists to learn more about how the genes control processes such as forming memories and learning new skills, says Alan Jasanoff, an MIT associate professor of biological engineering and leader of the research team.

“The dream of molecular imaging is to provide information about the biology of intact organisms, at the molecule level,” says Jasanoff, who is also an associate member of MIT’s McGovern Institute for Brain Research. “The goal is to not have to chop up the brain, but instead to actually see things that are happening inside.”

To help reach that goal, Jasanoff and colleagues have developed a new way to image a “reporter gene” — an artificial gene that turns on or off to signal events in the body, much like an indicator light on a car’s dashboard. In the new study, the reporter gene encodes an enzyme that interacts with a magnetic contrast agent injected into the brain, making the agent visible with MRI. This approach, described in a recent issue of the journal Chemical Biology, allows researchers to determine when and where that reporter gene is turned on.

An on/off switch

MRI uses magnetic fields and radio waves that interact with protons in the body to produce detailed images of the body’s interior. In brain studies, neuroscientists commonly use functional MRI to measure blood flow, which reveals which parts of the brain are active during a particular task. When scanning other organs, doctors sometimes use magnetic “contrast agents” to boost the visibility of certain tissues.

The new MIT approach includes a contrast agent called a manganese porphyrin and the new reporter gene, which codes for a genetically engineered enzyme that alters the electric charge on the contrast agent. Jasanoff and colleagues designed the contrast agent so that it is soluble in water and readily eliminated from the body, making it difficult to detect by MRI. However, when the engineered enzyme, known as SEAP, slices phosphate molecules from the manganese porphyrin, the contrast agent becomes insoluble and starts to accumulate in brain tissues, allowing it to be seen.

The natural version of SEAP is found in the placenta, but not in other tissues. By injecting a virus carrying the SEAP gene into the brain cells of mice, the researchers were able to incorporate the gene into the cells’ own genome. Brain cells then started producing the SEAP protein, which is secreted from the cells and can be anchored to their outer surfaces. That’s important, Jasanoff says, because it means that the contrast agent doesn’t have to penetrate the cells to interact with the enzyme.

Researchers can then find out where SEAP is active by injecting the MRI contrast agent, which spreads throughout the brain but accumulates only near cells producing the SEAP protein.

Exploring brain function

In this study, which was designed to test this general approach, the detection system revealed only whether the SEAP gene had been successfully incorporated into brain cells. However, in future studies, the researchers intend to engineer the SEAP gene so it is only active when a particular gene of interest is turned on.

Jasanoff first plans to link the SEAP gene with so-called “immediate early genes,” which are necessary for brain plasticity — the weakening and strengthening of connections between neurons, which is essential to learning and memory.

“As people who are interested in brain function, the top questions we want to address are about how brain function changes patterns of gene expression in the brain,” Jasanoff says. “We also imagine a future where we might turn the reporter enzyme on and off when it binds to neurotransmitters, so we can detect changes in neurotransmitter levels as well.”

Assaf Gilad, an assistant professor of radiology at Johns Hopkins University, says the MIT team has taken a “very creative approach” to developing noninvasive, real-time imaging of gene activity. “These kinds of genetically engineered reporters have the potential to revolutionize our understanding of many biological processes,” says Gilad, who was not involved in the study.

The research was funded by the Raymond and Beverly Sackler Foundation, the National Institutes of Health, and an MIT-Germany Seed Fund grant. The paper’s lead author is former MIT postdoc Gil Westmeyer; other authors are former MIT technical assistant Yelena Emer and Jutta Lintelmann of the German Research Center for Environmental Health.

Optogenetic toolkit goes multicolor

Optogenetics is a technique that allows scientists to control neurons’ electrical activity with light by engineering them to express light-sensitive proteins. Within the past decade, it has become a very powerful tool for discovering the functions of different types of cells in the brain.

Most of these light-sensitive proteins, known as opsins, respond to light in the blue-green range. Now, a team led by MIT has discovered an opsin that is sensitive to red light, which allows researchers to independently control the activity of two populations of neurons at once, enabling much more complex studies of brain function.

“If you want to see how two different sets of cells interact, or how two populations of the same cell compete against each other, you need to be able to activate those populations independently,” says Ed Boyden, a member of the McGovern Institute for Brain Research at MIT and a senior author of the new study.

The new opsin is one of about 60 light-sensitive proteins found in a screen of 120 species of algae. The study, which appears in the Feb. 9 online edition of Nature Methods, also yielded the fastest opsin, enabling researchers to study neuron activity patterns with millisecond timescale precision.

Boyden and Gane Ka-Shu Wong, a professor of medicine and biological sciences at the University of Alberta, are the paper’s senior authors, and the lead author is MIT postdoc Nathan Klapoetke. Researchers from the Howard Hughes Medical Institute’s Janelia Farm Research Campus, the University of Pennsylvania, the University of Cologne, and the Beijing Genomics Institute also contributed to the study.

In living color

Opsins occur naturally in many algae and bacteria, which use the light-sensitive proteins to help them respond to their environment and generate energy.

To achieve optical control of neurons, scientists engineer brain cells to express the gene for an opsin, which transports ions across the cell’s membrane to alter its voltage. Depending on the opsin used, shining light on the cell either lowers the voltage and silences neuron firing, or boosts voltage and provokes the cell to generate an electrical impulse. This effect is nearly instantaneous and easily reversible.

Using this approach, researchers can selectively turn a population of cells on or off and observe what happens in the brain. However, until now, they could activate only one population at a time, because the only opsins that responded to red light also responded to blue light, so they couldn’t be paired with other opsins to control two different cell populations.

To seek additional useful opsins, the MIT researchers worked with Wong’s team at the University of Alberta, which is sequencing the transcriptomes of 1,000 plants, including some algae. (The transcriptome is similar to the genome but includes only the genes that are expressed by a cell, not the entirety of its genetic material.)

Once the team obtained genetic sequences that appeared to code for opsins, Klapoetke tested their light-responsiveness in mammalian brain tissue, working with Martha Constantine-Paton, a professor of brain and cognitive sciences and of biology, a member of the McGovern Institute for Brain Research at MIT, and also an author of the paper. The red-light-sensitive opsin, which the researchers named Chrimson, can mediate neural activity in response to light with a 735-nanometer wavelength.

The researchers also discovered a blue-light-driven opsin that has two highly desirable traits: It operates at high speed, and it is sensitive to very dim light. This opsin, called Chronos, can be stimulated with levels of blue light that are too weak to activate Chrimson.

“You can use short pulses of dim blue light to drive the blue one, and you can use strong red light to drive Chrimson, and that allows you to do true two-color, zero-cross-talk activation in intact brain tissue,” says Boyden, who is a member of MIT’s Media Lab and an associate professor of biological engineering and brain and cognitive sciences at MIT.
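That separation logic can be captured in a toy model. The wavelength cutoffs and intensity thresholds below are invented for illustration — only the qualitative behavior follows the description above: Chronos is far more light-sensitive and blind to red, while Chrimson is red-driven but can be cross-activated by strong blue light, which is why the blue pulses must stay dim.

```python
# Toy model of two-color, zero-cross-talk activation. All numbers are
# made up; the logic is the point.

def chronos_fires(wavelength_nm, intensity):
    # Chronos: blue-driven, very sensitive, does not respond to red.
    return wavelength_nm < 550 and intensity >= 0.1

def chrimson_fires(wavelength_nm, intensity):
    # Chrimson: red-driven; strong blue light can also activate it,
    # so blue stimulation must be kept below its cross-talk threshold.
    if wavelength_nm >= 550:
        return intensity >= 0.5
    return intensity >= 2.0    # hypothetical blue cross-activation level

dim_blue = (470, 0.2)       # short, dim blue pulse
strong_red = (625, 1.0)     # strong red pulse
print(chronos_fires(*dim_blue), chrimson_fires(*dim_blue))      # True False
print(chronos_fires(*strong_red), chrimson_fires(*strong_red))  # False True
```

Each light condition drives exactly one opsin, which is what lets two neighboring cell populations be controlled independently in the same tissue.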

Researchers had previously tried to modify naturally occurring opsins to make them respond faster and react to dimmer light, but trying to optimize one feature often made other features worse.

“It was apparent that when trying to engineer traits like color, light sensitivity, and kinetics, there are always tradeoffs,” Klapoetke says. “We’re very lucky that something natural actually was more than several times faster and also five or six times more light-sensitive than anything else.”

Selective control

These new opsins lend themselves to several types of studies that were not possible before, Boyden says. For one, scientists could not only manipulate the activity of a cell population of interest, but also control the upstream cells that influence the target population by secreting neurotransmitters.

Pairing Chrimson and Chronos could also allow scientists to study the functions of different types of cells in the same microcircuit within the brain. Such cells are usually located very close together, but with the new opsins they can be controlled independently with two different colors of light.

“I think the tools described in this excellent paper represent a major advance for both basic and translational neuroscience,” says Botond Roska, a senior group leader at the Friedrich Miescher Institute for Biomedical Research in Switzerland, who was not part of the research team. “Optogenetic tools that are shifted towards the infrared range, such as Chrimson described in this paper, are much better than the more blue-shifted variants since these are less toxic, activate less the pupillary reflex, and activate less the remaining photoreceptors of patients.”

Most optogenetic studies thus far have been done in mice, but Chrimson could be used for optogenetic studies of fruit flies, a commonly used experimental organism. Researchers have had trouble using blue-light-sensitive opsins in fruit flies because the light can get into the flies’ eyes and startle them, interfering with the behavior being studied.

Vivek Jayaraman, a research group leader at Janelia Farm Research Campus and an author of the paper, was able to show that this startle response does not occur when red light is used to stimulate Chrimson in fruit flies.

Because red light is less damaging to tissue than blue light, Chrimson also holds potential for eventual therapeutic use in humans, Boyden says. Animal studies with other opsins have shown promise in helping to restore vision after the loss of photoreceptor cells in the retina.

The researchers are now trying to modify Chrimson to respond to light in the infrared range. They are also working on making both Chrimson and Chronos faster and more light sensitive.

MIT’s portion of the project was funded by the National Institutes of Health, the MIT Media Lab, the National Science Foundation, the Wallace H. Coulter Foundation, the Alfred P. Sloan Foundation, a NARSAD Young Investigator Grant, the Human Frontiers Science Program, an NYSCF Robertson Neuroscience Investigator Award, the IET A.F. Harvey Prize, Janet and Sheldon Razin ’59, and the Skolkovo Institute of Science and Technology.

Expanding our view of vision

Every time you open your eyes, visual information flows into your brain, which interprets what you’re seeing. Now, for the first time, MIT neuroscientists have noninvasively mapped this flow of information in the human brain with high accuracy in both space and time, using a novel brain-scanning technique.

This technique, which combines two existing technologies, allows researchers to identify precisely both the location and timing of human brain activity. Using this new approach, the MIT researchers scanned individuals’ brains as they looked at different images and were able to pinpoint, to the millisecond, when the brain recognizes and categorizes an object, and where these processes occur.

“This method gives you a visualization of ‘when’ and ‘where’ at the same time. It’s a window into processes happening at the millisecond and millimeter scale,” says Aude Oliva, a principal research scientist in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).

Oliva is the senior author of a paper describing the findings in the Jan. 26 issue of Nature Neuroscience. Lead author of the paper is CSAIL postdoc Radoslaw Cichy. Dimitrios Pantazis, a research scientist at MIT’s McGovern Institute for Brain Research, is also an author of the paper.

When and where

Until now, scientists have been able to observe the location or timing of human brain activity at high resolution, but not both, because different imaging techniques are not easily combined. The most commonly used type of brain scan, functional magnetic resonance imaging (fMRI), measures changes in blood flow, revealing which parts of the brain are involved in a particular task. However, it works too slowly to keep up with the brain’s millisecond-by-millisecond dynamics.

Another imaging technique, known as magnetoencephalography (MEG), uses an array of hundreds of sensors encircling the head to measure magnetic fields produced by neuronal activity in the brain. These sensors offer a dynamic portrait of brain activity over time, down to the millisecond, but do not tell the precise location of the signals.

To combine the time and location information generated by these two scanners, the researchers used a computational technique called representational similarity analysis, which relies on the fact that two similar objects (such as two human faces) that provoke similar signals in fMRI will also produce similar signals in MEG. This method has been used before to link fMRI with recordings of neuronal electrical activity in monkeys, but the MIT researchers are the first to use it to link fMRI and MEG data from human subjects.
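A minimal sketch of representational similarity analysis on synthetic data: two "modalities" observe the same stimuli through different random projections, so their measurements live in entirely different spaces, yet their stimulus-by-stimulus dissimilarity matrices still correlate. All dimensions and data here are invented:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n_stimuli = 8
latent = rng.standard_normal((n_stimuli, 5))   # "true" stimulus codes

# Each modality sees the latent codes through its own random projection:
# "fMRI" as 50 voxel responses, "MEG" as 30 sensor responses.
fmri = latent @ rng.standard_normal((5, 50))
meg = latent @ rng.standard_normal((5, 30))

def rdm(patterns):
    """Pairwise dissimilarity (1 - correlation) between stimulus patterns."""
    n = patterns.shape[0]
    m = np.zeros((n, n))
    for i, j in combinations(range(n), 2):
        r = np.corrcoef(patterns[i], patterns[j])[0, 1]
        m[i, j] = m[j, i] = 1.0 - r
    return m

def upper(m):
    """Flatten the upper triangle, excluding the diagonal."""
    return m[np.triu_indices_from(m, k=1)]

# Correlate the two dissimilarity matrices: shared representational
# structure links the modalities without aligning voxels to sensors.
similarity = np.corrcoef(upper(rdm(fmri)), upper(rdm(meg)))[0, 1]
print(round(similarity, 2))   # typically high for a shared latent code
```

In the actual analysis, an MEG dissimilarity matrix is computed at each millisecond and compared against fMRI dissimilarity matrices from each brain region, yielding a location for every moment in time.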

In the study, the researchers scanned 16 human volunteers as they looked at a series of 92 images, including faces, animals, and natural and manmade objects. Each image was shown for half a second.

“We wanted to measure how visual information flows through the brain. It’s just pure automatic machinery that starts every time you open your eyes, and it’s incredibly fast,” Cichy says. “This is a very complex process, and we have not yet looked at higher cognitive processes that come later, such as recalling thoughts and memories when you are watching objects.”

Each subject underwent the test multiple times — twice in an fMRI scanner and twice in an MEG scanner — giving the researchers a huge set of data on the timing and location of brain activity. All of the scanning was done at the Athinoula A. Martinos Imaging Center at the McGovern Institute.

Millisecond by millisecond

By analyzing this data, the researchers produced a timeline of the brain’s object-recognition pathway that is very similar to results previously obtained by recording electrical signals in the visual cortex of monkeys, a technique that is extremely accurate but too invasive to use in humans.

About 50 milliseconds after subjects saw an image, visual information entered a part of the brain called the primary visual cortex, or V1, which recognizes basic elements of a shape, such as whether it is round or elongated. The information then flowed to the inferotemporal cortex, where the brain identified the object as early as 120 milliseconds. Within 160 milliseconds, all objects had been classified into categories such as plant or animal.

The MIT team’s strategy “provides a rich new source of evidence on this highly dynamic process,” says Nikolaus Kriegeskorte, a principal investigator in cognition and brain sciences at Cambridge University.

“The combination of MEG and fMRI in humans is no surrogate for invasive animal studies with techniques that simultaneously have high spatial and temporal precision, but Cichy et al. come closer to characterizing the dynamic emergence of representational geometries across stages of processing in humans than any previous work. The approach will be useful for future studies elucidating other perceptual and cognitive processes,” says Kriegeskorte, who was not part of the research team.

The MIT researchers are now using representational similarity analysis to study the accuracy of computer models of vision by comparing brain scan data with the models’ predictions of how vision works.

Using this approach, scientists should also be able to study how the human brain analyzes other types of information such as motor, verbal, or sensory signals, the researchers say. It could also shed light on processes that underlie conditions such as memory disorders or dyslexia, and could benefit patients suffering from paralysis or neurodegenerative diseases.

“This is the first time that MEG and fMRI have been connected in this way, giving us a unique perspective,” Pantazis says. “We now have the tools to precisely map brain function both in space and time, opening up tremendous possibilities to study the human brain.”

The research was funded by the National Eye Institute, the National Science Foundation, and a Feodor Lynen Research Fellowship from the Humboldt Foundation.

Speeding up gene discovery

Since the completion of the Human Genome Project, which identified nearly 20,000 protein-coding genes, scientists have been trying to decipher the roles of those genes. A new approach developed at MIT, the Broad Institute, and the Whitehead Institute should speed up the process by allowing researchers to study the entire genome at once.

The new system, known as CRISPR, allows researchers to permanently and selectively delete genes from a cell’s DNA. In two new papers, the researchers showed that they could study all the genes in the genome by deleting a different gene in each of a huge population of cells, then observing which cells proliferated under different conditions.

“With this work, it is now possible to conduct systematic genetic screens in mammalian cells. This will greatly aid efforts to understand the function of both protein-coding genes as well as noncoding genetic elements,” says David Sabatini, a member of the Whitehead Institute, MIT professor of biology, and a senior author of one of the papers, both of which appear in this week’s online edition of Science.

Using this approach, the researchers were able to identify genes that allow melanoma cells to proliferate, as well as genes that confer resistance to certain chemotherapy drugs. Such studies could help scientists develop targeted cancer treatments by revealing the genes that cancer cells depend on to survive.

Feng Zhang, the W.M. Keck Assistant Professor in Biomedical Engineering and senior author of the other Science paper, developed the CRISPR system by exploiting a naturally occurring bacterial protein that recognizes and snips viral DNA. This protein, known as Cas9, is recruited by short RNA molecules called guides, which bind to the DNA to be cut. This DNA-editing complex offers very precise control over which genes are disrupted, by simply changing the sequence of the RNA guide.

“One of the things we’ve realized is that you can easily reprogram these enzymes with a short nucleic-acid chain. This paper takes advantage of that and shows that you can scale that to large numbers and really sample across the whole genome,” says Zhang, who is also a member of MIT’s McGovern Institute for Brain Research and the Broad Institute.

Genome-wide screens

For their new paper, Zhang and colleagues created a library of about 65,000 guide RNA strands that target nearly every known gene. They delivered genes for these guides, along with genes for the CRISPR machinery, to human cells. Each cell took up one of the guides, and the gene targeted by that guide was deleted. If the gene lost was necessary for survival, the cell died.
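The screen's readout logic can be sketched with made-up guide counts: sequence the guide RNAs in the population before and after selection, and flag genes whose guides drop out. The gene names, counts, and pseudocount choice below are all illustrative:

```python
import math

# Guide read counts before and after selection (invented numbers).
before = {"GENE_A_g1": 500, "GENE_A_g2": 480,   # knockout kills cells
          "GENE_B_g1": 510, "GENE_B_g2": 495}   # knockout is tolerated
after = {"GENE_A_g1": 20, "GENE_A_g2": 35,
         "GENE_B_g1": 505, "GENE_B_g2": 460}

def log2_fold_change(guide):
    # A pseudocount of 1 avoids log of zero for fully depleted guides.
    return math.log2((after[guide] + 1) / (before[guide] + 1))

def gene_score(gene):
    """Average depletion across all guides targeting a gene."""
    guides = [g for g in before if g.startswith(gene)]
    return sum(log2_fold_change(g) for g in guides) / len(guides)

for gene in ("GENE_A", "GENE_B"):
    print(gene, round(gene_score(gene), 2))
# GENE_A scores strongly negative (guides depleted -> likely essential);
# GENE_B stays near zero.
```

Averaging over multiple guides per gene guards against any single guide that cuts poorly, and the same scoring flips sign for enrichment screens, such as finding genes whose loss confers drug resistance.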

“This is the first work that really introduces so many mutations in a controlled fashion, which really opens a lot of possibilities in functional genomics,” says Ophir Shalem, a Broad Institute postdoc and one of the lead authors of the Zhang paper, along with Broad Institute postdoc Neville Sanjana.

This approach enabled the researchers to identify genes essential to the survival of two populations of cells: cancer cells and pluripotent stem cells. The researchers also identified genes necessary for melanoma cells to survive treatment with the chemotherapy drug vemurafenib.

In the other paper, led by Sabatini and Eric Lander, the director of the Broad Institute and an MIT professor of biology, the research team targeted a smaller set of about 7,000 genes, but they designed more RNA guide sequences for each gene. The researchers expected that each sequence would block its target gene equally well, but they found that cells with different guides for the same gene had varying survival rates.

“That suggested that there were intrinsic differences between guide RNA sequences that led to differences in their efficiency at cleaving the genomic DNA,” says Tim Wang, an MIT graduate student in biology and lead author of the paper.

From that data, the researchers deduced some rules that appear to govern the efficiency of the CRISPR-Cas9 system. They then used those rules to create an algorithm that can predict the most successful sequences to target a given gene.
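The article does not state the rules the team deduced, so the scorer below is purely hypothetical — invented heuristics that merely illustrate the shape of such an algorithm: score candidate guide sequences on simple features, then rank them to pick guides for a gene.

```python
# Hypothetical guide-ranking sketch. The features and weights are made
# up for illustration and are NOT the rules derived in the paper.

def gc_fraction(seq):
    """Fraction of G and C bases in the sequence."""
    return sum(base in "GC" for base in seq) / len(seq)

def score_guide(seq):
    # Invented heuristics: reward moderate GC content, penalize long
    # single-base runs.
    score = 1.0 - abs(gc_fraction(seq) - 0.5)
    for base in "ACGT":
        if base * 4 in seq:
            score -= 0.25
    return score

candidates = ["GACGTTAGCCGTAACGTGCA",
              "AAAATTTTAAAATTTTAAAA",
              "GCGCGCGCGCGCGCGCGCGC"]
ranked = sorted(candidates, key=score_guide, reverse=True)
print(ranked[0])    # the balanced, run-free guide ranks first
```

A real predictor would be fit to the measured cleavage efficiencies from the screen rather than hand-written, but the interface — features in, ranked guides out — is the same.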

“These papers together demonstrate the extraordinary power and versatility of the CRISPR-Cas9 system as a tool for genomewide discovery of the mechanisms underlying mammalian biology,” Lander says. “And we are just at the beginning: We’re still uncovering the capabilities of this system and its many applications.”

Efficient alternative

The researchers say that the CRISPR approach could offer a more efficient and reliable alternative to RNA interference (RNAi), which is currently the most widely used method for studying gene function. RNAi works by delivering short hairpin RNA (shRNA) strands that trigger the destruction of messenger RNA (mRNA), which carries DNA’s instructions to the rest of the cell.

The drawback to RNAi is that it targets mRNA rather than DNA, so it can never achieve 100 percent elimination of the target gene’s protein. “CRISPR can completely deplete a given protein in a cell, whereas shRNA will reduce the levels but it will never reach complete depletion,” Zhang says.

Michael Elowitz, a professor of biology, bioengineering, and applied physics at the California Institute of Technology, says the demonstration of the new technique is “an astonishing achievement.”

“Being able to do things on this enormous scale, at high accuracy, is going to revolutionize biology, because for the first time we can start to contemplate the kinds of comprehensive and complex genetic manipulations of cells that are necessary to really understand how complex genetic circuits work,” says Elowitz, who was not involved in the research.

In future studies, the researchers plan to conduct genomewide screens of cells that have become cancerous through the loss of tumor suppressor genes such as BRCA1. If scientists can discover which genes are necessary for those cells to thrive, they may be able to develop drugs that are highly cancer-specific, Wang says.

This strategy could also be used to help find drugs that counterattack tumor cells that have developed resistance to existing chemotherapy drugs, by identifying genes that those cells rely on for survival.

The researchers also hope to use the CRISPR system to study the function of the vast majority of the genome that does not code for proteins. “Only 2 percent of the genome is coding. That’s what these two studies have focused on, that 2 percent, but really there’s that other 98 percent which for a long time has been like dark matter,” Sanjana says.

The research from the Lander/Sabatini group was funded by the National Institutes of Health, the National Human Genome Research Institute, the Broad Institute, and the National Science Foundation. The research from the Zhang group was supported by the NIH Director’s Pioneer Award; the NIH; the Keck, McKnight, Merkin, Vallee, Damon Runyon, Searle Scholars, Klingenstein, and Simon Foundations; Bob Metcalfe; the Klarman Family Foundation; the Simons Center for the Social Brain at MIT; and Jane Pauley.

Even when test scores go up, some cognitive abilities don’t

To evaluate school quality, states require students to take standardized tests; in many cases, passing those tests is necessary to receive a high-school diploma. These high-stakes tests have also been shown to predict students’ future educational attainment and adult employment and income.

Such tests are designed to measure the knowledge and skills that students have acquired in school — what psychologists call “crystallized intelligence.” However, schools whose students have the highest gains on test scores do not produce similar gains in “fluid intelligence” — the ability to analyze abstract problems and think logically — according to a new study from MIT neuroscientists working with education researchers at Harvard University and Brown University.

In a study of nearly 1,400 eighth-graders in the Boston public school system, the researchers found that some schools have successfully raised their students’ scores on the Massachusetts Comprehensive Assessment System (MCAS). However, those schools had almost no effect on students’ performance on tests of fluid intelligence skills, such as working memory capacity, speed of information processing, and ability to solve abstract problems.

“Our original question was this: If you have a school that’s effectively helping kids from lower socioeconomic environments by moving up their scores and improving their chances to go to college, then are those changes accompanied by gains in additional cognitive skills?” says John Gabrieli, the Grover M. Hermann Professor of Health Sciences and Technology, professor of brain and cognitive sciences, and senior author of a forthcoming Psychological Science paper describing the findings.

Instead, the researchers found that educational practices designed to raise knowledge and boost test scores do not improve fluid intelligence. “It doesn’t seem like you get these skills for free in the way that you might hope, despite learning a lot by being a good student,” says Gabrieli, who is also a member of MIT’s McGovern Institute for Brain Research.

Measuring cognition

This study grew out of a larger effort to find measures beyond standardized tests that can predict long-term success for students. “As we started that study, it struck us that there’s been surprisingly little evaluation of different kinds of cognitive abilities and how they relate to educational outcomes,” Gabrieli says.

The data for the Psychological Science study came from students attending traditional, charter, and exam schools in Boston. Some of those schools have had great success improving their students’ MCAS scores — a boost that studies have found also translates to better performance on the SAT and Advanced Placement tests.

The researchers calculated how much of the variation in MCAS scores was due to the school that students attended. Schools accounted for 24 percent of the variation in English scores and 34 percent of the variation in math. However, the schools accounted for very little of the variation in fluid cognitive skills — less than 3 percent for all three skills combined.
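The “percent of variation” figures come from a standard variance decomposition. Here is a minimal sketch with made-up scores, assuming a simple one-way breakdown of score variance into between-school and within-school parts:

```python
# Share of total score variance attributable to school membership
# (between-group sum of squares / total sum of squares). The scores
# below are invented for illustration.

def sum_of_squares(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs)

def share_explained_by_school(schools):
    """schools: list of score lists, one per school."""
    all_scores = [s for school in schools for s in school]
    grand_mean = sum(all_scores) / len(all_scores)
    between = sum(len(school) * (sum(school) / len(school) - grand_mean) ** 2
                  for school in schools)
    return between / sum_of_squares(all_scores)

# Hypothetical MCAS-like scores for three schools whose means differ widely:
schools = [[80, 82, 84], [70, 71, 72], [90, 92, 94]]
share = share_explained_by_school(schools)  # most variance is between schools
```

In the study’s data this share was 24 percent for English and 34 percent for math, but under 3 percent for the fluid-skill measures.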

In one example of a test of fluid reasoning, students were asked to choose which of six pictures completed the missing pieces of a puzzle — a task requiring integration of information such as shape, pattern, and orientation.

“It’s not always clear what dimensions you have to pay attention to to get the problem correct. That’s why we call it fluid, because it’s the application of reasoning skills in novel contexts,” says Amy Finn, an MIT postdoc and lead author of the paper.

Even stronger evidence came from a comparison of about 200 students who had entered a lottery for admittance to a handful of Boston’s oversubscribed charter schools, many of which achieve strong improvement in MCAS scores. The researchers found that students who were randomly selected to attend high-performing charter schools did significantly better on the math MCAS than those who were not chosen, but there was no corresponding increase in fluid intelligence scores.

However, the researchers say their study is not about comparing charter schools and district schools. Rather, the study showed that while schools of both types varied in their impact on test scores, they did not vary in their impact on fluid cognitive skills.

“What’s nice about this study is it seems to narrow down the possibilities of what educational interventions are achieving,” says Daniel Willingham, a professor of psychology at the University of Virginia who was not part of the research team. “We’re usually primarily concerned with outcomes in schools, but the underlying mechanisms are also important.”

The researchers plan to continue tracking these students, who are now in 10th grade, to see how their academic performance and other life outcomes evolve. They have also begun to participate in a new study of high school seniors to track how their standardized test scores and cognitive abilities influence their rates of college attendance and graduation.

Implications for education

Gabrieli notes that the study should not be interpreted as critical of schools that are improving their students’ MCAS scores. “It’s valuable to push up the crystallized abilities, because if you can do more math, if you can read a paragraph and answer comprehension questions, all those things are positive,” he says.

He hopes that the findings will encourage educational policymakers to consider adding practices that enhance cognitive skills. Although many studies have shown that students’ fluid cognitive skills predict their academic performance, such skills are seldom explicitly taught.

“Schools can improve crystallized abilities, and now it might be a priority to see if there are some methods for enhancing the fluid ones as well,” Gabrieli says.

Some studies have found that educational programs that focus on improving memory, attention, executive function, and inductive reasoning can boost fluid intelligence, but there is still much disagreement over what programs are consistently effective.

The research was a collaboration with the Center for Education Policy Research at Harvard University, Transforming Education, and Brown University, and was funded by the Bill and Melinda Gates Foundation and the National Institutes of Health.

Brain balances learning new skills, retaining old skills

To learn new motor skills, the brain must be plastic: able to rapidly change the strengths of connections between neurons, forming new patterns that accomplish a particular task. However, if the brain were too plastic, previously learned skills would be lost too easily.

A new computational model developed by MIT neuroscientists explains how the brain maintains the balance between plasticity and stability, and how it can learn very similar tasks without interference between them.

The key, the researchers say, is that neurons are constantly changing their connections with other neurons. However, not all of the changes are functionally relevant — they simply allow the brain to explore many possible ways to execute a certain skill, such as a new tennis stroke.

“Your brain is always trying to find the configurations that balance everything so you can do two tasks, or three tasks, or however many you’re learning,” says Robert Ajemian, a research scientist in MIT’s McGovern Institute for Brain Research and lead author of a paper describing the findings in the Proceedings of the National Academy of Sciences the week of Dec. 9. “There are many ways to solve a task, and you’re exploring all the different ways.”

As the brain explores different solutions, neurons can become specialized for specific tasks, according to this theory.

Noisy circuits

As the brain learns a new motor skill, neurons form circuits that can produce the desired output — a command that will activate the body’s muscles to perform a task such as swinging a tennis racket. Perfection is usually not achieved on the first try, so feedback from each effort helps the brain to find better solutions.

This works well for learning one skill, but complications arise when the brain is trying to learn many different skills at once. Because the same distributed network controls related motor tasks, new modifications to existing patterns can interfere with previously learned skills.

“This is particularly tricky when you’re learning very similar things,” such as two different tennis strokes, says Institute Professor Emilio Bizzi, the paper’s senior author and a member of the McGovern Institute.

The Bizzi lab shows how the brain utilizes the operating characteristics of neurons to form sensorimotor memories in a way that differs profoundly from computer memory.

In a serial network such as a computer chip, this would be no problem — instructions for each task would be stored in a different location on the chip. However, the brain is not organized like a computer chip. Instead, it is massively parallel and highly connected — each neuron connects to, on average, about 10,000 other neurons.

That connectivity offers an advantage, however, because it allows the brain to test out many possible solutions for achieving combinations of tasks. The constant changes in these connections, which the researchers call hyperplasticity, are balanced by another inherent trait of neurons: a very low signal-to-noise ratio, meaning that they receive about as much useless information as useful input from their neighbors.

Most models of neural activity don’t include noise, but the MIT team says noise is a critical element of the brain’s learning ability. “Most people don’t want to deal with noise because it’s a nuisance,” Ajemian says. “We set out to try to determine if noise can be used in a beneficial way, and we found that it allows the brain to explore many solutions, but it can only be utilized if the network is hyperplastic.”
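The interaction the researchers describe can be caricatured in a few lines: treat large random weight changes as the hyperplastic “proposals,” and keep only those that feedback says do not hurt performance on either of two similar tasks. This is a hypothetical toy search, not the paper’s model:

```python
import random

random.seed(0)

# Two very similar input-output tasks sharing one pair of connection weights.
TASKS = [([1.0, 0.0], 1.0), ([1.0, 0.2], -1.0)]

def error(w):
    """Total squared error of weights w across both tasks."""
    return sum((w[0] * x[0] + w[1] * x[1] - t) ** 2 for x, t in TASKS)

w = [0.0, 0.0]
for _ in range(20000):
    # Hyperplastic, noisy proposal: perturb every weight at random...
    candidate = [wi + random.gauss(0, 0.5) for wi in w]
    # ...and let feedback retain only changes that don't worsen performance.
    if error(candidate) <= error(w):
        w = candidate
# The error ends up small: random exploration finds one weight
# configuration that handles both tasks at once.
```

Without the acceptance check, the weights would drift and overwrite any solution; without the noise, there would be no proposals to explore — which is the balance the model describes.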

This model helps to explain how the brain can learn new things without unlearning previously acquired skills, says Ferdinando Mussa-Ivaldi, a professor of physiology at Northwestern University.

“What the paper shows is that, counterintuitively, if you have neural networks and they have a high level of random noise, that actually helps instead of hindering the stability problem,” says Mussa-Ivaldi, who was not part of the research team.

Without noise, the brain’s hyperplasticity would overwrite existing memories too easily. Conversely, low plasticity would not allow any new skills to be learned, because the tiny changes in connectivity would be drowned out by all of the inherent noise.

The model is supported by anatomical evidence showing that neurons exhibit a great deal of plasticity even when learning is not taking place, as measured by the growth and formation of connections of dendrites — the tiny extensions that neurons use to communicate with each other.

Like riding a bike

The constantly changing connections explain why skills can be forgotten unless they are practiced often, especially if they overlap with other routinely performed tasks.

“That’s why an expert tennis player has to warm up for an hour before a match,” Ajemian says. The warm-up is not for their muscles; instead, the players need to recalibrate the neural networks, stored in the brain’s motor cortex, that control their different tennis strokes.

However, skills such as riding a bicycle, which is not very similar to other common skills, are retained more easily. “Once you’ve learned something, if it doesn’t overlap or intersect with other skills, you will forget it but so slowly that it’s essentially permanent,” Ajemian says.

The researchers are now investigating whether this type of model could also explain how the brain forms memories of events, as well as motor skills.

The research was funded by the National Science Foundation.

McGovern neuroscientists discover new role for ‘hunger hormone’

About a dozen years ago, scientists discovered that a hormone called ghrelin enhances appetite. Dubbed the “hunger hormone,” ghrelin was quickly targeted by drug companies seeking treatments for obesity — none of which have yet panned out.

MIT neuroscientists have now discovered that ghrelin’s role goes far beyond controlling hunger. The researchers found that ghrelin released during chronic stress makes the brain more vulnerable to traumatic events, suggesting that it may predispose people to posttraumatic stress disorder (PTSD).

Drugs that reduce ghrelin levels, originally developed to try to combat obesity, could help protect people who are at high risk for PTSD, such as soldiers serving in war, says Ki Goosens, an assistant professor of brain and cognitive sciences at MIT, and senior author of a paper describing the findings in the Oct. 15 online edition of Molecular Psychiatry.

“Perhaps we could give people who are going to be deployed into an active combat zone a ghrelin vaccine before they go, so they will have a lower incidence of PTSD. That’s exciting because right now there’s nothing given to people to prevent PTSD,” says Goosens, who is also a member of MIT’s McGovern Institute for Brain Research.

Lead author of the paper is Retsina Meyer, a recent MIT PhD recipient. Other authors are McGovern postdoc Anthony Burgos-Robles, graduate student Elizabeth Liu, and McGovern research scientist Susana Correia.

Stress and fear

Stress is a useful response to dangerous situations because it provokes action to escape or fight back. However, when stress is chronic, it can produce anxiety, depression, and other mental illnesses.

At MIT, Goosens discovered that one brain structure that is especially critical for generating fear, the amygdala, has a special response to chronic stress. The amygdala produces large amounts of growth hormone during stress, a change that seems not to occur in other brain regions.

In the new paper, Goosens and her colleagues found that the release of the growth hormone in the amygdala is controlled by ghrelin, which is produced primarily in the stomach and travels throughout the body, including the brain.

Ghrelin levels are elevated by chronic stress. In humans, this might be produced by factors such as unemployment, bullying, or loss of a family member. Ghrelin stimulates the secretion of growth hormone from the brain; the effects of growth hormone from the pituitary gland in organs such as the liver and bones have been extensively studied. However, the role of growth hormone in the brain, particularly the amygdala, is not well known.

The researchers found that when rats were given either a drug to stimulate the ghrelin receptor or gene therapy to overexpress growth hormone over a prolonged period, they became much more susceptible to fear than normal rats. Fear was measured by training all of the rats to fear an innocuous, novel tone. While all rats learned to fear the tone, the rats with prolonged increased activity of the ghrelin receptor or overexpression of growth hormone were the most fearful, as measured by how long they froze after hearing the tone. Blocking the cell receptors that interact with ghrelin or growth hormone reduced fear to normal levels in chronically stressed rats.

When rats were exposed to chronic stress over a prolonged period, their circulating ghrelin and amygdalar growth hormone levels also went up, and fearful memories were encoded more strongly. This is similar to what the researchers believe happens in people who suffer from PTSD.

“When you have people with a history of stress who encounter a traumatic event, they are more likely to develop PTSD because that history of stress has altered something about their biology. They have an excessively strong memory of the traumatic event, and that is one of the things that drives their PTSD symptoms,” Goosens says.

New drugs, new targets

Over the last century, scientists have described the hypothalamic-pituitary-adrenal (HPA) axis, which produces adrenaline, cortisol (corticosterone in rats), and other hormones that stimulate “fight or flight” behavior. Since then, stress research has focused almost exclusively on the HPA axis.

After discovering ghrelin’s role in stress, the MIT researchers suspected that ghrelin was also linked to the HPA axis. However, they were surprised to find that when the rats’ adrenal glands — the source of corticosterone, adrenaline, and noradrenaline — were removed, the animals still became overly fearful when chronically stressed. The authors also showed that repeated ghrelin-receptor stimulation did not trigger release of HPA hormones, and that blockade of the ghrelin receptor did not blunt release of HPA stress hormones. Therefore, the ghrelin-initiated stress pathway appears to act independently of the HPA axis. “That’s important because it gives us a whole new target for stress therapies,” Goosens says.

Pharmaceutical companies have developed at least a dozen possible drug compounds that interfere with ghrelin. Many of these drugs have been found safe for humans, but have not been shown to help people lose weight. The researchers believe these drugs could offer a way to vaccinate people entering stressful situations, or even to treat people who already suffer from PTSD, because ghrelin levels remain high long after the chronic stress ends.

PTSD affects about 7.7 million American adults, including soldiers and victims of crimes, accidents, or natural disasters. About 40 to 50 percent of patients recover within five years, Meyer says, but the rest never get better.

The researchers hypothesize that the persistent elevation of ghrelin following trauma exposure could be one of the factors that maintain PTSD. “So, could you immediately reverse PTSD? Maybe not, but maybe the ghrelin could get damped down and these people could go through cognitive behavioral therapy, and over time, maybe we can reverse it,” Meyer says.

Working with researchers at Massachusetts General Hospital, Goosens’ lab is now planning to study ghrelin levels in human patients suffering from anxiety and fear disorders. They are also planning a clinical trial of a drug that blocks ghrelin to see if it can prevent relapse of depression.

The research was funded by the U.S. Army Research Office, the Defense Advanced Research Projects Agency, and the National Institute of Mental Health.

Brain scans may help diagnose dyslexia

About 10 percent of the U.S. population suffers from dyslexia, a condition that makes learning to read difficult. Dyslexia is usually diagnosed around second grade, but the results of a new study from MIT could help identify those children before they even begin reading, so they can be given extra help earlier.

The study, done with researchers at Boston Children’s Hospital, found a correlation between poor pre-reading skills in kindergartners and the size of a brain structure that connects two language-processing areas.

Previous studies have shown that in adults with poor reading skills, this structure, known as the arcuate fasciculus, is smaller and less organized than in adults who read normally. However, it was unknown whether these differences cause reading difficulties or result from a lack of reading experience.

“We were very interested in looking at children prior to reading instruction and whether you would see these kinds of differences,” says John Gabrieli, the Grover M. Hermann Professor of Health Sciences and Technology, professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research.

Gabrieli and Nadine Gaab, an assistant professor of pediatrics at Boston Children’s Hospital, are the senior authors of a paper describing the results in the Aug. 14 issue of the Journal of Neuroscience. Lead authors of the paper are MIT postdocs Zeynep Saygin and Elizabeth Norton.

The path to reading

The new study is part of a larger effort involving approximately 1,000 children at schools throughout Massachusetts and Rhode Island. At the beginning of kindergarten, children whose parents give permission to participate are assessed for pre-reading skills, such as being able to put words together from sounds.

“From that, we’re able to provide — at the beginning of kindergarten — a snapshot of how that child’s pre-reading abilities look relative to others in their classroom or other peers, which is a real benefit to the child’s parents and teachers,” Norton says.

The researchers then invite a subset of the children to come to MIT for brain imaging. The Journal of Neuroscience study included 40 children who had their brains scanned using a technique known as diffusion-weighted imaging, which is based on magnetic resonance imaging (MRI).

This type of imaging reveals the size and organization of the brain’s white matter — bundles of nerves that carry information between brain regions. The researchers focused on three white-matter tracts associated with reading skill, all located on the left side of the brain: the arcuate fasciculus, the inferior longitudinal fasciculus (ILF) and the superior longitudinal fasciculus (SLF).

When comparing the brain scans and the results of several different types of pre-reading tests, the researchers found a correlation between the size and organization of the arcuate fasciculus and performance on tests of phonological awareness — the ability to identify and manipulate the sounds of language.

Phonological awareness can be measured by testing how well children can segment sounds, identify them in isolation, and rearrange them to make new words. Strong phonological skills have previously been linked with ease of learning to read. “The first step in reading is to match the printed letters with the sounds of letters that you know exist in the world,” Norton says.

The researchers also tested the children on two other skills that have been shown to predict reading ability — rapid naming, which is the ability to name a series of familiar objects as quickly as you can, and the ability to name letters. They did not find any correlation between these skills and the size or organization of the white-matter structures scanned in this study.
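The analyses behind these findings are correlations between a brain measure and a behavioral score. Here is a minimal sketch with invented numbers, computing a Pearson correlation:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: a tract measure against two behavioral scores.
tract_volume = [4.1, 4.5, 5.0, 5.2, 5.9]
phonological = [88, 93, 97, 101, 110]   # tracks the tract measure
rapid_naming = [52, 49, 55, 47, 51]     # no systematic relation

r_phono = pearson_r(tract_volume, phonological)   # strongly positive
r_naming = pearson_r(tract_volume, rapid_naming)  # near zero
```

A pattern like this — positive for phonological awareness but flat for rapid naming — is the shape of the result the researchers report.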

Early intervention

The left arcuate fasciculus connects Broca’s area, which is involved in speech production, and Wernicke’s area, which is involved in understanding written and spoken language. A larger and more organized arcuate fasciculus could aid in communication between those two regions, the researchers say.

Gabrieli points out that the structural differences found in the study don’t necessarily reflect genetic differences; environmental influences could also be involved. “At the moment when the children arrive at kindergarten, which is approximately when we scan them, we don’t know what factors lead to these brain differences,” he says.

The researchers plan to follow three waves of children as they progress to second grade and evaluate whether the brain measures they have identified predict poor reading skills.

“We don’t know yet how it plays out over time, and that’s the big question: Can we, through a combination of behavioral and brain measures, get a lot more accurate at seeing who will become a dyslexic child, with the hope that that would motivate aggressive interventions that would help these children right from the start, instead of waiting for them to fail?” Gabrieli says.

Studies have shown that, for at least some dyslexic children, extra training in phonological skills can improve their reading later on.

The research was funded by the National Institutes of Health, the Poitras Center for Affective Disorders Research, the Ellison Medical Foundation and the Halis Family Foundation.