Visualizing the Brain

Zeynep Saygin, a postdoc in Nancy Kanwisher’s lab, uses a technology known as diffusion-weighted MR imaging to reveal long-range connections in the brain.

Tracking the roots of reading ability

Researchers in the Gabrieli lab have found that differences in a key language structure can be seen even before children start learning to read.

Brain scans may help diagnose dyslexia

About 10 percent of the U.S. population suffers from dyslexia, a condition that makes learning to read difficult. Dyslexia is usually diagnosed around second grade, but the results of a new study from MIT could help identify at-risk children before they even begin reading, so they can be given extra help earlier.

The study, done with researchers at Boston Children’s Hospital, found a correlation between poor pre-reading skills in kindergartners and the size of a brain structure that connects two language-processing areas.

Previous studies have shown that in adults with poor reading skills, this structure, known as the arcuate fasciculus, is smaller and less organized than in adults who read normally. However, it was unknown whether these differences cause reading difficulties or result from a lack of reading experience.

“We were very interested in looking at children prior to reading instruction and whether you would see these kinds of differences,” says John Gabrieli, the Grover M. Hermann Professor of Health Sciences and Technology, professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research.

Gabrieli and Nadine Gaab, an assistant professor of pediatrics at Boston Children’s Hospital, are the senior authors of a paper describing the results in the Aug. 14 issue of the Journal of Neuroscience. Lead authors of the paper are MIT postdocs Zeynep Saygin and Elizabeth Norton.

The path to reading

The new study is part of a larger effort involving approximately 1,000 children at schools throughout Massachusetts and Rhode Island. At the beginning of kindergarten, children whose parents give permission to participate are assessed for pre-reading skills, such as being able to put words together from sounds.

“From that, we’re able to provide — at the beginning of kindergarten — a snapshot of how that child’s pre-reading abilities look relative to others in their classroom or other peers, which is a real benefit to the child’s parents and teachers,” Norton says.

The researchers then invite a subset of the children to come to MIT for brain imaging. The Journal of Neuroscience study included 40 children who had their brains scanned using a technique known as diffusion-weighted imaging, which is based on magnetic resonance imaging (MRI).

This type of imaging reveals the size and organization of the brain’s white matter — bundles of nerve fibers (axons) that carry information between brain regions. The researchers focused on three white-matter tracts associated with reading skill, all located on the left side of the brain: the arcuate fasciculus, the inferior longitudinal fasciculus (ILF) and the superior longitudinal fasciculus (SLF).

Comparing the brain scans with the results of several different types of pre-reading tests, the researchers found a correlation between the size and organization of the arcuate fasciculus and performance on tests of phonological awareness — the ability to identify and manipulate the sounds of language.

Phonological awareness can be measured by testing how well children can segment sounds, identify them in isolation, and rearrange them to make new words. Strong phonological skills have previously been linked with ease of learning to read. “The first step in reading is to match the printed letters with the sounds of letters that you know exist in the world,” Norton says.

The researchers also tested the children on two other skills that have been shown to predict reading ability — rapid naming, which is the ability to name a series of familiar objects as quickly as possible, and the ability to name letters. They did not find any correlation between these skills and the size or organization of the white-matter structures scanned in this study.
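
To make the analysis concrete, here is a minimal sketch of this kind of brain-behavior correlation in Python. The data file, column names, and choice of fractional anisotropy (FA) as the tract measure are illustrative assumptions, not the study's actual pipeline:

```python
# Sketch: correlate white-matter measures with pre-reading test scores
# across children. File and column names are hypothetical.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("kindergarten_cohort.csv")  # one row per child (hypothetical)

tract_measures = ["arcuate_FA", "ILF_FA", "SLF_FA"]  # organization per tract
behavior_scores = ["phonological_awareness", "rapid_naming", "letter_naming"]

# Test every tract measure against every behavioral score.
for tract in tract_measures:
    for score in behavior_scores:
        r, p = pearsonr(df[tract], df[score])
        print(f"{tract} vs {score}: r = {r:.2f}, p = {p:.3f}")
```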

Early intervention

The left arcuate fasciculus connects Broca’s area, which is involved in speech production, and Wernicke’s area, which is involved in understanding written and spoken language. A larger and more organized arcuate fasciculus could aid in communication between those two regions, the researchers say.

Gabrieli points out that the structural differences found in the study don’t necessarily reflect genetic differences; environmental influences could also be involved. “At the moment when the children arrive at kindergarten, which is approximately when we scan them, we don’t know what factors lead to these brain differences,” he says.

The researchers plan to follow three waves of children as they progress to second grade and evaluate whether the brain measures they have identified predict poor reading skills.

“We don’t know yet how it plays out over time, and that’s the big question: Can we, through a combination of behavioral and brain measures, get a lot more accurate at seeing who will become a dyslexic child, with the hope that that would motivate aggressive interventions that would help these children right from the start, instead of waiting for them to fail?” Gabrieli says.

Studies have shown that for at least some dyslexic children, extra training in phonological skills can help them improve their reading ability later on.

The research was funded by the National Institutes of Health, the Poitras Center for Affective Disorders Research, the Ellison Medical Foundation and the Halis Family Foundation.

Brains on Trial: Neuroscience and the Law

What if we could peer into the brain to determine guilt or innocence? Could advances in neuroscience help reform our criminal justice system?

We invite you to join the discussion with a distinguished group of legal and neuroscience experts who will debate these and related questions on Tuesday, September 17. Alan Alda will moderate the panel of experts, show clips from his two-part PBS special, “Brains on Trial,” and engage the audience in a Q&A session. We hope you will join us!


BRAINS ON TRIAL DISCUSSION WITH ALAN ALDA

DATE: Tuesday, September 17, 2013
TIME: 6:00 – 8:30
LOCATION: McGovern Institute for Brain Research at MIT (MIT Bldg 46, Third Floor Atrium)
QUESTIONS? brainsontrial@mit.edu or 617.324.2077


MODERATOR

Alan Alda, a seven-time Emmy Award winner, played Hawkeye Pierce on the classic television series M*A*S*H, and appeared in continuing roles on ER, The West Wing, and 30 Rock. His long-time interest in science and in promoting a greater public understanding of science led to his hosting the award-winning PBS series Scientific American Frontiers for eleven years, on which he interviewed hundreds of scientists from around the world. He has 33 Emmy nominations as actor, writer, and director, and is a Television Hall of Fame inductee. He has also appeared on the Broadway stage, where he received three Tony nominations.


PANELISTS

Robert Desimone is director of the McGovern Institute and the Doris and Don Berkey Professor in MIT’s Department of Brain and Cognitive Sciences. He served as network co-director of the MacArthur Law and Neuroscience Project from 2008 to 2010. Prior to joining the McGovern Institute in 2004, he was director of the Intramural Research Program at the National Institute of Mental Health. He is a member of the National Academy of Sciences and the American Academy of Arts and Sciences and a recipient of numerous awards, including the Troland Research Award of the National Academy of Sciences.


Joshua D. Greene is the John and Ruth Hazel Associate Professor of the Social Sciences in the Department of Psychology at Harvard University. He is an experimental psychologist, neuroscientist, and philosopher. He studies the psychology and neuroscience of morality, focusing on the interplay between emotion and reasoning in moral decision-making. In 2012 he was awarded the Stanton Prize by the Society for Philosophy and Psychology. He is the author of the forthcoming book Moral Tribes: Emotion, Reason, and the Gap Between Us and Them.


Nancy Kanwisher is the Walter A. Rosenblith Professor of Cognitive Neuroscience in the Department of Brain and Cognitive Sciences and a founding member of the McGovern Institute. She joined the MIT faculty in 1997, and prior to that was a faculty member at UCLA and Harvard University. In 1999, she received the National Academy of Sciences Troland Research Award. The Kanwisher lab uses brain imaging to study the functional organization of the human brain.


Bea Luna is a professor of psychiatry at the University of Pittsburgh School of Medicine. She is the director of the Laboratory of Neurocognitive Development, where she leads projects investigating the brain basis of typical and abnormal adolescent development of voluntary behaviors and motivation.


Stephen J. Morse is the Ferdinand Wakeman Hubbell Professor of Law, Professor of Psychology and Law in Psychiatry, and Associate Director of the Center for Neuroscience and Society at the University of Pennsylvania. A veteran of the fields of psychology and law, he was instrumental in building the MacArthur Law and Neuroscience Project, and he writes extensively on the relevance of neuroscience to law. He also serves as a member of the MacArthur Foundation Research Network on Law and Neuroscience.


This event is based on a two-part PBS special, “Brains on Trial with Alan Alda,” scheduled for broadcast on September 11 and 18 at 10 p.m.

Brains on Trial with Alan Alda takes a fictitious crime – a convenience store robbery that goes horribly wrong – and builds from it a gripping courtroom drama. As the trial unfolds, it takes us into the brains of the major participants – defendant, witnesses, jurors, judge – while Alan Alda visits the laboratories of some dozen neuroscientists exploring how brains work when they become entangled with the law. The research he discovers poses a controversial question: How might our rapidly expanding ability to peer into people’s minds and decode their thoughts and feelings affect future trials like the one we are watching? And should it?

Are we there yet?

“Are we there yet?”

As anyone who has traveled with young children knows, maintaining focus on distant goals can be a challenge. A new study from MIT suggests how the brain achieves this task, and indicates that the neurotransmitter dopamine may signal the value of long-term rewards. The findings may also explain why patients with Parkinson’s disease — in which dopamine signaling is impaired — often have difficulty in sustaining motivation to finish tasks.

The work is described this week in the journal Nature.

Previous studies have linked dopamine to rewards, and have shown that dopamine neurons show brief bursts of activity when animals receive an unexpected reward. These dopamine signals are believed to be important for reinforcement learning, the process by which an animal learns to perform actions that lead to reward.

Taking the long view

In most studies, that reward has been delivered within a few seconds. In real life, though, gratification is not always immediate: Animals must often travel in search of food, and must maintain motivation for a distant goal while also responding to more immediate cues. The same is true for humans: A driver on a long road trip must remain focused on reaching a final destination while also reacting to traffic, stopping for snacks, and entertaining children in the back seat.

The MIT team, led by Institute Professor Ann Graybiel — who is also an investigator at MIT’s McGovern Institute for Brain Research — decided to study how dopamine levels change during a maze task in which the reward is delayed. The researchers trained rats to navigate a maze to reach a reward: during each trial, a rat heard a tone instructing it to turn either right or left at an intersection to find a chocolate milk reward.

Rather than simply measuring the activity of dopamine-containing neurons, the MIT researchers wanted to measure how much dopamine was released in the striatum, a brain structure known to be important in reinforcement learning. They teamed up with Paul Phillips of the University of Washington, who has developed a technology called fast-scan cyclic voltammetry (FSCV) in which tiny, implanted, carbon-fiber electrodes allow continuous measurements of dopamine concentration based on its electrochemical fingerprint.

“We adapted the FSCV method so that we could measure dopamine at up to four different sites in the brain simultaneously, as animals moved freely through the maze,” explains first author Mark Howe, a former graduate student with Graybiel who is now a postdoc in the Department of Neurobiology at Northwestern University. “Each probe measures the concentration of extracellular dopamine within a tiny volume of brain tissue, and probably reflects the activity of thousands of nerve terminals.”
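
As a rough illustration of how a concentration estimate can be pulled out of such recordings, one simple approach is to project each background-subtracted voltammogram onto a calibration template for dopamine. The template shape, sensitivity value, and array sizes below are invented for the sketch and are not the study's analysis code:

```python
# Sketch: estimate dopamine concentration from FSCV scans by projecting
# each voltammogram onto a dopamine calibration template (assumed data).
import numpy as np

def estimate_dopamine(voltammograms, template, sensitivity_nA_per_uM):
    """voltammograms: (n_scans, n_voltage_steps) background-subtracted currents (nA).
    template: unit-norm dopamine current profile from electrode calibration.
    sensitivity_nA_per_uM: calibration factor for this electrode."""
    dopamine_current = voltammograms @ template        # nA attributable to dopamine
    return dopamine_current / sensitivity_nA_per_uM    # concentration in uM

# Toy usage with simulated scans that ramp up over a "run".
rng = np.random.default_rng(0)
template = np.zeros(100)
template[30:40] = 1.0                                  # oxidation peak location (invented)
template /= np.linalg.norm(template)
scans = np.outer(np.linspace(0, 2, 200), template) + 0.02 * rng.normal(size=(200, 100))
concentration = estimate_dopamine(scans, template, sensitivity_nA_per_uM=1.0)
```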

Gradual increase in dopamine

From previous work, the researchers expected that they might see pulses of dopamine released at different times in the trial, “but in fact we found something much more surprising,” Graybiel says: The level of dopamine increased steadily throughout each trial, peaking as the animal approached its goal — as if in anticipation of a reward.

The rats’ behavior varied from trial to trial — some runs were faster than others, and sometimes the animals would stop briefly — but the dopamine signal did not vary with running speed or trial duration. Nor did it depend on the probability of getting a reward, something that had been suggested by previous studies.

“Instead, the dopamine signal seems to reflect how far away the rat is from its goal,” Graybiel explains. “The closer it gets, the stronger the signal becomes.” The researchers also found that the size of the signal was related to the size of the expected reward: When rats were trained to anticipate a larger gulp of chocolate milk, the dopamine signal rose more steeply to a higher final concentration.

In some trials, the T-shaped maze was extended to a more complex shape, requiring animals to run farther and to make extra turns before reaching a reward. During these trials, the dopamine signal ramped up more gradually, eventually reaching the same level as in the shorter maze. “It’s as if the animal were adjusting its expectations, knowing that it had further to go,” Graybiel says.
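
One way to picture these findings is to treat the ramp as the temporally discounted value of the upcoming reward: with a fixed discount per step, a longer route ramps more gradually but ends at the same peak, and a bigger reward produces a steeper, higher ramp. The sketch below uses an illustrative discount factor, not a fitted parameter:

```python
# Sketch: a discounted-value reading of the dopamine ramp.
import numpy as np

def value_ramp(n_steps, reward=1.0, gamma=0.9):
    """Discounted value of the goal at each position along the route."""
    steps_remaining = np.arange(n_steps, 0, -1)
    return reward * gamma ** steps_remaining

short_ramp = value_ramp(10)              # rises steeply
long_ramp = value_ramp(20)               # rises more gradually...
print(short_ramp[-1] == long_ramp[-1])   # ...but reaches the same final level (True)
```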

The traces represent brain activity in rats as they navigate through different mazes to receive a chocolate milk reward.

An ‘internal guidance system’

“This means that dopamine levels could be used to help an animal make choices on the way to the goal and to estimate the distance to the goal,” says Terrence Sejnowski of the Salk Institute, a computational neuroscientist who is familiar with the findings but who was not involved with the study. “This ‘internal guidance system’ could also be useful for humans, who also have to make choices along the way to what may be a distant goal.”

One question that Graybiel hopes to examine in future research is how the signal arises within the brain. Rats and other animals form cognitive maps of their spatial environment, with so-called “place cells” that are active when the animal is in a specific location. “As our rats run the maze repeatedly,” she says, “we suspect they learn to associate each point in the maze with its distance from the reward that they experienced on previous runs.”

As for the relevance of this research to humans, Graybiel says, “I’d be shocked if something similar were not happening in our own brains.” It’s known that Parkinson’s patients, in whom dopamine signaling is impaired, often appear to be apathetic, and have difficulty in sustaining motivation to complete a long task. “Maybe that’s because they can’t produce this slow ramping dopamine signal,” Graybiel says.

Patrick Tierney at MIT and Stefan Sandberg at the University of Washington also contributed to the study, which was funded by the National Institutes of Health, the National Parkinson Foundation, the CHDI Foundation, the Sydney family and Mark Gorenberg.

Photo Album: MRI Installation

Photos: Justin Knight and Doreen Reuchsel

McGovern Institute gets new brain scanner

After months of planning and construction, we are delighted to report that we have installed a new 3-tesla MRI scanner for human neuroimaging. The $2M scanner, a Siemens Magnetom Trio, was delivered to the Martinos Imaging Center on July 25, 2013.

The core of the scanner is a large electromagnet, weighing around 13 tons and containing superconducting coils that are chilled in liquid helium to within a few degrees of absolute zero. It is housed in a custom-built room, with a specially reinforced floor to support the scanner’s weight, and with some 5000 steel panels to shield the system from RF interference.

The acquisition of the new scanner was made possible by Bruce Dayton, Jeffrey and Nancy Halis, the Simons Foundation, and an anonymous donor.  The scanner is expected to be fully operational by the fall, and will be used for a wide range of studies on brain function, in both children and adults.


Genome editing becomes more accurate

Earlier this year, MIT researchers developed a way to easily and efficiently edit the genomes of living cells. Now, the researchers have discovered key factors that influence the accuracy of the system, an important step toward making it safer for potential use in humans, says Feng Zhang, leader of the research team.

With this technology, scientists can deliver or disrupt multiple genes at once, raising the possibility of treating human disease by targeting malfunctioning genes. To help with that process, Zhang’s team, led by graduate students Patrick Hsu and David Scott, has now created a computer model that can identify the best genetic sequences for targeting a given gene.

“Using this, you will be able to identify ways to target almost every gene. Within every gene, there are hundreds of locations that can be edited, and this will help researchers narrow down which ones are better than others,” says Zhang, an assistant professor of brain and cognitive sciences at MIT and senior author of a paper describing the new model, appearing in the July 21 online edition of Nature Biotechnology.

The genome-editing system, known as CRISPR, exploits a protein-RNA complex that bacteria use to defend themselves from infection. The complex includes short RNA sequences bound to an enzyme called Cas9, which slices DNA. These RNA sequences are designed to target specific locations in the genome; when they encounter a match, Cas9 cuts the DNA.

This approach can be used either to disrupt the function of a gene or to replace it with a new one. To replace the gene, the researchers must also add a DNA template for the new gene, which would be copied into the genome after the DNA is cut.

This technique offers a much faster and more efficient way to create transgenic mice, which are often used to study human disease. Current methods for creating such mice require adding small pieces of DNA to mouse embryonic cells. However, the process is inefficient and time-consuming.

With CRISPR, many genes are edited at once, and the entire process can be done in three weeks, says Zhang, who is the W. M. Keck Career Development Professor in Biomedical Engineering at MIT and a core member of the Broad Institute and MIT’s McGovern Institute for Brain Research. The system can also be used to create genetically modified cell lines for lab experiments much more efficiently.

Fine-tuning

Since Zhang and his colleagues first described the original system in January, more than 2,000 labs around the world have started using it to generate their own genetically modified cell lines or animals. In the new paper, the researchers describe improvements in both the efficiency and accuracy of gene editing.

To modify genes using this system, an RNA “guide strand” complementary to a 20-base-pair sequence of targeted DNA is delivered to cells. After the RNA strand binds to the target DNA, it recruits the Cas9 enzyme, which snips the DNA in the correct location.

The researchers discovered they could minimize the chances of the Cas9-RNA complex accidentally cleaving the wrong site by making sure the target sequence is not too similar to other sequences found in the genome. They found that if an off-target sequence differs from the target sequence by three or fewer base pairs, the editing complex will likely also cleave that sequence, which could have deleterious effects for the cell.

The team’s new computer model can search any sequence within the mouse or human genome and identify 20-base-pair sequences within that region that have the least overlap with sequences elsewhere in the genome.
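
A toy version of that search, using the three-mismatch rule reported above, might look like the following. The scoring is deliberately simplified (a real search would also require a PAM-adjacent site and an indexed genome rather than a brute-force scan), so this is an illustration of the idea, not the published model:

```python
# Sketch: rank candidate 20-bp targets by how few near-matches
# (within 3 mismatches) they have elsewhere in a genome sequence.

def hamming(a, b):
    """Number of mismatched bases between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

def near_matches(candidate, genome, max_mismatches=3):
    """Count genome sites within max_mismatches of a candidate 20-mer."""
    k = len(candidate)
    return sum(
        hamming(candidate, genome[i:i + k]) <= max_mismatches
        for i in range(len(genome) - k + 1)
    )

def rank_targets(candidates, genome):
    """Prefer candidates with the fewest near-matches in the genome.

    Each candidate matches its own on-target site, so every count
    includes at least 1; only the ordering matters here.
    """
    return sorted(candidates, key=lambda c: near_matches(c, genome))
```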

Another way to improve targeting specificity is by adjusting the dosage of the guide RNA, the researchers found. In general, decreasing the amount of RNA delivered minimizes damage to off-target sites but has a much smaller effect on cleavage of the target sequence. For each sequence, the “sweet spot” with the best balance of high on-target effects and low off-target effects can be calculated, Zhang says.
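
That trade-off can be pictured with a toy model. The sketch below assumes, purely for illustration, that on- and off-target cleavage both saturate with guide-RNA dose but that the on-target site saturates at a much lower dose; the "sweet spot" is then the dose with the largest gap between the two curves. None of the curve shapes or constants come from the paper:

```python
# Toy model of the guide-RNA dosage sweet spot (assumed curves).
import numpy as np

doses = np.linspace(0.01, 10.0, 500)      # guide-RNA dose, arbitrary units
on_target = doses / (doses + 0.1)         # saturates at low dose (assumption)
off_target = doses / (doses + 3.0)        # keeps climbing with dose (assumption)

specificity = on_target - off_target      # one possible sweet-spot criterion
best_dose = doses[np.argmax(specificity)]
print(f"toy sweet spot: {best_dose:.2f}")
```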

“The real value of this paper is that it does a very comprehensive and systematic analysis to understand the causes of off-target effects. That analysis suggests a lot of possible ways to eliminate or reduce off-target effects,” says Michael Terns, a professor of biochemistry and molecular biology at the University of Georgia who was not part of the research team.

Zhang and his colleagues also optimized the structure of the RNA guide needed for efficient activation of Cas9. In the January paper describing the original system, the researchers found that two separate RNA strands working together — one that binds to the target DNA and another that recruits Cas9 — produced better results than when those two strands were fused together before delivery. However, in experiments reported in the new paper, the researchers found that they could boost the efficiency of the fused RNA strand by making the strand longer. These longer RNA guide strands include a hairpin structure that may stabilize the molecules and help them interact with Cas9, Zhang says.

Zhang’s team is now working on further improving the specificity of the system, and plans to start generating cell lines and animals that could be used to study how the brain develops and builds neural circuits. By disrupting genes known to be involved in those processes, they can learn more about how they work and how they are impaired in neurological disease.

The research was funded by a National Institutes of Health Director’s Pioneer Award; an NIH Transformative R01 grant; the Keck, McKnight, Damon Runyon, Searle Scholars, Klingenstein and Simons foundations; Bob Metcalfe; and Jane Pauley.

Controlling genes with light

Although human cells have an estimated 20,000 genes, only a fraction of those are turned on at any given time, depending on the cell’s needs — which can change by the minute or hour. To find out what those genes are doing, researchers need tools that can manipulate their status on similarly short timescales.

That is now possible, thanks to a new technology developed at MIT and the Broad Institute that can rapidly start or halt the expression of any gene of interest simply by shining light on the cells.

The work is based on a technique known as optogenetics, which uses proteins that change their function in response to light. In this case, the researchers adapted the light-sensitive proteins to either stimulate or suppress the expression of a specific target gene almost immediately after the light comes on.

“Cells have very dynamic gene expression happening on a fairly short timescale, but so far the methods that are used to perturb gene expression don’t even get close to those dynamics. To understand the functional impact of those gene-expression changes better, we have to be able to match the naturally occurring dynamics as closely as possible,” says Silvana Konermann, an MIT graduate student in brain and cognitive sciences.

The ability to precisely control the timing and duration of gene expression should make it much easier to figure out the roles of particular genes, especially those involved in learning and memory. The new system can also be used to study epigenetic modifications — chemical alterations of the proteins that surround DNA — which are also believed to play an important role in learning and memory.

Konermann and Mark Brigham, a graduate student at Harvard University, are the lead authors of a paper describing the technique in the July 22 online edition of Nature. The paper’s senior author is Feng Zhang, the W. M. Keck Career Development Professor in Biomedical Engineering at MIT and a core member of the Broad Institute and MIT’s McGovern Institute for Brain Research.

Shining light on genes

The new system consists of several components that interact with each other to control the copying of DNA into messenger RNA (mRNA), which carries genetic instructions to the rest of the cell. The first is a DNA-binding protein known as a transcription activator-like effector (TALE). TALEs are modular proteins that can be strung together in a customized way to bind any DNA sequence.

Fused to the TALE protein is a light-sensitive protein called CRY2 that is naturally found in Arabidopsis thaliana, a small flowering plant. When light hits CRY2, it changes shape and binds to its natural partner protein, known as CIB1. To take advantage of this, the researchers engineered a form of CIB1 that is fused to another protein that can either activate or suppress gene copying.

After the genes for these components are delivered to a cell, the TALE protein finds its target DNA and wraps around it. When light shines on the cells, the CRY2 protein binds to CIB1, which is floating in the cell. CIB1 brings along a gene activator, which initiates transcription, or the copying of DNA into mRNA. Alternatively, CIB1 could carry a repressor, which shuts off the process.

A single pulse of light is enough to stimulate the protein binding and initiate DNA copying.

The researchers found that pulses of light delivered every minute or so are the most effective way to achieve continuous transcription for the desired period of time. Within 30 minutes of light delivery, the researchers detected an uptick in the amount of mRNA being produced from the target gene. Once the pulses stop, the mRNA starts to degrade within about 30 minutes.
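
Those dynamics can be reproduced qualitatively with a minimal simulation: constant mRNA production while the light pulses continue, plus first-order decay with a roughly 30-minute half-life. The rate constants below are illustrative guesses, not measured values:

```python
# Sketch: light-driven transcription with first-order mRNA decay.
import numpy as np

dt = 0.1                            # time step, minutes
t = np.arange(0.0, 240.0, dt)       # four hours of simulated time
light_on = t < 120                  # pulse train approximated as 2 h of stimulation
k_txn = 1.0                         # mRNA production while light-driven (a.u./min)
k_deg = np.log(2) / 30.0            # first-order decay, ~30-minute half-life

mrna = np.zeros_like(t)
for i in range(1, len(t)):
    production = k_txn if light_on[i] else 0.0
    mrna[i] = mrna[i - 1] + dt * (production - k_deg * mrna[i - 1])
# mRNA climbs within ~30 minutes of light onset and decays after the pulses stop.
```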

In this study, the researchers tried targeting nearly 30 different genes, both in neurons grown in the lab and in living animals. Depending on the gene targeted and how much it is normally expressed, the researchers were able to boost transcription by a factor of two to 200.

Epigenetic modifications

An important element of gene-expression control is epigenetic modification. One major class of epigenetic effectors is chemical modification of the histone proteins around which chromosomal DNA is wound; these modifications help control access to the underlying genes. The researchers showed that they can also alter these epigenetic modifications by fusing TALE proteins with histone modifiers.

Epigenetic modifications are thought to play a key role in learning and forming memories, but this has not been very well explored because there are no good ways to disrupt the modifications, short of blocking histone modification of the entire genome. The new technique offers a much more precise way to interfere with modifications of individual genes.

“We want to allow people to prove the causal role of specific epigenetic modifications in the genome,” Zhang says.

So far, the researchers have demonstrated that some of the histone effector domains can be tethered to light-sensitive proteins; they are now trying to expand the types of histone modifiers they can incorporate into the system.

“It would be really useful to expand the number of epigenetic marks that we can control. At the moment we have a successful set of histone modifications, but there are a good deal more of them that we and others are going to want to be able to use this technology for,” Brigham says.

The research was funded by a Hubert Schoemaker Fellowship; a National Institutes of Health Transformative R01 Award; an NIH Director’s Pioneer Award; the Keck, McKnight, Vallee, Damon Runyon, Searle Scholars, Klingenstein and Simons foundations; and Bob Metcalfe and Jane Pauley.

Breaking habits before they start

Our daily routines can become so ingrained that we perform them automatically, such as taking the same route to work every day. Some behaviors, such as smoking or biting our fingernails, become so habitual that we can’t stop even if we want to.

Although breaking habits can be hard, MIT neuroscientists have now shown that habits can be prevented from taking root in the first place, at least in rats learning to run a maze to earn a reward. The researchers first demonstrated that activity in two distinct brain regions is necessary for habits to crystallize. They were then able to block habits from forming by interfering with activity in one of those regions — the infralimbic (IL) cortex, which is located in the prefrontal cortex.

The MIT researchers, led by Institute Professor Ann Graybiel, used a technique called optogenetics to block activity in the IL cortex, allowing them to turn its cells off with light. When the cells were turned off during every maze training run, the rats still learned to run the maze correctly, but when the reward was made to taste bad, they stopped running it, a sign that no habit had formed. Had a habit taken hold, they would have kept going back out of habit.

“It’s usually so difficult to break a habit,” Graybiel says. “It’s also difficult to have a habit not form when you get a reward for what you’re doing. But with this manipulation, it’s absolutely easy. You just turn the light on, and bingo.”

Graybiel, a member of MIT’s McGovern Institute for Brain Research, is the senior author of a paper describing the findings in the June 27 issue of the journal Neuron. Kyle Smith, a former MIT postdoc who is now an assistant professor at Dartmouth College, is the paper’s lead author.

Patterns of habitual behavior


Previous studies of how habits are formed and controlled have implicated both the IL cortex and the striatum, a part of the brain linked to addiction and repetitive behavioral problems as well as to normal functions such as decision-making, planning and response to reward. It is believed that the motor patterns needed to execute a habitual behavior are stored in the striatum and its circuits.

Recent studies from Graybiel’s lab have shown that disrupting activity in the IL cortex can block the expression of habits that have already been learned and stored in the striatum. Last year, Smith and Graybiel found that the IL cortex appears to decide which of two previously learned habits will be expressed.

“We have evidence that these two areas are important for habits, but they’re not connected at all, and no one has much of an idea of what the cells are doing as a habit is formed, as the habit is lost, and as a new habit takes over,” Smith says.

To investigate that, Smith recorded activity in cells of the IL cortex as rats learned to run a maze. He found activity patterns very similar to those that appear in the striatum during habit formation. Several years ago, Graybiel found that a distinctive “task-bracketing” pattern develops when habits are formed. This means that the cells are very active when the animal begins its run through the maze, are quiet during the run, and then fire up again when the task is finished.

This kind of pattern “chunks” habits into a large unit that the brain can simply turn on when the habitual behavior is triggered, without having to think about each individual action that goes into the habitual behavior.
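
One hypothetical way to quantify such a pattern is to score each run by how much more active the cells are at its edges than in its middle. The index below is our illustration, not the lab's published metric:

```python
# Sketch: a "task-bracketing" index for one maze run's firing-rate trace.
import numpy as np

def bracketing_index(rates):
    """rates: 1-D array of firing rates across time bins of one run.

    Returns a value near +1 for strong bracketing (active at start and
    end, quiet in the middle) and near 0 for flat activity.
    """
    n = len(rates)
    fifth = max(n // 5, 1)
    edges = np.concatenate([rates[:fifth], rates[-fifth:]])
    middle = rates[fifth:-fifth]
    return (edges.mean() - middle.mean()) / (edges.mean() + middle.mean() + 1e-9)

# Example: a bracketed run scores higher than a flat one.
bracketed = np.array([8, 9, 1, 1, 1, 1, 1, 1, 9, 8], dtype=float)
flat = np.ones(10)
print(bracketing_index(bracketed), bracketing_index(flat))
```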

The researchers found that this pattern took longer to appear in the IL cortex than in the striatum, and it was also less permanent. Unlike the pattern in the striatum, which remains stored even when a habit is broken, the IL cortex pattern appears and disappears as habits are formed and broken. This was the clue that the IL cortex, not the striatum, was tracking the development of the habit.


Multiple layers of control

The researchers’ ability to optogenetically block the formation of new habits suggests that the IL cortex not only exerts real-time control over habits and compulsions, but is also needed for habits to form in the first place.

“The previous idea was that the habits were stored in the sensorimotor system and this cortical area was just selecting the habit to be expressed. Now we think it’s a more fundamental contribution to habits, that the IL cortex is more actively making this happen,” Smith says.

This arrangement offers multiple layers of control over habitual behavior, which could be advantageous in reining in automatic behavior, Graybiel says. It is also possible that the IL cortex is contributing specific pieces of the habitual behavior, in addition to exerting control over whether it occurs, according to the researchers. They are now trying to determine whether the IL cortex and the striatum are communicating with and influencing each other, or simply acting in parallel.

The study suggests a new way to look for abnormal activity that might cause disorders of repetitive behavior, Smith says. Now that the researchers have identified the neural signature of a normal habit, they can look for signs of habitual behavior that is learned too quickly or becomes too rigid. Finding such a signature could allow scientists to develop new ways to treat disorders of repetitive behavior by using deep brain stimulation, which uses electronic impulses delivered by a pacemaker to suppress abnormal brain activity.

The research was funded by the National Institutes of Health, the Office of Naval Research, the Stanley H. and Sheila G. Sydney Fund and funding from R. Pourian and Julia Madadi.