Bridging the gap between research and the classroom

In a moment more reminiscent of a Comic-Con event than a typical MIT symposium, Shawn Robinson, senior research associate at the University of Wisconsin at Madison, helped kick off the first-ever MIT Science of Reading event dressed in full superhero attire as Doctor Dyslexia Dude — the star of a graphic novel series he co-created to engage and encourage young readers, rooted in his own experiences as a student with dyslexia.

The event, co-sponsored by the MIT Integrated Learning Initiative (MITili) and the McGovern Institute for Brain Research at MIT, took place earlier this month and brought together researchers, educators, administrators, parents, and students to explore how scientific research can better inform educational practices and policies — equipping teachers with scientifically based strategies that may lead to better outcomes for students.

Professor John Gabrieli, MITili director, explained the great need to focus the collective efforts of educators and researchers on literacy.

“Reading is critical to all learning and all areas of knowledge. It is the first great educational experience for all children, and can shape a child’s first sense of self,” he said. “If reading is a challenge or a burden, it affects children’s social and emotional core.”

A great divide

Reading is also a particularly important area to address because so many American students struggle with this fundamental skill. More than six out of every 10 fourth graders in the United States are not proficient readers, and reading scores for fourth and eighth graders have improved only slightly since 1992, according to the National Assessment of Educational Progress.

Gabrieli explained that, just as with biomedical research, where there can be a “valley of death” between basic research and clinical application, the same seems to apply to education. Although there is substantial current research aiming to better understand why students might have difficulty reading in the ways they are currently taught, the research often does not necessarily shape the practices of teachers — or how the teachers themselves are trained to teach.

This divide between research and practical applications in the classroom might stem from a variety of factors. One issue might be the inaccessibility of research publications, which are often not freely available to all — as well as the general need for scientific findings to be communicated in a clear, accessible, engaging way that can lead to actual implementation. Another challenge is the stark difference in pacing between scientific research and classroom teaching. While research can take years to complete and publish, teachers have classrooms full of students — all with different strengths and challenges — who urgently need to learn in real time.

Natalie Wexler, author of “The Knowledge Gap,” described some of the obstacles to getting the findings of cognitive science into the classroom as matters of “head, heart, and habit.” Teacher education programs tend to focus more on outdated psychological models, such as Piaget’s theory of cognitive development, and less on recent cognitive science research. Teachers also face the emotional realities of working with their students, and might be concerned that a new approach would leave students bored or frustrated. In terms of habit, some new, evidence-based approaches may simply be difficult, in a practical sense, for teachers to incorporate into the classroom.

“Teaching is an incredibly complex activity,” noted Wexler.

From labs to classrooms

Throughout the day, speakers and panelists highlighted key insights from literacy research, along with some of the implications these might have for education.

Mark Seidenberg, professor of psychology at the University of Wisconsin at Madison and author of “Language at the Speed of Sight,” discussed studies indicating the strong connection between spoken and printed language.

“Reading depends on speech,” said Seidenberg. “Writing systems are codes for expressing spoken language … Spoken language deficits have an enormous impact on children’s reading.”

The integration of speech and reading in the brain increases with reading skill. For skilled readers, the patterns of brain activity (measured using functional magnetic resonance imaging) while comprehending spoken and written language are very similar. Becoming literate affects the neural representation of speech, and knowledge of speech affects the representation of print — thus the two become deeply intertwined.

In addition, researchers have found that the language of books, even those written for young children, includes words and expressions that are rarely encountered in speech directed at children. Reading aloud to children therefore exposes them to a broader range of linguistic expressions, including more complex ones that are usually only taught much later. This makes reading to children especially important, as research indicates that better knowledge of spoken language facilitates learning to read.

Although behavior and performance on tests are often used as indicators of how well a student can read, neuroscience data can now provide additional information. Neuroimaging of children and young adults identifies brain regions that are critical for integrating speech and print, and can spot differences in the brain activity of a child who might be especially at risk for reading difficulties. Brain imaging can also show how readers’ brains respond to certain reading and comprehension tasks, and how they adapt to different circumstances and challenges.

“Brain measures can be more sensitive than behavioral measures in identifying true risk,” said Ola Ozernov-Palchik, a postdoc at the McGovern Institute.

Ozernov-Palchik hopes to apply what her team is learning in their current studies to predict reading outcomes for other children, as well as continue to investigate individual differences in dyslexia and dyslexia-risk using behavior and neuroimaging methods.

Identifying such differences early on can be tremendously helpful in providing much-needed early interventions and tailored solutions. Many speakers noted the problem with the current “wait-to-fail” model, in which a child’s difficulty with reading is noticed only in second or third grade, and intervention begins then. Research suggests that earlier intervention helps children succeed much more than later intervention.

Speakers and panelists spoke about current efforts, including Reach Every Reader (a collaboration between MITili, the Harvard Graduate School of Education, and the Florida Center for Reading Research), that seek to provide support to students by bringing together education practitioners and scientists.

“We have a lot of information, but we have the challenge of how to enact it in the real world,” said Gabrieli, noting that he is optimistic about the potential for the additional conversations and collaborations that might grow out of the discussions of the Science of Reading event. “We know a lot of things can be better and will require partnerships, but there is a path forward.”

McGovern neuroscientists develop a new model for autism

Using the genome-editing system CRISPR, researchers at MIT and in China have engineered macaque monkeys to express a gene mutation linked to autism and other neurodevelopmental disorders in humans. These monkeys show some behavioral traits and brain connectivity patterns similar to those seen in humans with these conditions.

Mouse studies of autism and other neurodevelopmental disorders have yielded drug candidates that have been tested in clinical trials, but none of them have succeeded. Many pharmaceutical companies have given up on testing such drugs because of the poor track record so far.

The new type of model, however, could help scientists to develop better treatment options for some neurodevelopmental disorders, says Guoping Feng, who is the James W. and Patricia Poitras Professor of Neuroscience, a member of MIT’s McGovern Institute for Brain Research, and one of the senior authors of the study.

“Our goal is to generate a model to help us better understand the neural biological mechanism of autism, and ultimately to discover treatment options that will be much more translatable to humans,” says Feng, who is also an institute member of the Broad Institute of MIT and Harvard and a senior scientist in the Broad’s Stanley Center for Psychiatric Research.

“We urgently need new treatment options for autism spectrum disorder, and treatments developed in mice have so far been disappointing. While the mouse research remains very important, we believe that primate genetic models will help us to develop better medicines and possibly even gene therapies for some severe forms of autism,” says Robert Desimone, the director of MIT’s McGovern Institute for Brain Research, the Doris and Don Berkey Professor of Neuroscience, and an author of the paper.

Huihui Zhou of the Shenzhen Institutes of Advanced Technology, Andy Peng Xiang of Sun Yat-Sen University, and Shihua Yang of South China Agricultural University are also senior authors of the study, which appears in the June 12 online edition of Nature. The paper’s lead authors are former MIT postdoc Yang Zhou, MIT research scientist Jitendra Sharma, Broad Institute group leader Rogier Landman, and Qiong Ke of Sun Yat-Sen University. The research team also includes Mriganka Sur, the Paul and Lilah E. Newton Professor in the Department of Brain and Cognitive Sciences and a member of MIT’s Picower Institute for Learning and Memory.

Gene variants

Scientists have identified hundreds of genetic variants associated with autism spectrum disorder, many of which individually confer only a small degree of risk. In this study, the researchers focused on one gene with a strong association, known as SHANK3. In addition to its link with autism, mutations or deletions of SHANK3 can also cause a related rare disorder called Phelan-McDermid Syndrome, whose most common characteristics include intellectual disability, impaired speech and sleep, and repetitive behaviors. The majority of these individuals are also diagnosed with autism spectrum disorder, as many of the symptoms overlap.

The protein encoded by SHANK3 is found in synapses — the junctions between brain cells that allow them to communicate with each other. It is particularly active in a part of the brain called the striatum, which is involved in motor planning, motivation, and habitual behavior. Feng and his colleagues have previously studied mice with Shank3 mutations and found that they show some of the traits associated with autism, including avoidance of social interaction and obsessive, repetitive behavior.

Although mouse studies can provide a great deal of information on the molecular underpinnings of disease, there are drawbacks to using them to study neurodevelopmental disorders, Feng says. In particular, mice lack the highly developed prefrontal cortex that is the seat of many uniquely primate traits, such as making decisions, sustaining focused attention, and interpreting social cues, which are often affected by brain disorders.

The recent development of the CRISPR genome-editing technique offered a way to engineer gene variants into macaque monkeys, which has previously been very difficult to do. CRISPR consists of a DNA-cutting enzyme called Cas9 and a short RNA sequence that guides the enzyme to a specific area of the genome. It can be used to disrupt genes or to introduce new genetic sequences at a particular location.
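The targeting logic described above can be illustrated with a short sketch. The genome and guide strings below are invented examples (not real SHANK3 sequence), and the model is deliberately simplified: a 20-nucleotide protospacer must be followed by an “NGG” PAM, with the cut falling about 3 base pairs upstream of the PAM.

```python
def find_cut_sites(genome: str, guide: str) -> list[int]:
    """Return positions where Cas9 would cut: wherever the 20-nt guide
    matches the genome and is immediately followed by an 'NGG' PAM.
    The blunt cut falls about 3 bp upstream of the PAM."""
    sites = []
    for i in range(len(genome) - len(guide) - 2):
        protospacer = genome[i:i + len(guide)]
        pam = genome[i + len(guide):i + len(guide) + 3]
        if protospacer == guide and pam[1:] == "GG":
            sites.append(i + len(guide) - 3)
    return sites

guide = "TTGACGATCCGATACGGTTA"           # invented 20-nt guide sequence
genome = "ACGTACG" + guide + "AGGCATG"   # guide site followed by an 'AGG' PAM
cut_sites = find_cut_sites(genome, guide)
```

Disrupting a gene then relies on the cell’s error-prone repair of the cut; the sketch captures only the targeting step.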

Members of the research team based in China, where primate reproductive technology is much more advanced than in the United States, injected the CRISPR components into fertilized macaque eggs, producing embryos that carried the Shank3 mutation.

Researchers at MIT, where much of the data was analyzed, found that the macaques with Shank3 mutations showed behavioral patterns similar to those seen in humans with the mutated gene. They tended to wake up frequently during the night, and they showed repetitive behaviors. They also engaged in fewer social interactions than other macaques.

Magnetic resonance imaging (MRI) scans also revealed connectivity patterns similar to those seen in humans with autism spectrum disorder. Neurons showed reduced functional connectivity in the striatum as well as the thalamus, which relays sensory and motor signals and is also involved in sleep regulation. Meanwhile, connectivity was strengthened in other regions, including the sensory cortex.

Michael Platt, a professor of neuroscience and psychology at the University of Pennsylvania, says the macaque models should help to overcome some of the limitations of studying neurological disorders in mice, whose behavioral symptoms and underlying neurobiology are often different from those seen in humans.

“Because the macaque model shows a much more complete recapitulation of the human behavioral phenotype, I think we should stand a much greater chance of identifying the degree to which any particular therapy, whether it’s a drug or any other intervention, addresses the core symptoms,” says Platt, who was not involved in the study.

Drug development

Within the next year, the researchers hope to begin testing treatments that may affect autism-related symptoms. They also hope to identify biomarkers, such as the distinctive functional brain connectivity patterns seen in MRI scans, that would help them to evaluate whether drug treatments are having an effect.

A similar approach could also be useful for studying other types of neurological disorders caused by well-characterized genetic mutations, such as Rett Syndrome and Fragile X Syndrome. Fragile X is the most common inherited form of intellectual disability in the world, affecting about 1 in 4,000 males and 1 in 8,000 females. Rett Syndrome, which is more rare and almost exclusively affects girls, produces severe impairments in language and motor skills and can also cause seizures and breathing problems.

“Given the limitations of mouse models, patients really need this kind of advance to bring them hope,” Feng says. “We don’t know whether this will succeed in developing treatments, but we will see in the next few years how this can help us to translate some of the findings from the lab to the clinic.”

The research was funded, in part, by the Shenzhen Overseas Innovation Team Project, the Guangdong Innovative and Entrepreneurial Research Team Program, the National Key R&D Program of China, the External Cooperation Program of the Chinese Academy of Sciences, the Patrick J. McGovern Foundation, the National Natural Science Foundation of China, the Shenzhen Science and Technology Commission, the James and Patricia Poitras Center for Psychiatric Disorders Research at the McGovern Institute at MIT, the Stanley Center for Psychiatric Research at the Broad Institute of MIT and Harvard, and the Hock E. Tan and K. Lisa Yang Center for Autism Research at the McGovern Institute at MIT. The research facilities in China where the primate work was conducted are accredited by AAALAC International, a private, nonprofit organization that promotes the humane treatment of animals in science through voluntary accreditation and assessment programs.

How we tune out distractions

Imagine trying to focus on a friend’s voice at a noisy party, or blocking out the phone conversation of the person sitting next to you on the bus while you try to read. Both of these tasks require your brain to somehow suppress the distracting signal so you can focus on your chosen input.

MIT neuroscientists have now identified a brain circuit that helps us to do just that. The circuit they identified, which is controlled by the prefrontal cortex, filters out unwanted background noise or other distracting sensory stimuli. When this circuit is engaged, the prefrontal cortex selectively suppresses sensory input as it flows into the thalamus, the site where most sensory information enters the brain.

“This is a fundamental operation that cleans up all the signals that come in, in a goal-directed way,” says Michael Halassa, an assistant professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

The researchers are now exploring whether impairments of this circuit may be involved in the hypersensitivity to noise and other stimuli that is often seen in people with autism.

Miho Nakajima, an MIT postdoc, is the lead author of the paper, which appears in the June 12 issue of Neuron. Research scientist L. Ian Schmitt is also an author of the paper.

Shifting attention

Our brains are constantly bombarded with sensory information, and we are able to tune out much of it automatically, without even realizing it. Other distractions that are more intrusive, such as your seatmate’s phone conversation, require a conscious effort to suppress.

In a 2015 paper, Halassa and his colleagues explored how attention can be consciously shifted between different types of sensory input, by training mice to switch their focus between a visual and auditory cue. They found that during this task, mice suppress the competing sensory input, allowing them to focus on the cue that will earn them a reward.

This process appeared to originate in the prefrontal cortex (PFC), which is critical for complex cognitive behavior such as planning and decision-making. The researchers also found that a part of the thalamus that processes vision was inhibited when the animals were focusing on sound cues. However, there are no direct physical connections from the prefrontal cortex to the sensory thalamus, so it was unclear exactly how the PFC was exerting this control, Halassa says.

In the new study, the researchers again trained mice to switch their attention between visual and auditory stimuli, then mapped the brain connections that were involved. They first examined the outputs of the PFC that were essential for this task, by systematically inhibiting PFC projection terminals in every target. This allowed them to discover that the PFC connection to a brain region known as the striatum is necessary to suppress visual input when the animals are paying attention to the auditory cue.

Further mapping revealed that the striatum then sends input to a region called the globus pallidus, which is part of the basal ganglia. The basal ganglia then suppress activity in the part of the thalamus that processes visual information.

Using a similar experimental setup, the researchers also identified a parallel circuit that suppresses auditory input when animals pay attention to the visual cue. In that case, the circuit travels through parts of the striatum and thalamus that are associated with processing sound, rather than vision.

The findings offer some of the first evidence that the basal ganglia, which are known to be critical for planning movement, also play a role in controlling attention, Halassa says.

“What we realized here is that the connection between PFC and sensory processing at this level is mediated through the basal ganglia, and in that sense, the basal ganglia influence control of sensory processing,” he says. “We now have a very clear idea of how the basal ganglia can be involved in purely attentional processes that have nothing to do with motor preparation.”

Noise sensitivity

The researchers also found that the same circuits are employed not only for switching between different types of sensory input such as visual and auditory stimuli, but also for suppressing distracting input within the same sense — for example, blocking out background noise while focusing on one person’s voice.

The team also showed that when the animals are alerted that the task is going to be noisy, their performance actually improves, as they use this circuit to focus their attention.

“This study uses a dazzling array of techniques for neural circuit dissection to identify a distributed pathway, linking the prefrontal cortex to the basal ganglia to the thalamic reticular nucleus, that allows the mouse brain to enhance relevant sensory features and suppress distractors at opportune moments,” says Daniel Polley, an associate professor of otolaryngology at Harvard Medical School, who was not involved in the research. “By paring down the complexities of the sensory stimulus only to its core relevant features in the thalamus — before it reaches the cortex — our cortex can more efficiently encode just the essential features of the sensory world.”

Halassa’s lab is now doing similar experiments in mice that are genetically engineered to develop symptoms similar to those of people with autism. One common feature of autism spectrum disorder is hypersensitivity to noise, which could be caused by impairments of this brain circuit, Halassa says. He is now studying whether boosting the activity of this circuit might reduce sensitivity to noise.

“Controlling noise is something that patients with autism have trouble with all the time,” he says. “Now there are multiple nodes in the pathway that we can start looking at to try to understand this.”

The research was funded by the National Institute of Mental Health, the National Institute of Neurological Disorders and Stroke, the Simons Foundation, the Alfred P. Sloan Foundation, the Esther A. and Joseph Klingenstein Fund, and the Human Frontier Science Program.

Antenna-like inputs unexpectedly active in neural computation

Most neurons have many branching extensions called dendrites that receive input from thousands of other neurons. Dendrites aren’t just passive information-carriers, however. According to a new study from MIT, they appear to play a surprisingly large role in neurons’ ability to translate incoming signals into electrical activity.

Neuroscientists had previously suspected that dendrites might be active only rarely, under specific circumstances, but the MIT team found that dendrites are nearly always active when the main cell body of the neuron is active.

“It seems like dendritic spikes are an intrinsic feature of how neurons in our brain can compute information. They’re not a rare event,” says Lou Beaulieu-Laroche, an MIT graduate student and the lead author of the study. “All the neurons that we looked at had these dendritic spikes, and they had dendritic spikes very frequently.”

The findings suggest that the role of dendrites in the brain’s computational ability is much larger than had previously been thought, says Mark Harnett, who is the Fred and Carole Middleton Career Development Assistant Professor of Brain and Cognitive Sciences, a member of the McGovern Institute for Brain Research, and the senior author of the paper.

“It’s really quite different than how the field had been thinking about this,” he says. “This is evidence that dendrites are actively engaged in producing and shaping the outputs of neurons.”

Graduate student Enrique Toloza and technical associate Norma Brown are also authors of the paper, which appears in Neuron on June 6.

“A far-flung antenna”

Dendrites receive input from many other neurons and carry those signals to the cell body, also called the soma. If stimulated enough, a neuron fires an action potential — an electrical impulse that spreads to other neurons. Large networks of these neurons communicate with each other to perform complex cognitive tasks such as producing speech.

Through imaging and electrical recording, neuroscientists have learned a great deal about the anatomical and functional differences between different types of neurons in the brain’s cortex, but little is known about how they incorporate dendritic inputs and decide whether to fire an action potential. Dendrites give neurons their characteristic branching tree shape, and the size of the “dendritic arbor” far exceeds the size of the soma.

“It’s an enormous, far-flung antenna that’s listening to thousands of synaptic inputs distributed in space along that branching structure from all the other neurons in the network,” Harnett says.

Some neuroscientists have hypothesized that dendrites are active only rarely, while others thought it possible that dendrites play a more central role in neurons’ overall activity. Until now, it has been difficult to test which of these ideas is more accurate, Harnett says.

To explore dendrites’ role in neural computation, the MIT team used calcium imaging to simultaneously measure activity in both the soma and dendrites of individual neurons in the visual cortex of the brain. Calcium flows into neurons when they are electrically active, so this measurement allowed the researchers to compare the activity of dendrites and soma of the same neuron. The imaging was done while mice performed simple tasks such as running on a treadmill or watching a movie.

Unexpectedly, the researchers found that activity in the soma was highly correlated with dendrite activity. That is, when the soma of a particular neuron was active, the dendrites of that neuron were also active most of the time. This was particularly surprising because the animals weren’t performing any kind of cognitively demanding task, Harnett says.

“They weren’t engaged in a task where they had to really perform and call upon cognitive processes or memory. This is pretty simple, low-level processing, and already we have evidence for active dendritic processing in almost all the neurons,” he says. “We were really surprised to see that.”
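The soma-dendrite comparison described above boils down to correlating two activity traces recorded from the same neuron. A minimal sketch, using synthetic stand-in traces rather than the study’s calcium imaging data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for calcium traces from one neuron (not real data):
# the dendrite trace tracks the soma trace, plus independent noise.
soma = rng.poisson(2.0, size=1000).astype(float)
dendrite = 0.8 * soma + rng.normal(0.0, 0.5, size=1000)

def pearson_r(x, y):
    """Pearson correlation coefficient between two activity traces."""
    x, y = x - x.mean(), y - y.mean()
    return float((x @ y) / np.sqrt((x @ x) * (y @ y)))

r = pearson_r(soma, dendrite)   # high r = soma and dendrites co-active
```

A correlation near 1 means the dendrites are active whenever the soma is, which is the pattern the team observed across nearly all neurons.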

Evolving patterns

The researchers don’t yet know precisely how dendritic input contributes to neurons’ overall activity, or what exactly the neurons they studied are doing.

“We know that some of those neurons respond to some visual stimuli, but we don’t necessarily know what those individual neurons are representing. All we can say is that whatever the neuron is representing, the dendrites are actively participating in that,” Beaulieu-Laroche says.

While more work remains to determine exactly how the activity in the dendrites and the soma are linked, “it is these tour-de-force in vivo measurements that are critical for explicitly testing hypotheses regarding electrical signaling in neurons,” says Marla Feller, a professor of neurobiology at the University of California at Berkeley, who was not involved in the research.

The MIT team now plans to investigate how dendritic activity contributes to overall neuronal function by manipulating dendrite activity and then measuring how it affects the activity of the cell body, Harnett says. They also plan to study whether the activity patterns they observed evolve as animals learn a new task.

“One hypothesis is that dendritic activity will actually sharpen up for representing features of a task you taught the animals, and all the other dendritic activity, and all the other somatic activity, is going to get dampened down in the rest of the cortical cells that are not involved,” Harnett says.

The research was funded by the Natural Sciences and Engineering Research Council of Canada and the U.S. National Institutes of Health.

Putting vision models to the test

MIT neuroscientists have performed the most rigorous testing yet of computational models that mimic the brain’s visual cortex.

Using their current best model of the brain’s visual neural network, the researchers designed a new way to precisely control individual neurons and populations of neurons in the middle of that network. In an animal study, the team then showed that the information gained from the computational model enabled them to create images that strongly activated specific brain neurons of their choosing.

The findings suggest that the current versions of these models are similar enough to the brain that they could be used to control brain states in animals. The study also helps to establish the usefulness of these vision models, which have generated vigorous debate over whether they accurately mimic how the visual cortex works, says James DiCarlo, the head of MIT’s Department of Brain and Cognitive Sciences, an investigator in the McGovern Institute for Brain Research and the Center for Brains, Minds, and Machines, and the senior author of the study.

“People have questioned whether these models provide understanding of the visual system,” he says. “Rather than debate that in an academic sense, we showed that these models are already powerful enough to enable an important new application. Whether you understand how the model works or not, it’s already useful in that sense.”

MIT postdocs Pouya Bashivan and Kohitij Kar are the lead authors of the paper, which appears in the May 2 online edition of Science.

Neural control

Over the past several years, DiCarlo and others have developed models of the visual system based on artificial neural networks. Each network starts out with an arbitrary architecture consisting of model neurons, or nodes, that can be connected to each other with different strengths, also called weights.

The researchers then train the models on a library of more than 1 million images. As the researchers show the model each image, along with a label for the most prominent object in the image, such as an airplane or a chair, the model learns to recognize objects by changing the strengths of its connections.
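The training loop described above (show an input with its label, then nudge the connection weights to improve recognition) can be sketched with a toy linear classifier on made-up data; the study’s actual models are deep networks trained on over a million natural images.

```python
import numpy as np

rng = np.random.default_rng(1)

n_features, n_classes = 20, 3
W = np.zeros((n_features, n_classes))            # connection strengths ("weights")

# Toy "images": each class is drawn around its own template vector.
templates = rng.normal(size=(n_classes, n_features))
labels = rng.integers(0, n_classes, size=300)
X = templates[labels] + 0.3 * rng.normal(size=(300, n_features))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

for _ in range(200):                             # training loop
    probs = softmax(X @ W)
    probs[np.arange(len(labels)), labels] -= 1.0  # cross-entropy gradient
    W -= 0.01 * (X.T @ probs) / len(labels)       # adjust the weights

accuracy = (np.argmax(X @ W, axis=1) == labels).mean()
```

Each pass adjusts W along the gradient of a classification loss; the deep networks in the study learn the same way, just with many more layers and weights.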

It’s difficult to determine exactly how the model achieves this kind of recognition, but DiCarlo and his colleagues have previously shown that the “neurons” within these models produce activity patterns very similar to those seen in the animal visual cortex in response to the same images.

In the new study, the researchers wanted to test whether their models could perform some tasks that previously have not been demonstrated. In particular, they wanted to see if the models could be used to control neural activity in the visual cortex of animals.

“So far, what has been done with these models is predicting what the neural responses would be to other stimuli that they have not seen before,” Bashivan says. “The main difference here is that we are going one step further and using the models to drive the neurons into desired states.”

To achieve this, the researchers first created a one-to-one map of neurons in the brain’s visual area V4 to nodes in the computational model. They did this by showing images to animals and to the models, and comparing their responses to the same images. There are millions of neurons in area V4, but for this study, the researchers created maps for subpopulations of five to 40 neurons at a time.
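One way to build such a map, sketched below on synthetic responses (the study used actual V4 recordings), is to assign each recorded neuron the model node whose responses across a shared image set correlate best with it:

```python
import numpy as np

rng = np.random.default_rng(2)

n_images, n_nodes, n_neurons = 50, 100, 5
node_resp = rng.normal(size=(n_images, n_nodes))   # model responses per image

# Synthetic "recordings": each neuron is a noisy copy of one hidden node.
true_match = rng.choice(n_nodes, size=n_neurons, replace=False)
neuron_resp = node_resp[:, true_match] + 0.2 * rng.normal(size=(n_images, n_neurons))

def map_neurons_to_nodes(neuron_resp, node_resp):
    """Assign each neuron the node with the highest response correlation."""
    zn = (neuron_resp - neuron_resp.mean(0)) / neuron_resp.std(0)
    zm = (node_resp - node_resp.mean(0)) / node_resp.std(0)
    corr = zn.T @ zm / len(zn)                     # neurons x nodes correlations
    return corr.argmax(axis=1)

assignment = map_neurons_to_nodes(neuron_resp, node_resp)
```

With the assignment in hand, each mapped node serves as a computational proxy for its neuron, which is what makes the predictions below possible.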

“Once each neuron has an assignment, the model allows you to make predictions about that neuron,” DiCarlo says.

The researchers then set out to see if they could use those predictions to control the activity of individual neurons in the visual cortex. The first type of control, which they called “stretching,” involves showing an image that will drive the activity of a specific neuron far beyond the activity usually elicited by “natural” images similar to those used to train the neural networks.

The researchers found that when they showed animals these “synthetic” images, which are created by the models and do not resemble natural objects, the target neurons did respond as expected. On average, the neurons showed about 40 percent more activity in response to these images than when they were shown natural images like those used to train the model. This kind of control has never been reported before.
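The “stretching” procedure amounts to gradient ascent on the input: repeatedly adjust the image in the direction that increases the target node’s response. The sketch below uses a random linear model as a stand-in for the deep network, and made-up “natural” images for the comparison:

```python
import numpy as np

rng = np.random.default_rng(3)

n_pixels, n_nodes = 64, 10
W = rng.normal(size=(n_pixels, n_nodes))          # stand-in for the trained model

def node_response(image, node):
    return float(image @ W[:, node])

def synthesize(node, steps=100, lr=0.5):
    """Gradient ascent on the image, keeping pixel norm fixed at 1."""
    img = rng.normal(size=n_pixels)
    img /= np.linalg.norm(img)
    for _ in range(steps):
        img += lr * W[:, node]                    # gradient of a linear node
        img /= np.linalg.norm(img)                # stay on the unit sphere
    return img

node = 0
natural = rng.normal(size=(200, n_pixels))        # made-up "natural" image set
natural /= np.linalg.norm(natural, axis=1, keepdims=True)
best_natural = max(node_response(im, node) for im in natural)

synthetic = synthesize(node)                      # drives the node far harder
```

For the linear stand-in the optimum is simply the node’s weight vector; for a deep network the same loop follows the backpropagated gradient instead.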

“That they succeeded in doing this is really amazing. It’s as if, for that neuron at least, its ideal image suddenly leaped into focus. The neuron was suddenly presented with the stimulus it had always been searching for,” says Aaron Batista, an associate professor of bioengineering at the University of Pittsburgh, who was not involved in the study. “This is a remarkable idea, and to pull it off is quite a feat. It is perhaps the strongest validation so far of the use of artificial neural networks to understand real neural networks.”

In a similar set of experiments, the researchers attempted to generate images that would drive one neuron maximally while also keeping the activity in nearby neurons very low, a more difficult task. For most of the neurons they tested, the researchers were able to enhance the activity of the target neuron with little increase in the surrounding neurons.

“A common trend in neuroscience is that experimental data collection and computational modeling are executed somewhat independently, resulting in very little model validation, and thus no measurable progress. Our efforts bring back to life this ‘closed loop’ approach, engaging model predictions and neural measurements that are critical to the success of building and testing models that will most resemble the brain,” Kar says.

Measuring accuracy

The researchers also showed that they could use the model to predict how neurons of area V4 would respond to synthetic images. Most previous tests of these models have used the same type of naturalistic images that were used to train the model. The MIT team found that the models were about 54 percent accurate at predicting how the brain would respond to the synthetic images, compared to nearly 90 percent accuracy when the natural images are used.

“In a sense, we’re quantifying how accurate these models are at making predictions outside the domain where they were trained,” Bashivan says. “Ideally the model should be able to predict accurately no matter what the input is.”

The researchers now hope to improve the models’ accuracy by allowing them to incorporate the new information they learn from seeing the synthetic images, which was not done in this study.

This kind of control could be useful for neuroscientists who want to study how different neurons interact with each other, and how they might be connected, the researchers say. Farther in the future, this approach could potentially be useful for treating mood disorders such as depression. The researchers are now working on extending their model to the inferotemporal cortex, which feeds into the amygdala, which is involved in processing emotions.

“If we had a good model of the neurons that are engaged in experiencing emotions or causing various kinds of disorders, then we could use that model to drive the neurons in a way that would help to ameliorate those disorders,” Bashivan says.

The research was funded by the Intelligence Advanced Research Projects Agency, the MIT-IBM Watson AI Lab, the National Eye Institute, and the Office of Naval Research.

Alumnus gives MIT $4.5 million to study effects of cannabis on the brain

The following news is adapted from a press release issued in conjunction with Harvard Medical School.

Charles R. Broderick, an alumnus of MIT and Harvard University, has made gifts to both alma maters to support fundamental research into the effects of cannabis on the brain and behavior.

The gifts, totaling $9 million, represent the largest donation to date to support independent research on the science of cannabinoids. The donation will allow experts in the fields of neuroscience and biomedicine at MIT and Harvard Medical School to conduct research that may ultimately help unravel the biology of cannabinoids, illuminate their effects on the human brain, catalyze treatments, and inform evidence-based clinical guidelines, societal policies, and regulation of cannabis.

Lagging behind legislation

With the increasing use of cannabis both for medicinal and recreational purposes, there is a growing concern about critical gaps in knowledge.

In 2017, the National Academies of Sciences, Engineering, and Medicine issued a report calling upon philanthropic organizations, private companies, public agencies and others to develop a “comprehensive evidence base” on the short- and long-term health effects — both beneficial and harmful — of cannabis use.

“Our desire is to fill the research void that currently exists in the science of cannabis,” says Broderick, who was an early investor in Canada’s medical marijuana market.

Broderick is the founder of Uji Capital LLC, a family office focused on quantitative opportunities in global equity capital markets. Identifying the growth of the Canadian legal cannabis market as a strategic investment opportunity, Broderick took equity positions in Tweed Marijuana Inc. and Aphria Inc., which have since grown into two of North America’s most successful cannabis companies. Subsequently, Broderick made a private investment in and served as a board member for Tokyo Smoke, a cannabis brand portfolio, which merged in 2017 to create Hiku Brands, where he served as chairman. Hiku Brands was acquired by Canopy Growth Corp. in 2018.

Through the Broderick gifts to Harvard Medical School and MIT’s School of Science through the Picower Institute for Learning and Memory and the McGovern Institute for Brain Research, the Broderick funds will support independent studies of the neurobiology of cannabis; its effects on brain development, various organ systems and overall health, including treatment and therapeutic contexts; and cognitive, behavioral and social ramifications.

“I want to destigmatize the conversation around cannabis — and, in part, that means providing facts to the medical community, as well as the general public,” says Broderick, who argues that independent research needs to form the basis for policy discussions, regardless of whether it is good for business. “Then we’re all working from the same information. We need to replace rhetoric with research.”

MIT: Focused on brain health and function

The gift to MIT from Broderick will provide $4.5 million over three years to support independent research for four scientists at the McGovern and Picower institutes.

Two of these researchers — John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology, a professor of brain and cognitive sciences, and a member of MIT’s McGovern Institute for Brain Research; and Myriam Heiman, the Latham Family Associate Professor of Neuroscience at the Picower Institute — will separately explore the relationship between cannabis and schizophrenia.

Gabrieli, who directs the Martinos Imaging Center at MIT, will monitor any potential therapeutic value of cannabis for adults with schizophrenia using fMRI scans and behavioral studies.

“The ultimate goal is to improve brain health and wellbeing,” says Gabrieli. “And we have to make informed decisions on the way to this goal, wherever the science leads us. We need more data.”

Heiman, who is a molecular neuroscientist, will study how chronic exposure to phytocannabinoid molecules THC and CBD may alter the developmental molecular trajectories of cell types implicated in schizophrenia.

“Our lab’s research may provide insight into why several emerging lines of evidence suggest that adolescent cannabis use can be associated with adverse outcomes not seen in adults,” says Heiman.

In addition to these studies, Gabrieli also hopes to investigate whether cannabis can have therapeutic value for autism spectrum disorders, and Heiman plans to look at whether cannabis can have therapeutic value for Huntington’s disease.

MIT Institute Professor Ann Graybiel has proposed to study the cannabinoid 1 (CB1) receptor, which mediates many of the effects of cannabinoids. Her team recently found that CB1 receptors are tightly linked to dopamine — a neurotransmitter that affects both mood and motivation. Graybiel, who is also a member of the McGovern Institute, will examine how CB1 receptors in the striatum, a deep brain structure implicated in learning and habit formation, may influence dopamine release in the brain. These findings will be important for understanding the effects of cannabis on casual users, as well as its relationship to addictive states and neuropsychiatric disorders.

Earl Miller, Picower Professor of Neuroscience at the Picower Institute, will study effects of cannabinoids on both attention and working memory. His lab has recently formulated a model of working memory and unlocked how anesthetics reduce consciousness, showing in both cases a key role in the brain’s frontal cortex for brain rhythms, or the synchronous firing of neurons. He will observe how these rhythms may be affected by cannabis use — findings that may be able to shed light on tasks like driving where maintenance of attention is especially crucial.

Harvard Medical School: Mobilizing basic scientists and clinicians to solve an acute biomedical challenge 

The Broderick gift provides $4.5 million to establish the Charles R. Broderick Phytocannabinoid Research Initiative at Harvard Medical School, funding basic, translational and clinical research across the HMS community to generate fundamental insights about the effects of cannabinoids on brain function, various organ systems, and overall health.

The research initiative will span basic science and clinical disciplines, ranging from neurobiology and immunology to psychiatry and neurology, taking advantage of the combined expertise of some 30 basic scientists and clinicians across the school and its affiliated hospitals.

The epicenter of these research efforts will be the Department of Neurobiology under the leadership of Bruce Bean and Wade Regehr.

“I am excited by Bob’s commitment to cannabinoid science,” says Regehr, professor of neurobiology in the Blavatnik Institute at Harvard Medical School. “The research efforts enabled by Bob’s vision set the stage for unraveling some of the most confounding mysteries of cannabinoids and their effects on the brain and various organ systems.”

Bean, Regehr, and fellow neurobiologists Rachel Wilson and Bernardo Sabatini, for example, focus on understanding the basic biology of the cannabinoid system, which includes hundreds of plant and synthetic compounds as well as naturally occurring cannabinoids made in the brain.

Cannabinoid compounds activate a variety of brain receptors, and the downstream biological effects of this activation are astoundingly complex, varying by age and sex, and complicated by a person’s physiologic condition and overall health. This complexity and high degree of variability in individual biology has hampered scientific understanding of the positive and negative effects of cannabis on the human body. Bean, Regehr, and colleagues have already made critical insights showing how cannabinoids influence cell-to-cell communication in the brain.

“Even though cannabis products are now widely available, and some used clinically, we still understand remarkably little about how they influence brain function and neuronal circuits in the brain,” says Bean, the Robert Winthrop Professor of Neurobiology in the Blavatnik Institute at HMS. “This gift will allow us to conduct critical research into the neurobiology of cannabinoids, which may ultimately inform new approaches for the treatment of pain, epilepsy, sleep and mood disorders, and more.”

To propel research findings from lab to clinic, basic scientists from HMS will partner with clinicians from Harvard-affiliated hospitals, bringing together clinicians and scientists from disciplines including cardiology, vascular medicine, neurology, and immunology in an effort to glean a deeper and more nuanced understanding of cannabinoids’ effects on various organ systems and the body as a whole, rather than just on isolated organs.

For example, Bean and colleague Gary Yellen, who are studying the mechanisms of action of antiepileptic drugs, have become interested in the effects of cannabinoids on epilepsy, an interest they share with Elizabeth Thiele, director of the pediatric epilepsy program at Massachusetts General Hospital. Thiele is a pioneer in the use of cannabidiol for the treatment of drug-resistant forms of epilepsy. Despite proven clinical efficacy and recent FDA approval for rare childhood epilepsies, researchers still do not know exactly how cannabidiol quiets the misfiring brain cells of patients with the seizure disorder. Understanding its mechanism of action could help in developing new agents for treating other forms of epilepsy and other neurologic disorders.

Guoping Feng elected to American Academy of Arts and Sciences

Four MIT faculty members are among more than 200 leaders from academia, business, public affairs, the humanities, and the arts elected to the American Academy of Arts and Sciences, the academy announced today.

One of the nation’s most prestigious honorary societies, the academy is also a leading center for independent policy research. Members contribute to academy publications, as well as studies of science and technology policy, energy and global security, social policy and American institutions, the humanities and culture, and education.

Those elected from MIT this year are:

  • Dimitri A. Antoniadis, Ray and Maria Stata Professor of Electrical Engineering;
  • Anantha P. Chandrakasan, dean of the School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science;
  • Guoping Feng, the James W. (1963) and Patricia T. Poitras Professor of Brain and Cognitive Sciences; and
  • David R. Karger, professor of electrical engineering.

“We are pleased to recognize the excellence of our new members, celebrate their compelling accomplishments, and invite them to join the academy and contribute to its work,” said David W. Oxtoby, president of the American Academy of Arts and Sciences. “With the election of these members, the academy upholds the ideals of research and scholarship, creativity and imagination, intellectual exchange and civil discourse, and the relentless pursuit of knowledge in all its forms.”

The new class will be inducted at a ceremony in October in Cambridge, Massachusetts.

Since its founding in 1780, the academy has elected leading “thinkers and doers” from each generation, including George Washington and Benjamin Franklin in the 18th century, Maria Mitchell and Daniel Webster in the 19th century, and Toni Morrison and Albert Einstein in the 20th century. The current membership includes more than 200 Nobel laureates and 100 Pulitzer Prize winners.

Elephant or chair? How the brain IDs objects

As visual information flows into the brain through the retina, the visual cortex transforms the sensory input into coherent perceptions. Neuroscientists have long hypothesized that a part of the visual cortex called the inferotemporal (IT) cortex is necessary for the key task of recognizing individual objects, but the evidence has been inconclusive.

In a new study, MIT neuroscientists have found clear evidence that the IT cortex is indeed required for object recognition; they also found that subsets of this region are responsible for distinguishing different objects.

In addition, the researchers have developed computational models that describe how these neurons transform visual input into a mental representation of an object. They hope such models will eventually help guide the development of brain-machine interfaces (BMIs) that could be used for applications such as generating images in the mind of a blind person.

“We don’t know if that will be possible yet, but this is a step on the pathway toward those kinds of applications that we’re thinking about,” says James DiCarlo, the head of MIT’s Department of Brain and Cognitive Sciences, a member of the McGovern Institute for Brain Research, and the senior author of the new study.

Rishi Rajalingham, a postdoc at the McGovern Institute, is the lead author of the paper, which appears in the March 13 issue of Neuron.

Distinguishing objects

In addition to its hypothesized role in object recognition, the IT cortex also contains “patches” of neurons that respond preferentially to faces. Beginning in the 1960s, neuroscientists discovered that damage to the IT cortex could produce impairments in recognizing non-face objects, but it has been difficult to determine precisely how important the IT cortex is for this task.

The MIT team set out to find more definitive evidence for the IT cortex’s role in object recognition, by selectively shutting off neural activity in very small areas of the cortex and then measuring how the disruption affected an object discrimination task. In animals that had been trained to distinguish between objects such as elephants, bears, and chairs, they used a drug called muscimol to temporarily turn off subregions about 2 millimeters in diameter. Each of these subregions represents about 5 percent of the entire IT cortex.

These experiments, which represent the first time that researchers have been able to silence such small regions of IT cortex while measuring behavior over many object discriminations, revealed that the IT cortex is not only necessary for distinguishing between objects, but it is also divided into areas that handle different elements of object recognition.

The researchers found that silencing each of these tiny patches produced distinctive impairments in the animals’ ability to distinguish between certain objects. For example, one subregion might be involved in distinguishing chairs from cars, but not chairs from dogs. Each region was involved in 25 to 30 percent of the tasks that the researchers tested, and regions that were closer to each other tended to have more overlap between their functions, while regions far away from each other had little overlap.

“We might have thought of it as a sea of neurons that are completely mixed together, except for these islands of “face patches.” But what we’re finding, which many other studies had pointed to, is that there is large-scale organization over the entire region,” Rajalingham says.

The features that each of these regions are responding to are difficult to classify, the researchers say. The regions are not specific to objects such as dogs, nor easy-to-describe visual features such as curved lines.

“It would be incorrect to say that because we observed a deficit in distinguishing cars when a certain neuron was inhibited, this is a ‘car neuron,’” Rajalingham says. “Instead, the cell is responding to a feature that we can’t explain that is useful for car discriminations. There has been work in this lab and others that suggests that the neurons are responding to complicated nonlinear features of the input image. You can’t say it’s a curve, or a straight line, or a face, but it’s a visual feature that is especially helpful in supporting that particular task.”

Bevil Conway, a principal investigator at the National Eye Institute, says the new study makes significant progress toward answering the critical question of how neural activity in the IT cortex produces behavior.

“The paper makes a major step in advancing our understanding of this connection, by showing that blocking activity in different small local regions of IT has a different selective deficit on visual discrimination. This work advances our knowledge not only of the causal link between neural activity and behavior but also of the functional organization of IT: How this bit of brain is laid out,” says Conway, who was not involved in the research.

Brain-machine interface

The experimental results were consistent with computational models that DiCarlo, Rajalingham, and others in their lab have created to try to explain how IT cortex neuron activity produces specific behaviors.

“That is interesting not only because it says the models are good, but because it implies that we could intervene with these neurons and turn them on and off,” DiCarlo says. “With better tools, we could have very large perceptual effects and do real BMI in this space.”

The researchers plan to continue refining their models, incorporating new experimental data from even smaller populations of neurons, in hopes of developing ways to generate visual perception in a person’s brain by activating a specific sequence of neuronal activity. Technology to deliver this kind of input to a person’s brain could lead to new strategies to help blind people see certain objects.

“This is a step in that direction,” DiCarlo says. “It’s still a dream, but that dream someday will be supported by the models that are built up by this kind of work.”

The research was funded by the National Eye Institute, the Office of Naval Research, and the Simons Foundation.

MRI sensor images deep brain activity

Calcium is a critical signaling molecule for most cells, and it is especially important in neurons. Imaging calcium in brain cells can reveal how neurons communicate with each other; however, current imaging techniques can only penetrate a few millimeters into the brain.

MIT researchers have now devised a new way to image calcium activity that is based on magnetic resonance imaging (MRI) and allows them to peer much deeper into the brain. Using this technique, they can track signaling processes inside the neurons of living animals, enabling them to link neural activity with specific behaviors.

“This paper describes the first MRI-based detection of intracellular calcium signaling, which is directly analogous to powerful optical approaches used widely in neuroscience but now enables such measurements to be performed in vivo in deep tissue,” says Alan Jasanoff, an MIT professor of biological engineering, brain and cognitive sciences, and nuclear science and engineering, and an associate member of MIT’s McGovern Institute for Brain Research.

Jasanoff is the senior author of the paper, which appears in the Feb. 22 issue of Nature Communications. MIT postdocs Ali Barandov and Benjamin Bartelle are the paper’s lead authors. MIT senior Catherine Williamson, recent MIT graduate Emily Loucks, and Arthur Amos Noyes Professor Emeritus of Chemistry Stephen Lippard are also authors of the study.

Getting into cells

In their resting state, neurons have very low calcium levels. However, when they fire an electrical impulse, calcium floods into the cell. Over the past several decades, scientists have devised ways to image this activity by labeling calcium with fluorescent molecules. This can be done in cells grown in a lab dish, or in the brains of living animals, but this kind of microscopy imaging can only penetrate a few tenths of a millimeter into the tissue, limiting most studies to the surface of the brain.

“There are amazing things being done with these tools, but we wanted something that would allow ourselves and others to look deeper at cellular-level signaling,” Jasanoff says.

To achieve that, the MIT team turned to MRI, a noninvasive technique that works by detecting magnetic interactions between an injected contrast agent and water molecules inside cells.

Many scientists have been working on MRI-based calcium sensors, but the major obstacle has been developing a contrast agent that can get inside brain cells. Last year, Jasanoff’s lab developed an MRI sensor that can measure extracellular calcium concentrations, but these were based on nanoparticles that are too large to enter cells.

To create their new intracellular calcium sensors, the researchers used building blocks that can pass through the cell membrane. The contrast agent contains manganese, a metal that interacts weakly with magnetic fields, bound to an organic compound that can penetrate cell membranes. This complex also contains a calcium-binding arm called a chelator.

Once inside the cell, if calcium levels are low, the calcium chelator binds weakly to the manganese atom, shielding the manganese from MRI detection. When calcium flows into the cell, the chelator binds to the calcium and releases the manganese, which makes the contrast agent appear brighter in an MRI image.

“When neurons, or other brain cells called glia, become stimulated, they often experience more than tenfold increases in calcium concentration. Our sensor can detect those changes,” Jasanoff says.

Precise measurements

The researchers tested their sensor in rats by injecting it into the striatum, a region deep within the brain that is involved in planning movement and learning new behaviors. They then used potassium ions to stimulate electrical activity in neurons of the striatum, and were able to measure the calcium response in those cells.

Jasanoff hopes to use this technique to identify small clusters of neurons that are involved in specific behaviors or actions. Because this method directly measures signaling within cells, it can offer much more precise information about the location and timing of neuron activity than traditional functional MRI (fMRI), which measures blood flow in the brain.

“This could be useful for figuring out how different structures in the brain work together to process stimuli or coordinate behavior,” he says.

In addition, this technique could be used to image calcium as it performs many other roles, such as facilitating the activation of immune cells. With further modification, it could also one day be used to perform diagnostic imaging of the brain or other organs whose functions rely on calcium, such as the heart.

The research was funded by the National Institutes of Health and the MIT Simons Center for the Social Brain.

Mapping the brain at high resolution

Researchers have developed a new way to image the brain with unprecedented resolution and speed. Using this approach, they can locate individual neurons, trace connections between them, and visualize organelles inside neurons, over large volumes of brain tissue.

The new technology combines a method for expanding brain tissue, making it possible to image at higher resolution, with a rapid 3-D microscopy technique known as lattice light-sheet microscopy. In a paper appearing in Science Jan. 17, the researchers showed that they could use these techniques to image the entire fruit fly brain, as well as large sections of the mouse brain, much faster than has previously been possible. The team includes researchers from MIT, the University of California at Berkeley, the Howard Hughes Medical Institute, and Harvard Medical School/Boston Children’s Hospital.

This technique allows researchers to map large-scale circuits within the brain while also offering unique insight into individual neurons’ functions, says Edward Boyden, the Y. Eva Tan Professor in Neurotechnology, an associate professor of biological engineering and of brain and cognitive sciences at MIT, and a member of MIT’s McGovern Institute for Brain Research, Media Lab, and Koch Institute for Integrative Cancer Research.

“A lot of problems in biology are multiscale,” Boyden says. “Using lattice light-sheet microscopy, along with the expansion microscopy process, we can now image at large scale without losing sight of the nanoscale configuration of biomolecules.”

Boyden is one of the study’s senior authors, along with Eric Betzig, a senior fellow at the Janelia Research Campus and a professor of physics and molecular and cell biology at UC Berkeley. The paper’s lead authors are MIT postdoc Ruixuan Gao, former MIT postdoc Shoh Asano, and Harvard Medical School Assistant Professor Srigokul Upadhyayula.

Large-scale imaging

In 2015, Boyden’s lab developed a way to generate very high-resolution images of brain tissue using an ordinary light microscope. Their technique relies on expanding tissue before imaging it, allowing them to image the tissue at a resolution of about 60 nanometers. Previously, this kind of imaging could be achieved only with very expensive high-resolution microscopes, known as super-resolution microscopes.

In the new study, Boyden teamed up with Betzig and his colleagues at HHMI’s Janelia Research Campus to combine expansion microscopy with lattice light-sheet microscopy. This technology, which Betzig developed several years ago, has some key traits that make it ideal to pair with expansion microscopy: It can image large samples rapidly, and it induces much less photodamage than other fluorescent microscopy techniques.

“The marrying of the lattice light-sheet microscope with expansion microscopy is essential to achieve the sensitivity, resolution, and scalability of the imaging that we’re doing,” Gao says.

Imaging expanded tissue samples generates huge amounts of data — up to tens of terabytes per sample — so the researchers also had to devise highly parallelized computational image-processing techniques that could break down the data into smaller chunks, analyze it, and stitch it back together into a coherent whole.

In the Science paper, the researchers demonstrated the power of their new technique by imaging layers of neurons in the somatosensory cortex of mice, after expanding the tissue volume fourfold. They focused on a type of neuron known as pyramidal cells, one of the most common excitatory neurons found in the nervous system. To locate synapses, or connections, between these neurons, they labeled proteins found in the presynaptic and postsynaptic regions of the cells. This also allowed them to compare the density of synapses in different parts of the cortex.

Using this technique, it is possible to analyze millions of synapses in just a few days.

“We counted clusters of postsynaptic markers across the cortex, and we saw differences in synaptic density in different layers of the cortex,” Gao says. “Using electron microscopy, this would have taken years to complete.”

The researchers also studied patterns of axon myelination in different neurons. Myelin is a fatty substance that insulates axons and whose disruption is a hallmark of multiple sclerosis. The researchers were able to compute the thickness of the myelin coating in different segments of axons, and they measured the gaps between stretches of myelin, which are important because they help conduct electrical signals. Previously, this kind of myelin tracing would have required months to years for human annotators to perform.

This technology can also be used to image tiny organelles inside neurons. In the new paper, the researchers identified mitochondria and lysosomes, and they also measured variations in the shapes of these organelles.

Circuit analysis

The researchers demonstrated that this technique could be used to analyze brain tissue from other organisms as well; they used it to image the entire brain of the fruit fly, which is the size of a poppy seed and contains about 100,000 neurons. In one set of experiments, they traced an olfactory circuit that extends across several brain regions, imaged all dopaminergic neurons, and counted all synapses across the brain. By comparing multiple animals, they also found differences in the numbers and arrangements of synaptic boutons within each animal’s olfactory circuit.

In future work, Boyden envisions that this technique could be used to trace circuits that control memory formation and recall, to study how sensory input leads to a specific behavior, or to analyze how emotions are coupled to decision-making.

“These are all questions at a scale that you can’t answer with classical technologies,” he says.

The system could also have applications beyond neuroscience, Boyden says. His lab is planning to work with other researchers to study how HIV evades the immune system, and the technology could also be adapted to study how cancer cells interact with surrounding cells, including immune cells.

The research was funded by John Doerr, K. Lisa Yang and Y. Eva Tan, the Open Philanthropy Project, the National Institutes of Health, the Howard Hughes Medical Institute, the HHMI-Simons Faculty Scholars Program, the U.S. Army Research Laboratory and Army Research Office, the US-Israel Binational Science Foundation, Biogen, and Ionis Pharmaceuticals.