Stars, brains, and enzymes: a celebration of MIT science

“Our topic tonight, science and discovery, lives at the heart of MIT.” In his welcoming remarks for the first virtual MIT Better World gathering, W. Eric L. Grimson, MIT chancellor for academic advancement, detailed some of the ways MIT excels as a hub of scientific research and innovation. “Institute researchers are plumbing the secrets of the universe; modeling climate at a local, regional, and global scale; striving to understand how brains and bodies give rise to cognition and mind; and racing to find treatments and cures for diseases ranging from the acute, like Covid-19, to the chronic, like cancers and maladies of the aging brain,” said Grimson, who is also the Bernard M. Gordon Professor of Medical Engineering.

Members of the MIT community from around the globe were invited to attend the MIT Better World (Science) event, held online in November, to hear from Institute leaders, faculty, students, and alumni about the pursuit of scientific knowledge. Alumni in more than 80 countries registered to attend, and the evening put a special emphasis on Canada, which is home to a group of alumni and friends who served as virtual hosts, and to which Grimson and all of the opening session speakers have personal ties.

Grimson’s remarks were followed by presentations from Nergis Mavalvala, the new dean of the MIT School of Science; Rebecca Saxe, the John W. Jarve (1978) Professor in Brain and Cognitive Sciences and associate investigator at the McGovern Institute for Brain Research; and microbiology PhD student Linda Zhong-Johnson.

Mavalvala, the Curtis (1963) and Kathleen Marble Professor of Astrophysics, described how she and colleagues have worked to improve the sensitivity of instruments used to detect gravitational waves through LIGO—the landmark research endeavor that has revealed, among other recent discoveries, that colliding neutron stars are the “factories” in which heavy elements like gold and platinum are manufactured. Having stepped into the role of School of Science dean this fall, Mavalvala now takes joy in enabling discoveries across the MIT community, including those focused on our own corner of the universe. “It’s a vast world out there, and for us to make a better world, we must first understand that world. At MIT, that’s just what we do.”

Saxe, who uses brain imaging to study human social cognition, described prescient experiments on social isolation conducted by her lab between 2017 and 2019. “Sometimes we do science just out of curiosity,” said Saxe as she explained why she, former postdoc Livia Tomova, and fellow researchers pursued a project with uncertain applications — only to find themselves writing what Saxe now calls “the most timely and relevant paper in my life” in March, just as the Covid-19 pandemic triggered widespread isolation measures.

The third speaker, Linda Zhong-Johnson, discussed her PhD research in the labs of Anthony J. Sinskey, professor of biology, and Christopher A. Voigt, the Daniel I.C. Wang Professor of Advanced Biotechnology. Her goal is to reduce the amount of plastic in landfills and oceans by studying enzymes that could digest polyethylene terephthalate, or PET, the plastic used to make most water bottles. “We’re getting closer to the answer,” she said. “I’m grateful to be at MIT, where we have the mandate and resources to keep exploring.”

More virtual MIT Better World events on the topics of health and sustainability are planned for this coming February and March. Meanwhile, watch the full session and a range of breakout sessions on topics such as the politics of molecular medicine and the Mars 2020 mission, and learn more about the MIT Campaign for a Better World at betterworld.mit.edu.

Two MIT Brain and Cognitive Sciences faculty members earn funding from the G. Harold and Leila Y. Mathers Foundation

Two MIT neuroscientists have received grants from the G. Harold and Leila Y. Mathers Foundation to screen for genes that could help brain cells withstand Parkinson’s disease and to map how gene expression changes in the brain in response to drugs of abuse.

Myriam Heiman, an associate professor in MIT’s Department of Brain and Cognitive Sciences and a core member of the Picower Institute for Learning and Memory and the Broad Institute of MIT and Harvard, and Alan Jasanoff, a professor of biological engineering, brain and cognitive sciences, and nuclear science and engineering, and an associate investigator at the McGovern Institute for Brain Research, each received three-year awards that formally begin January 1, 2021.

Jasanoff, who also directs MIT’s Center for Neurobiological Engineering, is known for developing sensors that monitor molecular hallmarks of neural activity in the living brain, in real time, via noninvasive MRI brain scanning. One of the MRI-detectable sensors that he has developed is for dopamine, a neuromodulator that is key to learning what behaviors and contexts lead to reward. Addictive drugs artificially drive dopamine release, thereby hijacking the brain’s reward prediction system. Studies have shown that dopamine and drugs of abuse activate gene transcription in specific brain regions, and that this gene expression changes as animals are repeatedly exposed to drugs. Despite the important implications of these neuroplastic changes for the process of addiction, in which drug-seeking behaviors become compulsive, there are no effective tools available to measure gene expression across the brain in real time.

Cerebral vasculature in the mouse brain. The Jasanoff lab hopes to develop a method for mapping gene expression in the brain with similar labeling characteristics.
Image: Alan Jasanoff

With the new Mathers funding, Jasanoff is developing new MRI-detectable sensors for gene expression. With these cutting-edge tools, he proposes to make an activity atlas of how the brain responds to drugs of abuse, both upon initial exposure and over repeated doses that simulate the experiences of drug-addicted individuals.

“Our studies will relate drug-induced brain activity to longer term changes that reshape the brain in addiction,” says Jasanoff. “We hope these studies will suggest new biomarkers or treatments.”

Dopamine-producing neurons in a brain region called the substantia nigra are known to be especially vulnerable to dying in Parkinson’s disease, leading to the severe motor difficulties experienced during the progression of the incurable, chronic neurodegenerative disorder. The field knows little about what puts specific cells at such dire risk, or what molecular mechanisms might help them resist the disease. In her research on Huntington’s disease, another incurable neurodegenerative disorder in which a specific neuron population in the striatum is especially vulnerable, Heiman has been able to use an innovative method her lab pioneered to discover genes whose expression promotes neuron survival, yielding potential new drug targets. The technique involves conducting an unbiased screen in which her lab knocks out each of the 22,000 genes expressed in the mouse brain one by one in neurons in disease model mice and healthy controls. The technique allows her to determine which genes, when missing, contribute to neuron death amid disease and therefore which genes are particularly needed for survival. The products of those genes can then be evaluated as drug targets. With the new Mathers award, Heiman plans to apply the method to study Parkinson’s disease.
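
In outline, calling hits from such a screen is a counting problem: guides targeting a gene that protects neurons should be depleted in surviving disease-model cells relative to controls. The sketch below assumes pooled sgRNA read counts from disease-model and control brains; the data, gene names, and pseudocount are invented, and this is not the Heiman lab’s actual pipeline.

```python
# Hypothetical analysis sketch for a pooled loss-of-function survival screen.
# Guides targeting a gene that protects neurons should be depleted in the
# surviving disease-model cells relative to healthy controls.
import numpy as np
import pandas as pd

# Toy sgRNA read counts (a real screen covers ~22,000 genes, several guides each).
counts = pd.DataFrame({
    "gene":    ["GeneA", "GeneA", "GeneB", "GeneB", "GeneC", "GeneC"],
    "disease": [12,      8,       950,     1100,    480,     520],
    "control": [1000,    900,     1000,    1050,    500,     490],
})

pseudo = 0.5  # pseudocount to avoid taking log of zero
counts["log2_fc"] = np.log2((counts["disease"] + pseudo) / (counts["control"] + pseudo))

# Average across guides; strongly negative scores mark putative survival genes.
gene_scores = counts.groupby("gene")["log2_fc"].mean().sort_values()
print(gene_scores.head())  # GeneA is depleted, i.e., needed for survival
```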

An immunofluorescence image taken in a brain region called the substantia nigra (SN) highlights tyrosine hydroxylase, a protein expressed by dopamine neurons. This type of neuron in the SN is especially vulnerable to neurodegeneration in Parkinson’s disease. Image: Preston Ge/Heiman Lab

“There is currently no molecular explanation for the brain cell loss seen in Parkinson’s disease or a cure for this devastating disease,” Heiman said. “This award will allow us to perform unbiased, genome-wide genetic screens in the brains of mouse models of Parkinson’s disease, probing for genes that allow brain cells to survive the effects of cellular perturbations associated with Parkinson’s disease. I’m extremely grateful for this generous support and recognition of our work from the Mathers Foundation, and hope that our study will elucidate new therapeutic targets for the treatment and even prevention of Parkinson’s disease.”

Sequencing inside cells

By bringing DNA sequencing out of the sequencer and directly to cells, MIT scientists have revealed an entirely new view of the genome. With a new method for in situ genome sequencing reported December 31, 2020, in the journal Science, researchers can, for the first time, see exactly how DNA sequences are organized and packed inside cells.

The approach, whose development was led by Ed Boyden, the Y. Eva Tan Professor in Neurotechnology at MIT, and Harvard University Stem Cell and Regenerative Biology faculty members Jason Buenrostro and Fei Chen, integrates DNA sequencing technology with microscopy to pinpoint exactly where specific DNA sequences are located inside intact cells.

While alternative methods allow scientists to reconstruct structural information about the genome, this is the first sequencing technology to give them a direct look.

The technology creates new opportunities to investigate a broad range of biology, from fundamental questions about how DNA’s three-dimensional organization affects its function to the structural changes and chromosomal rearrangements associated with aging, cancer, brain disorders, and other diseases.

Seeing is believing

“How structure yields function is one of the core themes of biology,” says Boyden, who is also an investigator at the McGovern Institute and the Howard Hughes Medical Institute. “And the history of biology tells us that when you can actually see something, you can make lots of advances.” Seeing how an organism’s genome is packed inside its cells could help explain how different cell types in the brain interpret the genetic code, or reveal structural patterns that mean the difference between health and disease, he says. The researchers note that the technique also makes it possible to directly see how proteins and other factors interact with specific parts of the genome.

The new method builds on work underway in Boyden and Chen’s laboratories focused on sequencing RNA inside cells. Buenrostro collaborated with Boyden and Chen, who is also a core member of the Broad Institute, to adapt the technique for use with DNA. “It was clear the technology they had developed would be an extraordinary opportunity to have a new perspective on cells’ genomes,” Boyden says.

Their approach begins by fixing cells onto a glass surface to preserve their structure. Then, after small DNA adapters are inserted into the genome, thousands of short segments of DNA—about 20 letters of code apiece—are amplified and sequenced in their original locations inside the cells. Finally, the samples are ground up and put into a sequencer, which reads all of the cells’ DNA about 300 letters at a time. By finding the location-identified short sequences within those longer segments, the method pinpoints each one’s position within the three-dimensional structure of the cell.
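
Computationally, that final step amounts to a lookup: each in situ read of about 20 bases carries the 3D position where it was sequenced, and finding that read inside a longer ex situ read transfers the position to the full sequence. A toy illustration, with invented sequences and coordinates:

```python
# Toy illustration of the matching step: in situ reads (~20 bases) are tagged
# with the (x, y, z) position where they were sequenced inside the cell;
# finding each one inside a longer ex situ read localizes that read in 3D.
in_situ_reads = {
    "ACGTACGTACGTACGTACGT": (12.1, 44.0, 3.2),   # positions in microns (invented)
    "TTGACCTTGACCAAGGTTCA": (15.7, 41.3, 2.9),
}

long_reads = [
    "GGGACGTACGTACGTACGTACGTCCATTAGC",  # contains the first 20-mer
    "AATTGACCTTGACCAAGGTTCATTTTGGCAC",  # contains the second
]

for read in long_reads:
    for kmer, xyz in in_situ_reads.items():
        if kmer in read:
            print(f"{read[:12]}... anchored at {xyz}")
```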

Sequencing inside the cells is done more or less the same way DNA is sequenced inside a standard next-generation sequencer, Boyden explains, by watching under a microscope as a DNA strand is copied using fluorescently labeled building blocks. As in a traditional sequencer, each of DNA’s four building blocks, or nucleotides, is tagged with a different color so that they can be visually identified as they are added to a growing DNA strand.
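
As a deliberately simplified picture of what that base calling looks like in software, each sequencing cycle yields one intensity per color channel, and the brightest channel names the base. Real base callers also model optical crosstalk between dyes and strands falling out of phase; this toy version skips all of that:

```python
# Simplified base calling: one fluorescence intensity per channel per cycle;
# the brightest channel determines the base.
CHANNEL_TO_BASE = {0: "A", 1: "C", 2: "G", 3: "T"}

def call_bases(intensities):
    """intensities: list of 4-tuples of channel brightness, one per cycle."""
    return "".join(CHANNEL_TO_BASE[max(range(4), key=cycle.__getitem__)]
                   for cycle in intensities)

cycles = [(910, 40, 35, 20), (15, 25, 870, 30), (22, 940, 18, 41)]
print(call_bases(cycles))  # -> "AGC"
```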

A collaborative effort

Boyden, Buenrostro, and Chen, who began their collaboration several years ago, say the new technology represents a heroic effort on the part of MIT and Harvard graduate students Andrew Payne, Zachary Chiang, and Paul Reginato, who took the lead in developing and integrating its many technical steps and computational analyses. That involved both recapitulating the methods used in commercial sequencers and introducing several key innovations. “Some advances on the technology side have taken this from impossible to do to being possible,” Chen says.

The team has already used the method to visualize a genome as it reorganizes itself during the earliest moments of life. Brightly colored representations of DNA that they sequenced inside a mouse embryo show how genetic information inherited from each parent remains distinct and compartmentalized immediately after fertilization, then gradually intertwines as development progresses. Their sequencing also reveals how patterns of genome organization, which very early in life vary from cell to cell, are passed on as cells divide, generating a memory of each cell’s developmental origins. Being able to watch these processes unfold across entire cells instead of piecing them together through less direct means offered a dramatic new view of development, the researchers say.

While the team continues to improve the spatial resolution of the technique and adapt it to a broader range of cell types, they have made their method and associated software freely available to other labs. The researchers hope this new approach to DNA sequencing will change the way people think about studying the structure of the genome and will help illuminate patterns and consequences of genome organization across a variety of contexts.

Powered by viruses

View the interactive version of this story in our Winter 2021 issue of Brain Scan.

Viruses are notoriously adept invaders. The efficiency with which these unseen threats infiltrate tissues, evade immune systems, and occupy the cells of their hosts can be alarming — but it’s exactly why most McGovern neuroscientists keep a stash of viruses in the freezer.

In the hands of neuroscientists, viruses become vital tools for delivering cargo to cells.

With a bit of genetic manipulation, they can instruct neurons to produce proteins that illuminate complex circuitry, report on activity, or place certain cells under scientists’ control. They can even deliver therapies designed to correct genetic defects in patients.

“We rely on the virus to deliver whatever we want,” says McGovern Investigator Guoping Feng. “This is one of the most important technologies in neuroscience.”

Tracing connections

In Ian Wickersham’s lab, researchers are adapting a virus that, in its natural form, is devastating to the mammalian nervous system. Once it gains access to a neuron, the rabies virus spreads to connected cells, killing them within weeks. “That makes it a very dangerous pathogen, but also a very powerful tool for neuroscience,” says Wickersham, a Principal Research Scientist at the Institute.

Taking advantage of its pernicious spread, neuroscientists use a modified version of the rabies virus to introduce a fluorescent protein to infected cells and visualize their connections. As a graduate student in Edward Callaway’s lab at the Salk Institute for Biological Studies, Wickersham figured out how to limit the virus’s passage through the nervous system, allowing it to access cells that are directly connected to the neuron it initially infects, but go no further. Rabies virus travels across synapses in the opposite direction of neuronal signals, so researchers can deliver it to a single cell or set of cells, then see exactly where those cells’ inputs are coming from.

Labs around the world use Wickersham’s modified rabies virus to trace neuronal anatomy in the brains of mice. While his team tinkers to make the virus even more powerful, his collaborators have deployed it to map a variety of essential connections, offering clues into how the brain controls movement, detects odors, and retrieves memories.

With the newest tracing tool from the Wickersham lab, moving from anatomical studies to experiments that reveal circuit function is seamless, because the lab has further modified their virus so that it cannot kill cells. Researchers can label connected cells, then proceed to monitor their signals or manipulate their activity in the same animals.

Researchers usually conduct these experiments in genetically modified mice to control the subset of cells that activate the tracing system. It’s the same approach used to restrict most virally-delivered tools to specific neurons, which is crucial, Feng says. When introducing a fluorescent protein for imaging, for example, “we don’t want the gene we deliver to be activated everywhere, otherwise the whole brain will be lighting up,” he says.

Selective targets

In Feng’s lab, research scientist Martin Wienisch is working to make it easier to control this aspect of delivery. Rather than relying on the genetic makeup of an entire animal to determine where a virally-transported gene is switched on, instructions can be programmed directly into the virus, borrowing regulatory sequences that cells already know how to interpret.

Wienisch is scouring the genomes of individual neurons to identify short segments of regulatory DNA called enhancers. He’s focused on those that selectively activate gene expression in just one of hundreds of different neuron types, particularly in animal models that are not very amenable to genetic engineering. “In the real brain, many elements interact to drive cell specific expression. But amazingly sometimes a single enhancer is all we need to get the same effect,” he says.

Researchers are already using enhancers to confine viral tools to select groups of cells, but Wienisch, who is collaborating with Fenna Krienen in Steve McCarroll’s lab at Harvard University, aims to create a comprehensive library. The enhancers they identify will be paired with a variety of genetically-encoded tools and packaged into adeno-associated viruses (AAV), the most widely used vectors in neuroscience. The Feng lab plans to use these tools to better understand the striatum, a part of the primate brain involved in motivation and behavioral choices. “Ideally, we would have a set of AAVs with enhancers that would give us selective access to all the different cell types in the striatum,” Wienisch says.
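
One simple way to rank candidate enhancers for selectivity, assuming a per-cell-type activity score for each candidate (for example, from single-cell chromatin accessibility data), is sketched below. The enhancer IDs, cell types, scores, and scoring rule are all illustrative; they are not the actual criteria Wienisch and Krienen use.

```python
# Hypothetical ranking of candidate enhancers by cell-type selectivity.
# A selective enhancer is active in one cell type and quiet everywhere else.
import pandas as pd

activity = pd.DataFrame(
    {"D1_MSN": [9.1, 0.3, 4.0], "D2_MSN": [0.2, 8.7, 3.9], "interneuron": [0.1, 0.4, 4.2]},
    index=["enh1", "enh2", "enh3"],  # invented enhancer IDs
)

# Specificity: activity in the best cell type relative to total activity.
specificity = activity.max(axis=1) / activity.sum(axis=1)
print(specificity.sort_values(ascending=False))
# enh1 and enh2 look selective (for two striatal neuron types); enh3 is broad.
```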

Enhancers will also be useful for delivering potential gene therapies to patients, Wienisch says. For many years, the Feng lab has been studying how a missing copy of a gene called Shank3 impairs neurons’ ability to communicate, leading to autism and intellectual disability. Now, they are investigating whether they can overcome these deficits by delivering a functional copy of Shank3 to the brain cells that need it. Widespread activation of the therapeutic gene might do more harm than good, but incorporating the right enhancer could ensure it is delivered to the appropriate cells at the right dose, Wienisch says.

Like most gene therapies in development, the therapeutic Shank3, which is currently being tested in animal models, is packaged into an AAV. AAVs safely and efficiently infect human cells, and by selecting the right type, therapies can be directed to specific cells. But AAVs are small viruses, capable of carrying only small genes. Xian Gao, a postdoctoral researcher in the Feng lab, has pared Shank3 down to its most essential components, creating a “minigene” that can be packaged inside the virus, but some things are difficult to fit inside an AAV. Therapies that aim to correct mutations using the CRISPR gene editing system, for example, often exceed the carrying capacity of an AAV.
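
The packaging problem is, at bottom, arithmetic: an AAV can carry roughly 4.7 kilobases including its inverted terminal repeats, and a Cas9-based payload plus its regulatory elements easily exceeds that. A back-of-the-envelope check, with approximate element sizes for illustration only:

```python
# Back-of-the-envelope check of whether a payload fits in an AAV.
# Sizes in bases are approximate, for illustration only.
AAV_CAPACITY = 4700  # total packageable genome, including the two ITRs

payload = {
    "ITRs": 290,                 # ~145 bases at each end
    "promoter": 500,             # a compact promoter or enhancer element
    "SpCas9": 4100,              # coding sequence of the common S. pyogenes Cas9
    "guide_RNA_cassette": 400,
    "polyA": 50,
}

total = sum(payload.values())
print(f"{total} bases vs {AAV_CAPACITY} capacity ->",
      "fits" if total <= AAV_CAPACITY else f"over by {total - AAV_CAPACITY} bases")
# -> over by 640 bases: hence minigenes, split vectors, or larger capsids
```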

Expanding options

“There’s been a lot of really phenomenal advances in our gene editing toolkit,” says Victoria Madigan, a postdoctoral researcher in McGovern Investigator Feng Zhang’s lab, where researchers are developing enzymes to more precisely modify DNA. “One of the main limitations of employing these enzymes clinically has been their delivery.”

To open up new options for gene therapy, Zhang and Madigan are working with a group of viruses called densoviruses. Densoviruses and AAVs belong to the same family, but about 50 percent more DNA can be packed inside the outer shell of some densoviruses.

A molecular model of Galleria mellonella densovirus. Image: Victoria Madigan / Zhang Lab

They are an esoteric group of viruses, Madigan says, infecting only insects and crustaceans and perhaps best known for certain members’ ability to devastate shrimp farms. While densoviruses haven’t received a lot of attention from scientists, their similarities to AAV have given the team clues about how to alter their outer capsids to enable them to enter human cells, and even direct them to particular cell types. The fact that they don’t naturally infect people also makes densoviruses promising candidates for clinical use, Madigan says, because patients’ immune systems are unlikely to be primed to reject them. AAV infections, in contrast, are so common that patients are often excluded from clinical trials for AAV-based therapies due to the presence of neutralizing antibodies against the vector.

Ultimately, densoviruses could enable major advances in gene therapy, making it possible to safely deliver sophisticated gene editing systems to patients’ cells, Madigan says — and that’s good reason for scientists to continue exploring the vast diversity in the viral world. “There’s something to be said for looking into viruses that are understudied as new tools,” she says. “There’s a lot of interesting stuff out there — a lot of diversity and thousands of years of evolution.”

To the brain, reading computer code is not the same as reading language

In some ways, learning to program a computer is similar to learning a new language. It requires learning new symbols and terms, which must be organized correctly to instruct the computer what to do. The computer code must also be clear enough that other programmers can read and understand it.

In spite of those similarities, MIT neuroscientists have found that reading computer code does not activate the regions of the brain that are involved in language processing. Instead, it activates a distributed network called the multiple demand network, which is also recruited for complex cognitive tasks such as solving math problems or crossword puzzles.

However, although reading computer code activates the multiple demand network, it appears to rely on different parts of the network than math or logic problems do, suggesting that coding does not precisely replicate the cognitive demands of mathematics either.

“Understanding computer code seems to be its own thing. It’s not the same as language, and it’s not the same as math and logic,” says Anna Ivanova, an MIT graduate student and the lead author of the study.

Evelina Fedorenko, the Frederick A. and Carole J. Middleton Career Development Associate Professor of Neuroscience and a member of the McGovern Institute for Brain Research, is the senior author of the paper, which appears today in eLife. Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory and Tufts University were also involved in the study.

Language and cognition

McGovern Investigator Ev Fedorenko in the Martinos Imaging Center at MIT. Photo: Caitlin Cunningham

A major focus of Fedorenko’s research is the relationship between language and other cognitive functions. In particular, she has been studying the question of whether other functions rely on the brain’s language network, which includes Broca’s area and other regions in the left hemisphere of the brain. In previous work, her lab has shown that music and math do not appear to activate this language network.

“Here, we were interested in exploring the relationship between language and computer programming, partially because computer programming is such a new invention that we know that there couldn’t be any hardwired mechanisms that make us good programmers,” Ivanova says.

There are two schools of thought regarding how the brain learns to code, she says. One holds that in order to be good at programming, you must be good at math. The other suggests that because of the parallels between coding and language, language skills might be more relevant. To shed light on this issue, the researchers set out to study whether brain activity patterns while reading computer code would overlap with language-related brain activity.

The two programming languages that the researchers focused on in this study are known for their readability — Python and ScratchJr, a visual programming language designed for children ages 5 and older. The subjects in the study were all young adults proficient in the language they were being tested on. While the programmers lay in a functional magnetic resonance imaging (fMRI) scanner, the researchers showed them snippets of code and asked them to predict what action the code would produce.
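
For illustration, here is an invented snippet of the kind a Python-proficient participant might be asked to read and mentally evaluate; it is not taken from the study’s actual stimulus set.

```python
# Invented example of a code-comprehension item: read the snippet and
# predict its output without running it.
words = ["brain", "code", "language"]
result = []
for w in words:
    if len(w) > 4:
        result.append(w.upper())
print(result)  # a participant should predict: ['BRAIN', 'LANGUAGE']
```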

The researchers saw little to no response to code in the language regions of the brain. Instead, they found that the coding task mainly activated the so-called multiple demand network. This network, whose activity is spread throughout the frontal and parietal lobes of the brain, is typically recruited for tasks that require holding many pieces of information in mind at once, and is responsible for our ability to perform a wide variety of mental tasks.

“It does pretty much anything that’s cognitively challenging, that makes you think hard,” says Ivanova, who was also named one of the McGovern Institute’s rising stars in neuroscience.

Previous studies have shown that math and logic problems seem to rely mainly on the multiple demand regions in the left hemisphere, while tasks that involve spatial navigation activate the right hemisphere more than the left. The MIT team found that reading computer code appears to activate both the left and right sides of the multiple demand network, and ScratchJr activated the right side slightly more than the left. This finding goes against the hypothesis that math and coding rely on the same brain mechanisms.

Effects of experience

The researchers say that while they didn’t identify any regions that appear to be exclusively devoted to programming, such specialized brain activity might develop in people who have much more coding experience.

“It’s possible that if you take people who are professional programmers, who have spent 30 or 40 years coding in a particular language, you may start seeing some specialization, or some crystallization of parts of the multiple demand system,” Fedorenko says. “In people who are familiar with coding and can efficiently do these tasks, but have had relatively limited experience, it just doesn’t seem like you see any specialization yet.”

In a companion paper appearing in the same issue of eLife, a team of researchers from Johns Hopkins University also reported that solving code problems activates the multiple demand network rather than the language regions.

The findings suggest there isn’t a definitive answer to whether coding should be taught as a math-based skill or a language-based skill. In part, that’s because learning to program may draw on both language and multiple demand systems, even if — once learned — programming doesn’t rely on the language regions, the researchers say.

“There have been claims from both camps — it has to be together with math, it has to be together with language,” Ivanova says. “But it looks like computer science educators will have to develop their own approaches for teaching code most effectively.”

The research was funded by the National Science Foundation, the Department of Brain and Cognitive Sciences at MIT, and the McGovern Institute for Brain Research.

New clues to brain changes in Huntington’s disease

Huntington’s disease is a fatal inherited disorder that strikes most often in middle age with mood disturbances, uncontrollable limb movements, and cognitive decline. Years before symptom onset, brain imaging shows degeneration of the striatum, a brain region important for the rapid selection of behavioral actions. As the striatal neurons degenerate, their “identity” proteins, the building blocks that give particular cell types their unique function, are gradually turned off.

A new study from the lab of Institute Professor Ann Graybiel has found a surprising exception to this rule. The researchers discovered that in mouse models of Huntington’s disease, the cell identity protein MOR1, the mu-type opioid receptor, actually becomes more abundant as the striatal neurons degenerate.

“This is one of the most striking immunohistochemical changes that I have ever seen in the literature of Huntington’s disease model animals,” says Ryoma Morigaki, a research scientist in the Graybiel laboratory and lead author of the report, who worked with Tomoko Yoshida and others in the Graybiel lab.

Immunohistochemical staining using an anti-mu-opioid receptor antibody. Wild-type mouse striatum (left) and Q175 Huntington’s disease model mouse striatum (right) at 19 months old. Image: Ryoma Morigaki

More opioid receptors

MOR1 is a receptor on the surface of neurons that binds to opioids that are produced by the body or those taken for pain relief, such as morphine. The natural opioid in the brain is a small molecule called enkephalin, and it is normally produced by the same striatal neurons that degenerate in the earliest stages of Huntington’s disease.

The research team speculates that the striatum increases the quantity of MOR1 receptors in Huntington’s disease models to compensate for plummeting levels of enkephalin, but they also believe this upregulation may play a role in the perception of reward.

Previous work suggests that MOR1 has distinct signaling mechanisms related to its function in pain perception and its function in drug-seeking. These distinct mechanisms might be related to the fact that MOR1 is produced as multiple “isoforms,” slight variations of a protein that can be read out from the same gene. The MOR1 isoform that is found in the striatum is thought to be more important for drug-seeking behaviors than for pain perception. This in turn means that MOR1 might play a role in a key striatal function, which is to learn what actions are most likely to lead to reward.

“It is now recognized that mood disturbances can pre-date the overt motor abnormalities of Huntington’s patients by many years. These can even be the most disturbing symptoms for patients and their families. The finding that this receptor for opioids becomes so elevated in mood-related sites of the striatum, at least in a mouse model of the disorder, may give a hint to the underlying circuit dysfunction leading to these problems,” says Ann Graybiel.

Clues for treatment

MOR1 is used as a standard marker to identify subsets of neurons within striosomes, small clusters of neurons in the striatum that were previously discovered and named by Ann Graybiel.

“The most exciting point for me is the involvement of striatal compartments [striosomes] in the pathogenesis of Huntington’s disease,” says Morigaki, who has now moved to the University of Fukushima in Japan and is a practicing neurosurgeon who treats movement disorders.

MOR1-positive striosomal neurons are of high interest in part because they have direct connections to the same dopamine-producing neurons that are thought to degenerate in Parkinson’s disease. Whereas Parkinson’s disease is characterized by a loss of dopamine and loss of movement, Huntington’s disease is characterized by ups and downs in dopamine and excessive movements. In fact, the only drugs that are FDA-approved to treat Huntington’s disease are drugs that minimize dopamine release, thereby working to dampen the abnormal movements. But these treatments come with potentially severe side effects such as depression and suicide.

This latest discovery might provide mechanistic clues to dopamine fluctuations in Huntington’s disease and provide avenues for more specific treatments.

This research was funded by the CHDI Foundation (A-5552), the Broderick Fund for Phytocannabinoid Research at MIT, NIH/NIMH R01 MH060379, the Saks Kavanaugh Foundation, JSPS KAKENHI Grants #16KK0182, #17K10899, and #20K17932, Dr. Tenley Albright, Kathleen Huber, and Dr. Stephan and Mrs. Anne Kott.

Storytelling brings MIT neuroscience community together

When the coronavirus pandemic shut down offices, labs, and classrooms across the MIT campus last spring, many members of the MIT community found it challenging to remain connected to one another in meaningful ways. Motivated by a desire to bring the neuroscience community back together, the McGovern Institute hosted a virtual storytelling competition featuring a selection of postdocs, grad students, and staff from across the institute.

“This has been an unprecedented year for us all,” says McGovern Institute Director Robert Desimone. “It has been twenty years since Pat and Lore McGovern founded the McGovern Institute, and despite the challenges this anniversary year has brought to our community, I have been inspired by the strength and perseverance demonstrated by our faculty, postdocs, students and staff. The resilience of this neuroscience community – and MIT as a whole – is indeed something to celebrate.”

The McGovern Institute had initially planned to hold a large 20th anniversary celebration in the atrium of Building 46 in the fall of 2020, but the pandemic made a gathering of this size impossible. The institute instead held a series of virtual events, including the November 12 story slam on the theme of resilience.

Neuroscientists find a way to improve object-recognition models

Computer vision models known as convolutional neural networks can be trained to recognize objects nearly as accurately as humans do. However, these models have one significant flaw: Very small changes to an image, which would be nearly imperceptible to a human viewer, can trick them into making egregious errors such as classifying a cat as a tree.

A team of neuroscientists from MIT, Harvard University, and IBM has developed a way to alleviate this vulnerability by adding to these models a new layer that is designed to mimic the earliest stage of the brain’s visual processing system. In a new study, they showed that this layer greatly improved the models’ robustness against this type of mistake.

A grid showing the visualization of many common image corruption types. First row, original image, followed by the noise corruptions; second row, blur corruptions; third row, weather corruptions; fourth row, digital corruptions.
Credits: Courtesy of the researchers.

“Just by making the models more similar to the brain’s primary visual cortex, in this single stage of processing, we see quite significant improvements in robustness across many different types of perturbations and corruptions,” says Tiago Marques, an MIT postdoc and one of the lead authors of the study.

Convolutional neural networks are often used in artificial intelligence applications such as self-driving cars, automated assembly lines, and medical diagnostics. Harvard graduate student Joel Dapello, who is also a lead author of the study, adds that “implementing our new approach could potentially make these systems less prone to error and more aligned with human vision.”

“Good scientific hypotheses of how the brain’s visual system works should, by definition, match the brain in both its internal neural patterns and its remarkable robustness. This study shows that achieving those scientific gains directly leads to engineering and application gains,” says James DiCarlo, the head of MIT’s Department of Brain and Cognitive Sciences, an investigator in the Center for Brains, Minds, and Machines and the McGovern Institute for Brain Research, and the senior author of the study.

The study, which is being presented at the NeurIPS conference this month, is also co-authored by MIT graduate student Martin Schrimpf, MIT visiting student Franziska Geiger, and MIT-IBM Watson AI Lab Director David Cox.

Mimicking the brain

Recognizing objects is one of the visual system’s primary functions. In just a small fraction of a second, visual information flows through the ventral visual stream to the brain’s inferior temporal cortex, where neurons contain information needed to classify objects. At each stage in the ventral stream, the brain performs different types of processing. The very first stage in the ventral stream, V1, is one of the most well-characterized parts of the brain and contains neurons that respond to simple visual features such as edges.

“It’s thought that V1 detects local edges or contours of objects, and textures, and does some type of segmentation of the images at a very small scale. Then that information is later used to identify the shape and texture of objects downstream,” Marques says. “The visual system is built in this hierarchical way, where in early stages neurons respond to local features such as small, elongated edges.”

For many years, researchers have been trying to build computer models that can identify objects as well as the human visual system. Today’s leading computer vision systems are already loosely guided by our current knowledge of the brain’s visual processing. However, neuroscientists still don’t know enough about how the entire ventral visual stream is connected to build a model that precisely mimics it, so they borrow techniques from the field of machine learning to train convolutional neural networks on a specific set of tasks. Using this process, a model can learn to identify objects after being trained on millions of images.

Many of these convolutional networks perform very well, but in most cases, researchers don’t know exactly how the network is solving the object-recognition task. In 2013, researchers from DiCarlo’s lab showed that some of these neural networks could not only accurately identify objects, but they could also predict how neurons in the primate brain would respond to the same objects much better than existing alternative models. However, these neural networks are still not able to perfectly predict responses along the ventral visual stream, particularly at the earliest stages of object recognition, such as V1.

These models are also vulnerable to so-called “adversarial attacks.” This means that small changes to an image, such as changing the colors of a few pixels, can lead the model to completely confuse an object for something different — a type of mistake that a human viewer would not make.

A comparison of adversarial images with different perturbation strengths.
Credits: Courtesy of the researchers.
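
A standard recipe for constructing such adversarial perturbations is the fast gradient sign method (FGSM). The sketch below, in PyTorch, is a generic illustration of the idea rather than the specific attack protocol evaluated in this study:

```python
# Fast gradient sign method (FGSM): nudge every pixel a small step in the
# direction that increases the classifier's loss. Generic illustration only.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    """image: (1, 3, H, W) tensor with values in [0, 1]; label: (1,) class index."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # The sign of the gradient gives the worst-case direction per pixel.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```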

As a first step in their study, the researchers analyzed the performance of 30 of these models and found that models whose internal responses better matched the brain’s V1 responses were also less vulnerable to adversarial attacks. That is, having a more brain-like V1 seemed to make the model more robust. To further test and take advantage of that idea, the researchers decided to create their own model of V1, based on existing neuroscientific models, and place it at the front of convolutional neural networks that had already been developed to perform object recognition.

When the researchers added their V1 layer, which is also implemented as a convolutional neural network, to three of these models, they found that these models became about four times more resistant to making mistakes on images perturbed by adversarial attacks. The models were also less vulnerable to misidentifying objects that were blurred or distorted due to other corruptions.
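
While the paper’s exact architecture is not described here, a V1-like front end of this general kind can be sketched as a fixed, untrained bank of Gabor filters — the oriented-edge detectors long used to model V1 simple cells — wrapped as a convolutional layer and placed in front of a network. A minimal, simplified sketch in PyTorch:

```python
# Minimal sketch of a V1-like front end: a fixed, untrained bank of Gabor
# filters wrapped as a convolutional layer. Simplified: the published model
# also includes V1-inspired nonlinearities and stochastic response
# variability that this sketch omits.
import math
import torch
import torch.nn as nn

def gabor_kernel(size=15, theta=0.0, freq=0.25, sigma=3.0):
    """A single oriented Gabor filter: a grating windowed by a Gaussian."""
    ax = torch.arange(size, dtype=torch.float32) - size // 2
    y, x = torch.meshgrid(ax, ax, indexing="ij")
    x_rot = x * math.cos(theta) + y * math.sin(theta)
    envelope = torch.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * torch.cos(2 * math.pi * freq * x_rot)

class V1FrontEnd(nn.Module):
    def __init__(self, n_orientations=8, size=15):
        super().__init__()
        kernels = torch.stack([
            gabor_kernel(size, theta=i * math.pi / n_orientations).expand(3, size, size)
            for i in range(n_orientations)
        ])
        self.conv = nn.Conv2d(3, n_orientations, size, padding=size // 2, bias=False)
        self.conv.weight.data = kernels
        self.conv.weight.requires_grad = False  # fixed, not trained

    def forward(self, x):
        return torch.relu(self.conv(x))

# Prepend to an existing backbone, e.g. nn.Sequential(V1FrontEnd(), backbone),
# after adjusting the backbone's first layer to accept the new channel count.
```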

“Adversarial attacks are a big, open problem for the practical deployment of deep neural networks. The fact that adding neuroscience-inspired elements can improve robustness substantially suggests that there is still a lot that AI can learn from neuroscience, and vice versa,” Cox says.

Better defense

Currently, the best defense against adversarial attacks is a computationally expensive process of training models to recognize the altered images. One advantage of the new V1-based model is that it doesn’t require any additional training. It is also better able to handle a wide range of distortions, beyond adversarial attacks.

The researchers are now trying to identify the key features of their V1 model that allow it to do a better job resisting adversarial attacks, which could help them make future models even more robust. It could also help them learn more about how the human brain is able to recognize objects.

“One big advantage of the model is that we can map components of the model to particular neuronal populations in the brain,” Dapello says. “We can use this as a tool for novel neuroscientific discoveries, and also continue developing this model to improve its performance under this challenging task.”

The research was funded by the PhRMA Foundation Postdoctoral Fellowship in Informatics, the Semiconductor Research Corporation, DARPA, the MIT Shoemaker Fellowship, the U.S. Office of Naval Research, the Simons Foundation, and the MIT-IBM Watson AI Lab.

A large-scale tool to investigate the function of autism spectrum disorder genes

Scientists at Harvard University, the Broad Institute of MIT and Harvard, and MIT have developed a technology to investigate the function of many different genes in many different cell types at once, in a living organism. They applied the large-scale method to study dozens of genes that are associated with autism spectrum disorder, identifying how specific cell types in the developing mouse brain are impacted by mutations.

The “Perturb-Seq” method, published in the journal Science, is an efficient way to identify potential biological mechanisms underlying autism spectrum disorder, which is an important first step toward developing treatments for the complex disease. The method is also broadly applicable to other organs, enabling scientists to better understand a wide range of disease and normal processes.

“For many years, genetic studies have identified a multitude of risk genes that are associated with the development of autism spectrum disorder. The challenge in the field has been to make the connection between knowing what the genes are, to understanding how the genes actually affect cells and ultimately behavior,” said co-senior author Paola Arlotta, the Golub Family Professor of Stem Cell and Regenerative Biology at Harvard. “We applied the Perturb-Seq technology to an intact developing organism for the first time, showing the potential of measuring gene function at scale to better understand a complex disorder.”

The study was also led by co-senior authors Aviv Regev, who was a core member of the Broad Institute during the study and is currently Executive Vice President of Genentech Research and Early Development, and Feng Zhang, a core member of the Broad Institute and an investigator at MIT’s McGovern Institute.

To investigate gene function at a large scale, the researchers combined two powerful genomic technologies. They used CRISPR-Cas9 genome editing to make precise changes, or perturbations, in 35 different genes linked to autism spectrum disorder risk. Then, they analyzed changes in the developing mouse brain using single-cell RNA sequencing, which allowed them to see how gene expression changed in over 40,000 individual cells.

By looking at the level of individual cells, the researchers could compare how the risk genes affected different cell types in the cortex — the part of the brain responsible for complex functions including cognition and sensation. They analyzed networks of risk genes together to find common effects.
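
Conceptually, the downstream analysis groups cells by the guide RNA they received and asks how gene expression shifts relative to unperturbed cells. A toy sketch of that logic follows; the guide labels, expression values, and single readout gene are invented, and a real analysis spans thousands of genes and tens of thousands of cells:

```python
# Schematic Perturb-seq analysis: group cells by the CRISPR guide they
# carry, then compare gene expression against control cells. Toy data only.
import pandas as pd
from scipy import stats

# One row per cell: detected guide plus expression of one readout gene.
cells = pd.DataFrame({
    "guide":      ["control"] * 4 + ["Chd8_KO"] * 4,   # invented labels
    "expression": [5.1, 4.8, 5.3, 4.9, 2.2, 2.6, 1.9, 2.4],
})

control = cells.loc[cells.guide == "control", "expression"]
for guide, group in cells[cells.guide != "control"].groupby("guide"):
    t, p = stats.ttest_ind(group["expression"], control)
    print(f"{guide}: mean {group['expression'].mean():.2f} "
          f"vs control {control.mean():.2f} (p = {p:.3g})")
```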

“We found that both neurons and glia — the non-neuronal cells in the brain — are directly affected by different sets of these risk genes,” said Xin Jin, lead author of the study and a Junior Fellow of the Harvard Society of Fellows. “Genes and molecules don’t generate cognition per se — they need to impact specific cell types in the brain to do so. We are interested in understanding how these different cell types can contribute to the disorder.”

To get a sense of the model’s potential relevance to the disorder in humans, the researchers compared their results to data from post-mortem human brains. In general, they found that in the post-mortem human brains with autism spectrum disorder, some of the key genes with altered expression were also affected in the Perturb-seq data.

“We now have a really rich dataset that allows us to draw insights, and we’re still learning a lot about it every day,” Jin said. “As we move forward with studying disease mechanisms in more depth, we can focus on the cell types that may be really important.”

“The field has been limited by the sheer time and effort that it takes to make one model at a time to test the function of single genes. Now, we have shown the potential of studying gene function in a developing organism in a scalable way, which is an exciting first step to understanding the mechanisms that lead to autism spectrum disorder and other complex psychiatric conditions, and to eventually develop treatments for these devastating conditions,” said Arlotta, who is also an institute member of the Broad Institute and part of the Broad’s Stanley Center for Psychiatric Research. “Our work also paves the way for Perturb-Seq to be applied to organs beyond the brain, to enable scientists to better understand the development or function of different tissue types, as well as pathological conditions.”

“Through genome sequencing efforts, a very large number of genes have been identified that, when mutated, are associated with human diseases. Traditionally, understanding the role of these genes would involve in-depth studies of each gene individually. By developing Perturb-seq for in vivo applications, we can start to screen all of these genes in animal models in a much more efficient manner, enabling us to understand mechanistically how mutations in these genes can lead to disease,” said Zhang, who is also the James and Patricia Poitras Professor of Neuroscience and a professor of brain and cognitive sciences and biological engineering at MIT.

This study was funded by the Stanley Center for Psychiatric Research at the Broad Institute, the National Institutes of Health, the Brain and Behavior Research Foundation’s NARSAD Young Investigator Grant, Harvard University’s William F. Milton Fund, the Klarman Cell Observatory, the Howard Hughes Medical Institute, a Center for Cell Circuits grant from the National Human Genome Research Institute’s Centers of Excellence in Genomic Science, the New York Stem Cell Foundation, the Mathers Foundation, the Poitras Center for Psychiatric Disorders Research at MIT, the Hock E. Tan and K. Lisa Yang Center for Autism Research at MIT, and J. and P. Poitras.

How humans use objects in novel ways to solve problems

Human beings are naturally creative tool users. When we need to drive in a nail but don’t have a hammer, we easily realize that we can use a heavy, flat object like a rock in its place. When our table is shaky, we quickly find that we can put a stack of paper under the table leg to stabilize it. But while these actions seem so natural to us, they are believed to be a hallmark of great intelligence — only a few other species use objects in novel ways to solve their problems, and none can do so as flexibly as people. What provides us with these powerful capabilities for using objects in this way?

In a new paper published in the Proceedings of the National Academy of Sciences describing work conducted at MIT’s Center for Brains, Minds and Machines, researchers Kelsey Allen, Kevin Smith, and Joshua Tenenbaum study the cognitive components that underlie this sort of improvised tool use. They designed a novel task, the Virtual Tools game, that taps into tool-use abilities: People must select one object from a set of “tools” that they can place in a two-dimensional, computerized scene to accomplish a goal, such as getting a ball into a certain container. Solving the puzzles in this game requires reasoning about a number of physical principles, including launching, blocking, or supporting objects.

The team hypothesized that there are three capabilities that people rely on to solve these puzzles: a prior belief that guides people’s actions toward those that will make a difference in the scene, the ability to imagine the effect of their actions, and a mechanism to quickly update their beliefs about what actions are likely to provide a solution. They built a model that instantiated these principles, called the “Sample, Simulate, Update,” or “SSUP,” model, and had it play the same game as people. They found that SSUP solved each puzzle at rates similar to people’s, and in similar ways. On the other hand, a popular deep learning model that could play Atari games well but did not have the same object and physical structures was unable to generalize its knowledge to puzzles it was not directly trained on.
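
The three ingredients map naturally onto a loop: sample a candidate action from a prior over promising placements, simulate its outcome internally, and update beliefs toward actions that come close to succeeding. The sketch below is a bare-bones caricature of that loop, with a stand-in one-dimensional “physics” in place of the model’s actual 2D physics engine and object-based prior:

```python
# Bare-bones "Sample, Simulate, Update" loop. The simulator and scoring here
# are stand-ins; the actual SSUP model uses a 2D physics engine and an
# object-based prior over where tools can usefully be placed.
import random

def simulate(action):
    """Stand-in physics: score how close the ball gets to the goal (0..1)."""
    return max(0.0, 1.0 - abs(action - 0.62))  # pretend 0.62 solves the puzzle

def ssup(n_iters=50, noise=0.3):
    belief = random.random()                 # initial guess at a good action
    best_action, best_score = belief, 0.0
    for _ in range(n_iters):
        action = belief + random.gauss(0, noise)   # SAMPLE near current belief
        score = simulate(action)                   # SIMULATE the outcome first
        if score > best_score:                     # UPDATE toward near misses
            best_action, best_score = action, score
            belief = action
        if best_score > 0.95:                      # good enough: act for real
            break
    return best_action, best_score

print(ssup())
```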

This research provides a new framework for studying and formalizing the cognition that supports human tool use. The team hopes to extend this framework to not just study tool use, but also how people can create innovative new tools for new problems, and how humans transmit this information to build from simple physical tools to complex objects like computers or airplanes that are now part of our daily lives.

Kelsey Allen, a PhD student in the Computational Cognitive Science Lab at MIT, is excited about how the Virtual Tools game might support other cognitive scientists interested in tool use: “There is just so much more to explore in this domain. We have already started collaborating with researchers across multiple different institutions on projects ranging from studying what it means for games to be fun, to studying how embodiment affects disembodied physical reasoning. I hope that others in the cognitive science community will use the game as a tool to better understand how physical models interact with decision-making and planning.”

Joshua Tenenbaum, professor of computational cognitive science at MIT, sees this work as a step toward understanding not only an important aspect of human cognition and culture, but also how to build more human-like forms of intelligence in machines. “Artificial Intelligence researchers have been very excited about the potential for reinforcement learning (RL) algorithms to learn from trial-and-error experience, as humans do, but the real trial-and-error learning that humans benefit from unfolds over just a handful of trials — not millions or billions of experiences, as in today’s RL systems,” Tenenbaum says. “The Virtual Tools game allows us to study this very rapid and much more natural form of trial-and-error learning in humans, and the fact that the SSUP model is able to capture the fast learning dynamics we see in humans suggests it may also point the way towards new AI approaches to RL that can learn from their successes, their failures, and their near misses as quickly and as flexibly as people do.”