Ed Boyden elected to National Academy of Sciences

Ed Boyden has been elected to join the National Academy of Sciences (NAS). The organization, established by an act of Congress during the height of the Civil War, was founded to provide independent and objective advice on scientific matters to the nation, and is actively engaged in furthering science in the United States. Each year NAS members recognize fellow scientists through election to the academy based on their distinguished and continuing achievements in original research.

“I’m very honored and grateful to have been elected to the NAS,” says Boyden. “This is a testament to the work of many graduate students, postdoctoral scholars, research scientists, and staff at MIT who have worked with me over the years, and many collaborators and friends at MIT and around the world who have helped our group on this mission to advance neuroscience through new tools and ways of thinking.”

Boyden’s research creates and applies technologies that aim to expand our understanding of the brain. He notably co-invented optogenetics, a game-changing technology that has revolutionized neurobiology, as an independent side collaboration conducted in parallel with his PhD studies. This technology uses targeted expression of light-sensitive channels and pumps to activate or suppress neuronal activity in vivo using light. Optogenetics quickly swept the field of neurobiology and has been leveraged to understand how specific neurons and brain regions contribute to behavior and to disease.

His research since then has had an overarching focus on understanding the brain. To this end, he and his lab pursue the ambitious goal of developing technologies that can map, record, and manipulate the brain. Selected examples include the invention of expansion microscopy, a super-resolution imaging technology that can capture neurons’ microstructures and reveal their complex connections, even across large-scale neural circuits; voltage-sensitive fluorescent reporters that allow neural activity to be monitored in vivo; and temporal interference stimulation, a non-invasive brain stimulation technique that allows selective activation of subcortical brain regions.

“We are all incredibly happy to see Ed being elected to the academy,” says Robert Desimone, director of the McGovern Institute for Brain Research at MIT. “He has been consistently innovative, inventing new ways of manipulating and observing neurons that are revolutionizing the field of neuroscience.”

This year the NAS, an organization that includes over 500 Nobel Laureates, elected 100 new members and 25 foreign associates. Three MIT professors were elected this year, with Paula T. Hammond (David H. Koch (1962) Professor of Engineering and Department Head, Chemical Engineering) and Aviv Regev (HHMI Investigator and Professor in the Department of Biology) being elected alongside Boyden. Boyden becomes the seventh member of the McGovern Institute faculty to join the National Academy of Sciences.

The formal induction ceremony for new NAS members, during which they sign the ledger whose first signatory is Abraham Lincoln, will be held at the Academy’s annual meeting in Washington, D.C., next spring.

Alumnus gives MIT $4.5 million to study effects of cannabis on the brain

The following news is adapted from a press release issued in conjunction with Harvard Medical School.

Charles R. Broderick, an alumnus of MIT and Harvard University, has made gifts to both alma maters to support fundamental research into the effects of cannabis on the brain and behavior.

The gifts, totaling $9 million, represent the largest donation to date to support independent research on the science of cannabinoids. The donation will allow experts in the fields of neuroscience and biomedicine at MIT and Harvard Medical School to conduct research that may ultimately help unravel the biology of cannabinoids, illuminate their effects on the human brain, catalyze treatments, and inform evidence-based clinical guidelines, societal policies, and regulation of cannabis.

Lagging behind legislation

With the increasing use of cannabis both for medicinal and recreational purposes, there is a growing concern about critical gaps in knowledge.

In 2017, the National Academies of Sciences, Engineering, and Medicine issued a report calling upon philanthropic organizations, private companies, public agencies and others to develop a “comprehensive evidence base” on the short- and long-term health effects — both beneficial and harmful — of cannabis use.

“Our desire is to fill the research void that currently exists in the science of cannabis,” says Broderick, who was an early investor in Canada’s medical marijuana market.

Broderick is the founder of Uji Capital LLC, a family office focused on quantitative opportunities in global equity capital markets. Identifying the growth of the Canadian legal cannabis market as a strategic investment opportunity, Broderick took equity positions in Tweed Marijuana Inc. and Aphria Inc., which have since grown into two of North America’s most successful cannabis companies. Subsequently, Broderick made a private investment in and served as a board member for Tokyo Smoke, a cannabis brand portfolio, which merged in 2017 to create Hiku Brands, where he served as chairman. Hiku Brands was acquired by Canopy Growth Corp. in 2018.

The Broderick gifts, made to Harvard Medical School and to MIT’s School of Science through the Picower Institute for Learning and Memory and the McGovern Institute for Brain Research, will support independent studies of the neurobiology of cannabis; its effects on brain development, various organ systems, and overall health, including treatment and therapeutic contexts; and its cognitive, behavioral, and social ramifications.

“I want to destigmatize the conversation around cannabis — and, in part, that means providing facts to the medical community, as well as the general public,” says Broderick, who argues that independent research needs to form the basis for policy discussions, regardless of whether it is good for business. “Then we’re all working from the same information. We need to replace rhetoric with research.”

MIT: Focused on brain health and function

The gift to MIT from Broderick will provide $4.5 million over three years to support independent research for four scientists at the McGovern and Picower institutes.

Two of these researchers — John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology, a professor of brain and cognitive sciences, and a member of MIT’s McGovern Institute for Brain Research; and Myriam Heiman, the Latham Family Associate Professor of Neuroscience at the Picower Institute — will separately explore the relationship between cannabis and schizophrenia.

Gabrieli, who directs the Martinos Imaging Center at MIT, will use fMRI scans and behavioral studies to assess any potential therapeutic value of cannabis for adults with schizophrenia.

“The ultimate goal is to improve brain health and wellbeing,” says Gabrieli. “And we have to make informed decisions on the way to this goal, wherever the science leads us. We need more data.”

Heiman, who is a molecular neuroscientist, will study how chronic exposure to the phytocannabinoids THC and CBD may alter the developmental molecular trajectories of cell types implicated in schizophrenia.

“Our lab’s research may provide insight into why several emerging lines of evidence suggest that adolescent cannabis use can be associated with adverse outcomes not seen in adults,” says Heiman.

In addition to these studies, Gabrieli also hopes to investigate whether cannabis can have therapeutic value for autism spectrum disorders, and Heiman plans to look at whether cannabis can have therapeutic value for Huntington’s disease.

MIT Institute Professor Ann Graybiel has proposed to study the cannabinoid 1 (CB1) receptor, which mediates many of the effects of cannabinoids. Her team recently found that CB1 receptors are tightly linked to dopamine — a neurotransmitter that affects both mood and motivation. Graybiel, who is also a member of the McGovern Institute, will examine how CB1 receptors in the striatum, a deep brain structure implicated in learning and habit formation, may influence dopamine release in the brain. These findings will be important for understanding the effects of cannabis on casual users, as well as its relationship to addictive states and neuropsychiatric disorders.

Earl Miller, Picower Professor of Neuroscience at the Picower Institute, will study the effects of cannabinoids on both attention and working memory. His lab has recently formulated a model of working memory and unlocked how anesthetics reduce consciousness, showing in both cases a key role for brain rhythms, the synchronous firing of neurons, in the brain’s frontal cortex. He will observe how cannabis use may affect these rhythms, findings that may shed light on tasks, such as driving, where sustained attention is especially crucial.

Harvard Medical School: Mobilizing basic scientists and clinicians to solve an acute biomedical challenge 

The Broderick gift provides $4.5 million to establish the Charles R. Broderick Phytocannabinoid Research Initiative at Harvard Medical School, funding basic, translational and clinical research across the HMS community to generate fundamental insights about the effects of cannabinoids on brain function, various organ systems, and overall health.

The research initiative will span basic science and clinical disciplines, ranging from neurobiology and immunology to psychiatry and neurology, taking advantage of the combined expertise of some 30 basic scientists and clinicians across the school and its affiliated hospitals.

The epicenter of these research efforts will be the Department of Neurobiology under the leadership of Bruce Bean and Wade Regehr.

“I am excited by Bob’s commitment to cannabinoid science,” says Regehr, professor of neurobiology in the Blavatnik Institute at Harvard Medical School. “The research efforts enabled by Bob’s vision set the stage for unraveling some of the most confounding mysteries of cannabinoids and their effects on the brain and various organ systems.”

Bean, Regehr, and fellow neurobiologists Rachel Wilson and Bernardo Sabatini, for example, focus on understanding the basic biology of the cannabinoid system, which includes hundreds of plant and synthetic compounds as well as naturally occurring cannabinoids made in the brain.

Cannabinoid compounds activate a variety of brain receptors, and the downstream biological effects of this activation are astoundingly complex, varying by age and sex, and complicated by a person’s physiologic condition and overall health. This complexity and high degree of variability in individual biology has hampered scientific understanding of the positive and negative effects of cannabis on the human body. Bean, Regehr, and colleagues have already made critical insights showing how cannabinoids influence cell-to-cell communication in the brain.

“Even though cannabis products are now widely available, and some used clinically, we still understand remarkably little about how they influence brain function and neuronal circuits in the brain,” says Bean, the Robert Winthrop Professor of Neurobiology in the Blavatnik Institute at HMS. “This gift will allow us to conduct critical research into the neurobiology of cannabinoids, which may ultimately inform new approaches for the treatment of pain, epilepsy, sleep and mood disorders, and more.”

To propel research findings from lab to clinic, basic scientists from HMS will partner with clinicians from Harvard-affiliated hospitals, bringing together clinicians and scientists from disciplines including cardiology, vascular medicine, neurology, and immunology in an effort to glean a deeper and more nuanced understanding of cannabinoids’ effects on various organ systems and the body as a whole, rather than just on isolated organs.

For example, Bean and colleague Gary Yellen, who are studying the mechanisms of action of antiepileptic drugs, have become interested in the effects of cannabinoids on epilepsy, an interest they share with Elizabeth Thiele, director of the pediatric epilepsy program at Massachusetts General Hospital. Thiele is a pioneer in the use of cannabidiol for the treatment of drug-resistant forms of epilepsy. Despite proven clinical efficacy and recent FDA approval for rare childhood epilepsies, researchers still do not know exactly how cannabidiol quiets the misfiring brain cells of patients with the seizure disorder. Understanding its mechanism of action could help in developing new agents for treating other forms of epilepsy and other neurologic disorders.

Algorithms of intelligence

The following post is adapted from a story featured in a recent Brain Scan newsletter.

Machine vision systems are more and more common in everyday life, from social media to self-driving cars, but training artificial neural networks to “see” the world as we do—distinguishing cyclists from signposts—remains challenging. Will artificial neural networks ever decode the world as exquisitely as humans? Can we refine these models and influence perception in a person’s brain just by activating individual, selected neurons? The DiCarlo lab, including CBMM postdocs Kohitij Kar and Pouya Bashivan, is finding that we are surprisingly close to answering “yes” to such questions, all in the context of accelerated insights into artificial intelligence at the McGovern Institute for Brain Research, CBMM, and the Quest for Intelligence at MIT.

Precision Modeling

Beyond light hitting the retina, the recognition process that unfolds in the visual cortex is key to truly “seeing” the surrounding world. Information is decoded through the ventral visual stream, cortical brain regions that progressively build a more accurate, fine-grained, and accessible representation of the objects around us. Artificial neural networks have been modeled on these elegant cortical systems, and the most successful models, deep convolutional neural networks (DCNNs), can now decode objects at levels comparable to the primate brain. However, even leading DCNNs have problems with certain challenging images, presumably due to shadows, clutter, and other visual noise. While there’s no simple feature that unites all challenging images, the quest is on to tackle such images to attain precise recognition at a level commensurate with human object recognition.

“One next step is to couple this new precision tool with our emerging understanding of how neural patterns underlie object perception. This might allow us to create arrangements of pixels that look nothing like, for example, a cat, but that can fool the brain into thinking it’s seeing a cat.” — James DiCarlo

In a recent push, Kar and DiCarlo demonstrated that adding feedback connections, currently missing in most DCNNs, allows the system to better recognize objects in challenging situations, even those where a human can’t articulate why recognition is an issue for feedforward DCNNs. They also found that this recurrent circuit seems critical to primate success rates in performing this task. This is incredibly important for systems like self-driving cars, where the stakes for artificial visual systems are high, and faithful recognition is a must.

Now you see it

As artificial object recognition systems have become more precise in predicting neural activity, the DiCarlo lab wondered what such precision might allow: could they use their system to not only predict, but to control specific neuronal activity?

To demonstrate the power of their models, Bashivan, Kar, and colleagues zeroed in on targeted neurons in the brain. In a paper published in Science, they used an artificial neural network to generate a random-looking group of pixels that, when shown to an animal, activated the team’s target, which they called the “one hot neuron.” In other words, they showed the brain a synthetic pattern, and the pixels in the pattern precisely activated targeted neurons while other neurons remained relatively silent.
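
The experiment rests on a general deep-learning technique: synthesizing an input by gradient ascent so that a chosen unit of a trained network responds strongly while other units stay comparatively quiet. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch; the stand-in network, unit index, and penalty weight are assumptions for demonstration only and are not the ventral-stream model the team actually used.

```python
# Minimal sketch (assumed, illustrative): synthesize an image by gradient ascent
# so that one chosen unit of a trained model responds strongly while the other
# units stay relatively quiet. The tiny stand-in network below is NOT the
# published ventral-stream model; it only demonstrates the general technique.
import torch
import torch.nn as nn

model = nn.Sequential(                      # stand-in for a trained vision model
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 10),
)
model.eval()
for p in model.parameters():
    p.requires_grad_(False)                 # only the image is optimized

target_unit = 3                             # the unit we want to drive ("one hot")
image = torch.zeros(1, 3, 64, 64, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for _ in range(200):
    optimizer.zero_grad()
    activations = model(image)[0]           # responses of all output units
    target = activations[target_unit]
    off_target = activations.abs().sum() - target.abs()
    loss = -target + 0.1 * off_target       # drive the target, quiet the rest
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        image.clamp_(0.0, 1.0)              # keep pixel values in a valid range
```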

These findings show how the knowledge in today’s artificial neural network models might one day be used to noninvasively influence brain states with neural resolution. Such precise systems would be useful as we look to the future, toward visual prosthetics for the blind. Such a precise model of the ventral visual stream would have been inconceivable not so long ago, and all eyes are on where McGovern researchers will take these technologies in the coming years.

Recurrent architecture enhances object recognition in brain and AI

Your ability to recognize objects is remarkable. If you see a cup under unusual lighting or from unexpected directions, there’s a good chance that your brain will still compute that it is a cup. Such precise object recognition is one holy grail for AI developers, such as those improving self-driving car navigation. While modeling primate object recognition in the visual cortex has revolutionized artificial visual recognition systems, current deep learning systems are simplified, and fail to recognize some objects that are child’s play for primates such as humans. In findings published in Nature Neuroscience, McGovern Investigator James DiCarlo and colleagues have found evidence that feedback improves recognition of hard-to-recognize objects in the primate brain, and that adding feedback circuitry also improves the performance of artificial neural network systems used for vision applications.

Deep convolutional neural networks (DCNNs) are currently the most successful models for accurately recognizing objects on a fast timescale (<100 ms) and have a general architecture inspired by the primate ventral visual stream, cortical regions that progressively build an accessible and refined representation of viewed objects. Most DCNNs, however, are simple in comparison to the primate ventral stream.

“For a long period of time, we were far from a model-based understanding. Thus our field got started on this quest by modeling visual recognition as a feedforward process,” explains senior author DiCarlo, who is also the head of MIT’s Department of Brain and Cognitive Sciences and Research Co-Leader in the Center for Brains, Minds, and Machines (CBMM). “However, we know there are recurrent anatomical connections in brain regions linked to object recognition.”

Think of feedforward DCNNs, and the portion of the visual system that first attempts to capture objects, as a subway line that runs forward through a series of stations. The extra, recurrent brain networks are instead like the streets above, interconnected and not unidirectional. Because it only takes about 200 ms for the brain to recognize an object quite accurately, it was unclear whether these recurrent interconnections in the brain had any role at all in core object recognition. Perhaps those recurrent connections are only in place to keep the visual system in tune over long periods of time; in the analogy, the return gutters of the streets help slowly clear them of water and trash, but are not strictly needed to quickly move people from one end of town to the other. DiCarlo, along with lead author and CBMM postdoc Kohitij Kar, set out to test whether a subtle role of recurrent operations in rapid visual object recognition was being overlooked.
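
To make the feedforward-versus-recurrent distinction concrete, here is a minimal, hypothetical sketch in PyTorch of a single convolutional stage augmented with a local feedback loop that is unrolled for a few timesteps. It is not the architecture used in the study; the module name, layer sizes, and number of timesteps are illustrative assumptions only.

```python
# Minimal sketch (assumed, illustrative): one convolutional stage with a
# recurrent "feedback" connection, unrolled over a few timesteps. A purely
# feedforward stage computes its response once; with recurrence, the stage
# keeps refining its response over extra processing steps. This is NOT the
# published model, just the general idea.
import torch
import torch.nn as nn

class RecurrentConvStage(nn.Module):
    def __init__(self, in_channels: int, out_channels: int, steps: int = 4):
        super().__init__()
        self.feedforward = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
        # Recurrent connection: the stage's own output is fed back and combined
        # with the feedforward drive on subsequent timesteps.
        self.recurrent = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)
        self.steps = steps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        drive = self.feedforward(x)         # computed once, as in a feedforward net
        state = torch.relu(drive)           # t = 0: purely feedforward response
        for _ in range(self.steps - 1):     # t > 0: recurrence refines the response
            state = torch.relu(drive + self.recurrent(state))
        return state

# Usage: push a batch of images through the recurrent stage.
images = torch.randn(8, 3, 64, 64)          # batch of 8 RGB 64x64 images
stage = RecurrentConvStage(in_channels=3, out_channels=16, steps=4)
features = stage(images)                    # shape: (8, 16, 64, 64)
```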

Challenging recognition

The authors first needed to identify objects that are trivially decoded by the primate brain, but are challenging for artificial systems. Rather than trying to guess why deep learning was having problems recognizing an object (is it due to clutter in the image? a misleading shadow?), the authors took an unbiased approach that turned out to be critical.

Kar explained further that “we realized that AI models actually don’t have problems with every image where an object is occluded or in clutter. Humans trying to guess why AI models were challenged turned out to be holding us back.”

Instead, the authors presented the deep learning system, as well as monkeys and humans, with images, homing in on “challenge images” that the primates could easily recognize but that gave a feedforward DCNN trouble. When they, and others, added appropriate recurrent processing to these DCNNs, object recognition in challenge images suddenly became a breeze.

Processing times

Kar used neural recording methods with very high spatial and temporal precision to test whether these images were really so trivial for primates. Remarkably, the researchers found that though challenge images had initially appeared to be child’s play to the human brain, they actually involve extra neural processing time (about 30 additional milliseconds), suggesting that recurrent loops operate in our brain too.

 “What the computer vision community has recently achieved by stacking more and more layers onto artificial neural networks, evolution has achieved through a brain architecture with recurrent connections.” — Kohitij Kar

Diane Beck, Professor of Psychology and Co-chair of the Intelligent Systems Theme at the Beckman Institute and not an author on the study, explained further. “Since entirely feed forward deep convolutional nets are now remarkably good at predicting primate brain activity, it raised questions about the role of feedback connections in the primate brain. This study shows that, yes, feedback connections are very likely playing a role in object recognition after all.”

What does this mean for a self-driving car? It shows that deep learning architectures involved in object recognition need recurrent components if they are to match the primate brain, and also indicates how to operationalize this procedure for the next generation of intelligent machines.

“Recurrent models offer predictions of neural activity and behavior over time,” says Kar. “We may now be able to model more involved tasks. Perhaps one day, the systems will not only recognize an object, such as a person, but also perform cognitive tasks that the human brain so easily manages, such as understanding the emotions of other people.”

This work was supported by the Office of Naval Research grant MURI-114407 (J.J.D.) and by the Center for Brains, Minds, and Machines (CBMM), funded by NSF STC award CCF-1231216 (K.K.).

Why is the brain shaped like it is?

The human brain has a very striking shape, and one feature stands out large and clear: the cerebral cortex with its stereotyped pattern of gyri (folds and convolutions) and sulci (fissures and depressions). This characteristic folded shape of the cortex is a major innovation in evolution that allowed an increase in the size and complexity of the human brain.

How the brain acquires these complex folds is surprisingly unclear, but the process probably involves both shape changes and movements of cells. Mechanical constraints within the overall tissue, and those imposed by surrounding tissues, also contribute to the ultimate shape: the brain has to fit into the skull, after all. McGovern postdoc Jonathan Wilde has a long-term interest in studying how the brain develops, and explained to us how the shape of the brain initially arises.

In the case of humans, our historical reliance upon intelligence has driven a massive expansion of the cerebral cortex.

“Believe it or not, all vertebrate brains begin as a flat sheet of epithelial cells that folds upon itself to form a tube,” explains Wilde. “This neural tube is made up of a single layer of neural stem cells that go through a rapid and highly orchestrated process of expansion and differentiation, giving rise to all of the neurons in the brain. Throughout the first steps of development, the brains of most vertebrates are indistinguishable from one another, but the final shape of the brain is highly dependent upon the organism and primarily reflects that organism’s lifestyle, environment, and cognitive demands.”

So essentially, the brain starts off as a similar shape for creatures with spinal cords. But why is the human brain such a distinct shape?

“In the case of humans,” explains Wilde, “our historical reliance upon intelligence has driven a massive expansion of the cerebral cortex, which is the primary brain structure responsible for critical thinking and higher cognitive abilities. Accordingly, the human cortex is strikingly large and covered in a labyrinth of folds that serve to increase its surface area and computational power.”

The anatomical shape of the human brain is striking, but it also helps researchers to map a hidden functional atlas: specific brain regions that selectively activate in fMRI when you see a face or a scene, hear music, or perform a variety of other tasks. I asked former McGovern graduate student Hilary Richardson, now a postdoc at Boston Children’s Hospital, for her perspective on this more hidden structure in the brain and how it relates to brain shape.

Illustration of a person rappelling into the brain’s Sylvian fissure.
The Sylvian fissure is a prominent groove on each side of the brain that separates the frontal and parietal lobes from the temporal lobe. McGovern researchers are studying a region near the right Sylvian fissure, called the rTPJ, which is involved in thinking about what another person is thinking. Image: Joe Laney

“One of the most fascinating aspects of brain shape is how similar it is across individuals, even very young infants and children,” explains Richardson. “Despite the dramatic cognitive changes that happen across childhood, the shape of the brain is remarkably consistent. Given this, one open question is what kinds of neural changes support cognitive development. For example, while the anatomical shape and size of the rTPJ seems to stay the same across childhood, its response becomes more specialized to information about mental states – beliefs, desires, and emotions – as children get older. One intriguing hypothesis is that this specialization helps support social development in childhood.”

We’ll end with an ode to a prominent feature of brain shape: the “Sylvian fissure,” a prominent groove on each side of the brain that separates the frontal and parietal lobes from the temporal lobe. Such landmarks in brain shape help orient researchers, and the Sylvian fissure was recently immortalized in this image, from a postcard by illustrator Joe Laney.

______

Do you have a question for The Brain? Ask it here.

Neuroscientists reverse some behavioral symptoms of Williams Syndrome

Williams Syndrome, a rare neurodevelopmental disorder that affects about 1 in 10,000 babies born in the United States, produces a range of symptoms including cognitive impairments, cardiovascular problems, and extreme friendliness, or hypersociability.

In a study of mice, MIT neuroscientists have garnered new insight into the molecular mechanisms that underlie this hypersociability. They found that loss of one of the genes linked to Williams Syndrome leads to a thinning of the fatty layer that insulates neurons and helps them conduct electrical signals in the brain.

The researchers also showed that they could reverse the symptoms by boosting production of this coating, known as myelin. This is significant, because while Williams Syndrome is rare, many other neurodevelopmental disorders and neurological conditions have been linked to myelination deficits, says Guoping Feng, the James W. and Patricia Poitras Professor of Neuroscience and a member of MIT’s McGovern Institute for Brain Research.

“The importance is not only for Williams Syndrome,” says Feng, who is one of the senior authors of the study. “In other neurodevelopmental disorders, especially in some of the autism spectrum disorders, this could be potentially a new direction to look into, not only the pathology but also potential treatments.”

Zhigang He, a professor of neurology and ophthalmology at Harvard Medical School, is also a senior author of the paper, which appears in the April 22 issue of Nature Neuroscience. Former MIT postdoc Boaz Barak, currently a principal investigator at Tel Aviv University in Israel, is the lead author and a senior author of the paper.

Impaired myelination

Williams Syndrome, which is caused by the loss of one of the two copies of a segment of chromosome 7, can produce learning impairments, especially for tasks that require visual and motor skills, such as solving a jigsaw puzzle. Some people with the disorder also exhibit poor concentration and hyperactivity, and they are more likely to experience phobias.

In this study, the researchers decided to focus on one of the 25 genes in that segment, known as Gtf2i. Based on studies of patients with a smaller subset of the genes deleted, scientists have linked the Gtf2i gene to the hypersociability seen in Williams Syndrome.

Working with a mouse model, the researchers devised a way to knock out the gene specifically from excitatory neurons in the forebrain, which includes the cortex, the hippocampus, and the amygdala (a region important for processing emotions). They found that these mice did show increased levels of social behavior, measured by how much time they spent interacting with other mice. The mice also showed deficits in fine motor skills and increased nonsocial-related anxiety, which are also symptoms of Williams Syndrome.

Next, the researchers sequenced the messenger RNA from the cortex of the mice to see which genes were affected by loss of Gtf2i. Gtf2i encodes a transcription factor, so it controls the expression of many other genes. The researchers found that about 70 percent of the genes with significantly reduced expression levels were involved in the process of myelination.

“Myelin is the insulation layer that wraps the axons that extend from the cell bodies of neurons,” Barak says. “When they don’t have the right properties, it will lead to faster or slower electrical signal transduction, which affects the synchronicity of brain activity.”

Further studies revealed that the mice had only about half the normal number of mature oligodendrocytes — the brain cells that produce myelin. However, the number of oligodendrocyte precursor cells was normal, so the researchers suspect that the maturation and differentiation processes of these cells are somehow impaired when Gtf2i is missing in the neurons.

This was surprising because Gtf2i was not knocked out in oligodendrocytes or their precursors. Thus, knocking out the gene in neurons may somehow influence the maturation process of oligodendrocytes, the researchers suggest. It is still unknown how this interaction might work.

“That’s a question we are interested in, but we don’t know whether it’s a secreted factor, or another kind of signal or activity,” Feng says.

In addition, the researchers found that the myelin surrounding axons of the forebrain was significantly thinner than in normal mice. Furthermore, in mice lacking Gtf2i, electrical signals were smaller and took more time to travel across the brain.

The study is an example of pioneering research into the contribution of glial cells, which include oligodendrocytes, to neuropsychiatric disorders, says Doug Fields, chief of the nervous system development and plasticity section of the Eunice Kennedy Shriver National Institute of Child Health and Human Development.

“Traditionally myelin was only considered in the context of diseases that destroy myelin, such as multiple sclerosis, which prevents transmission of neural impulses. More recently it has become apparent that more subtle defects in myelin can impair neural circuit function, by causing delays in communication between neurons,” says Fields, who was not involved in the research.

Symptom reversal

It remains to be discovered precisely how this reduction in myelination leads to hypersociability. The researchers suspect that the lack of myelin affects brain circuits that normally inhibit social behaviors, making the mice more eager to interact with others.

“That’s probably the explanation, but exactly which circuits and how does it work, we still don’t know,” Feng says.

The researchers also found that they could reverse the symptoms by treating the mice with drugs that improve myelination. One of these drugs, an FDA-approved antihistamine called clemastine fumarate, is now in clinical trials to treat multiple sclerosis, which affects myelination of neurons in the brain and spinal cord. The researchers believe it would be worthwhile to test these drugs in Williams Syndrome patients because they found thinner myelin and reduced numbers of mature oligodendrocytes in brain samples from human subjects who had Williams Syndrome, compared to typical human brain samples.

“Mice are not humans, but the pathology is similar in this case, which means this could be translatable,” Feng says. “It could be that in these patients, if you improve their myelination early on, it could at least improve some of the conditions. That’s our hope.”

Such drugs would likely help mainly the social and fine-motor issues caused by Williams Syndrome, not the symptoms that are produced by deletion of other genes, the researchers say. They may also help treat other disorders, such as autism spectrum disorders, in which myelination is impaired in some cases, Feng says.

“We think this can be expanded into autism and other neurodevelopmental disorders. For these conditions, improved myelination may be a major factor in treatment,” he says. “We are now checking other animal models of neurodevelopmental disorders to see whether they have myelination defects, and whether improved myelination can improve some of the pathology of the defects.”

The research was funded by the Simons Foundation, the Poitras Center for Affective Disorders Research at MIT, the Stanley Center for Psychiatric Research at the Broad Institute of MIT and Harvard, and the Simons Center for the Social Brain at MIT.

How our gray matter tackles gray areas

When Katie O’Nell’s high school biology teacher showed a NOVA video on epigenetics after the AP exam, he was mostly trying to fill time. But for O’Nell, the video sparked a whole new area of curiosity.

She was fascinated by the idea that certain genes could be turned on and off, controlling what traits or processes were expressed without actually editing the genetic code itself. She was further excited about what this process could mean for the human mind.

But upon starting at MIT, she realized that she was less interested in the cellular level of neuroscience and more fascinated by bigger questions, such as, what makes certain people generous toward certain others? What’s the neuroscience behind morality?

“College is a time you can learn about anything you want, and what I want to know is why humans are really, really wacky,” she says. “We’re dumb, we make super irrational decisions, it makes no sense. Sometimes it’s beautiful, sometimes it’s awful.”

O’Nell, a senior majoring in brain and cognitive sciences, is one of five MIT students to have received a Marshall Scholarship this year. Her quest to understand the intricacies of the wacky human brain will not be limited to any one continent. She will be using the funding to earn her master’s in experimental psychology at Oxford University.

Chocolate milk and the mouse brain

O’Nell’s first neuroscience-related research experience at MIT took place during her sophomore and junior year, in the lab of Institute Professor Ann Graybiel at the McGovern Institute.

The research studied the neurological components of risk-vs-reward decision making, using a key ingredient: chocolate milk. In the experiments, mice were given two options — they could go toward the richer, sweeter chocolate milk, but they would also have to endure a brighter light. Or, they could go toward a more watered-down chocolate milk, with the benefit of a softer light. All the while, a fluorescence microscope tracked when certain cell types were being activated.

“I think that’s probably the closest thing I’ve ever had to a spiritual experience … watching this mouse in this maze deciding what to do, and watching the cells light up on the screen. You can see single-cell evidence of cognition going on. That’s just the coolest thing.”

In her junior spring, O’Nell delved even deeper into questions of morality in the lab of Professor Rebecca Saxe. Her research there centers on how the human brain parses people’s identities and emotional states from their faces alone, and how those computations are related to each other. Part of what interests O’Nell is the fact that we are constantly making decisions, about ourselves and others, with limited information.

“We’re always solving under uncertainty,” she says. “And our brain does it so well, in so many ways.”

International intrigue

Outside of class, O’Nell has no shortage of things to do. For starters, she has been serving as an associate advisor for a first-year seminar since the fall of her sophomore year.

“Basically it’s my job to sit in on a seminar and bully them into not taking seven classes at a time, and reminding them that yes, your first 8.01 exam is tomorrow,” she says with a laugh.

She has also continued an activity she was passionate about in high school — Model United Nations. One of the most fun parts for her is serving on the Historical Crisis Committee, in which delegates must try to figure out a way to solve a real historical problem, like the Cuban Missile Crisis or the French and Indian War.

“This year they failed and the world was a nuclear wasteland,” she says. “Last year, I don’t entirely know how this happened, but France decided that they wanted to abandon the North American theater entirely and just took over all of Britain’s holdings in India.”

She’s also part of an MIT program called the Addir Interfaith Fellowship, in which a small group of people meet each week and discuss a topic related to religion and spirituality. Before joining, she didn’t think it was something she’d be interested in — but after being placed in a first-year class about science and spirituality, she has found discussing religion to be really stimulating. She’s been a part of the group ever since.

O’Nell has also been heavily involved in writing and producing a Mystery Dinner Theater for Campus Preview Weekend, on behalf of her living group J Entry, in MacGregor House. The plot, generally, is MIT-themed — a physics professor might get killed by a swarm of CRISPR nanobots, for instance. When she’s not cooking up murder mysteries, she might be running SAT classes for high school students, playing piano, reading, or spending time with friends. Or, when she needs to go grocery shopping, she’ll be stopping by the Trader Joe’s on Boylston Street, as an excuse to visit the Boston Public Library across the street.

Quite excited for the future

O’Nell is excited that the Marshall Scholarship will enable her to live in the country that produced so many of the books she cherished as a kid, like “The Hobbit.” She’s also thrilled to further her research there. However, she jokes that she still needs to get some of the lingo down.

“I need to learn how to use the word ‘quite’ correctly. Because I overuse it in the American way,” she says.

Her master’s research will largely expand on the principles she’s been examining in the Saxe lab. Questions of morality, processing, and social interaction are where she aims to focus her attention.

“My master’s project is going to be basically taking a look at whether how difficult it is for you to determine someone else’s facial expression changes how generous you are with people,” she explains.

After that, she hopes to follow the standard research track of earning a PhD, doing postdoctoral research, and then entering academia as a professor and researcher. Teaching and researching, she says, are two of her favorite things — she’s excited to have the chance to do both at the same time. But that’s a few years ahead. Right now, she hopes to use her time in England to learn all she can about the deeper functions of the brain, with or without chocolate milk.

3Q: The interface between art and neuroscience

CBMM postdoc Sarah Schwettmann

Computational neuroscientist Sarah Schwettmann, who works in the Center for Brains, Minds, and Machines at the McGovern Institute, is one of three instructors behind the cross-disciplinary course 9.S52/9.S916 (Vision in Art and Neuroscience), which introduces students to core concepts in visual perception through the lenses of art and neuroscience.

Supported by a faculty grant from the Center for Art, Science and Technology at MIT (CAST) for the past two years, the class is led by Pawan Sinha, a professor of vision and computational neuroscience in the Department of Brain and Cognitive Sciences. Sinha and Schwettmann are joined in the course by Seth Riskin SM ’89, a light artist and the manager of the MIT Museum Studio and Compton Gallery, where the course is taught. Schwettmann discussed the combination of art and science in an educational setting.

Q: How have the three of you approached this cross-disciplinary class in art and neuroscience?

A: Discussions around this intersection often consider what each field has to offer the other. We take a different approach, one I refer to as occupying the gap, or positioning ourselves between the two fields and asking what essential questions underlie them both. One question addresses the nature of the human relationship to the world. The course suggests one answer: This relationship is fundamentally creative, from the brain’s interpretation of incoming sensory data in perception, to the explicit construction of experiential worlds in art.

Neuroscience and art, therefore, each provide a set of tools for investigating different levels of the constructive process. Through neuroscience, we develop a specific understanding of the models of the world that the brain uses to make sense of incoming visual data. With articulation of those models, we can engineer types of inputs that interact with visual processing architecture in particularly exquisite ways, and do so reliably, giving artists a toolkit for remixing and modulating experience. In the studio component of the course, we experiment with this toolkit and collectively move it forward.

While designing the course, Pawan, Seth, and I found that we were each addressing a similar set of questions, the same that motivate the class, through our own research and practice. In parallel to computational vision research, Professor Sinha leads a humanitarian initiative called Project Prakash, which provides treatment to blind children in India and explores the development of vision following the restoration of sight. Where does structure in perception originate? As an artist in the MIT Museum Studio, Seth works with articulated light to sculpt structured visual worlds out of darkness. I also live on this interface where the brain meets the world — my research in the Department of Brain and Cognitive Sciences examines the neural basis of mental models for simulating physics. Linking our work in the course is an experiment in synthesis.

Q: What current research in vision, neuroscience, and art is being explored at MIT, and how does the class connect it to hands-on practice?

A: Our brains build a rich world of experience and expectation from limited and noisy sensory data with infinite potential interpretations. In perception research, we seek to discover how the brain finds more meaning in incoming data than is explained by the signal alone. Work being done at MIT around generative models addresses this, for instance in the labs of Josh Tenenbaum and Josh McDermott in the Department of Brain and Cognitive Sciences. Researchers present an ambiguous visual or auditory stimulus and by probing someone’s perceptual interpretation, they get a handle on the structures that the mind generates to interpret incoming data, and they can begin to build computational models of the process.

In Vision in Art and Neuroscience, we focus on the experiential as well as the experimental, probing the perceiver’s experience of the structure-generating process—perceiving perception itself.

As instructors, we face the pedagogical question: what exercises, in the studio, can evoke so striking an experience of students’ own perception that cutting edge research takes on new meaning, understood in the immediacy of seeing? Later in the semester, students face a similar question as artists: How can one create visual environments where viewers experience their own perceptual processing at work? Done well, this experience becomes the artwork itself. Early in the course, students explore the Ganzfeld effect, popularized by artist James Turrell, where the viewer is exposed to an unstructured visual field of uniform illumination. In this experience, one feels the mind struggling to fit models of the world to unstructured input, and attempting this over and over again — an interpretation process which often goes unnoticed when input structure is expected by visual processing architecture. The progression of the course modules follows the hierarchy of visual processing in the brain, which builds increasingly complex interpretations of visual inputs, from brightness and edges to depth, color, and recognizable form.

MIT students first encounter those concepts in the seminar component of the course at the beginning of each week. Later in the week, students translate findings into experimental approaches in the studio. We work with light directly, from introducing a single pinpoint of light into an otherwise completely dark room, to building intricate environments using programmable electronics. Students begin to take this work into their own hands, in small groups and individually, culminating in final projects for exhibition. These exhibitions are truly a highlight of the course. They’re often one of the first times that students have built and shown artworks. That’s been a gift to share with the broader MIT community, and a great learning experience for students and instructors alike.

Q: How has that approach been received by the MIT community?

A: What we’re doing has resonated across disciplines: In addition to neuroscience, we have students and researchers joining us from computer science, mechanical engineering, mathematics, the Media Lab, and ACT (the Program in Art, Culture, and Technology). The course is growing into something larger, a community of practice interested in applying the scientific methodology we develop to study the world, to probe experience, and to articulate models for its generation and replication.

With a mix of undergraduates, graduates, faculty, and artists, we’ve put together installations and symposia — including three on campus so far. The first of these, “Perceiving Perception,” also led to a weekly open studio night where students and collaborators convene for project work. Our second exhibition, “Dessert of the Real,” is on display this spring in the Compton Gallery. This April we’re organizing a symposium in the studio featuring neuroscientists, computer scientists, artists and researchers from MIT and Harvard. We’re reaching beyond campus as well, through off-site installations, collaborations with museums — including the Metropolitan Museum of Art and the Peabody Essex Museum — and a partnership with the ZERO Group in Germany.

We’re eager to involve a broad network of collaborators. It’s an exciting moment in the fields of neuroscience and computing; there is great energy to build technologies that perceive the world like humans do. We stress on the first day of class that perception is a fundamentally creative act. We see the potential for models of perception to themselves be tools for scaling and translating creativity across domains, and for building a deeply creative relationship to our environment.

Guoping Feng elected to American Academy of Arts and Sciences

Four MIT faculty members are among more than 200 leaders from academia, business, public affairs, the humanities, and the arts elected to the American Academy of Arts and Sciences, the academy announced today.

One of the nation’s most prestigious honorary societies, the academy is also a leading center for independent policy research. Members contribute to academy publications, as well as studies of science and technology policy, energy and global security, social policy and American institutions, the humanities and culture, and education.

Those elected from MIT this year are:

  • Dimitri A. Antoniadis, Ray and Maria Stata Professor of Electrical Engineering;
  • Anantha P. Chandrakasan, dean of the School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science;
  • Guoping Feng, the James W. (1963) and Patricia T. Poitras Professor of Brain and Cognitive Sciences; and
  • David R. Karger, professor of electrical engineering.

“We are pleased to recognize the excellence of our new members, celebrate their compelling accomplishments, and invite them to join the academy and contribute to its work,” said David W. Oxtoby, president of the American Academy of Arts and Sciences. “With the election of these members, the academy upholds the ideals of research and scholarship, creativity and imagination, intellectual exchange and civil discourse, and the relentless pursuit of knowledge in all its forms.”

The new class will be inducted at a ceremony in October in Cambridge, Massachusetts.

Since its founding in 1780, the academy has elected leading “thinkers and doers” from each generation, including George Washington and Benjamin Franklin in the 18th century, Maria Mitchell and Daniel Webster in the 19th century, and Toni Morrison and Albert Einstein in the 20th century. The current membership includes more than 200 Nobel laureates and 100 Pulitzer Prize winners.

Halassa named Max Planck Fellow

Michael Halassa was just appointed as one of the newest Max Planck Fellows. His appointment comes through the Max Planck Florida Institute for Neuroscience (MPFI), which aims to forge collaborations between exceptional neuroscientists from around the world to answer fundamental questions about brain development and function. The Max Planck Society selects cutting-edge, active researchers from other institutions for fellow positions for a five-year period to promote interactions and synergies. While the program is a longstanding feature of the Max Planck Society, Halassa and fellow appointee Yi Guo of the University of California, Santa Cruz, are the first fellows selected who are based at U.S. institutions.

Michael Halassa is an associate investigator at the McGovern Institute and an assistant professor in the Department of Brain and Cognitive Sciences at MIT. Halassa’s research focuses on the neural architectures that underlie complex cognitive processes. He is particularly interested in goal-directed attention, our ability to rapidly switch attentional focus based on high-level objectives. For example, when you are in a roomful of colleagues, the mention of your name in a distant conversation can quickly trigger your ‘mind’s ear’ to eavesdrop on that conversation. This contrasts with hearing a name that sounds like yours on television, which does not usually grab your attention in the same way. In certain mental disorders such as schizophrenia, the ability to generate such high-level objectives, while also accounting for context, is perturbed. Recent evidence strongly suggests that the function of the prefrontal cortex and its interactions with a region of the brain called the thalamus may be altered in such disorders. It is this thalamocortical network that Halassa has been studying in mice, where his group has uncovered how the thalamus supports the ability of the prefrontal cortex to generate context-appropriate attentional signals.

The fellowship will support extending Halassa’s work into the tree shrew (Tupaia belangeri), which has been shown to have advanced cognitive abilities compared to mice while also offering many of the circuit-interrogation tools that make the mouse an attractive experimental model.

The Max Planck Florida Institute for Neuroscience (MPFI), a not-for-profit research organization, is part of the world-renowned Max Planck Society, Germany’s most successful research organization. The Max Planck Society traces its roots to 1911 and comprises 84 institutes and research facilities. While most are located in Germany, four institutes and one research facility are located abroad, including the Florida institute with which Halassa will collaborate. The fellow positions were created with the goal of increasing interactions between the Max Planck Society and its institutes and faculty engaged in active research at other universities and institutions, which with this appointment now include MIT.