How motion conveys emotion in the face

While a static emoji can stand in for emotion, in real life we are constantly reading the feelings of others through subtle facial movements. The lift of an eyebrow, the flicker around the lips as a smile emerges, a subtle change around the eyes (or a sudden rolling of the eyes): these are all cues that feed into our ability to understand the emotional state, and the attitude, of others toward us. Ben Deen and Rebecca Saxe have now monitored changes in brain activity as subjects followed face movements in movies of avatars. Their findings argue that we generalize across face part movements in other people, but that a particular cortical region, the face-responsive superior temporal sulcus (fSTS), also responds to isolated movements of individual face parts. Indeed, the fSTS seems to be tied to kinematics, the movement of individual face parts, more than to the implied emotional cause of that movement.

We know that the brain responds to dynamic changes in facial expression, and that these are associated with activity in the fSTS, but how do calculations of these movements play out in the brain?

Do we understand emotional changes by adding up individual features (lifting of eyebrows + rounding of mouth = surprise), or do we assess the entire face in a more holistic way that results in more generalized representations? McGovern Investigator Rebecca Saxe and her graduate student Ben Deen set out to answer this question using behavioral analysis and brain imaging, specifically fMRI.

“We had a good sense of what stimuli the fSTS responds strongly to,” explains Ben Deen, “but didn’t really have any sense of how those inputs are processed in the region – what sorts of features are represented, whether the representation is more abstract or more tied to visual features, etc. The hope was to use multivoxel pattern analysis, which has proven to be a remarkably useful method for characterizing representational content, to address these questions and get a better sense of what the region is actually doing.”

Facial movements were conveyed to subjects using animated “avatars.” By presenting avatars that made isolated eye and eyebrow movements (brow raise, eye closing, eye roll, scowl) or mouth movements (smile, frown, mouth opening, snarl), as well as composites of these movements, the researchers were able to assess whether our interpretation of a combined movement is distinct from the sum of its parts. To do this, Deen and Saxe first took a behavioral approach in which people reported which combinations of eye and mouth movements they perceived, either in a whole avatar face or in one where the top and bottom halves of the face were misaligned. What they found was that movement in the mouth region can influence perception of movement in the eye region, arguably due to some level of holistic processing. The authors then asked whether there were cortical differences upon viewing isolated versus combined face part movements. They found that the fSTS, but not other brain regions, showed patterns of activity that discriminated between different facial movements. Indeed, they could decode which part of the avatar’s face was being perceived as moving from fSTS activity alone. The researchers could even model the fSTS response to combined movements as a linear combination of the responses to individual face parts. In short, though the behavioral data indicate that there is holistic processing of complex facial movement, isolated parts-based representations are clearly present as well, a sort of intermediate state.
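That linear-modeling step can be sketched with a toy calculation. Everything below is invented for illustration (a handful of made-up voxel values, not the study's fMRI data): fit two weights so that the response pattern to a combined movement is approximated as a weighted sum of the patterns evoked by the eye and mouth movements alone.

```python
# Hypothetical illustration of modeling a combined-movement response as a
# weighted sum of single-part responses. Voxel values are invented.

def fit_two_weights(eye, mouth, combined):
    """Least-squares fit of combined ~= w_e*eye + w_m*mouth (2x2 normal equations)."""
    see = sum(e * e for e in eye)
    smm = sum(m * m for m in mouth)
    sem = sum(e * m for e, m in zip(eye, mouth))
    sec = sum(e * c for e, c in zip(eye, combined))
    smc = sum(m * c for m, c in zip(mouth, combined))
    det = see * smm - sem * sem
    w_e = (sec * smm - sem * smc) / det
    w_m = (see * smc - sem * sec) / det
    return w_e, w_m

# Toy response patterns (one number per voxel):
eye_pattern = [1.0, 0.2, 0.5, 0.1]
mouth_pattern = [0.1, 0.9, 0.3, 0.8]
# A combined-movement pattern built as 0.6*eye + 0.7*mouth plus tiny noise:
combined = [0.6 * e + 0.7 * m + n for e, m, n in
            zip(eye_pattern, mouth_pattern, [0.01, -0.01, 0.0, 0.01])]

w_e, w_m = fit_two_weights(eye_pattern, mouth_pattern, combined)
print(w_e, w_m)  # close to the generating weights 0.6 and 0.7
```

If the combined response really is (approximately) a linear mixture, the recovered weights land near the values used to build it; a strongly holistic representation would leave large residuals that no weight pair could remove.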

As part of this work, Deen and Saxe took the important step of pre-registering their experimental parameters, before collecting any data, at the Open Science Framework. This step allows others to more easily reproduce the analysis they conducted, since all parameters (the task that subjects are carrying out, the number of subjects needed, the rationale for this number, and the scripts used to analyze data) are openly available.

“Preregistration had a big impact on our workflow for the study,” explained Deen. “More of the work was done up front, in coming up with all of the analysis details and agonizing over whether we were choosing the right strategy, before seeing any of the data. When you tie your hands by making these decisions up front, you start thinking much more carefully about them.”

Pre-registration removes post-hoc researcher subjectivity from the analysis. As an example, because Deen and Saxe predicted that people would be able to discriminate between the faces themselves with high accuracy, they decided ahead of the experiment to focus on analyzing reaction time, rather than looking at the collected data and choosing a measure after the fact. This adds to the overall objectivity of the experiment and is increasingly seen as a robust way to conduct such studies.

How do neurons communicate (so quickly)?

Neurons are the most fundamental unit of the nervous system, and yet, researchers are just beginning to understand how they perform the complex computations that underlie our behavior. We asked Boaz Barak, previously a postdoc in Guoping Feng’s lab at the McGovern Institute and now Senior Lecturer at the School of Psychological Sciences and Sagol School of Neuroscience at Tel Aviv University, to unpack the basics of neuron communication for us.

“Neurons communicate with each other through electrical and chemical signals,” explains Barak. “The electrical signal, or action potential, runs from the cell body area to the axon terminals, through a thin fiber called the axon. Some of these axons can be very long, though most of them are very short. The electrical signal that runs along the axon is based on ion movement. The speed of the signal transmission is influenced by an insulating layer called myelin,” he explains.

Myelin is a fatty layer formed, in the vertebrate central nervous system, by concentric wrapping of oligodendrocyte cell processes around axons. The term “myelin” was coined in 1854 by Virchow (whose penchant for Greek and for naming new structures also led to the terms amyloid, leukemia, and chromatin). In more modern images, the myelin sheath is beautifully visible as concentric spirals surrounding the “tube” of the axon itself. Neurons in the peripheral nervous system are also myelinated, but the cells responsible for myelination are Schwann cells, rather than oligodendrocytes.


“Myelin’s main purpose is to insulate the neuron’s axon,” Barak says. “It speeds up conductivity and the transmission of electrical impulses. Myelin promotes fast transmission of electrical signals mainly by affecting two factors: 1) increasing electrical resistance, or reducing leakage of the electrical signal and ions along the axon, “trapping” them inside the axon and 2) decreasing membrane capacitance by increasing the distance between conducting materials inside the axon (intracellular fluids) and outside of it (extracellular fluids).”

Adjacent sections of axon in a given neuron are each surrounded by a distinct myelin sheath. Unmyelinated gaps between adjacent ensheathed regions of the axon are called nodes of Ranvier, and are critical to fast transmission of action potentials, in what is termed “saltatory conduction.” A useful analogy: if the axon itself is like an electrical wire, myelin is like the insulation that surrounds it, speeding up impulse propagation and overcoming the decrease in action potential size that would occur during transmission along a naked axon due to electrical signal leakage. This is how the myelin sheath promotes the fast transmission that allows neurons to send information over long distances in a timely fashion in the vertebrate nervous system.
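The two factors Barak lists can be put into a back-of-the-envelope passive-cable sketch, using round invented numbers rather than physiological measurements. Each myelin wrap roughly multiplies membrane resistance and divides membrane capacitance, so the length constant (how far a signal spreads before leaking away) grows while the time constant (how fast the membrane charges) barely changes, and a rough speed index rises.

```python
import math

# Toy passive-cable numbers, not measured physiology. With ~100 myelin
# wraps: membrane resistance r_m is multiplied by ~100 and membrane
# capacitance c_m divided by ~100.

def cable_speed_index(r_m, c_m, r_i=1.0):
    lam = math.sqrt(r_m / r_i)   # length constant: how far the signal spreads
    tau = r_m * c_m              # time constant: how fast the membrane charges
    return lam / tau             # rough propagation-speed index

bare = cable_speed_index(r_m=1.0, c_m=1.0)        # unmyelinated axon patch
wrapped = cable_speed_index(r_m=100.0, c_m=0.01)  # ~100 myelin wraps

ratio = wrapped / bare
print(ratio)  # ~10x: the signal spreads farther without charging more slowly
```

Under these toy values the time constant is unchanged (resistance up, capacitance down by the same factor), so all of the gain comes from the longer length constant, which is the qualitative point of the insulation analogy.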

Myelin seems to be critical to healthy functioning of the nervous system; in fact, disruptions in the myelin sheath have been linked to a variety of disorders.

Former McGovern postdoc, Boaz Barak. Photo: Justin Knight

“Abnormal myelination can arise from abnormal development caused by genetic alterations,” Barak explains further. “Demyelination can even occur due to an autoimmune response, trauma, and other causes. In neurological conditions in which myelin properties are abnormal, as in the case of lesions or plaques, signal transmission can be affected. For example, defects in myelin can lead to a lack of neuronal communication, as there may be a delay or reduction in transmission of electrical and chemical signals. Also, in cases of abnormal myelination, it is possible that the synchronicity of brain region activity might be affected, for example, leading to improper actions and behaviors.”

Researchers are still working to fully understand the role of myelin in disorders. Myelin has a long history of being elusive, though, with its origins in the central nervous system remaining unclear for many years. For a period of time, the axon itself was thought to produce myelin, and it was only after initial discovery (by Robertson, 1899), re-discovery (Del Rio-Hortega, 1919), and skepticism followed by eventual confirmation that the role of oligodendrocytes in forming myelin became clear. With modern imaging and genetic tools, we should be able to increasingly understand its role in both the healthy and the compromised nervous system.

Do you have a question for The Brain? Ask it here.

Ila Fiete joins the McGovern Institute

Ila Fiete, an associate professor in the Department of Brain and Cognitive Sciences at MIT, recently joined the McGovern Institute as an associate investigator. Fiete is working to understand the circuits that underlie short-term memory, integration, and inference in the brain.

Think about the simple act of visiting a new town and getting to know its layout as you explore it. What places are reachable from others? Where are landmarks relative to each other? Where are you relative to these landmarks? How do you get from here to where you want to go next?

The process that occurs as your brain tries to transform the few routes you follow into a coherent map of the world is just one of myriad examples of hard computations that the brain is constantly performing. Fiete’s goal is to understand how the brain is able to carry out such computations, and she is developing and using multiple tools to this end. These approaches include pure theoretical approaches to examine neural codes, building numerical dynamical models of circuit operation, and techniques to extract information about the underlying circuit dynamics from neural data.

Spatial navigation is a particularly interesting nut to crack from a neural perspective: the mapping devices on your phone have access to global satellite data, previously constructed detailed maps of the town, various additional sensors, and excellent non-leaky memory. By contrast, the brain must build maps, plan routes, and determine goals all using noisy local sensors, no externally provided maps, and noisy, forgetful, or leaky neurons. Fiete is particularly interested in elucidating how the brain deals with noisy and ambiguous cues from the world to arrive at robust estimates that resolve ambiguity. She is also interested in how the networks that are important for memory and integration arise through plasticity, learning, and development in the brain.
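The contrast can be made concrete with a toy leaky integrator, with every parameter invented for illustration: estimating position by summing noisy velocity readings with imperfect memory, the situation a neural circuit faces, versus the lossless integration a mapping device can perform.

```python
import random

# Toy path integration. All numbers are invented for illustration.

def integrate(velocities, leak=0.0, noise=0.0, seed=0):
    """Accumulate velocity readings into a position estimate.
    leak: fraction of the stored estimate forgotten each step.
    noise: standard deviation of the error on each velocity reading."""
    rng = random.Random(seed)
    position = 0.0
    for v in velocities:
        position = (1.0 - leak) * position + v + rng.gauss(0.0, noise)
    return position

true_path = [1.0] * 50                       # walk 50 steps of length 1
ideal = integrate(true_path)                 # perfect memory, clean sensors
neural = integrate(true_path, leak=0.02, noise=0.1)

print(ideal)           # 50.0
print(neural < ideal)  # True: leak steadily erodes the stored estimate
```

Even a 2% per-step leak makes the estimate fall well short of the true distance, which is one way of seeing why robust neural integration is a hard computational problem.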

Fiete earned a BS in mathematics and physics at the University of Michigan, then obtained her PhD in 2004 from the Department of Physics at Harvard University. She held a postdoctoral appointment at the Kavli Institute for Theoretical Physics at the University of California, Santa Barbara from 2004 to 2006, while she was also a visiting member of the Center for Theoretical Biophysics at the University of California at San Diego. Fiete subsequently spent two years at Caltech as a Broad Fellow in brain circuitry, and in 2008 joined the faculty of the University of Texas at Austin. She is currently an HHMI faculty scholar.

Peering under the hood of fake-news detectors

New work from researchers at the McGovern Institute for Brain Research at MIT peers under the hood of an automated fake-news detection system, revealing how machine-learning models catch subtle but consistent differences in the language of factual and false stories. The research also underscores how fake-news detectors should undergo more rigorous testing to be effective for real-world applications.

Popularized as a concept in the United States during the 2016 presidential election, fake news is a form of propaganda created to mislead readers, in order to generate views on websites or steer public opinion.

Almost as quickly as the issue became mainstream, researchers began developing automated fake-news detectors: neural networks that “learn” from large amounts of data to recognize linguistic cues indicative of false articles. Given new articles to assess, these networks can, with fairly high accuracy, separate fact from fiction in controlled settings.

One issue, however, is the “black box” problem — meaning there’s no telling what linguistic patterns the networks analyze during training. They’re also trained and tested on the same topics, which may limit their potential to generalize to new topics, a necessity for analyzing news across the internet.

In a paper presented at the Conference and Workshop on Neural Information Processing Systems, the researchers tackle both of those issues. They developed a deep-learning model that learns to detect language patterns of fake and real news. Part of their work “cracks open” the black box to find the words and phrases the model captures to make its predictions.

Additionally, they tested their model on a novel topic it didn’t see in training. This approach classifies individual articles based solely on language patterns, which more closely represents a real-world application for news readers. Traditional fake news detectors classify articles based on text combined with source information, such as a Wikipedia page or website.

“In our case, we wanted to understand what was the decision-process of the classifier based only on language, as this can provide insights on what is the language of fake news,” says co-author Xavier Boix, a postdoc in the lab of Eugene McDermott Professor Tomaso Poggio at the Center for Brains, Minds, and Machines (CBMM), a National Science Foundation-funded center housed within the McGovern Institute.

“A key issue with machine learning and artificial intelligence is that you get an answer and don’t know why you got that answer,” says graduate student and first author Nicole O’Brien ’17. “Showing these inner workings takes a first step toward understanding the reliability of deep-learning fake-news detectors.”

The model identifies sets of words that tend to appear more frequently in either real or fake news, some perhaps obvious, others much less so. The findings, the researchers say, point to subtle yet consistent differences between fake news, which favors exaggerations and superlatives, and real news, which leans more toward conservative word choices.

“Fake news is a threat for democracy,” Boix says. “In our lab, our objective isn’t just to push science forward, but also to use technologies to help society. … It would be powerful to have tools for users or companies that could provide an assessment of whether news is fake or not.”

The paper’s other co-authors are Sophia Latessa, an undergraduate student in CBMM; and Georgios Evangelopoulos, a researcher in CBMM, the McGovern Institute for Brain Research, and the Laboratory for Computational and Statistical Learning.

Limiting bias

The researchers’ model is a convolutional neural network that trains on a dataset of fake news and real news. For training and testing, the researchers used a popular fake-news research dataset hosted on Kaggle, which contains around 12,000 fake news sample articles from 244 different websites. They also compiled a dataset of real news samples, using more than 2,000 from the New York Times and more than 9,000 from The Guardian.

In training, the model captures the language of an article as “word embeddings,” where words are represented as vectors — basically, arrays of numbers — with words of similar semantic meanings clustered closer together. In doing so, it captures triplets of words as patterns that provide some context — such as, say, a negative comment about a political party. Given a new article, the model scans the text for similar patterns and sends them through a series of layers. A final output layer determines the probability of each pattern: real or fake.
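The pipeline just described can be sketched end to end in miniature. The embeddings and filter weights below are hand-set toy values, not anything learned by the authors' model: slide a window over each word triplet, score it against a filter, max-pool the scores, and squash the result into a probability of “fake.”

```python
import math

# Toy sketch of a convolutional text classifier. The two-dimensional
# "embeddings" and the single filter are invented, not trained weights.

EMBED = {
    "the": [0.1, 0.0], "shocking": [0.9, 0.8], "truth": [0.7, 0.6],
    "report": [0.1, 0.2], "was": [0.0, 0.1], "published": [0.2, 0.1],
}
FILTER = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]   # one filter spanning a 3-word window
BIAS = -2.0

def fake_probability(words):
    scores = []
    for i in range(len(words) - 2):                # every consecutive triplet
        window = EMBED[words[i]] + EMBED[words[i + 1]] + EMBED[words[i + 2]]
        scores.append(sum(w * x for w, x in zip(FILTER, window)) + BIAS)
    pooled = max(scores)                           # max-pool over triplets
    return 1.0 / (1.0 + math.exp(-pooled))         # output layer: sigmoid

print(fake_probability(["the", "shocking", "truth"]) > 0.5)        # True
print(fake_probability(["the", "report", "was", "published"]) > 0.5)  # False
```

In the real model the filter weights are learned from the labeled articles, and there are many filters and layers rather than one; the toy version only shows how a triplet window plus pooling turns raw text into a single probability.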

The researchers first trained and tested the model in the traditional way, using the same topics. But they thought this might create an inherent bias in the model, since certain topics are more often the subject of fake or real news. For example, fake news stories are generally more likely to include the words “Trump” and “Clinton.”

“But that’s not what we wanted,” O’Brien says. “That just shows topics that are strongly weighting in fake and real news. … We wanted to find the actual patterns in language that are indicative of those.”

Next, the researchers trained the model on all topics without any mention of the word “Trump,” and tested the model only on samples that had been set aside from the training data and that did contain the word “Trump.” While the traditional approach reached 93-percent accuracy, the second approach reached 87-percent accuracy. This accuracy gap, the researchers say, highlights the importance of using topics held out from the training process, to ensure the model can generalize what it has learned to new topics.
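The held-out-topic evaluation reduces to a simple split rule, sketched here on a toy article list rather than the Kaggle/Times/Guardian data: train only on articles that never mention the held-out word, test only on those that do.

```python
# Toy held-out-topic split. The articles and labels are invented.

articles = [
    ("trump rally draws crowd", "fake"),
    ("clinton speech transcript", "real"),
    ("local team wins game", "real"),
    ("shocking miracle cure found", "fake"),
]

held_out = "trump"
train = [(text, label) for text, label in articles
         if held_out not in text.split()]          # never mentions the topic
test = [(text, label) for text, label in articles
        if held_out in text.split()]               # only the held-out topic

print(len(train), len(test))  # 3 1
```

A model that scores well on the `test` side of such a split must be using general language patterns rather than memorized topic words, which is exactly what the 93-percent-versus-87-percent comparison probes.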

More research needed

To open the black box, the researchers retraced their steps. Each time the model makes a prediction about a word triplet, a certain part of the model activates, depending on whether the triplet is more likely from a real or a fake news story. The researchers designed a method to retrace each prediction back to its designated part and then find the exact words that made it activate.

More research is needed to determine how useful this information is to readers, Boix says. In the future, the model could potentially be combined with, say, automated fact-checkers and other tools to give readers an edge in combating misinformation. After some refining, the model could also be the basis of a browser extension or app that alerts readers to potential fake news language.

“If I just give you an article, and highlight those patterns in the article as you’re reading, you could assess if the article is more or less fake,” he says. “It would be kind of like a warning to say, ‘Hey, maybe there is something strange here.’”

Joining the dots in large neural datasets

You might have played ‘join the dots’, a puzzle where numbers guide you to draw until a complete picture emerges. But imagine a complex underlying image with no numbers to guide the sequence of joining. This is a problem that challenges scientists who work with large amounts of neural data. Sometimes they can align data to a stereotyped behavior, and thus define a sequence of neuronal activity underlying navigation of a maze or singing of a song learned and repeated across generations of birds. But most natural behavior is not stereotyped, and when it comes to sleeping, imagining, and other higher order activities, there is not even a physical behavioral readout for alignment. Michale Fee and colleagues have now developed an algorithm, seqNMF, that can recognize relevant sequences of neural activity, even when there is no guide to align to, such as an overt sequence of behaviors or notes.

“This method allows you to extract structure from the internal life of the brain without being forced to make reference to inputs or output,” says Michale Fee, a neuroscientist at the McGovern Institute at MIT, Associate Department Head and Glen V. and Phyllis F. Dorflinger Professor of Neuroscience in the Department of Brain and Cognitive Sciences, and investigator with the Simons Collaboration on the Global Brain. Fee conducted the study in collaboration with Mark S. Goldman of the University of California, Davis.

In order to achieve this, the authors of the study, co-led by Emily L. Mackevicius and Andrew H. Bahle of the McGovern Institute, took convolutional non-negative matrix factorization (NMF), a tool that allows extraction of sparse but important features from complex and noisy data, and developed it so that it can be used to extract sequences over time that are related to a learned behavior or song. The new algorithm still relies on repetition, but on tell-tale repetitions of neural activity rather than simplistic repetitions in the animal’s behavior. seqNMF can follow repeated sequences of firing that are not tied to a specific external reference timeframe, and can extract relevant sequences of neural firing in an unsupervised fashion, without the researcher supplying prior information.
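The factorization at the heart of the method can be illustrated with the classic, non-convolutional form of NMF (the multiplicative updates of Lee and Seung). seqNMF extends the convolutional variant with extra penalty terms, so this sketch shows only the core idea, on a tiny invented data matrix.

```python
import random

# Plain NMF on a toy "neurons x time" matrix; not the seqNMF algorithm
# itself, just the underlying factorization idea. Data values are invented.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def nmf(V, k, iters=200, seed=1):
    """Factor nonnegative V (n x t) into W (n x k) and H (k x t)."""
    rng = random.Random(seed)
    n, t = len(V), len(V[0])
    W = [[rng.random() for _ in range(k)] for _ in range(n)]
    H = [[rng.random() for _ in range(t)] for _ in range(k)]
    eps = 1e-9
    for _ in range(iters):
        # multiplicative update for H, then for W; error never increases
        Wt = transpose(W)
        num, den = matmul(Wt, V), matmul(Wt, matmul(W, H))
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(t)] for i in range(k)]
        Ht = transpose(H)
        num, den = matmul(V, Ht), matmul(matmul(W, H), Ht)
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)] for i in range(n)]
    return W, H

# Two interleaved "activity patterns" mixed into one small data matrix:
V = [[1, 0, 1, 0], [1, 0, 1, 0], [0, 1, 0, 1]]
W, H = nmf(V, k=2)
WH = matmul(W, H)
error = sum((V[i][j] - WH[i][j]) ** 2 for i in range(3) for j in range(4))
print(error < 0.1)  # two nonnegative factors reconstruct the toy data
```

The key property, shared by the convolutional extension, is that the factors are forced to be nonnegative, so they tend to come out as sparse, interpretable parts (here, the two patterns) rather than arbitrary mixtures.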

In the current study, the authors initially applied and honed the system on synthetic datasets. These datasets started to show them that the algorithm could “join the dots” without additional informational input. When seqNMF performed well in these tests, they applied it to available open source data from rats, finding that they could extract sequences of neural firing in the hippocampus that are relevant to finding a water reward in a maze.

Having passed these initial tests, the authors upped the ante and challenged seqNMF to find relevant neural activity sequences in a non-stereotyped behavior: improvised singing by zebra finches that have not learned the signature song of their species (untutored birds). The authors analyzed neural data from HVC, a region of the bird brain previously linked to song learning. Since normal adult bird song is stereotyped, the researchers could align neural activity with features in the song itself for well-tutored birds. Fee and colleagues then turned to untutored birds and found that these birds still had repeated neural sequences related to their “improvised” song, reminiscent of those in tutored birds but messier. Indeed, the brain of an untutored bird will even initiate two distinct neural signatures at the same time, but seqNMF is able to see past the resulting neural cacophony and decipher that multiple overlapping patterns are present. Finding this level of order in such neural datasets is nearly impossible using previous methods of analysis.

seqNMF can be applied, potentially, to any neural activity, and the researchers are now testing whether the algorithm can indeed be generalized to extract information from other types of neural data. In other words, now that it’s clear that seqNMF can find a relevant sequence of neural activity for a non-stereotypical behavior, scientists can examine whether the neural basis of behaviors in other organisms and even for activities such as sleep and imagination can be extracted. Indeed, seqNMF is available on GitHub for researchers to apply to their own questions of interest.

Welcoming the first McGovern Fellows

We are delighted to kick off the new year by welcoming Omar Abudayyeh and Jonathan Gootenberg as the first members of our new McGovern Institute Fellows Program. The fellows program is a recently launched initiative that supports highly talented postdocs who are ready to initiate their own research programs.

As McGovern Fellows, the pair will be given space, time, and support to help them follow scientific research directions of their own choosing. This provides an alternative to the traditional postdoctoral research route.

Abudayyeh and Gootenberg both defended their theses in the fall of 2018, and graduated from the lab of Feng Zhang, who is the James and Patricia Poitras Professor of Neuroscience at MIT, a McGovern investigator, and a core member of the Broad Institute. During their time in the Zhang lab, Abudayyeh and Gootenberg worked on projects that sought and found new tools based on enzymes mined from bacterial CRISPR systems. Cas9 is the original programmable single-effector DNA-editing enzyme, and the new McGovern Fellows worked on teams that actively looked for CRISPR enzymes with properties distinct from and complementary to Cas9. In the course of their thesis work, they helped to identify RNA-guided RNA-editing factors such as the Cas13 family. This work led to the development of the REPAIR system, which is capable of editing RNA, thus providing a CRISPR-based therapeutic avenue that is not based on permanent, heritable changes to the genome. In addition, they worked on a Cas13-based diagnostic system called SHERLOCK that can detect specific nucleic acid sequences. SHERLOCK is able to detect the presence of infectious agents such as Zika virus in an easily deployable lateral-flow format, similar to a pregnancy test.

We are excited to see the directions that the new McGovern Fellows take as they now arrive at the institute, and will keep you posted on scientific findings as they emerge from their labs.

 

What is CRISPR?

CRISPR (which stands for Clustered Regularly Interspaced Short Palindromic Repeats) is not actually a single entity, but shorthand for a set of bacterial systems that are found in a hallmark arrangement in bacterial genomes.

When CRISPR is mentioned, most people are likely thinking of CRISPR-Cas9, now widely known for its capacity to be re-deployed to target sequences of interest in eukaryotic cells, including human cells. Cas9 can be programmed to target specific stretches of DNA, but other enzymes have since been discovered that are able to edit DNA, including Cpf1 and Cas12b. Other CRISPR enzymes, Cas13 family members, can be programmed to target RNA and even edit and change its sequence.

The common theme that makes CRISPR enzymes so powerful is that scientists can supply them with a guide RNA for a chosen sequence. Since the guide RNA can pair very specifically with DNA (or, for Cas13 family members, RNA), researchers can essentially provide a given CRISPR enzyme with a way of homing in on any sequence of interest. Once a CRISPR protein finds its target, it can be used to edit that sequence, perhaps removing a disease-associated mutation.
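The “homing in” idea reduces, at its simplest, to a string search. The sequences below are made up, and real targeting also requires a PAM site next to the target and tolerates defined mismatch rules, all of which this sketch ignores: the guide base-pairs with one DNA strand, so its DNA-alphabet version (U replaced by T) appears directly on the other strand.

```python
# Toy illustration of programmable targeting. Sequences are invented, and
# PAM requirements and mismatch tolerance are deliberately ignored.

def find_target(genome, guide_rna):
    """Return the position where the guide's DNA-alphabet version occurs."""
    protospacer = guide_rna.replace("U", "T")   # RNA guide -> DNA protospacer
    return genome.find(protospacer)             # -1 if there is no target site

genome = "GGCATTACGTACGATCGGTA"
guide = "ACGUACGAUC"            # a guide "designed" against this toy locus
print(find_target(genome, guide))  # 6
```

Changing the guide retargets the search without touching the enzyme, which is the sense in which a single CRISPR protein is programmable.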

In addition, CRISPR proteins have been engineered to modulate gene expression and even signal the presence of particular sequences, as in the case of the Cas13-based diagnostic, SHERLOCK.

Do you have a question for The Brain? Ask it here.

Tracking down changes in ADHD

Attention deficit hyperactivity disorder (ADHD) is marked by difficulty maintaining focus on tasks, and increased activity and impulsivity. These symptoms ultimately interfere with the ability to learn and function in daily tasks, but the source of the problem could lie at different levels of brain function, and it is hard to parse out exactly what is going wrong.

A new study co-authored by McGovern Institute Associate Investigator Michael Halassa has developed tasks that dissociate lower-level from higher-level brain functions, so that disruptions to these processes can be more specifically pinpointed in ADHD. The results of this study, carried out in collaboration with co-corresponding authors Wei Ji Ma and Andra Mihali and researchers from New York University, illuminate how brain function is disrupted in ADHD, and highlight a role for perceptual deficits in this condition.

The underlying deficit in ADHD has largely been attributed to executive function — higher order processing and the ability of the brain to integrate information and focus attention. But there have been some hints, largely through reports from those with ADHD, that the very ability to accurately receive sensory information might be altered. Some people with ADHD, for example, have reported impaired visual function and even changes in color processing. Cleanly separating these perceptual brain functions from the impact of higher-order cognitive processes has proven difficult, however; it has not been clear whether people with and without ADHD encode visual signals received by the eye in the same way.

“We realized that psychiatric diagnoses in general are based on clinical criteria and patient self-reporting,” says Halassa, who is also a board certified psychiatrist and an assistant professor in MIT’s Department of Brain and Cognitive Sciences. “Psychiatric diagnoses are imprecise, but neurobiology is progressing to the point where we can use well-controlled parameters to standardize criteria, and relate disorders to circuits,” he explains. “If there are problems with attention, is it the spotlight of attention itself that’s affected in ADHD, or the ability of a person to control where this spotlight is focused?”

To test how people with and without ADHD encode visual signals in the brain, Halassa, Ma, Mihali, and collaborators devised a perceptual encoding task in which subjects were asked to provide answers to simple questions about the orientation and color of lines and shapes on a screen. The simplicity of this test aimed to remove high-level cognitive input and provide a measure of accurate perceptual coding.

To measure higher-level executive function, the researchers provided subjects with rules about which features and screen areas were relevant to the task, and they switched relevance throughout the test. They monitored whether subjects cognitively adapted to the switch in rules – an indication of higher-order brain function. The authors also analyzed psychometric curve parameters, common in psychophysics, but not yet applied to ADHD.

“These psychometric parameters give us specific information about the parts of sensory processing that are being affected,” explains Halassa. “So, if you were to put on sunglasses, that would shift threshold, indicating that input is being affected, but this wouldn’t necessarily affect the slope of the psychometric function. If the slope is affected, this starts to reflect difficulty in seeing a line or color. In other words, these tests give us a finer readout of behavior, and how to map this onto particular circuits.”
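Halassa's sunglasses example maps onto a simple logistic psychometric function, sketched here with toy parameter values rather than the study's fitted numbers: shifting the threshold slides the curve along the stimulus axis, while lowering the slope flattens it, which corresponds to genuine difficulty discriminating the stimulus.

```python
import math

# Toy psychometric function; threshold and slope values are invented.

def psychometric(stimulus, threshold, slope):
    """Probability of a correct report as a logistic function of stimulus strength."""
    return 1.0 / (1.0 + math.exp(-slope * (stimulus - threshold)))

baseline = psychometric(1.0, threshold=1.0, slope=4.0)      # at threshold
sunglasses = psychometric(1.0, threshold=1.5, slope=4.0)    # threshold shifted up
shallow = psychometric(2.0, threshold=1.0, slope=1.0)       # flattened curve
steep = psychometric(2.0, threshold=1.0, slope=4.0)

print(baseline)             # 0.5: performance at threshold
print(sunglasses < baseline)  # True: same stimulus, now effectively dimmer
print(shallow < steep)        # True: lower slope means less reliable reports
```

Separating these two parameters is what lets the task distinguish “the input is attenuated” from “the observer is fundamentally worse at telling the stimuli apart.”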

The authors found that changes in visual perception were robustly associated with ADHD, and that these changes were correlated with cognitive function. Individuals with more clinically severe ADHD scored lower on executive function, and basic perception also tracked with these clinical records of disease severity. The authors could even sort ADHD from control subjects based on perceptual variability alone. In short, changes in perception itself are clearly present in this ADHD cohort, and they decline alongside changes in executive function.

“This was unexpected,” points out Halassa. “We didn’t expect so much to be explained by lower sensitivity to stimuli, and to see that these tasks become harder as cognitive pressure increases. It wasn’t clear that cognitive circuits might influence processing of stimuli.”

Understanding the true basis of changes in behavior in disorders such as ADHD can be hard to tease apart, but the study gives more insight into changes in the ADHD brain, and supports the idea that quantitative follow up on self-reporting by patients can drive a stronger understanding — and possible targeted treatment — of such disorders. Testing a larger number of ADHD patients and validating these measures on a larger scale is now the next research priority.

School of Science welcomes 10 professors

The MIT School of Science recently welcomed 10 new professors, including Ila Fiete, in the departments of Brain and Cognitive Sciences, Chemistry, Biology, Physics, Mathematics, and Earth, Atmospheric and Planetary Sciences.

Ila Fiete uses computational and theoretical tools to better understand the dynamical mechanisms and coding strategies that underlie computation in the brain, with a focus on elucidating how plasticity and development shape networks to perform computation and why information is encoded the way that it is. Her recent focus is on error control in neural codes, rules for synaptic plasticity that enable neural circuit organization, and questions at the nexus of information and dynamics in neural systems, such as understanding how coding and statistics fundamentally constrain dynamics, and vice versa.

Tristan Collins conducts research at the intersection of geometric analysis, partial differential equations, and algebraic geometry. In joint work with Valentino Tosatti, Collins described the singularity formation of the Ricci flow on Kähler manifolds in terms of algebraic data. In recent work with Gabor Szekelyhidi, he gave a necessary and sufficient algebraic condition for the existence of Ricci-flat metrics, which play an important role in string theory and mathematical physics. This result led to the discovery of infinitely many new Einstein metrics on the 5-dimensional sphere. With Shing-Tung Yau and Adam Jacob, Collins is currently studying the relationship between categorical stability conditions and the existence of solutions to differential equations arising from mirror symmetry.

Collins earned his BS in mathematics at the University of British Columbia in 2009, after which he completed his PhD in mathematics at Columbia University in 2014 under the direction of Duong H. Phong. Following a four-year appointment as a Benjamin Peirce Assistant Professor at Harvard University, Collins joins MIT as an assistant professor in the Department of Mathematics.

Julien de Wit develops and applies new techniques to study exoplanets, their atmospheres, and their interactions with their stars. While a graduate student in the Sara Seager group at MIT, he developed innovative analysis techniques to map exoplanet atmospheres, studied the radiative and tidal planet-star interactions in eccentric planetary systems, and constrained the atmospheric properties and mass of exoplanets solely from transmission spectroscopy. He plays a critical role in the TRAPPIST/SPECULOOS project, headed by the Université de Liège, leading the atmospheric characterization of the newly discovered TRAPPIST-1 planets, for which he has already obtained significant results with the Hubble Space Telescope. De Wit's efforts are now also focused on expanding the SPECULOOS network of telescopes in the northern hemisphere to continue the search for new potentially habitable TRAPPIST-1-like systems.

De Wit earned a BEng in physics and mechanics from the Université de Liège in Belgium in 2008, followed by an MS in aeronautic engineering and an MRes in astrophysics, planetology, and space sciences from the Institut Supérieur de l'Aéronautique et de l'Espace at the Université de Toulouse, France, in 2010; he then returned to the Université de Liège for an MS in aerospace engineering, completed in 2011. After finishing his PhD in planetary sciences in 2014 and a postdoc at MIT, both under the direction of Sara Seager, he joins the MIT faculty in the Department of Earth, Atmospheric and Planetary Sciences as an assistant professor.

After earning a BS in mathematics and physics at the University of Michigan, Fiete obtained her PhD in 2004 at Harvard University in the Department of Physics. While holding an appointment at the Kavli Institute for Theoretical Physics at the University of California, Santa Barbara from 2004 to 2006, she was also a visiting member of the Center for Theoretical Biophysics at the University of California at San Diego. Fiete subsequently spent two years at Caltech as a Broad Fellow in brain circuitry, and in 2008 joined the faculty of the University of Texas at Austin. She joins the MIT faculty in the Department of Brain and Cognitive Sciences as an associate professor with tenure.

Ankur Jain explores the biology of RNA aggregation. Several genetic neuromuscular disorders, such as myotonic dystrophy and amyotrophic lateral sclerosis, are caused by expansions of nucleotide repeats in their cognate disease genes. Such repeats cause the transcribed RNA to form pathogenic clumps or aggregates. Jain uses a variety of biophysical approaches to understand how the RNA aggregates form, and how they can be disrupted to restore normal cell function. Jain will also study the role of RNA-DNA interactions in chromatin organization, investigating whether the RNA transcribed from telomeres (the protective repetitive sequences that cap the ends of chromosomes) undergoes the phase separation that characterizes repeat expansion diseases.

Jain completed a bachelor of technology degree in biotechnology and biochemical engineering at the Indian Institute of Technology Kharagpur, India, in 2007, followed by a PhD in biophysics and computational biology at the University of Illinois at Urbana-Champaign under the direction of Taekjip Ha in 2013. After a postdoc at the University of California at San Francisco, he joins the MIT faculty in the Department of Biology as an assistant professor, with an appointment as a member of the Whitehead Institute for Biomedical Research.

Kiyoshi Masui works to understand fundamental physics and the evolution of the universe through observations of the large-scale structure — the distribution of matter on scales much larger than galaxies. He works principally with radio-wavelength surveys to develop new observational methods such as hydrogen intensity mapping and fast radio bursts. Masui has shown that such observations will ultimately permit precise measurements of properties of the early and late universe and enable sensitive searches for primordial gravitational waves. To this end, he is working with a new generation of rapid-survey digital radio telescopes that have no moving parts and rely on signal processing software running on large computer clusters to focus and steer, including work on the Canadian Hydrogen Intensity Mapping Experiment (CHIME).

Masui obtained a BSc in engineering physics at Queen's University, Canada, in 2008 and a PhD in physics at the University of Toronto in 2013 under the direction of Ue-Li Pen. After postdoctoral appointments at the University of British Columbia as a Canadian Institute for Advanced Research Global Scholar and a Canadian Institute for Theoretical Astrophysics National Fellow, Masui joins the MIT faculty in the Department of Physics as an assistant professor.

Phiala Shanahan studies theoretical nuclear and particle physics, in particular the structure and interactions of hadrons and nuclei from the fundamental (quark and gluon) degrees of freedom encoded in the Standard Model of particle physics. Shanahan’s recent work has focused on the role of gluons, the force carriers of the strong interactions described by quantum chromodynamics (QCD), in hadron and nuclear structure by using analytic tools and high-performance supercomputing. She recently achieved the first calculation of the gluon structure of light nuclei, making predictions that will be testable in new experiments proposed at Jefferson National Accelerator Facility and at the planned Electron-Ion Collider. She has also undertaken extensive studies of the role of strange quarks in the proton and light nuclei that sharpen theory predictions for dark matter cross-sections in direct detection experiments. To overcome computational limitations in QCD calculations for hadrons and in particular for nuclei, Shanahan is pursuing a program to integrate modern machine learning techniques in computational nuclear physics studies.

Shanahan obtained her BS in 2012 and her PhD in 2015, both in physics, from the University of Adelaide. She completed postdoctoral work at MIT in 2017, then held a joint position as an assistant professor at the College of William and Mary and senior staff scientist at the Thomas Jefferson National Accelerator Facility until 2018. She returns to MIT in the Department of Physics as an assistant professor.

Nike Sun works in probability theory at the interface of statistical physics and computation. Her research focuses in particular on phase transitions in average-case (randomized) formulations of classical computational problems. Her joint work with Jian Ding and Allan Sly establishes the satisfiability threshold of random k-SAT for large k, and relatedly the independence ratio of random regular graphs of large degree. Both are long-standing open problems where heuristic methods of statistical physics yield detailed conjectures, but few rigorous techniques exist. More recently she has been investigating phase transitions of dense graph models.

Sun completed a BA in mathematics and an MA in statistics at Harvard in 2009, and an MASt in mathematics at Cambridge in 2010. She received her PhD in statistics from Stanford University in 2014 under the supervision of Amir Dembo. She held a Schramm fellowship at Microsoft New England and MIT Mathematics in 2014-2015 and a Simons postdoctoral fellowship at the University of California at Berkeley in 2016, and joined the Berkeley Department of Statistics as an assistant professor in 2016. She returns to the MIT Department of Mathematics as an associate professor with tenure.

Alison Wendlandt focuses on the development of selective, catalytic reactions using the tools of organic and organometallic synthesis and physical organic chemistry. Mechanistic study plays a central role in the development of these new transformations. Her projects involve the design of new catalysts and catalytic transformations, identification of important applications for selective catalytic processes, and elucidation of new mechanistic principles to expand powerful existing catalytic reaction manifolds.

Wendlandt received a BS in chemistry and biological chemistry from the University of Chicago in 2007, an MS in chemistry from Yale University in 2009, and a PhD in chemistry from the University of Wisconsin at Madison in 2015 under the direction of Shannon S. Stahl. Following an NIH Ruth L. Kirschstein Postdoctoral Fellowship at Harvard University, Wendlandt joins the MIT faculty in the Department of Chemistry as an assistant professor.

Chenyang Xu specializes in higher-dimensional algebraic geometry, an area that involves classifying algebraic varieties, primarily through the minimal model program (MMP). The MMP was introduced by Fields Medalist Shigefumi Mori in the early 1980s to make advances in higher-dimensional birational geometry, and was further developed by Hacon and McKernan in the mid-2000s so that it could be applied to other questions. Collaborating with Hacon, Xu extended the MMP to varieties satisfying certain conditions, such as those of characteristic p, and, with Hacon and McKernan, proved a fundamental conjecture on the MMP, generating a great deal of follow-up activity. In collaboration with Chi Li, Xu proved a conjecture of Gang Tian concerning higher-dimensional Fano varieties, a significant achievement. In a series of papers with different collaborators, he successfully applied the MMP to singularities.

Xu received his BS in 2002 and MS in 2004 in mathematics from Peking University, and completed his PhD at Princeton University under János Kollár in 2008. He was a CLE Moore Instructor at MIT from 2008 to 2011, and was subsequently appointed assistant professor at the University of Utah. He returned to Peking University as a research fellow at the Beijing International Center of Mathematical Research in 2012, and was promoted to professor in 2013. Xu joins the MIT faculty as a full professor in the Department of Mathematics.

Zhiwei Yun’s research is at the crossroads of algebraic geometry, number theory, and representation theory. He studies geometric structures with the aim of solving problems in representation theory and number theory, especially those in the Langlands program. While a CLE Moore Instructor at MIT, he began developing the theory of rigid automorphic forms, and used it to answer an open question of J-P Serre on motives, which also led to a major result on the inverse Galois problem in number theory. More recently, in joint work with Wei Zhang, he gave a geometric interpretation of higher derivatives of automorphic L-functions in terms of intersection numbers, which sheds new light on the geometric analogue of the Birch and Swinnerton-Dyer conjecture.

Yun earned his BS at Peking University in 2004, after which he completed his PhD at Princeton University in 2009 under the direction of Robert MacPherson. After appointments at the Institute for Advanced Study and as a CLE Moore Instructor at MIT, he held faculty appointments at Stanford and Yale. He returned to the MIT Department of Mathematics as a full professor in the spring of 2018.

Feng Zhang named winner of the 2018 Keio Medical Science Prize

Feng Zhang and Masashi Yanagisawa have been named the 2018 winners of the prestigious Keio Medical Science Prize. Zhang is being recognized for the groundbreaking development of CRISPR-Cas9-mediated genome engineering in cells and its application for medical science. Zhang is an HHMI Investigator and the James and Patricia Poitras Professor of Neuroscience at MIT, an associate professor in MIT’s Departments of Brain and Cognitive Sciences and Biological Engineering, an investigator at the McGovern Institute for Brain Research, and a core member of the Broad Institute of MIT and Harvard. Masashi Yanagisawa, Director of the International Institute for Integrative Sleep Medicine at the University of Tsukuba, is being recognized for his seminal work on sleep control mechanisms.

“We are delighted that Feng is now a Keio Prize laureate,” says McGovern Institute Director Robert Desimone. “This truly recognizes the remarkable achievements that he has made at such a young age.”

The Keio Medical Science Prize, now in its 23rd year, is awarded to a maximum of two scientists each year. The prize is offered by Keio University, and the selection committee specifically looks for laureates who have made an outstanding contribution to medicine or the life sciences. The prize was initially endowed by Dr. Mitsunada Sakaguchi in 1994, with the express condition that it be used to commend outstanding science, promote advances in medicine and the life sciences, expand researcher networks, and contribute to the well-being of humankind. The winners receive a certificate of merit, a medal, and a monetary award of 10 million yen.

Feng Zhang is a molecular biologist who has contributed to the development of multiple molecular tools to accelerate our understanding of human disease and create new therapeutic modalities. During his graduate work Zhang contributed to the development of optogenetics, a system for activating neurons using light, which has advanced our understanding of brain connectivity. Zhang went on to pioneer the deployment of the microbial CRISPR-Cas9 system for genome engineering in eukaryotic cells. The ease and specificity of the system has led to its widespread use across the life sciences and it has groundbreaking implications for disease therapeutics, biotechnology, and agriculture. Zhang has continued to mine bacterial CRISPR systems for additional enzymes with useful properties, leading to the discovery of Cas13, which targets RNA, rather than DNA, and may potentially be a way to treat genetic diseases without altering the genome. He has also developed a molecular detection system called SHERLOCK based on the Cas13 family, which can sense trace amounts of genetic material, including viruses and alterations in genes that might be linked to cancer.

“I am tremendously honored to have our work recognized by the Keio Medical Prize,” says Zhang. “It is an inspiration to us to continue our work to improve human health.”

The prize ceremony will be held on December 18, 2018, at Keio University in Tokyo, Japan.