Plugging into the brain

Driven by curiosity and therapeutic goals, Anikeeva leaves no scientific stone unturned in her drive to invent neurotechnology.

The audience sits utterly riveted as Polina Anikeeva highlights the gaps she sees in the landscape of neural tools. With a background in optoelectronics, she has a decidedly unique take on the brain.

“In neuroscience,” says Anikeeva, “we are currently applying silicon-based neural probes with the elastic properties of a knife to a delicate material with the consistency of chocolate pudding—the brain.”

A key problem, as Anikeeva summarizes it, is that these sharp probes damage tissue, making such interfaces unreliable and thwarting long-term brain studies of processes including development and aging. The state of the art is even grimmer in the clinic. An avid climber, Anikeeva recalls a friend sustaining a spinal cord injury. “She made a remarkable recovery,” explains Anikeeva, “but seeing the technology being used to help her was shocking. Not even the simplest electronic tools were used; it was basically lots of screws and physical therapy.” This crude approach, compared to the elegant optoelectronic tools familiar to Anikeeva, sparked a drive to bring advanced materials technology to biological systems.

Outside the box

As the group breaks up after the seminar, the chatter turns to boxes, or more precisely, to thinking outside of them. An associate professor in materials science and engineering at MIT, Anikeeva recently gained a McGovern Institute appointment through her growing interest in neuroscience. She sees her journey to neurobiology as serendipitous, having earned her doctorate at MIT designing light-emitting devices.

“I wanted to work on tools that don’t exist, and neuroscience seemed like an obvious choice. Neurons communicate in part through membrane voltage changes and as an electronics designer, I felt that I should be able to use voltage.”

Comfort at the intersection of sciences requires, according to Anikeeva, clarity and focus, qualities that are also central to her chief athletic pursuits, running and climbing. Through long-distance running, Anikeeva finds solitary time (“assuming that no one can chase me”) and the clarity to consider complicated technical questions. Climbing hones something different: absolute focus in the face of the often-tangled information that comes with working at scientific intersections.

“When climbing, you can only think about one thing, your next move. Only the most important thoughts float up.”

This became particularly important when, in Yosemite National Park, she made the decision to go up, instead of down, during an impending thunderstorm. Getting out depended on clear focus, despite imminent hypothermia and being exposed “on one of the tallest features in the area, holding large quantities of metal.” Polina and her climbing partner made it out, but her summary of events echoes her research philosophy: “What you learn and develop is a strong mindset where you don’t do the comfortable thing, the easy thing. Instead you always find, and execute, the most logical strategy.”

In this vein, Anikeeva’s research pursues two very novel, but exceptionally logical, paths to brain research and therapeutics: fiber development and magnetic nanomaterials.

Drawing new fibers

Walking into Anikeeva’s lab, the eye is immediately drawn to a robust metal frame containing, upon closer scrutiny, recognizable parts: a large drill bit, a motor, a heating element. This custom-built machine applies principles from telecommunications to draw multifunctional fibers using more “brain-friendly” materials.

“We start out with a macroscopic model, a preform, of the device that we ultimately want,” explains Anikeeva.

This “preform” is a transparent block of polymers, composites, and soft low-melting temperature metals with optical and electrical properties needed in the final fiber. “So, this could include
electrodes for recording, optical channels for optogenetics, microfluidics for drug delivery, and one day even components that allow chemical or mechanical sensing.” After sitting in a vacuum to remove gases and impurities, the two-inch by one-inch preform arrives at the fiber-drawing tower.

“Then we heat it and pull it, and the macroscopic model becomes a kilometer-long fiber with a lateral dimension of microns, even nanometers,” explains Anikeeva. “Take one of your hairs, and imagine that inside there are electrodes for recording, there are microfluidic channels to infuse drugs, optical channels for stimulation. All of this is combined in a single miniature form
factor, and it can be quite flexible and even stretchable.”
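
For readers who want a feel for the scaling at work, here is a minimal back-of-the-envelope sketch in Python. It assumes only that the material’s volume is conserved during the draw, so every lateral feature shrinks by the square root of the length ratio; the specific dimensions are illustrative placeholders, not measurements from Anikeeva’s lab.

```python
# Back-of-the-envelope scaling for thermal fiber drawing, assuming the
# material's volume is conserved: every lateral dimension shrinks by the
# same factor, so cross-sectional area scales as that factor squared.
# All dimensions below are illustrative, not taken from the article.

import math

preform_length_m = 0.25     # assumed ~25 cm preform
fiber_length_m = 1000.0     # ~1 km of drawn fiber

# Linear draw-down factor applied to every lateral feature.
scale = math.sqrt(preform_length_m / fiber_length_m)

preform_feature_um = 200.0  # e.g. a 200-micron channel in the preform (assumed)
fiber_feature_um = preform_feature_um * scale

print(f"Linear draw-down: ~1/{1 / scale:.0f}")
print(f"A {preform_feature_um:.0f} um preform feature becomes "
      f"{fiber_feature_um:.1f} um in the fiber")
```

With these assumed numbers, a feature a fifth of a millimeter wide in the preform ends up a few microns wide in the drawn fiber, consistent with the hair-width scale Anikeeva describes.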

Construction crew

Anikeeva’s lab comprises an eclectic mix of 21 researchers from more than 13 countries, with expertise spanning materials science, chemistry, electrical and mechanical engineering, and neuroscience. In 2011, Andres Canales, a materials scientist from Mexico, was the second person to join the lab.

“There was only an idea, a diagram,” explains Canales. “I didn’t want to work on biology when I arrived at MIT, but talking to Polina, seeing the pictures, thinking about what it would entail, I became very excited by the methods and the potential applications she was thinking of.”

Despite the lack of preliminary models, Anikeeva’s ideas were compelling. Elegant as the fibers are, the road to them involved painstaking, iterative refinement. From a materials perspective, drawing a fiber containing a continuous conductive element was challenging, as was validating its properties. But the resulting fiber can deliver optogenetic vectors, monitor their expression, and then stimulate neuronal activity in a single surgery, removing the spatial and temporal guesswork usually involved in such an experiment.

Seongjun Park, an electrical engineering graduate student in the lab, explains one biological challenge. “For long term recording in the spinal cord, there was even an additional challenge as the fiber needed to be stretchable to respond to the spine’s movement. For this we developed a drawing process compatible with an elastomer.”

The resulting fibers can be deployed chronically without the scar tissue accumulation that usually prevents long-term optical manipulation and drug delivery, making them good candidates for the treatment of brain disorders. The lab’s current papers show that these implanted fibers remain useful for three months, and materials innovations give the team confidence that longer time periods are possible.

Magnetic moments

Another wing of Anikeeva’s research aims to develop entirely non-invasive modalities, using magnetic nanoparticles to stimulate the brain and deliver therapeutics.

“Magnetic fields are probably the best modality for getting any kind of stimulus to deep tissues,” explains Anikeeva, “because biological systems, except for very specialized systems, do not perceive magnetic fields. They go through us unattenuated, and they don’t couple to our physiology.”

In other words, magnetic fields can safely reach deep tissues, including the brain. Upon reaching their tissue targets, these fields can be used to stimulate magnetic nanoparticles, which might one day, for example, be used to deliver dopamine to the brains of Parkinson’s disease patients. The alternating magnetic fields used in these experiments are tiny, 100-1000 times smaller than the fields clinically approved for MRI-based brain imaging.

Tiny fields, but they can be used to powerful effect. By manipulating the magnetic moments of these nanoparticles, the applied field makes the particles dissipate heat, and that heat can stimulate thermal receptors in the nervous system. These receptors naturally detect heat, as well as the active compounds in chili peppers and vanilla, but Anikeeva’s magnetic nanoparticles act as tiny heaters that activate the receptors and, in turn, local neurons. This principle has already been used to activate the brain’s reward center in freely moving mice.
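
To make the heating mechanism slightly more concrete, here is a minimal sketch using the linear-response (“Rosensweig”) estimate commonly applied to magnetic nanoparticle heating, in which dissipated power grows with field amplitude, drive frequency, and the particles’ relaxation time. Every parameter value is an assumed placeholder chosen only to show the shape of the calculation; none comes from Anikeeva’s experiments.

```python
# Linear-response ("Rosensweig") estimate of heat dissipated by magnetic
# nanoparticles in an alternating field:
#   P = pi * mu0 * chi0 * H0^2 * f * (w*tau) / (1 + (w*tau)^2),  w = 2*pi*f
# All parameter values are illustrative assumptions, not experimental numbers.

import math

mu0 = 4 * math.pi * 1e-7    # vacuum permeability, T*m/A
chi0 = 0.1                  # equilibrium magnetic susceptibility (assumed)
H0 = 10e3                   # field amplitude, A/m (assumed)
f = 500e3                   # drive frequency, Hz (assumed)
tau = 1e-6                  # effective Neel/Brownian relaxation time, s (assumed)

w_tau = 2 * math.pi * f * tau
power_density = math.pi * mu0 * chi0 * H0**2 * f * w_tau / (1 + w_tau**2)

print(f"Dissipated power density: {power_density / 1e3:.0f} kW per cubic meter")
```

The point of the sketch is qualitative: at a fixed, weak field amplitude, tuning the drive frequency against the particles’ relaxation time controls how much heat each particle sheds, which is what lets an externally applied field flip a molecular-scale thermal switch.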

Siyuan Rao, a postdoc who works on the magnetic nanoparticles in collaboration with McGovern Investigator Guoping Feng, is unhesitating when asked what most inspires her.

“As a materials scientist, it is really rewarding to see my materials at work. We can remotely modulate mouse behavior, even turn hopeless behavior into motivation.”

Pushing the boundaries

Such collaborations are valued by Anikeeva. Early on she worked with McGovern Investigator Emilio Bizzi to use the above fiber technology in the spinal cord. “It is important to us to not just make these devices,” explains Anikeeva, “but to use them and show ourselves, and our colleagues, the types of experiments that they can enable.”

Far from an assembly line, the researchers in Anikeeva’s lab follow projects from ideation to deployment. “The student that designs a fiber, performs their own behavioral experiments, and data analysis,” says Anikeeva. “Biology is unforgiving. You can trivially design the most brilliant electrophysiological recording probe, but unless you are directly working in the system, it is easy to miss important design considerations.”

Inspired by this, Anikeeva’s students even started a project with Gloria Choi’s group on their own initiative. This collaborative, can-do ethos spreads beyond the walls of the lab, inspiring people around MIT.

“We often work with a teaching instructor, David Bono, who is an expert on electronics and magnetic instruments,” explains Alex Senko, a senior graduate student in the lab. “In his spare time, he helps those of us who work on electrical engineering flavored projects to hunt down components needed to build our devices.”

These components extend to whatever the work demands: when a low-frequency source was needed, the Anikeeva lab drafted a guitar amplifier into service.

Queried about difficulties that she faces having chosen to navigate such a broad swath of fields, Anikeeva is focused, as ever, on the unknown, the boundaries of knowledge.

“Honestly, I really, really enjoy it. It keeps me engaged and not bored. Even when thinking about complicated physics and chemistry, I always have eyes on the prize, that this will allow us to address really interesting neuroscience questions.”

With such thinking, and by relentlessly seeking the tools needed to accomplish scientific goals, Anikeeva and her lab continue to avoid the comfortable route, instead finding, and executing, the most logical path toward new technologies.

What is CRISPR?

CRISPR (which stands for Clustered Regularly Interspaced Short Palindromic Repeats) is not actually a single entity, but shorthand for a set of bacterial systems found in a hallmark arrangement in the bacterial genome.

When CRISPR is mentioned, most people are likely thinking of CRISPR-Cas9, now widely known for its capacity to be re-deployed to target sequences of interest in eukaryotic cells, including human cells. Cas9 can be programmed to target specific stretches of DNA, but other enzymes have since been discovered that are able to edit DNA, including Cpf1 and Cas12b. Other CRISPR enzymes, Cas13 family members, can be programmed to target RNA and even edit and change its sequence.

The common theme that makes CRISPR enzymes so powerful is that scientists can supply them with a guide RNA for a chosen sequence. Since the guide RNA can pair very specifically with DNA (or, for Cas13 family members, RNA), researchers can essentially provide a given CRISPR enzyme with a way of homing in on any sequence of interest. Once a CRISPR protein finds its target, it can be used to edit that sequence, perhaps removing a disease-associated mutation.
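
As a loose illustration of that “homing in” step, the toy Python sketch below scans a DNA string for a 20-nucleotide target that is followed by the NGG “PAM” motif Cas9 requires (a detail not covered above). The sequences and the function are invented for demonstration; real guide design also weighs off-target matches and other factors.

```python
# Toy illustration of guide-directed targeting: find positions where a
# 20-nt spacer sequence is followed by Cas9's NGG PAM. Sequences are
# invented for demonstration; this is not a guide-design tool.

def find_cas9_sites(genome: str, spacer: str) -> list[int]:
    """Return start positions where the spacer is followed by an NGG PAM."""
    n = len(spacer)
    hits = []
    for i in range(len(genome) - n - 2):
        if genome[i:i + n] == spacer and genome[i + n + 1:i + n + 3] == "GG":
            hits.append(i)
    return hits

genome = "TTACGGGATTACAGGCTTAGCCTAGGTGGCAATCA" * 3   # made-up DNA
spacer = "GATTACAGGCTTAGCCTAGG"                     # made-up 20-nt guide target

print(find_cas9_sites(genome, spacer))   # positions of spacer + NGG sites
```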

In addition, CRISPR proteins have been engineered to modulate gene expression and even signal the presence of particular sequences, as in the case of the Cas13-based diagnostic, SHERLOCK.


SHERLOCK: A CRISPR tool to detect disease

This animation depicts how Cas13 — a CRISPR-associated protein — may be adapted to detect human disease. This new diagnostic tool, called SHERLOCK, targets RNA (rather than DNA), and has the potential to transform research and global public health.

 

Is it worth the risk?

During the Klondike Gold Rush, thousands of prospectors climbed Alaska’s dangerous Chilkoot Pass in search of riches. McGovern researchers are exploring how a once-overlooked part of the brain, one that balances risk and reward, might be at the root of cost-benefit decisions like these.

Is it worth speeding up on the highway to save a few minutes’ time? How about accepting a job that pays more, but requires longer hours in the office?

Scientists call these types of real-life situations cost-benefit conflicts. Choosing well is an essential survival ability—consider the animal that must decide when to expose itself to predation to gather more food.

Now, McGovern researchers are discovering that this fundamental capacity to make decisions may originate in the basal ganglia—a brain region once considered unimportant to the human
experience—and that circuits associated with this structure may play a critical role in determining our state of mind.

Anatomy of decision-making

A few years back, McGovern investigator Ann Graybiel noticed that in the brain imaging literature, a specific part of the cortex, the pregenual anterior cingulate cortex (pACC), was implicated in certain psychiatric disorders as well as in tasks involving cost-benefit decisions. Thanks to her now-classic neuroanatomical work defining the complex anatomy and function of the basal ganglia, Graybiel knew that the pACC projected back into the basal ganglia—including its largest cluster of neurons, the striatum.

The striatum sits beneath the cortex, with a mouse-like main body and curving tail. It seems to serve as a critical way-station, communicating with both the brain’s sensory and motor areas above, and the limbic system (linked to emotion and memory) below. Running through the striatum are striosomes, column-like neurochemical compartments. They wire down to a small, but important part of the brain called the substantia nigra, which houses the huge majority of the brain’s dopamine neurons—a key neurochemical heavily involved, much like the basal ganglia as a whole, in reward, learning, and movement. The pACC region related to mood control targeted these striosomes, setting up a communication line from the neocortex to the dopamine neurons.

Graybiel had discovered these striosomes early in her career and understood them to have wiring distinct from that of other compartments in the striatum, but picking out these small, hard-to-find structures posed a technological challenge—so it was exciting to have this intriguing link to the pACC and mood disorders.

Working with Ken-ichi Amemori, then a research scientist in her lab, she adapted a common human cost-benefit conflict test for macaque monkeys. The monkeys could elect to receive a food treat, but the treat would always be accompanied by an annoying puff of air to the eyes. Before they decided, a visual cue told them exactly how much treat they could get, and exactly how strong the air puff would be, so they could choose if the treat was worth it.

Normal monkeys varied their choices in a fairly rational manner, rejecting the treat whenever it seemed like the air puff was too strong, or the treat too small to be worth it—and this corresponded with activity in the pACC neurons. Interestingly, the team found that some pACC neurons respond more when animals approach the combined offers, while other pACC neurons fire more when the animals avoid them. “It is as though there are two opposing armies. And the one that wins controls the state of the animal.” Moreover, when Graybiel’s team electrically stimulated these pACC neurons, the animals began to avoid the offers, even offers that they normally would approach. “It is as though when the stimulation is on, they think the future is worse than it really is,” Graybiel says.

Intriguingly, this effect only worked in situations where the animal had to weigh a cost against a benefit. It had no effect on a decision between two negatives or two positives, like two different sizes of treats. The anxiety drug diazepam also reversed the stimulation effect, but again, only on cost-benefit choices. “This particular kind of mood-influenced cost-benefit decision-making occurs not only under conflict conditions but in our regular day-to-day lives,” Graybiel notes. “For example: I know that if I eat too much chocolate, I might get fat, but I love it, I want it.”

Glass half empty

Over the next few years, Graybiel, with another research scientist in her lab, Alexander Friedman, unraveled the circuit behind the macaques’ choices. They adapted the test for rats and mice,
so that they could more easily combine the cellular and molecular technologies needed to study striosomes, such as optogenetics and mouse engineering.

They found that the cortex (specifically, the pre-limbic region of the prefrontal cortex in rodents) wires onto both striosomes and fast-acting interneurons that also target the striosomes. In a healthy circuit, these interneurons keep the striosomes in check by firing off fast inhibitory signals, hitting the brakes before the striosome can get started. But if the researchers broke that cortico-striatal connection with optogenetics or chronic stress, the animals became reckless, going for the high-risk, high-reward arm of the maze like a gambler throwing caution to the wind. If they amplified this inhibitory interneuron activity, they saw the opposite effect. With these techniques, they could block the effects of prior chronic stress.

This summer, Graybiel and Amemori published another paper furthering the story and returning to macaques. It was still too difficult to hit the striosomes precisely, so the researchers could only stimulate the striatum more generally. Even so, they replicated the effects seen in the past studies.

Many electrodes had no effect, and a small number made the monkeys choose the reward more often. Nearly a quarter, though, made the monkeys more avoidant—and this effect correlated with a change in the macaques’ brainwaves in a manner reminiscent of patients with depression.

But the surprise came when the avoidance-producing stimulation was turned off: the effects lasted unexpectedly long, returning to normal only on the third day.

Graybiel was stunned. “This is very important, because changes in the brain can get set off and have a life of their own,” she says. “This is true for some individuals who have had a terrible experience, and then live with the aftermath, even to the point of suffering from post-traumatic stress disorder.”

She suspects that this persistent state may actually be a form of affect, or mood. “When we change this decision boundary, we’re changing the mood, such that the animal overestimates cost, relative to benefit,” she explains. “This might be like a proxy state for pessimistic decision-making experienced during anxiety and depression, but may also occur, in a milder form, in you and me.”

Graybiel theorizes that this may tie back into the dopamine neurons that the striosomes project to: if this avoidance behavior is akin to avoidance observed in rodents, then they are stimulating a circuit that ultimately projects to dopamine neurons of the substantia nigra. There, she believes, they could act to suppress these dopamine neurons, which in turn project to the rest of the brain, creating some sort of long-term change in their neural activity. Or, put more simply, stimulation of these circuits creates a depressive funk.

Bottom up

Three floors below the Graybiel lab, postdoc Will Menegas is in the early stages of his own work untangling the role of dopamine and the striatum in decision-making. He joined Guoping Feng’s lab this summer after exploring the understudied “tail of the striatum” at Harvard University.

While dopamine pathways influence many parts of the brain, examinations of connections to the striatum have largely focused on its frontmost part, which is associated with valuation.

But as Menegas showed while at Harvard, dopamine neurons that project to the rear of the striatum are different. Those neurons get their input from parts of the brain associated with general arousal and sensation—and instead of responding to rewards, they respond to novelty and intense stimuli, like air puffs and loud noises.

In a new study published in Nature Neuroscience, Menegas used a neurotoxin to disrupt the dopamine projection from the substantia nigra to the posterior striatum to see how this circuit influences behavior. Normal mice approach novel items cautiously and back away after sniffing at them, but the mice in Menegas’ study failed to back away. They stopped avoiding a port that delivered an air puff to the face, and they didn’t behave like normal mice when Menegas dropped a strange or new object—say, a Lego brick—into their cage. Disrupting the nigral-posterior striatal circuit seemed to turn off their avoidance habit.

“These neurons reinforce avoidance the same way that canonical dopamine neurons reinforce approach,” Menegas explains. It’s a new role for dopamine, suggesting that there may be two different and distinct systems of reinforcement, led by the same neuromodulator in different parts of the striatum.

This research, and Graybiel’s discoveries on cost-benefit decision circuits, share clear parallels, though the precise links between the two phenomena are yet to be fully determined. Menegas plans to extend this line of research into social behavior and related disorders like autism in marmoset monkeys.

“Will wants to learn the methods that we use in our lab to work on marmosets,” Graybiel says. “I think that working together, this could become a wonderful story, because it would involve social interactions.”

“This is a very new angle, and it could really change our views of how the reward system works,” Feng says. “And we have very little understanding of social circuits so far, especially in higher organisms, so I think this would be very exciting. Whatever we learn, it’s going to be new.”

Human choices

Based on their preexisting work, Graybiel’s and Menegas’ projects are well-developed—but they are far from the only McGovern-based explorations into ways this brain region taps into our behaviors. Maiya Geddes, a visiting scientist in John Gabrieli’s lab, has recently published a paper exploring the little-known ways that aging affects the dopamine-based nigral-striatum-hippocampus learning and memory systems.

In Rebecca Saxe’s lab, postdoc Livia Tomova just kicked off a new pilot project using brain imaging to uncover dopamine-striatal circuitry behind social craving in humans and the urge to rejoin peers. “Could there be a craving response similar to hunger?” Tomova wonders. “No one has looked yet at the neural mechanisms of this.”

Graybiel also hopes to translate her findings into humans, beginning with collaborations at the Pizzagalli lab at McLean Hospital in Belmont. They are using fMRI to study whether patients
with anxiety and depression show some of the same dysfunctions in the cortico-striatal circuitry that she discovered in her macaques.

If she’s right about tapping into mood states and affect, it would be an expanded role for the striatum—and one with significant potential therapeutic benefits. “Affect state” colors many psychological functions and disorders, from memory and perception, to depression, chronic stress, obsessive-compulsive disorder, and PTSD.

For a region of the brain once dismissed as inconsequential, McGovern researchers have shown the basal ganglia to influence not only our choices but our state of mind—suggesting that this “primitive” brain region may actually be at the heart of the human experience.

 

 

Tracking down changes in ADHD

Attention deficit hyperactivity disorder (ADHD) is marked by difficulty maintaining focus on tasks, and increased activity and impulsivity. These symptoms ultimately interfere with the ability to learn and function in daily tasks, but the source of the problem could lie at different levels of brain function, and it is hard to parse out exactly what is going wrong.

A new study co-authored by McGovern Institute Associate Investigator Michael Halassa has developed tasks that dissociate lower-level from higher-level brain functions, so that disruptions to these processes can be assessed more specifically in ADHD. The results of this study, carried out in collaboration with co-corresponding authors Wei Ji Ma, Andra Mihali, and researchers from New York University, illuminate how brain function is disrupted in ADHD and highlight a role for perceptual deficits in this condition.

The underlying deficit in ADHD has largely been attributed to executive function — higher-order processing and the ability of the brain to integrate information and focus attention. But there have been some hints, largely through reports from those with ADHD, that the very ability to accurately receive sensory information might be altered. Some people with ADHD, for example, have reported impaired visual function and even changes in color processing. Cleanly separating these perceptual brain functions from the impact of higher-order cognitive processes has proven difficult, however. It is not clear whether people with and without ADHD encode visual signals received by the eye in the same way.

“We realized that psychiatric diagnoses in general are based on clinical criteria and patient self-reporting,” says Halassa, who is also a board certified psychiatrist and an assistant professor in MIT’s Department of Brain and Cognitive Sciences. “Psychiatric diagnoses are imprecise, but neurobiology is progressing to the point where we can use well-controlled parameters to standardize criteria, and relate disorders to circuits,” he explains. “If there are problems with attention, is it the spotlight of attention itself that’s affected in ADHD, or the ability of a person to control where this spotlight is focused?”

To test how people with and without ADHD encode visual signals in the brain, Halassa, Ma, Mihali, and collaborators devised a perceptual encoding task in which subjects were asked to provide answers to simple questions about the orientation and color of lines and shapes on a screen. The simplicity of this test aimed to remove high-level cognitive input and provide a measure of accurate perceptual coding.

To measure higher-level executive function, the researchers provided subjects with rules about which features and screen areas were relevant to the task, and they switched which of these were relevant throughout the test. They monitored whether subjects cognitively adapted to the switch in rules – an indication of higher-order brain function. The authors also analyzed psychometric curve parameters, which are common in psychophysics but had not previously been applied to ADHD.

“These psychometric parameters give us specific information about the parts of sensory processing that are being affected,” explains Halassa. “So, if you were to put on sunglasses, that would shift threshold, indicating that input is being affected, but this wouldn’t necessarily affect the slope of the psychometric function. If the slope is affected, this starts to reflect difficulty in seeing a line or color. In other words, these tests give us a finer readout of behavior, and how to map this onto particular circuits.”
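
To picture the distinction Halassa draws between threshold and slope, here is a generic sketch of a logistic psychometric function in Python. It is not the exact model fit by Mihali, Ma, and colleagues, and the parameter values are invented purely for illustration.

```python
# Generic logistic psychometric function: probability of a correct response
# as stimulus strength varies. "threshold" sets where the curve is centered;
# "slope" sets how steeply it rises. Values are illustrative only.

import math

def p_correct(stimulus, threshold, slope, guess=0.5, lapse=0.02):
    core = 1.0 / (1.0 + math.exp(-slope * (stimulus - threshold)))
    return guess + (1.0 - guess - lapse) * core

stimuli = [0.5, 1.0, 1.5, 2.0, 3.0]
conditions = [
    ("baseline", 1.0, 4.0),
    ("threshold shifted (like wearing sunglasses)", 2.0, 4.0),
    ("shallower slope (noisier perceptual encoding)", 1.0, 1.5),
]

for label, thr, slp in conditions:
    curve = [f"{p_correct(s, thr, slp):.2f}" for s in stimuli]
    print(f"{label:<45} {curve}")
```

In this picture, a pure change at the input slides the whole curve sideways without changing its steepness, whereas degraded encoding flattens the curve, which is the kind of finer-grained readout the authors used to map behavior onto circuits.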

The authors found that changes in visual perception were robustly associated with ADHD, and these changes were also correlated with cognitive function. Individuals with more clinically severe ADHD scored lower on executive function, and basic perception also tracked with these clinical measures of disease severity. The authors could even sort ADHD subjects from controls based on their perceptual variability alone. All of this goes to say that changes in perception itself are clearly present in this ADHD cohort, and that they decline alongside changes in executive function.

“This was unexpected,” points out Halassa. “We didn’t expect so much to be explained by lower sensitivity to stimuli, and to see that these tasks become harder as cognitive pressure increases. It wasn’t clear that cognitive circuits might influence processing of stimuli.”

Understanding the true basis of changes in behavior in disorders such as ADHD can be hard to tease apart, but the study gives more insight into changes in the ADHD brain, and supports the idea that quantitative follow up on self-reporting by patients can drive a stronger understanding — and possible targeted treatment — of such disorders. Testing a larger number of ADHD patients and validating these measures on a larger scale is now the next research priority.

Mark Harnett’s “Holy Grail” experiment

Neurons in the human brain receive electrical signals from thousands of other cells, and long neural extensions called dendrites play a critical role in incorporating all of that information so the cells can respond appropriately.

Using hard-to-obtain samples of human brain tissue, McGovern neuroscientist Mark Harnett has now discovered that human dendrites have different electrical properties from those of other species. His studies reveal that electrical signals weaken more as they flow along human dendrites, resulting in a higher degree of electrical compartmentalization, meaning that small sections of dendrites can behave independently from the rest of the neuron.

These differences may contribute to the enhanced computing power of the human brain, the researchers say.

Fujitsu Laboratories and MIT’s Center for Brains, Minds and Machines broaden partnership

Fujitsu Laboratories Ltd. and MIT’s Center for Brains, Minds and Machines (CBMM) have announced a multi-year philanthropic partnership focused on advancing the science and engineering of intelligence while supporting the next generation of researchers in this emerging field. The new commitment follows several years of collaborative research among scientists at the two organizations.

Founded in 1968, Fujitsu Laboratories has conducted a wide range of basic and applied research in the areas of next-generation services, computer servers, networks, electronic devices, and advanced materials. CBMM, a multi-institutional, National Science Foundation funded science and technology center focusing on the interdisciplinary study of intelligence, was established in 2013 and is headquartered at MIT’s McGovern Institute for Brain Research. CBMM is also the foundation of “The Core” of the MIT Quest for Intelligence launched earlier this year. The partnership between the two organizations started in March 2017 when Fujitsu Laboratories sent a visiting scientist to CBMM.

“A fundamental understanding of how humans think, feel, and make decisions is critical to developing revolutionary technologies that will have a real impact on societal problems,” said Shigeru Sasaki, CEO of Fujitsu Laboratories. “The partnership between MIT’s Center for Brains, Minds and Machines and Fujitsu Laboratories will help advance critical R&D efforts in both human intelligence and the creation of next-generation technologies that will shape our lives,” he added.

The new Fujitsu Laboratories Co-Creation Research Fund, established with a philanthropic gift from Fujitsu Laboratories, will fuel new, innovative, and challenging projects in areas of interest to both Fujitsu and CBMM, including the basic study of computations underlying visual recognition and language processing, the creation of new machine learning methods, and the development of the theory of deep learning. Alongside funding for research projects, Fujitsu Laboratories will also fund fellowships, beginning in 2019, for graduate students attending CBMM’s summer course, an investment in research and society over the long term. The intensive three-week course gives advanced students from universities worldwide a “deep end” introduction to the problem of intelligence. These students will later have the opportunity to travel to Fujitsu Laboratories in Japan or its overseas locations in the U.S., Canada, U.K., Spain, and China to meet with Fujitsu researchers.

“CBMM faculty, students, and fellows are excited for the opportunity to work alongside scientists from Fujitsu to make advances in complex problems of intelligence, both real and artificial,” said CBMM’s director Tomaso Poggio, who is also an investigator at the McGovern Institute and the Eugene McDermott Professor in MIT’s Department of Brain and Cognitive Sciences. “Both Fujitsu Laboratories and MIT are committed to creating revolutionary tools and systems that will transform many industries, and to do that we are first looking to the extraordinary computations made by the human mind in everyday life.”

As part of the partnership, Poggio will be a featured keynote speaker at the Fujitsu Laboratories Advanced Technology Symposium on Oct. 9. In addition, Tomotake Sasaki, a former visiting scientist and current research affiliate in the Poggio Lab, will continue to collaborate with CBMM scientists and engineers on reinforcement learning and deep learning research projects. Moyuru Yamada, a visiting scientist in the lab of Professor Josh Tenenbaum, is also studying computational models of human cognition and exploring their industrial applications. Fujitsu Laboratories also plans to invite CBMM researchers to its offices in Japan and overseas and to arrange internships for interested students.

School of Science welcomes 10 professors

The MIT School of Science recently welcomed 10 new professors in the departments of Brain and Cognitive Sciences, Chemistry, Biology, Physics, Mathematics, and Earth, Atmospheric and Planetary Sciences, including Ila Fiete in Brain and Cognitive Sciences.

Ila Fiete uses computational and theoretical tools to better understand the dynamical mechanisms and coding strategies that underlie computation in the brain, with a focus on elucidating how plasticity and development shape networks to perform computation and why information is encoded the way that it is. Her recent focus is on error control in neural codes, rules for synaptic plasticity that enable neural circuit organization, and questions at the nexus of information and dynamics in neural systems, such as understanding how coding and statistics fundamentally constrain dynamics, and vice versa.

Tristan Collins conducts research at the intersection of geometric analysis, partial differential equations, and algebraic geometry. In joint work with Valentino Tosatti, Collins described the singularity formation of the Ricci flow on Kähler manifolds in terms of algebraic data. In recent work with Gabor Szekelyhidi, he gave a necessary and sufficient algebraic condition for the existence of Ricci-flat metrics, which play an important role in string theory and mathematical physics. This result led to the discovery of infinitely many new Einstein metrics on the 5-dimensional sphere. With Shing-Tung Yau and Adam Jacob, Collins is currently studying the relationship between categorical stability conditions and existence of solutions to differential equations arising from mirror symmetry.

Collins earned his BS in mathematics at the University of British Columbia in 2009, after which he completed his PhD in mathematics at Columbia University in 2014 under the direction of Duong H. Phong. Following a four-year appointment as a Benjamin Peirce Assistant Professor at Harvard University, Collins joins MIT as an assistant professor in the Department of Mathematics.

Julien de Wit develops and applies new techniques to study exoplanets, their atmospheres, and their interactions with their stars. While a graduate student in the Sara Seager group at MIT, he developed innovative analysis techniques to map exoplanet atmospheres, studied the radiative and tidal planet-star interactions in eccentric planetary systems, and constrained the atmospheric properties and mass of exoplanets solely from transmission spectroscopy. He plays a critical role in the TRAPPIST/SPECULOOS project, headed by the Université de Liège, leading the atmospheric characterization of the newly discovered TRAPPIST-1 planets, for which he has already obtained significant results with the Hubble Space Telescope. De Wit’s efforts are now also focused on expanding the SPECULOOS network of telescopes in the northern hemisphere to continue the search for new potentially habitable TRAPPIST-1-like systems.

De Wit earned a BEng in physics and mechanics from the Université de Liège in Belgium in 2008, an MS in aeronautic engineering and an MRes in astrophysics, planetology, and space sciences from the Institut Supérieur de l’Aéronautique et de l’Espace at the Université de Toulouse, France in 2010; he returned to the Université de Liège for an MS in aerospace engineering, completed in 2011. After finishing his PhD in planetary sciences in 2014 and a postdoc at MIT, both under the direction of Sara Seager, he joins the MIT faculty in the Department of Earth, Atmospheric and Planetary Sciences as an assistant professor.

After earning a BS in mathematics and physics at the University of Michigan, Fiete obtained her PhD in 2004 at Harvard University in the Department of Physics. While holding an appointment at the Kavli Institute for Theoretical Physics at the University of California, Santa Barbara from 2004 to 2006, she was also a visiting member of the Center for Theoretical Biophysics at the University of California at San Diego. Fiete subsequently spent two years at Caltech as a Broad Fellow in brain circuitry, and in 2008 joined the faculty of the University of Texas at Austin. She joins the MIT faculty in the Department of Brain and Cognitive Sciences as an associate professor with tenure.

Ankur Jain explores the biology of RNA aggregation. Several genetic neuromuscular disorders, such as myotonic dystrophy and amyotrophic lateral sclerosis, are caused by expansions of nucleotide repeats in their cognate disease genes. Such repeats cause the transcribed RNA to form pathogenic clumps or aggregates. Jain uses a variety of biophysical approaches to understand how the RNA aggregates form, and how they can be disrupted to restore normal cell function. Jain will also study the role of RNA-DNA interactions in chromatin organization, investigating whether the RNA transcribed from telomeres (the protective repetitive sequences that cap the ends of chromosomes) undergoes the phase separation that characterizes repeat expansion diseases.

Jain completed a bachelor of technology degree in biotechnology and biochemical engineering at the Indian Institute of Technology Kharagpur, India, in 2007, followed by a PhD in biophysics and computational biology at the University of Illinois at Urbana-Champaign under the direction of Taekjip Ha in 2013. After a postdoc at the University of California at San Francisco, he joins the MIT faculty in the Department of Biology as an assistant professor with an appointment as a member of the Whitehead Institute for Biomedical Research.

Kiyoshi Masui works to understand fundamental physics and the evolution of the universe through observations of the large-scale structure — the distribution of matter on scales much larger than galaxies. He works principally with radio-wavelength surveys to develop new observational methods such as hydrogen intensity mapping and fast radio bursts. Masui has shown that such observations will ultimately permit precise measurements of properties of the early and late universe and enable sensitive searches for primordial gravitational waves. To this end, he is working with a new generation of rapid-survey digital radio telescopes that have no moving parts and rely on signal processing software running on large computer clusters to focus and steer, including work on the Canadian Hydrogen Intensity Mapping Experiment (CHIME).

Masui obtained a BSc in engineering physics at Queen’s University, Canada in 2008 and a PhD in physics at the University of Toronto in 2013 under the direction of Ue-Li Pen. After postdoctoral appointments at the University of British Columbia as the Canadian Institute for Advanced Research Global Scholar and the Canadian Institute for Theoretical Astrophysics National Fellow, Masui joins the MIT faculty in the Department of Physics as an assistant professor.

Phiala Shanahan studies theoretical nuclear and particle physics, in particular the structure and interactions of hadrons and nuclei from the fundamental (quark and gluon) degrees of freedom encoded in the Standard Model of particle physics. Shanahan’s recent work has focused on the role of gluons, the force carriers of the strong interactions described by quantum chromodynamics (QCD), in hadron and nuclear structure by using analytic tools and high-performance supercomputing. She recently achieved the first calculation of the gluon structure of light nuclei, making predictions that will be testable in new experiments proposed at Jefferson National Accelerator Facility and at the planned Electron-Ion Collider. She has also undertaken extensive studies of the role of strange quarks in the proton and light nuclei that sharpen theory predictions for dark matter cross-sections in direct detection experiments. To overcome computational limitations in QCD calculations for hadrons and in particular for nuclei, Shanahan is pursuing a program to integrate modern machine learning techniques in computational nuclear physics studies.

Shanahan obtained her BS in 2012 and her PhD in 2015, both in physics, from the University of Adelaide. She completed postdoctoral work at MIT in 2017, then held a joint position as an assistant professor at the College of William and Mary and senior staff scientist at the Thomas Jefferson National Accelerator Facility until 2018. She returns to MIT in the Department of Physics as an assistant professor.

Nike Sun works in probability theory at the interface of statistical physics and computation. Her research focuses in particular on phase transitions in average-case (randomized) formulations of classical computational problems. Her joint work with Jian Ding and Allan Sly establishes the satisfiability threshold of random k-SAT for large k, and relatedly the independence ratio of random regular graphs of large degree. Both are long-standing open problems where heuristic methods of statistical physics yield detailed conjectures, but few rigorous techniques exist. More recently she has been investigating phase transitions of dense graph models.

Sun completed BA mathematics and MA statistics degrees at Harvard in 2009, and an MASt in mathematics at Cambridge in 2010. She received her PhD in statistics from Stanford University in 2014 under the supervision of Amir Dembo. She held a Schramm fellowship at Microsoft New England and MIT Mathematics in 2014-2015 and a Simons postdoctoral fellowship at the University of California at Berkeley in 2016, and joined the Berkeley Department of Statistics as an assistant professor in 2016. She returns to the MIT Department of Mathematics as an associate professor with tenure.

Alison Wendlandt focuses on the development of selective, catalytic reactions using the tools of organic and organometallic synthesis and physical organic chemistry. Mechanistic study plays a central role in the development of these new transformations. Her projects involve the design of new catalysts and catalytic transformations, identification of important applications for selective catalytic processes, and elucidation of new mechanistic principles to expand powerful existing catalytic reaction manifolds.

Wendlandt received a BS in chemistry and biological chemistry from the University of Chicago in 2007, an MS in chemistry from Yale University in 2009, and a PhD in chemistry from the University of Wisconsin at Madison in 2015 under the direction of Shannon S. Stahl. Following an NIH Ruth L. Kirschstein Postdoctoral Fellowship at Harvard University, Wendlandt joins the MIT faculty in the Department of Chemistry as an assistant professor.

Chenyang Xu specializes in higher-dimensional algebraic geometry, an area that involves classifying algebraic varieties, primarily through the minimal model program (MMP). The MMP was introduced by Fields Medalist S. Mori in the early 1980s to make advances in higher-dimensional birational geometry, and was further developed by Hacon and McKernan in the mid-2000s so that it could be applied to other questions. Collaborating with Hacon, Xu extended the MMP to varieties under certain conditions, such as those in characteristic p, and, with Hacon and McKernan, proved a fundamental conjecture on the MMP, generating a great deal of follow-up activity. In collaboration with Chi Li, Xu proved a conjecture of Gang Tian concerning higher-dimensional Fano varieties, a significant achievement. In a series of papers with different collaborators, he successfully applied the MMP to singularities.

Xu received his BS in 2002 and MS in 2004 in mathematics from Peking University, and completed his PhD at Princeton University under János Kollár in 2008. He came to MIT as a CLE Moore Instructor in 2008-2011, and was subsequently appointed assistant professor at the University of Utah. He returned to Peking University as a research fellow at the Beijing International Center of Mathematical Research in 2012, and was promoted to professor in 2013. Xu joins the MIT faculty as a full professor in the Department of Mathematics.

Zhiwei Yun’s research is at the crossroads between algebraic geometry, number theory, and representation theory. He studies geometric structures aiming at solving problems in representation theory and number theory, especially those in the Langlands program. While he was a CLE Moore Instructor at MIT, he started to develop the theory of rigid automorphic forms, and used it to answer an open question of J-P Serre on motives, which also led to a major result on the inverse Galois problem in number theory. More recently, in joint work with Wei Zhang, he gave a geometric interpretation of higher derivatives of automorphic L-functions in terms of intersection numbers, which sheds new light on the geometric analogue of the Birch and Swinnerton-Dyer conjecture.

Yun earned his BS at Peking University in 2004, after which he completed his PhD at Princeton University in 2009 under the direction of Robert MacPherson. After appointments at the Institute for Advanced Study and as a CLE Moore Instructor at MIT, he held faculty appointments at Stanford and Yale. He returned to the MIT Department of Mathematics as a full professor in the spring of 2018.

Feng Zhang named winner of the 2018 Keio Medical Science Prize

Feng Zhang and Masashi Yanagisawa have been named the 2018 winners of the prestigious Keio Medical Science Prize. Zhang is being recognized for the groundbreaking development of CRISPR-Cas9-mediated genome engineering in cells and its application for medical science. Zhang is an HHMI Investigator and the James and Patricia Poitras Professor of Neuroscience at MIT, an associate professor in MIT’s Departments of Brain and Cognitive Sciences and Biological Engineering, an investigator at the McGovern Institute for Brain Research, and a core member of the Broad Institute of MIT and Harvard. Masashi Yanagisawa, Director of the International Institute for Integrative Sleep Medicine at the University of Tsukuba, is being recognized for his seminal work on sleep control mechanisms.

“We are delighted that Feng is now a Keio Prize laureate,” says McGovern Institute Director Robert Desimone. “This truly recognizes the remarkable achievements that he has made at such a young age.”

The Keio Medical Science Prize is awarded to a maximum of two scientists each year and is now in its 23rd year. The prize is offered by Keio University, and the selection committee specifically looks for laureates who have made an outstanding contribution to medicine or the life sciences. The prize was initially endowed by Dr. Mitsunada Sakaguchi in 1994, with the express condition that it be used to commend outstanding science, promote advances in medicine and the life sciences, expand researcher networks, and contribute to the well-being of humankind. The winners receive a certificate of merit, a medal, and a monetary award of 10 million yen.

Feng Zhang is a molecular biologist who has contributed to the development of multiple molecular tools to accelerate our understanding of human disease and create new therapeutic modalities. During his graduate work Zhang contributed to the development of optogenetics, a system for activating neurons using light, which has advanced our understanding of brain connectivity. Zhang went on to pioneer the deployment of the microbial CRISPR-Cas9 system for genome engineering in eukaryotic cells. The ease and specificity of the system has led to its widespread use across the life sciences and it has groundbreaking implications for disease therapeutics, biotechnology, and agriculture. Zhang has continued to mine bacterial CRISPR systems for additional enzymes with useful properties, leading to the discovery of Cas13, which targets RNA, rather than DNA, and may potentially be a way to treat genetic diseases without altering the genome. He has also developed a molecular detection system called SHERLOCK based on the Cas13 family, which can sense trace amounts of genetic material, including viruses and alterations in genes that might be linked to cancer.

“I am tremendously honored to have our work recognized by the Keio Medical Prize,” says Zhang. “It is an inspiration to us to continue our work to improve human health.”

The prize ceremony will be held on December 18, 2018, at Keio University in Tokyo, Japan.

Can the brain recover after paralysis?

Why is it that motor skills can be regained after paralysis but vision cannot recover in similar ways? – Ajay Puppala

Thank you so much for this very important question, Ajay. To answer, I asked two local experts in the field, Pawan Sinha who runs the vision research lab at MIT, and Xavier Guell, a postdoc in John Gabrieli’s lab at the McGovern Institute who also works in the ataxia unit at Massachusetts General Hospital.

“Simply stated, the prospects of improvement, whether in movement or in vision, depend on the cause of the impairment,” explains Sinha. “Often, the cause of paralysis is stroke, a reduction in blood supply to a localized part of the brain, resulting in tissue damage. Fortunately, the brain has some ability to rewire itself, allowing regions near the damaged one to take on some of the lost functionality. This rewiring manifests itself as improvements in movement abilities after an initial period of paralysis. However, if the paralysis is due to spinal-cord transection (as was the case following Christopher Reeve’s tragic injury in 1995), then prospects for improvement are diminished.”

“Turning to the domain of sight,” continues Sinha, “stroke can indeed cause vision loss. As with movement control, these losses can dissipate over time as the cortex reorganizes via rewiring. However, if the blindness is due to optic nerve transection, then the condition is likely to be permanent. It is also worth noting that many cases of blindness are due to problems in the eye itself. These include corneal opacities, cataracts and retinal damage. Some of these conditions (corneal opacities and cataracts) are eminently treatable while others (typically those associated with the retina and optic nerve) still pose challenges to medical science.”

You might be wondering what makes lesions in the eye and spinal cord hard to overcome. Some systems (the blood, skin, and intestine are good examples) contain a continuously active stem cell population in adults. These cells can divide and replenish lost cells in damaged regions. While “adult-born” neurons can arise, elements of a degenerating or damaged retina, optic nerve, or spinal cord cannot be replaced as easily as lost skin cells can. There is currently a very active effort in the stem cell community to understand how we might be able to replace neurons in all cases of neuronal degeneration and injury using stem cell technologies. To further explore lesions that specifically affect the brain, and how these might lead to a different outcome in the two systems, I turned to Xavier Guell.

“It might be true that visual deficits in the population are less likely to recover when compared to motor deficits in the population. However, the scientific literature seems to indicate that our body has a similar capacity to recover from both motor and visual injuries,” explains Guell. “The reason for this apparent contradiction is that visual lesions are usually not in the cerebral cortex (but instead in other places such as the retina or the lens), while motor lesions in the cerebral cortex are more common. In fact, a large proportion of people who suffer a stroke will have damage in the motor aspects of the cerebral cortex, but no damage in the visual aspects of the cerebral cortex. Crucially, recovery of neurological functions is usually seen when lesions are in the cerebral cortex or in other parts of the cerebrum or cerebellum. In this way, while our body has a similar capacity to recover from both motor and visual injuries, motor injuries are more frequently located in the parts of our body that have a better capacity to regain function (specifically, the cerebral cortex).”

In short, some cells cannot be replaced in either system, but stem cell research provides hope there. That said, there is remarkable plasticity in the brain, so when the lesion is located there, we can see recovery with training.
