Ed Boyden

Engineering the Brain

Ed Boyden develops advanced technologies for analyzing, engineering, and simulating brain circuits to reveal and repair the fundamental mechanisms behind complex brain processes.

Boyden may be best known for pioneering optogenetics, a powerful method that enables scientists to control neurons using light. He also led the team that created expansion microscopy, which physically enlarges nanoscale features in a cell so they can be seen with conventional microscopes. In addition, his lab develops methods for imaging many signals simultaneously in living cells. He continues to invent new tools, and works to systematically integrate them to enable biologically accurate computer simulations of the brain.

How the brain switches between different sets of rules

Cognitive flexibility — the brain’s ability to switch between different rules or action plans depending on the context — is key to many of our everyday activities. For example, imagine you’re driving on a highway at 65 miles per hour. When you exit onto a local street, you realize that the situation has changed and you need to slow down.

When we move between different contexts like this, our brain holds multiple sets of rules in mind so that it can switch to the appropriate one when necessary. These neural representations of task rules are maintained in the prefrontal cortex, the part of the brain responsible for planning action.

A new study from MIT has found that a region of the thalamus is key to the process of switching between the rules required for different contexts. This region, called the mediodorsal thalamus, suppresses representations that are not currently needed. That suppression also protects the representations as a short-term memory that can be reactivated when needed.

“It seems like a way to toggle between irrelevant and relevant contexts, and one advantage is that it protects the currently irrelevant representations from being overwritten,” says Michael Halassa, an assistant professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research.

Halassa is the senior author of the paper, which appears in the Nov. 19 issue of Nature Neuroscience. The paper’s first author is former MIT graduate student Rajeev Rikhye, who is now a postdoc in Halassa’s lab. Aditya Gilra, a postdoc at the University of Bonn, is also an author.

Changing the rules

Previous studies have found that the prefrontal cortex is essential for cognitive flexibility, and that a part of the thalamus called the mediodorsal thalamus also contributes to this ability. In a 2017 study published in Nature, Halassa and his colleagues showed that the mediodorsal thalamus helps the prefrontal cortex to keep a thought in mind by temporarily strengthening the neuronal connections in the prefrontal cortex that encode that particular thought.

In the new study, Halassa wanted to further investigate the relationship between the mediodorsal thalamus and the prefrontal cortex. To do that, he created a task in which mice learn to switch back and forth between two different contexts — one in which they must follow visual instructions and one in which they must follow auditory instructions.

In each trial, the mice are given both a visual target (flash of light to the right or left) and an auditory target (a tone that sweeps from high to low pitch, or vice versa). These targets offer conflicting instructions. One tells the mouse to go to the right to get a reward; the other tells it to go left. Before each trial begins, the mice are given a cue that tells them whether to follow the visual or auditory target.
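To make the task structure concrete, here is a minimal schematic in Python. It is an illustrative sketch of the logic described above, not the lab's experimental code; the variable names and the way the tone sweep maps to a side are assumptions.

```python
import random

def run_trial(context):
    """One schematic trial: the cued context decides which target to follow."""
    # Conflicting targets: in this sketch each modality simply points to a different side.
    visual_side = random.choice(["left", "right"])                 # location of the light flash
    auditory_side = "left" if visual_side == "right" else "right"  # side indicated by the tone sweep

    # The cue must be held in mind across the delay, because only the
    # cued modality determines which side is rewarded.
    correct_side = visual_side if context == "vision" else auditory_side
    return visual_side, auditory_side, correct_side

# A block of visual-context trials followed by a switch to the auditory context.
for context in ["vision"] * 3 + ["audition"] * 3:
    print(context, run_trial(context))
```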

“The only way for the animal to solve the task is to keep the cue in mind over the entire delay, until the targets are given,” Halassa says.

The researchers found that thalamic input is necessary for the mice to successfully switch from one context to another. When they suppressed the mediodorsal thalamus during the cuing period of a series of trials in which the context did not change, there was no effect on performance. However, if they suppressed the mediodorsal thalamus during the switch to a different context, it took the mice much longer to switch.

By recording from neurons of the prefrontal cortex, the researchers found that when the mediodorsal thalamus was suppressed, the representation of the old context in the prefrontal cortex could not be turned off, making it much harder to switch to the new context.

In addition to helping the brain switch between contexts, this process also appears to help maintain the neural representation of the context that is not currently being used, so that it doesn’t get overwritten, Halassa says. This allows it to be activated again when needed. The mice could maintain these representations over hundreds of trials, but the next day, they had to relearn the rules associated with each context.

Sabine Kastner, a professor of psychology at the Princeton Neuroscience Institute, described the study as a major leap forward in the field of cognitive neuroscience.

“This is a tour-de-force from beginning to end, starting with a sophisticated behavioral design, state-of-the-art methods including causal manipulations, exciting empirical results that point to cell-type specific differences and interactions in functionality between thalamus and cortex, and a computational approach that links the neuroscience results to the field of artificial intelligence,” says Kastner, who was not involved in the research.

Multitasking AI

The findings could help guide the development of better artificial intelligence algorithms, Halassa says. The human brain is very good at learning many different kinds of tasks — singing, walking, talking, etc. However, neural networks (a type of artificial intelligence based on interconnected nodes similar to neurons) usually are good at learning only one thing. These networks are subject to a phenomenon called “catastrophic forgetting” — when they try to learn a new task, previous tasks become overwritten.
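Catastrophic forgetting is easy to reproduce in a toy setting. The sketch below is a simplifying illustration rather than anything from the study: it trains a single set of linear weights by gradient descent on one input-output mapping and then on a second, and performance on the first collapses once the second is learned.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "tasks": the same inputs must be mapped to different targets.
X = rng.normal(size=(200, 10))
w_task_a = rng.normal(size=10)
w_task_b = rng.normal(size=10)
y_a, y_b = X @ w_task_a, X @ w_task_b

def train(w, X, y, steps=500, lr=0.05):
    """Plain gradient descent on mean squared error."""
    for _ in range(steps):
        w = w - lr * X.T @ (X @ w - y) / len(X)
    return w

def error(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

w = np.zeros(10)                      # one shared set of weights
w = train(w, X, y_a)
print("after task A:", error(w, X, y_a))       # near zero: task A is learned
w = train(w, X, y_b)
print("after task B:  A =", error(w, X, y_a),  # large again: task A has been overwritten
      " B =", error(w, X, y_b))
```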

Halassa and his colleagues now hope to apply their findings to improve neural networks’ ability to store previously learned tasks while learning to perform new ones.

The research was funded by the National Institutes of Health, the Brain and Behavior Foundation, the Klingenstein Foundation, the Pew Foundation, the Simons Foundation, the Human Frontiers Science Program, and the German Ministry of Education.

Is it worth the risk?

During the Klondike Gold Rush, thousands of prospectors climbed Alaska’s dangerous Chilkoot Pass in search of riches. McGovern researchers are exploring how a once-overlooked part of the brain might be at the root of cost-benefit decisions like these, and how it helps the brain balance risk and reward when making them.

Is it worth speeding up on the highway to save a few minutes’ time? How about accepting a job that pays more, but requires longer hours in the office?

Scientists call these types of real-life situations cost-benefit conflicts. Choosing well is an essential survival ability—consider the animal that must decide when to expose itself to predation to gather more food.

Now, McGovern researchers are discovering that this fundamental capacity to make decisions may originate in the basal ganglia—a brain region once considered unimportant to the human experience—and that circuits associated with this structure may play a critical role in determining our state of mind.

Anatomy of decision-making

A few years back, McGovern investigator Ann Graybiel noticed that in the brain imaging literature, a specific part of the cortex, called the pregenual anterior cingulate cortex or pACC, was implicated in certain psychiatric disorders as well as in tasks involving cost-benefit decisions. Thanks to her now-classic neuroanatomical work defining the complex anatomy and function of the basal ganglia, Graybiel knew that the pACC projects back into the basal ganglia—including its largest cluster of neurons, the striatum.

The striatum sits beneath the cortex, with a mouse-like main body and curving tail. It seems to serve as a critical way-station, communicating with both the brain’s sensory and motor areas above, and the limbic system (linked to emotion and memory) below. Running through the striatum are striosomes, column-like neurochemical compartments. They wire down to a small but important part of the brain called the substantia nigra, which houses the vast majority of the brain’s dopamine neurons—a key neurochemical heavily involved, much like the basal ganglia as a whole, in reward, learning, and movement. The pACC region related to mood control targets these striosomes, setting up a communication line from the neocortex to the dopamine neurons.

Graybiel discovered these striosomes early in her career and understood them to have distinct wiring from other compartments in the striatum. But picking out these small, hard-to-find striosomes posed a technological challenge—so it was exciting to find this intriguing link to the pACC and mood disorders.

Working with Ken-ichi Amemori, then a research scientist in her lab, she adapted a common human cost-benefit conflict test for macaque monkeys. The monkeys could elect to receive a food treat, but the treat would always be accompanied by an annoying puff of air to the eyes. Before they decided, a visual cue told them exactly how much treat they could get, and exactly how strong the air puff would be, so they could choose if the treat was worth it.

Normal monkeys varied their choices in a fairly rational manner, rejecting the treat whenever the air puff seemed too strong or the treat too small to be worth it—and this corresponded with activity in the pACC neurons. Interestingly, the team found that some pACC neurons respond more when animals approach the combined offers, while other pACC neurons fire more when the animals avoid the offers. “It is as though there are two opposing armies. And the one that wins controls the state of the animal.” Moreover, when Graybiel’s team electrically stimulated these pACC neurons, the animals began to avoid the offers, even offers that they normally would approach. “It is as though when the stimulation is on, they think the future is worse than it really is,” Graybiel says.

Intriguingly, this effect occurred only in situations where the animal had to weigh a cost against a benefit. It had no effect on decisions between two negatives or two positives, like two different sizes of treats. The anxiety drug diazepam also reversed the stimulation’s effect, but again, only on cost-benefit choices. “This particular kind of mood-influenced cost-benefit decision-making occurs not only under conflict conditions but in our regular day-to-day lives. For example: I know that if I eat too much chocolate, I might get fat, but I love it, I want it.”

Glass half empty

Over the next few years, Graybiel, with another research scientist in her lab, Alexander Friedman, unraveled the circuit behind the macaques’ choices. They adapted the test for rats and mice so that they could more easily combine the cellular and molecular technologies needed to study striosomes, such as optogenetics and mouse engineering.

They found that the cortex (specifically, the prelimbic region of the prefrontal cortex in rodents) wires onto both striosomes and fast-acting interneurons that also target the striosomes. In a healthy circuit, these interneurons keep the striosomes in check by firing off fast inhibitory signals, hitting the brakes before the striosome can get started. But if the researchers broke that cortical-striatal connection with optogenetics or chronic stress, the animals became reckless, going for the high-risk, high-reward arm of the maze like a gambler throwing caution to the wind. If the researchers instead amplified this inhibitory interneuron activity, they saw the opposite effect. With these techniques, they could block the effects of prior chronic stress.

This summer, Graybiel and Amemori published another paper furthering the story and returning to macaques. It was still too difficult to target the striosomes directly, so the researchers could only stimulate the striatum more generally. Even so, they replicated the effects seen in the earlier studies.

Many electrodes had no effect, and a small number made the monkeys choose the reward more often. Nearly a quarter, though, made the monkeys more avoidant—and this effect correlated with a change in the macaques’ brainwaves in a manner reminiscent of patients with depression.

But the surprise came when the avoidance-producing stimulation was turned off: the effects lasted unexpectedly long, with behavior returning to normal only on the third day.

Graybiel was stunned. “This is very important, because changes in the brain can get set off and have a life of their own,” she says. “This is true for some individuals who have had a terrible experience, and then live with the aftermath, even to the point of suffering from post-traumatic stress disorder.”

She suspects that this persistent state may actually be a form of affect, or mood. “When we change this decision boundary, we’re changing the mood, such that the animal overestimates cost, relative to benefit,” she explains. “This might be like a proxy state for pessimistic decision-making experienced during anxiety and depression, but may also occur, in a milder form, in you and me.”

Graybiel theorizes that this may tie back into the dopamine neurons that the striosomes project to: if this avoidance behavior is akin to avoidance observed in rodents, then they are stimulating a circuit that ultimately projects to dopamine neurons of the substantia nigra. There, she believes, they could act to suppress these dopamine neurons, which in turn project to the rest of the brain, creating some sort of long-term change in their neural activity. Or, put more simply, stimulation of these circuits creates a depressive funk.

Bottom up

Three floors below the Graybiel lab, postdoc Will Menegas is in the early stages of his own work untangling the role of dopamine and the striatum in decision-making. He joined Guoping Feng’s lab this summer after exploring the understudied “tail of the striatum” at Harvard University.

While dopamine pathways influence many parts of the brain, studies of their connections to the striatum have largely focused on its frontmost part, which is associated with valuation.

But as Menegas showed while at Harvard, dopamine neurons that project to the rear of the striatum are different. Those neurons get their input from parts of the brain associated with general arousal and sensation—and instead of responding to rewards, they respond to novelty and intense stimuli, like air puffs and loud noises.

In a new study published in Nature Neuroscience, Menegas used a neurotoxin to disrupt the dopamine projection from the substantia nigra to the posterior striatum to see how this circuit influences behavior. Normal mice approach novel items cautiously and back away after sniffing at them, but the mice in Menegas’ study failed to back away. They stopped avoiding a port that delivered an air puff to the face, and they didn’t behave like normal mice when Menegas dropped a strange new object—say, a Lego brick—into their cage. Disrupting the nigral-posterior striatal projection seemed to turn off their avoidance habit.

“These neurons reinforce avoidance the same way that canonical dopamine neurons reinforce approach,” Menegas explains. It’s a new role for dopamine, suggesting that there may be two different and distinct systems of reinforcement, led by the same neuromodulator in different parts of the striatum.

This research, and Graybiel’s discoveries on cost-benefit decision circuits, share clear parallels, though the precise links between the two phenomena are yet to be fully determined. Menegas plans to extend this line of research into social behavior and related disorders like autism in marmoset monkeys.

“Will wants to learn the methods that we use in our lab to work on marmosets,” Graybiel says. “I think that working together, this could become a wonderful story, because it would involve social interactions.”

“This is a very new angle, and it could really change our views of how the reward system works,” Feng says. “And we have very little understanding of social circuits so far, especially in higher organisms, so I think this would be very exciting. Whatever we learn, it’s going to be new.”

Human choices

Graybiel’s and Menegas’ projects, built on their preexisting work, are well developed—but they are far from the only McGovern-based explorations of how this brain region taps into our behaviors. Maiya Geddes, a visiting scientist in John Gabrieli’s lab, has recently published a paper exploring the little-known ways that aging affects the dopamine-based nigral-striatal-hippocampal learning and memory systems.

In Rebecca Saxe’s lab, postdoc Livia Tomova just kicked off a new pilot project using brain imaging to uncover dopamine-striatal circuitry behind social craving in humans and the urge to rejoin peers. “Could there be a craving response similar to hunger?” Tomova wonders. “No one has looked yet at the neural mechanisms of this.”

Graybiel also hopes to translate her findings into humans, beginning with collaborations with the Pizzagalli lab at McLean Hospital in Belmont. They are using fMRI to study whether patients with anxiety and depression show some of the same dysfunctions in the cortico-striatal circuitry that she discovered in her macaques.

If she’s right about tapping into mood states and affect, it would be an expanded role for the striatum—and one with significant potential therapeutic benefits. “Affect state” colors many psychological functions and disorders, from memory and perception, to depression, chronic stress, obsessive-compulsive disorder, and PTSD.

For a region of the brain once dismissed as inconsequential, McGovern researchers have shown the basal ganglia to influence not only our choices but our state of mind—suggesting that this “primitive” brain region may actually be at the heart of the human experience.

Tracking down changes in ADHD

Attention deficit hyperactivity disorder (ADHD) is marked by difficulty maintaining focus on tasks, and increased activity and impulsivity. These symptoms ultimately interfere with the ability to learn and function in daily tasks, but the source of the problem could lie at different levels of brain function, and it is hard to parse out exactly what is going wrong.

A new study co-authored by McGovern Institute Associate Investigator Michael Halassa has developed tasks that dissociate lower-level from higher-level brain functions, so that disruptions to these processes can be assessed more specifically in ADHD. The results of this study, carried out in collaboration with co-corresponding authors Wei Ji Ma and Andra Mihali and researchers from New York University, illuminate how brain function is disrupted in ADHD and highlight a role for perceptual deficits in this condition.

The underlying deficit in ADHD has largely been attributed to executive function — higher-order processing and the ability of the brain to integrate information and focus attention. But there have been some hints, largely through reports from those with ADHD, that the very ability to accurately receive sensory information might be altered. Some people with ADHD, for example, have reported impaired visual function and even changes in color processing. Cleanly separating these perceptual brain functions from the impact of higher-order cognitive processes has proven difficult, however. It is not clear whether people with and without ADHD encode visual signals received by the eye in the same way.

“We realized that psychiatric diagnoses in general are based on clinical criteria and patient self-reporting,” says Halassa, who is also a board certified psychiatrist and an assistant professor in MIT’s Department of Brain and Cognitive Sciences. “Psychiatric diagnoses are imprecise, but neurobiology is progressing to the point where we can use well-controlled parameters to standardize criteria, and relate disorders to circuits,” he explains. “If there are problems with attention, is it the spotlight of attention itself that’s affected in ADHD, or the ability of a person to control where this spotlight is focused?”

To test how people with and without ADHD encode visual signals in the brain, Halassa, Ma, Mihali, and collaborators devised a perceptual encoding task in which subjects were asked to provide answers to simple questions about the orientation and color of lines and shapes on a screen. The simplicity of this test aimed to remove high-level cognitive input and provide a measure of accurate perceptual coding.

To measure higher-level executive function, the researchers provided subjects with rules about which features and screen areas were relevant to the task, and they switched relevance throughout the test. They monitored whether subjects cognitively adapted to the switch in rules – an indication of higher-order brain function. The authors also analyzed psychometric curve parameters, common in psychophysics, but not yet applied to ADHD.

“These psychometric parameters give us specific information about the parts of sensory processing that are being affected,” explains Halassa. “So, if you were to put on sunglasses, that would shift the threshold, indicating that input is being affected, but this wouldn’t necessarily affect the slope of the psychometric function. If the slope is affected, this starts to reflect difficulty in seeing a line or color. In other words, these tests give us a finer readout of behavior, and how to map this onto particular circuits.”
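For readers unfamiliar with these terms, the sketch below shows one common way a psychometric function is parameterized: a cumulative Gaussian with a lapse rate. This is a generic textbook form, offered as an assumption about the kind of model meant here rather than the exact one used in the paper; the threshold slides the curve along the stimulus axis, while the slope sets how sharply performance rises.

```python
import numpy as np
from scipy.stats import norm

def psychometric(stimulus, threshold, slope, lapse=0.02):
    """Probability of a 'rightward/present' report as a function of stimulus strength."""
    p = norm.cdf(stimulus, loc=threshold, scale=1.0 / slope)  # steeper slope -> narrower transition
    return lapse + (1 - 2 * lapse) * p                        # small stimulus-independent error rate

x = np.linspace(-3, 3, 7)
print(psychometric(x, threshold=0.0, slope=1.0))  # baseline curve
print(psychometric(x, threshold=1.0, slope=1.0))  # threshold shift: like putting on sunglasses
print(psychometric(x, threshold=0.0, slope=0.5))  # shallower slope: noisier perceptual encoding
```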

The authors found that changes in visual perception were robustly associated with ADHD, and these changes were also correlated with cognitive function. Individuals with more clinically severe ADHD scored lower on executive function, and basic perception also tracked with these clinical records of disease severity. The authors could even sort ADHD subjects from controls based on their perceptual variability alone. In short, changes in perception itself are clearly present in this ADHD cohort, and they decline alongside changes in executive function.

“This was unexpected,” points out Halassa. “We didn’t expect so much to be explained by lower sensitivity to stimuli, and to see that these tasks become harder as cognitive pressure increases. It wasn’t clear that cognitive circuits might influence processing of stimuli.”

Understanding the true basis of changes in behavior in disorders such as ADHD can be hard to tease apart, but the study gives more insight into changes in the ADHD brain, and supports the idea that quantitative follow up on self-reporting by patients can drive a stronger understanding — and possible targeted treatment — of such disorders. Testing a larger number of ADHD patients and validating these measures on a larger scale is now the next research priority.

Meeting of the minds

In the summer of 2006, before their teenage years began, Mahdi Ramadan and Alexi Choueiri were spirited from their homes amid political unrest in Lebanon. Evacuated on short notice by the U.S. Marines, they were among 2,000 refugees transported to the U.S. aboard the USS Nashville.

The two never met in their homeland, nor on the transatlantic journey, and after arriving in the U.S. they went their separate ways. Ramadan and his family moved to Seattle, Washington. Choueiri’s family settled in Chandler, Arizona, where they already had some extended family.

Yet their paths converged 11 years later as graduate students in MIT’s Department of Brain and Cognitive Sciences (BCS). One day last fall, on a walk across campus, Ramadan and Choueiri slowly unraveled their connection. With increasing excitement, they narrowed it down by year, by month, and eventually, by boat, to discover just how closely their lives had once come to one another.

Lebanon, the only Middle Eastern country without a desert, enjoys a lush, Mediterranean climate. Amid this natural beauty, though, the country struggles under the weight of deep political and cultural divides that sometimes erupt into conflict.

Despite different Lebanese cultural backgrounds — Ramadan’s family is Muslim and Choueiri’s Christian — they have had remarkably similar experiences as refugees from Lebanon. Both credit those experiences with motivating their interest in neuroscience. Questions about human behavior — How do people form beliefs about the world? Can those beliefs really change? — led them to graduate work at MIT.

In pursuit of knowledge

When they first immigrated to the U.S., school symbolized survival for Ramadan and Choueiri. Not only was education a mode of improving their lives and supporting their families, it was a search for objectivity in their recently upended worlds.

As the family’s primary English speaker, Ramadan became a bulwark for his family in their new country, especially in medical matters; his little sister, Ghida, has cerebral palsy. Though his family has limited financial resources, he emphasizes that both he and his sister have been constantly supported by their parents in pursuit of their educations.

In fact, Ramadan feels motivated by Ghida’s determination to complete her degree in occupational therapy: “That to me is really inspirational, her resilience in the face of her disability and in the face of assumptions that people make about capability. She’s really sassy, she’s really witty, she’s really funny, she’s really intelligent, and she doesn’t see her disability as a disability. She actually thinks it’s an advantage — it actually motivated her to pursue [her education] even more.”

Ramadan hopes his own educational journey, from a low-income evacuee to a neuroscience PhD, can show others like him that success is possible.

Choueiri also relied on academics to adapt to his new world in Arizona. Even in Lebanon, he remembers taking solace from a chaotic world in his education, and once in the U.S., he dove headfirst into his studies.

Choueiri’s hometown in Arizona sometimes felt homogenous, so coming to MIT has been a staggering — and welcome — experience. “The diversity here is phenomenal: meeting people from different cultures, upbringings, countries,” he says. “I love making friends from all over and learning their stories. Being a neuroscientist, I like to know how they were brought up and how their ideas were formed. … It’s like Disneyland for me. I feel like I’m coming to Disneyland every day and high-fiving Mickey Mouse.”

At home at MIT

Ramadan and Choueiri revel in the freedom of thought they have found in their academic home here. They say they feel taken seriously as students and, more importantly, as thinkers. The BCS department values interdisciplinary thought, and cultivates extracurricular student activities like philosophy discussion groups, the development of neuroscience podcasts, and independent, student-led lectures on myriad neuroscience-adjacent topics.

Both students were drawn to neuroscience not only by their experiences as Lebanese-Americans, but by trying to make sense of what happened to them at a young age.

Ramadan became interested in neuroplasticity through self-observation. “You know that feeling of childhood you have where everything is magical and you’re not really aware of things around you? I feel like when I immigrated to the U.S., that feeling went away and I had to become extra-aware of everything because I had to adapt so quickly. So, something that intrigued me about neuroscience is how the brain is able to adapt so quickly and how different experiences can shape and rewire your brain.”

Now in his second year, Ramadan plans to pursue his interest in neuroplasticity in Professor Mehrdad Jazayeri’s lab at the McGovern Institute by investigating how learning changes the brain’s underlying neural circuits; understanding the physical mechanism of plasticity has application to both disease states and artificial intelligence.

Choueiri, a third-year student in the program, is a member of Professor Ed Boyden’s lab at the McGovern Institute. While his interest in neuroscience was similarly driven by his experience as an evacuee, his approach is outward-looking, focused on making sense of people’s choices. Ultimately, the brain controls human ability to perceive, learn, and choose through physiological changes; Choueiri wants to understand not just the human brain, but also the human condition — and to use that understanding to alleviate pain and suffering.

“Growing up in Lebanon, with different religions and war … I became fundamentally interested in human behavior, irrationality, and conflict, and how can we resolve those things … and maybe there’s an objective way to really make sense of where these differences are coming from,” he says. In the Synthetic Neurobiology Group, Choueiri’s research involves developing neurotechnologies to map the molecular interactions of the brain, to reveal the fundamental mechanisms of brain function and repair dysfunction.

Shared identities

As evacuees, Ramadan and Choueiri left their country without notice and without saying goodbye. However, in other ways, their experience was not unlike an immigrant experience. This sometimes makes identifying as a refugee in the current political climate complex, as refugees from Syria and other war-ravaged regions struggle to make a home in the U.S. Still, both believe that sharing their personal experience may help others in difficult positions to see that they do belong in the U.S., and at MIT.

Despite their American identity, Ramadan and Choueiri also share a palpable love for Lebanese culture. They extol the diversity of Lebanese cuisine, which is served mezze-style, making meals an experience full of variety, grilled food, and yogurt dishes. The Lebanese diaspora is another source of great pride for them. Though the population of Lebanon is less than 5 million, as many as 14 million live abroad.

It’s all the more remarkable, then, that Ramadan and Choueiri intersected at MIT, some 6,000 miles from their homeland. The bond they have forged since, through their common heritage, experiences, and interests, is deeply meaningful to both of them.

“I was so happy to find another student who has this story because it allows me to reflect back on those experiences and how they changed me,” says Ramadan. “It’s like a mirror image. … Was it a coincidence, or were our lives so similar that they led to this point?”

This story was written by Bridget E. Begg at MIT’s Office of Graduate Education.

Study reveals how the brain overcomes its own limitations

Imagine trying to write your name so that it can be read in a mirror. Your brain has all of the visual information you need, and you’re a pro at writing your own name. Still, this task is very difficult for most people. That’s because it requires the brain to perform a mental transformation that it’s not familiar with: using what it sees in the mirror to accurately guide your hand to write backward.

MIT neuroscientists have now discovered how the brain tries to compensate for its poor performance in tasks that require this kind of complicated transformation. As it also does in other types of situations where it has little confidence in its own judgments, the brain attempts to overcome its difficulties by relying on previous experiences.

“If you’re doing something that requires a harder mental transformation, and therefore creates more uncertainty and more variability, you rely on your prior beliefs and bias yourself toward what you know how to do well, in order to compensate for that variability,” says Mehrdad Jazayeri, the Robert A. Swanson Career Development Professor of Life Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

This strategy actually improves overall performance, the researchers report in their study, which appears in the Oct. 24 issue of the journal Nature Communications. Evan Remington, a McGovern Institute postdoc, is the paper’s lead author, and technical assistant Tiffany Parks is also an author on the paper.

Noisy computations

Neuroscientists have known for many decades that the brain does not faithfully reproduce exactly what the eyes see or what the ears hear. Instead, there is a great deal of “noise” — random fluctuations of electrical activity in the brain, which can come from uncertainty or ambiguity about what we are seeing or hearing. This uncertainty also comes into play in social interactions, as we try to interpret the motivations of other people, or when recalling memories of past events.

Previous research has revealed many strategies that help the brain to compensate for this uncertainty. Using a framework known as Bayesian integration, the brain combines multiple, potentially conflicting pieces of information and values them according to their reliability. For example, if given information by two sources, we’ll rely more on the one that we believe to be more credible.
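A standard way to make this concrete is inverse-variance weighting of two Gaussian estimates. The short sketch below is the textbook formulation of Bayesian cue combination, not code from any of the studies described here.

```python
def fuse_cues(mu1, var1, mu2, var2):
    """Bayesian combination of two Gaussian estimates, weighted by reliability (1/variance)."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)  # the more reliable cue pulls the estimate harder
    var = 1.0 / (w1 + w2)                   # the fused estimate is more certain than either cue alone
    return mu, var

# A precise source (variance 1) dominates a vague one (variance 9):
print(fuse_cues(10.0, 1.0, 20.0, 9.0))  # -> (11.0, 0.9)
```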

In other cases, such as making movements when we’re uncertain exactly how to proceed, the brain will rely on an average of its past experiences. For example, when reaching for a light switch in a dark, unfamiliar room, we’ll move our hand toward a certain height and close to the doorframe, where past experience suggests a light switch might be located.

All of these strategies have been previously shown to work together to increase bias toward a particular outcome, which makes our overall performance better because it reduces variability, Jazayeri says.

Noise can also occur in the mental conversion of sensory information into a motor plan. In many cases, this is a straightforward task in which noise plays a minimal role — for example, reaching for a mug that you can see on your desk. However, for other tasks, such as the mirror-writing exercise, this conversion is much more complicated.

“Your performance will be variable, and it’s not because you don’t know where your hand is, and it’s not because you don’t know where the image is,” Jazayeri says. “It involves an entirely different form of uncertainty, which has to do with processing information. The act of performing mental transformations of information clearly induces variability.”

That type of mental conversion is what the researchers set out to explore in the new study. To do that, they asked subjects to perform three different tasks. For each one, they compared subjects’ performance in a version of the task where mapping sensory information to motor commands was easy, and a version where an extra mental transformation was required.

In one example, the researchers first asked participants to draw a line the same length as a line they were shown, which was always between 5 and 10 centimeters. In the more difficult version, they were asked to draw a line 1.5 times longer than the original line.

The results from this set of experiments, as well as the other two tasks, showed that in the version that required difficult mental transformations, people altered their performance using the same strategies that they use to overcome noise in sensory perception and other realms. For example, in the line-drawing task, in which the participants had to draw lines ranging from 7.5 to 15 centimeters, depending on the length of the original line, they tended to draw lines that were closer to the average length of all the lines they had previously drawn. This made their responses overall less variable and also more accurate.

“This regression to the mean is a very common strategy for making performance better when there is uncertainty,” Jazayeri says.
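A small simulation illustrates why this strategy helps. Under the simplifying assumptions that produced lengths carry Gaussian noise and that responses are shrunk a fixed fraction of the way toward the mean of the stimulus range (numbers chosen for illustration, not taken from the paper), the biased responses end up closer to the targets on average:

```python
import numpy as np

rng = np.random.default_rng(1)

# Target lengths (cm) for the harder task, drawn uniformly from 7.5-15 cm.
targets = rng.uniform(7.5, 15.0, size=10_000)
prior_mean = targets.mean()

# The mental 1.5x transformation is noisy: responses scatter around the target.
noisy = targets + rng.normal(0.0, 2.0, size=targets.size)

# Strategy: shrink each noisy response partway toward the mean of past trials.
shrunk = 0.6 * noisy + 0.4 * prior_mean

def mse(responses):
    return float(np.mean((responses - targets) ** 2))

print("unbiased but variable:", round(mse(noisy), 2))    # larger average error
print("biased toward the mean:", round(mse(shrunk), 2))  # smaller average error overall
```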

Noise reduction

The new findings led the researchers to hypothesize that when people get very good at a task that requires complex computation, the noise will become smaller and less detrimental to overall performance. That is, people will trust their computations more and stop relying on averages.

“As it gets easier, our prediction is the bias will go away, because that computation is no longer a noisy computation,” Jazayeri says. “You believe in the computation; you know the computation is working well.”

The researchers now plan to further study whether people’s biases decrease as they learn to perform a complicated task better. In the experiments they performed for the Nature Communications study, they found some preliminary evidence that trained musicians performed better in a task that involved producing time intervals of a specific duration.

The research was funded by the Alfred P. Sloan Foundation, the Esther A. and Joseph Klingenstein Fund, the Simons Foundation, the McKnight Endowment Fund for Neuroscience, and the McGovern Institute.

Mark Harnett’s “Holy Grail” experiment

Neurons in the human brain receive electrical signals from thousands of other cells, and long neural extensions called dendrites play a critical role in incorporating all of that information so the cells can respond appropriately.

Using hard-to-obtain samples of human brain tissue, McGovern neuroscientist Mark Harnett has now discovered that human dendrites have different electrical properties from those of other species. His team’s studies reveal that electrical signals weaken more as they flow along human dendrites, resulting in a higher degree of electrical compartmentalization, meaning that small sections of dendrites can behave independently from the rest of the neuron.

These differences may contribute to the enhanced computing power of the human brain, the researchers say.

Recognizing the partially seen

When we open our eyes in the morning and take in that first scene of the day, we don’t give much thought to the fact that our brain is processing the objects within our field of view with great efficiency and that it is compensating for a lack of information about our surroundings — all in order to allow us to go about our daily functions. The glass of water you left on the nightstand when preparing for bed is now partially blocked from your line of sight by your alarm clock, yet you know that it is a glass.

This seemingly simple ability of humans to recognize partially occluded objects — defined in this situation as the effect of one object in a 3-D space blocking another object from view — has been a complicated problem for the computer vision community. Martin Schrimpf, a graduate student in the DiCarlo lab in the Department of Brain and Cognitive Sciences at MIT, explains that machines have become increasingly adept at recognizing whole items quickly and confidently, but when something covers part of an item from view, it becomes much harder for the models to recognize it accurately.

“For models from computer vision to function in everyday life, they need to be able to digest occluded objects just as well as whole ones — after all, when you look around, most objects are partially hidden behind another object,” says Schrimpf, co-author of a paper on the subject that was recently published in the Proceedings of the National Academy of Sciences (PNAS).

In the new study, he says, “we dug into the underlying computations in the brain and then used our findings to build computational models. By recapitulating visual processing in the human brain, we are thus hoping to also improve models in computer vision.”

How are we as humans able to perform this everyday task repeatedly, without putting much thought or energy into it, identifying whole scenes quickly and accurately after ingesting just pieces? Researchers in the study started with the human visual cortex as a model for how to improve the performance of machines in this setting, says Gabriel Kreiman, an affiliate of the MIT Center for Brains, Minds and Machines. Kreiman is a professor of ophthalmology at Boston Children’s Hospital and Harvard Medical School and was lead principal investigator for the study.

In their paper, “Recurrent computations for visual pattern completion,” the team showed how they developed a computational model, inspired by physiological and anatomical constraints, that was able to capture the behavioral and neurophysiological observations during pattern completion. In the end, the model provided useful insights towards understanding how to make inferences from minimal information.

Work for this study was conducted at the Center for Brains, Minds and Machines within the McGovern Institute for Brain Research at MIT.

School of Science welcomes 10 professors

The MIT School of Science recently welcomed 10 new professors in the departments of Brain and Cognitive Sciences, Chemistry, Biology, Physics, Mathematics, and Earth, Atmospheric and Planetary Sciences, including Ila Fiete, who joins the Department of Brain and Cognitive Sciences.

Ila Fiete uses computational and theoretical tools to better understand the dynamical mechanisms and coding strategies that underlie computation in the brain, with a focus on elucidating how plasticity and development shape networks to perform computation and why information is encoded the way that it is. Her recent focus is on error control in neural codes, rules for synaptic plasticity that enable neural circuit organization, and questions at the nexus of information and dynamics in neural systems, such as understanding how coding and statistics fundamentally constrain dynamics and vice versa.

Tristan Collins conducts research at the intersection of geometric analysis, partial differential equations, and algebraic geometry. In joint work with Valentino Tosatti, Collins described the singularity formation of the Ricci flow on Kähler manifolds in terms of algebraic data. In recent work with Gabor Szekelyhidi, he gave a necessary and sufficient algebraic condition for the existence of Ricci-flat metrics, which play an important role in string theory and mathematical physics. This result led to the discovery of infinitely many new Einstein metrics on the 5-dimensional sphere. With Shing-Tung Yau and Adam Jacob, Collins is currently studying the relationship between categorical stability conditions and the existence of solutions to differential equations arising from mirror symmetry.

Collins earned his BS in mathematics at the University of British Columbia in 2009, after which he completed his PhD in mathematics at Columbia University in 2014 under the direction of Duong H. Phong. Following a four-year appointment as a Benjamin Peirce Assistant Professor at Harvard University, Collins joins MIT as an assistant professor in the Department of Mathematics.

Julien de Wit develops and applies new techniques to study exoplanets, their atmospheres, and their interactions with their stars. While a graduate student in the Sara Seager group at MIT, he developed innovative analysis techniques to map exoplanet atmospheres, studied the radiative and tidal planet-star interactions in eccentric planetary systems, and constrained the atmospheric properties and mass of exoplanets solely from transmission spectroscopy. He plays a critical role in the TRAPPIST/SPECULOOS project, headed by the Université de Liège, leading the atmospheric characterization of the newly discovered TRAPPIST-1 planets, for which he has already obtained significant results with the Hubble Space Telescope. De Wit’s efforts are now also focused on expanding the SPECULOOS network of telescopes in the northern hemisphere to continue the search for new potentially habitable TRAPPIST-1-like systems.

De Wit earned a BEng in physics and mechanics from the Université de Liège in Belgium in 2008, an MS in aeronautic engineering and an MRes in astrophysics, planetology, and space sciences from the Institut Supérieur de l’Aéronautique et de l’Espace at the Université de Toulouse, France in 2010; he returned to the Université de Liège for an MS in aerospace engineering, completed in 2011. After finishing his PhD in planetary sciences in 2014 and a postdoc at MIT, both under the direction of Sara Seager, he joins the MIT faculty in the Department of Earth, Atmospheric and Planetary Sciences as an assistant professor.

After earning a BS in mathematics and physics at the University of Michigan, Fiete obtained her PhD in 2004 at Harvard University in the Department of Physics. While holding an appointment at the Kavli Institute for Theoretical Physics at the University of California, Santa Barbara from 2004 to 2006, she was also a visiting member of the Center for Theoretical Biophysics at the University of California at San Diego. Fiete subsequently spent two years at Caltech as a Broad Fellow in brain circuitry, and in 2008 joined the faculty of the University of Texas at Austin. She joins the MIT faculty in the Department of Brain and Cognitive Sciences as an associate professor with tenure.

Ankur Jain explores the biology of RNA aggregation. Several genetic neuromuscular disorders, such as myotonic dystrophy and amyotrophic lateral sclerosis, are caused by expansions of nucleotide repeats in their cognate disease genes. Such repeats cause the transcribed RNA to form pathogenic clumps or aggregates. Jain uses a variety of biophysical approaches to understand how the RNA aggregates form, and how they can be disrupted to restore normal cell function. Jain will also study the role of RNA-DNA interactions in chromatin organization, investigating whether the RNA transcribed from telomeres (the protective repetitive sequences that cap the ends of chromosomes) undergoes the phase separation that characterizes repeat expansion diseases.

Jain completed a bachelor of technology degree in biotechnology and biochemical engineering at the Indian Institute of Technology Kharagpur, India, in 2007, followed by a PhD in biophysics and computational biology at the University of Illinois at Urbana-Champaign under the direction of Taekjip Ha in 2013. After a postdoc at the University of California at San Francisco, he joins the MIT faculty in the Department of Biology as an assistant professor with an appointment as a member of the Whitehead Institute for Biomedical Research.

Kiyoshi Masui works to understand fundamental physics and the evolution of the universe through observations of the large-scale structure — the distribution of matter on scales much larger than galaxies. He works principally with radio-wavelength surveys to develop new observational methods such as hydrogen intensity mapping and fast radio bursts. Masui has shown that such observations will ultimately permit precise measurements of properties of the early and late universe and enable sensitive searches for primordial gravitational waves. To this end, he is working with a new generation of rapid-survey digital radio telescopes that have no moving parts and rely on signal processing software running on large computer clusters to focus and steer, including work on the Canadian Hydrogen Intensity Mapping Experiment (CHIME).

Masui obtained a BSCE in engineering physics at Queen’s University, Canada in 2008 and a PhD in physics at the University of Toronto in 2013 under the direction of Ue-Li Pen. After postdoctoral appointments at the University of British Columbia as the Canadian Institute for Advanced Research Global Scholar and the Canadian Institute for Theoretical Astrophysics National Fellow, Masui joins the MIT faculty in the Department of Physics as an assistant professor.

Phiala Shanahan studies theoretical nuclear and particle physics, in particular the structure and interactions of hadrons and nuclei from the fundamental (quark and gluon) degrees of freedom encoded in the Standard Model of particle physics. Shanahan’s recent work has focused on the role of gluons, the force carriers of the strong interactions described by quantum chromodynamics (QCD), in hadron and nuclear structure by using analytic tools and high-performance supercomputing. She recently achieved the first calculation of the gluon structure of light nuclei, making predictions that will be testable in new experiments proposed at Jefferson National Accelerator Facility and at the planned Electron-Ion Collider. She has also undertaken extensive studies of the role of strange quarks in the proton and light nuclei that sharpen theory predictions for dark matter cross-sections in direct detection experiments. To overcome computational limitations in QCD calculations for hadrons and in particular for nuclei, Shanahan is pursuing a program to integrate modern machine learning techniques in computational nuclear physics studies.

Shanahan obtained her BS in 2012 and her PhD in 2015, both in physics, from the University of Adelaide. She completed postdoctoral work at MIT in 2017, then held a joint position as an assistant professor at the College of William and Mary and senior staff scientist at the Thomas Jefferson National Accelerator Facility until 2018. She returns to MIT in the Department of Physics as an assistant professor.

Nike Sun works in probability theory at the interface of statistical physics and computation. Her research focuses in particular on phase transitions in average-case (randomized) formulations of classical computational problems. Her joint work with Jian Ding and Allan Sly establishes the satisfiability threshold of random k-SAT for large k, and relatedly the independence ratio of random regular graphs of large degree. Both are long-standing open problems where heuristic methods of statistical physics yield detailed conjectures, but few rigorous techniques exist. More recently she has been investigating phase transitions of dense graph models.

Sun completed a BA in mathematics and an MA in statistics at Harvard in 2009, and an MASt in mathematics at Cambridge in 2010. She received her PhD in statistics from Stanford University in 2014 under the supervision of Amir Dembo. She held a Schramm fellowship at Microsoft New England and MIT Mathematics in 2014-2015 and a Simons postdoctoral fellowship at the University of California at Berkeley in 2016, and joined the Berkeley Department of Statistics as an assistant professor in 2016. She returns to the MIT Department of Mathematics as an associate professor with tenure.

Alison Wendlandt focuses on the development of selective, catalytic reactions using the tools of organic and organometallic synthesis and physical organic chemistry. Mechanistic study plays a central role in the development of these new transformations. Her projects involve the design of new catalysts and catalytic transformations, identification of important applications for selective catalytic processes, and elucidation of new mechanistic principles to expand powerful existing catalytic reaction manifolds.

Wendlandt received a BS in chemistry and biological chemistry from the University of Chicago in 2007, an MS in chemistry from Yale University in 2009, and a PhD in chemistry from the University of Wisconsin at Madison in 2015 under the direction of Shannon S. Stahl. Following an NIH Ruth L. Kirschstein Postdoctoral Fellowship at Harvard University, Wendlandt joins the MIT faculty in the Department of Chemistry as an assistant professor.

Chenyang Xu specializes in higher-dimensional algebraic geometry, an area that involves classifying algebraic varieties, primarily through the minimal model program (MMP). The MMP was introduced by Fields Medalist S. Mori in the early 1980s to make advances in higher-dimensional birational geometry, and was further developed by Hacon and McKernan in the mid-2000s so that it could be applied to other questions. Collaborating with Hacon, Xu extended the MMP to varieties satisfying certain conditions, such as those of characteristic p, and, with Hacon and McKernan, proved a fundamental conjecture on the MMP, generating a great deal of follow-up activity. In collaboration with Chi Li, Xu proved a conjecture of Gang Tian concerning higher-dimensional Fano varieties, a significant achievement. In a series of papers with different collaborators, he successfully applied the MMP to singularities.

Xu received his BS in 2002 and MS in 2004 in mathematics from Peking University, and completed his PhD at Princeton University under János Kollár in 2008. He came to MIT as a CLE Moore Instructor in 2008-2011, and was subsequently appointed assistant professor at the University of Utah. He returned to Peking University as a research fellow at the Beijing International Center of Mathematical Research in 2012, and was promoted to professor in 2013. Xu joins the MIT faculty as a full professor in the Department of Mathematics.

Zhiwei Yun’s research is at the crossroads of algebraic geometry, number theory, and representation theory. He studies geometric structures aiming at solving problems in representation theory and number theory, especially those in the Langlands program. While he was a CLE Moore Instructor at MIT, he started to develop the theory of rigid automorphic forms, and used it to answer an open question of J-P Serre on motives, which also led to a major result on the inverse Galois problem in number theory. More recently, in joint work with Wei Zhang, he gave a geometric interpretation of higher derivatives of automorphic L-functions in terms of intersection numbers, which sheds new light on the geometric analogue of the Birch and Swinnerton-Dyer conjecture.

Yun earned his BS at Peking University in 2004, after which he completed his PhD at Princeton University in 2009 under the direction of Robert MacPherson. After appointments at the Institute for Advanced Study and as a CLE Moore Instructor at MIT, he held faculty appointments at Stanford and Yale. He returned to the MIT Department of Mathematics as a full professor in the spring of 2018.

Mark Harnett named Vallee Foundation Scholar

The Bert L. and N. Kuggie Vallee Foundation has named McGovern Institute investigator Mark Harnett a 2018 Vallee Scholar. The Vallee Scholars Program recognizes original, innovative, and pioneering work by early-career scientists at a critical juncture in their careers and provides $300,000 in discretionary funds to be spent over four years for basic biomedical research. Harnett is among five researchers named to this year’s Vallee Scholars Program.

Harnett, who is also the Fred and Carole Middleton Career Development Assistant Professor in the Department of Brain and Cognitive Sciences, is being recognized for his work exploring how the biophysical features of neurons give rise to the computational power of the brain. By exploiting new technologies and approaches at the interface of biophysics and systems neuroscience, research in the Harnett lab aims to provide a new understanding of the biology underlying how mammalian brains learn. This may open new areas of research into brain disorders characterized by atypical learning and memory (such as dementia and schizophrenia) and may also have important implications for designing new, brain-inspired artificial neural networks.

The Vallee Foundation was established in 1996 by Bert and Kuggie Vallee to foster originality, creativity, and leadership within biomedical scientific research and medical education. The foundation’s goal to fund originality, innovation, and pioneering work “recognizes the future promise of these scientists who are dedicated to understanding fundamental biological processes.” Harnett joins a list of 24 Vallee Scholars, including McGovern investigator Feng Zhang, who have been appointed to the program since its inception in 2013.