McGovern scientists named STAT Wunderkinds

McGovern researchers Sam Rodriques and Jonathan Strecker have been named to the 2019 class of STAT Wunderkinds. The award, whose class of 22 researchers was selected from a national pool of hundreds of nominees, recognizes trail-blazing scientists who are on the cusp of launching their careers but are not yet fully independent.

“We were thrilled to receive this news,” said Robert Desimone, director of the McGovern Institute. “It’s great to see the remarkable progress being made by young scientists in McGovern labs be recognized in this way.”

Finding context

Sam Rodriques works in Ed Boyden’s lab at the McGovern Institute, where he develops new technologies that enable researchers to understand the behaviors of cells within their native spatial and temporal context.

“Psychiatric disease is a huge problem, but only a handful of first-in-class drugs for psychiatric diseases have been approved since the 1960s,” explains Rodriques, who is also affiliated with the MIT Media Lab and the Broad Institute. “Coming up with novel cures is going to require new ways to generate hypotheses about the biological processes that underpin disease.”

Rodriques also works on several other technologies within the Boyden lab, including methods that preserve spatial information in molecular mapping, approaches for tracing neural connectivity in the brain, and implosion fabrication, or “Imp Fab.” This nanofabrication technology allows objects to be evenly shrunk to the nanoscale and has a wide range of potential applications, including building new miniature devices for examining neural function.

“I was very surprised, not expecting it at all!” says Rodriques of becoming a STAT Wunderkind. “I’m sure that all of the hundreds of applicants are very accomplished scientists, and so to be chosen like this is really an honor.”

New tools for gene editing

Jonathan Strecker is a postdoc in Feng Zhang’s lab, affiliated with both the McGovern Institute and the Broad Institute. While CRISPR-Cas9 continues to have a profound effect and huge potential for research, biomedical, and agricultural applications, the ability to move entire genes into specific target locations has remained out of reach.

“Genome editing with CRISPR-Cas enzymes typically involves cutting and disrupting genes, or making certain base edits,” explains Strecker. “However, inserting large pieces of DNA is still hard to accomplish.”

As a postdoctoral researcher in the lab of CRISPR pioneer Feng Zhang, Strecker led research that showed how large sequences could be inserted into a genome at a given location.

“Nature often has interesting solutions to these problems and we were fortunate to identify and characterize a remarkable CRISPR system from cyanobacteria that functions as a programmable transposase.”

Importantly, the system he discovered, called CAST, doesn’t require cellular machinery to insert DNA. This means that CAST could work in many cell types, including those that have stopped dividing, such as neurons, a possibility the lab is now pursuing.

By finding new sources of inspiration, be it in nature or art, both Rodriques and Strecker join a stellar lineup of young investigators recognized for creativity and innovation.

MIT appoints 14 faculty members to named professorships

The School of Science has announced that 14 of its faculty members have been appointed to named professorships. The faculty members selected for these positions receive additional support to pursue their research and develop their careers.

Riccardo Comin is an assistant professor in the Department of Physics. He has been named a Class of 1947 Career Development Professor, a three-year professorship granted in recognition of the recipient’s outstanding work in both research and teaching. Comin works in condensed matter physics: he synthesizes new materials and investigates their solid-state physics through spectroscopy and scattering experiments. Specifically, the Comin lab aims to discover and characterize electronic phases of quantum materials. Recently, his lab, in collaboration with colleagues, discovered that weaving a conductive material into a particular pattern known as the “kagome” pattern can result in quantum behavior when electricity is passed through it.

Joseph Davis, assistant professor in the Department of Biology, has been named a Whitehead Career Development Professor. He looks at how cells build and deconstruct complex molecular machinery. The work of his lab group relies on biochemistry, biophysics, and structural approaches that include spectrometry and microscopy. A current project investigates the formation of the ribosome, an essential component in all cells. His work has implications for metabolic engineering, drug delivery, and materials science.

Lawrence Guth is now the Claude E. Shannon (1940) Professor of Mathematics. Guth explores harmonic analysis and combinatorics, and he is also interested in metric geometry and in identifying connections between geometric inequalities and topology. Metric geometry revolves around estimating measurements, including length, area, volume, and distance, while combinatorial geometry is essentially the estimation of the intersections of patterns in simple shapes, including lines and circles.

Michael Halassa, an assistant professor in the Department of Brain and Cognitive Sciences, will hold the three-year Class of 1958 Career Development Professorship. His area of interest is brain circuitry. By investigating the networks and connections in the brain, he hopes to understand how they operate — and identify any ways in which they might deviate from normal operations, causing neurological and psychiatric disorders. Several publications from his lab discuss improvements in the treatment of the deleterious symptoms of autism spectrum disorder and schizophrenia, and his latest work provides insights into how the brain filters out distractions, particularly noise. Halassa is an associate investigator at the McGovern Institute for Brain Research and an affiliate member of the Picower Institute for Learning and Memory.

Sebastian Lourido, an assistant professor and the new Latham Family Career Development Professor in the Department of Biology for the next three years, works on treatments for infectious disease by studying parasitic vulnerabilities. Focusing on human pathogens, Lourido and his lab investigate, at the molecular level, what allows parasites to be so widespread and deadly. This includes exploring how calcium regulates eukaryotic cells and, in turn, affects processes such as muscle contraction and membrane repair, in addition to kinase responses.

Brent Minchew has been named a Cecil and Ida Green Career Development Professor for a three-year term. Minchew, a faculty member in the Department of Earth, Atmospheric and Planetary Sciences, studies glaciers using remote sensing methods, such as interferometric synthetic aperture radar. His research spans glacier mechanics, rheology, and interactions with the surrounding environment, including glaciers’ responses to climate change. His group recently determined that, in a worst-case climate projection, Antarctica would not contribute as much as predicted to rising sea level.

Elly Nedivi, a professor in the departments of Brain and Cognitive Sciences and Biology, has been named the inaugural William R. (1964) and Linda R. Young Professor. She works on brain plasticity, the brain’s ability to adapt with experience, by identifying genes that play a role in plasticity and their neuronal and synaptic functions. In one of her lab’s recent publications, they suggest that variants of a particular gene may undermine expression or production of a protein, increasing the risk of bipolar disorder. In addition, she collaborates with others at MIT to develop new microscopy tools that allow better analysis of brain connectivity. Nedivi is also a member of the Picower Institute for Learning and Memory.

Andrei Negut has been named a Class of 1947 Career Development Professor for a three-year term. Negut, a member of the Department of Mathematics, focuses on problems in geometric representation theory, a topic that requires simultaneous investigation in algebraic geometry and representation theory, with implications for mathematical physics, symplectic geometry, combinatorics, and probability theory.

Matěj Peč, the Victor P. Starr Career Development Professor in the Department of Earth, Atmospheric and Planetary Sciences until 2021, studies how the movement of the Earth’s tectonic plates affects rocks, mechanically and microstructurally. To investigate such a large-scale topic, he conducts high-pressure, high-temperature laboratory experiments that simulate the driving forces associated with plate motion, and compares the results with natural observations and theoretical modeling. His lab has identified a particular boundary beneath the Earth’s crust where rock properties shift from brittle, like peanut brittle, to viscous, like honey, and determined how that layer accommodates the strain that builds between the two. In his investigations, he also considers the effect on melt generation miles underground.

Kerstin Perez has been named the three-year Class of 1948 Career Development Professor in the Department of Physics. Her research interest is dark matter. She uses novel analytical tools, such as a balloon-borne instrument that, with the help of cosmic rays, can detect new particle interactions in space through processes similar to those of a particle collider (like the Large Hadron Collider). In another research project, Perez uses a satellite telescope array on Earth to search for X-ray signatures of mysterious particles. Her work requires heavy involvement with collaborative observatories, instruments, and telescopes. Perez is affiliated with the Kavli Institute for Astrophysics and Space Research.

Bjorn Poonen, named a Distinguished Professor of Science in the Department of Mathematics, studies number theory and algebraic geometry. He, his colleagues, and his lab members generate algorithms that can solve polynomial equations with the particular requirement that the solutions be rational numbers. These types of problems can be useful in encoding data. He also helps to determine what is undeterminable; that is, he explores the limits of computing.

Daniel Suess, named a Class of 1948 Career Development Professor in the Department of Chemistry, uses molecular chemistry to explain global biogeochemical cycles. Working in both inorganic and biological chemistry, Suess and his lab seek to understand complex and challenging reactions and the clustering of particular chemical elements and their catalysts. Most notably, these include reactions that are essential to solar fuels. Suess’s efforts to investigate both biological and synthetic systems have the broad aims of improving human health and decreasing environmental impacts.

Alison Wendlandt is the new holder of the five-year Cecil and Ida Green Career Development Professorship. In the Department of Chemistry, the Wendlandt research group focuses on physical organic chemistry and organic and organometallic synthesis to develop reaction catalysts. Her team concentrates on designing new catalysts, identifying processes to which these catalysts can be applied, and determining principles that can expand preexisting reactions. Her team’s efforts span synthetic organic chemistry, reaction kinetics, and mechanisms.

Julien de Wit, a Department of Earth, Atmospheric and Planetary Sciences assistant professor, has been named a Class of 1954 Career Development Professor. He combines math and science to answer big-picture planetary questions. Using data science, de Wit develops new analytical techniques for mapping exoplanetary atmospheres, studies planet-star interactions of planetary systems, and determines atmospheric and planetary properties of exoplanets from spectroscopic information. He is a member of the scientific team behind the Search for habitable Planets EClipsing ULtra-cOOl Stars (SPECULOOS) and the TRAnsiting Planets and PlanetesImals Small Telescope (TRAPPIST), an international collection of observatories. He is affiliated with the Kavli Institute.

Drug combination reverses hypersensitivity to noise

People with autism often experience hypersensitivity to noise and other sensory input. MIT neuroscientists have now identified two brain circuits that help tune out distracting sensory information, and they have found a way to reverse noise hypersensitivity in mice by boosting the activity of those circuits.

One of the circuits the researchers identified is involved in filtering noise, while the other exerts top-down control by allowing the brain to switch its attention between different sensory inputs.

The researchers showed that restoring the function of both circuits worked much better than treating either circuit alone. This demonstrates the benefits of mapping and targeting multiple circuits involved in neurological disorders, says Michael Halassa, an assistant professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research.

“We think this work has the potential to transform how we think about neurological and psychiatric disorders, [so that we see them] as a combination of circuit deficits,” says Halassa, the senior author of the study. “The way we should approach these brain disorders is to map, to the best of our ability, what combination of deficits are there, and then go after that combination.”

MIT postdoc Miho Nakajima and research scientist L. Ian Schmitt are the lead authors of the paper, which appears in Neuron on Oct. 21. Guoping Feng, the James W. and Patricia Poitras Professor of Neuroscience and a member of the McGovern Institute, is also an author of the paper.

Hypersensitivity

Many gene variants have been linked with autism, but most patients carry very few, if any, of those variants. One gene that has been implicated is Ptchd1, which is mutated in about 1 percent of people with autism. In a 2016 study, Halassa and Feng found that during development this gene is primarily expressed in a part of the thalamus called the thalamic reticular nucleus (TRN).

That study revealed that neurons of the TRN help the brain to adjust to changes in sensory input, such as noise level or brightness. In mice missing Ptchd1, TRN neurons fire too fast, and they can’t adjust when noise levels change. This prevents the TRN from performing its usual sensory filtering function, Halassa says.

“Neurons that are there to filter out noise, or adjust the overall level of activity, are not adapting. Without the ability to fine-tune the overall level of activity, you can get overwhelmed very easily,” he says.

In the 2016 study, the researchers also found that they could restore some of the mice’s noise-filtering ability by treating them with a drug called EBIO, which activates neurons’ potassium channels. EBIO has harmful cardiac side effects, so it likely could not be used in human patients, but other drugs that boost TRN activity may have a similar beneficial effect on hypersensitivity, Halassa says.

In the new Neuron paper, the researchers delved more deeply into the effects of Ptchd1, which is also expressed in the prefrontal cortex. To explore whether the prefrontal cortex might play a role in the animals’ hypersensitivity, the researchers used a task in which mice have to distinguish between three different tones, presented with varying amounts of background noise.

Normal mice can learn to use a cue that alerts them whenever the noise level is going to be higher, improving their overall performance on the task. A similar phenomenon is seen in humans, who can adjust better to noisier environments when they have some advance warning, Halassa says. However, mice with the Ptchd1 mutation were unable to use these cues to improve their performance, even when their TRN deficit was treated with EBIO.

This suggested that another brain circuit must be playing a role in the animals’ ability to filter out distracting noise. To test the possibility that this circuit is located in the prefrontal cortex, the researchers recorded from neurons in that region while mice lacking Ptchd1 performed the task. They found that neuronal activity died out much faster in the prefrontal cortex of these mice than in that of normal mice. That led the researchers to test another drug, known as modafinil, which is FDA-approved to treat narcolepsy and is sometimes prescribed to improve memory and attention.

The researchers found that when they treated mice missing Ptchd1 with both modafinil and EBIO, their hypersensitivity disappeared, and their performance on the task was the same as that of normal mice.

Targeting circuits

This successful reversal of symptoms suggests that the mice missing Ptchd1 experience a combination of circuit deficits that each contribute differently to noise hypersensitivity. One circuit filters noise, while the other helps to control noise filtering based on external cues. Ptchd1 mutations affect both circuits in different ways that can be treated with different drugs.

Both of those circuits could also be affected by other genetic mutations that have been linked to autism and other neurological disorders, Halassa says. Targeting those circuits, rather than specific genetic mutations, may offer a more effective way to treat such disorders, he says.

“These circuits are important for moving things around the brain — sensory information, cognitive information, working memory,” he says. “We’re trying to reverse-engineer circuit operations in the service of figuring out what to do about a real human disease.”

He now plans to study circuit-level disturbances that arise in schizophrenia. That disorder affects circuits involving cognitive processes such as inference — the ability to draw conclusions from available information.

The research was funded by the Simons Center for the Social Brain at MIT, the Stanley Center for Psychiatric Research at the Broad Institute, the McGovern Institute for Brain Research at MIT, the Pew Foundation, the Human Frontiers Science Program, the National Institutes of Health, the James and Patricia Poitras Center for Psychiatric Disorders Research at MIT, a Japan Society for the Promotion of Science Fellowship, and a National Alliance for the Research of Schizophrenia and Depression Young Investigator Award.

Word Play

Ev Fedorenko uses the widely translated book “Alice in Wonderland” to test brain responses to different languages.

Language is a uniquely human ability that allows us to build vibrant pictures of non-existent places (think Wonderland or Westeros). How does the brain build mental worlds from words? Can machines do the same? Can we recover this ability after brain injury? These questions require an understanding of how the brain processes language, a fascination for Ev Fedorenko.

“I’ve always been interested in language. Early on, I wanted to found a company that teaches kids languages that share structure — Spanish, French, Italian — in one go,” says Fedorenko, an associate investigator at the McGovern Institute and an assistant professor in brain and cognitive sciences at MIT.

Her road to understanding how thoughts, ideas, emotions, and meaning can be delivered through sound and words became clear when she realized that language was accessible through cognitive neuroscience.

Early on, Fedorenko made a seminal finding that undermined dominant theories of the time. Scientists believed a single network was extracting meaning from all we experience: language, music, math, etc. Evolving separate networks for these functions seemed unlikely, as these capabilities arose recently in human evolution.

Language Regions
Ev Fedorenko has found that language regions of the brain (shown in teal) are sensitive to both word meaning and sentence structure. Image: Ev Fedorenko

But when Fedorenko examined brain activity in subjects while they read or heard sentences in an MRI scanner, she found a network of brain regions that is indeed specialized for language.

“A lot of brain areas, like motor and social systems, were already in place when language emerged during human evolution,” explains Fedorenko. “In some sense, the brain seemed fully occupied. But rather than co-opt these existing systems, the evolution of language in humans involved language carving out specific brain regions.”

Different aspects of language recruit brain regions across the left hemisphere, including Broca’s area and portions of the temporal lobe. Many believe that certain regions are involved in processing word meaning, while others unpack the rules of language. Fedorenko and colleagues have, however, shown that the entire language network is selectively engaged by linguistic tasks, processing both the rules (syntax) and meaning (semantics) of language in the same brain areas.

Semantic Argument

Fedorenko’s lab even challenges the prevailing view that syntax is core to language processing. By gradually degrading sentence structure through local word swaps (see figure), her team found that language regions still respond strongly to the degraded sentences, deciphering meaning from them even as syntax, or combinatorial rules, disappears.

The Fedorenko lab has shown that the brain finds meaning in a sentence, even when “local” words are swapped (2, 3). But when clusters of neighboring words are scrambled (4), the brain struggles to find its meaning.

“A lot of focus in language research has been on structure-building, or building a type of hierarchical graph of the words in a sentence. But actually the language system seems optimized and driven to find rich, representational meaning in a string of words processed together,” explains Fedorenko.
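One way to picture the manipulation is to contrast a mild degradation, a few local word swaps, with a heavy one that destroys neighboring-word clusters. The sketch below is a toy illustration of that idea in Python, not the lab's stimulus-generation code, and the example sentence is invented:

```python
import random

def local_swaps(words, n_swaps=2):
    """Mildly degrade word order: swap a few adjacent words,
    so most local word-to-word dependencies survive."""
    words = list(words)
    for _ in range(n_swaps):
        i = random.randrange(len(words) - 1)
        words[i], words[i + 1] = words[i + 1], words[i]
    return words

def full_scramble(words):
    """Heavily degrade word order: shuffle everything, so neighboring
    words no longer form meaningful clusters."""
    words = list(words)
    random.shuffle(words)
    return words

sentence = "the chef cooked a delicious meal for the guests".split()
print(" ".join(local_swaps(sentence)))    # meaning usually still recoverable
print(" ".join(full_scramble(sentence)))  # meaning much harder to recover
```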

Computing Language

When asked about emerging areas of research, Fedorenko points to the data structures and algorithms underlying linguistic processing. Modern computational models can perform sophisticated tasks, including translation, ever more effectively. Consider Google Translate: a decade ago, the system translated one word at a time, with laughable results. Now, by treating words as providing context for one another, the latest artificial translation systems perform far more accurately. Understanding how they resolve meaning could be very revealing.

“Maybe we can link these models to human neural data to both get insights about linguistic computations in the human brain, and maybe help improve artificial systems by making them more human-like,” says Fedorenko.

She is also trying to understand how the system breaks down, how it over-performs, and even more philosophical questions. Can a person who loses language abilities (with aphasia, for example) recover them? This is an especially relevant question given that the language-processing network occupies such specific brain regions. How are some unique people able to understand 10, 15, or even more languages? Do we need words to have thoughts?

Using a battery of approaches, Fedorenko seems poised to answer some of these questions.

New method visualizes groups of neurons as they compute

Using a fluorescent probe that lights up when brain cells are electrically active, MIT and Boston University researchers have shown that they can image the activity of many neurons at once, in the brains of mice.

McGovern Investigator Ed Boyden has developed a technology that allows neuroscientists to visualize the activity of circuits within the brain and link them to specific behaviors.

This technique, which can be performed using a simple light microscope, could allow neuroscientists to visualize the activity of circuits within the brain and link them to specific behaviors, says Edward Boyden, the Y. Eva Tan Professor in Neurotechnology and a professor of biological engineering and of brain and cognitive sciences at MIT.

“If you want to study a behavior, or a disease, you need to image the activity of populations of neurons because they work together in a network,” says Boyden, who is also a member of MIT’s McGovern Institute for Brain Research, Media Lab, and Koch Institute for Integrative Cancer Research.

Using this voltage-sensing molecule, the researchers showed that they could record electrical activity from many more neurons than has been possible with any existing, fully genetically encoded, fluorescent voltage probe.

Boyden and Xue Han, an associate professor of biomedical engineering at Boston University, are the senior authors of the study, which appears in the Oct. 9 online edition of Nature. The lead authors of the paper are MIT postdoc Kiryl Piatkevich, BU graduate student Seth Bensussen, and BU research scientist Hua-an Tseng.

Seeing connections

Neurons compute using rapid electrical impulses, which underlie our thoughts, behavior, and perception of the world. Traditional methods for measuring this electrical activity require inserting an electrode into the brain, a process that is labor-intensive and usually allows researchers to record from only one neuron at a time. Multielectrode arrays allow the monitoring of electrical activity from many neurons at once, but they don’t sample densely enough to get all the neurons within a given volume. Calcium imaging does allow such dense sampling, but it measures calcium, an indirect and slow measure of neural electrical activity.

In 2018, MIT researchers developed a light-sensitive protein that can be embedded into neuron membranes, where it emits a fluorescent signal that indicates how much voltage a particular cell is experiencing. Image courtesy of the researchers

In 2018, Boyden’s team developed an alternative way to monitor electrical activity by labeling neurons with a fluorescent probe. Using a technique known as directed protein evolution, his group engineered a molecule called Archon1 that can be genetically inserted into neurons, where it becomes embedded in the cell membrane. When a neuron’s electrical activity increases, the molecule becomes brighter, and this fluorescence can be seen with a standard light microscope.

In the 2018 paper, Boyden and his colleagues showed that they could use the molecule to image electrical activity in the brains of transparent worms and zebrafish embryos, and also in mouse brain slices. In the new study, they wanted to try to use it in living, awake mice as they engaged in a specific behavior.

To do that, the researchers had to modify the probe so that it would go to a subregion of the neuron membrane. They found that when the molecule inserts itself throughout the entire cell membrane, the resulting images are blurry because the axons and dendrites that extend from neurons also fluoresce. To overcome that, the researchers attached a small peptide that guides the probe specifically to membranes of the cell bodies of neurons. They called this modified protein SomArchon.

“With SomArchon, you can see each cell as a distinct sphere,” Boyden says. “Rather than having one cell’s light blurring all its neighbors, each cell can speak by itself loudly and clearly, uncontaminated by its neighbors.”

The researchers used this probe to image activity in a part of the brain called the striatum, which is involved in planning movement, as mice ran on a ball. They were able to monitor activity in several neurons simultaneously and correlate each one’s activity with the mice’s movement. Some neurons’ activity went up when the mice were running, some went down, and others showed no significant change.

“Over the years, my lab has tried many different versions of voltage sensors, and none of them have worked in living mammalian brains until this one,” Han says.

Using this fluorescent probe, the researchers were able to obtain measurements similar to those recorded by an electrical probe, which can pick up activity on a very rapid timescale. This makes the measurements more informative than existing techniques such as imaging calcium, which neuroscientists often use as a proxy for electrical activity.

“We want to record electrical activity on a millisecond timescale,” Han says. “The timescale and activity patterns that we get from calcium imaging are very different. We really don’t know exactly how these calcium changes are related to electrical dynamics.”

With the new voltage sensor, it is also possible to measure very small fluctuations in activity that occur even when a neuron is not firing a spike. This could help neuroscientists study how small fluctuations impact a neuron’s overall behavior, which has previously been very difficult in living brains, Han says.

Mapping circuits

The researchers also showed that this imaging technique can be combined with optogenetics — a technique developed by the Boyden lab and collaborators that allows researchers to turn neurons on and off with light by engineering them to express light-sensitive proteins. In this case, the researchers activated certain neurons with light and then measured the resulting electrical activity in these neurons.

This imaging technology could also be combined with expansion microscopy, a technique that Boyden’s lab developed to expand brain tissue before imaging it, making it easier to see the anatomical connections between neurons in high resolution.

“One of my dream experiments is to image all the activity in a brain, and then use expansion microscopy to find the wiring between those neurons,” Boyden says. “Then can we predict how neural computations emerge from the wiring?”

Such wiring diagrams could allow researchers to pinpoint circuit abnormalities that underlie brain disorders, and may also help researchers to design artificial intelligence that more closely mimics the human brain, Boyden says.

The MIT portion of the research was funded by Edward and Kay Poitras, the National Institutes of Health, including a Director’s Pioneer Award, Charles Hieken, John Doerr, the National Science Foundation, the HHMI-Simons Faculty Scholars Program, the Human Frontier Science Program, and the U.S. Army Research Office.

Controlling our internal world

Olympic skaters can launch, perform multiple aerial turns, and land gracefully, anticipating imperfections and reacting quickly to correct course. To make such elegant movements, the brain must have an internal model of the body to control, predict, and make almost instantaneous adjustments to motor commands. So-called “internal models” are a fundamental concept in engineering and have long been suggested to underlie control of movement by the brain, but what about processes that occur in the absence of movement, such as contemplation, anticipation, planning?

Using a novel combination of task design, data analysis, and modeling, MIT neuroscientist Mehrdad Jazayeri and colleagues now provide compelling evidence that the core elements of an internal model also control purely mental processes in a study published in Nature Neuroscience.

“During my thesis I realized that I’m interested, not so much in how our senses react to sensory inputs, but instead in how my internal model of the world helps me make sense of those inputs,” says Jazayeri, the Robert A. Swanson Career Development Professor of Life Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Indeed, understanding the building blocks exerting control of such mental processes could help to paint a better picture of disruptions in mental disorders, such as schizophrenia.

Internal models for mental processes

Scientists working on the motor system have long theorized that the brain overcomes noisy and slow signals using an accurate internal model of the body. This internal model serves three critical functions: it provides motor commands to control movement, simulates upcoming movement to overcome delays, and uses feedback to make real-time adjustments.
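To make that division of labor concrete, here is a minimal control-loop sketch. It is purely illustrative, with toy one-dimensional dynamics and made-up gains rather than the paper's model, but it shows how the three elements named above fit together:

```python
# Toy sketch of an internal model's three elements: a controller that issues
# commands, a simulator (forward model) that predicts the outcome before slow
# feedback arrives, and a feedback step that corrects drift. All dynamics and
# gains here are hypothetical.

def controller(target, estimate, gain=0.5):
    """Issue a command proportional to the estimated error."""
    return gain * (target - estimate)

def simulate(estimate, command):
    """Forward model: predict the next state without waiting for feedback."""
    return estimate + command

def integrate_feedback(prediction, sensed, k=0.3):
    """Blend delayed, noisy sensory feedback into the prediction."""
    return prediction + k * (sensed - prediction)

state = estimate = 0.0   # true state and the brain's estimate of it
target = 1.0             # where we want the state to end up

for step in range(6):
    command = controller(target, estimate)          # 1. control
    estimate = simulate(estimate, command)          # 2. simulate ahead of feedback
    state += command + 0.02                         # body obeys, plus a small disturbance
    estimate = integrate_feedback(estimate, state)  # 3. real-time correction
    print(f"step {step}: state={state:.3f}, estimate={estimate:.3f}")
```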

“The framework that we currently use to think about how the brain controls our actions is one that we have borrowed from robotics: we use controllers, simulators, and sensory measurements to control machines and train operators,” explains Reza Shadmehr, a professor at the Johns Hopkins School of Medicine who was not involved with the study. “That framework has largely influenced how we imagine our brain controlling our movements.”

Jazayeri and colleagues wondered whether the same framework might explain the control principles governing mental states in the absence of any movement.

“When we’re simply sitting, thoughts and images run through our heads and, fundamental to intellect, we can control them,” explains lead author Seth Egger, a former postdoctoral associate in the Jazayeri lab and now at Duke University.

“We wanted to find out what’s happening between our ears when we are engaged in thinking,” says Egger.

Imagine, for example, a sign language interpreter keeping up with a fast speaker. To track the speech accurately, the interpreter continuously anticipates where it is going, rapidly adjusting when the actual words deviate from the prediction. The interpreter could be using an internal model to anticipate upcoming words, and using feedback to make adjustments on the fly.

1-2-3…Go

Hypothesizing about how the components of an internal model function in scenarios such as translation is one thing. Cleanly measuring and proving the existence of these elements is much more complicated, as the activities of the controller, simulator, and feedback are intertwined. To tackle this problem, Jazayeri and colleagues devised a clever task with primate models in which the controller, simulator, and feedback act at distinct times.

In this task, called “1-2-3-Go,” the animal sees three consecutive flashes (1, 2, and 3) that form a regular beat, and learns to make an eye movement (Go) when it anticipates that the fourth flash should occur. During the task, researchers measured neural activity in a region of the frontal cortex they had previously linked to the timing of movement.
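The temporal logic of the task is simple enough to capture in a few lines. The sketch below is illustrative only: the flash times are hypothetical, and the real experiment and analysis are far richer. It shows how an internal simulator could extrapolate the beat to time the "Go" response:

```python
# Illustrative 1-2-3-Go timing logic; flash times are hypothetical.
flashes = [0.0, 0.75, 1.50]  # times of flashes 1, 2, 3 (seconds)

# The beat (inter-flash interval) implied by the first three flashes
intervals = [b - a for a, b in zip(flashes, flashes[1:])]
beat = sum(intervals) / len(intervals)

# Extrapolate one beat past the third flash: this is when the animal
# should make its eye movement, the anticipated "4th flash" time.
go_time = flashes[-1] + beat
print(f"anticipated Go time: {go_time:.2f} s")  # 2.25 s
```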

Jazayeri and colleagues had clear predictions about when the controller would act (between the third flash and “Go”) and when feedback would be engaged (with each flash of light). The key surprise came when the researchers saw evidence of the simulator anticipating the third flash. This unexpected neural activity had dynamics resembling those of the controller, but was not associated with a response. In other words, the researchers had uncovered a covert plan that functions as the simulator, completing the set of three internal-model elements for a mental process: the planning and anticipation of “Go” in the “1-2-3-Go” sequence.

“Jazayeri’s work is important because it demonstrates how to study mental simulation in animals,” explains Shadmehr, “and where in the brain that simulation is taking place.”

Having found how and where to measure an internal model in action, Jazayeri and colleagues now plan to ask whether these control strategies can explain how primates effortlessly generalize their knowledge from one behavioral context to another. For example, how does an interpreter rapidly adjust when someone with widely different speech habits takes the podium? This line of investigation promises to shed light on high-level mental capacities of the primate brain that simpler animals seem to lack, that go awry in mental disorders, and that designers of artificial intelligence systems so fondly seek.

What is the social brain?

As part of our Ask the Brain series, Anila D’Mello, a postdoctoral fellow in John Gabrieli’s lab, answers the question, “What is the social brain?”

_____

Anila D’Mello is the Simons Center for the Social Brain Postdoctoral Fellow in John Gabrieli’s lab at the McGovern Institute.

“Knock Knock.”
“Who’s there?”
“The Social Brain.”
“The Social Brain, who?”

Call-and-response jokes, like the “Knock Knock” joke above, leverage our common understanding of how a social interaction typically proceeds. Joke telling allows us to interact socially with others based on our shared experiences and understanding of the world. But where do these abilities “live” in the brain, and how does the social brain develop?

Neuroimaging and lesion studies have identified a network of brain regions that support social interaction, including the ability to understand and partake in jokes – we refer to this as the “social brain.” This social brain network is made up of multiple regions throughout the brain that together support complex social interactions. Within this network, each region likely contributes to a specific type of social processing. The right temporo-parietal junction, for instance, is important for thinking about another person’s mental state, whereas the amygdala is important for the interpretation of emotional facial expressions and fear processing. Damage to these brain regions can have striking effects on social behaviors. One recent study even found that individuals with bigger amygdala volumes had larger and more complex social networks!

Though social interaction is such a fundamental human trait, we aren’t born with a prewired social brain.

Much of our social ability develops and is honed over time through repeated social interactions. Brain networks that support social interaction continue to specialize into adulthood. Neuroimaging work suggests that though newborn infants may have all the right brain parts to support social interaction, these regions may not yet be specialized or connected in the right way. This means that early experiences and environments can have large influences on the social brain. For instance, social neglect, especially very early in development, can have negative impacts on social behaviors and on how the social brain is wired. One prominent example is that of children raised in orphanages or institutions, who are sometimes faced with limited adult interaction or access to language. Children raised in these conditions are more likely to have social challenges, including difficulties forming attachments. Prolonged lack of social stimulation also alters the social brain in these children, resulting in changes in amygdala size and connections between social brain regions.

The social brain is not just a result of our environment. Genetics and biology also contribute to the social brain in ways we don’t yet fully understand. For example, individuals with autism may experience difficulties with social interaction and communication. This may include challenges with things like understanding the punchline of a joke. These challenges have led to the hypothesis that the social brain network may differ in autism. However, despite documented behavioral differences in social tasks, there is conflicting brain imaging evidence for whether differences exist between people with and without autism in the social brain network.

Examples such as that of autism imply that the reality of the social brain is probably much more complex than the story painted here. It is likely that social interaction calls upon many different parts of the brain, even beyond those that we have termed the “social brain,” that must work in concert to support this highly complex set of behaviors. These include regions of the brain important for listening, seeing, speaking, and moving. In addition, it’s important to remember that the social brain and regions that make it up do not stand alone. Regions of the social brain also play an intimate role in language, humor, and other cognitive processes.

“Knock Knock”
“Who’s there?”
“The Social Brain”
“The Social Brain, who?”
“I just told you…didn’t you read what I wrote?”

Anila D’Mello earned her bachelor’s degree in psychology from Georgetown University in 2012, and went on to receive her PhD in Behavior, Cognition, and Neuroscience from American University in 2017. She joined the Gabrieli lab as a postdoc in 2017 and studies the neural correlates of social communication in autism.

_____


Better sleep habits lead to better college grades

Two MIT professors have found a strong relationship between students’ grades and how much sleep they’re getting. What time students go to bed and the consistency of their sleep habits also make a big difference. And no, getting a good night’s sleep just before a big test is not good enough — it takes several nights in a row of good sleep to make a difference.

Those are among the conclusions from an experiment in which 100 students in an MIT engineering class were given Fitbits, the popular wrist-worn devices that track a person’s activity 24/7, in exchange for the researchers’ access to a semester’s worth of their activity data. The findings — some unsurprising, but some quite unexpected — are reported today in the journal Science of Learning in a paper by former MIT postdoc Kana Okano, professors Jeffrey Grossman and John Gabrieli, and two others.

One of the surprises was that individuals who went to bed after some particular threshold time — for these students, that tended to be 2 a.m., but it varied from one person to another — tended to perform less well on their tests no matter how much total sleep they ended up getting.

The study didn’t start out as research on sleep at all. Instead, Grossman was trying to find a correlation between physical exercise and the academic performance of students in his class 3.091 (Introduction to Solid-State Chemistry). In addition to having 100 of the students wear Fitbits for the semester, he also enrolled about one-fourth of them in an intense fitness class in MIT’s Department of Athletics, Physical Education, and Recreation, with the help of assistant professors Carrie Moore and Matthew Breen, who created the class specifically for this study. The thinking was that there might be measurable differences in test performance between the two groups.

There wasn’t. Those without the fitness classes performed just as well as those who did take them. “What we found at the end of the day was zero correlation with fitness, which I must say was disappointing since I believed, and still believe, there is a tremendous positive impact of exercise on cognitive performance,” Grossman says.

He speculates that the intervals between the fitness program and the classes may have been too long to show an effect. But meanwhile, in the vast amount of data collected during the semester, some other correlations did become obvious. While the devices weren’t explicitly monitoring sleep, the Fitbit program’s proprietary algorithms did detect periods of sleep and changes in sleep quality, primarily based on lack of activity.

These correlations were not at all subtle, Grossman says. There was essentially a straight-line relationship between the average amount of sleep a student got and their grades on the 11 quizzes, three midterms, and final exam, with the grades ranging from A’s to C’s. “There’s lots of scatter, it’s a noisy plot, but it’s a straight line,” he says. The fact that there was a correlation between sleep and performance wasn’t surprising, but the extent of it was, he says. Of course, this correlation can’t absolutely prove that sleep was the determining factor in the students’ performance, as opposed to some other influence that might have affected both sleep and grades. But the results are a strong indication, Grossman says, that sleep “really, really matters.”
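For readers who want to picture that analysis, the sketch below fits a straight line to hypothetical (sleep, grade) pairs with an ordinary least-squares regression. The numbers are made up for illustration and are not the study's data:

```python
import numpy as np

# Hypothetical (average nightly sleep, overall grade) pairs; NOT the study's data.
sleep_hours = np.array([5.5, 6.0, 6.5, 7.0, 7.5, 8.0])
grades      = np.array([72.0, 74.5, 79.0, 82.5, 85.0, 89.5])

# Ordinary least-squares fit: grade ~ slope * sleep + intercept
slope, intercept = np.polyfit(sleep_hours, grades, 1)
r = np.corrcoef(sleep_hours, grades)[0, 1]

print(f"slope: {slope:.1f} grade points per extra hour of sleep")
print(f"Pearson r = {r:.2f}  (a noisy but essentially straight line)")
```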

“Of course, we knew already that more sleep would be beneficial to classroom performance, from a number of previous studies that relied on subjective measures like self-report surveys,” Grossman says. “But in this study the benefits of sleep are correlated to performance in the context of a real-life college course, and driven by large amounts of objective data collection.”

The study also revealed no improvement in scores for those who made sure to get a good night’s sleep right before a big test. According to the data, “the night before doesn’t matter,” Grossman says. “We’ve heard the phrase ‘Get a good night’s sleep, you’ve got a big day tomorrow.’ It turns out this does not correlate at all with test performance. Instead, it’s the sleep you get during the days when learning is happening that matters most.”

Another surprising finding is that there appears to be a certain cutoff for bedtimes, such that going to bed later results in poorer performance, even if the total amount of sleep is the same. “When you go to bed matters,” Grossman says. “If you get a certain amount of sleep — let’s say seven hours — no matter when you get that sleep, as long as it’s before certain times, say you go to bed at 10, or at 12, or at 1, your performance is the same. But if you go to bed after 2, your performance starts to go down even if you get the same seven hours. So, quantity isn’t everything.”

Quality of sleep also mattered, not just quantity. For example, those who got relatively consistent amounts of sleep each night did better than those who had greater variations from one night to the next, even if they ended up with the same average amount.

This research also helped to provide an explanation for something that Grossman says he had noticed and wondered about for years, which is that on average, the women in his class have consistently gotten better grades than the men. Now, he has a possible answer: The data show that the differences in quantity and quality of sleep can fully account for the differences in grades. “If we correct for sleep, men and women do the same in class. So sleep could be the explanation for the gender difference in our class,” he says.

More research will be needed to understand the reasons why women tend to have better sleep habits than men. “There are so many factors out there that it could be,” Grossman says. “I can envision a lot of exciting follow-on studies to try to understand this result more deeply.”

“The results of this study are very gratifying to me as a sleep researcher, but are terrifying to me as a parent,” says Robert Stickgold, a professor of psychiatry and director of the Center for Sleep and Cognition at Harvard Medical School, who was not connected with this study. He adds, “The overall course grades for students averaging six and a half hours of sleep were down 50 percent from other students who averaged just one hour more sleep. Similarly, those who had just a half-hour more night-to-night variation in their total sleep time had grades that dropped 45 percent below others with less variation. This is huge!”

Stickgold says “a full quarter of the variation in grades was explained by these sleep parameters (including bedtime). All students need to not only be aware of these results, but to understand their implication for success in college. I can’t help but believe the same is true for high school students.” But he adds one caution: “That said, correlation is not the same as causation. While I have no doubt that less and more variable sleep will hurt a student’s grades, it’s also possible that doing poorly in classes leads to less and more variable sleep, not the other way around, or that some third factor, such as ADHD, could independently lead to poorer grades and poorer sleep.”

The team also included technical assistant Jakub Kaczmarzyk and Harvard Business School researcher Neha Dave. The study was supported by MIT’s Department of Materials Science and Engineering, the Lubin Fund, and the MIT Integrated Learning Initiative.

Perception of musical pitch varies across cultures

People who are accustomed to listening to Western music, which is based on a system of notes organized in octaves, can usually perceive the similarity between notes that are the same but played in different registers — say, high C and middle C. However, a longstanding question is whether this is a universal phenomenon or one that has been ingrained by musical exposure.

This question has been hard to answer, in part because of the difficulty in finding people who have not been exposed to Western music. Now, a new study led by researchers from MIT and the Max Planck Institute for Empirical Aesthetics has found that unlike residents of the United States, people living in a remote area of the Bolivian rainforest usually do not perceive the similarities between two versions of the same note played at different registers (high or low).

The findings suggest that although there is a natural mathematical relationship between the frequencies of every “C,” no matter what octave it’s played in, the brain only becomes attuned to those similarities after hearing music based on octaves, says Josh McDermott, an associate professor in MIT’s Department of Brain and Cognitive Sciences.

“It may well be that there is a biological predisposition to favor octave relationships, but it doesn’t seem to be realized unless you are exposed to music in an octave-based system,” says McDermott, who is also a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds and Machines.

The study also found that members of the Bolivian tribe, known as the Tsimane’, and Westerners do have a very similar upper limit on the frequency of notes that they can accurately distinguish, suggesting that that aspect of pitch perception may be independent of musical experience and biologically determined.

McDermott is the senior author of the study, which appears in the journal Current Biology on Sept. 19. Nori Jacoby, a former MIT postdoc who is now a group leader at the Max Planck Institute for Empirical Aesthetics, is the paper’s lead author. Other authors are Eduardo Undurraga, an assistant professor at the Pontifical Catholic University of Chile; Malinda McPherson, a graduate student in the Harvard/MIT Program in Speech and Hearing Bioscience and Technology; Joaquin Valdes, a graduate student at the Pontifical Catholic University of Chile; and Tomas Ossandon, an assistant professor at the Pontifical Catholic University of Chile.

Octaves apart

Cross-cultural studies of how music is perceived can shed light on the interplay between biological constraints and cultural influences that shape human perception. McDermott’s lab has performed several such studies with the participation of Tsimane’ tribe members, who live in relative isolation from Western culture and have had little exposure to Western music.

In a study published in 2016, McDermott and his colleagues found that Westerners and Tsimane’ had different aesthetic reactions to chords, or combinations of notes. To Western ears, the combination of C and F# is very grating, but Tsimane’ listeners rated this chord just as likeable as other chords that Westerners would interpret as more pleasant, such as C and G.

Later, Jacoby and McDermott found that both Westerners and Tsimane’ are drawn to musical rhythms composed of simple integer ratios, but the ratios they favor are different, based on which rhythms are more common in the music they listen to.

In their new study, the researchers studied pitch perception using an experimental design in which they play a very simple tune, only two or three notes, and then ask the listener to sing it back. The notes that were played could come from any octave within the range of human hearing, but listeners sang their responses within their vocal range, usually restricted to a single octave.

Eduardo Undurraga, an assistant professor at the Pontifical Catholic University of Chile, runs a musical pitch perception experiment with a member of the Tsimane’ tribe of the Bolivian rainforest. Photo: Josh McDermott

Western listeners, especially those who were trained musicians, tended to reproduce the tune an exact number of octaves above or below what they heard, though they were not specifically instructed to do so. In Western music, the frequency of a note doubles with each ascending octave, so tones with frequencies of 27.5 hertz, 55 hertz, 110 hertz, 220 hertz, and so on, are all heard as the note A.
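Because each octave doubles the frequency, "same note, different octave" reduces to checking whether a frequency ratio is a power of two. A small sketch of that arithmetic (illustrative only, using standard Western tuning; this is not the study's analysis code):

```python
import math

def same_pitch_class(f1_hz, f2_hz, tol_semitones=0.5):
    """True if two frequencies are the same note in different octaves,
    i.e., their ratio is (close to) a power of two."""
    st = 12 * math.log2(f2_hz / f1_hz)      # signed distance in semitones
    return abs(st - 12 * round(st / 12)) < tol_semitones

print(same_pitch_class(27.5, 220.0))   # True: 220 / 27.5 = 2**3, both the note A
print(same_pitch_class(220.0, 261.6))  # False: middle C is not an A
```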

Western listeners in the study, all of whom lived in New York or Boston, accurately reproduced sequences such as A-C-A, but in a different register, as though they heard the similarity of notes separated by octaves. However, the Tsimane’ did not.

“The relative pitch was preserved (between notes in the series), but the absolute pitch produced by the Tsimane’ didn’t have any relationship to the absolute pitch of the stimulus,” Jacoby says. “That’s consistent with the idea that perceptual similarity is something that we acquire from exposure to Western music, where the octave is structurally very important.”
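That distinction, preserved relative pitch without octave-aligned absolute pitch, can be made concrete with a few lines of arithmetic. The sung frequencies below are invented for illustration:

```python
import math

def semitones(f_from, f_to):
    """Signed pitch distance in semitones."""
    return 12 * math.log2(f_to / f_from)

stimulus = [440.0, 523.3, 440.0]   # an A-C-A melody
response = [111.0, 131.5, 110.5]   # hypothetical sung reproduction, much lower

# Relative pitch: successive intervals closely match the stimulus...
print([round(semitones(a, b), 1) for a, b in zip(stimulus, stimulus[1:])])  # [3.0, -3.0]
print([round(semitones(a, b), 1) for a, b in zip(response, response[1:])])  # [2.9, -3.0]

# ...while absolute pitch is each note's offset from the stimulus.
# Western singers' offsets cluster near multiples of 12 (whole octaves);
# Tsimane' offsets showed no systematic relationship to the stimulus.
print([round(semitones(s, r), 1) for s, r in zip(stimulus, response)])
```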

The ability to reproduce the same note in different octaves may be honed by singing along with others whose natural registers are different, or singing along with an instrument being played in a different pitch range, Jacoby says.

Limits of perception

The study findings also shed light on the upper limits of pitch perception for humans. It has been known for a long time that Western listeners cannot accurately distinguish pitches above about 4,000 hertz, although they can still hear frequencies up to nearly 20,000 hertz. In a traditional 88-key piano, the highest note is about 4,100 hertz.

People have speculated that the piano was designed to go only that high because of a fundamental limit on pitch perception, but McDermott thought it could be possible that the opposite was true: That is, the limit was culturally influenced by the fact that few musical instruments produce frequencies higher than 4,000 hertz.

The researchers found that although Tsimane’ musical instruments usually have upper limits much lower than 4,000 hertz, Tsimane’ listeners could distinguish pitches very well up to about 4,000 hertz, as evidenced by accurate sung reproductions of those pitch intervals. Above that threshold, their perceptions broke down, very similarly to Western listeners.

“It looks almost exactly the same across groups, so we have some evidence for biological constraints on the limits of pitch,” Jacoby says.

One possible explanation for this limit is that once frequencies reach about 4,000 hertz, the firing rates of the neurons of our inner ear can’t keep up and we lose a critical cue with which to distinguish different frequencies.

“The new study contributes to the age-old debate about the interplay between culture and biological constraints in music,” says Daniel Pressnitzer, a senior research scientist at Paris Descartes University, who was not involved in the research. “This unique, precious, and extensive dataset demonstrates both striking similarities and unexpected differences in how Tsimane’ and Western listeners perceive or conceive musical pitch.”

Jacoby and McDermott now hope to expand their cross-cultural studies to other groups who have had little exposure to Western music, and to perform more detailed studies of pitch perception among the Tsimane’.

Such studies have already shown the value of including research participants other than the Western-educated, relatively wealthy college undergraduates who are the subjects of most academic studies on perception, McDermott says. These broader studies allow researchers to tease out different elements of perception that cannot be seen when examining only a single, homogenous group.

“We’re finding that there are some cross-cultural similarities, but there also seems to be really striking variation in things that a lot of people would have presumed would be common across cultures and listeners,” McDermott says. “These differences in experience can lead to dissociations of different aspects of perception, giving you clues to what the parts of the perceptual system are.”

The research was funded by the James S. McDonnell Foundation, the National Institutes of Health, and the Presidential Scholar in Society and Neuroscience Program at Columbia University.

Mehrdad Jazayeri and Hazel Sive awarded 2019 School of Science teaching prizes

The School of Science has announced that the recipients of the school’s 2019 Teaching Prizes for Graduate and Undergraduate Education are Mehrdad Jazayeri and Hazel Sive. Nominated by peers and students, the faculty members chosen for these prizes are recognized for their exemplary efforts in teaching graduate and undergraduate students.

Mehrdad Jazayeri, an associate professor in the Department of Brain and Cognitive Sciences and investigator at the McGovern Institute for Brain Research, is awarded the prize for graduate education for 9.014 (Quantitative Methods and Computational Models in Neuroscience). Earlier this year, he was recognized for excellence in graduate teaching by the Department of Brain and Cognitive Sciences and won a Graduate Student Council teaching award in 2016. In their nomination letters, peers and students alike remarked that he displays not only great knowledge, but extraordinary skill in teaching, most notably by ensuring everyone learns the material. Jazayeri does so by considering students’ diverse backgrounds and contextualizing subject material to relatable applications in various fields of science according to students’ interests. He also improves and adjusts the course content, pace, and intensity in response to student input via surveys administered throughout the semester.

Hazel Sive, a professor in the Department of Biology, member of the Whitehead Institute for Biomedical Research, and associate member of the Broad Institute of MIT and Harvard, is awarded the prize for undergraduate education. A MacVicar Faculty Fellow, she has previously been recognized with MIT’s highest undergraduate teaching award, as well as the 2003 School of Science Teaching Prize for Graduate Education. As her nominations exemplify, Sive’s laudable teaching career at MIT continues to draw praise from the undergraduate students who take her classes. In recent post-course evaluations, students commended her exemplary and dedicated efforts to her field and to their education.

The School of Science welcomes nominations for the teaching prize in the spring semester of each academic year. Nominations can be submitted at the school’s website.