Three from MIT awarded 2020 Guggenheim Fellowships

MIT faculty members Sabine Iatridou, Jonathan Gruber, and Rebecca Saxe are among 175 scientists, artists, and scholars awarded 2020 fellowships from the John Simon Guggenheim Foundation. Appointed on the basis of prior achievement and exceptional promise, the 2020 Guggenheim Fellows were selected from almost 3,000 applicants.

“It’s exceptionally encouraging to be able to share such positive news at this terribly challenging time,” says Edward Hirsch, president of the foundation. “A Guggenheim Fellowship has always offered practical assistance, helping fellows do their work, but for many of the new fellows, it may be a lifeline at a time of hardship, a survival tool as well as a creative one.”

Since 1925, the foundation has granted more than $375 million in fellowships to over 18,000 individuals, including Nobel laureates, Fields medalists, poets laureate, and winners of the Pulitzer Prize, among other internationally recognized honors. This year’s MIT recipients include a linguist, an economist, and a cognitive neuroscientist.

Rebecca Saxe is an associate investigator of the McGovern Institute and the John W. Jarve (1978) Professor in Brain and Cognitive Sciences. She studies human social cognition, using a combination of behavioral testing and brain imaging technologies. She is best known for her work on brain regions specialized for “theory of mind” tasks, which involve understanding the mental states of other people. She also studies the development of the human brain during early infancy. She obtained her PhD from MIT and was a Harvard University junior fellow before joining the MIT faculty in 2006. Saxe was chosen in 2012 as a Young Global Leader by the World Economic Forum, and she received the 2014 Troland Award from the National Academy of Sciences. Her TED Talk, “How we read each other’s minds,” has been viewed over 3 million times.

Jonathan Gruber is the Ford Professor of Economics at MIT, the director of the Health Care Program at the National Bureau of Economic Research, and the former president of the American Society of Health Economists. He has published more than 175 research articles, has edited six research volumes, and is the author of “Public Finance and Public Policy,” a leading undergraduate text; “Health Care Reform,” a graphic novel; and “Jump-Starting America: How Breakthrough Science Can Revive Economic Growth and the American Dream.” In 2006 he received the American Society of Health Economists Inaugural Medal for the best health economist in the nation aged 40 and under. He served as deputy assistant secretary for economic policy at the U.S. Department of the Treasury. He was a key architect of Massachusetts’ ambitious health reform effort, and became an inaugural member of the Health Connector Board, the main implementing body for that effort. He served as a technical consultant to the Obama administration and worked with both the administration and Congress to help craft the Affordable Care Act. In 2011, he was named “One of the Top 25 Most Innovative and Practical Thinkers of Our Time” by Slate magazine.

Sabine Iatridou is professor of linguistics in MIT’s Department of Linguistics and Philosophy. Her work focuses on syntax and the syntax-semantics interface, as well as comparative linguistics. She is the author and coauthor of a series of innovative papers about tense and modality that opened up whole new domains of research for the field. Since those publications, she has made foundational contributions to many branches of linguistics that connect form with meaning. She is the recipient of the National Young Investigator Award (USA), of an honorary doctorate from the University of Crete in Greece, and of an award from the Royal Dutch Academy of Sciences. She was elected fellow of the Linguistic Society of America. She is co-founder and co-director of the CreteLing Summer School of Linguistics.

“As we grapple with the difficulties of the moment, it is also important to look to the future,” says Hirsch. “The artists, writers, scholars, and scientific researchers supported by the fellowship will help us understand and learn from what we are enduring individually and collectively, and it is an honor for the foundation to help them do their essential work.”

Researchers achieve remote control of hormone release

Abnormal levels of stress hormones such as adrenaline and cortisol are linked to a variety of mental health disorders, including depression and posttraumatic stress disorder (PTSD). MIT researchers have now devised a way to remotely control the release of these hormones from the adrenal gland, using magnetic nanoparticles.

This approach could help scientists to learn more about how hormone release influences mental health, and could eventually offer a new way to treat hormone-linked disorders, the researchers say.

“We’re looking at how we can study and eventually treat stress disorders by modulating peripheral organ function, rather than doing something highly invasive in the central nervous system,” says Polina Anikeeva, an MIT professor of materials science and engineering and of brain and cognitive sciences.

To achieve control over hormone release, Dekel Rosenfeld, an MIT-Technion postdoc in Anikeeva’s group, has developed specialized magnetic nanoparticles that can be injected into the adrenal gland. When exposed to a weak magnetic field, the particles heat up slightly, activating heat-responsive channels that trigger hormone release. This technique can be used to stimulate an organ deep in the body with minimal invasiveness.

Anikeeva and Alik Widge, an assistant professor of psychiatry at the University of Minnesota and a former research fellow at MIT’s Picower Institute for Learning and Memory, are the senior authors of the study. Rosenfeld is the lead author of the paper, which appears today in Science Advances.

Controlling hormones

Anikeeva’s lab has previously devised several novel magnetic nanomaterials, including particles that can release drugs at precise times in specific locations in the body.

In the new study, the research team wanted to explore the idea of treating disorders of the brain by manipulating organs that are outside the central nervous system but influence it through hormone release. One well-known example is the hypothalamic-pituitary-adrenal (HPA) axis, which regulates stress response in mammals. Hormones secreted by the adrenal gland, including cortisol and adrenaline, play important roles in depression, stress, and anxiety.

“Some disorders that we consider neurological may be treatable from the periphery, if we can learn to modulate those local circuits rather than going back to the global circuits in the central nervous system,” says Anikeeva, who is a member of MIT’s Research Laboratory of Electronics and McGovern Institute for Brain Research.

As a target to stimulate hormone release, the researchers decided on ion channels that control the flow of calcium into adrenal cells. Those ion channels can be activated by a variety of stimuli, including heat. When calcium flows through the open channels into adrenal cells, the cells begin pumping out hormones. “If we want to modulate the release of those hormones, we need to be able to essentially modulate the influx of calcium into adrenal cells,” Rosenfeld says.

Unlike previous research in Anikeeva’s group, in this study magnetothermal stimulation was applied to modulate the function of cells without artificially introducing any genes.

To stimulate these heat-sensitive channels, which naturally occur in adrenal cells, the researchers designed nanoparticles made of magnetite, a type of iron oxide that forms tiny magnetic crystals about 1/5000 the thickness of a human hair. In rats, they found these particles could be injected directly into the adrenal glands and remain there for at least six months. When the rats were exposed to a weak magnetic field — about 50 millitesla, 100 times weaker than the fields used for magnetic resonance imaging (MRI) — the particles heated up by about 6 degrees Celsius, enough to trigger the calcium channels to open without damaging any surrounding tissue.

The heat-sensitive channel that they targeted, known as TRPV1, is found in many sensory neurons throughout the body, including pain receptors. TRPV1 channels can be activated by capsaicin, the organic compound that gives chili peppers their heat, as well as by temperature. They are found across mammalian species, and belong to a family of many other channels that are also sensitive to heat.
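
The numbers hang together simply: a roughly 6-degree rise from a baseline body temperature near 37 degrees Celsius lands right at the commonly cited TRPV1 activation threshold of about 43 C. Here is a minimal sketch of the arithmetic, assuming a typical hair thickness of 100 micrometers and those two standard reference temperatures (the assumed values are not taken from the paper):

```python
# Back-of-the-envelope numbers for the magnetothermal approach.
# Assumed reference values (not from the paper): hair ~100 um,
# body temperature ~37 C, TRPV1 activation threshold ~43 C.

hair_thickness_um = 100.0                      # typical human hair (assumed)
crystal_size_nm = hair_thickness_um * 1000.0 / 5000.0
print(f"Magnetite crystal size: ~{crystal_size_nm:.0f} nm")        # ~20 nm

applied_field_T = 50e-3                        # 50 millitesla, as reported
mri_scale_field_T = applied_field_T * 100.0    # "100 times weaker than MRI"
print(f"Implied MRI-scale field: ~{mri_scale_field_T:.0f} T")      # ~5 T

body_temp_C = 37.0                             # assumed baseline
heating_C = 6.0                                # reported particle heating
print(f"Local temperature: ~{body_temp_C + heating_C:.0f} C, "
      f"right at the ~43 C TRPV1 threshold")
```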

This stimulation triggered a hormone rush — doubling cortisol production and boosting noradrenaline by about 25 percent. That led to a measurable increase in the animals’ heart rates.

Treating stress and pain

The researchers now plan to use this approach to study how hormone release affects PTSD and other disorders, and they say that eventually it could be adapted for treating such disorders. This method would offer a much less invasive alternative to potential treatments that involve implanting a medical device to electrically stimulate hormone release, which is not feasible in organs such as the adrenal glands that are soft and highly vascularized, the researchers say.

Another area where this strategy could hold promise is in the treatment of pain, because heat-sensitive ion channels are often found in pain receptors.

“Being able to modulate pain receptors with this technique potentially will allow us to study pain, control pain, and have some clinical applications in the future, which hopefully may offer an alternative to medications or implants for chronic pain,” Anikeeva says. With further investigation of TRPV1 expression in other organs, the technique could potentially be extended to other peripheral organs, such as the digestive system and the pancreas.

The research was funded by the U.S. Defense Advanced Research Projects Agency ElectRx Program, a Bose Research Grant, the National Institutes of Health BRAIN Initiative, and an MIT-Technion fellowship.

How dopamine drives brain activity

Using a specialized magnetic resonance imaging (MRI) sensor, MIT neuroscientists have discovered how dopamine released deep within the brain influences both nearby and distant brain regions.

Dopamine plays many roles in the brain, most notably related to movement, motivation, and reinforcement of behavior. However, until now it has been difficult to study precisely how a flood of dopamine affects neural activity throughout the brain. Using their new technique, the MIT team found that dopamine appears to exert significant effects in two regions of the brain’s cortex, including the motor cortex.

“There has been a lot of work on the immediate cellular consequences of dopamine release, but here what we’re looking at are the consequences of what dopamine is doing on a more brain-wide level,” says Alan Jasanoff, an MIT professor of biological engineering, brain and cognitive sciences, and nuclear science and engineering. Jasanoff is also an associate member of MIT’s McGovern Institute for Brain Research and the senior author of the study.

The MIT team found that in addition to the motor cortex, the remote brain area most affected by dopamine is the insular cortex. This region is critical for many cognitive functions related to perception of the body’s internal states, including physical and emotional states.

MIT postdoc Nan Li is the lead author of the study, which appears today in Nature.

Tracking dopamine

Like other neurotransmitters, dopamine helps neurons to communicate with each other over short distances. Dopamine holds particular interest for neuroscientists because of its role in motivation, addiction, and several neurodegenerative disorders, including Parkinson’s disease. Most of the brain’s dopamine is produced in the midbrain by neurons that connect to the striatum, where the dopamine is released.

For many years, Jasanoff’s lab has been developing tools to study how molecular phenomena such as neurotransmitter release affect brain-wide functions. At the molecular scale, existing techniques can reveal how dopamine affects individual cells, and at the scale of the entire brain, functional magnetic resonance imaging (fMRI) can reveal how active a particular brain region is. However, it has been difficult for neuroscientists to determine how single-cell activity and brain-wide function are linked.

“There have been very few brain-wide studies of dopaminergic function or really any neurochemical function, in large part because the tools aren’t there,” Jasanoff says. “We’re trying to fill in the gaps.”

About 10 years ago, his lab developed MRI sensors that consist of magnetic proteins that can bind to dopamine. When this binding occurs, the sensors’ magnetic interactions with surrounding tissue weaken, dimming the tissue’s MRI signal. This allows researchers to continuously monitor dopamine levels in a specific part of the brain.
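
As rough intuition for how such a sensor turns chemistry into image contrast, the binding can be treated as a saturable process whose occupancy dims the signal. The sketch below is a toy model only; the dissociation constant and maximum dimming are invented for illustration and are not properties of the published sensor:

```python
# Toy model of a dopamine-sensitive MRI probe: sensor occupancy follows
# a standard single-site binding curve, and the MRI signal dims in
# proportion to occupancy. Kd and max_dimming are illustrative values.

def mri_signal(dopamine_uM: float, kd_uM: float = 5.0,
               max_dimming: float = 0.3) -> float:
    """Relative MRI signal (1.0 = no dopamine bound)."""
    occupancy = dopamine_uM / (dopamine_uM + kd_uM)
    return 1.0 - max_dimming * occupancy

for da in (0.0, 1.0, 5.0, 50.0):
    print(f"[DA] = {da:5.1f} uM -> signal = {mri_signal(da):.3f}")
```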

In their new study, Li and Jasanoff set out to analyze how dopamine released in the striatum of rats influences neural function both locally and in other brain regions. First, they injected their dopamine sensors into the striatum, which is located deep within the brain and plays an important role in controlling movement. Then they electrically stimulated a part of the brain called the lateral hypothalamus, which is a common experimental technique for rewarding behavior and inducing the brain to produce dopamine.

Next, the researchers used their dopamine sensor to measure dopamine levels throughout the striatum. They also performed traditional fMRI to measure neural activity in each part of the striatum. To their surprise, they found that high dopamine concentrations did not make neurons more active. However, higher dopamine levels did make the neurons remain active for a longer period of time.

“When dopamine was released, there was a longer duration of activity, suggesting a longer response to the reward,” Jasanoff says. “That may have something to do with how dopamine promotes learning, which is one of its key functions.”

Long-range effects

After analyzing dopamine release in the striatum, the researchers set out to determine how this dopamine might affect more distant locations in the brain. To do that, they performed traditional fMRI imaging on the brain while also mapping dopamine release in the striatum. “By combining these techniques we could probe these phenomena in a way that hasn’t been done before,” Jasanoff says.

The regions that showed the biggest surges in activity in response to dopamine were the motor cortex and the insular cortex. If confirmed in additional studies, the findings could help researchers understand the effects of dopamine in the human brain, including its roles in addiction and learning.

“Our results could lead to biomarkers that could be seen in fMRI data, and these correlates of dopaminergic function could be useful for analyzing animal and human fMRI,” Jasanoff says.

The research was funded by the National Institutes of Health and a Stanley Fahn Research Fellowship from the Parkinson’s Disease Foundation.

How the brain encodes landmarks that help us navigate

When we move through the streets of our neighborhood, we often use familiar landmarks to help us navigate. And as we think to ourselves, “OK, now make a left at the coffee shop,” a part of the brain called the retrosplenial cortex (RSC) lights up.

While many studies have linked this brain region with landmark-based navigation, exactly how it helps us find our way is not well-understood. A new study from MIT neuroscientists now reveals how neurons in the RSC use both visual and spatial information to encode specific landmarks.

“There’s a synthesis of some of these signals — visual inputs and body motion — to represent concepts like landmarks,” says Mark Harnett, an assistant professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research. “What we went after in this study is the neuron-level and population-level representation of these different aspects of spatial navigation.”

In a study of mice, the researchers found that this brain region creates a “landmark code” by combining visual information about the surrounding environment with spatial feedback of the mice’s own position along a track. Integrating these two sources of information allowed the mice to learn where to find a reward, based on landmarks that they saw.

“We believe that this code that we found, which is really locked to the landmarks, and also gives the animals a way to discriminate between landmarks, contributes to the animals’ ability to use those landmarks to find rewards,” says Lukas Fischer, an MIT postdoc and the lead author of the study.

Harnett is the senior author of the study, which appears today in the journal eLife. Other authors are graduate student Raul Mojica Soto-Albors and recent MIT graduate Friederike Buck.

Encoding landmarks

Previous studies have found that people with damage to the RSC have trouble finding their way from one place to another, even though they can still recognize their surroundings. The RSC is also one of the first areas affected in Alzheimer’s patients, who often have trouble navigating.

The RSC is wedged between the primary visual cortex and the motor cortex, and it receives input from both of those areas. It also appears to be involved in combining two types of representations of space — allocentric, meaning the relationship of objects to each other, and egocentric, meaning the relationship of objects to the viewer.

“The evidence suggests that RSC is really a place where you have a fusion of these different frames of reference,” Harnett says. “Things look different when I move around in the room, but that’s because my vantage point has changed. They’re not changing with respect to one another.”

In this study, the MIT team set out to analyze the behavior of individual RSC neurons in mice, including how they integrate multiple inputs that help with navigation. To do that, they created a virtual reality environment for the mice by allowing them to run on a treadmill while watching a video screen that made it appear they were running along a track. The speed of the video was determined by how fast the mice ran.

At specific points along the track, landmarks appear, signaling that there’s a reward available a certain distance beyond the landmark. The mice had to learn to distinguish between two different landmarks, and to learn how far beyond each one they had to run to get the reward.

Once the mice learned the task, the researchers recorded neural activity in the RSC as the animals ran along the virtual track. They were able to record from a few hundred neurons at a time, and found that most of them anchored their activity to a specific aspect of the task.

There were three primary anchoring points: the beginning of the trial, the landmark, and the reward point. The majority of the neurons were anchored to the landmarks, meaning that their activity would consistently peak at a specific point relative to the landmark, say 50 centimeters before it or 20 centimeters after it.
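
One straightforward way to quantify that anchoring is to average a neuron’s activity across trials as a function of position relative to the landmark and locate the peak. The sketch below runs that logic on synthetic data; it illustrates the idea and is not the authors’ analysis code:

```python
import numpy as np

# Estimate a neuron's anchoring point: average position-binned activity
# across trials, then find where the mean activity peaks relative to
# the landmark. Synthetic data stands in for real recordings.

rng = np.random.default_rng(0)
positions_cm = np.arange(-100, 101, 5)      # position relative to landmark
true_anchor_cm = -50.0                      # e.g., 50 cm before the landmark

# Fake trials: a Gaussian bump of activity around the anchoring point.
bump = np.exp(-0.5 * ((positions_cm - true_anchor_cm) / 20.0) ** 2)
trials = bump + 0.1 * rng.standard_normal((40, positions_cm.size))

mean_rate = trials.mean(axis=0)             # trial-averaged activity
anchor_cm = positions_cm[np.argmax(mean_rate)]
print(f"Estimated anchoring point: {anchor_cm} cm relative to landmark")
```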

Most of those neurons responded to both of the landmarks, but a small subset responded to only one or the other. The researchers hypothesize that those strongly selective neurons help the mice to distinguish between the landmarks and run the correct distance to get the reward.

When the researchers used optogenetics, a technique that uses light to control neural activity, to silence the RSC, the mice’s performance on the task became much worse.

Combining inputs

The researchers also did an experiment in which the mice could choose to run or not while the video played at a constant speed, unrelated to the mice’s movement. The mice could still see the landmarks, but the location of the landmarks was no longer linked to a reward or to the animals’ own behavior. In that situation, RSC neurons did respond to the landmarks, but not as strongly as they did when the mice were using them for navigation.

Further experiments allowed the researchers to tease out just how much neuron activation is produced by visual input (seeing the landmarks) and by feedback on the mouse’s own movement. However, simply adding those two numbers yielded totals much lower than the neuron activity seen when the mice were actively navigating the track.

“We believe that is evidence for a mechanism of nonlinear integration of these inputs, where they get combined in a way that creates a larger response than what you would get if you just added up those two inputs in a linear fashion,” Fischer says.
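
A toy calculation makes the test concrete; all numbers below are invented for illustration, not measured values from the study:

```python
# Linearity test: compare the response during active navigation with
# the sum of the responses to each input presented alone.
# Values are made up purely for the example.

visual_only = 1.0     # response to landmarks without self-motion coupling
motor_only = 0.8      # response attributable to the animal's own movement
observed = 4.5        # response during active, closed-loop navigation

linear_prediction = visual_only + motor_only
ratio = observed / linear_prediction
print(f"Linear prediction: {linear_prediction:.1f}")
print(f"Observed response is {ratio:.1f}x the linear sum "
      "-> consistent with nonlinear integration")
```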

The researchers now plan to analyze data that they have already collected on how neuron activity evolves over time as the mice learn the task. They also hope to perform further experiments in which they could try to separately measure visual and spatial inputs into different locations within RSC neurons.

The research was funded by the National Institutes of Health, the McGovern Institute, the NEC Corporation Fund for Research in Computers and Communications at MIT, and the Klingenstein-Simons Fellowship in Neuroscience.

Empowering faculty partnerships across the globe

MIT faculty share their creative and technical talent on campus as well as across the globe, compounding the Institute’s impact through strong international partnerships. Thanks to the MIT Global Seed Funds (GSF) program, managed by the MIT International Science and Technology Initiatives (MISTI), more of these faculty members will be able to build on these relationships to develop ideas and create new projects.

“This MISTI fund was extremely helpful in consolidating our collaboration and has been the start of a long-term interaction between the two teams,” says 2017 GSF awardee Mehrdad Jazayeri, associate professor of brain and cognitive sciences and investigator at the McGovern Institute for Brain Research. “We have already submitted multiple abstracts to conferences together, mapped out several ongoing projects, and secured international funding thanks to the preliminary progress this seed fund enabled.”

This year, the 28 funds that make up MISTI GSF received 232 MIT applications. Over $2.3 million was awarded to 107 projects from 23 departments across the entire Institute. This brings the total awarded to $22 million over the 12-year life of the program. Besides supporting faculty, these funds also provide meaningful educational opportunities for students. The majority of GSF teams include students from MIT and international collaborators, bolstering both their research portfolios and global experience.

“This project has had important impact on my grad student’s education and development. She was able to apply techniques she has learned to a new and challenging system, mentor an international student, participate in a major international meeting, and visit CEA,” says Professor of Chemistry Elizabeth Nolan, a 2017 GSF awardee.

On top of these academic and research goals, students are actively broadening their cultural experience and scope. “The environment at CEA differs enormously from MIT because it is a national lab and because lab structure and graduate education in France is markedly different than at MIT,” Nolan continues. “At CEA, she had the opportunity to present research to distinguished international colleagues.”

These impactful partnerships unite faculty teams behind common goals to tackle worldwide challenges, helping to develop solutions that would not be possible without international collaboration. 2017 GSF winner Emilio Bizzi, professor emeritus of brain and cognitive sciences and emeritus investigator at the McGovern Institute, articulated the advantage of combining these individual skills within a high-level team. “The collaboration among researchers was valuable in sharing knowledge, experience, skills and techniques … as well as offering the probability of future development of systems to aid in rehabilitation of patients suffering TBI.”

The research opportunities that grow from these seed funds often lead to published papers and additional funding leveraged from early results. The next call for proposals will be in mid-May.

MISTI creates applied international learning opportunities for MIT students that increase their ability to understand and address real-world problems. MISTI collaborates with partners at MIT and beyond, serving as a vital nexus of international activity and bolstering the Institute’s research mission by promoting collaborations between MIT faculty members and their counterparts abroad.

The neural basis of sensory hypersensitivity

Many people with autism spectrum disorders are highly sensitive to light, noise, and other sensory input. A new study in mice reveals a neural circuit that appears to underlie this hypersensitivity, offering a possible strategy for developing new treatments.

MIT and Brown University neuroscientists found that mice lacking a protein called Shank3, which has been previously linked with autism, were more sensitive to a touch on their whiskers than genetically normal mice. These Shank3-deficient mice also had overactive excitatory neurons in a region of the brain called the somatosensory cortex, which the researchers believe accounts for their over-reactivity.

There are currently no treatments for sensory hypersensitivity, but the researchers believe that uncovering the cellular basis of this sensitivity may help scientists to develop potential treatments.

“We hope our studies can point us to the right direction for the next generation of treatment development,” says Guoping Feng, the James W. and Patricia Poitras Professor of Neuroscience at MIT and a member of MIT’s McGovern Institute for Brain Research.

Feng and Christopher Moore, a professor of neuroscience at Brown University, are the senior authors of the paper, which appears today in Nature Neuroscience. McGovern Institute research scientist Qian Chen and Brown postdoc Christopher Deister are the lead authors of the study.

Too much excitation

The Shank3 protein is important for the function of synapses — connections that allow neurons to communicate with each other. Feng has previously shown that mice lacking the Shank3 gene display many traits associated with autism, including avoidance of social interaction, and compulsive, repetitive behavior.

In the new study, Feng and his colleagues set out to study whether these mice also show sensory hypersensitivity. For mice, one of the most important sources of sensory input is the whiskers, which help them to navigate and to maintain their balance, among other functions.

The researchers developed a way to measure the mice’s sensitivity to slight deflections of their whiskers, and then trained the mutant Shank3 mice and normal (“wild-type”) mice to display behaviors that signaled when they felt a touch to their whiskers. They found that mice that were missing Shank3 accurately reported very slight deflections that were not noticed by the normal mice.

“They are very sensitive to weak sensory input, which barely can be detected by wild-type mice,” Feng says. “That is a direct indication that they have sensory over-reactivity.”

Once they had established that the mutant mice experienced sensory hypersensitivity, the researchers set out to analyze the underlying neural activity. To do that, they used an imaging technique that can measure calcium levels, which indicate neural activity, in specific cell types.

They found that when the mice’s whiskers were touched, excitatory neurons in the somatosensory cortex were overactive. This was somewhat surprising because when Shank3 is missing, synaptic activity should drop. That led the researchers to hypothesize that the root of the problem was low levels of Shank3 in the inhibitory neurons that normally turn down the activity of excitatory neurons. Under that hypothesis, diminishing those inhibitory neurons’ activity would allow excitatory neurons to go unchecked, leading to sensory hypersensitivity.
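
That hypothesis can be captured in a minimal excitatory/inhibitory rate model; the sketch below is a generic textbook-style illustration with arbitrary parameters, not a model from the paper:

```python
# Toy feedforward E/I model: excitatory output is the stimulus drive
# minus inhibition. Weakening the inhibitory gain (as reduced Shank3
# in inhibitory neurons might) leaves excitatory responses unchecked.
# Gains are arbitrary illustrative numbers.

def excitatory_response(stimulus: float, inhibitory_gain: float) -> float:
    inhibition = inhibitory_gain * stimulus   # feedforward inhibition
    return max(0.0, stimulus - inhibition)    # rectified excitatory rate

stimulus = 1.0
for label, gain in [("normal inhibition", 0.7),
                    ("weakened inhibition", 0.3)]:
    print(f"{label}: E response = {excitatory_response(stimulus, gain):.2f}")
```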

To test this idea, the researchers genetically engineered mice so that they could turn off Shank3 expression exclusively in inhibitory neurons of the somatosensory cortex. As they had suspected, they found that in these mice, excitatory neurons were overactive, even though those neurons had normal levels of Shank3.

“If you only delete Shank3 in the inhibitory neurons in the somatosensory cortex, and the rest of the brain and the body is normal, you see a similar phenomenon where you have hyperactive excitatory neurons and increased sensory sensitivity in these mice,” Feng says.

Reversing hypersensitivity

The results suggest that reestablishing normal levels of neuron activity could reverse this kind of hypersensitivity, Feng says.

“That gives us a cellular target for how in the future we could potentially modulate the inhibitory neuron activity level, which might be beneficial to correct this sensory abnormality,” he says.

Many other studies in mice have linked defects in inhibitory neurons to neurological disorders, including Fragile X syndrome and Rett syndrome, as well as autism.

“Our study is one of several that provide a direct and causative link between inhibitory defects and sensory abnormality, in this model at least,” Feng says. “It provides further evidence to support inhibitory neuron defects as one of the key mechanisms in models of autism spectrum disorders.”

He now plans to study the timing of when these impairments arise during an animal’s development, which could help to guide the development of possible treatments. There are existing drugs that can turn down excitatory neurons, but these drugs have a sedative effect if used throughout the brain, so more targeted treatments could be a better option, Feng says.

“We don’t have a clear target yet, but we have a clear cellular phenomenon to help guide us,” he says. “We are still far away from developing a treatment, but we’re happy that we have identified defects that point in which direction we should go.”

The research was funded by the Hock E. Tan and K. Lisa Yang Center for Autism Research at MIT, the Stanley Center for Psychiatric Research at the Broad Institute of MIT and Harvard, the Nancy Lurie Marks Family Foundation, the Poitras Center for Psychiatric Disorders Research at the McGovern Institute, the Varanasi Family, R. Buxton, and the National Institutes of Health.

Differences between deep neural networks and human perception

When your mother calls your name, you know it’s her voice — no matter the volume, even over a poor cell phone connection. And when you see her face, you know it’s hers — if she is far away, if the lighting is poor, or if you are on a bad FaceTime call. This robustness to variation is a hallmark of human perception. On the other hand, we are susceptible to illusions: We might fail to distinguish between sounds or images that are, in fact, different. Scientists have explained many of these illusions, but we lack a full understanding of the invariances in our auditory and visual systems.

Deep neural networks have also performed speech recognition and image classification tasks with impressive robustness to variations in the auditory or visual stimuli. But are the invariances learned by these models similar to the invariances learned by human perceptual systems? A group of MIT researchers has discovered that they are different. They presented their findings yesterday at the 2019 Conference on Neural Information Processing Systems.

The researchers made a novel generalization of a classical concept: “metamers” — physically distinct stimuli that generate the same perceptual effect. The most famous examples of metamer stimuli arise because most people have three different types of cones in their retinae, which are responsible for color vision. The perceived color of any single wavelength of light can be matched exactly by a particular combination of three lights of different colors — for example, red, green, and blue lights. Nineteenth-century scientists inferred from this observation that humans have three different types of bright-light detectors in our eyes. This is the basis for electronic color displays on all of the screens we stare at every day. Another example in the visual system is that when we fix our gaze on an object, we may perceive surrounding visual scenes that differ at the periphery as identical. In the auditory domain, something analogous can be observed. For example, the “textural” sound of two swarms of insects might be indistinguishable, despite differing in the acoustic details that compose them, because they have similar aggregate statistical properties. In each case, the metamers provide insight into the mechanisms of perception, and constrain models of the human visual or auditory systems.
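
Color matching of this kind reduces to a small linear-algebra problem: one light is a metamer of another if it produces the same three cone responses. The sketch below uses an invented 3x3 sensitivity matrix rather than measured cone fundamentals:

```python
import numpy as np

# Toy trichromacy: each row gives one cone type's response to the
# red, green, and blue primaries. The numbers are illustrative only.
cone_sens = np.array([[0.80, 0.40, 0.10],   # L cone
                      [0.30, 0.90, 0.20],   # M cone
                      [0.05, 0.10, 0.90]])  # S cone

# Cone responses evoked by some test light (also invented).
target = np.array([0.5, 0.6, 0.3])

# Solve for primary intensities that reproduce the same cone responses:
# a physically different stimulus that is perceptually identical.
rgb = np.linalg.solve(cone_sens, target)
print("Matching primary intensities:", np.round(rgb, 3))
print("Reproduced cone responses:  ", np.round(cone_sens @ rgb, 3))
```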

In the current work, the researchers randomly chose natural images and sound clips of spoken words from standard databases, and then synthesized sounds and images so that deep neural networks would sort them into the same classes as their natural counterparts. That is, they generated physically distinct stimuli that are classified identically by models, rather than by humans. This is a new way to think about metamers, generalizing the concept to swap the role of computer models for human perceivers. They therefore called these synthesized stimuli “model metamers” of the paired natural stimuli. The researchers then tested whether humans could identify the words and images.
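
A common way to synthesize such stimuli is to start from noise and optimize the input until its activations at a chosen layer match those evoked by a natural example. The sketch below assumes a generic differentiable PyTorch model; it is a schematic of that approach, not the code used in the paper:

```python
import torch

def synthesize_model_metamer(layer_activations, natural_input,
                             steps: int = 1000, lr: float = 0.05):
    """Drive a noise input's activations toward those of a natural
    stimulus. `layer_activations` is any callable that maps an input
    tensor to the chosen layer's output. (Generic illustration only.)"""
    with torch.no_grad():
        target = layer_activations(natural_input)

    metamer = torch.randn_like(natural_input, requires_grad=True)
    optimizer = torch.optim.Adam([metamer], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        loss = torch.nn.functional.mse_loss(layer_activations(metamer),
                                            target)
        loss.backward()
        optimizer.step()

    return metamer.detach()   # classified like the natural input, but
                              # not constrained to look or sound like it
```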

“Participants heard a short segment of speech and had to identify from a list of words which word was in the middle of the clip. For the natural audio this task is easy, but for many of the model metamers humans had a hard time recognizing the sound,” explains first author Jenelle Feather, a graduate student in the MIT Department of Brain and Cognitive Sciences (BCS) and a member of the Center for Brains, Minds, and Machines (CBMM). That is, humans would not put the synthetic stimuli in the same class as the spoken word “bird” or the image of a bird. In fact, model metamers generated to match the responses of the deepest layers of the model were generally unrecognizable as words or images by human subjects.

Josh McDermott, associate professor in BCS and investigator in CBMM, makes the following case: “The basic logic is that if we have a good model of human perception, say of speech recognition, then if we pick two sounds that the model says are the same and present these two sounds to a human listener, that human should also say that the two sounds are the same. If the human listener instead perceives the stimuli to be different, this is a clear indication that the representations in our model do not match those of human perception.”

Joining Feather and McDermott on the paper are Alex Durango, a post-baccalaureate student, and Ray Gonzalez, a research assistant, both in BCS.

There is another type of failure of deep networks that has received a lot of attention in the media: adversarial examples (see, for example, “Why did my classifier just mistake a turtle for a rifle?”). These are stimuli that appear similar to humans but are misclassified by a model network (by design — they are constructed to be misclassified). They are complementary to the stimuli generated by Feather’s group, which sound or appear different to humans but are designed to be co-classified by the model network. The vulnerabilities of model networks exposed to adversarial attacks are well-known — face-recognition software might mistake identities; automated vehicles might not recognize pedestrians.

The importance of this work lies in improving models of perception beyond deep networks. Although the standard adversarial examples indicate differences between deep networks and human perceptual systems, the new stimuli generated by the McDermott group arguably represent a more fundamental model failure — they show that generic examples of stimuli classified as the same by a deep network produce wildly different percepts for humans.

The team also figured out ways to modify the model networks to yield metamers that were more plausible sounds and images to humans. As McDermott says, “This gives us hope that we may be able to eventually develop models that pass the metamer test and better capture human invariances.”

“Model metamers demonstrate a significant failure of present-day neural networks to match the invariances in the human visual and auditory systems,” says Feather. “We hope that this work will provide a useful behavioral measuring stick to improve model representations and create better models of human sensory systems.”

Controlling attention with brain waves

Having trouble paying attention? MIT neuroscientists may have a solution for you: Turn down your alpha brain waves. In a new study, the researchers found that people can enhance their attention by controlling their own alpha brain waves based on neurofeedback they receive as they perform a particular task.

The study found that when subjects learned to suppress alpha waves in one hemisphere of their parietal cortex, they were able to pay better attention to objects that appeared on the opposite side of their visual field. This is the first time that this cause-and-effect relationship has been seen, and it suggests that it may be possible for people to learn to improve their attention through neurofeedback.

“There’s a lot of interest in using neurofeedback to try to help people with various brain disorders and behavioral problems,” says Robert Desimone, director of MIT’s McGovern Institute for Brain Research. “It’s a completely noninvasive way of controlling and testing the role of different types of brain activity.”

It’s unknown how long these effects might last and whether this kind of control could be achieved with other types of brain waves, such as beta waves, which are linked to Parkinson’s disease. The researchers are now planning additional studies of whether this type of neurofeedback training might help people suffering from attentional or other neurological disorders.

Desimone is the senior author of the paper, which appears in Neuron on Dec. 4. McGovern Institute postdoc Yasaman Bagherzadeh is the lead author of the study. Daniel Baldauf, a former McGovern Institute research scientist, and Dimitrios Pantazis, a McGovern Institute principal research scientist, are also authors of the paper.

Alpha and attention

There are billions of neurons in the brain, and their combined electrical signals generate oscillations known as brain waves. Alpha waves, which oscillate at frequencies of 8 to 12 hertz, are believed to play a role in filtering out distracting sensory information.

Previous studies have shown a strong correlation between attention and alpha brain waves, particularly in the parietal cortex. In humans and in animal studies, a decrease in alpha waves has been linked to enhanced attention. However, it was unclear if alpha waves control attention or are just a byproduct of some other process that governs attention, Desimone says.

To test whether alpha waves actually regulate attention, the researchers designed an experiment in which people were given real-time feedback on their alpha waves as they performed a task. Subjects were asked to look at a grating pattern in the center of a screen, and told to use mental effort to increase the contrast of the pattern as they looked at it, making it more visible.

During the task, subjects were scanned using magnetoencephalography (MEG), which reveals brain activity with millisecond precision. The researchers measured alpha levels in both the left and right hemispheres of the parietal cortex and calculated the degree of asymmetry between the two levels. As the asymmetry between the two hemispheres grew, the grating pattern became more visible, offering the participants real-time feedback.
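
The feedback signal itself comes down to a simple computation on band-limited power. Here is a minimal sketch, assuming raw traces from one left and one right parietal sensor and an arbitrary mapping from asymmetry to grating contrast (the study’s actual pipeline is not specified here):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def alpha_power(trace: np.ndarray, fs: float = 1000.0) -> float:
    """Mean power of a signal in the 8-12 Hz alpha band."""
    b, a = butter(4, [8.0, 12.0], btype="bandpass", fs=fs)
    return float(np.mean(filtfilt(b, a, trace) ** 2))

def feedback_contrast(left: np.ndarray, right: np.ndarray,
                      gain: float = 2.0) -> float:
    """Map left/right alpha asymmetry to a 0-1 grating contrast.
    The transfer function here is an arbitrary illustrative choice."""
    l, r = alpha_power(left), alpha_power(right)
    asymmetry = (r - l) / (l + r)   # grows as left-hemisphere alpha drops
    return float(np.clip(0.5 + gain * asymmetry, 0.0, 1.0))
```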

Although subjects were not told anything about what was happening, after about 20 trials (which took about 10 minutes), they were able to increase the contrast of the pattern. The MEG results indicated they had done so by controlling the asymmetry of their alpha waves.

“After the experiment, the subjects said they knew that they were controlling the contrast, but they didn’t know how they did it,” Bagherzadeh says. “We think the basis is conditional learning — whenever you do a behavior and you receive a reward, you’re reinforcing that behavior. People usually don’t have any feedback on their brain activity, but when we provide it to them and reward them, they learn by practicing.”

Although the subjects were not consciously aware of how they were manipulating their brain waves, they were able to do it, and this success translated into enhanced attention on the opposite side of the visual field. As the subjects looked at the pattern in the center of the screen, the researchers flashed dots of light on either side of the screen. The participants had been told to ignore these flashes, but the researchers measured how their visual cortex responded to them.

One group of participants was trained to suppress alpha waves in the left side of the brain, while the other was trained to suppress the right side. In those who had reduced alpha on the left side, their visual cortex showed a larger response to flashes of light on the right side of the screen, while those with reduced alpha on the right side responded more to flashes seen on the left side.

“Alpha manipulation really was controlling people’s attention, even though they didn’t have any clear understanding of how they were doing it,” Desimone says.

Persistent effect

After the neurofeedback training session ended, the researchers asked subjects to perform two additional tasks that involve attention, and found that the enhanced attention persisted. In one experiment, subjects were asked to watch for the appearance of a grating pattern similar to what they had seen during the neurofeedback task. In some of the trials, they were told in advance to pay attention to one side of the visual field, but in others, they were not given any direction.

When the subjects were told to pay attention to one side, that instruction was the dominant factor in where they looked. But if they were not given any cue in advance, they tended to pay more attention to the side that had been favored during their neurofeedback training.

In another task, participants were asked to look at an image such as a natural outdoor scene, urban scene, or computer-generated fractal shape. By tracking subjects’ eye movements, the researchers found that people spent more time looking at the side that their alpha waves had trained them to pay attention to.

“It is promising that the effects did seem to persist afterwards,” says Desimone, though more study is needed to determine how long these effects might last.

The research was funded by the McGovern Institute.

MIT appoints 14 faculty members to named professorships

The School of Science has announced that 14 of its faculty members have been appointed to named professorships. The faculty members selected for these positions receive additional support to pursue their research and develop their careers.

Riccardo Comin is an assistant professor in the Department of Physics. He has been named a Class of 1947 Career Development Professor. This three-year professorship is granted in recognition of the recipient’s outstanding work in both research and teaching. Comin is interested in condensed matter physics. He uses experimental methods to synthesize new materials, as well as analysis through spectroscopy and scattering to investigate solid state physics. Specifically, the Comin lab attempts to discover and characterize electronic phases of quantum materials. Recently, his lab, in collaboration with colleagues, discovered that weaving a conductive material into the lattice known as the “kagome” pattern can result in quantum behavior when electricity is passed through it.

Joseph Davis, assistant professor in the Department of Biology, has been named a Whitehead Career Development Professor. He looks at how cells build and deconstruct complex molecular machinery. The work of his lab group relies on biochemistry, biophysics, and structural approaches that include spectrometry and microscopy. A current project investigates the formation of the ribosome, an essential component in all cells. His work has implications for metabolic engineering, drug delivery, and materials science.

Lawrence Guth is now the Claude E. Shannon (1940) Professor of Mathematics. Guth explores harmonic analysis and combinatorics, and he is also interested in metric geometry and identifying connections between geometric inequalities and topology. The subject of metric geometry revolves around being able to estimate measurements, including length, area, volume, and distance, and combinatorial geometry is essentially the estimation of the intersection of patterns in simple shapes, including lines and circles.

Michael Halassa, an assistant professor in the Department of Brain and Cognitive Sciences, will hold the three-year Class of 1958 Career Development Professorship. His area of interest is brain circuitry. By investigating the networks and connections in the brain, he hopes to understand how they operate — and identify any ways in which they might deviate from normal operations, causing neurological and psychiatric disorders. Several publications from his lab discuss improvements in the treatment of the deleterious symptoms of autism spectrum disorder and schizophrenia, and his latest work provides insights on how the brain filters out distractions, particularly noise. Halassa is an associate investigator at the McGovern Institute for Brain Research and an affiliate member of the Picower Institute for Learning and Memory.

Sebastian Lourido, an assistant professor and the new Latham Family Career Development Professor in the Department of Biology for the next three years, works on treatments for infectious disease by learning about parasitic vulnerabilities. Focusing on human pathogens, Lourido and his lab are interested in what allows parasites to be so widespread and deadly, looking at the molecular level. This includes exploring how calcium regulates eukaryotic cells, which, in turn, affects processes such as muscle contraction and membrane repair, in addition to kinase responses.

Brent Minchew is named a Cecil and Ida Green Career Development Professor for a three-year term. Minchew, a faculty member in the Department of Earth, Atmospheric and Planetary Sciences, studies glaciers using remote sensing methods, such as interferometric synthetic aperture radar. His research into glaciers, including their mechanics, rheology, and interactions with their surrounding environment, extends as far as observing their responses to climate change. His group recently determined that Antarctica, in a worst-case scenario climate projection, would not contribute as much as predicted to rising sea level.

Elly Nedivi, a professor in the departments of Brain and Cognitive Sciences and Biology, has been named the inaugural William R. (1964) and Linda R. Young Professor. She works on brain plasticity, defined as the brain’s ability to adapt with experience, by identifying genes that play a role in plasticity and their neuronal and synaptic functions. In one of her lab’s recent publications, they suggest that variants of a particular gene may undermine expression or production of a protein, increasing the risk of bipolar disorder. In addition, she collaborates with others at MIT to develop new microscopy tools that allow better analysis of brain connectivity. Nedivi is also a member of the Picower Institute for Learning and Memory.

Andrei Negut has been named a Class of 1947 Career Development Professor for a three-year term. Negut, a member of the Department of Mathematics, focuses on problems in geometric representation theory. This topic requires investigation within algebraic geometry and representation theory simultaneously, with implications for mathematical physics, symplectic geometry, combinatorics, and probability theory.

Matěj Peč, the Victor P. Starr Career Development Professor in the Department of Earth, Atmospheric and Planetary Sciences until 2021, studies how the movement of the Earth’s tectonic plates affects rocks, mechanically and microstructurally. To investigate such a large-scale topic, he utilizes high-pressure, high-temperature experiments in a lab to simulate the driving forces associated with plate motion, and compares results with natural observations and theoretical modeling. His lab has identified a particular boundary beneath the Earth’s crust where rock properties shift from brittle, like peanut brittle, to viscous, like honey, and determined how that layer accommodates building strain between the two. In his investigations, he also considers the effect on melt generation miles underground.

Kerstin Perez has been named the three-year Class of 1948 Career Development Professor in the Department of Physics. Her research interest is dark matter. She uses novel analytical tools, such as those affixed to a balloon-borne instrument that can carry out processes similar to those of a particle collider (like the Large Hadron Collider), to detect new particle interactions in space with the help of cosmic rays. In another research project, Perez uses a satellite telescope array on Earth to search for X-ray signatures of mysterious particles. Her work requires heavy involvement with collaborative observatories, instruments, and telescopes. Perez is affiliated with the Kavli Institute for Astrophysics and Space Research.

Bjorn Poonen, named a Distinguished Professor of Science in the Department of Mathematics, studies number theory and algebraic geometry. He, his colleagues, and his lab members generate algorithms that can solve polynomial equations with the particular requirement that the solutions be rational numbers. These types of problems can be useful in encoding data. He also helps to determine what is undeterminable, that is, exploring the limits of computing.

Daniel Suess, named a Class of 1948 Career Development Professor in the Department of Chemistry, uses molecular chemistry to explain global biogeochemical cycles. In the fields of inorganic and biological chemistry, Suess and his lab look into understanding complex and challenging reactions and clustering of particular chemical elements and their catalysts. Most notably, these reactions include those that are essential to solar fuels. Suess’s efforts to investigate both biological and synthetic systems have broad aims of both improving human health and decreasing environmental impacts.

Alison Wendlandt is the new holder of the five-year Cecil and Ida Green Career Development Professorship. In the Department of Chemistry, the Wendlandt research group focuses on physical organic chemistry and organic and organometallic synthesis to develop reaction catalysts. Her team concentrates on designing new catalysts, identifying processes to which these catalysts can be applied, and determining principles that can expand preexisting reactions. Her team’s efforts delve into the fields of synthetic organic chemistry, reaction kinetics, and mechanics.

Julien de Wit, a Department of Earth, Atmospheric and Planetary Sciences assistant professor, has been named a Class of 1954 Career Development Professor. He combines math and science to answer big-picture planetary questions. Using data science, de Wit develops new analytical techniques for mapping exoplanetary atmospheres, studies planet-star interactions of planetary systems, and determines atmospheric and planetary properties of exoplanets from spectroscopic information. He is a member of the scientific team involved in the Search for habitable Planets EClipsing ULtra-cOOl Stars (SPECULOOS) and the TRAnsiting Planets and PlanetesImals Small Telescope (TRAPPIST), an international collection of observatories. He is affiliated with the Kavli Institute.

Drug combination reverses hypersensitivity to noise

People with autism often experience hypersensitivity to noise and other sensory input. MIT neuroscientists have now identified two brain circuits that help tune out distracting sensory information, and they have found a way to reverse noise hypersensitivity in mice by boosting the activity of those circuits.

One of the circuits the researchers identified is involved in filtering noise, while the other exerts top-down control by allowing the brain to switch its attention between different sensory inputs.

The researchers showed that restoring the function of both circuits worked much better than treating either circuit alone. This demonstrates the benefits of mapping and targeting multiple circuits involved in neurological disorders, says Michael Halassa, an assistant professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research.

“We think this work has the potential to transform how we think about neurological and psychiatric disorders, [so that we see them] as a combination of circuit deficits,” says Halassa, the senior author of the study. “The way we should approach these brain disorders is to map, to the best of our ability, what combination of deficits are there, and then go after that combination.”

MIT postdoc Miho Nakajima and research scientist L. Ian Schmitt are the lead authors of the paper, which appears in Neuron on Oct. 21. Guoping Feng, the James W. and Patricia Poitras Professor of Neuroscience and a member of the McGovern Institute, is also an author of the paper.

Hypersensitivity

Many gene variants have been linked with autism, but most patients have very few, if any, of those variants. One of those genes is ptchd1, which is mutated in about 1 percent of people with autism. In a 2016 study, Halassa and Feng found that during development this gene is primarily expressed in a part of the thalamus called the thalamic reticular nucleus (TRN).

That study revealed that neurons of the TRN help the brain to adjust to changes in sensory input, such as noise level or brightness. In mice with ptchd1 missing, TRN neurons fire too fast, and they can’t adjust when noise levels change. This prevents the TRN from performing its usual sensory filtering function, Halassa says.

“Neurons that are there to filter out noise, or adjust the overall level of activity, are not adapting. Without the ability to fine-tune the overall level of activity, you can get overwhelmed very easily,” he says.

In the 2016 study, the researchers also found that they could restore some of the mice’s noise filtering ability by treating them with a drug called EBIO that activates neurons’ potassium channels. EBIO has harmful cardiac side effects, so it likely could not be used in human patients, but other drugs that boost TRN activity may have a similar beneficial effect on hypersensitivity, Halassa says.

In the new Neuron paper, the researchers delved more deeply into the effects of ptchd1, which is also expressed in the prefrontal cortex. To explore whether the prefrontal cortex might play a role in the animals’ hypersensitivity, the researchers used a task in which mice have to distinguish between three different tones, presented with varying amounts of background noise.

Normal mice can learn to use a cue that alerts them whenever the noise level is going to be higher, improving their overall performance on the task. A similar phenomenon is seen in humans, who can adjust better to noisier environments when they have some advance warning, Halassa says. However, mice with the ptchd1 mutation were unable to use these cues to improve their performance, even when their TRN deficit was treated with EBIO.

This suggested that another brain circuit must be playing a role in the animals’ ability to filter out distracting noise. To test the possibility that this circuit is located in the prefrontal cortex, the researchers recorded from neurons in that region while mice lacking ptchd1 performed the task. They found that neuronal activity died out much faster in these mice than in the prefrontal cortex of normal mice. That led the researchers to test another drug, known as modafinil, which is FDA-approved to treat narcolepsy and is sometimes prescribed to improve memory and attention.

The researchers found that when they treated mice missing ptchd1 with both modafinil and EBIO, their hypersensitivity disappeared, and their performance on the task was the same as that of normal mice.

Targeting circuits

This successful reversal of symptoms suggests that the mice missing ptchd1 experience a combination of circuit deficits that each contribute differently to noise hypersensitivity. One circuit filters noise, while the other helps to control noise filtering based on external cues. Ptchd1 mutations affect both circuits, in different ways that can be treated with different drugs.

Both of those circuits could also be affected by other genetic mutations that have been linked to autism and other neurological disorders, Halassa says. Targeting those circuits, rather than specific genetic mutations, may offer a more effective way to treat such disorders, he says.

“These circuits are important for moving things around the brain — sensory information, cognitive information, working memory,” he says. “We’re trying to reverse-engineer circuit operations in the service of figuring out what to do about a real human disease.”

He now plans to study circuit-level disturbances that arise in schizophrenia. That disorder affects circuits involving cognitive processes such as inference — the ability to draw conclusions from available information.

The research was funded by the Simons Center for the Social Brain at MIT, the Stanley Center for Psychiatric Research at the Broad Institute, the McGovern Institute for Brain Research at MIT, the Pew Foundation, the Human Frontiers Science Program, the National Institutes of Health, the James and Patricia Poitras Center for Psychiatric Disorders Research at MIT, a Japan Society for the Promotion of Science Fellowship, and a National Alliance for the Research of Schizophrenia and Depression Young Investigator Award.