Drug combination reverses hypersensitivity to noise

People with autism often experience hypersensitivity to noise and other sensory input. MIT neuroscientists have now identified two brain circuits that help tune out distracting sensory information, and they have found a way to reverse noise hypersensitivity in mice by boosting the activity of those circuits.

One of the circuits the researchers identified is involved in filtering noise, while the other exerts top-down control by allowing the brain to switch its attention between different sensory inputs.

The researchers showed that restoring the function of both circuits worked much better than treating either circuit alone. This demonstrates the benefits of mapping and targeting multiple circuits involved in neurological disorders, says Michael Halassa, an assistant professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research.

“We think this work has the potential to transform how we think about neurological and psychiatric disorders, [so that we see them] as a combination of circuit deficits,” says Halassa, the senior author of the study. “The way we should approach these brain disorders is to map, to the best of our ability, what combination of deficits are there, and then go after that combination.”

MIT postdoc Miho Nakajima and research scientist L. Ian Schmitt are the lead authors of the paper, which appears in Neuron on Oct. 21. Guoping Feng, the James W. and Patricia T. Poitras Professor of Neuroscience and a member of the McGovern Institute, is also an author of the paper.

Hypersensitivity

Many gene variants have been linked with autism, but any individual patient typically carries only a few of them, if any. One of those genes is Ptchd1, which is mutated in about 1 percent of people with autism. In a 2016 study, Halassa and Feng found that during development this gene is primarily expressed in a part of the thalamus called the thalamic reticular nucleus (TRN).

That study revealed that neurons of the TRN help the brain adjust to changes in sensory input, such as noise level or brightness. In mice lacking Ptchd1, TRN neurons fire too fast, and they can’t adjust when noise levels change. This prevents the TRN from performing its usual sensory filtering function, Halassa says.

“Neurons that are there to filter out noise, or adjust the overall level of activity, are not adapting. Without the ability to fine-tune the overall level of activity, you can get overwhelmed very easily,” he says.

In the 2016 study, the researchers also found that they could restore some of the mice’s noise-filtering ability by treating them with a drug called EBIO, which activates neurons’ potassium channels. EBIO has harmful cardiac side effects, so it likely could not be used in human patients, but other drugs that boost TRN activity may have a similar beneficial effect on hypersensitivity, Halassa says.

In the new Neuron paper, the researchers delved more deeply into the effects of Ptchd1, which is also expressed in the prefrontal cortex. To explore whether the prefrontal cortex might play a role in the animals’ hypersensitivity, the researchers used a task in which mice had to distinguish between three different tones, presented with varying amounts of background noise.

Normal mice can learn to use a cue that alerts them whenever the noise level is going to be higher, improving their overall performance on the task. A similar phenomenon is seen in humans, who can adjust better to noisier environments when they have some advance warning, Halassa says. However, mice with the Ptchd1 mutation were unable to use these cues to improve their performance, even when their TRN deficit was treated with EBIO.

This suggested that another brain circuit must be playing a role in the animals’ ability to filter out distracting noise. To test the possibility that this circuit is located in the prefrontal cortex, the researchers recorded from neurons in that region while mice lacking Ptchd1 performed the task. They found that neuronal activity died out much faster in these mice than in the prefrontal cortex of normal mice. That led the researchers to test another drug, known as modafinil, which is FDA-approved to treat narcolepsy and is sometimes prescribed to improve memory and attention.

The researchers found that when they treated mice missing Ptchd1 with both modafinil and EBIO, their hypersensitivity disappeared, and their performance on the task was the same as that of normal mice.

Targeting circuits

This successful reversal of symptoms suggests that mice missing Ptchd1 experience a combination of circuit deficits that each contribute differently to noise hypersensitivity. One circuit filters noise, while the other helps to control noise filtering based on external cues. Ptchd1 mutations affect both circuits in different ways, each of which can be treated with a different drug.

Both of those circuits could also be affected by other genetic mutations that have been linked to autism and other neurological disorders, Halassa says. Targeting those circuits, rather than specific genetic mutations, may offer a more effective way to treat such disorders, he says.

“These circuits are important for moving things around the brain — sensory information, cognitive information, working memory,” he says. “We’re trying to reverse-engineer circuit operations in the service of figuring out what to do about a real human disease.”

He now plans to study circuit-level disturbances that arise in schizophrenia. That disorder affects circuits involving cognitive processes such as inference — the ability to draw conclusions from available information.

The research was funded by the Simons Center for the Social Brain at MIT, the Stanley Center for Psychiatric Research at the Broad Institute, the McGovern Institute for Brain Research at MIT, the Pew Foundation, the Human Frontiers Science Program, the National Institutes of Health, the James and Patricia Poitras Center for Psychiatric Disorders Research at MIT, a Japan Society for the Promotion of Science Fellowship, and a National Alliance for Research on Schizophrenia and Depression Young Investigator Award.

Better sleep habits lead to better college grades

Two MIT professors have found a strong relationship between students’ grades and how much sleep they’re getting. What time students go to bed and the consistency of their sleep habits also make a big difference. And no, getting a good night’s sleep just before a big test is not good enough — it takes several nights in a row of good sleep to make a difference.

Those are among the conclusions from an experiment in which 100 students in an MIT engineering class were given Fitbits, the popular wrist-worn devices that track a person’s activity 24/7, in exchange for the researchers’ access to a semester’s worth of their activity data. The findings — some unsurprising, but some quite unexpected — are reported today in the journal npj Science of Learning in a paper by former MIT postdoc Kana Okano, professors Jeffrey Grossman and John Gabrieli, and two others.

One of the surprises was that individuals who went to bed after some particular threshold time — for these students, that tended to be 2 a.m., but it varied from one person to another — tended to perform less well on their tests no matter how much total sleep they ended up getting.

The study didn’t start out as research on sleep at all. Instead, Grossman was trying to find a correlation between physical exercise and the academic performance of students in his class 3.091 (Introduction to Solid-State Chemistry). In addition to having 100 of the students wear Fitbits for the semester, he also enrolled about one-fourth of them in an intense fitness class in MIT’s Department of Athletics, Physical Education, and Recreation, with the help of assistant professors Carrie Moore and Matthew Breen, who created the class specifically for this study. The thinking was that there might be measurable differences in test performance between the two groups.

There wasn’t. Those without the fitness classes performed just as well as those who did take them. “What we found at the end of the day was zero correlation with fitness, which I must say was disappointing since I believed, and still believe, there is a tremendous positive impact of exercise on cognitive performance,” Grossman says.

He speculates that the intervals between the fitness program and the classes may have been too long to show an effect. But meanwhile, in the vast amount of data collected during the semester, some other correlations did become obvious. While the devices weren’t explicitly monitoring sleep, the Fitbit program’s proprietary algorithms did detect periods of sleep and changes in sleep quality, primarily based on lack of activity.

These correlations were not at all subtle, Grossman says. There was essentially a straight-line relationship between the average amount of sleep a student got and their grades on the 11 quizzes, three midterms, and final exam, with the grades ranging from A’s to C’s. “There’s lots of scatter, it’s a noisy plot, but it’s a straight line,” he says. The fact that there was a correlation between sleep and performance wasn’t surprising, but the extent of it was, he says. Of course, this correlation can’t absolutely prove that sleep was the determining factor in the students’ performance, as opposed to some other influence that might have affected both sleep and grades. But the results are a strong indication, Grossman says, that sleep “really, really matters.”
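
The analysis described here is an ordinary correlation and least-squares line fit. Below is a minimal sketch in Python with made-up numbers for illustration; only the method, not the data, reflects the study.

```python
import numpy as np

# Hypothetical numbers for illustration only -- not the study's data.
# Average nightly sleep (hours) and overall course grade (percent):
sleep = np.array([5.5, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5])
grade = np.array([71.0, 75.0, 74.0, 80.0, 83.0, 85.0, 88.0])

# Pearson r measures how tightly the scatter hugs a straight line.
r = np.corrcoef(sleep, grade)[0, 1]

# Ordinary least squares: grade ~ slope * sleep + intercept.
slope, intercept = np.polyfit(sleep, grade, 1)

print(f"r = {r:.2f}; about {slope:.1f} grade points per extra hour of sleep")
```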

“Of course, we knew already that more sleep would be beneficial to classroom performance, from a number of previous studies that relied on subjective measures like self-report surveys,” Grossman says. “But in this study the benefits of sleep are correlated to performance in the context of a real-life college course, and driven by large amounts of objective data collection.”

The study also revealed no improvement in scores for those who made sure to get a good night’s sleep right before a big test. According to the data, “the night before doesn’t matter,” Grossman says. “We’ve heard the phrase ‘Get a good night’s sleep, you’ve got a big day tomorrow.’ It turns out this does not correlate at all with test performance. Instead, it’s the sleep you get during the days when learning is happening that matters most.”

Another surprising finding is that there appears to be a certain cutoff for bedtimes, such that going to bed later results in poorer performance, even if the total amount of sleep is the same. “When you go to bed matters,” Grossman says. “If you get a certain amount of sleep — let’s say seven hours — no matter when you get that sleep, as long as it’s before certain times, say you go to bed at 10, or at 12, or at 1, your performance is the same. But if you go to bed after 2, your performance starts to go down even if you get the same seven hours. So, quantity isn’t everything.”

Quality of sleep also mattered, not just quantity. For example, those who got relatively consistent amounts of sleep each night did better than those who had greater variations from one night to the next, even if they ended up with the same average amount.

This research also helped to provide an explanation for something that Grossman says he had noticed and wondered about for years, which is that on average, the women in his class have consistently gotten better grades than the men. Now, he has a possible answer: The data show that the differences in quantity and quality of sleep can fully account for the differences in grades. “If we correct for sleep, men and women do the same in class. So sleep could be the explanation for the gender difference in our class,” he says.

More research will be needed to understand the reasons why women tend to have better sleep habits than men. “There are so many factors out there that it could be,” Grossman says. “I can envision a lot of exciting follow-on studies to try to understand this result more deeply.”

“The results of this study are very gratifying to me as a sleep researcher, but are terrifying to me as a parent,” says Robert Stickgold, a professor of psychiatry and director of the Center for Sleep and Cognition at Harvard Medical School, who was not connected with this study. He adds, “The overall course grades for students averaging six and a half hours of sleep were down 50 percent from other students who averaged just one hour more sleep. Similarly, those who had just a half-hour more night-to-night variation in their total sleep time had grades that dropped 45 percent below others with less variation. This is huge!”

Stickgold says “a full quarter of the variation in grades was explained by these sleep parameters (including bedtime). All students need to not only be aware of these results, but to understand their implication for success in college. I can’t help but believe the same is true for high school students.” But he adds one caution: “That said, correlation is not the same as causation. While I have no doubt that less and more variable sleep will hurt a student’s grades, it’s also possible that doing poorly in classes leads to less and more variable sleep, not the other way around, or that some third factor, such as ADHD, could independently lead to poorer grades and poorer sleep.”

The team also included technical assistant Jakub Kaczmarzyk and Harvard Business School researcher Neha Dave. The study was supported by MIT’s Department of Materials Science and Engineering, the Lubin Fund, and the MIT Integrated Learning Initiative.

Perception of musical pitch varies across cultures

People who are accustomed to listening to Western music, which is based on a system of notes organized in octaves, can usually perceive the similarity between notes that are the same but played in different registers — say, high C and middle C. However, a longstanding question is whether this is a universal phenomenon or one that has been ingrained by musical exposure.

This question has been hard to answer, in part because of the difficulty in finding people who have not been exposed to Western music. Now, a new study led by researchers from MIT and the Max Planck Institute for Empirical Aesthetics has found that unlike residents of the United States, people living in a remote area of the Bolivian rainforest usually do not perceive the similarities between two versions of the same note played at different registers (high or low).

The findings suggest that although there is a natural mathematical relationship between the frequencies of every “C,” no matter what octave it’s played in, the brain only becomes attuned to those similarities after hearing music based on octaves, says Josh McDermott, an associate professor in MIT’s Department of Brain and Cognitive Sciences.

“It may well be that there is a biological predisposition to favor octave relationships, but it doesn’t seem to be realized unless you are exposed to music in an octave-based system,” says McDermott, who is also a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds and Machines.

The study also found that members of the Bolivian tribe, known as the Tsimane’, and Westerners do have a very similar upper limit on the frequency of notes that they can accurately distinguish, suggesting that this aspect of pitch perception may be independent of musical experience and biologically determined.

McDermott is the senior author of the study, which appears in the journal Current Biology on Sept. 19. Nori Jacoby, a former MIT postdoc who is now a group leader at the Max Planck Institute for Empirical Aesthetics, is the paper’s lead author. Other authors are Eduardo Undurraga, an assistant professor at the Pontifical Catholic University of Chile; Malinda McPherson, a graduate student in the Harvard/MIT Program in Speech and Hearing Bioscience and Technology; Joaquin Valdes, a graduate student at the Pontifical Catholic University of Chile; and Tomas Ossandon, an assistant professor at the Pontifical Catholic University of Chile.

Octaves apart

Cross-cultural studies of how music is perceived can shed light on the interplay between biological constraints and cultural influences that shape human perception. McDermott’s lab has performed several such studies with the participation of Tsimane’ tribe members, who live in relative isolation from Western culture and have had little exposure to Western music.

In a study published in 2016, McDermott and his colleagues found that Westerners and Tsimane’ had different aesthetic reactions to chords, or combinations of notes. To Western ears, the combination of C and F# is very grating, but Tsimane’ listeners rated this chord just as likeable as other chords that Westerners would interpret as more pleasant, such as C and G.

Later, Jacoby and McDermott found that both Westerners and Tsimane’ are drawn to musical rhythms composed of simple integer ratios, but the ratios they favor are different, based on which rhythms are more common in the music they listen to.

In their new study, the researchers probed pitch perception using an experimental design in which they played a very simple tune, only two or three notes, and then asked the listener to sing it back. The notes that were played could come from any octave within the range of human hearing, but listeners sang their responses within their vocal range, usually restricted to a single octave.

Eduardo Undurraga, an assistant professor at the Pontifical Catholic University of Chile, runs a musical pitch perception experiment with a member of the Tsimane’ tribe of the Bolivian rainforest. Photo: Josh McDermott

Western listeners, especially those who were trained musicians, tended to reproduce the tune an exact number of octaves above or below what they heard, though they were not specifically instructed to do so. In Western music, the frequency of a given note doubles with each ascending octave, so tones with frequencies of 27.5 hertz, 55 hertz, 110 hertz, 220 hertz, and so on, are all heard as the note A.
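
That doubling rule is easy to verify numerically. The sketch below is illustrative only, not code from the study; it checks whether two frequencies are octave-equivalent by testing whether their ratio is a power of 2.

```python
import math

def octave_equivalent(f1_hz: float, f2_hz: float, tol: float = 1e-6) -> bool:
    """True if the two frequencies are a whole number of octaves apart,
    i.e., their ratio is an exact power of 2."""
    octaves = math.log2(f2_hz / f1_hz)
    return abs(octaves - round(octaves)) < tol

# Successive doublings of 27.5 Hz are all heard (by Western listeners) as "A":
print([27.5 * 2**k for k in range(5)])   # 27.5, 55, 110, 220, 440 Hz
print(octave_equivalent(27.5, 220.0))    # True: exactly three octaves apart
print(octave_equivalent(27.5, 330.0))    # False: a ratio of 12 is not a power of 2
```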

Western listeners in the study, all of whom lived in New York or Boston, accurately reproduced sequences such as A-C-A, but in a different register, as though they heard the similarity of notes separated by octaves. The Tsimane’, however, did not.

“The relative pitch was preserved (between notes in the series), but the absolute pitch produced by the Tsimane’ didn’t have any relationship to the absolute pitch of the stimulus,” Jacoby says. “That’s consistent with the idea that perceptual similarity is something that we acquire from exposure to Western music, where the octave is structurally very important.”

The ability to reproduce the same note in different octaves may be honed by singing along with others whose natural registers are different, or singing along with an instrument being played in a different pitch range, Jacoby says.

Limits of perception

The study findings also shed light on the upper limits of pitch perception for humans. It has long been known that Western listeners cannot accurately distinguish pitches above about 4,000 hertz, although they can still hear frequencies up to nearly 20,000 hertz. On a traditional 88-key piano, the highest note is about 4,186 hertz.

People have speculated that the piano was designed to go only that high because of a fundamental limit on pitch perception, but McDermott thought it could be possible that the opposite was true: That is, the limit was culturally influenced by the fact that few musical instruments produce frequencies higher than 4,000 hertz.

The researchers found that although Tsimane’ musical instruments usually have upper limits much lower than 4,000 hertz, Tsimane’ listeners could distinguish pitches very well up to about 4,000 hertz, as evidenced by accurate sung reproductions of those pitch intervals. Above that threshold, their perceptions broke down, very similarly to Western listeners.

“It looks almost exactly the same across groups, so we have some evidence for biological constraints on the limits of pitch,” Jacoby says.

One possible explanation for this limit is that once frequencies reach about 4,000 hertz, the firing rates of the neurons of our inner ear can’t keep up and we lose a critical cue with which to distinguish different frequencies.

“The new study contributes to the age-old debate about the interplay between culture and biological constraints in music,” says Daniel Pressnitzer, a senior research scientist at Paris Descartes University, who was not involved in the research. “This unique, precious, and extensive dataset demonstrates both striking similarities and unexpected differences in how Tsimane’ and Western listeners perceive or conceive musical pitch.”

Jacoby and McDermott now hope to expand their cross-cultural studies to other groups who have had little exposure to Western music, and to perform more detailed studies of pitch perception among the Tsimane’.

Such studies have already shown the value of including research participants other than the Western-educated, relatively wealthy college undergraduates who are the subjects of most academic studies on perception, McDermott says. These broader studies allow researchers to tease out different elements of perception that cannot be seen when examining only a single, homogeneous group.

“We’re finding that there are some cross-cultural similarities, but there also seems to be really striking variation in things that a lot of people would have presumed would be common across cultures and listeners,” McDermott says. “These differences in experience can lead to dissociations of different aspects of perception, giving you clues to what the parts of the perceptual system are.”

The research was funded by the James S. McDonnell Foundation, the National Institutes of Health, and the Presidential Scholar in Society and Neuroscience Program at Columbia University.

Mehrdad Jazayeri and Hazel Sive awarded 2019 School of Science teaching prizes

The School of Science has announced that the recipients of the school’s 2019 Teaching Prizes for Graduate and Undergraduate Education are Mehrdad Jazayeri and Hazel Sive. Nominated by peers and students, the chosen faculty members are recognized for their exemplary efforts in teaching graduate and undergraduate students.

Mehrdad Jazayeri, an associate professor in the Department of Brain and Cognitive Sciences and investigator at the McGovern Institute for Brain Research, is awarded the prize for graduate education for 9.014 (Quantitative Methods and Computational Models in Neuroscience). Earlier this year, he was recognized for excellence in graduate teaching by the Department of Brain and Cognitive Sciences and won a Graduate Student Council teaching award in 2016. In their nomination letters, peers and students alike remarked that he displays not only great knowledge, but extraordinary skill in teaching, most notably by ensuring everyone learns the material. Jazayeri does so by considering students’ diverse backgrounds and contextualizing subject material to relatable applications in various fields of science according to students’ interests. He also improves and adjusts the course content, pace, and intensity in response to student input via surveys administered throughout the semester.

Hazel Sive, a professor in the Department of Biology, member of the Whitehead Institute for Biomedical Research, and associate member of the Broad Institute of MIT and Harvard, is awarded the prize for undergraduate education. A MacVicar Faculty Fellow, she has previously received MIT’s highest undergraduate teaching award, as well as the 2003 School of Science Teaching Prize for Graduate Education. As her nominations attest, Sive’s teaching at MIT continues to draw praise from the undergraduate students who take her classes. In recent post-course evaluations, students commended her exemplary dedication to her field and to their education.

The School of Science welcomes nominations for the teaching prize in the spring semester of each academic year. Nominations can be submitted at the school’s website.

Benefits of mindfulness for middle schoolers

Two new studies from investigators at the McGovern Institute at MIT suggest that mindfulness — the practice of focusing one’s awareness on the present moment — can enhance academic performance and mental health in middle schoolers. The researchers found that more mindfulness correlates with better academic performance, fewer suspensions from school, and less stress.

“By definition, mindfulness is the ability to focus attention on the present moment, as opposed to being distracted by external things or internal thoughts. If you’re focused on the teacher in front of you, or the homework in front of you, that should be good for learning,” says John Gabrieli, the Grover M. Hermann Professor in Health Sciences and Technology, a professor of brain and cognitive sciences, and a member of MIT’s McGovern Institute for Brain Research.

The researchers also showed, for the first time, that mindfulness training can alter brain activity in students. Sixth-graders who received mindfulness training not only reported feeling less stressed, but their brain scans revealed reduced activation of the amygdala, a brain region that processes fear and other emotions, when they viewed images of fearful faces.

Together, the findings suggest that offering mindfulness training in schools could benefit many students, says Gabrieli, who is the senior author of both studies.

“We think there is a reasonable possibility that mindfulness training would be beneficial for children as part of the daily curriculum in their classroom,” he says. “What’s also appealing about mindfulness is that there are pretty well-established ways of teaching it.”

In the moment

Both studies were performed at charter schools in Boston. In one of the papers, which appears today in the journal Behavioral Neuroscience, the MIT team studied about 100 sixth-graders. Half of the students received mindfulness training every day for eight weeks, while the other half took a coding class. The mindfulness exercises were designed to encourage students to pay attention to their breath, and to focus on the present moment rather than thoughts of the past or the future.

Students who received the mindfulness training reported that their stress levels went down after the training, while the students in the control group did not. Students in the mindfulness training group also reported fewer negative feelings, such as sadness or anger, after the training.

About 40 of the students also participated in brain imaging studies before and after the training. The researchers measured activity in the amygdala as the students looked at pictures of faces expressing different emotions.

At the beginning of the study, before any training, students who reported higher stress levels showed more amygdala activity when they saw fearful faces. This is consistent with previous research showing that the amygdala can be overactive in people who experience more stress, leading them to have stronger negative reactions to adverse events.

“There’s a lot of evidence that an overly strong amygdala response to negative things is associated with high stress in early childhood and risk for depression,” Gabrieli says.

After the mindfulness training, students showed a smaller amygdala response when they saw the fearful faces, consistent with their reports that they felt less stressed. This suggests that mindfulness training could potentially help prevent or mitigate mood disorders linked with higher stress levels, the researchers say.

Richard Davidson, a professor of psychology and psychiatry at the University of Wisconsin, says that the findings suggest there could be great benefit to implementing mindfulness training in middle schools.

“This is really one of the very first rigorous studies with children of that age to demonstrate behavioral and neural benefits of a simple mindfulness training,” says Davidson, who was not involved in the study.

Evaluating mindfulness

In the other paper, which appeared in the journal Mind, Brain, and Education in June, the researchers did not perform any mindfulness training but used a questionnaire to evaluate mindfulness in more than 2,000 students in grades 5-8. The questionnaire was based on the Mindfulness Attention Awareness Scale, which is often used in mindfulness studies on adults. Participants are asked to rate how strongly they agree with statements such as “I rush through activities without being really attentive to them.”

The researchers compared the questionnaire results with students’ grades, their scores on statewide standardized tests, their attendance rates, and the number of times they had been suspended from school. Students who showed more mindfulness tended to have better grades and test scores, as well as fewer absences and suspensions.

“People had not asked that question in any quantitative sense at all, as to whether a more mindful child is more likely to fare better in school,” Gabrieli says. “This is the first paper that says there is a relationship between the two.”

The researchers now plan to do a full school-year study, with a larger group of students across many schools, to examine the longer-term effects of mindfulness training. Shorter programs like the two-month training used in the Behavioral Neuroscience study would most likely not have a lasting impact, Gabrieli says.

“Mindfulness is like going to the gym. If you go for a month, that’s good, but if you stop going, the effects won’t last,” he says. “It’s a form of mental exercise that needs to be sustained.”

The research was funded by the Walton Family Foundation, the Poitras Center for Psychiatric Disorders Research at the McGovern Institute for Brain Research, and the National Council of Science and Technology of Mexico. Camila Caballero ’13, now a graduate student at Yale University, is the lead author of the Mind, Brain, and Education study. Caballero and MIT postdoc Clemens Bauer are lead authors of the Behavioral Neuroscience study. Additional collaborators were from the Harvard Graduate School of Education, Transforming Education, Boston Collegiate Charter School, and Calmer Choice.

A new way to deliver drugs with pinpoint targeting

Most pharmaceuticals must either be ingested or injected into the body to do their work. Either way, it takes some time for them to reach their intended targets, and they also tend to spread to other areas of the body. Now, researchers at the McGovern Institute at MIT and elsewhere have developed a system that can release medical treatments at precise times with minimal invasiveness, and that ultimately could also deliver those drugs to specifically targeted areas, such as a particular group of neurons in the brain.

The new approach is based on magnetic nanoparticles enclosed within a tiny hollow bubble of lipids (fatty molecules) filled with water, known as a liposome. The drug of choice is encapsulated within these bubbles and can be released by applying a magnetic field to heat up the particles, allowing the drug to escape from the liposome into the surrounding tissue.

The findings are reported today in the journal Nature Nanotechnology in a paper by MIT postdoc Siyuan Rao, Associate Professor Polina Anikeeva, and 14 others at MIT, Stanford University, Harvard University, and the Swiss Federal Institute of Technology in Zurich.

“We wanted a system that could deliver a drug with temporal precision, and could eventually target a particular location,” Anikeeva explains. “And if we don’t want it to be invasive, we need to find a non-invasive way to trigger the release.”

Magnetic fields, which can easily penetrate through the body — as demonstrated by detailed internal images produced by magnetic resonance imaging, or MRI — were a natural choice. The hard part was finding materials that could be triggered to heat up by using a very weak magnetic field (about one-hundredth the strength of that used for MRI), in order to prevent damage to the drug or surrounding tissues, Rao says.

Rao came up with the idea of taking magnetic nanoparticles, which had already been shown to be capable of being heated by placing them in a magnetic field, and packing them into these spheres called liposomes. These are like little bubbles of lipids, which naturally form a spherical double layer surrounding a water droplet.

An electron microscope image shows a liposome (the white blob at center), with its magnetic nanoparticles visible in black at its core.
Image courtesy of the researchers

When placed inside a high-frequency but low-strength magnetic field, the nanoparticles heat up, warming the lipids and making them undergo a transition from solid to liquid, which makes the layer more porous — just enough to let some of the drug molecules escape into the surrounding areas. When the magnetic field is switched off, the lipids re-solidify, preventing further releases. Over time, this process can be repeated, thus releasing doses of the enclosed drug at precisely controlled intervals.

The drug carriers were engineered to be stable inside the body at the normal body temperature of 37 degrees Celsius, but to release their payload of drugs at a temperature of 42 degrees. “So we have a magnetic switch for drug delivery,” and that amount of heat is small enough “so that you don’t cause thermal damage to tissues,” says Anikeeva, who also holds appointments in the departments of Materials Science and Engineering and of Brain and Cognitive Sciences.
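
To make the switch-like behavior concrete, here is a toy simulation of the release cycle described above. It is a sketch only: the 37- and 42-degree set points come from the article, while the heating, cooling, and release rates are invented for illustration.

```python
# Toy simulation of the temperature-gated "magnetic switch" described above.
# The 37 C baseline and 42 C release threshold come from the article; the
# rates below are assumptions for illustration, not measured values.
BODY_T = 37.0        # deg C, normal tissue temperature
RELEASE_T = 42.0     # deg C, lipid melting / release temperature
HEAT_RATE = 1.0      # deg C per second with the field on (assumed)
COOL_RATE = 0.5      # deg C per second with the field off (assumed)
RELEASE_RATE = 0.02  # fraction of remaining payload released per second (assumed)

def field_on(t_sec: int) -> bool:
    """Two hypothetical field pulses: seconds 20-39 and 80-99."""
    return 20 <= t_sec < 40 or 80 <= t_sec < 100

temp, payload = BODY_T, 1.0
for t in range(150):
    if field_on(t):
        # Nanoparticles heat the liposome; the field strength is assumed to be
        # tuned so the temperature plateaus near the release point.
        temp = min(RELEASE_T, temp + HEAT_RATE)
    else:
        temp = max(BODY_T, temp - COOL_RATE)
    if temp >= RELEASE_T:
        payload -= RELEASE_RATE * payload  # lipids melt: membrane becomes porous
    # below RELEASE_T the lipids re-solidify and release stops

print(f"fraction of drug released after two pulses: {1 - payload:.2f}")
```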

In principle, this technique could also be used to guide the particles to specific, pinpoint locations in the body, using gradients of magnetic fields to push them along, but that aspect of the work is an ongoing project. For now, the researchers have been injecting the particles directly into the target locations, and using the magnetic fields to control the timing of drug releases. “The technology will allow us to address the spatial aspect,” Anikeeva says, but that has not yet been demonstrated.

This could enable very precise treatments for a wide variety of conditions, she says. “Many brain disorders are characterized by erroneous activity of certain cells. When neurons are too active or not active enough, that manifests as a disorder, such as Parkinson’s, or depression, or epilepsy.” If a medical team wanted to deliver a drug to a specific patch of neurons and at a particular time, such as when an onset of symptoms is detected, without subjecting the rest of the brain to that drug, this system “could give us a very precise way to treat those conditions,” she says.

Rao says that making these nanoparticle-activated liposomes is actually quite a simple process. “We can prepare the liposomes with the particles within minutes in the lab,” she says, and the process should be “very easy to scale up” for manufacturing. And the system is broadly applicable for drug delivery: “we can encapsulate any water-soluble drug,” and with some adaptations, other drugs as well, she says.

One key to developing this system was perfecting and calibrating a way of making liposomes of a highly uniform size and composition. This involves mixing a water base with the fatty acid lipid molecules and magnetic nanoparticles and homogenizing them under precisely controlled conditions. Anikeeva compares it to shaking a bottle of salad dressing to get the oil and vinegar mixed, but controlling the timing, direction, and strength of the shaking to ensure a precise mixing.

Anikeeva says that while her team has focused on neurological disorders, as that is their specialty, the drug delivery system is actually quite general and could be applied to almost any part of the body, for example to deliver cancer drugs, or even to deliver painkillers directly to an affected area instead of administering them systemically and affecting the whole body. “This could deliver it to where it’s needed, and not deliver it continuously,” but only as needed, she says.

Because the magnetic particles themselves are similar to those already in widespread use as contrast agents for MRI scans, the regulatory approval process for their use may be simplified, as their biological compatibility has largely been proven.

The team included researchers in MIT’s departments of Materials Science and Engineering and Brain and Cognitive Sciences, as well as the McGovern Institute for Brain Research, the Simons Center for the Social Brain, and the Research Laboratory of Electronics; the Harvard University Department of Chemistry and Chemical Biology and the John A. Paulson School of Engineering and Applied Sciences; Stanford University; and the Swiss Federal Institute of Technology in Zurich. The work was supported by the Simons Postdoctoral Fellowship, the U.S. Defense Advanced Research Projects Agency, the Bose Research Grant, and the National Institutes of Health.

Brain region linked to altered social interactions in autism model

Although psychiatric disorders can be linked to particular genes, the brain regions and mechanisms underlying particular disorders are not well-understood. Mutations or deletions of the SHANK3 gene are strongly associated with autism spectrum disorder (ASD) and a related rare disorder called Phelan-McDermid syndrome. Mice with SHANK3 mutations also display some of the traits associated with autism, including avoidance of social interactions, but the brain regions responsible for this behavior have not been identified.

A new study by neuroscientists at MIT and colleagues in China provides clues to the neural circuits underlying the social deficits associated with ASD. The study, published in Nature Neuroscience, found that structural and functional impairments in the anterior cingulate cortex (ACC) of SHANK3 mutant mice are linked to altered social interactions.

“Neurobiological mechanisms of social deficits are very complex and involve many brain regions, even in a mouse model,” explains Guoping Feng, the James W. and Patricia T. Poitras Professor at MIT and one of the senior authors of the study. “These findings add another piece of the puzzle to mapping the neural circuits responsible for this social deficit in ASD models.”

The Nature Neuroscience paper is the result of a collaboration between Feng, who is also an investigator at MIT’s McGovern Institute and a senior scientist in the Broad Institute’s Stanley Center for Psychiatric Research, and Wenting Wang and Shengxi Wu at the Fourth Military Medical University, Xi’an, China.

A number of brain regions have been implicated in social interactions, including the prefrontal cortex (PFC) and its projections to brain regions such as the nucleus accumbens and habenula, but previous studies failed to definitively link the PFC to the altered social interactions seen in SHANK3 knockout mice.

In the new study, the authors instead focused on the ACC, a brain region noted for its role in social functions in humans and animal models. The ACC is also known to play a role in fundamental cognitive processes, including cost-benefit calculation, motivation, and decision making.

In mice lacking SHANK3, the researchers found structural and functional disruptions at the synapses, or connections, between excitatory neurons in the ACC. They went on to show that loss of SHANK3 in excitatory ACC neurons alone was enough to disrupt communication between these neurons and led to abnormally reduced activity of these neurons during behavioral tasks involving social interaction.

Having implicated these ACC neurons in the social preferences and interactions of SHANK3 knockout mice, the authors then tested whether activating these same neurons could rescue the behaviors. Using optogenetics and specific drugs, the researchers activated the ACC neurons and found improved social behavior in the SHANK3 mutant mice.

“Next, we are planning to explore brain regions downstream of the ACC that modulate social behavior in normal mice and models of autism,” explains Wenting Wang, co-corresponding author on the study. “This will help us to better understand the neural mechanisms of social behavior, as well as social deficits in neurodevelopmental disorders.”

Previous clinical studies reported that anatomical structures in the ACC were altered and/or dysfunctional in people with ASD, an initial indication that the findings from SHANK3 mice may also hold true in these individuals.

The research was funded, in part, by the Natural Science Foundation of China. Guoping Feng was supported by NIMH grant no. MH097104, the Poitras Center for Psychiatric Disorders Research at the McGovern Institute at MIT, and the Hock E. Tan and K. Lisa Yang Center for Autism Research at the McGovern Institute at MIT.

Four new faces in the School of Science faculty

This fall, the School of Science will welcome four new members joining the faculty in the departments of Biology, Brain and Cognitive Sciences, and Chemistry.

Evelina Fedorenko investigates how our brains process language. She has developed novel analytic approaches for functional magnetic resonance imaging (fMRI) and other brain imaging techniques to help answer the questions of how the language processing network functions and how it relates to other networks in the brain. She works with both neurotypical individuals and individuals with brain disorders. Fedorenko joins the Department of Brain and Cognitive Sciences as an assistant professor. She received her BA from Harvard University in linguistics and psychology and then completed her doctoral studies at MIT in 2007. After graduating from MIT, Fedorenko worked as a postdoc and then as a research scientist at the McGovern Institute for Brain Research. In 2014, she joined the faculty at Massachusetts General Hospital and Harvard Medical School, where she was an associate researcher and an assistant professor, respectively. She is also a member of the McGovern Institute.

Morgan Sheng focuses on the structure, function, and turnover of synapses, the junctions that allow communication between brain cells. His discoveries have improved our understanding of the molecular basis of cognitive function and diseases of the nervous system, such as autism, Alzheimer’s disease, and dementia. Being both a physician and a scientist, he incorporates genetic as well as biological insights to aid the study and treatment of mental illnesses and neurodegenerative diseases. He rejoins the Department of Brain and Cognitive Sciences (BCS), returning as a professor of neuroscience, a position he also held from 2001 to 2008. At that time, he was a member of the Picower Institute for Learning and Memory, a joint appointee in the Department of Biology, and an investigator of the Howard Hughes Medical Institute. Sheng earned his PhD from Harvard University in 1990, completed a postdoc at the University of California at San Francisco in 1994, and finished his medical training with a residency in London in 1986. From 1994 to 2001, he researched molecular and cellular neuroscience at Massachusetts General Hospital and Harvard Medical School. From 2008 to 2019 he was vice president of neuroscience at Genentech, a leading biotech company. In addition to his faculty appointment in BCS, Sheng is a core institute member and co-director of the Stanley Center for Psychiatric Research at the Broad Institute of MIT and Harvard, as well as an affiliate member of the McGovern Institute and the Picower Institute.

Seychelle Vos studies genome organization and its effect on gene expression at the intersection of biochemistry and genetics. Vos uses X-ray crystallography, cryo-electron microscopy, and biophysical approaches to understand how transcription is physically coupled to the genome’s organization and structure. She joins the Department of Biology as an assistant professor after completing a postdoc at the Max Planck Institute for Biophysical Chemistry. Vos received her BS in genetics in 2008 from the University of Georgia and her PhD in molecular and cell biology in 2013 from the University of California at Berkeley.

Xiao Wang is a chemist and molecular engineer working to improve our understanding of biology and human health. She focuses on brain function and dysfunction, producing and applying new chemical, biophysical, and genomic tools at the molecular level. Previously, she focused on RNA modifications and how they impact cellular function. Wang is joining MIT as an assistant professor in the Department of Chemistry. She was previously a postdoc of the Life Science Research Foundation at Stanford University. Wang received her BS in chemistry and molecular engineering from Peking University in 2010 and her PhD in chemistry from the University of Chicago in 2015. She is also a core member of the Broad Institute of MIT and Harvard.

How expectation influences perception

For decades, research has shown that our perception of the world is influenced by our expectations. These expectations, also called “prior beliefs,” help us make sense of what we are perceiving in the present, based on similar past experiences. Consider, for instance, how a shadow on a patient’s X-ray image, easily missed by a less experienced intern, jumps out at a seasoned physician. The physician’s prior experience helps her arrive at the most probable interpretation of a weak signal.

The process of combining prior knowledge with uncertain evidence is known as Bayesian integration and is believed to widely impact our perceptions, thoughts, and actions. Now, MIT neuroscientists have discovered distinctive brain signals that encode these prior beliefs. They have also found how the brain uses these signals to make judicious decisions in the face of uncertainty.

“How these beliefs come to influence brain activity and bias our perceptions was the question we wanted to answer,” says Mehrdad Jazayeri, the Robert A. Swanson Career Development Professor of Life Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

The researchers trained animals to perform a timing task in which they had to reproduce different time intervals. Performing this task is challenging because our sense of time is imperfect and can go too fast or too slow. However, when intervals are consistently within a fixed range, the best strategy is to bias responses toward the middle of the range. This is exactly what animals did. Moreover, recording from neurons in the frontal cortex revealed a simple mechanism for Bayesian integration: Prior experience warped the representation of time in the brain so that patterns of neural activity associated with different intervals were biased toward those that were within the expected range.

MIT postdoc Hansem Sohn, former postdoc Devika Narain, and graduate student Nicolas Meirhaeghe are the lead authors of the study, which appears in the July 15 issue of Neuron.

Ready, set, go

Statisticians have known for centuries that Bayesian integration is the optimal strategy for handling uncertain information. When we are uncertain about something, we automatically rely on our prior experiences to optimize behavior.

“If you can’t quite tell what something is, but from your prior experience you have some expectation of what it ought to be, then you will use that information to guide your judgment,” Jazayeri says. “We do this all the time.”

In this new study, Jazayeri and his team wanted to understand how the brain encodes prior beliefs, and put those beliefs to use in the control of behavior. To that end, the researchers trained animals to reproduce a time interval, using a task called “ready-set-go.” In this task, animals measure the time between two flashes of light (“ready” and “set”) and then generate a “go” signal by making a delayed response after the same amount of time has elapsed.

They trained the animals to perform this task in two contexts. In the “Short” context, intervals varied between 480 and 800 milliseconds, and in the “Long” context, intervals were between 800 and 1,200 milliseconds. At the beginning of the task, the animals were given information about the context (via a visual cue), and therefore knew to expect intervals from either the shorter or longer range.

Jazayeri had previously shown that humans performing this task tend to bias their responses toward the middle of the range. Here, they found that animals do the same. For example, if animals believed the interval would be short, and were given an interval of 800 milliseconds, the interval they produced was a little shorter than 800 milliseconds. Conversely, if they believed it would be longer, and were given the same 800-millisecond interval, they produced an interval a bit longer than 800 milliseconds.
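
This biased behavior is exactly what a Bayesian observer would produce. The sketch below is illustrative, not the paper’s model code: the prior ranges come from the task, while the measurement-noise width is an assumption. It computes the Bayes least-squares estimate, the posterior mean, for the same 800-millisecond measurement under each context’s uniform prior.

```python
import numpy as np

def bls_estimate(measured_ms: float, prior_lo: float, prior_hi: float,
                 noise_sd: float = 80.0) -> float:
    """Bayes least-squares estimate: the posterior mean over possible true
    intervals, given a uniform prior on [prior_lo, prior_hi] and Gaussian
    measurement noise of width noise_sd (assumed value)."""
    candidates = np.linspace(prior_lo, prior_hi, 1000)  # uniform prior support
    likelihood = np.exp(-0.5 * ((measured_ms - candidates) / noise_sd) ** 2)
    posterior = likelihood / likelihood.sum()
    return float(candidates @ posterior)

# The same 800 ms measurement, interpreted under the two contexts:
print(bls_estimate(800, 480, 800))    # below 800: the "Short" prior pulls it down
print(bls_estimate(800, 800, 1200))   # above 800: the "Long" prior pulls it up
```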

“Trials that were identical in almost every possible way, except the animal’s belief, led to different behaviors,” Jazayeri says. “That was compelling experimental evidence that the animal is relying on its own belief.”

Once they had established that the animals relied on their prior beliefs, the researchers set out to find how the brain encodes prior beliefs to guide behavior. They recorded activity from about 1,400 neurons in a region of the frontal cortex, which they have previously shown is involved in timing.

During the “ready-set” epoch, the activity profile of each neuron evolved in its own way, and about 60 percent of the neurons had different activity patterns depending on the context (Short versus Long). To make sense of these signals, the researchers analyzed the evolution of neural activity across the entire population over time, and found that prior beliefs bias behavioral responses by warping the neural representation of time toward the middle of the expected range.

“We have never seen such a concrete example of how the brain uses prior experience to modify the neural dynamics by which it generates sequences of neural activities, to correct for its own imprecision. This is the unique strength of this paper: bringing together perception, neural dynamics, and Bayesian computation into a coherent framework, supported by both theory and measurements of behavior and neural activities,” says Mate Lengyel, a professor of computational neuroscience at the University of Cambridge, who was not involved in the study.

Embedded knowledge

Researchers believe that prior experiences change the strength of connections between neurons. The strength of these connections, also known as synapses, determines how neurons act upon one another and constrains the patterns of activity that a network of interconnected neurons can generate. The finding that prior experiences warp the patterns of neural activity provides a window onto how experience alters synaptic connections. “The brain seems to embed prior experiences into synaptic connections so that patterns of brain activity are appropriately biased,” Jazayeri says.

As an independent test of these ideas, the researchers developed a computer model consisting of a network of neurons that could perform the same ready-set-go task. Using techniques borrowed from machine learning, they were able to modify the synaptic connections and create a model that behaved like the animals.

These models are extremely valuable because they provide a substrate for detailed analysis of the underlying mechanisms, a procedure known as “reverse engineering.” Remarkably, reverse-engineering the model revealed that it solved the task the same way the monkeys’ brains did. The model also had a warped representation of time according to prior experience.

The researchers used the computer model to further dissect the underlying mechanisms using perturbation experiments that are currently impossible to do in the brain. Using this approach, they were able to show that unwarping the neural representations removes the bias in the behavior. This important finding validated the critical role of warping in Bayesian integration of prior knowledge.

The researchers now plan to study how the brain builds up and slowly fine-tunes the synaptic connections that encode prior beliefs as an animal is learning to perform the timing task.

The research was funded by the Center for Sensorimotor Neural Engineering, the Netherlands Scientific Organization, the Marie Sklodowska Curie Reintegration Grant, the National Institutes of Health, the Sloan Foundation, the Klingenstein Foundation, the Simons Foundation, the McKnight Foundation, and the McGovern Institute.

Artificial “muscles” achieve powerful pulling force

As a cucumber plant grows, it sprouts tightly coiled tendrils that seek out supports in order to pull the plant upward. This ensures the plant receives as much sunlight exposure as possible. Now, researchers at MIT have found a way to imitate this coiling-and-pulling mechanism to produce contracting fibers that could be used as artificial muscles for robots, prosthetic limbs, or other mechanical and biomedical applications.

While many different approaches have been used for creating artificial muscles, including hydraulic systems, servo motors, shape-memory metals, and polymers that respond to stimuli, they all have limitations, including high weight or slow response times. The new fiber-based system, by contrast, is extremely lightweight and can respond very quickly, the researchers say. The findings are being reported today in the journal Science.

The new fibers were developed by MIT postdoc Mehmet Kanik and MIT graduate student Sirma Örgüç, working with professors Polina Anikeeva, Yoel Fink, Anantha Chandrakasan, and C. Cem Taşan, and five others, using a fiber-drawing technique to combine two dissimilar polymers into a single strand of fiber.

The key to the process is mating together two materials that have very different thermal expansion coefficients — meaning they expand at different rates when heated. This is the same principle used in many thermostats, which measure temperature with a bimetallic strip. As the joined material heats up, the side that wants to expand faster is held back by the other material. As a result, the bonded material curls up, bending toward the side that is expanding more slowly.
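
A back-of-envelope calculation shows how that mismatch translates into bending. In the sketch below, the expansion coefficients, temperature change, and thickness are generic illustrative values, not the measured properties of the two polymers; the curvature formula is the classic bimetallic-strip result in its simplest form, for equal layer thicknesses and stiffnesses.

```python
# Illustrative values only -- not the measured properties of the paper's polymers.
alpha_elastomer = 3e-4    # 1/deg C, expansion coefficient of the stretchy layer (assumed)
alpha_stiff     = 1e-4    # 1/deg C, expansion coefficient of the stiffer layer (assumed)
delta_T         = 10.0    # deg C of heating
thickness       = 100e-6  # m, total thickness of the bonded pair (assumed)

# Free of each other, the layers would expand by different amounts; bonded,
# the mismatch strain forces the pair to bend toward the slower-expanding side.
mismatch_strain = (alpha_elastomer - alpha_stiff) * delta_T

# Classic bimetallic-strip curvature for equal layer thickness and stiffness:
# curvature = (3/2) * mismatch_strain / total_thickness
curvature = 1.5 * mismatch_strain / thickness  # 1/m
radius_cm = 100.0 / curvature

print(f"mismatch strain: {mismatch_strain:.2%}")  # ~0.2%
print(f"bend radius: {radius_cm:.1f} cm")         # ~3.3 cm
```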

Using two different polymers bonded together (a very stretchable cyclic copolymer elastomer and a much stiffer thermoplastic polyethylene), Kanik, Örgüç, and colleagues produced a fiber that, when stretched out to several times its original length, naturally forms itself into a tight coil, very similar to the tendrils that cucumbers produce. But what happened next came as a surprise when the researchers first observed it. “There was a lot of serendipity in this,” Anikeeva recalls.

As soon as Kanik picked up the coiled fiber for the first time, the warmth of his hand alone caused the fiber to curl up more tightly. Following up on that observation, he found that even a small increase in temperature could make the coil tighten up, producing a surprisingly strong pulling force. Then, as soon as the temperature went back down, the fiber returned to its original length. In later testing, the team showed that this process of contracting and expanding could be repeated 10,000 times “and it was still going strong,” Anikeeva says.

One of the reasons for that longevity, she says, is that “everything is operating under very moderate conditions,” including low activation temperatures. Just a 1-degree Celsius increase can be enough to start the fiber contraction.

The fibers can span a wide range of sizes, from a few micrometers (millionths of a meter) to a few millimeters (thousandths of a meter) in width, and can easily be manufactured in batches up to hundreds of meters long. Tests have shown that a single fiber is capable of lifting loads of up to 650 times its own weight. For these experiments on individual fibers, Örgüç and Kanik have developed dedicated, miniaturized testing setups.

The degree of tightening that occurs when the fiber is heated can be “programmed” by determining how much of an initial stretch to give the fiber. This allows the material to be tuned to exactly the amount of force needed and the amount of temperature change needed to trigger that force.

The fibers are made using a fiber-drawing system, which makes it possible to incorporate other components into the fiber itself. Fiber drawing is done by creating an oversized version of the material, called a preform, which is then heated to a specific temperature at which the material becomes viscous. It can then be pulled, much like pulling taffy, to create a fiber that retains its internal structure but is a small fraction of the width of the preform.

For testing purposes, the researchers coated the fibers with meshes of conductive nanowires. These meshes can be used as sensors to reveal the exact tension experienced or exerted by the fiber. In the future, these fibers could also include heating elements such as optical fibers or electrodes, providing a way of heating it internally without having to rely on any outside heat source to activate the contraction of the “muscle.”

Such fibers could find uses as actuators in robotic arms, legs, or grippers, and in prosthetic limbs, where their slight weight and fast response times could provide a significant advantage.

Some prosthetic limbs today can weigh as much as 30 pounds, with much of the weight coming from actuators, which are often pneumatic or hydraulic; lighter-weight actuators could thus make life much easier for those who use prosthetics. Such fibers might also find uses in tiny biomedical devices, such as “a medical robot that works by going into an artery and then being activated,” Anikeeva suggests. “We have activation times on the order of tens of milliseconds to seconds,” depending on the dimensions, she says.

To provide greater strength for lifting heavier loads, the fibers can be bundled together, much as muscle fibers are bundled in the body. The team successfully tested bundles of 100 fibers. Through the fiber drawing process, sensors could also be incorporated in the fibers to provide feedback on conditions they encounter, such as in a prosthetic limb. Örgüç says bundled muscle fibers with a closed-loop feedback mechanism could find applications in robotic systems where automated and precise control are required.

Kanik says that the possibilities for materials of this type are virtually limitless, because almost any combination of two materials with different thermal expansion rates could work, leaving a vast realm of possible combinations to explore. He adds that this new finding was like opening a new window, only to see “a bunch of other windows” waiting to be opened.

“The strength of this work is coming from its simplicity,” he says.

The team also included MIT graduate student Georgios Varnavides, postdoc Jinwoo Kim, and undergraduate students Thomas Benavides, Dani Gonzalez, and Timothy Akintilo. The work was supported by the National Institute of Neurological Disorders and Stroke and the National Science Foundation.