New COVID-19 resource to address shortage of face masks

When the COVID-19 crisis hit the United States this March, McGovern scientist Jill Crittenden wanted to help. One of her greatest concerns was the shortage of face masks, which are a key defense for healthcare providers, frontline service workers, and the public against respiratory transmission of COVID-19. For those caring for COVID-19 patients, face masks that provide a near-100% seal are essential. These critical pieces of equipment, called N95 masks, are now scarce, and healthcare workers are faced with reusing potentially contaminated masks.

To address this, Crittenden joined a team of 60 scientists and engineers, students and clinicians, drawn from universities and the private sector to synthesize the scientific literature about mask decontamination and create a set of best practices for bad times. Today the group unveiled its website, N95decon.org, which provides a summary of this critical information.

McGovern research scientist Jill Crittenden helped the N95DECON consortium assess face mask decontamination protocols so healthcare workers can easily access them for COVID-19 protection. Photo: Caitlin Cunningham


“I first heard about the group from Larissa Little, a Harvard graduate student with John Doyle,” explains Crittenden, who is a research scientist in Ann Graybiel‘s lab at the McGovern Institute. “The three of us began communicating because we are all also members of the Boston-based MGB COVID-19 Innovation Center and we agreed that helping to assess the flood of information on N95 decontamination would be an important contribution.”

Over several weeks, the team scoured hundreds of peer-reviewed publications and held continuous online meetings to review studies of decontamination methods that had been used to inactivate other viral and bacterial pathogens, and to assess the potential of these methods to neutralize SARS-CoV-2, the novel virus that causes COVID-19.

“This group is absolutely amazing,” says Crittenden. “The Zoom meetings are very productive because it is all data- and solutions-driven. Everyone throws out ideas, what they know and what the literature source is, with the only goal being to get to a data-based consensus efficiently.”

Reliable resource

The goal of the consortium was to provide overwhelmed health officials, who don’t have the time to study the literature for themselves, with reliable, pre-digested scientific information about the pros and cons of three decontamination methods that offer the best options should local shortages force a choice between decontamination and reuse, or going unmasked.

The three methods involve (1) heat and humidity, (2) a specific wavelength of light called ultraviolet C (UVC), and (3) treatment with hydrogen peroxide vapor (HPV). The scientists did not endorse any one method but instead sought to describe the circumstances under which each could inactivate the virus, provided rigorous procedures were followed. Devices that rely on heat, for instance, could be used under specific temperature, humidity, and time parameters. With UVC devices – which emit a particular wavelength and energy level of light – considerations involve making sure masks are properly oriented to the light so the entire surface is bathed in sufficient energy. The HPV method has the potential advantage of decontaminating masks in volume, as the U.S. Food and Drug Administration, acting in this emergency, has certified certain vendors to offer hydrogen peroxide vapor treatments on a large scale. In addition to giving health officials the scientific information to assess the methods best suited to their circumstances, N95decon.org points decision makers to sources of reliable and detailed how-to information provided by other organizations, institutions, and commercial services.

“While there is no perfect method for decontamination of N95 masks, it is crucial that decision-makers and users have as much information as possible about the strengths and weaknesses of various approaches,” said Manu Prakash, an associate professor of bioengineering at Stanford who helped coordinate this ad hoc, volunteer undertaking. “Manufacturers currently do not recommend N95 mask reuse. We aim to provide information and evidence in this critical time to help those on the front lines of this crisis make risk-management decisions given the specific conditions and limitations they face.”

The researchers stressed that decontamination does not solve the N95 shortage, and expressed the hope that new masks will be made available in large numbers as soon as possible, so that healthcare workers and first responders can be issued fresh protective gear whenever needed, as specified by the non-emergency guidelines of the U.S. Centers for Disease Control and Prevention.

Forward thinking

Meanwhile, these ad hoc volunteers have pledged to continue working together to update the N95decon.org website as new information becomes available, and to coordinate their research efforts to fill gaps in current knowledge while avoiding duplication of effort.

“We are, at heart, a group of people that want to help better equip hospitals and healthcare personnel in this time of crisis,” says Brian Fleischer, a surgeon at the University of Chicago Medical Center and a member of the N95DECON consortium. “As a healthcare provider, many of my colleagues across the country have expressed concern with a lack of quality information in this ever-evolving landscape. I have learned a great deal from this team and I look forward to our continued collaboration to positively effect change.”

Crittenden is hopeful that the new website will help healthcare workers make informed decisions about the safest methods available for decontamination and reuse of N95 masks. “I know physicians personally who are very grateful that teams of scientists are doing the in-depth data analysis so that they can feel confident in what is best for their own health,” she says.

The members of the N95decon.org team come from institutions including UC Berkeley, the University of Chicago, Stanford, Georgetown University, Harvard University, Seattle University, University of Utah, the McGovern Institute for Brain Research at MIT, the University of Michigan, and from Consolidated Sterilizers and X, the Moonshot Factory.


How dopamine drives brain activity

Using a specialized magnetic resonance imaging (MRI) sensor, MIT neuroscientists have discovered how dopamine released deep within the brain influences both nearby and distant brain regions.

Dopamine plays many roles in the brain, most notably related to movement, motivation, and reinforcement of behavior. However, until now it has been difficult to study precisely how a flood of dopamine affects neural activity throughout the brain. Using their new technique, the MIT team found that dopamine appears to exert significant effects in two regions of the brain’s cortex, including the motor cortex.

“There has been a lot of work on the immediate cellular consequences of dopamine release, but here what we’re looking at are the consequences of what dopamine is doing on a more brain-wide level,” says Alan Jasanoff, an MIT professor of biological engineering, brain and cognitive sciences, and nuclear science and engineering. Jasanoff is also an associate member of MIT’s McGovern Institute for Brain Research and the senior author of the study.

The MIT team found that in addition to the motor cortex, the remote brain area most affected by dopamine is the insular cortex. This region is critical for many cognitive functions related to perception of the body’s internal states, including physical and emotional states.

MIT postdoc Nan Li is the lead author of the study, which appears today in Nature.

Tracking dopamine

Like other neurotransmitters, dopamine helps neurons to communicate with each other over short distances. Dopamine holds particular interest for neuroscientists because of its role in motivation, addiction, and several neurodegenerative disorders, including Parkinson’s disease. Most of the brain’s dopamine is produced in the midbrain by neurons that connect to the striatum, where the dopamine is released.

For many years, Jasanoff’s lab has been developing tools to study how molecular phenomena such as neurotransmitter release affect brain-wide functions. At the molecular scale, existing techniques can reveal how dopamine affects individual cells, and at the scale of the entire brain, functional magnetic resonance imaging (fMRI) can reveal how active a particular brain region is. However, it has been difficult for neuroscientists to determine how single-cell activity and brain-wide function are linked.

“There have been very few brain-wide studies of dopaminergic function or really any neurochemical function, in large part because the tools aren’t there,” Jasanoff says. “We’re trying to fill in the gaps.”

About 10 years ago, his lab developed MRI sensors that consist of magnetic proteins that can bind to dopamine. When this binding occurs, the sensors’ magnetic interactions with surrounding tissue weaken, dimming the tissue’s MRI signal. This allows researchers to continuously monitor dopamine levels in a specific part of the brain.
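For readers who want a feel for how such a sensor turns chemistry into an imaging signal, here is a minimal illustrative sketch in Python. The binding constant, maximum dimming, and dopamine concentrations are hypothetical placeholders chosen for the example; this is not the authors’ sensor model or analysis code.

```python
# Illustrative sketch only: a simple 1:1 binding model for a protein-based
# MRI sensor whose signal dims as more dopamine is bound. All parameter
# values (kd_uM, max_dimming) are hypothetical.

def bound_fraction(dopamine_uM, kd_uM=1.0):
    """Fraction of sensor molecules with dopamine bound (simple Langmuir binding)."""
    return dopamine_uM / (dopamine_uM + kd_uM)

def mri_signal(dopamine_uM, baseline=1.0, max_dimming=0.3, kd_uM=1.0):
    """Signal falls in proportion to the fraction of sensor that is bound."""
    return baseline * (1.0 - max_dimming * bound_fraction(dopamine_uM, kd_uM))

for concentration in [0.0, 0.5, 1.0, 5.0]:
    print(f"dopamine {concentration:4.1f} uM -> relative MRI signal "
          f"{mri_signal(concentration):.3f}")
```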

In their new study, Li and Jasanoff set out to analyze how dopamine released in the striatum of rats influences neural function both locally and in other brain regions. First, they injected their dopamine sensors into the striatum, which is located deep within the brain and plays an important role in controlling movement. Then they electrically stimulated a part of the brain called the lateral hypothalamus, which is a common experimental technique for rewarding behavior and inducing the brain to produce dopamine.

Next, the researchers used their dopamine sensor to measure dopamine levels throughout the striatum. They also performed traditional fMRI to measure neural activity in each part of the striatum. To their surprise, they found that high dopamine concentrations did not make neurons more active. However, higher dopamine levels did make the neurons remain active for a longer period of time.

“When dopamine was released, there was a longer duration of activity, suggesting a longer response to the reward,” Jasanoff says. “That may have something to do with how dopamine promotes learning, which is one of its key functions.”

Long-range effects

After analyzing dopamine release in the striatum, the researchers set out to determine how this dopamine might affect more distant locations in the brain. To do that, they performed traditional fMRI on the brain while also mapping dopamine release in the striatum. “By combining these techniques we could probe these phenomena in a way that hasn’t been done before,” Jasanoff says.

The regions that showed the biggest surges in activity in response to dopamine were the motor cortex and the insular cortex. If confirmed in additional studies, the findings could help researchers understand the effects of dopamine in the human brain, including its roles in addiction and learning.

“Our results could lead to biomarkers that could be seen in fMRI data, and these correlates of dopaminergic function could be useful for analyzing animal and human fMRI,” Jasanoff says.

The research was funded by the National Institutes of Health and a Stanley Fahn Research Fellowship from the Parkinson’s Disease Foundation.

Ed Boyden wins prestigious entrepreneurial science award

The Austrian Association of Entrepreneurs announced today that Edward S. Boyden, the Y. Eva Tan Professor in Neurotechnology at MIT, has been awarded the 2020 Wilhelm Exner Medal.

Named after Austrian businessman Wilhelm Exner, the medal has been awarded annually since 1921 to scientists, inventors, and designers who are “promoting the economy directly or indirectly in an outstanding manner.” Past honorees include 22 Nobel laureates.

“It’s a great honor to receive this award, which recognizes not only the basic science impact of our group’s work, but the impact of the work in the industrial and startup worlds,” says Boyden, who is a professor of biological engineering and of brain and cognitive sciences at MIT.

Boyden is a leading scientist whose work is widely used in industry, both in his own startup companies and in existing companies. Boyden is also a member of MIT’s McGovern Institute for Brain Research, Media Lab, and Koch Institute for Integrative Cancer Research.

“I am so thrilled that Ed has received this honor,” says Robert Desimone, director of the McGovern Institute. “Ed’s work has transformed neuroscience, through optogenetics, expansion microscopy, and other findings that are pushing biotechnology forward too.”

He is interested in understanding the brain as a computational system, and builds and applies tools for analyzing the structure and dynamics of neural circuits in behavioral and disease contexts. He played a critical role in the development of optogenetics, a revolutionary technique in which the activity of neurons can be controlled with light. Boyden also led the team that invented expansion microscopy, which gives an unprecedented view of the nanoscale structures of cells, even in the absence of specialized super-resolution microscopy equipment. Exner Medal laureates include notable luminaries of science, including Robert Langer of MIT. In addition, Boyden has founded a number of companies based on his inventions in the busy biotech hub of Kendall Square, Cambridge, including a startup that is seeking to apply expansion microscopy to medical problems.

Boyden will deliver his prize lecture at the Exner symposium in November 2020, where economists and scientists come together to hear about the winner’s research.

How the brain encodes landmarks that help us navigate

When we move through the streets of our neighborhood, we often use familiar landmarks to help us navigate. And as we think to ourselves, “OK, now make a left at the coffee shop,” a part of the brain called the retrosplenial cortex (RSC) lights up.

While many studies have linked this brain region with landmark-based navigation, exactly how it helps us find our way is not well-understood. A new study from MIT neuroscientists now reveals how neurons in the RSC use both visual and spatial information to encode specific landmarks.

“There’s a synthesis of some of these signals — visual inputs and body motion — to represent concepts like landmarks,” says Mark Harnett, an assistant professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research. “What we went after in this study is the neuron-level and population-level representation of these different aspects of spatial navigation.”

In a study of mice, the researchers found that this brain region creates a “landmark code” by combining visual information about the surrounding environment with spatial feedback of the mice’s own position along a track. Integrating these two sources of information allowed the mice to learn where to find a reward, based on landmarks that they saw.

“We believe that this code that we found, which is really locked to the landmarks, and also gives the animals a way to discriminate between landmarks, contributes to the animals’ ability to use those landmarks to find rewards,” says Lukas Fischer, an MIT postdoc and the lead author of the study.

Harnett is the senior author of the study, which appears today in the journal eLife. Other authors are graduate student Raul Mojica Soto-Albors and recent MIT graduate Friederike Buck.

Encoding landmarks

Previous studies have found that people with damage to the RSC have trouble finding their way from one place to another, even though they can still recognize their surroundings. The RSC is also one of the first areas affected in Alzheimer’s patients, who often have trouble navigating.

The RSC is wedged between the primary visual cortex and the motor cortex, and it receives input from both of those areas. It also appears to be involved in combining two types of representations of space — allocentric, meaning the relationship of objects to each other, and egocentric, meaning the relationship of objects to the viewer.

“The evidence suggests that RSC is really a place where you have a fusion of these different frames of reference,” Harnett says. “Things look different when I move around in the room, but that’s because my vantage point has changed. They’re not changing with respect to one another.”

In this study, the MIT team set out to analyze the behavior of individual RSC neurons in mice, including how they integrate multiple inputs that help with navigation. To do that, they created a virtual reality environment for the mice by allowing them to run on a treadmill while watching a video screen that makes it appear they are running along a track. The speed of the video is determined by how fast the mice run.

At specific points along the track, landmarks appear, signaling that there’s a reward available a certain distance beyond the landmark. The mice had to learn to distinguish between two different landmarks, and to learn how far beyond each one they had to run to get the reward.

Once the mice learned the task, the researchers recorded neural activity in the RSC as the animals ran along the virtual track. They were able to record from a few hundred neurons at a time, and found that most of them anchored their activity to a specific aspect of the task.

There were three primary anchoring points: the beginning of the trial, the landmark, and the reward point. The majority of the neurons were anchored to the landmarks, meaning that their activity would consistently peak at a specific point relative to the landmark, say 50 centimeters before it or 20 centimeters after it.
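To make the idea of a landmark-anchored neuron concrete, here is a minimal illustrative sketch in Python. The track length, landmark position, and tuning-curve shape are invented for the example and are not data from the study.

```python
# Illustrative sketch only: find where one simulated neuron's activity
# peaks relative to a landmark. Positions and tuning width are hypothetical.
import numpy as np

track_cm = np.arange(0, 400)        # position along a virtual track, in cm
landmark_cm = 200                   # hypothetical landmark location
# A bell-shaped tuning curve peaking 20 cm before the landmark.
activity = np.exp(-((track_cm - 180) ** 2) / (2 * 15 ** 2))

peak_offset_cm = int(track_cm[np.argmax(activity)]) - landmark_cm
direction = "before" if peak_offset_cm < 0 else "after"
print(f"This neuron's activity peaks {abs(peak_offset_cm)} cm {direction} the landmark")
```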

Most of those neurons responded to both of the landmarks, but a small subset responded to only one or the other. The researchers hypothesize that those strongly selective neurons help the mice to distinguish between the landmarks and run the correct distance to get the reward.

When the researchers used optogenetics, a technique that uses light to silence or activate neurons, to block activity in the RSC, the mice’s performance on the task became much worse.

Combining inputs

The researchers also did an experiment in which the mice could choose to run or not while the video played at a constant speed, unrelated to the mice’s movement. The mice could still see the landmarks, but the location of the landmarks was no longer linked to a reward or to the animals’ own behavior. In that situation, RSC neurons did respond to the landmarks, but not as strongly as they did when the mice were using them for navigation.

Further experiments allowed the researchers to tease out just how much neuron activation is produced by visual input (seeing the landmarks) and by feedback on the mouse’s own movement. However, simply adding those two numbers yielded totals much lower than the neuron activity seen when the mice were actively navigating the track.

“We believe that is evidence for a mechanism of nonlinear integration of these inputs, where they get combined in a way that creates a larger response than what you would get if you just added up those two inputs in a linear fashion,” Fischer says.
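As a back-of-the-envelope illustration of what supralinear integration means here, consider the toy comparison below. The numbers are invented for the example and are not measurements from the study.

```python
# Illustrative sketch only: if inputs combined linearly, the visual-only and
# motion-only responses would predict the combined response. A larger measured
# response during active navigation indicates supralinear integration.
visual_only = 0.4          # hypothetical mean response, arbitrary units
motion_only = 0.3          # hypothetical mean response
measured_combined = 1.1    # hypothetical response during active navigation

linear_prediction = visual_only + motion_only
print(f"linear prediction:   {linear_prediction:.2f}")
print(f"measured combined:   {measured_combined:.2f}")
print(f"ratio to linear sum: {measured_combined / linear_prediction:.2f}x")
```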

The researchers now plan to analyze data that they have already collected on how neuron activity evolves over time as the mice learn the task. They also hope to perform further experiments in which they could try to separately measure visual and spatial inputs into different locations within RSC neurons.

The research was funded by the National Institutes of Health, the McGovern Institute, the NEC Corporation Fund for Research in Computers and Communications at MIT, and the Klingenstein-Simons Fellowship in Neuroscience.

2020 MacVicar Faculty Fellows named

The Office of the Vice Chancellor and the Registrar’s Office have announced this year’s Margaret MacVicar Faculty Fellows: materials science and engineering Professor Polina Anikeeva, literature Professor Mary Fuller, chemical engineering Professor William Tisdale, and electrical engineering and computer science Professor Jacob White.

Role models both in and out of the classroom, the new fellows have tirelessly sought to improve themselves, their students, and the Institute writ large. They have reimagined curricula, crossed disciplines, and pushed the boundaries of what education can be. They join a matchless academy of scholars committed to exceptional instruction and innovation.

Vice Chancellor Ian Waitz will honor the fellows at this year’s MacVicar Day symposium, “Learning through Experience: Education for a Fulfilling and Engaged Life.” In a series of lightning talks, student and faculty speakers will examine how MIT — through its many opportunities for experiential learning — supports students’ aspirations and encourages them to become engaged citizens and thoughtful leaders.

The event will be held on March 13 from 2:30-4 p.m. in Room 6-120. A reception will follow in Room 2-290. All in the MIT community are welcome to attend.

For nearly three decades, the MacVicar Faculty Fellows Program has been recognizing exemplary undergraduate teaching and advising around the Institute. The program was named after Margaret MacVicar, the first dean for undergraduate education and founder of the Undergraduate Research Opportunities Program (UROP). Nominations are made by departments and include letters of support from colleagues, students, and alumni. Fellows are appointed to 10-year terms in which they receive $10,000 per year of discretionary funds.

Polina Anikeeva

“I’m speechless,” Polina Anikeeva, associate professor of materials science and engineering and brain and cognitive sciences, says of becoming a MacVicar Fellow. “In my opinion, this is the greatest honor one could have at MIT.”

Anikeeva received her PhD from MIT in 2009 and became a professor in the Department of Materials Science and Engineering two years later. She attended St. Petersburg State Polytechnic University for her undergraduate education. Through her research — which combines materials science, electronics, and neurobiology — she works to better understand and treat brain disorders.

Anikeeva’s colleague Christopher Schuh says, “Her ability and willingness to work with students however and whenever they need help, her engaging classroom persona, and her creative solutions to real-time challenges all culminate in one of MIT’s most talented and beloved undergraduate professors.”

As an instructor, advisor, and marathon runner, Anikeeva has learned the importance of finding balance. Her colleague Lionel Kimerling reflects on this delicate equilibrium: “As a teacher, Professor Anikeeva is among the elite who instruct, inspire, and nurture at the same time. It is a difficult task to demand rigor with a gentle mentoring hand.”

Students call her classes “incredibly hard” but fun and exciting at the same time. She is “the consummate scientist, splitting her time evenly between honing her craft, sharing knowledge with students and colleagues, and mentoring aspiring researchers,” wrote one.

Her passion for her work and her devotion to her students are evident in the nomination letters. One student recounted their first conversation: “We spoke for 15 minutes, and after talking to her about her research and materials science, I had never been so viscerally excited about anything.” This same student described the guidance and support Anikeeva provided her throughout her time at MIT.

After working with Anikeeva to apply what she learned in the classroom to a real-world problem, this student recalled, “I honestly felt like an engineer and a scientist for the first time ever. I have never felt so fulfilled and capable. And I realize that’s what I want for the rest of my life — to feel the highs and lows of discovery.”

Anikeeva champions her students in faculty and committee meetings as well. She is a “reliable advocate for student issues,” says Caroline Ross, associate department head and professor in DMSE. “Professor Anikeeva is always engaged with students, committed to student well-being, and passionate about education.”

“Undergraduate teaching has always been a crucial part of my MIT career and life,” Anikeeva reflects. “I derive my enthusiasm and energy from the incredibly talented MIT students — every year they surprise me with their ability to rise to ever-expanding intellectual challenges. Watching them grow as scientists, engineers, and — most importantly — people is like nothing else.”

Mary Fuller

Experimentation is synonymous with education at MIT and it is a crucial part of literature Professor Mary Fuller’s classes. As her colleague Arthur Bahr notes, “Mary’s habit of starting with a discrete practical challenge can yield insights into much broader questions.”

Fuller attended Dartmouth College as an undergraduate, then received both her MA and PhD in English and American literature from The Johns Hopkins University. She began teaching at MIT in 1989. From 2013 to 2019, Fuller was head of the Literature Section. Her successor in the role, Shankar Raman, says that her nominators “found [themselves] repeatedly surprised by the different ways Mary has pushed the limits of her teaching here, going beyond her own comfort zones to experiment with new texts and techniques.”

“Probably the most significant thing I’ve learned in 30 years of teaching here is how to ask more and better questions,” says Fuller. As part of a series of discussions on ethics and computing, she has explored the possibilities of artificial intelligence from a literary perspective. She is also developing a tool for the edX platform called PoetryViz, which would allow MIT students and students around the world to practice close reading through poetry annotation in an entirely new way.

“We all innovate in our teaching. Every year. But, some of us innovate more than others,” Krishna Rajagopal, dean for digital learning, observes. “In addition to being an outstanding innovator, Mary is one of those colleagues who weaves the fabric of undergraduate education across the Institute.”

Lessons learned in Fuller’s class also underline the importance of a well-rounded education. As one alumna reflected, “Mary’s teaching carried a compassion and ethic which enabled non-humanities students to appreciate literature as a diverse, valuable, and rewarding resource for personal and social reflection.”

Professor Fuller, another student remarked, has created “an environment where learning is not merely the digestion of rote knowledge, but instead the broad-based exploration of ideas and the works connected to them.”

“Her imagination is capacious, her knowledge is deep, and students trust her — so that they follow her eagerly into new and exploratory territory,” says Professor of Literature Stephen Tapscott.

Fuller praises her students’ willingness to take that journey with her, saying, “None of my classes are required, and none are technical, so I feel that students have already shown a kind of intellectual generosity by putting themselves in the room to do the work.”

For students, the hard work is worth it. Mary Fuller, one nominator declared, is exactly “the type of deeply impactful professor that I attended MIT hoping to learn from.”

William Tisdale

William Tisdale is the ARCO Career Development Professor of chemical engineering and, according to his colleagues, a “true star” in the department.

A member of the faculty since 2012, he received his undergraduate degree from the University of Delaware and his PhD from the University of Minnesota. After a year as a postdoc at MIT, Tisdale became an assistant professor. His research interests include nanotechnology and energy transport.

Tisdale’s colleague Kristala Prather calls him a “curriculum fixer.” During an internal review of Course 10 subjects, the department discovered that 10.213 (Chemical and Biological Engineering) was the least popular subject in the major and needed to be revised. After carefully evaluating the coursework, and despite having never taught 10.213 himself, Tisdale envisioned a novel way of teaching it. With his suggestions, the class went from being “despised” to loved, with subject evaluations improving by 70 percent from one spring to the next. “I knew Will could make a difference, but I had no idea he could make that big of a difference in just one year,” remarks Prather.

One student nominator even went so far as to call 10.213, as taught by Tisdale, “one of my best experiences at MIT.”

Always patient, kind, and adaptable, Tisdale brings a willingness to tackle difficult problems to his teaching. “While the class would occasionally start to mutiny when faced with a particularly confusing section, Prof. Tisdale would take our groans on with excitement,” wrote one student. “His attitude made us feel like we could all get through the class together.” Regardless of how they performed on a test, wrote another, Tisdale “clearly sent the message that we all always have so much more to learn, but that first and foremost he respected you as a person.”

“I don’t think I could teach the way I teach at many other universities,” Tisdale says. “MIT students show up on the first day of class with an innate desire to understand the world around them; all I have to do is pull back the curtain!”

“Professor Tisdale remains the best teacher, mentor, and role model that I have encountered,” one student remarked. “He has truly changed the course of my life.”

“I am extremely thankful to be at a university that values undergraduate education so highly,” Tisdale says. “Those of us who devote ourselves to undergraduate teaching and mentoring do so out of a strong sense of responsibility to the students as well as a genuine love of learning. There are few things more validating than being rewarded for doing something that already brings you joy.”

Jacob White

Jacob White is the Cecil H. Green Professor of Electrical Engineering and Computer Science (EECS) and chair of the Committee on Curricula. After completing his undergraduate degree at MIT, he received a master’s degree and doctorate from the University of California at Berkeley. He has been a member of the Course 6 faculty since 1987.

Colleagues and students alike observed White’s dedication not just to teaching, but to improving teaching throughout the Institute. As Luca Daniel and Asu Ozdaglar of the EECS department noted in their nomination letter, “Jacob completely understands that the most efficient way to make his passion and ideas for undergraduate education have a real lasting impact is to ‘teach it to the teachers!’”

One student wrote that White “has spent significant time and effort educating the lab assistants” of 6.302 (Feedback System Design). As one of these teaching assistants confirmed, White’s “enthusiastic spirit” inspired them to spend hours discussing how to best teach the subject. “Many people might think this is not how they want to spend their Thursday nights,” the student wrote. “I can speak for myself and the other TAs when I say that it was an incredibly fun and educational experience.”

His work to improve instruction has even expanded to other departments. A colleague describes White’s efforts to revamp 8.02 (Physics II) as “Herculean.” Working with a group of students and postdocs to develop experiments for this subject, “he seemed to be everywhere at once … while simultaneously teaching his own class.” Iterations took place over a year and a half, after which White trained the subject’s TAs as well. Hundreds of students are benefitting from these improved experiments.

White is, according to Daniel and Ozdaglar, “a colleague who sincerely, genuinely, and enormously cares about our undergraduate students and their education, not just in our EECS department, but also in our entire MIT home.”

When he’s not fine-tuning pedagogy or conducting teacher training, he is personally supporting his students. A visiting student described White’s attention: “He would regularly meet with us in groups of two to make sure we were learning. In a class of about 80 students in a huge lecture hall, it really felt like he cared for each of us.”

And his zeal has rubbed off: “He made me feel like being excited about the material was the most important thing,” one student wrote. The significance of such a spark is not lost on White.

“As an MIT freshman in the late 1970s, I joined an undergraduate research program being pioneered by Professor Margaret MacVicar,” he says. “It was Professor MacVicar and UROP that put me on the academic’s path of looking for interesting problems with instructive solutions. It is a path I have walked for decades, with extraordinary colleagues and incredible students. So, being selected as a MacVicar Fellow? No honor could mean more to me.”

Empowering faculty partnerships across the globe

MIT faculty share their creative and technical talent on campus as well as across the globe, compounding the Institute’s impact through strong international partnerships. Thanks to the MIT Global Seed Funds (GSF) program, managed by the MIT International Science and Technology Initiatives (MISTI), more of these faculty members will be able to build on these relationships to develop ideas and create new projects.

“This MISTI fund was extremely helpful in consolidating our collaboration and has been the start of a long-term interaction between the two teams,” says 2017 GSF awardee Mehrdad Jazayeri, associate professor of brain and cognitive sciences and investigator at the McGovern Institute for Brain Research. “We have already submitted multiple abstracts to conferences together, mapped out several ongoing projects, and secured international funding thanks to the preliminary progress this seed fund enabled.”

This year, the 28 funds that make up MISTI GSF received 232 MIT applications. Over $2.3 million was awarded to 107 projects from 23 departments across the entire Institute. This brings the amount awarded to $22 million over the 12-year life of the program. Besides supporting faculty, these funds also provide meaningful educational opportunities for students. The majority of GSF teams include students from MIT and international collaborators, bolstering both their research portfolios and global experience.

“This project has had important impact on my grad student’s education and development. She was able to apply techniques she has learned to a new and challenging system, mentor an international student, participate in a major international meeting, and visit CEA,” says Professor of Chemistry Elizabeth Nolan, a 2017 GSF awardee.

On top of these academic and research goals, students are actively broadening their cultural experience and scope. “The environment at CEA differs enormously from MIT because it is a national lab and because lab structure and graduate education in France is markedly different than at MIT,” Nolan continues. “At CEA, she had the opportunity to present research to distinguished international colleagues.”

These impactful partnerships unite faculty teams behind common goals to tackle worldwide challenges, helping to develop solutions that would not be possible without international collaboration. 2017 GSF winner Emilio Bizzi, professor emeritus of brain and cognitive sciences and emeritus investigator at the McGovern Institute, articulated the advantage of combining these individual skills within a high-level team. “The collaboration among researchers was valuable in sharing knowledge, experience, skills and techniques … as well as offering the probability of future development of systems to aid in rehabilitation of patients suffering TBI.”

The research opportunities that grow from these seed funds often lead to published papers and additional funding leveraged from early results. The next call for proposals will be in mid-May.

MISTI creates applied international learning opportunities for MIT students that increase their ability to understand and address real-world problems. MISTI collaborates with partners at MIT and beyond, serving as a vital nexus of international activity and bolstering the Institute’s research mission by promoting collaborations between MIT faculty members and their counterparts abroad.

Uncovering the functional architecture of a historic brain area

In 1840, a patient named Leborgne was admitted to a hospital near Paris: he was able to repeat only the word “Tan.” This loss of speech drew the attention of Paul Broca who, after Leborgne’s death, identified lesions in his frontal lobe in the left hemisphere. These results echoed earlier findings from French neurologist Marc Dax. Now known as “Broca’s area,” the roles of this brain region have been extended to mental functions far beyond speech articulation. So much so, that the underlying functional organization of Broca’s area has become a source of discussion and some confusion.

McGovern Investigator Ev Fedorenko is now calling, in a paper in Trends in Cognitive Sciences, for recognition that Broca’s area consists of functionally distinct, specialized regions, with one sub-region very much dedicated to language processing.

“Broca’s area is one of the first regions you learn about in introductory psychology and neuroscience classes, and arguably laid the foundation for human cognitive neuroscience,” explains Ev Fedorenko, who is also an assistant professor in MIT’s Department of Brain and Cognitive Sciences. “This patch of cortex and its connections with other brain areas and networks provides a microcosm for probing some core questions about the human brain.”

Broca’s area, shown in red. Image: Wikimedia

Language is a uniquely human capability, and thus the discovery of Broca’s area immediately captured the attention of researchers.

“Because language is universal across cultures, but unique to the human species, studying Broca’s area and constraining theories of language accordingly promises to provide a window into one of the central abilities that make humans so special,” explains co-author Idan Blank, a former postdoc at the McGovern Institute who is now an assistant professor of psychology at UCLA.

Function over form

Broca’s area is found in the posterior portion of the left inferior frontal gyrus (LIFG). Arguments and theories abound as to its function. Some consider the region as dedicated to language or syntactic processing, others argue that it processes multiple types of inputs, and still others argue it is working at a high level, implementing working memory and cognitive control. Is Broca’s area a highly specialized circuit, dedicated to the human-specific capacity for language and largely independent from the rest of high-level cognition, or is it a CPU-like region, overseeing diverse aspects of the mind and orchestrating their operations?

“Patient investigations and neuroimaging studies have now associated Broca’s region with many processes,” explains Blank. “On the one hand, its language-related functions have expanded far beyond articulation, on the other, non-linguistic functions within Broca’s area—fluid intelligence and problem solving, working memory, goal-directed behavior, inhibition, etc.—are fundamental to ‘all of cognition.’”

While brain anatomy is a common path to defining subregions in Broca’s area, Fedorenko and Blank argue that this approach can muddy the waters. The anatomy of the brain, in terms of the cortical folds and visible landmarks that originally stood out to anatomists, varies from individual to individual in how it aligns with the underlying functions of brain regions. While these variations might seem small, they can have a large impact on conclusions about functional regions based on traditional analysis methods. This means that the same bit of anatomy (say, the posterior portion of a gyrus) could be doing different things in different brains.

“In both investigations of patients with brain damage and much of brain imaging work, a lot of confusion has stemmed from the use of macroanatomical areas (like the inferior frontal gyrus (IFG)) as ‘units of analysis’,” explains Fedorenko. “When some researchers found IFG activation for a syntactic manipulation, and others for a working memory manipulation, the field jumped to the conclusion that syntactic processing relies on working memory. But these effects might actually be arising in totally distinct parts of the IFG.”

The only way to circumvent this problem is to turn to functional data and aggregate information from functionally defined areas across individuals. Using this approach, across four lines of evidence from the last decade, Fedorenko and Blank came to a clear conclusion: Broca’s area is not a monolithic region with a single function, but contains distinct areas, one dedicated to language processing, and another that supports domain-general functions like working memory.
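The methodological point, aggregating within functionally defined regions rather than within a shared anatomical mask, can be sketched as follows. The array shapes, the localizer contrast, and the top-10%-of-voxels threshold are assumptions made for illustration, not the procedure of any particular study.

```python
# Illustrative sketch only: define a region of interest separately in each
# subject from that subject's own functional localizer, then measure the
# effect of interest within those subject-specific voxels.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_voxels = 10, 500                         # hypothetical voxels within the IFG

localizer = rng.normal(size=(n_subjects, n_voxels))    # e.g., language > control contrast
effect = rng.normal(size=(n_subjects, n_voxels))       # e.g., the manipulation of interest

subject_means = []
for loc_map, eff_map in zip(localizer, effect):
    roi = loc_map > np.percentile(loc_map, 90)         # top 10% of voxels in THIS subject
    subject_means.append(eff_map[roi].mean())

print(f"effect averaged over subject-specific functional ROIs: {np.mean(subject_means):.3f}")
```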

“We just have to stop referring to macroanatomical brain regions (like gyri and sulci, or their parts) when talking about the functional architecture of the brain,” explains Fedorenko. “I am delighted to see that more and more labs across the world are recognizing the inter-individual variability that characterizes the human brain. This shift is putting us on the right path to making fundamental discoveries about how our brain works.”

Indeed, accounting for distinct functional regions, within Broca’s area and elsewhere, seems essential going forward if we are to truly understand the complexity of the human brain.

Study explores brain basis of special interests

Did you know that 88% of children on the autism spectrum have an affinity — or special interest that they are particularly passionate about?

We are curious about this.

The Gabrieli lab is exploring the brain basis of these special interests in kids with and without autism. The PAL (Project on Affinities and Language) study uses noninvasive and child-friendly fMRI methods to study whether affinities can activate language regions of the brain. The lab is currently looking for 7–12-year-old children with and without autism who have a special interest or passion.

Interested in participating?

Sign up here

Embracing neurodiversity to better understand autism

Researchers often approach autism spectrum disorder (ASD) through the lens of what might “break down.” While this approach has value, autism is an extremely heterogeneous condition, and diagnosed individuals have a broad range of abilities.

The Gabrieli lab is embracing this diversity and leveraging the strengths of diagnosed individuals by researching their specific “affinities.”

Affinities involve a strong passion for specific topics, ranging from insects to video game characters, and can include impressive feats of knowledge and focus.

The biological basis of these affinities and associated abilities remains unclear, which is intriguing to John Gabrieli and his lab.

“A striking aspect of autism is the great variation from individual to individual,” explains McGovern Investigator John Gabrieli. “Understanding what motivates an individual child may inform how to best help that child reach his or her communicative potential.”

Doug Tan is an artist on the autism spectrum who has a particular interest in Herbie, the fictional Volkswagen Beetle. Nearly all of Tan’s works include a visual reference to his “affinity” (shown here in black). Image: Doug Tan

Affinities have traditionally been seen as a distraction “interfering” with conventional teaching and learning. This mindset was upended by the 2014 book Life, Animated by Ron Suskind, whose autistic son Owen seemingly lost his ability to speak around age three. Despite this setback, Owen maintained a deep affinity for Disney movies and characters. Rather than extinguishing this passion, the Suskinds embraced it as a path to connection.

Reframing such affinities as a strength not a frustration, and a path to communication rather than a roadblock, caught the attention of Kristy Johnson, a PhD student at the MIT Media Lab, who also has a non-verbal child with autism.

“My interest is in empowering and understanding populations that have traditionally been hard to study, including those with non-verbal and minimally verbal autism,” explains Johnson. “One way to do that is through affinities.”

But even identifying affinities is difficult. An interest in “trains” might mean 18th-century smokestacks to one child, and the purple line of the MBTA commuter rail to another. Serendipitously, she mentioned her interest to Gabrieli one day. He slammed his hands on the table, jumped up, and ran to find lab members Anila D’Mello and Halie Olson, who were gearing up to pursue the neural basis of affinities in autism. A collaboration was born.

Scientific challenge

What followed was six months of intense discussion. How can an affinity be accurately defined? How can individually tailored experiments be adequately controlled? What makes a robust comparison group? How can task-related performance differences between individuals with autism be accounted for?

The handful of studies that had used fMRI neuroimaging to examine affinities in autism had focused on the brain’s reward circuitry. D’Mello and Olson wanted to examine the language network of the brain — a well-defined network of brain regions whose activation can be measured by fMRI. Affinities trigger communication in some individuals with autism (Suskind’s family were using Disney characters to engage and communicate, not simply as a reward). Was the language network being engaged by affinities? Could these results point to a way of tailoring learning for all types of development?

“The language network involves lots of regions across the brain, including temporal, parietal, frontal, and subcortical areas, which play specific roles in different aspects of language processing,” explains Olson. “We were interested in a task that used affinities to tap the language network.”

fMRI reveals regions of the brain that show increased activity for stories related to affinities versus neutral stories; these include regions important for language processing. Image: Anila D’Mello

By studying this network, the team is testing whether affinities can elicit “typical” activation in regions of the brain that are sometimes assumed to not be engaged in autism. The approach may help develop better paradigms for studying other tasks with individuals with autism. Regardless of whether there are differences between the group diagnosed with autism and typically developing children, insight will likely be gained into how personalized special interests influence engagement of the language network.

The resulting study is task-free, removing the variable of differing motor or cognitive skill sets. Kids watch videos of their individual affinity in the fMRI scanner, and then listen to stories based on that affinity. They also watch and listen to “neutral” videos and stories about nature that are consistent across all children. Identifying affinities robustly so that the right stimulus can be presented is critical. Rather than an interest in bugs, affinities are often very specific (bugs that eat other bugs). But identifying and cross-checking affinities is something the group is becoming adept at. The results are emerging, but the effects the team is seeing are significant, and preliminary data suggest that affinities engage networks beyond reward circuits.

“We have a small sample right now, but across the sample, there seems to be a difference in activation in the brain’s language network when listening to affinity stories compared to neutral stories,” explains D’Mello. “The biggest surprise is that the differences are evident in single subjects.”
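A minimal sketch of what such a per-child comparison might look like appears below; the response estimates are invented for illustration and are not data from the study.

```python
# Illustrative sketch only: compare each child's language-network response to
# affinity stories versus neutral stories. All values below are hypothetical.
import numpy as np
from scipy import stats

affinity_betas = np.array([1.8, 2.1, 1.5, 2.4, 1.9])  # per-child response estimates
neutral_betas = np.array([1.1, 1.4, 1.2, 1.6, 1.3])

differences = affinity_betas - neutral_betas
t_stat, p_value = stats.ttest_rel(affinity_betas, neutral_betas)
print(f"mean affinity - neutral difference: {differences.mean():.2f}")
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```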

Future forward

The work is already raising exciting new questions. Are there other brain regions engaged by affinities? How would such information inform education and intervention paradigms? In addition, the team is showing it’s possible to derive information from individualized, naturalistic experimental paradigms, a message for brain imaging and behavioral studies in general. The researchers also hope the results inspire parents, teachers, and psychologists to perceive and engage with an individual’s affinities in new ways.

“This could really help teach us to communicate with and motivate very young and non-verbal kids on the spectrum in a way that is interesting and meaningful to them,” D’Mello explains.

By studying the strengths of individuals with autism, these researchers are showing that, through embracing neurodiversity, we can enhance science, our understanding of the brain, and perhaps even our understanding of ourselves.

Learn about autism studies at MIT