Bold new microscopies for the brain

McGovern researchers create unexpected new approaches to microscopy that are changing the way scientists look at the brain.

Ask McGovern Investigator Ed Boyden about his ten-year plan and you’ll get an immediate and straight-faced answer: “We would like to understand the brain.”

He means it. Boyden intends to map all of the cells in a brain, all of their connections, and even all of the molecules that form those connections and determine their strengths. He also plans to study how information flows through the brain and to use this to generate a working model. “I’d love to be able to load a map of an entire brain into a computer and see if we can simulate the brain,” he says.

Boyden likens the process to reverse-engineering a computer by opening it up and looking inside. The analogy, though not perfect, provides a sense of the enormity of the task ahead. As complicated as computers are, brains are far more complex, and they are also much harder to visualize, given the need to see features at multiple scales. For example, signals travel from cell to cell through synaptic connections that are measured in nanometers, but the signals are then propagated along nerve fibers that may span several centimeters—a difference of more than a million-fold. Modern microscopes make it possible to study features at one scale or the other, but not both together. Similarly, there are methods for visualizing electrical activity in single neurons or in whole brains, but there is no way to see both at once. So Boyden is building his own tools, and in the process is pushing the limits of imagination. “Our group is often trying to do the opposite of what other people do,” Boyden says.
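To put that scale mismatch in rough numbers (taking a synaptic contact of roughly 10 nanometers and a fiber spanning a centimeter as representative sizes; the exact values vary):

```latex
\frac{1~\text{cm}}{10~\text{nm}} \;=\; \frac{10^{-2}~\text{m}}{10^{-8}~\text{m}} \;=\; 10^{6}
```

a factor of a million, and larger still for longer fibers.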

Boyden’s new methods are part of a broader push to understand the brain’s connectivity, an objective that gained impetus two years ago with the President’s BRAIN Initiative, and with allied efforts such as the NIH-funded Human Connectome Project. Hundreds of researchers have already downloaded Boyden’s recently published protocols, including colleagues at the McGovern Institute who are using them to advance their studies of brain function and disease.

Just add water

Under the microscope, the brain section prepared by Jill Crittenden looks like a tight bundle of threads. The nerve fibers are from a mouse brain, from a region known to degenerate in humans with Parkinson’s disease. The loss of the tiny synaptic connections between these fibers may be among the earliest signs of degeneration, so Crittenden, a research scientist who has been studying this disease for several years in the lab of McGovern Investigator Ann Graybiel, wants to be able to see them.

But she can’t. They are far too small—smaller even than the wavelength of visible light, and therefore beyond the resolution limit of optical microscopy. To bring these structures into view, one of Boyden’s technologies, called expansion microscopy (ExM), simply makes the specimen bigger, allowing it to be viewed on a conventional laboratory microscope.

The idea is at once obvious and fantastical. “Expansion microscopy is the kind of thing scientists daydream about,” says Paul Tillberg, a graduate student in Boyden’s lab. “You either shrink the scientist or expand the specimen.”

Leaving Crittenden’s sample in place, Tillberg adds water. Minutes later, the tissue has expanded and become transparent, a ghostly and larger version of its former self.

Crittenden takes another look through the scope. “It’s like someone has loosened up all the fibers. I can see each one independently, and see them interconnecting,” she says. “ExM will add a lot of power to the tools we’ve developed for visualizing the connections we think are degenerating.”

It took Tillberg and his fellow graduate student Fei Chen several months of brainstorming to find a plausible way to make ExM a reality. They had found inspiration in the work of MIT physicist Toyoichi Tanaka, who in the 1970s had studied smart gels, polymers that rapidly expand in response to a change in environment. One familiar example is the absorbent material in baby diapers, and Boyden’s team turned to this substance for the expansion technique.

The process they devised involves several steps. The tissue is first labeled using fluorescent antibodies that bind to molecules of interest, and then it is impregnated with the gel-forming material. Once the gel has set, the fluorescent markers are anchored to the gel, and the original tissue sample is digested, allowing the gel to stretch evenly in all directions.

When water is added, the gel expands and the fluorescent markers spread out like a picture on a balloon. Remarkably, the 3D shapes of even the finest structures are faithfully preserved during the expansion, making it possible to see them using a conventional microscope. By labeling molecules with different colors, the researchers can even distinguish pre-synaptic from post-synaptic structures. Boyden plans eventually to use hundreds, possibly thousands, of colors, and to increase the expansion factor to 10 times original size, equivalent to a 1000-fold increase in volume.

ExM is not the only way to see fine structures such as synapses; they can also be visualized by electron microscopy, or by recently developed ‘super-resolution’ optical methods that garnered a 2014 Nobel Prize. These techniques, however, require expensive equipment, and the images are very time-consuming to produce.

“With ExM, because the sample is physically bigger, you can scan it very quickly using just a regular microscope,” says Boyden.

Boyden is already talking to other leading researchers in the field, including Kwanghun Chung at MIT and George Church at Harvard, about ways to further enhance the ExM method. Within the McGovern Institute, among those who expect to benefit from these advances is Guoping Feng, who is developing mouse models of autism, schizophrenia and other disorders by introducing some of the same genetic changes seen in humans with these disorders. Many of the genes associated with autism and schizophrenia play a role in the formation of synapses, but even with the mouse models at his disposal, Feng isn’t sure what goes wrong at those synapses, because they are so hard to see. “If we can make parts of the brain bigger, we might be able to see how the assembly of this synaptic machinery changes in different disorders,” he says.

3D Movies Without Special Glasses

Another challenge facing Feng and many other researchers is that many brain functions, and many brain diseases, are not confined to one area, but are widely distributed across the brain. Trying to understand these processes by looking through a small microscopic window has been compared to watching a soccer game by observing just a single square foot of the playing field.

No current technology can capture millisecond-by-millisecond electrical events across the entire living brain, so Boyden and collaborators in Vienna, Austria, decided to develop one. They turned to a method called light field microscopy (LFM) as a way to capture 3D movies of an animal’s thoughts as they flash through the entire nervous system.

The idea is mind-boggling to imagine, but the hardware is quite simple. The instrument records images in depth the same way humans do, using multiple ‘eyes’ to send slightly offset 2D images to a computer that can reconstruct a 3D image of the world. (The idea had been developed in the 1990s by Boyden’s MIT colleague Ted Adelson, and a similar method was used to create Google Street View.) Boyden and his collaborators started with a microscope of standard design, attached a video camera, and inserted between them a six-by-six array of miniature lenses, designed in Austria, that projects a grid of offset images onto the camera, which relays them to the computer.

The rest is math. “We take the multiple, superimposed flat images projected through the lens array and combine them into a volume,” says Young-Gyu Yoon, a graduate student in the Boyden lab who designed and wrote the software.
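The reconstruction Yoon describes is a 3D deconvolution, but a toy ‘shift-and-sum’ refocusing sketch conveys the basic geometry: each lenslet views the sample from a slightly different angle, so shifting the sub-images in proportion to a candidate depth and averaging them brings that focal plane into register. Everything below (the array sizes, the linear disparity model, the function name) is an illustrative assumption, not the published software.

```python
import numpy as np

def refocus_volume(subimages, offsets, disparities):
    """Toy shift-and-sum light field refocusing.

    subimages:   (n_lenses, H, W) array of 2D views from the lenslet array
    offsets:     list of (y, x) lenslet positions in the array grid
    disparities: 1D array of pixel shifts, one per depth plane to synthesize
    Returns a (n_depths, H, W) stack: one synthetic focal plane per disparity.
    """
    n_lenses, H, W = subimages.shape
    volume = np.zeros((len(disparities), H, W))
    for d, disp in enumerate(disparities):
        acc = np.zeros((H, W))
        for img, (oy, ox) in zip(subimages, offsets):
            # Shift each view according to its position in the array and the
            # candidate depth, then sum: structures at that depth line up.
            shift_y, shift_x = int(round(disp * oy)), int(round(disp * ox))
            acc += np.roll(img, (shift_y, shift_x), axis=(0, 1))
        volume[d] = acc / n_lenses
    return volume

# Hypothetical 6-by-6 lenslet grid, matching the array described above.
grid = [(y - 2.5, x - 2.5) for y in range(6) for x in range(6)]
views = np.random.rand(36, 64, 64)            # stand-in for recorded sub-images
planes = refocus_volume(views, grid, disparities=np.linspace(-2, 2, 9))
print(planes.shape)                            # (9, 64, 64)
```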

Another graduate student, Nikita Pak, used the new method to measure neural activity in C. elegans, a tiny worm whose entire nervous system consists of just 302 neurons. By using a worm that had been genetically engineered so that its neurons light up when they become electrically active, Pak was able to make 3D movies of the activity in the entire nervous system. “The setup is just so simple,” he says. “Every time I use it, I think it’s cool.”

The team then tested their method on a larger brain, that of the larval zebrafish. They presented the larvae with a noxious odor, and found that it triggered activity in around 5,000 neurons over a period of about three minutes. Even in this relatively simple example, the activity was widely distributed throughout the brain and would have been difficult to detect with previous techniques. Boyden is now working towards recording activity over much longer timespans, and he also envisions scaling the method up to image the much more complex brains of mammals.

He hopes to start with the smallest known mammal, the Etruscan shrew. This animal resembles a mouse, but it is ten times smaller, no bigger than a thimble. Its brain is also much smaller, with only a few million neurons, compared to 100 million in a mouse.

Whole brain imaging in this tiny creature could provide an unprecedented view of mammalian brain activity, including its disruption in disease states. Feng cites sensory overload in autism as an example. “If we can see how sensory activity spreads through the brain, we can start to understand how overload starts and how it spills over to other brain areas,” he says.

Visions of Convergence

While Boyden’s microscopy technologies are providing his colleagues with new ways to study brain disorders, Boyden himself hopes to use them to understand the brain as a whole. He plans to use ExM to map connections and identify which molecules are where; 3D whole-brain imaging to trace brain activity as it unfolds in real time; and optogenetic techniques to stimulate the brain and directly record the resulting activity. By combining all three tools, he hopes to pin stimuli and activity to the molecules and connections on the map, and then use that map to build a computational model that simulates brain activity.

The plan is grandiose, and the tools aren’t all ready yet, but to make the scheme plausible in the proposed timeframe, Boyden is adhering to a few principles. His methods are fast, capturing information-dense images rapidly rather than scanning over days, and inclusive, imaging whole brains rather than chunks that need to be assembled. They are also accessible, so researchers don’t need to spend large sums to acquire specialized equipment or expertise in-house.

The challenges ahead might appear insurmountable at times, but Boyden is undeterred. He moves forward, his mind open to even the most far-fetched ideas, because they just might work.

MIT team enlarges brain samples, making them easier to image

Beginning with the invention of the first microscope in the late 1500s, scientists have been trying to peer into preserved cells and tissues with ever-greater magnification. The latest generation of so-called “super-resolution” microscopes can see inside cells with resolution better than 250 nanometers.

A team of researchers from MIT has now taken a novel approach to gaining such high-resolution images: Instead of making their microscopes more powerful, they have discovered a method that enlarges tissue samples by embedding them in a polymer that swells when water is added. This allows specimens to be physically magnified, and then imaged at a much higher resolution.

This technique, which uses inexpensive, commercially available chemicals and microscopes commonly found in research labs, should give many more scientists access to super-resolution imaging, the researchers say.

“Instead of acquiring a new microscope to take images with nanoscale resolution, you can take the images on a regular microscope. You physically make the sample bigger, rather than trying to magnify the rays of light that are emitted by the sample,” says Ed Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT.

Boyden is the senior author of a paper describing the new method in the Jan. 15 online edition of Science. Lead authors of the paper are graduate students Fei Chen and Paul Tillberg.

Physical magnification

Most microscopes work by using lenses to focus light emitted from a sample into a magnified image. However, this approach has a fundamental limit known as the diffraction limit, which means that it can’t be used to resolve objects smaller than about half the wavelength of the light being used. For example, if you are using blue-green light with a wavelength of 500 nanometers, you can’t see anything smaller than 250 nanometers.
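That figure follows from the standard diffraction-limit relation: with a high-numerical-aperture objective (NA close to 1), the smallest resolvable distance is roughly half the wavelength.

```latex
d \;\approx\; \frac{\lambda}{2\,\mathrm{NA}} \;\approx\; \frac{500~\text{nm}}{2 \times 1} \;=\; 250~\text{nm}
```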

“Unfortunately, in biology that’s right where things get interesting,” says Boyden, who is a member of MIT’s Media Lab and McGovern Institute for Brain Research. Protein complexes, molecules that transport payloads in and out of cells, and other cellular activities are all organized at the nanoscale.

Scientists have come up with some “really clever tricks” to overcome this limitation, Boyden says. However, these super-resolution techniques work best with small, thin samples, and take a long time to image large samples. “If you want to map the brain, or understand how cancer cells are organized in a metastasizing tumor, or how immune cells are configured in an autoimmune attack, you have to look at a large piece of tissue with nanoscale precision,” he says.

To achieve this, the MIT team focused its attention on the sample rather than the microscope. Their idea was to make specimens easier to image at high resolution by embedding them in an expandable polymer gel made of polyacrylate, a very absorbent material commonly found in diapers.

Before enlarging the tissue, the researchers first label the cell components or proteins that they want to examine, using an antibody that binds to the chosen targets. This antibody is linked to a fluorescent dye, as well as a chemical anchor that can attach the dye to the polyacrylate chain.

Once the tissue is labeled, the researchers add the precursor to the polyacrylate gel and heat it to form the gel. They then digest the proteins that hold the specimen together, allowing it to expand uniformly. The specimen is then washed in salt-free water to induce a 100-fold expansion in volume. Even though the proteins have been broken apart, the original location of each fluorescent label stays the same relative to the overall structure of the tissue because it is anchored to the polyacrylate gel.
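Because volume scales as the cube of linear size, a 100-fold gain in volume corresponds to a more modest stretch in each dimension, and it is that linear factor that multiplies the effective resolution:

```latex
\text{linear expansion factor} \;=\; \sqrt[3]{100} \;\approx\; 4.6
```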

“What you’re left with is a three-dimensional, fluorescent cast of the original material. And the cast itself is swollen, unimpeded by the original biological structure,” Tillberg says.

The MIT team imaged this “cast” with commercially available confocal microscopes, commonly used for fluorescent imaging but usually limited to a resolution of hundreds of nanometers. With their enlarged samples, the researchers achieved resolution down to 70 nanometers. “The expansion microscopy process … should be compatible with many existing microscope designs and systems already in laboratories,” Chen adds.

Large tissue samples

Using this technique, the MIT team was able to image a section of brain tissue 500 by 200 by 100 microns with a standard confocal microscope. Imaging such large samples would not be feasible with other super-resolution techniques, which require minutes to image a tissue slice only 1 micron thick and are limited in their ability to image large samples by optical scattering and other aberrations.

“The exciting part is that this approach can acquire data at the same high speed per pixel as conventional microscopy, contrary to most other methods that beat the diffraction limit for microscopy, which can be 1,000 times slower per pixel,” says George Church, a professor of genetics at Harvard Medical School who was not part of the research team.

“The other methods currently have better resolution, but are harder to use, or slower,” Tillberg says. “The benefits of our method are the ease of use and, more importantly, compatibility with large volumes, which is challenging with existing technologies.”

The researchers envision that this technology could be very useful to scientists trying to image brain cells and map how they connect to each other across large regions.

“There are lots of biological questions where you have to understand a large structure,” Boyden says. “Especially for the brain, you have to be able to image a large volume of tissue, but also to see where all the nanoscale components are.”

While Boyden’s team is focused on the brain, other possible applications for this technique include studying tumor metastasis and angiogenesis (growth of blood vessels to nourish a tumor), or visualizing how immune cells attack specific organs during autoimmune disease.

The research was funded by the National Institutes of Health, the New York Stem Cell Foundation, Jeremy and Joyce Wertheimer, the National Science Foundation, and the Fannie and John Hertz Foundation.

Fifteen MIT scientists receive NIH BRAIN Initiative grants

Today, the National Institutes of Health (NIH) announced its first round of BRAIN Initiative award recipients. Six teams that include 15 researchers from the Massachusetts Institute of Technology were among the recipients.

Mriganka Sur, principal investigator at the Picower Institute for Learning and Memory and the Paul E. Newton Professor of Neuroscience in MIT’s Department of Brain and Cognitive Sciences (BCS), leads a team studying cortical circuits and information flow during memory-guided perceptual decisions. Co-principal investigators include Emery Brown, BCS professor of computational neuroscience and the Edward Hood Taplin Professor of Medical Engineering; Kwanghun Chung, Picower Institute principal investigator and assistant professor in the Department of Chemical Engineering and the Institute for Medical Engineering and Science (IMES); and Ian Wickersham, research scientist at the McGovern Institute for Brain Research and head of MIT’s Genetic Neuroengineering Group.

Elly Nedivi, Picower Institute principal investigator and professor in BCS and the Department of Biology, leads a team studying new methods for high-speed monitoring of sensory-driven synaptic activity across all inputs to single living neurons in the context of the intact cerebral cortex. Her co-principal investigator is Peter So, professor of mechanical and biological engineering, and director of the MIT Laser Biomedical Research Center.

Ian Wickersham will lead a team looking at novel technologies for nontoxic transsynaptic tracing. His co-principal investigators include Robert Desimone, director of the McGovern Institute and the Doris and Don Berkey Professor of Neuroscience in BCS; Li-Huei Tsai, director of the Picower Institute and the Picower Professor of Neuroscience in BCS; and Kay Tye, Picower Institute principal investigator and assistant professor of neuroscience in BCS.

Robert Desimone will lead a team studying vascular interfaces for brain imaging and stimulation. Co-principal investigators include Ed Boyden, associate professor at the MIT Media Lab, the McGovern Institute, and the departments of BCS and Biological Engineering, head of MIT’s Synthetic Neurobiology Group, and co-director of MIT’s Center for Neurobiological Engineering; and Elazer Edelman, the Thomas D. and Virginia W. Cabot Professor of Health Sciences and Technology in IMES and director of the Harvard-MIT Biomedical Engineering Center. Collaborators on this project include: Rodolfo Llinas (New York University), George Church (Harvard University), Jan Rabaey (University of California at Berkeley), Pablo Blinder (Tel Aviv University), Eric Leuthardt (Washington University in St. Louis), Michel Maharbiz (Berkeley), Jose Carmena (Berkeley), Elad Alon (Berkeley), Colin Derdeyn (Washington University in St. Louis), Lowell Wood (Bill and Melinda Gates Foundation), Xue Han (Boston University), and Adam Marblestone (MIT).

Ed Boyden will be co-principal investigator with Mark Bathe, associate professor of biological engineering, and Peng Yin of Harvard on a project to study ultra-multiplexed nanoscale in situ proteomics for understanding synapse types.

Alan Jasanoff, associate professor of biological engineering and director of the MIT Center for Neurobiological Engineering, will lead a team looking at calcium sensors for molecular fMRI. Stephen Lippard, the Arthur Amos Noyes Professor of Chemistry, is co-principal investigator.

In addition, Sur and Wickersham also received BRAIN Early Concept Grants for Exploratory Research (EAGER) from the National Science Foundation (NSF). Sur will focus on massive-scale multi-area single neuron recordings to reveal circuits underlying short-term memory. Wickersham, in collaboration with Li-Huei Tsai, Kay Tye, and Robert Desimone, will develop cell-type specific optogenetics in wild-type animals. Additional information about NSF support of the BRAIN initiative can be found at NSF.gov/brain.

The BRAIN Initiative, spearheaded by President Obama in April 2013, challenges the nation’s leading scientists to advance our understanding of the human mind and discover new ways to treat, prevent, and cure neurological disorders like Alzheimer’s, schizophrenia, autism, and traumatic brain injury. The scientific community is charged with accelerating the invention of cutting-edge technologies that can produce dynamic images of complex neural circuits and illuminate the interaction of lightning-fast brain cells. The new capabilities are expected to provide greater insights into how brain function is linked to behavior, learning, memory, and the underlying mechanisms of debilitating disease. BRAIN was launched with approximately $100 million in initial investments from the NIH, the National Science Foundation, and the Defense Advanced Research Projects Agency (DARPA).

BRAIN Initiative scientists are engaged in a challenging and transformative endeavor to explore how our minds instantaneously process, store, and retrieve vast quantities of information. Their discoveries will unlock many of the remaining mysteries inherent in the brain’s billions of neurons and trillions of connections, leading to a deeper understanding of the underlying causes of many neurological and psychiatric conditions. Their findings will enable scientists and doctors to develop the groundbreaking arsenal of tools and technologies required to more effectively treat those suffering from these devastating disorders.

MEG matters

Somewhere nearby, most likely, sits a coffee mug. Give it a glance. An image of that mug travels from desktop to retina and into the brain, where it is processed, categorized and recognized, within a fraction of a second.

All this feels effortless to us, but programming a computer to do the same reveals just how complex that process is. Computers can handle simple objects in expected positions, such as an upright mug. But tilt that cup on its side? “That messes up a lot of standard computer vision algorithms,” says Leyla Isik, a graduate student in Tomaso Poggio’s lab at the McGovern Institute.

For her thesis research, Isik is working to build better computer vision models, inspired by how human brains recognize objects. But to track this process, she needed an imaging tool that could keep up with the brain’s astonishing speed. In 2011, soon after Isik arrived at MIT, the McGovern Institute opened its magnetoencephalography (MEG) lab, one of only a few dozen in the entire country. MEG operates on the same timescale as the human brain. Now, with easy access to a MEG facility dedicated to brain research, neuroscientists at McGovern and across MIT—even those like Isik who had never scanned human subjects—are delving into human neural processing in ways never possible before.

The making of…

MEG was developed at MIT in the early 1970s by physicist David Cohen. He was searching for the tiny magnetic fields that were predicted to arise within electrically active tissues such as the brain. Magnetic fields can travel unimpeded through the skull, so Cohen hoped it might be possible to detect them noninvasively. Because the signals are so small—a billion times weaker than the magnetic field of the Earth—Cohen experimented with a newly invented device called a SQUID (short for superconducting quantum interference device), a highly sensitive magnetometer. In 1972, he succeeded in recording alpha waves, brain rhythms that occur when the eyes close. The recording, scratched out on yellow graph paper with notes scrawled in the margins, led to a seminal paper that launched a new field. Cohen’s prototype has now evolved into a sophisticated machine with an array of 306 SQUID detectors contained within a helmet that sits over the subject’s head like a giant hairdryer.

As MEG technology advanced, neuroscientists watched with growing interest. Animal studies were revealing the importance of high-frequency electrical oscillations such as gamma waves, which appear to have a key role in the communication between different brain regions. But apart from occasional neurosurgery patients, it was very difficult to study these signals in the human brain or to understand how they might contribute to human cognition. The most widely used imaging method, functional magnetic resonance imaging (fMRI), could provide precise spatial localization, but it could not detect events on the necessary millisecond timescale. “We needed to bridge that gap,” says Robert Desimone, director of the McGovern Institute.

Desimone decided to make MEG a priority, and with support from donors including Thomas F. Peterson, Jr., Edward and Kay Poitras, and the Simons Foundation, the institute was able to purchase a Triux scanner from Elekta, the newest model on the market and the first to be installed in North America.

One challenge was the high level of magnetic background noise from the surrounding environment, and so the new scanner was installed in a 13-ton shielded room that deflects interference away from the scanner. “We have a challenging location, but we were able to work with it and to get clear signals,” says Desimone.

“An engineer might have picked a different site, but we cannot overstate the importance of having MEG right here, next to the MRI scanners and easily accessible for our researchers.”

To run the new lab, Desimone recruited Dimitrios Pantazis, an expert in MEG signal processing from the University of Southern California. Pantazis knew a lot about MEG data analysis, but he had never actually scanned human subjects himself. In March 2011, he watched in anticipation as Elekta engineers uncrated the new system. Within a few months, he had the lab up and running.

Computer vision quest

When the MEG lab opened, Isik attended a training session. Like Pantazis, she had no previous experience scanning human subjects, but MEG seemed an ideal tool for teasing out the complexities of human object recognition.

She recorded the brain activity of volunteers as they viewed images of objects in various orientations. She also asked them to track the color of a cross on each image, partly to keep their eyes on the screen and partly to keep them alert. “It’s a dark and quiet room and a comfy chair,” she says. “You have to give them something to do to keep them awake.”

To process the data, Isik used a computational tool called a machine learning classifier, which learns to recognize patterns of brain activity evoked by different stimuli. By comparing responses to different types of objects, or similar objects from different viewpoints (such as a cup lying on its side), she was able to show that the human visual system processes objects in stages, starting with the specific view and then generalizing to features that are independent of the size and position of the object.
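A minimal sketch of this style of decoding analysis, assuming MEG recordings already epoched into labeled trials (the variable names and the choice of scikit-learn’s linear support vector machine are illustrative, not Isik’s actual pipeline): a classifier is trained on the pattern across sensors at each time point, and above-chance cross-validated accuracy at a given latency indicates that the response carries information about the stimulus at that moment.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Hypothetical epoched MEG data: trials x sensors x time points,
# plus one stimulus label (e.g., object identity) per trial.
n_trials, n_sensors, n_times = 200, 306, 120
X = np.random.randn(n_trials, n_sensors, n_times)   # stand-in for real recordings
y = np.random.randint(0, 2, size=n_trials)          # stand-in for stimulus labels

# Decode separately at each time point to see *when* information appears.
accuracy = np.zeros(n_times)
for t in range(n_times):
    clf = SVC(kernel="linear")
    scores = cross_val_score(clf, X[:, :, t], y, cv=5)  # 5-fold cross-validation
    accuracy[t] = scores.mean()

print("peak decoding accuracy:", accuracy.max())
```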

Isik is now working to develop a computer model that simulates this step-wise processing. “Having this data to work with helps ground my models,” she says. Meanwhile, Pantazis was impressed by the power of machine learning classifiers to make sense of the huge quantities of data produced by MEG studies. With support from the National Science Foundation, he is working to incorporate them into a software analysis package that is widely used by the MEG community.

Mixology

Because fMRI and MEG provide complementary information, it was natural that researchers would want to combine them. This is a computationally challenging task, but MIT research scientist Aude Oliva and postdoc Radoslaw Cichy, in collaboration with Pantazis, have developed a new way to do so. They presented 92 images to volunteer subjects, once in the MEG scanner, and then again in the MRI scanner across the hall. For each data set, they looked for patterns of similarity between responses to different stimuli. Then, by aligning the two ‘similarity maps,’ they could determine which MEG signals correspond to which fMRI signals, providing information about the location and timing of brain activity that could not be revealed by either method in isolation. “We could see how visual information flows from the rear of the brain to the more anterior regions where objects are recognized and categorized,” says Pantazis. “It all happens within a few hundred milliseconds. You could not see this level of detail without the combination of fMRI and MEG.”
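One way to picture the alignment step is as representational similarity analysis: build a stimulus-by-stimulus ‘similarity map’ from each modality, then ask how well the two maps agree. The sketch below uses made-up response matrices and correlation distance purely to illustrate the logic, not to reproduce the published analysis.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

n_stimuli = 92

# Hypothetical response patterns for the same 92 images in each modality:
# MEG at one time point (stimuli x sensors), fMRI in one region (stimuli x voxels).
meg_patterns = np.random.randn(n_stimuli, 306)
fmri_patterns = np.random.randn(n_stimuli, 500)

# A 'similarity map' for each modality: pairwise correlation distance
# between the responses to every pair of stimuli.
meg_rdm = pdist(meg_patterns, metric="correlation")
fmri_rdm = pdist(fmri_patterns, metric="correlation")

# If the two maps agree, the MEG signal at this time point reflects the same
# representation as the fMRI signal in this region.
rho, p = spearmanr(meg_rdm, fmri_rdm)
print(f"MEG-fMRI representational similarity: rho={rho:.3f}, p={p:.3g}")
```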

Another study combining fMRI and MEG data focused on attention, a longstanding research interest for Desimone. Daniel Baldauf, a postdoc in Desimone’s lab, shares that fascination. “Our visual experience is amazingly rich,” says Baldauf. “Most mysteries about how we deal with all this information boil down to attention.”

Baldauf set out to study how the brain switches attention between two well-studied object categories, faces and houses. These stimuli are known to be processed by different brain areas, and Baldauf wanted to understand how signals might be routed to one area or the other during shifts of attention. By scanning subjects with MEG and fMRI, Baldauf identified a brain region, the inferior frontal junction (IFJ), that synchronizes its gamma oscillations with either the face or house areas depending on which stimulus the subject was attending to—akin to tuning a radio to a particular station.

Having found a way to trace attention within the brain, Desimone and his colleagues are now testing whether MEG can be used to improve attention. Together with Baldauf and two visiting students, Yasaman Bagherzadeh and Ben Lu, he has rigged the scanner so that subjects can be given feedback on their own activity on a screen in real time as it is being recorded. “By concentrating on a task, participants can learn to steer their own brain activity,” says Baldauf, who hopes to determine whether these exercises can help people perform better on everyday tasks that require attention.

Comfort zone

In addition to exploring basic questions about brain function, MEG is also a valuable tool for studying brain disorders such as autism. Margaret Kjelgaard, a clinical researcher at Massachusetts General Hospital, is collaborating with MIT faculty member Pawan Sinha to understand why people with autism often have trouble tolerating sounds, smells, and lights. This is difficult to study using fMRI, because subjects are often unable to tolerate the noise of the scanner, whereas they find MEG much more comfortable.

“Big things are probably going to happen here.”
— David Cohen, inventor of MEG technology

In the scanner, subjects listened to brief repetitive sounds as their brain responses were recorded. In healthy controls, the responses became weaker with repetition as the subjects adapted to the sounds. Those with autism, however, did not adapt. The results are still preliminary and as-yet unpublished, but Kjelgaard hopes that the work will lead to a biomarker for autism, and perhaps eventually for other disorders.

In 2012, the McGovern Institute organized a symposium to mark the opening of the new lab. Cohen, who had invented MEG forty years earlier, spoke at the event and made a prediction: “Big things are probably going to happen here.” Two years on, researchers have pioneered new MEG data analysis techniques, invented novel ways to combine MEG and fMRI, and begun to explore the neural underpinnings of autism. Odds are, there are more big things to come.

Try, try again? Study says no

When it comes to learning languages, adults and children have different strengths. Adults excel at absorbing the vocabulary needed to navigate a grocery store or order food in a restaurant, but children have an uncanny ability to pick up on subtle nuances of language that often elude adults. Within months of living in a foreign country, a young child may speak a second language like a native speaker.

Brain structure plays an important role in this “sensitive period” for learning language, which is believed to end around adolescence. The young brain is equipped with neural circuits that can analyze sounds and build a coherent set of rules for constructing words and sentences out of those sounds. Once these language structures are established, it’s difficult to build another one for a new language.

In a new study, a team of neuroscientists and psychologists led by Amy Finn, a postdoc at MIT’s McGovern Institute for Brain Research, has found evidence for another factor that contributes to adults’ language difficulties: When learning certain elements of language, adults’ more highly developed cognitive skills actually get in the way. The researchers discovered that the harder adults tried to learn an artificial language, the worse they were at deciphering the language’s morphology — the structure and deployment of linguistic units such as root words, suffixes, and prefixes.

“We found that effort helps you in most situations, for things like figuring out what the units of language that you need to know are, and basic ordering of elements. But when trying to learn morphology, at least in this artificial language we created, it’s actually worse when you try,” Finn says.

Finn and colleagues from the University of California at Santa Barbara, Stanford University, and the University of British Columbia describe their findings in the July 21 issue of PLoS One. Carla Hudson Kam, an associate professor of linguistics at British Columbia, is the paper’s senior author.

Too much brainpower

Linguists have known for decades that children are skilled at absorbing certain tricky elements of language, such as irregular past participles (examples of which, in English, include “gone” and “been”) or complicated verb tenses like the subjunctive.

“Children will ultimately perform better than adults in terms of their command of the grammar and the structural components of language — some of the more idiosyncratic, difficult-to-articulate aspects of language that even most native speakers don’t have conscious awareness of,” Finn says.

In 1990, linguist Elissa Newport hypothesized that adults have trouble learning those nuances because they try to analyze too much information at once. Adults have a much more highly developed prefrontal cortex than children, and they tend to throw all of that brainpower at learning a second language. This high-powered processing may actually interfere with certain elements of learning language.

“It’s an idea that’s been around for a long time, but there hasn’t been any data that experimentally show that it’s true,” Finn says.

Finn and her colleagues designed an experiment to test whether exerting more effort would help or hinder success. First, they created nine nonsense words, each with two syllables. Each word fell into one of three categories (A, B, and C), defined by the order of consonant and vowel sounds.

Study subjects listened to the artificial language for about 10 minutes. One group of subjects was told not to overanalyze what they heard, but not to tune it out either. To help them not overthink the language, they were given the option of completing a puzzle or coloring while they listened. The other group was told to try to identify the words they were hearing.

Each group heard the same recording, which was a series of three-word sequences — first a word from category A, then one from category B, then category C — with no pauses between words. Previous studies have shown that adults, babies, and even monkeys can parse this kind of information into word units, a task known as word segmentation.
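A small sketch of how such a stream might be assembled, using hypothetical syllable strings for categories A, B, and C (the study’s actual nonsense words were constructed differently): triplets are drawn in a fixed A-B-C order and concatenated without breaks, so the only cues to word boundaries are statistical.

```python
import random

# Hypothetical two-syllable nonsense words, three per category.
category_a = ["tupi", "rogo", "dabu"]
category_b = ["kema", "silu", "pafo"]
category_c = ["nibo", "wadi", "gatu"]

def build_stream(n_triplets, seed=0):
    """Concatenate A-B-C triplets into one unbroken stream of syllables."""
    rng = random.Random(seed)
    triplets = [
        rng.choice(category_a) + rng.choice(category_b) + rng.choice(category_c)
        for _ in range(n_triplets)
    ]
    return "".join(triplets)

stream = build_stream(n_triplets=5)
print(stream)  # e.g. 'tupikemanibo...' with no pauses marking word boundaries
```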

Subjects from both groups were successful at word segmentation, although the group that tried harder performed a little better. Both groups also performed well in a task called word ordering, which required subjects to choose between a correct word sequence (ABC) and an incorrect sequence (such as ACB) of words they had previously heard.

The final test measured skill in identifying the language’s morphology. The researchers played a three-word sequence that included a word the subjects had not heard before, but which fit into one of the three categories. When asked to judge whether this new word was in the correct location, the subjects who had been asked to pay closer attention to the original word stream performed much worse than those who had listened more passively.

Turning off effort

The findings support a theory of language acquisition that suggests that some parts of language are learned through procedural memory, while others are learned through declarative memory. Under this theory, declarative memory, which stores knowledge and facts, would be more useful for learning vocabulary and certain rules of grammar. Procedural memory, which guides tasks we perform without conscious awareness of how we learned them, would be more useful for learning subtle rules related to language morphology.

“It’s likely to be the procedural memory system that’s really important for learning these difficult morphological aspects of language. In fact, when you use the declarative memory system, it doesn’t help you, it harms you,” Finn says.

Still unresolved is the question of whether adults can overcome this language-learning obstacle. Finn says she does not have a good answer yet but she is now testing the effects of “turning off” the adult prefrontal cortex using a technique called transcranial magnetic stimulation. Other interventions she plans to study include distracting the prefrontal cortex by forcing it to perform other tasks while language is heard, and treating subjects with drugs that impair activity in that brain region.

The research was funded by the National Institute of Child Health and Human Development and the National Science Foundation.

When good people do bad things

When people get together in groups, unusual things can happen — both good and bad. Groups create important social institutions that an individual could not achieve alone, but there can be a darker side to such alliances: Belonging to a group makes people more likely to harm others outside the group.

“Although humans exhibit strong preferences for equity and moral prohibitions against harm in many contexts, people’s priorities change when there is an ‘us’ and a ‘them,’” says Rebecca Saxe, an associate professor of cognitive neuroscience at MIT. “A group of people will often engage in actions that are contrary to the private moral standards of each individual in that group, sweeping otherwise decent individuals into ‘mobs’ that commit looting, vandalism, even physical brutality.”

Several factors play into this transformation. When people are in a group, they feel more anonymous, and less likely to be caught doing something wrong. They may also feel a diminished sense of personal responsibility for collective actions.

Saxe and colleagues recently studied a third factor that cognitive scientists believe may be involved in this group dynamic: the hypothesis that when people are in groups, they “lose touch” with their own morals and beliefs, and become more likely to do things that they would normally believe are wrong.

In a study that recently went online in the journal NeuroImage, the researchers measured brain activity in a part of the brain involved in thinking about oneself. They found that in some people, this activity was reduced when the subjects participated in a competition as part of a group, compared with when they competed as individuals. Those people were more likely to harm their competitors than people who did not exhibit this decreased brain activity.

“This process alone does not account for intergroup conflict: Groups also promote anonymity, diminish personal responsibility, and encourage reframing harmful actions as ‘necessary for the greater good.’ Still, these results suggest that at least in some cases, explicitly reflecting on one’s own personal moral standards may help to attenuate the influence of ‘mob mentality,’” says Mina Cikara, a former MIT postdoc and lead author of the NeuroImage paper.

Group dynamics

Cikara, who is now an assistant professor at Carnegie Mellon University, started this research project after experiencing the consequences of a “mob mentality”: During a visit to Yankee Stadium, her husband was ceaselessly heckled by Yankees fans for wearing a Red Sox cap. “What I decided to do was take the hat from him, thinking I would be a lesser target by virtue of the fact that I was a woman,” Cikara says. “I was so wrong. I have never been called names like that in my entire life.”

The harassment, which continued throughout the trip back to Manhattan, provoked a strong reaction in Cikara, who isn’t even a Red Sox fan.

“It was a really amazing experience because what I realized was I had gone from being an individual to being seen as a member of ‘Red Sox Nation.’ And the way that people responded to me, and the way I felt myself responding back, had changed, by virtue of this visual cue — the baseball hat,” she says. “Once you start feeling attacked on behalf of your group, however arbitrary, it changes your psychology.”

Cikara, then a third-year graduate student at Princeton University, started to investigate the neural mechanisms behind the group dynamics that produce bad behavior. In the new study, done at MIT, Cikara, Saxe (who is also an associate member of MIT’s McGovern Institute for Brain Research), former Harvard University graduate student Anna Jenkins, and former MIT lab manager Nicholas Dufour focused on a part of the brain called the medial prefrontal cortex. When someone is reflecting on himself or herself, this part of the brain lights up in functional magnetic resonance imaging (fMRI) brain scans.

A couple of weeks before the study participants came in for the experiment, the researchers surveyed each of them about their social-media habits, as well as their moral beliefs and behavior. This allowed the researchers to create individualized statements for each subject that were true for that person — for example, “I have stolen food from shared refrigerators” or “I always apologize after bumping into someone.”

When the subjects arrived at the lab, their brains were scanned as they played a game once on their own and once as part of a team. The purpose of the game was to press a button if they saw a statement related to social media, such as “I have more than 600 Facebook friends.”

The subjects also saw their personalized moral statements mixed in with sentences about social media. Brain scans revealed that when subjects were playing for themselves, the medial prefrontal cortex lit up much more when they read moral statements about themselves than statements about others, consistent with previous findings. However, during the team competition, some people showed a much smaller difference in medial prefrontal cortex activation when they saw the moral statements about themselves compared to those about other people.

Those people also turned out to be much more likely to harm members of the competing group during a task performed after the game. Each subject was asked to select photos that would appear with the published study, from a set of four photos apiece of two teammates and two members of the opposing team. The subjects with suppressed medial prefrontal cortex activity chose the least flattering photos of the opposing team members, but not of their own teammates.

“This is a nice way of using neuroimaging to try to get insight into something that behaviorally has been really hard to explore,” says David Rand, an assistant professor of psychology at Yale University who was not involved in the research. “It’s been hard to get a direct handle on the extent to which people within a group are tapping into their own understanding of things versus the group’s understanding.”

Getting lost

The researchers also found that after the game, people with reduced medial prefrontal cortex activity had more difficulty remembering the moral statements they had heard during the game.

“If you need to encode something with regard to the self and that ability is somehow undermined when you’re competing with a group, then you should have poor memory associated with that reduction in medial prefrontal cortex signal, and that’s exactly what we see,” Cikara says.

Cikara hopes to follow up on these findings to investigate what makes some people more likely to become “lost” in a group than others. She is also interested in studying whether people are slower to recognize themselves or pick themselves out of a photo lineup after being absorbed in a group activity.

The research was funded by the Eunice Kennedy Shriver National Institute of Child Health and Human Development, the Air Force Office of Scientific Research, and the Packard Foundation.

Inside the adult ADHD brain

About 11 percent of school-age children in the United States have been diagnosed with attention deficit hyperactivity disorder (ADHD). While many of these children eventually “outgrow” the disorder, some carry their difficulties into adulthood: About 10 million American adults are currently diagnosed with ADHD.

In the first study to compare patterns of brain activity in adults who recovered from childhood ADHD and those who did not, MIT neuroscientists have discovered key differences in a brain communication network that is active when the brain is at wakeful rest and not focused on a particular task. The findings offer evidence of a biological basis for adult ADHD and should help to validate the criteria used to diagnose the disorder, according to the researchers.

Diagnoses of adult ADHD have risen dramatically in the past several years, with symptoms similar to those of childhood ADHD: a general inability to focus, reflected in difficulty completing tasks, listening to instructions, or remembering details.

“The psychiatric guidelines for whether a person’s ADHD is persistent or remitted are based on lots of clinical studies and impressions. This new study suggests that there is a real biological boundary between those two sets of patients,” says MIT’s John Gabrieli, the Grover M. Hermann Professor of Health Sciences and Technology, professor of brain and cognitive sciences, and an author of the study, which appears in the June 10 issue of the journal Brain.

Shifting brain patterns

This study focused on 35 adults who were diagnosed with ADHD as children; 13 of them still have the disorder, while the rest have recovered. “This sample really gave us a unique opportunity to ask questions about whether or not the brain basis of ADHD is similar in the remitted-ADHD and persistent-ADHD cohorts,” says Aaron Mattfeld, a postdoc at MIT’s McGovern Institute for Brain Research and the paper’s lead author.

The researchers used a technique called resting-state functional magnetic resonance imaging (fMRI) to study what the brain is doing when a person is not engaged in any particular activity. These patterns reveal which parts of the brain communicate with each other during this type of wakeful rest.

“It’s a different way of using functional brain imaging to investigate brain networks,” says Susan Whitfield-Gabrieli, a research scientist at the McGovern Institute and the senior author of the paper. “Here we have subjects just lying in the scanner. This method reveals the intrinsic functional architecture of the human brain without invoking any specific task.”

In people without ADHD, when the mind is unfocused, there is a distinctive synchrony of activity in brain regions known as the default mode network. Previous studies have shown that in children and adults with ADHD, two major hubs of this network — the posterior cingulate cortex and the medial prefrontal cortex — no longer synchronize.
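At its simplest, the synchrony in question is a correlation between the resting-state time courses of the two hubs. A minimal sketch, assuming hypothetical region-of-interest time series already extracted from the scans (the study’s preprocessing and statistics are considerably more involved):

```python
import numpy as np

# Hypothetical resting-state time courses (one value per fMRI volume)
# averaged within two default mode network hubs.
n_volumes = 180
pcc = np.random.randn(n_volumes)    # posterior cingulate cortex ROI
mpfc = np.random.randn(n_volumes)   # medial prefrontal cortex ROI

# Functional connectivity: Pearson correlation between the two time series.
# Strong positive correlation means the hubs are "in sync"; in persistent
# ADHD this coupling is reported to be reduced.
r = np.corrcoef(pcc, mpfc)[0, 1]
print(f"PCC-mPFC resting-state correlation: r = {r:.2f}")
```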

In the new study, the MIT team showed for the first time that in adults who had been diagnosed with ADHD as children but no longer have it, this normal synchrony pattern is restored. “Their brains now look like those of people who never had ADHD,” Mattfeld says.

“This finding is quite intriguing,” says Francisco Xavier Castellanos, a professor of child and adolescent psychiatry at New York University who was not involved in the research. “If it can be confirmed, this pattern could become a target for potential modification to help patients learn to compensate for the disorder without changing their genetic makeup.”

Lingering problems

However, in another measure of brain synchrony, the researchers found much more similarity between both groups of ADHD patients.

In people without ADHD, when the default mode network is active, another network, called the task positive network, is suppressed. When the brain is performing tasks that require focus, the task positive network takes over and suppresses the default mode network. If this reciprocal relationship degrades, the ability to focus declines.

Both groups of adult ADHD patients, including those who had recovered, showed patterns of simultaneous activation of both networks. This is thought to be a sign of impairment in executive function — the management of cognitive tasks — that is separate from ADHD, but occurs in about half of ADHD patients. All of the ADHD patients in this study performed poorly on tests of executive function. “Once you have executive function problems, they seem to hang in there,” says Gabrieli, who is a member of the McGovern Institute.

The researchers now plan to investigate how ADHD medications influence the brain’s default mode network, in hopes that this might allow them to predict which drugs will work best for individual patients. Currently, about 60 percent of patients respond well to the first drug they receive.

“It’s unknown what’s different about the other 40 percent or so who don’t respond very much,” Gabrieli says. “We’re pretty excited about the possibility that some brain measurement would tell us which child or adult is most likely to benefit from a treatment.”

The research was funded by the Poitras Center for Affective Disorders Research at the McGovern Institute.

Delving deep into the brain

Launched in 2013, the national BRAIN Initiative aims to revolutionize our understanding of cognition by mapping the activity of every neuron in the human brain, revealing how brain circuits interact to create memories, learn new skills, and interpret the world around us.

Before that can happen, neuroscientists need new tools that will let them probe the brain more deeply and in greater detail, says Alan Jasanoff, an MIT associate professor of biological engineering. “There’s a general recognition that in order to understand the brain’s processes in comprehensive detail, we need ways to monitor neural function deep in the brain with spatial, temporal, and functional precision,” he says.

Jasanoff and colleagues have now taken a step toward that goal: They have established a technique that allows them to track neural communication in the brain over time, using magnetic resonance imaging (MRI) along with a specialized molecular sensor. This is the first time anyone has been able to map neural signals with high precision over large brain regions in living animals, offering a new window on brain function, says Jasanoff, who is also an associate member of MIT’s McGovern Institute for Brain Research.

His team used this molecular imaging approach, described in the May 1 online edition of Science, to study the neurotransmitter dopamine in a region called the ventral striatum, which is involved in motivation, reward, and reinforcement of behavior. In future studies, Jasanoff plans to combine dopamine imaging with functional MRI techniques that measure overall brain activity to gain a better understanding of how dopamine levels influence neural circuitry.

“We want to be able to relate dopamine signaling to other neural processes that are going on,” Jasanoff says. “We can look at different types of stimuli and try to understand what dopamine is doing in different brain regions and relate it to other measures of brain function.”

Tracking dopamine

Dopamine is one of many neurotransmitters that help neurons to communicate with each other over short distances. Much of the brain’s dopamine is produced by a structure called the ventral tegmental area (VTA). This dopamine travels through the mesolimbic pathway to the ventral striatum, where it combines with sensory information from other parts of the brain to reinforce behavior and help the brain learn new tasks and motor functions. This circuit also plays a major role in addiction.

To track dopamine’s role in neural communication, the researchers used an MRI sensor they had previously designed, consisting of an iron-containing protein that acts as a weak magnet. When the sensor binds to dopamine, its magnetic interactions with the surrounding tissue weaken, which dims the tissue’s MRI signal. This allows the researchers to see where in the brain dopamine is being released. The researchers also developed an algorithm that lets them calculate the precise amount of dopamine present in each fraction of a cubic millimeter of the ventral striatum.
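The article does not detail that algorithm, but the underlying logic is that more dopamine means more sensor bound and a dimmer signal in a given voxel. The sketch below is a deliberately simplified, hypothetical calibration, assuming the fractional signal loss saturates when the sensor is fully bound and that binding follows a simple saturable curve; the published method fits a more detailed physical model.

```python
import numpy as np

def dopamine_from_signal(signal_rest, signal_stim, kd_um=1.0, max_dimming=0.2):
    """Toy voxel-wise estimate of dopamine concentration from MRI dimming.

    Hypothetical calibration: fractional signal loss saturates at
    `max_dimming` when the sensor is fully bound, and the bound fraction
    follows simple saturable binding with dissociation constant `kd_um`
    (micromolar). Not the published calibration.
    """
    dimming = (signal_rest - signal_stim) / signal_rest          # fractional signal loss
    bound_fraction = np.clip(dimming / max_dimming, 0.0, 0.999)
    return kd_um * bound_fraction / (1.0 - bound_fraction)       # micromolar dopamine

# Example: per-voxel signal in a small patch of ventral striatum, at rest
# and during stimulation (stand-in numbers, not real data).
rest = np.full((4, 4), 100.0)
stim = rest * (1.0 - np.random.uniform(0.0, 0.15, size=(4, 4)))
print(dopamine_from_signal(rest, stim))
```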

After delivering the MRI sensor to the ventral striatum of rats, Jasanoff’s team electrically stimulated the mesolimbic pathway and was able to detect exactly where in the ventral striatum dopamine was released. An area known as the nucleus accumbens core, known to be one of the main targets of dopamine from the VTA, showed the highest levels. The researchers also saw that some dopamine is released in neighboring regions such as the ventral pallidum, which regulates motivation and emotions, and parts of the thalamus, which relays sensory and motor signals in the brain.

Each dopamine stimulation lasted for 16 seconds and the researchers took an MRI image every eight seconds, allowing them to track how dopamine levels changed as the neurotransmitter was released from cells and then disappeared. “We could divide up the map into different regions of interest and determine dynamics separately for each of those regions,” Jasanoff says.

He and his colleagues plan to build on this work by expanding their studies to other parts of the brain, including the areas most affected by Parkinson’s disease, which is caused by the death of dopamine-generating cells. Jasanoff’s lab is also working on sensors to track other neurotransmitters, allowing them to study interactions between neurotransmitters during different tasks.

The paper’s lead author is postdoc Taekwan Lee. Technical assistant Lili Cai and postdocs Victor Lelyveld and Aviad Hai also contributed to the research, which was funded by the National Institutes of Health and the Defense Advanced Research Projects Agency.

How the brain pays attention

Picking out a face in the crowd is a complicated task: Your brain has to retrieve the memory of the face you’re seeking, then hold it in place while scanning the crowd, paying special attention to finding a match.

A new study by MIT neuroscientists reveals how the brain achieves this type of focused attention on faces or other objects: A part of the prefrontal cortex known as the inferior frontal junction (IFJ) controls visual processing areas that are tuned to recognize a specific category of objects, the researchers report in the April 10 online edition of Science.

Scientists know much less about this type of attention, known as object-based attention, than they do about spatial attention, which involves focusing on what’s happening in a particular location. However, the new findings suggest that these two types of attention have similar mechanisms involving related brain regions, says Robert Desimone, the Doris and Don Berkey Professor of Neuroscience, director of MIT’s McGovern Institute for Brain Research, and senior author of the paper.

“The interactions are surprisingly similar to those seen in spatial attention,” Desimone says. “It seems like it’s a parallel process involving different areas.”

In both cases, the prefrontal cortex — the control center for most cognitive functions — appears to take charge of the brain’s attention and control relevant parts of the visual cortex, which receives sensory input. For spatial attention, that involves regions of the visual cortex that map to a particular area within the visual field.

In the new study, the researchers found that IFJ coordinates with a brain region that processes faces, known as the fusiform face area (FFA), and a region that interprets information about places, known as the parahippocampal place area (PPA). The FFA and PPA were first identified in the human cortex by Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience at MIT.

The IFJ has previously been implicated in a cognitive ability known as working memory, which is what allows us to gather and coordinate information while performing a task — such as remembering and dialing a phone number, or doing a math problem.

For this study, the researchers used magnetoencephalography (MEG) to scan human subjects as they viewed a series of overlapping images of faces and houses. Unlike functional magnetic resonance imaging (fMRI), which is commonly used to measure brain activity, MEG can reveal the precise timing of neural activity, down to the millisecond. The researchers presented the overlapping streams at two different rhythms — two images per second and 1.5 images per second — allowing them to tell apart the brain regions responding to each stream.

“We wanted to frequency-tag each stimulus with different rhythms. When you look at all of the brain activity, you can tell apart signals that are engaged in processing each stimulus,” says Daniel Baldauf, a postdoc at the McGovern Institute and the lead author of the paper.
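
One way to picture frequency tagging is as a spectral readout: because each stream flickers at its own rate, a sensor driven by that stream shows extra power at that rate. The toy example below simulates a single sensor and reads off the power at the 2 Hz and 1.5 Hz tag frequencies; the sampling rate and amplitudes are arbitrary, and this is not the study's analysis pipeline.

```python
import numpy as np

# Toy illustration of frequency tagging: the face stream flickers at 2 Hz and
# the house stream at 1.5 Hz, so a sensor driven by a stream shows extra
# spectral power at that stream's rate. All values below are invented.

fs = 600.0                          # hypothetical MEG sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)        # 60 seconds of data

signal = 1.0 * np.sin(2 * np.pi * 2.0 * t)    # strong response to the 2 Hz stream
signal += 0.4 * np.sin(2 * np.pi * 1.5 * t)   # weaker response to the 1.5 Hz stream
signal += np.random.randn(t.size)             # background noise

spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

def power_at(f_tag):
    """Spectral power at the bin closest to a tag frequency."""
    return spectrum[np.argmin(np.abs(freqs - f_tag))]

print(power_at(2.0), power_at(1.5))  # larger 2 Hz power means more face-stream drive
```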

Each subject was told to pay attention to either faces or houses; because the houses and faces were in the same spot, the brain could not use spatial information to distinguish them. When the subjects were told to look for faces, activity in the FFA and the IFJ became synchronized, suggesting that they were communicating with each other. When the subjects paid attention to houses, the IFJ synchronized instead with the PPA.
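
Synchronization of this kind is commonly quantified as spectral coherence between two regions' time courses. The sketch below simulates stand-in IFJ and FFA signals that share a common rhythm and computes their coherence with SciPy; the 30 Hz rhythm, noise levels, and delay are all invented, and nothing here reproduces the study's actual analysis.

```python
import numpy as np
from scipy.signal import coherence

# Illustrative sketch of quantifying "becoming synchronized" as spectral
# coherence between two simulated time courses sharing an invented 30 Hz rhythm.

fs = 600.0
t = np.arange(0, 30, 1 / fs)
shared = np.sin(2 * np.pi * 30.0 * t)                        # common rhythm
ifj = shared + 0.5 * np.random.randn(t.size)
ffa = np.roll(shared, 12) + 0.5 * np.random.randn(t.size)    # slightly delayed copy

freqs, coh = coherence(ifj, ffa, fs=fs, nperseg=1024)
print(coh[np.argmin(np.abs(freqs - 30.0))])   # high coherence near the shared rhythm
```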

The researchers also found that the communication was initiated by the IFJ and the activity was staggered by 20 milliseconds — about the amount of time it would take for neurons to electrically convey information from the IFJ to either the FFA or PPA. The researchers believe that the IFJ holds onto the idea of the object that the brain is looking for and directs the correct part of the brain to look for it.

Further bolstering this idea, the researchers used an MRI-based method to measure the white matter that connects different brain regions and found that the IFJ is highly connected with both the FFA and PPA.
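
A lead of one region over another can, in principle, be read off the cross-correlation between their time courses: the delay at which the correlation peaks. The self-contained sketch below recovers a built-in 20-millisecond delay from simulated broadband signals; it illustrates the idea only, not the study's method.

```python
import numpy as np

# Illustrative sketch: estimate a lead/lag as the delay at which two signals'
# cross-correlation peaks. The signals are simulated broadband activity with a
# built-in 12-sample (20 ms at 600 Hz) delay; all values are invented.

fs = 600.0
shared = np.random.randn(18000)                                       # 30 s of "source" activity
leader = shared + 0.5 * np.random.randn(shared.size)                  # stand-in IFJ
follower = np.roll(shared, 12) + 0.5 * np.random.randn(shared.size)   # stand-in FFA, 20 ms later

def peak_lag_ms(leader, follower, fs):
    """Lag (ms) at which `follower` best matches a delayed copy of `leader`."""
    xcorr = np.correlate(follower - follower.mean(), leader - leader.mean(), mode="full")
    lags = np.arange(-leader.size + 1, leader.size)
    return 1000.0 * lags[np.argmax(xcorr)] / fs

print(peak_lag_ms(leader, follower, fs))    # approximately 20 ms in this simulation
```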

Members of Desimone’s lab are now studying how the brain shifts its focus between different types of sensory input, such as vision and hearing. They are also investigating whether it might be possible to train people to better focus their attention by controlling the brain interactions involved in this process.

“You have to identify the basic neural mechanisms and do basic research studies, which sometimes generate ideas for things that could be of practical benefit,” Desimone says. “It’s too early to say whether this training is even going to work at all, but it’s something that we’re actively pursuing.”

The research was funded by the National Institutes of Health and the National Science Foundation.

MRI reveals genetic activity

Doctors commonly use magnetic resonance imaging (MRI) to diagnose tumors, damage from stroke, and many other medical conditions. Neuroscientists also rely on it as a research tool for identifying parts of the brain that carry out different cognitive functions.

Now, a team of biological engineers at MIT is trying to adapt MRI to a much smaller scale, allowing researchers to visualize gene activity inside the brains of living animals. Tracking these genes with MRI would enable scientists to learn more about how the genes control processes such as forming memories and learning new skills, says Alan Jasanoff, an MIT associate professor of biological engineering and leader of the research team.

“The dream of molecular imaging is to provide information about the biology of intact organisms, at the molecule level,” says Jasanoff, who is also an associate member of MIT’s McGovern Institute for Brain Research. “The goal is to not have to chop up the brain, but instead to actually see things that are happening inside.”

To help reach that goal, Jasanoff and colleagues have developed a new way to image a “reporter gene” — an artificial gene that turns on or off to signal events in the body, much like an indicator light on a car’s dashboard. In the new study, the reporter gene encodes an enzyme that interacts with a magnetic contrast agent injected into the brain, making the agent visible with MRI. This approach, described in a recent issue of the journal Chemical Biology, allows researchers to determine when and where that reporter gene is turned on.

An on/off switch

MRI uses magnetic fields and radio waves that interact with protons in the body to produce detailed images of the body’s interior. In brain studies, neuroscientists commonly use functional MRI to measure blood flow, which reveals which parts of the brain are active during a particular task. When scanning other organs, doctors sometimes use magnetic “contrast agents” to boost the visibility of certain tissues.

The new MIT approach includes a contrast agent called a manganese porphyrin and the new reporter gene, which codes for a genetically engineered enzyme that alters the electric charge on the contrast agent. Jasanoff and colleagues designed the contrast agent so that it is soluble in water and readily eliminated from the body, making it difficult to detect by MRI. However, when the engineered enzyme, known as SEAP, cleaves phosphate groups from the manganese porphyrin, the contrast agent becomes insoluble and starts to accumulate in brain tissue, allowing it to be seen.
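
A back-of-the-envelope way to see why this produces localized contrast is to model the competition between washout of the soluble agent and its enzymatic conversion to the insoluble form. The sketch below uses invented rate constants and a hypothetical insoluble_after() helper; it is meant only to illustrate the solubility-switch logic described above.

```python
# Hypothetical kinetic sketch of the solubility switch: soluble contrast agent
# washes out of the tissue, but wherever SEAP is present some of it is
# converted to an insoluble form that stays put and remains MRI-visible.
# Both rate constants are invented for illustration.

CLEARANCE_PER_MIN = 0.05    # hypothetical washout rate of the soluble agent
CONVERSION_PER_MIN = 0.10   # hypothetical conversion rate where SEAP is expressed

def insoluble_after(minutes, seap_present, soluble=1.0):
    """Insoluble (MRI-visible) agent accumulated after `minutes`, arbitrary units."""
    insoluble = 0.0
    for _ in range(minutes):
        converted = (CONVERSION_PER_MIN if seap_present else 0.0) * soluble
        soluble -= converted + CLEARANCE_PER_MIN * soluble
        insoluble += converted
    return insoluble

print(insoluble_after(120, seap_present=True))    # agent builds up near SEAP-expressing cells
print(insoluble_after(120, seap_present=False))   # elsewhere it simply washes out (stays 0)
```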

The natural version of SEAP is found in the placenta, but not in other tissues. By injecting a virus carrying the SEAP gene into the brain cells of mice, the researchers were able to incorporate the gene into the cells’ own genome. Brain cells then started producing the SEAP protein, which is secreted from the cells and can be anchored to their outer surfaces. That’s important, Jasanoff says, because it means that the contrast agent doesn’t have to penetrate the cells to interact with the enzyme.

Researchers can then find out where SEAP is active by injecting the MRI contrast agent, which spreads throughout the brain but accumulates only near cells producing the SEAP protein.

Exploring brain function

In this study, which was designed to test this general approach, the detection system revealed only whether the SEAP gene had been successfully incorporated into brain cells. However, in future studies, the researchers intend to engineer the SEAP gene so it is only active when a particular gene of interest is turned on.

Jasanoff first plans to link the SEAP gene with so-called “immediate early genes,” which are necessary for brain plasticity — the weakening and strengthening of connections between neurons, which is essential to learning and memory.

“As people who are interested in brain function, the top questions we want to address are about how brain function changes patterns of gene expression in the brain,” Jasanoff says. “We also imagine a future where we might turn the reporter enzyme on and off when it binds to neurotransmitters, so we can detect changes in neurotransmitter levels as well.”

Assaf Gilad, an assistant professor of radiology at Johns Hopkins University, says the MIT team has taken a “very creative approach” to developing noninvasive, real-time imaging of gene activity. “These kinds of genetically engineered reporters have the potential to revolutionize our understanding of many biological processes,” says Gilad, who was not involved in the study.

The research was funded by the Raymond and Beverly Sackler Foundation, the National Institutes of Health, and an MIT-Germany Seed Fund grant. The paper’s lead author is former MIT postdoc Gil Westmeyer; other authors are former MIT technical assistant Yelena Emer and Jutta Lintelmann of the German Research Center for Environmental Health.