Four new faces in the School of Science faculty

This fall, the School of Science will welcome four new members joining the faculty in the departments of Biology, Brain and Cognitive Sciences, and Chemistry.

Evelina Fedorenko investigates how our brains process language. She has developed novel analytic approaches for functional magnetic resonance imaging (fMRI) and other brain imaging techniques to help answer the questions of how the language processing network functions and how it relates to other networks in the brain. She works with both neurotypical individuals and individuals with brain disorders. Fedorenko joins the Department of Brain and Cognitive Sciences as an assistant professor. She received her BA from Harvard University in linguistics and psychology and then completed her doctoral studies at MIT in 2007. After graduating from MIT, Fedorenko worked as a postdoc and then as a research scientist at the McGovern Institute for Brain Research. In 2014, she joined the faculty at Massachusetts General Hospital and Harvard Medical School, where she was an associate researcher and an assistant professor, respectively. She is also a member of the McGovern Institute.

Morgan Sheng focuses on the structure, function, and turnover of synapses, the junctions that allow communication between brain cells. His discoveries have improved our understanding of the molecular basis of cognitive function and diseases of the nervous system, such as autism, Alzheimer’s disease, and dementia. As both a physician and a scientist, he incorporates genetic as well as biological insights to aid the study and treatment of mental illnesses and neurodegenerative diseases. He rejoins the Department of Brain and Cognitive Sciences (BCS), returning as a professor of neuroscience, a position he also held from 2001 to 2008. At that time, he was a member of the Picower Institute for Learning and Memory, a joint appointee in the Department of Biology, and an investigator of the Howard Hughes Medical Institute. Sheng completed his medical training with a residency in London in 1986, earned his PhD from Harvard University in 1990, and finished a postdoc at the University of California at San Francisco in 1994. From 1994 to 2001, he researched molecular and cellular neuroscience at Massachusetts General Hospital and Harvard Medical School. From 2008 to 2019 he was vice president of neuroscience at Genentech, a leading biotech company. In addition to his faculty appointment in BCS, Sheng is a core institute member and co-director of the Stanley Center for Psychiatric Research at the Broad Institute of MIT and Harvard, as well as an affiliate member of the McGovern Institute and the Picower Institute.

Seychelle Vos studies genome organization and its effect on gene expression at the intersection of biochemistry and genetics. Vos uses X-ray crystallography, cryo-electron microscopy, and biophysical approaches to understand how transcription is physically coupled to the genome’s organization and structure. She joins the Department of Biology as an assistant professor after completing a postdoc at the Max Planck Institute for Biophysical Chemistry. Vos received her BS in genetics in 2008 from the University of Georgia and her PhD in molecular and cell biology in 2013 from the University of California at Berkeley.

Xiao Wang is a chemist and molecular engineer working to improve our understanding of biology and human health. She focuses on brain function and dysfunction, producing and applying new chemical, biophysical, and genomic tools at the molecular level. Previously, she focused on RNA modifications and how they impact cellular function. Wang is joining MIT as an assistant professor in the Department of Chemistry. She was previously a postdoc of the Life Science Research Foundation at Stanford University. Wang received her BS in chemistry and molecular engineering from Peking University in 2010 and her PhD in chemistry from the University of Chicago in 2015. She is also a core member of the Broad Institute of MIT and Harvard.

Ed Boyden wins premier Royal Society honor

Edward S. Boyden, the Y. Eva Tan Professor in Neurotechnology at MIT, has been awarded the 2019 Croonian Medal and Lecture by the Royal Society. Twenty-four medals and awards are announced by the Royal Society each year, honoring exceptional researchers who are making outstanding contributions to science.

“The Royal Society gives an array of medals and awards to scientists who have done exceptional, ground-breaking work,” explained Sir Venki Ramakrishnan, President of the Royal Society. “This year, it is again a pleasure to see these awards bestowed on scientists who have made such distinguished and far-reaching contributions in their fields. I congratulate and thank them for their efforts.”

Boyden wins the medal and lecture in recognition of his research that is expanding our understanding of the brain. This includes his critical role in the development of optogenetics, a technique for controlling brain activity with light, and his invention of expansion microscopy. Croonian Medal laureates include notable luminaries of science and neurobiology.

“It is a great honor to be selected to receive this medal, especially since it was also given to people such as Santiago Ramón y Cajal, the founder of modern neuroscience,” says Boyden. “This award reflects the great work of many fantastic students, postdocs, and collaborators who I’ve had the privilege to work with over the years.”

The award includes an invitation to deliver the premier British lecture in the biological sciences, given annually at the Royal Society in London. At the lecture, the winner is awarded a medal and a gift of £10,000. This announcement comes shortly after Boyden was co-awarded the Warren Alpert Prize for his role in developing optogenetics.

History of the Croonian Medal and Lecture

William Croone, FRS, who envisioned what became the premier biological sciences lecture at the Royal Society. Photo credit: Royal College of Physicians, London

The lectureship was conceived by William Croone FRS, one of the original Fellows of the Society based in London. Among the papers left on his death in 1684 were plans to endow two lectureships, one at the Royal Society and the other at the Royal College of Physicians. His widow later bequeathed the means to carry out the scheme. The lecture series began in 1738.

Ed Boyden holds the titles of Investigator, McGovern Institute; Y. Eva Tan Professor in Neurotechnology at MIT; Leader, Synthetic Neurobiology Group, MIT Media Lab; Professor, Biological Engineering, Brain and Cognitive Sciences, MIT Media Lab; Co-Director, MIT Center for Neurobiological Engineering; Member, MIT Center for Environmental Health Sciences, Computational and Systems Biology Initiative, and Koch Institute.

Ed Boyden receives 2019 Warren Alpert Prize

The 2019 Warren Alpert Foundation Prize has been awarded to four scientists, including Ed Boyden, for pioneering work that launched the field of optogenetics, a technique that uses light-sensitive channels and pumps to control the activity of neurons in the brain with a flick of a switch. He receives the prize alongside Karl Deisseroth, Peter Hegemann, and Gero Miesenböck, as outlined by The Warren Alpert Foundation in their announcement.

Harnessing light and genetics, the approach illuminates and modulates the activity of neurons, enables study of brain function and behavior, and helps reveal activity patterns that can overcome brain diseases.

Boyden’s work was key to envisioning and developing optogenetics, now a core method in neuroscience. The method allows brain circuits linked to complex behavioral processes, such as those involved in decision-making, feeding, and sleep, to be unraveled in genetic models. It is also helping to elucidate the mechanisms underlying neuropsychiatric disorders, and has the potential to inspire new strategies to overcome brain disorders.

“It is truly an honor to be included among the extremely distinguished list of winners of the Alpert Award,” says Boyden, the Y. Eva Tan Professor in Neurotechnology at the McGovern Institute, MIT. “To me personally, it is exciting to see the relatively new field of neurotechnology recognized. The brain implements our thoughts and feelings. It makes us who we are. This mystery and challenge requires new technologies to make the brain understandable and repairable. It is a great honor that our technology of optogenetics is being thus recognized.”

While they were students, Boyden and fellow awardee Karl Deisseroth brainstormed about how microbial opsins could be used to mediate optical control of neural activity. In mid-2004, the pair collaborated to show that microbial opsins can be used to optically control neural activity. Upon launching his lab at MIT, Boyden’s team developed the first optogenetic silencing tool, the first effective optogenetic silencing in live mammals, noninvasive optogenetic silencing, and single-cell optogenetic control.

“The discoveries made by this year’s four honorees have fundamentally changed the landscape of neuroscience,” said George Q. Daley, dean of Harvard Medical School. “Their work has enabled scientists to see, understand and manipulate neurons, providing the foundation for understanding the ultimate enigma—the human brain.”

Beyond optogenetics, Boyden has pioneered transformative technologies that image, record, and manipulate complex systems, including expansion microscopy, robotic patch clamping, and even shrinking objects to the nanoscale. He was elected this year to the ranks of the National Academy of Sciences, and selected as an HHMI Investigator. Boyden has received numerous awards for this work, including the 2018 Gairdner International Prize and the 2016 Breakthrough Prize in Life Sciences.

The Warren Alpert Foundation, in association with Harvard Medical School, honors scientists whose work has improved the understanding, prevention, treatment or cure of human disease. Prize recipients are selected by the foundation’s scientific advisory board, which is composed of distinguished biomedical scientists and chaired by the dean of Harvard Medical School. The honorees will share a $500,000 prize and will be recognized at a daylong symposium on Oct. 3 at Harvard Medical School.


How expectation influences perception

For decades, research has shown that our perception of the world is influenced by our expectations. These expectations, also called “prior beliefs,” help us make sense of what we are perceiving in the present, based on similar past experiences. Consider, for instance, how a shadow on a patient’s X-ray image, easily missed by a less experienced intern, jumps out at a seasoned physician. The physician’s prior experience helps her arrive at the most probable interpretation of a weak signal.

The process of combining prior knowledge with uncertain evidence is known as Bayesian integration and is believed to widely impact our perceptions, thoughts, and actions. Now, MIT neuroscientists have discovered distinctive brain signals that encode these prior beliefs. They have also found how the brain uses these signals to make judicious decisions in the face of uncertainty.

“How these beliefs come to influence brain activity and bias our perceptions was the question we wanted to answer,” says Mehrdad Jazayeri, the Robert A. Swanson Career Development Professor of Life Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

The researchers trained animals to perform a timing task in which they had to reproduce different time intervals. Performing this task is challenging because our sense of time is imperfect and can go too fast or too slow. However, when intervals are consistently within a fixed range, the best strategy is to bias responses toward the middle of the range. This is exactly what animals did. Moreover, recording from neurons in the frontal cortex revealed a simple mechanism for Bayesian integration: Prior experience warped the representation of time in the brain so that patterns of neural activity associated with different intervals were biased toward those that were within the expected range.

MIT postdoc Hansem Sohn, former postdoc Devika Narain, and graduate student Nicolas Meirhaeghe are the lead authors of the study, which appears in the July 15 issue of Neuron.

Ready, set, go

Statisticians have known for centuries that Bayesian integration is the optimal strategy for handling uncertain information. When we are uncertain about something, we automatically rely on our prior experiences to optimize behavior.

“If you can’t quite tell what something is, but from your prior experience you have some expectation of what it ought to be, then you will use that information to guide your judgment,” Jazayeri says. “We do this all the time.”

In this new study, Jazayeri and his team wanted to understand how the brain encodes prior beliefs, and put those beliefs to use in the control of behavior. To that end, the researchers trained animals to reproduce a time interval, using a task called “ready-set-go.” In this task, animals measure the time between two flashes of light (“ready” and “set”) and then generate a “go” signal by making a delayed response after the same amount of time has elapsed.

They trained the animals to perform this task in two contexts. In the “Short” scenario, intervals varied between 480 and 800 milliseconds, and in the “Long” context, intervals were between 800 and 1,200 milliseconds. At the beginning of the task, the animals were given the information about the context (via a visual cue), and therefore knew to expect intervals from either the shorter or longer range.

Jazayeri had previously shown that humans performing this task tend to bias their responses toward the middle of the range. Here, they found that animals do the same. For example, if animals believed the interval would be short, and were given an interval of 800 milliseconds, the interval they produced was a little shorter than 800 milliseconds. Conversely, if they believed it would be longer, and were given the same 800-millisecond interval, they produced an interval a bit longer than 800 milliseconds.
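This center-biased behavior is exactly what an ideal Bayesian observer would produce. As a rough illustration (not the paper’s actual model or parameters), the sketch below computes a posterior-mean estimate for an 800-millisecond measurement under each context, assuming a uniform prior over the context’s range and timing noise that scales with the interval:

```python
import numpy as np

def bayes_estimate(measurement, prior_lo, prior_hi, weber=0.1):
    """Posterior-mean (Bayes least-squares) estimate of a time interval.

    Assumes a uniform prior over [prior_lo, prior_hi] and Gaussian
    measurement noise whose standard deviation grows with the interval
    (scalar timing noise). Parameter values are illustrative only.
    """
    ts = np.linspace(prior_lo, prior_hi, 1000)   # candidate true intervals
    sigma = weber * ts                           # noise scales with interval
    like = np.exp(-0.5 * ((measurement - ts) / sigma) ** 2) / sigma
    post = like / like.sum()                     # uniform prior cancels out
    return float((ts * post).sum())              # posterior mean

# An 800 ms interval, judged under the two contexts:
short_est = bayes_estimate(800, 480, 800)    # pulled below 800 ms
long_est = bayes_estimate(800, 800, 1200)    # pulled above 800 ms
```

Because all prior mass in the Short context lies at or below 800 ms, the posterior mean is pulled below 800 ms, and symmetrically above it in the Long context, matching the biases the animals showed.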

“Trials that were identical in almost every possible way, except for the animal’s belief, led to different behaviors,” Jazayeri says. “That was compelling experimental evidence that the animal is relying on its own belief.”

Once they had established that the animals relied on their prior beliefs, the researchers set out to find how the brain encodes prior beliefs to guide behavior. They recorded activity from about 1,400 neurons in a region of the frontal cortex, which they have previously shown is involved in timing.

During the “ready-set” epoch, the activity profile of each neuron evolved in its own way, and about 60 percent of the neurons had different activity patterns depending on the context (Short versus Long). To make sense of these signals, the researchers analyzed the evolution of neural activity across the entire population over time, and found that prior beliefs bias behavioral responses by warping the neural representation of time toward the middle of the expected range.

“We have never seen such a concrete example of how the brain uses prior experience to modify the neural dynamics by which it generates sequences of neural activities, to correct for its own imprecision. This is the unique strength of this paper: bringing together perception, neural dynamics, and Bayesian computation into a coherent framework, supported by both theory and measurements of behavior and neural activities,” says Mate Lengyel, a professor of computational neuroscience at Cambridge University, who was not involved in the study.

Embedded knowledge

Researchers believe that prior experiences change the strength of connections between neurons. The strength of these connections, also known as synapses, determines how neurons act upon one another and constrains the patterns of activity that a network of interconnected neurons can generate. The finding that prior experiences warp the patterns of neural activity provides a window onto how experience alters synaptic connections. “The brain seems to embed prior experiences into synaptic connections so that patterns of brain activity are appropriately biased,” Jazayeri says.

As an independent test of these ideas, the researchers developed a computer model consisting of a network of neurons that could perform the same ready-set-go task. Using techniques borrowed from machine learning, they were able to modify the synaptic connections and create a model that behaved like the animals.

These models are extremely valuable as they provide a substrate for the detailed analysis of the underlying mechanisms, a procedure that is known as “reverse-engineering.” Remarkably, reverse-engineering the model revealed that it solved the task the same way the monkeys’ brains did. The model also had a warped representation of time according to prior experience.

The researchers used the computer model to further dissect the underlying mechanisms using perturbation experiments that are currently impossible to do in the brain. Using this approach, they were able to show that unwarping the neural representations removes the bias in the behavior. This important finding validated the critical role of warping in Bayesian integration of prior knowledge.

The researchers now plan to study how the brain builds up and slowly fine-tunes the synaptic connections that encode prior beliefs as an animal is learning to perform the timing task.

The research was funded by the Center for Sensorimotor Neural Engineering, the Netherlands Scientific Organization, the Marie Sklodowska Curie Reintegration Grant, the National Institutes of Health, the Sloan Foundation, the Klingenstein Foundation, the Simons Foundation, the McKnight Foundation, and the McGovern Institute.

New CRISPR platform expands RNA editing capabilities

CRISPR-based tools have revolutionized our ability to target disease-linked genetic mutations. CRISPR technology comprises a growing family of tools that can manipulate genes and their expression, including by targeting DNA with the enzymes Cas9 and Cas12 and targeting RNA with the enzyme Cas13. This collection offers different strategies for tackling mutations. Targeting disease-linked mutations in RNA, which is relatively short-lived, would avoid making permanent changes to the genome. In addition, some cell types, such as neurons, are difficult to edit using CRISPR/Cas9-mediated editing, and new strategies are needed to treat devastating diseases that affect the brain.

McGovern Institute Investigator and Broad Institute of MIT and Harvard core member Feng Zhang and his team have now developed one such strategy, called RESCUE (RNA Editing for Specific C to U Exchange), described in the journal Science.

Zhang and his team, including first co-authors Omar Abudayyeh and Jonathan Gootenberg (both now McGovern Fellows), made use of a deactivated Cas13 to guide RESCUE to targeted cytosine bases on RNA transcripts, and used a novel, evolved, programmable enzyme to convert unwanted cytosine into uridine — thereby directing a change in the RNA instructions. RESCUE builds on REPAIR, a technology developed by Zhang’s team that changes adenine bases into inosine in RNA.

RESCUE significantly expands the landscape that CRISPR tools can target to include modifiable positions in proteins, such as phosphorylation sites. Such sites act as on/off switches for protein activity and are notably found in signaling molecules and cancer-linked pathways.

“To treat the diversity of genetic changes that cause disease, we need an array of precise technologies to choose from. By developing this new enzyme and combining it with the programmability and precision of CRISPR, we were able to fill a critical gap in the toolbox,” says Zhang, the James and Patricia Poitras Professor of Neuroscience at MIT. Zhang also has appointments in MIT’s departments of Brain and Cognitive Sciences and Biological Engineering.

Expanding the reach of RNA editing to new targets

The previously developed REPAIR platform used the RNA-targeting CRISPR/Cas13 to direct the active domain of an RNA editor, ADAR2, to specific RNA transcripts where it could convert the nucleotide base adenine to inosine, or letters A to I. Zhang and colleagues took the REPAIR fusion, and evolved it in the lab until it could change cytosine to uridine, or C to U.

RESCUE can be guided to any RNA of choice, then perform a C-to-U edit through the evolved ADAR2 component of the platform. The team took the new platform into human cells, showing that they could target natural RNAs in the cell as well as 24 clinically relevant mutations in synthetic RNAs. They then further optimized RESCUE to reduce off-target editing, while minimally disrupting on-target editing.

New targets in sight

Expanded targeting by RESCUE means that sites regulating the activity and function of many proteins through post-translational modifications, such as phosphorylation, glycosylation, and methylation, can now be more readily targeted for editing.

A major advantage of RNA editing is its reversibility, in contrast to changes made at the DNA level, which are permanent. Thus, RESCUE could be deployed transiently in situations where a modification may be desirable temporarily, but not permanently. To demonstrate this, the team showed that in human cells, RESCUE can target specific sites in the RNA encoding β-catenin that are known to be phosphorylated on the protein product, leading to a temporary increase in β-catenin activation and cell growth. If such a change were made permanently, it could predispose cells to uncontrolled cell growth and cancer, but by using RESCUE, transient cell growth could potentially stimulate wound healing in response to acute injuries.

The researchers also targeted a pathogenic gene variant, APOE4. The APOE4 allele has consistently emerged as a genetic risk factor for the development of late-onset Alzheimer’s disease. The APOE4 isoform differs from APOE2, which is not a risk factor, at just two positions (both C in APOE4 vs. U in APOE2). Zhang and colleagues introduced the risk-associated APOE4 RNA into cells, and showed that RESCUE can convert its signature C’s to an APOE2 sequence, essentially converting a risk variant to a non-risk variant.
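As a purely illustrative toy, the targeted C-to-U conversion can be thought of as a guided character substitution on the RNA sequence. The function, sequence, and positions below are invented for this sketch and bear no relation to the real APOE transcript or to the biochemistry of the RESCUE machinery:

```python
def c_to_u_edit(rna, positions):
    """Toy model of a RESCUE-style targeted C-to-U edit on an RNA string.

    The real system recruits an evolved ADAR2 deaminase to sites chosen
    by a Cas13 guide RNA; here 'positions' simply stands in for that
    guidance.
    """
    bases = list(rna)
    for p in positions:
        if bases[p] == "C":       # only cytosines are converted
            bases[p] = "U"
    return "".join(bases)

# A hypothetical mini-transcript with two risk-associated C's,
# echoing the two C-vs-U differences between APOE4 and APOE2:
edited = c_to_u_edit("AUGCCGAC", positions=[3, 7])  # -> "AUGUCGAU"
```

Non-cytosine bases at a guided position are left untouched, mirroring the base specificity of the deaminase.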

To facilitate additional work that will push RESCUE toward the clinic as well as enable researchers to use RESCUE as a tool to better understand disease-causing mutations, the Zhang lab plans to share the RESCUE system broadly, as they have with previously developed CRISPR tools. The technology will be freely available for academic research through the non-profit plasmid repository Addgene. Additional information can be found on the Zhang lab’s webpage.

Support for the study was provided by The Phillips Family; J. and P. Poitras; the Poitras Center for Psychiatric Disorders Research; the Hock E. Tan and K. Lisa Yang Center for Autism Research; Robert Metcalfe; David Cheng; and an NIH F30 NRSA (1F30-CA210382) to Omar Abudayyeh. F.Z. is a New York Stem Cell Foundation–Robertson Investigator. F.Z. is supported by NIH grants (1R01-HG009761, 1R01-MH110049, and 1DP1-HL141201); the Howard Hughes Medical Institute; and the New York Stem Cell Foundation and G. Harold and Leila Mathers Foundations.

Artificial “muscles” achieve powerful pulling force

As a cucumber plant grows, it sprouts tightly coiled tendrils that seek out supports in order to pull the plant upward. This ensures the plant receives as much sunlight exposure as possible. Now, researchers at MIT have found a way to imitate this coiling-and-pulling mechanism to produce contracting fibers that could be used as artificial muscles for robots, prosthetic limbs, or other mechanical and biomedical applications.

While many different approaches have been used for creating artificial muscles, including hydraulic systems, servo motors, shape-memory metals, and polymers that respond to stimuli, they all have limitations, including high weight or slow response times. The new fiber-based system, by contrast, is extremely lightweight and can respond very quickly, the researchers say. The findings are being reported today in the journal Science.

The new fibers were developed by MIT postdoc Mehmet Kanik and MIT graduate student Sirma Örgüç, working with professors Polina Anikeeva, Yoel Fink, Anantha Chandrakasan, and C. Cem Taşan, and five others, using a fiber-drawing technique to combine two dissimilar polymers into a single strand of fiber.

The key to the process is mating together two materials that have very different thermal expansion coefficients — meaning they have different rates of expansion when they are heated. This is the same principle used in many thermostats, for example, using a bimetallic strip as a way of measuring temperature. As the joined material heats up, the side that wants to expand faster is held back by the other material. As a result, the bonded material curls up, bending toward the side that is expanding more slowly.
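The bending can be estimated with the classic result for a two-layer strip, here simplified to equal layer thicknesses and stiffnesses, which gives curvature κ = 3·Δα·ΔT / (2h). The material numbers below are illustrative guesses, not measured properties of the fibers in this work:

```python
def bimorph_curvature(d_alpha, d_temp, thickness):
    """Curvature (1/m) of a heated two-layer strip, using the classic
    bimetal result simplified to equal layer thicknesses and equal
    elastic moduli.

    d_alpha:   difference in thermal expansion coefficients (1/K)
    d_temp:    temperature change (K)
    thickness: total strip thickness (m)
    """
    return 1.5 * d_alpha * d_temp / thickness

# Illustrative values: an elastomer-vs-polyethylene expansion mismatch
# of ~1e-4 per kelvin, a 10 K warm-up, and a 100-micrometer fiber.
kappa = bimorph_curvature(d_alpha=1e-4, d_temp=10.0, thickness=100e-6)
radius_mm = 1e3 / kappa  # bend radius in millimeters
```

Even this crude estimate shows why small temperature changes suffice: a modest warm-up bends a thin fiber into centimeter-scale curls, and a pre-formed coil converts that bending into contraction along its length.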

Credit: Courtesy of the researchers

Using two different polymers bonded together, a very stretchable cyclic copolymer elastomer and a much stiffer thermoplastic polyethylene, Kanik, Örgüç, and colleagues produced a fiber that, when stretched out to several times its original length, naturally forms itself into a tight coil, very similar to the tendrils that cucumbers produce. But what happened next came as a surprise. “There was a lot of serendipity in this,” Anikeeva recalls.

As soon as Kanik picked up the coiled fiber for the first time, the warmth of his hand alone caused the fiber to curl up more tightly. Following up on that observation, he found that even a small increase in temperature could make the coil tighten up, producing a surprisingly strong pulling force. Then, as soon as the temperature went back down, the fiber returned to its original length. In later testing, the team showed that this process of contracting and expanding could be repeated 10,000 times “and it was still going strong,” Anikeeva says.


One of the reasons for that longevity, she says, is that “everything is operating under very moderate conditions,” including low activation temperatures. Just a 1-degree Celsius increase can be enough to start the fiber contraction.

The fibers can span a wide range of sizes, from a few micrometers (millionths of a meter) to a few millimeters (thousandths of a meter) in width, and can easily be manufactured in batches up to hundreds of meters long. Tests have shown that a single fiber is capable of lifting loads of up to 650 times its own weight. For these experiments on individual fibers, Örgüç and Kanik have developed dedicated, miniaturized testing setups.


The degree of tightening that occurs when the fiber is heated can be “programmed” by determining how much of an initial stretch to give the fiber. This allows the material to be tuned to exactly the amount of force needed and the amount of temperature change needed to trigger that force.

The fibers are made using a fiber-drawing system, which makes it possible to incorporate other components into the fiber itself. Fiber drawing is done by creating an oversized version of the material, called a preform, which is then heated to a specific temperature at which the material becomes viscous. It can then be pulled, much like pulling taffy, to create a fiber that retains its internal structure but is a small fraction of the width of the preform.

For testing purposes, the researchers coated the fibers with meshes of conductive nanowires. These meshes can be used as sensors to reveal the exact tension experienced or exerted by the fiber. In the future, these fibers could also include heating elements such as optical fibers or electrodes, providing a way of heating the fibers internally without having to rely on any outside heat source to activate the contraction of the “muscle.”

Such fibers could find uses as actuators in robotic arms, legs, or grippers, and in prosthetic limbs, where their slight weight and fast response times could provide a significant advantage.

Some prosthetic limbs today can weigh as much as 30 pounds, with much of the weight coming from actuators, which are often pneumatic or hydraulic; lighter-weight actuators could thus make life much easier for those who use prosthetics. Such fibers might also find uses in tiny biomedical devices, such as “a medical robot that works by going into an artery and then being activated,” Anikeeva suggests. “We have activation times on the order of tens of milliseconds to seconds,” depending on the dimensions, she says.

To provide greater strength for lifting heavier loads, the fibers can be bundled together, much as muscle fibers are bundled in the body. The team successfully tested bundles of 100 fibers. Through the fiber drawing process, sensors could also be incorporated in the fibers to provide feedback on conditions they encounter, such as in a prosthetic limb. Örgüç says bundled muscle fibers with a closed-loop feedback mechanism could find applications in robotic systems where automated and precise control is required.

Kanik says that the possibilities for materials of this type are virtually limitless, because almost any combination of two materials with different thermal expansion rates could work, leaving a vast realm of possible combinations to explore. He adds that this new finding was like opening a new window, only to see “a bunch of other windows” waiting to be opened.

“The strength of this work is coming from its simplicity,” he says.

The team also included MIT graduate student Georgios Varnavides, postdoc Jinwoo Kim, and undergraduate students Thomas Benavides, Dani Gonzalez, and Timothy Akintlio. The work was supported by the National Institute of Neurological Disorders and Stroke and the National Science Foundation.

Speaking many languages

Ev Fedorenko studies the cognitive processes and brain regions underlying language, a signature cognitive skill that is uniquely and universally human. She investigates both people with linguistic impairments and those with exceptional language skills: hyperpolyglots, people who are fluent in over a dozen languages. Indeed, she was recently interviewed for a BBC documentary about superlinguists, as well as by The New Yorker for an article covering people with exceptional language skills.

When Fedorenko, an associate investigator at the McGovern Institute and assistant professor in the Department of Brain and Cognitive Sciences at MIT, came to the field, neuroscientists were still debating whether high-level cognitive skills, such as language, are processed by multi-functional or dedicated brain regions. Using fMRI, Fedorenko and colleagues compared the engagement of brain regions when individuals performed linguistic versus other high-level cognitive tasks, such as arithmetic or music. Their data revealed a clear distinction between language and other cognitive processes, showing that our brains have dedicated language regions.

Here is my basic question. How do I get a thought from my mind into yours?

In the time since this key study, Fedorenko has continued to unpack language in the brain. How does the brain process the overarching rules and structure of language (syntax), as opposed to the meanings of words? How do we construct complex meanings? What might underlie communicative difficulties in individuals diagnosed with autism? How does the aphasic brain recover language? Intriguingly, in contrast to individuals with linguistic difficulties, there are also individuals who stand out for their ability to master many languages, so-called hyperpolyglots.

In 2013, she came across a language prodigy: a young adult who had mastered over 30 languages. To facilitate her analysis of how different languages are processed, Fedorenko has collected dozens of translations of Alice in Wonderland for her ‘Alice in the language localizer Wonderland’ project. She has already found that hyperpolyglots tend to show less activity in linguistic processing regions when reading in, or listening to, their native language compared to carefully matched controls, perhaps indexing more efficient processing mechanisms. Fedorenko continues to study hyperpolyglots, along with other exciting new avenues of research. Stay tuned for upcoming advances in our understanding of the brain and language.

Bridging the gap between research and the classroom

In a moment more reminiscent of a Comic-Con event than a typical MIT symposium, Shawn Robinson, senior research associate at the University of Wisconsin at Madison, helped kick off the first-ever MIT Science of Reading event dressed in full superhero attire as Doctor Dyslexia Dude — the star of a graphic novel series he co-created to engage and encourage young readers, rooted in his own experiences as a student with dyslexia.

The event, co-sponsored by the MIT Integrated Learning Initiative (MITili) and the McGovern Institute for Brain Research at MIT, took place earlier this month and brought together researchers, educators, administrators, parents, and students to explore how scientific research can better inform educational practices and policies — equipping teachers with scientifically based strategies that may lead to better outcomes for students.

Professor John Gabrieli, MITili director, explained the great need to focus the collective efforts of educators and researchers on literacy.

“Reading is critical to all learning and all areas of knowledge. It is the first great educational experience for all children, and can shape a child’s first sense of self,” he said. “If reading is a challenge or a burden, it affects children’s social and emotional core.”

A great divide

Reading is also a particularly important area to address because so many American students struggle with this fundamental skill. More than six out of every 10 fourth graders in the United States are not proficient readers, and reading scores for fourth and eighth graders have increased only slightly since 1992, according to the National Assessment of Educational Progress.

Gabrieli explained that, just as with biomedical research, where there can be a “valley of death” between basic research and clinical application, the same seems to apply to education. Although there is substantial current research aiming to better understand why students might have difficulty reading in the ways they are currently taught, the research often does not necessarily shape the practices of teachers — or how the teachers themselves are trained to teach.

This divide between research and practical applications in the classroom might stem from a variety of factors. One issue might be the inaccessibility of research publications, which are often not freely available to all, along with the general need for scientific findings to be communicated in a clear, accessible, engaging way that can lead to actual implementation. Another challenge is the stark difference in pacing between scientific research and classroom teaching. While research can take years to complete and publish, teachers have classrooms full of students, all with different strengths and challenges, who urgently need to learn in real time.

Natalie Wexler, author of “The Knowledge Gap,” described some of the obstacles to getting the findings of cognitive science integrated into the classroom as matters of “head, heart, and habit.” Teacher education programs tend to focus more on some of the outdated psychological models, like Piaget’s theory of cognitive development, and less on recent cognitive science research. Teachers also have to face the emotional realities of working with their students, and might be concerned that a new approach would cause students to feel bored or frustrated. In terms of habit, some new, evidence-based approaches may be, in a practical sense, difficult for teachers to incorporate into the classroom.

“Teaching is an incredibly complex activity,” noted Wexler.

From labs to classrooms

Throughout the day, speakers and panelists highlighted some key insights gained from literacy research, along with some of the implications these might have for education.

Mark Seidenberg, professor of psychology at the University of Wisconsin at Madison and author of “Language at the Speed of Sight,” discussed studies indicating the strong connection between spoken and printed language.

“Reading depends on speech,” said Seidenberg. “Writing systems are codes for expressing spoken language … Spoken language deficits have an enormous impact on children’s reading.”

The integration of speech and reading in the brain increases with reading skill. For skilled readers, the patterns of brain activity (measured using functional magnetic resonance imaging) while comprehending spoken and written language are very similar. Becoming literate affects the neural representation of speech, and knowledge of speech affects the representation of print — thus the two become deeply intertwined.

In addition, researchers have found that the language of books, even books for young children, includes words and expressions that are rarely encountered in speech to children. Reading aloud to children therefore exposes them to a broader range of linguistic expressions, including more complex ones that are usually only taught much later. Thus reading to children can be especially important, as research indicates that better knowledge of spoken language facilitates learning to read.

Although behavior and performance on tests are often used as indicators of how well a student can read, neuroscience data can now provide additional information. Neuroimaging of children and young adults identifies brain regions that are critical for integrating speech and print, and can spot differences in the brain activity of a child who might be especially at-risk for reading difficulties. Brain imaging can also show how readers’ brains respond to certain reading and comprehension tasks, and how they adapt to different circumstances and challenges.

“Brain measures can be more sensitive than behavioral measures in identifying true risk,” said Ola Ozernov-Palchik, a postdoc at the McGovern Institute.

Ozernov-Palchik hopes to apply what her team is learning in their current studies to predict reading outcomes for other children, as well as continue to investigate individual differences in dyslexia and dyslexia-risk using behavior and neuroimaging methods.

Identifying certain differences early on can be tremendously helpful in providing much-needed early interventions and tailored solutions. Many speakers noted the problem with the current “wait-to-fail” model, in which a child's reading difficulties are noticed only in second or third grade, and intervention begins after that. Research suggests that earlier intervention could help the child succeed much more than later intervention.

Speakers and panelists spoke about current efforts, including Reach Every Reader (a collaboration between MITili, the Harvard Graduate School of Education, and the Florida Center for Reading Research), that seek to provide support to students by bringing together education practitioners and scientists.

“We have a lot of information, but we have the challenge of how to enact it in the real world,” said Gabrieli, noting that he is optimistic about the potential for the additional conversations and collaborations that might grow out of the discussions of the Science of Reading event. “We know a lot of things can be better and will require partnerships, but there is a path forward.”

Mark Harnett receives a 2019 McKnight Scholar Award

McGovern Institute investigator Mark Harnett is one of six young researchers selected to receive a prestigious 2019 McKnight Scholar Award. The award supports his research “studying how dendrites, the antenna-like input structures of neurons, contribute to computation in neural networks.”

Harnett examines the biophysical properties of single neurons, ultimately aiming to understand how these relate to the complex computations that underlie behavior. His lab was the first to examine the biophysical properties of human dendrites. The Harnett lab found that human neurons have distinct properties, including increased dendritic compartmentalization that could allow more complex computations within single neurons. His lab recently discovered that such dendritic computations are not rare, or confined to specific behaviors, but are a widespread and general feature of neuronal activity.

“As a young investigator, it is hard to prioritize so many exciting directions and ideas,” explains Harnett. “I really want to thank the McKnight Foundation, both for the support, but also for the hard work the award committee puts into carefully thinking about and giving feedback on proposals. It means a lot to get this type of endorsement from a seriously committed and distinguished committee, and their support gives even stronger impetus to pursue this research direction.”

The McKnight Foundation has supported neuroscience research since 1977 and provides three prominent awards; the Scholar Award is aimed at supporting young scientists and draws applications from the strongest young neuroscience faculty across the US. William L. McKnight (1887-1979) was an early leader of the 3M Company and had a personal interest in memory and brain diseases. The McKnight Foundation was established with this focus in mind, and the Scholar Award provides $75,000 per year for three years to support cutting-edge neuroscience research.


A chemical approach to imaging cells from the inside

A team of researchers at the McGovern Institute and the Broad Institute of MIT and Harvard has developed a new technique for mapping cells. The approach, called DNA microscopy, shows how biomolecules such as DNA and RNA are organized in cells and tissues, revealing spatial and molecular information that is not easily accessible through other microscopy methods. DNA microscopy also does not require specialized equipment, enabling large numbers of samples to be processed simultaneously.

“DNA microscopy is an entirely new way of visualizing cells that captures both spatial and genetic information simultaneously from a single specimen,” says first author Joshua Weinstein, a postdoctoral associate at the Broad Institute. “It will allow us to see how genetically unique cells — those comprising the immune system, cancer, or the gut, for instance — interact with one another and give rise to complex multicellular life.”

The new technique is described in Cell. Aviv Regev, core institute member and director of the Klarman Cell Observatory at the Broad Institute and professor of biology at MIT, and Feng Zhang, core institute member of the Broad Institute, investigator at the McGovern Institute for Brain Research at MIT, and the James and Patricia Poitras Professor of Neuroscience at MIT, are co-authors. Regev and Zhang are also Howard Hughes Medical Institute Investigators.

The evolution of biological imaging

In recent decades, researchers have developed tools to collect molecular information from tissue samples, data that cannot be captured by either light or electron microscopes. However, attempts to couple this molecular information with spatial data — to see how it is naturally arranged in a sample — are often machinery-intensive, with limited scalability.

DNA microscopy takes a new approach to combining molecular information with spatial data, using DNA itself as a tool.

To visualize a tissue sample, researchers first add small synthetic DNA tags, which latch on to molecules of genetic material inside cells. The tags are then replicated, diffusing in “clouds” across cells and chemically reacting with each other, further combining and creating more unique DNA labels. The labeled biomolecules are collected, sequenced, and computationally decoded to reconstruct their relative positions and a physical image of the sample.

The interactions between these DNA tags enable researchers to calculate the locations of the different molecules — somewhat analogous to cell phone towers triangulating the locations of different cell phones in their vicinity. Because the process only requires standard lab tools, it is efficient and scalable.
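The triangulation analogy can be made concrete with a toy model. The sketch below is purely illustrative and is not the authors' actual DNA microscopy algorithm: it assumes a hypothetical exponential-decay relationship between distance and interaction counts, inverts that model to get distance estimates, and then recovers relative positions with classical multidimensional scaling (MDS).

```python
import numpy as np

# Toy sketch (hypothetical, not the published method): recover relative
# 2D positions of molecules from pairwise "interaction counts" alone.
rng = np.random.default_rng(0)
true_positions = rng.uniform(0, 10, size=(20, 2))  # 20 molecules in a plane

# Simulate idealized, noise-free counts: closer pairs react more often.
dists = np.linalg.norm(true_positions[:, None] - true_positions[None, :], axis=-1)
counts = np.exp(-dists)  # assumed diffusion model

# Invert the assumed model to get squared-distance estimates, then run
# classical MDS (double-centering plus eigendecomposition).
est_d2 = np.log(np.maximum(counts, 1e-12)) ** 2
n = est_d2.shape[0]
J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
B = -0.5 * J @ est_d2 @ J                  # Gram matrix of centered coordinates
vals, vecs = np.linalg.eigh(B)             # eigenvalues in ascending order
coords = vecs[:, -2:] * np.sqrt(np.maximum(vals[-2:], 0))  # top-2 components

# The geometry is recovered only up to rotation/reflection/translation,
# so compare pairwise distances rather than raw coordinates.
recovered_dists = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
```

With noise-free counts the recovered pairwise distances match the true ones; in practice, the real decoding problem involves noisy counts and far more molecules, which is why the published method relies on dedicated computational reconstruction.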

In this study, the authors demonstrate the ability to molecularly map the locations of individual human cancer cells in a sample by tagging RNA molecules. DNA microscopy could be used to map any group of molecules that will interact with the synthetic DNA tags, including cellular genomes, RNA, or proteins with DNA-labeled antibodies, according to the team.

“DNA microscopy gives us microscopic information without a microscope-defined coordinate system,” says Weinstein. “We’ve used DNA in a way that’s mathematically similar to photons in light microscopy. This allows us to visualize biology as cells see it and not as the human eye does. We’re excited to use this tool in expanding our understanding of genetic and molecular complexity.”

Funding for this study was provided by the Simons Foundation, the Klarman Cell Observatory, the NIH (R01HG009276, 1R01-HG009761, 1R01-MH110049, and 1DP1-HL141201), the New York Stem Cell Foundation, the Paul G. Allen Family Foundation, the Vallee Foundation, the Poitras Center for Affective Disorders Research at MIT, the Hock E. Tan and K. Lisa Yang Center for Autism Research at MIT, J. and P. Poitras, and R. Metcalfe.

The authors have applied for a patent on this technology.