Hearing through the clatter

In a busy coffee shop, our eardrums are inundated with sound waves – people chatting, the clatter of cups, music playing – yet our brains somehow manage to untangle relevant sounds, like a barista announcing that our “coffee is ready,” from insignificant noise. A new McGovern Institute study sheds light on how the brain accomplishes the task of extracting meaningful sounds from background noise – findings that could one day help to build artificial hearing systems and aid development of targeted hearing prosthetics.

“These findings reveal a neural correlate of our ability to listen in noise, and at the same time demonstrate functional differentiation between different stages of auditory processing in the cortex,” explains Josh McDermott, an associate professor of brain and cognitive sciences at MIT, a member of the McGovern Institute and the Center for Brains, Minds and Machines, and the senior author of the study.

The auditory cortex, a part of the brain that responds to sound, has long been known to have distinct anatomical subregions, but the role these areas play in auditory processing has remained a mystery. In their study published today in Nature Communications, McDermott and former graduate student Alex Kell discovered that these subregions respond differently to the presence of background noise, suggesting that auditory processing occurs in steps that progressively home in on and isolate a sound of interest.

Background check

Previous studies have shown that the primary and non-primary subregions of the auditory cortex respond to sound with different dynamics, but these studies were largely based on brain activity in response to speech or simple synthetic sounds (such as tones and clicks). Little was known about how these regions might work to subserve everyday auditory behavior.

To test these subregions under more realistic conditions, McDermott and Kell, who is now a postdoctoral researcher at Columbia University, assessed changes in human brain activity while subjects listened to natural sounds with and without background noise.

While lying in an MRI scanner, subjects listened to 30 different natural sounds, ranging from meowing cats to ringing phones, that were presented alone or embedded in real-world background noise such as heavy rain.

“When I started studying audition,” explains Kell, “I started just sitting around in my day-to-day life, just listening, and was astonished at the constant background noise that seemed to usually be filtered out by default. Most of these noises tended to be pretty stable over time, suggesting we could experimentally separate them. The project flowed from there.”

To their surprise, Kell and McDermott found that the primary and non-primary regions of the auditory cortex responded differently to natural sound depending upon whether background noise was present.

Primary auditory cortex (outlined in white) responses change (blue) when background noise is present, whereas non-primary activity is robust to background noise (yellow). Image: Alex Kell

They found that activity in the primary auditory cortex was altered when background noise was present, suggesting that this region had not yet differentiated meaningful sounds from background noise. Non-primary regions, however, responded similarly to natural sounds irrespective of whether noise was present, suggesting that cortical signals generated by sound are transformed or “cleaned up” to remove background noise by the time they reach the non-primary auditory cortex.
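
The logic of this comparison can be captured in a few lines of code. Below is a minimal sketch, using made-up response data rather than the study’s actual fMRI measurements: for each region, correlate the multi-voxel response pattern evoked by each sound alone with the pattern evoked by the same sound embedded in noise. A noise-robust region yields high correlations; a noise-sensitive one does not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multi-voxel responses: 30 natural sounds x 100 voxels per region.
# (Illustrative random data, not the study's measurements.)
n_sounds, n_voxels = 30, 100
clean = rng.normal(size=(n_sounds, n_voxels))

# A noise-robust region responds almost identically when background noise
# is added; a noise-sensitive region's responses are strongly perturbed.
nonprimary_noisy = clean + 0.2 * rng.normal(size=clean.shape)
primary_noisy = clean + 2.0 * rng.normal(size=clean.shape)

def mean_pattern_correlation(a, b):
    """Average correlation between paired response patterns (one per sound)."""
    return np.mean([np.corrcoef(x, y)[0, 1] for x, y in zip(a, b)])

# High correlation means responses are unchanged by noise (noise-robust).
print("primary:    ", round(mean_pattern_correlation(clean, primary_noisy), 2))
print("non-primary:", round(mean_pattern_correlation(clean, nonprimary_noisy), 2))
```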

“We were surprised by how big the difference was between primary and non-primary areas,” explained Kell, “so we ran a bunch more subjects but kept seeing the same thing. We had a ton of questions about what might be responsible for this difference, and that’s why we ended up running all these follow-up experiments.”

A general principle

Kell and McDermott went on to test whether these responses were specific to particular sounds, and discovered that the effect held regardless of the source or type of sound. Music, speech, or a squeaky toy all activated the non-primary regions similarly, whether or not background noise was present.

The authors also tested whether attention is relevant. Even when the researchers sneakily distracted subjects with a visual task in the scanner, the cortical subregions responded to meaningful sound and background noise in the same way, showing that attention is not driving this aspect of sound processing. In other words, even when we are focused on reading a book, our brain is diligently sorting the sound of our meowing cat from the patter of heavy rain outside.

Future directions

The McDermott lab is now building computational models of the so-called “noise robustness” found in the Nature Communications study, and Kell is pursuing a finer-grained understanding of sound processing in his postdoctoral work at Columbia, exploring the neural circuit mechanisms underlying this phenomenon.

By gaining a deeper understanding of how the brain processes sound, the researchers hope their work will contribute to improved diagnosis and treatment of hearing dysfunction. Such research could help to reveal the origins of listening difficulties that accompany developmental disorders or age-related hearing loss. For instance, if hearing loss results from dysfunction in sensory processing, this could manifest as abnormal noise robustness in the auditory cortex. Normal noise robustness might instead suggest that the impairment lies elsewhere in the brain, for example a breakdown in higher executive function.

“In the future,” McDermott says, “we hope these noninvasive measures of auditory function may become valuable tools for clinical assessment.”

Call for Nominations: 2020 Scolnick Prize in Neuroscience

The McGovern Institute is now accepting nominations for the Scolnick Prize in Neuroscience, which recognizes an outstanding discovery or significant advance in any field of neuroscience. Nominations are due by December 15, 2019.

About the Scolnick Prize

The prize is named in honor of Edward M. Scolnick, who stepped down as president of Merck Research Laboratories in December 2002 after holding Merck’s top research post for 17 years. The prize, which is endowed through a gift from Merck to the McGovern Institute, consists of a $150,000 award, plus an inscribed gift. The recipient presents a public lecture at MIT, hosted by the McGovern Institute and followed by a dinner in Spring 2020.

Nomination Process

Candidates for the award must be nominated by individuals with a background in neuroscience who are affiliated with universities, hospitals, medical schools, or research institutes. Self-nomination is not permitted. Each nomination should include a biosketch or CV of the nominee and a letter of nomination with a summary and analysis of the nominee’s major contributions to the field of neuroscience. Up to two representative reprints will be accepted. The winner, selected by a committee appointed by the director of the McGovern Institute, will be announced in January 2020.

More information about the Scolnick Prize, including details about the nomination process, selection committee, and past Scolnick Prize recipients, can be found on our website.


Finding the brain’s compass

The world constantly bombards our senses with information, but the ways in which our brain extracts meaning from this information remain elusive. How do neurons transform raw visual input into a mental representation of an object – like a chair or a dog?

In work published today in Nature Neuroscience, MIT neuroscientists have identified a brain circuit in mice that distills “high-dimensional” complex information about the environment into a simple abstract object in the brain.

“There are no degree markings in the external world; our current head direction has to be extracted, computed, and estimated by the brain,” explains Ila Fiete, an associate member of the McGovern Institute and senior author of the paper. “The approaches we used allowed us to demonstrate the emergence of a low-dimensional concept, essentially an abstract compass in the brain.”

This abstract compass, according to the researchers, is a one-dimensional ring that represents the current direction of the head relative to the external world.

Schooling fish

Trying to show that a data cloud has a simple shape, like a ring, is a bit like watching a school of fish. By tracking one or two sardines, you might not see a pattern. But if you could map all of the sardines, transforming the noisy dataset into points representing the positions of the whole school over time and the location of each fish relative to its neighbors, a pattern would emerge: a ring, a simple shape formed by the movements of hundreds of individual fish.

Fiete, who is also an associate professor in MIT’s Department of Brain and Cognitive Sciences, used a similar approach, called topological modeling, to transform the activity of large populations of noisy neurons into a data cloud the shape of a ring.
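
The gist of that transformation can be illustrated with a toy simulation. Below is a minimal sketch, assuming idealized cosine-tuned neurons rather than real recordings: generate population activity across many head directions, project the high-dimensional cloud onto its top two principal components, and check that the points trace a ring. (The study used topological methods rather than plain PCA; this only conveys the intuition.)

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy population: 50 neurons, each cosine-tuned to a preferred head direction.
n_neurons, n_timepoints = 50, 2000
preferred = np.linspace(0, 2 * np.pi, n_neurons, endpoint=False)
heading = rng.uniform(0, 2 * np.pi, n_timepoints)  # head direction over time

# Each row is the noisy population state at one moment in time.
activity = np.cos(heading[:, None] - preferred[None, :])
activity += 0.3 * rng.normal(size=activity.shape)

# Project the 50-dimensional cloud onto its top two principal components.
centered = activity - activity.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
projected = centered @ vt[:2].T

# A near-constant radius means the points trace a ring, and the angle on
# that ring tracks the head direction that generated each state.
radii = np.linalg.norm(projected, axis=1)
print("radius spread:", round(float(radii.std() / radii.mean()), 2))
```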

Simple and persistent ring

Previous work in fly brains revealed a physical ellipsoid ring of neurons representing changes in the direction of the fly’s head, and researchers suspected that such a system might also exist in mammals.

In this new mouse study, Fiete and her colleagues measured hours of neural activity from scores of neurons in the anterodorsal thalamic nucleus (ADN) – a region believed to play a role in spatial navigation – as the animals moved freely around their environment. They mapped how the neurons in the ADN circuit fired as the animal’s head changed direction.

Together these data points formed a cloud in the shape of a simple and persistent ring.

“In the absence of this ring,” Fiete explains, “we would be lost in the world.”

“This tells us a lot about how neural networks are organized in the brain,” explains Edvard Moser, Director of the Kavli Institute of Systems Neuroscience in Norway, who was not involved in the study. “Past data have indirectly pointed towards such a ring-like organization but only now has it been possible, with the right cell numbers and methods, to demonstrate it convincingly,” says Moser.

Their method for characterizing the shape of the data cloud allowed Fiete and colleagues to determine which variable the circuit was devoted to representing, and to decode this variable over time, using only the neural responses.

“The animal’s doing really complicated stuff,” explains Fiete, “but this circuit is devoted to integrating the animal’s speed along a one-dimensional compass that encodes head direction. Without a manifold approach, which captures the whole state space, you wouldn’t know that this circuit of thousands of neurons is encoding only this one aspect of the complex behavior, and not encoding any other variables at the same time.”

Even during sleep, when the circuit is not being bombarded with external information, this circuit robustly traces out the same one-dimensional ring, as if dreaming of past head direction trajectories.

Further analysis revealed that the ring acts as an attractor. If neurons stray off trajectory, they are drawn back to it, quickly correcting the system. This attractor property means that the representation of head direction in abstract space is reliably stable over time, a key requirement for maintaining a stable sense of where our head is relative to the world around us.
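
A toy rate model conveys the attractor idea. Below is a minimal sketch, not the paper’s model: neurons on a ring excite their near neighbors and share global inhibition, so a perturbed activity pattern relaxes back to a clean bump whose position on the ring encodes heading.

```python
import numpy as np

rng = np.random.default_rng(2)

# Neurons on a ring: local excitation (von Mises kernel) plus global
# inhibition makes a single bump of activity the network's preferred state.
n = 120
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
diff = np.angle(np.exp(1j * (theta[:, None] - theta[None, :])))
W = np.exp(np.cos(diff)) / n  # excitation falls off with distance on the ring

state = np.exp(np.cos(theta - np.pi))  # bump encoding "heading = pi"
state += 0.8 * rng.random(n)           # perturb: knock activity off the ring

for _ in range(30):
    drive = W @ state
    state = np.maximum(drive - drive.mean(), 0.0)  # subtractive inhibition
    state /= state.max()                           # keep amplitude bounded

# The relaxed state is a clean bump back near its original position.
decoded = np.angle(np.sum(state * np.exp(1j * theta))) % (2 * np.pi)
print(f"decoded heading after relaxation: {decoded:.2f} (true: {np.pi:.2f})")
```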


Shaping the future

Fiete’s work provides a first glimpse into how complex sensory information is distilled into a simple concept in the mind, and how that representation autonomously corrects errors, making it exquisitely stable.

But the implications of this study go beyond coding of head direction.

“Similar organization is probably present for other cognitive functions so the paper is likely to inspire numerous new studies,” says Moser.

Fiete sees these analyses and related studies carried out by colleagues at the Norwegian University of Science and Technology, Princeton University, the Weizmann Institute, and elsewhere as fundamental to the future of neural decoding studies.

With this approach, she explains, it is possible to extract abstract representations of the mind from the brain, potentially even thoughts and dreams.

“We’ve found that the brain deconstructs and represents complex things in the world with simple shapes,” explains Fiete. “Manifold-level analysis can help us to find those shapes, and they almost certainly exist beyond head direction circuits.”

Do thoughts have mass?

As part of our Ask the Brain series, we received the question, “Do thoughts have mass?” The following is a guest blog post by Michal De-Medonsa, technical associate and manager of the Jazayeri lab, who tapped into her background in philosophy to answer this intriguing question.

_____

Jazayeri lab manager (and philosopher) Michal De-Medonsa.

To answer the question, “Do thoughts have mass?” we must, like any good philosopher, define something that already has a definition – “thoughts.”

Logically, we can assert that thoughts are either metaphysical or physical (beyond that, we run out of options). If our definition of thought is metaphysical, it is safe to say that metaphysical thoughts do not have mass, since they are by definition not physical, and mass is a property of physical things. However, if we define a thought as a physical thing, it becomes a little trickier to determine whether or not it has mass.

A physical definition of thoughts falls into (at least) two subgroups – physical processes and physical parts. Take driving a car, for example – a parts definition describes the doors, motor, etc., and has mass. A process definition – the car being driven, the wheel turning, the vehicle moving from point A to point B – does not have mass. The process of driving is a physical process that involves moving physical matter, but we wouldn’t say that the act of driving has mass. The car itself, however, is an example of physical matter, and as any cyclist in the city of Boston is well aware, cars have mass. It’s clear that if we define a thought as a process, it does not have mass, and if we define a thought as physical parts, it does have mass – so which one is it? Is a thought a process or parts? That is, is a thought more like driving or more like a car?

In order to resolve our issue, we have to be incredibly precise with our definition of the word thought.

Both physical definitions (process and parts) have merit. For a parts definition, we can look at what is required for a thought – neurons, electrical signals, neurochemicals, etc. This type of definition, however, becomes quite imprecise and limiting. It doesn’t seem too problematic to say that the neurons, neurochemicals, etc. are themselves the thought, but this style of definition starts to fall apart when we try to include all the parts involved (e.g. blood flow, connective tissue, outside stimuli). When we look at a face, the stimuli received by the visual cortex are part of the thought – is the face itself part of the thought? When we look at our phone, is the phone part of the thought? A parts definition either needs an arbitrary limit, or we end up having to include all possible parts involved in the thought, ending up with an incredibly convoluted and effectively useless definition.

A process definition is more versatile and precise, and it allows us to include all the physical parts in a more elegant way. We can now say that all the moving parts are included in the process without saying that they themselves are the thought. That is, we can say blood flow is included in the process without saying that blood flow itself is part of the thought. It doesn’t sound ridiculous to say that a phone is part of the thought process. If we subscribe to the parts definition, however, we’re forced to say that part of the mass of a thought comes from the mass of a phone. A process definition allows us to be precise without being convoluted, and allows us to include outside influences without committing to absurd definitions.

Typical of a philosophical endeavor, we’re left with more questions and no simple answer. However, we can walk away with three conclusions.

  1. A process definition of “thought” allows for elegance and the involvement of factors outside the “vacuum” of our physical body; however, we lose out on some function by not describing a thought by its physical parts.
  2. The colloquial definition of “thought” breaks down once we invite a philosopher over to break it down, but this is to be expected – when we try to break something down, sometimes, it will break down. What we should be aware of is that if we want to use the word in a rigorous scientific framework, we need a rigorous scientific definition.
  3. Most importantly, it’s clear that we need to put a lot of work into defining exactly what we mean by “thought” – a job well suited to a scientifically-informed philosopher.

Michal De-Medonsa earned her bachelor’s degree in neuroscience and philosophy from Johns Hopkins University in 2012 and went on to receive her master’s degree in history and philosophy of science at the University of Pittsburgh in 2015. She joined the Jazayeri lab in 2018 as a lab manager/technician and spends most of her free time rock climbing, doing standup comedy, and woodworking at the MIT Hobby Shop. 

_____

Do you have a question for The Brain? Ask it here.

Ed Boyden wins premier Royal Society honor

Edward S. Boyden, the Y. Eva Tan Professor in Neurotechnology at MIT, has been awarded the 2019 Croonian Medal and Lecture by the Royal Society. Twenty-four medals and awards are announced by the Royal Society each year, honoring exceptional researchers who are making outstanding contributions to science.

“The Royal Society gives an array of medals and awards to scientists who have done exceptional, ground-breaking work,” explained Sir Venki Ramakrishnan, President of the Royal Society. “This year, it is again a pleasure to see these awards bestowed on scientists who have made such distinguished and far-reaching contributions in their fields. I congratulate and thank them for their efforts.”

Boyden wins the medal and lecture in recognition of his research that is expanding our understanding of the brain. This includes his critical role in the development of optogenetics, a technique for controlling brain activity with light, and his invention of expansion microscopy. Croonian Medal laureates include notable luminaries of science and neurobiology.

“It is a great honor to be selected to receive this medal, especially since it was also given to people such as Santiago Ramon y Cajal, the founder of modern neuroscience,” says Boyden. “This award reflects the great work of many fantastic students, postdocs, and collaborators who I’ve had the privilege to work with over the years.”

The award includes an invitation to deliver the premier British lecture in the biological sciences, given annually at the Royal Society in London. At the lecture, the winner is awarded a medal and a gift of £10,000. This announcement comes shortly after Boyden was co-awarded the Warren Alpert Prize for his role in developing optogenetics.

History of the Croonian Medal and Lecture

William Croone, FRS. Photo credit: Royal College of Physicians, London

The lectureship was conceived by William Croone FRS, one of the original Fellows of the Society based in London. Among the papers left on his death in 1684 were plans to endow two lectureships, one at the Royal Society and the other at the Royal College of Physicians. His widow later bequeathed the means to carry out the scheme. The lecture series began in 1738.

Ed Boyden holds the titles of Investigator, McGovern Institute; Y. Eva Tan Professor in Neurotechnology at MIT; Leader, Synthetic Neurobiology Group, MIT Media Lab; Professor, Biological Engineering, Brain and Cognitive Sciences, MIT Media Lab; Co-Director, MIT Center for Neurobiological Engineering; Member, MIT Center for Environmental Health Sciences, Computational and Systems Biology Initiative, and Koch Institute.

Ed Boyden receives 2019 Warren Alpert Prize

The 2019 Warren Alpert Foundation Prize has been awarded to four scientists, including Ed Boyden, for pioneering work that launched the field of optogenetics, a technique that uses light-sensitive channels and pumps to control the activity of neurons in the brain with a flick of a switch. He receives the prize alongside Karl Deisseroth, Peter Hegemann, and Gero Miesenböck, as outlined by The Warren Alpert Foundation in their announcement.

Harnessing light and genetics, the approach illuminates and modulates the activity of neurons, enables study of brain function and behavior, and helps reveal activity patterns that can overcome brain diseases.

Boyden’s work was key to envisioning and developing optogenetics, now a core method in neuroscience. The method allows brain circuits linked to complex behavioral processes, such as those involved in decision-making, feeding, and sleep, to be unraveled in genetic models. It is also helping to elucidate the mechanisms underlying neuropsychiatric disorders, and has the potential to inspire new strategies to overcome brain disorders.

“It is truly an honor to be included among the extremely distinguished list of winners of the Alpert Award,” says Boyden, the Y. Eva Tan Professor in Neurotechnology at the McGovern Institute, MIT. “To me personally, it is exciting to see the relatively new field of neurotechnology recognized. The brain implements our thoughts and feelings. It makes us who we are. This mystery and challenge requires new technologies to make the brain understandable and repairable. It is a great honor that our technology of optogenetics is being thus recognized.”

While they were students, Boyden and fellow awardee Karl Deisseroth brainstormed about how microbial opsins could be used to mediate optical control of neural activity. In mid-2004, the pair collaborated to show that microbial opsins can be used to optically control neural activity. Upon launching his lab at MIT, Boyden’s team developed the first optogenetic silencing tool, the first effective optogenetic silencing in live mammals, noninvasive optogenetic silencing, and single-cell optogenetic control.

“The discoveries made by this year’s four honorees have fundamentally changed the landscape of neuroscience,” said George Q. Daley, dean of Harvard Medical School. “Their work has enabled scientists to see, understand and manipulate neurons, providing the foundation for understanding the ultimate enigma—the human brain.”

Beyond optogenetics, Boyden has pioneered transformative technologies that image, record, and manipulate complex systems, including expansion microscopy, robotic patch clamping, and even shrinking objects to the nanoscale. He was elected this year to the ranks of the National Academy of Sciences, and selected as an HHMI Investigator. Boyden has received numerous awards for this work, including the 2018 Gairdner International Prize and the 2016 Breakthrough Prize in Life Sciences.

The Warren Alpert Foundation, in association with Harvard Medical School, honors scientists whose work has improved the understanding, prevention, treatment or cure of human disease. Prize recipients are selected by the foundation’s scientific advisory board, which is composed of distinguished biomedical scientists and chaired by the dean of Harvard Medical School. The honorees will share a $500,000 prize and will be recognized at a daylong symposium on Oct. 3 at Harvard Medical School.

Ed Boyden holds the titles of Investigator, McGovern Institute; Y. Eva Tan Professor in Neurotechnology at MIT; Leader, Synthetic Neurobiology Group, Media Lab; Associate Professor, Biological Engineering, Brain and Cognitive Sciences, Media Lab; Co-Director, MIT Center for Neurobiological Engineering; Member, MIT Center for Environmental Health Sciences, Computational and Systems Biology Initiative, and Koch Institute.

New CRISPR platform expands RNA editing capabilities

CRISPR-based tools have revolutionized our ability to target disease-linked genetic mutations. CRISPR technology comprises a growing family of tools that can manipulate genes and their expression, including by targeting DNA with the enzymes Cas9 and Cas12 and targeting RNA with the enzyme Cas13. This collection offers different strategies for tackling mutations. Targeting disease-linked mutations in RNA, which is relatively short-lived, would avoid making permanent changes to the genome. In addition, some cell types, such as neurons, are difficult to edit using CRISPR/Cas9-mediated editing, and new strategies are needed to treat devastating diseases that affect the brain.

McGovern Institute Investigator and Broad Institute of MIT and Harvard core member Feng Zhang and his team have now developed one such strategy, called RESCUE (RNA Editing for Specific C to U Exchange), described in the journal Science.

Zhang and his team, including first co-authors Omar Abudayyeh and Jonathan Gootenberg (both now McGovern Fellows), made use of a deactivated Cas13 to guide RESCUE to targeted cytosine bases on RNA transcripts, and used a novel, evolved, programmable enzyme to convert unwanted cytosine into uridine — thereby directing a change in the RNA instructions. RESCUE builds on REPAIR, a technology developed by Zhang’s team that changes adenine bases into inosine in RNA.

RESCUE significantly expands the landscape that CRISPR tools can target to include modifiable positions in proteins, such as phosphorylation sites. Such sites act as on/off switches for protein activity and are notably found in signaling molecules and cancer-linked pathways.

“To treat the diversity of genetic changes that cause disease, we need an array of precise technologies to choose from. By developing this new enzyme and combining it with the programmability and precision of CRISPR, we were able to fill a critical gap in the toolbox,” says Zhang, the James and Patricia Poitras Professor of Neuroscience at MIT. Zhang also has appointments in MIT’s departments of Brain and Cognitive Sciences and Biological Engineering.

Expanding the reach of RNA editing to new targets

The previously developed REPAIR platform used the RNA-targeting CRISPR/Cas13 to direct the active domain of an RNA editor, ADAR2, to specific RNA transcripts, where it could convert the nucleotide base adenine to inosine, or letters A to I. Zhang and colleagues took the REPAIR fusion and evolved it in the lab until it could change cytosine to uridine, or C to U.

RESCUE can be guided to any RNA of choice, then perform a C-to-U edit through the evolved ADAR2 component of the platform. The team took the new platform into human cells, showing that they could target natural RNAs in the cell as well as 24 clinically relevant mutations in synthetic RNAs. They then further optimized RESCUE to reduce off-target editing, while minimally disrupting on-target editing.
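
At the level of the transcript, the outcome of such an edit is simple to state. Below is a minimal sketch with a hypothetical rescue_edit helper and made-up sequences; it models only the change to the RNA letters, not the Cas13 targeting or ADAR2 chemistry.

```python
# Toy model of the outcome of a guide-directed C-to-U edit. The helper
# name and sequences are hypothetical illustrations.

def rescue_edit(rna: str, guide_match: str, edit_offset: int) -> str:
    """Convert the C at `edit_offset` within the guide-matched window to U."""
    start = rna.find(guide_match)
    if start == -1:
        raise ValueError("guide target not found in transcript")
    pos = start + edit_offset
    if rna[pos] != "C":
        raise ValueError(f"expected C at position {pos}, found {rna[pos]}")
    return rna[:pos] + "U" + rna[pos + 1:]

transcript = "AUGGCUCACGUAGCC"
edited = rescue_edit(transcript, guide_match="CUCACGUA", edit_offset=4)
print(transcript)  # AUGGCUCACGUAGCC
print(edited)      # AUGGCUCAUGUAGCC  (the targeted C is now U)
```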

New targets in sight

Expanded targeting by RESCUE means that sites regulating the activity and function of many proteins through post-translational modifications, such as phosphorylation, glycosylation, and methylation, can now be more readily targeted for editing.

A major advantage of RNA editing is its reversibility, in contrast to changes made at the DNA level, which are permanent. Thus, RESCUE could be deployed transiently in situations where a modification may be desirable temporarily, but not permanently. To demonstrate this, the team showed that in human cells, RESCUE can target specific sites in the RNA encoding β-catenin that are known to be phosphorylated on the protein product, leading to a temporary increase in β-catenin activation and cell growth. If such a change were made permanently, it could predispose cells to uncontrolled growth and cancer, but by using RESCUE, transient cell growth could potentially stimulate wound healing in response to acute injuries.

The researchers also targeted a pathogenic gene variant, APOE4. The APOE4 allele has consistently emerged as a genetic risk factor for the development of late-onset Alzheimer’s disease. The APOE4 isoform differs from APOE2, which is not a risk factor, at just two positions (both C in APOE4 vs. U in APOE2). Zhang and colleagues introduced the risk-associated APOE4 RNA into cells and showed that RESCUE can convert its signature C’s to an APOE2 sequence, essentially converting a risk variant to a non-risk variant.

To facilitate additional work that will push RESCUE toward the clinic as well as enable researchers to use RESCUE as a tool to better understand disease-causing mutations, the Zhang lab plans to share the RESCUE system broadly, as they have with previously developed CRISPR tools. The technology will be freely available for academic research through the non-profit plasmid repository Addgene. Additional information can be found on the Zhang lab’s webpage.

Support for the study was provided by the Phillips Family; J. and P. Poitras; the Poitras Center for Psychiatric Disorders Research; the Hock E. Tan and K. Lisa Yang Center for Autism Research; Robert Metcalfe; David Cheng; and an NIH F30 NRSA (1F30-CA210382) to Omar Abudayyeh. F.Z. is a New York Stem Cell Foundation–Robertson Investigator. F.Z. is supported by NIH grants 1R01-HG009761, 1R01-MH110049, and 1DP1-HL141201; the Howard Hughes Medical Institute; and the New York Stem Cell Foundation and G. Harold and Leila Mathers Foundations.

Speaking many languages

Ev Fedorenko studies the cognitive processes and brain regions underlying language, a signature cognitive skill that is uniquely and universally human. She investigates both people with linguistic impairments and those with exceptional language skills: hyperpolyglots, people who are fluent in over a dozen languages. Indeed, she was recently interviewed for a BBC documentary about superlinguists, as well as for a New Yorker article covering people with exceptional language skills.

When Fedorenko, an associate investigator at the McGovern Institute and assistant professor in the Department of Brain and Cognitive Sciences at MIT, came to the field, neuroscientists were still debating whether high-level cognitive skills such as language are processed by multifunctional or dedicated brain regions. Using fMRI, Fedorenko and colleagues compared the engagement of brain regions when individuals were engaged in linguistic versus other high-level cognitive tasks, such as arithmetic or music. Their data revealed a clear distinction between language and other cognitive processes, showing that our brains have dedicated language regions.

Here is my basic question. How do I get a thought from my mind into yours?

In the time since this key study, Fedorenko has continued to unpack language in the brain. How does the brain process the overarching rules and structure of language (syntax), as opposed to the meanings of words? How do we construct complex meanings? What might underlie communicative difficulties in individuals diagnosed with autism? How does the aphasic brain recover language? Intriguingly, in contrast to individuals with linguistic difficulties, there are also individuals who stand out as being able to master many languages, so-called hyperpolyglots.

In 2013, she came across a young adult who had mastered over 30 languages, a language prodigy. To facilitate her analysis of the processing of different languages, Fedorenko has collected dozens of translations of Alice in Wonderland for her ‘Alice in the language localizer Wonderland‘ project. She has already found that hyperpolyglots tend to show less activity in linguistic processing regions when reading in, or listening to, their native language, compared to carefully matched controls, perhaps indexing more efficient processing mechanisms. Fedorenko continues to study hyperpolyglots, along with other exciting new avenues of research. Stay tuned for upcoming advances in our understanding of the brain and language.

Mark Harnett receives a 2019 McKnight Scholar Award

McGovern Institute investigator Mark Harnett is one of six young researchers selected to receive a prestigious 2019 McKnight Scholar Award. The award supports his research “studying how dendrites, the antenna-like input structures of neurons, contribute to computation in neural networks.”

Harnett examines the biophysical properties of single neurons, ultimately aiming to understand how these relate to the complex computations that underlie behavior. His lab was the first to examine the biophysical properties of human dendrites. The Harnett lab found that human neurons have distinct properties, including increased dendritic compartmentalization that could allow more complex computations within single neurons. His lab recently discovered that such dendritic computations are not rare, or confined to specific behaviors, but are a widespread and general feature of neuronal activity.

“As a young investigator, it is hard to prioritize so many exciting directions and ideas,” explains Harnett. “I really want to thank the McKnight Foundation, both for the support, but also for the hard work the award committee puts into carefully thinking about and giving feedback on proposals. It means a lot to get this type of endorsement from a seriously committed and distinguished committee, and their support gives even stronger impetus to pursue this research direction.”

The McKnight Foundation has supported neuroscience research since 1977 through three prominent awards. The Scholar Award supports young scientists and draws applications from the strongest young neuroscience faculty across the US. William L. McKnight (1887-1979) was an early leader of the 3M Company and had a personal interest in memory and brain diseases. The McKnight Foundation was established with this focus in mind, and the Scholar Award provides $75,000 per year for three years to support cutting-edge neuroscience research.


A chemical approach to imaging cells from the inside

A team of researchers at the McGovern Institute and the Broad Institute of MIT and Harvard has developed a new technique for mapping cells. The approach, called DNA microscopy, shows how biomolecules such as DNA and RNA are organized in cells and tissues, revealing spatial and molecular information that is not easily accessible through other microscopy methods. DNA microscopy also does not require specialized equipment, enabling large numbers of samples to be processed simultaneously.

“DNA microscopy is an entirely new way of visualizing cells that captures both spatial and genetic information simultaneously from a single specimen,” says first author Joshua Weinstein, a postdoctoral associate at the Broad Institute. “It will allow us to see how genetically unique cells — those comprising the immune system, cancer, or the gut, for instance — interact with one another and give rise to complex multicellular life.”

The new technique is described in Cell. Aviv Regev, core institute member and director of the Klarman Cell Observatory at the Broad Institute and professor of biology at MIT, and Feng Zhang, core institute member of the Broad Institute, investigator at the McGovern Institute for Brain Research at MIT, and the James and Patricia Poitras Professor of Neuroscience at MIT, are co-authors. Regev and Zhang are also Howard Hughes Medical Institute Investigators.

The evolution of biological imaging

In recent decades, researchers have developed tools to collect molecular information from tissue samples, data that cannot be captured by either light or electron microscopes. However, attempts to couple this molecular information with spatial data — to see how it is naturally arranged in a sample — are often machinery-intensive, with limited scalability.

DNA microscopy takes a new approach to combining molecular information with spatial data, using DNA itself as a tool.

To visualize a tissue sample, researchers first add small synthetic DNA tags, which latch on to molecules of genetic material inside cells. The tags are then replicated, diffusing in “clouds” across cells and chemically reacting with each other, further combining and creating more unique DNA labels. The labeled biomolecules are collected, sequenced, and computationally decoded to reconstruct their relative positions and a physical image of the sample.

The interactions between these DNA tags enable researchers to calculate the locations of the different molecules — somewhat analogous to cell phone towers triangulating the locations of different cell phones in their vicinity. Because the process only requires standard lab tools, it is efficient and scalable.
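
That analogy can be made concrete with a toy decoder. Below is a minimal sketch, assuming a hypothetical distance-dependent count model rather than the paper’s actual reconstruction algorithm: simulate interaction counts that fall off with distance, invert the model to estimate pairwise distances, and embed them with classical multidimensional scaling to recover the layout up to rotation and reflection.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical count model: nearby molecules produce more joint tag-cloud
# products, so interaction counts fall off smoothly with distance.
n = 40
positions = rng.uniform(0, 10, size=(n, 2))  # true (unknown) molecule layout
d = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
counts = rng.poisson(1000 * np.exp(-d**2 / 8.0))

# Invert the count model to estimate squared distances, then symmetrize.
est_d2 = -8.0 * np.log(np.maximum(counts, 1) / 1000)
est_d2 = np.maximum((est_d2 + est_d2.T) / 2, 0)

# Classical multidimensional scaling: embed the estimated distances in 2D.
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ est_d2 @ J
vals, vecs = np.linalg.eigh(B)  # eigenvalues in ascending order
recovered = vecs[:, -2:] * np.sqrt(np.maximum(vals[-2:], 0))

# High correlation between true and recovered pairwise distances means the
# relative layout was reconstructed from interaction counts alone.
rec_d = np.linalg.norm(recovered[:, None] - recovered[None, :], axis=-1)
iu = np.triu_indices(n, 1)
print("distance correlation:", round(float(np.corrcoef(d[iu], rec_d[iu])[0, 1]), 2))
```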

In this study, the authors demonstrate the ability to molecularly map the locations of individual human cancer cells in a sample by tagging RNA molecules. DNA microscopy could be used to map any group of molecules that will interact with the synthetic DNA tags, including cellular genomes, RNA, or proteins with DNA-labeled antibodies, according to the team.

“DNA microscopy gives us microscopic information without a microscope-defined coordinate system,” says Weinstein. “We’ve used DNA in a way that’s mathematically similar to photons in light microscopy. This allows us to visualize biology as cells see it and not as the human eye does. We’re excited to use this tool in expanding our understanding of genetic and molecular complexity.”

Funding for this study was provided by the Simons Foundation, the Klarman Cell Observatory, NIH (R01-HG009276, 1R01-HG009761, 1R01-MH110049, and 1DP1-HL141201), the New York Stem Cell Foundation, the Paul G. Allen Family Foundation, the Vallee Foundation, the Poitras Center for Affective Disorders Research at MIT, the Hock E. Tan and K. Lisa Yang Center for Autism Research at MIT, J. and P. Poitras, and R. Metcalfe.

The authors have applied for a patent on this technology.