Controlling our internal world

Olympic skaters can launch, perform multiple aerial turns, and land gracefully, anticipating imperfections and reacting quickly to correct course. To make such elegant movements, the brain must have an internal model of the body to control, predict, and make almost instantaneous adjustments to motor commands. So-called “internal models” are a fundamental concept in engineering and have long been suggested to underlie control of movement by the brain, but what about processes that occur in the absence of movement, such as contemplation, anticipation, planning?

Using a novel combination of task design, data analysis, and modeling, MIT neuroscientist Mehrdad Jazayeri and colleagues now provide compelling evidence, in a study published in Nature Neuroscience, that the core elements of an internal model also control purely mental processes.

“During my thesis I realized that I’m interested, not so much in how our senses react to sensory inputs, but instead in how my internal model of the world helps me make sense of those inputs,” says Jazayeri, the Robert A. Swanson Career Development Professor of Life Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Indeed, understanding the building blocks that exert control over such mental processes could help to paint a better picture of disruptions in mental disorders, such as schizophrenia.

Internal models for mental processes

Scientists working on the motor system have long theorized that the brain overcomes noisy and slow signals using an accurate internal model of the body. This internal model serves three critical functions: it provides motor commands to control movement, simulates upcoming movement to overcome delays, and uses feedback to make real-time adjustments.
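The three functions can be pictured as a simple control loop. The sketch below is purely illustrative – the dynamics, gains, and function names are assumptions for the sake of the example, not the study's actual model – but it shows how a controller, a forward-model simulator, and feedback correction fit together:

```python
# Toy internal-model control loop. All dynamics, gains, and names
# are illustrative assumptions, not the study's actual model.

def control_step(target, estimate, gain=0.5):
    """Controller: issue a command proportional to the estimated error."""
    return gain * (target - estimate)

def simulate_step(estimate, command):
    """Simulator (forward model): predict the command's outcome
    before delayed sensory feedback arrives."""
    return estimate + command

def feedback_step(prediction, observation, k=0.3):
    """Feedback: nudge the prediction toward the sensory observation."""
    return prediction + k * (observation - prediction)

def track(target, state=0.0, steps=20):
    """Run the loop; the state converges toward the target."""
    estimate = state
    for _ in range(steps):
        command = control_step(target, estimate)
        state = state + command               # true (hidden) dynamics
        estimate = simulate_step(estimate, command)
        estimate = feedback_step(estimate, state)
    return state
```

Calling `track(1.0)` drives the state to within a tiny fraction of the target after 20 iterations; with realistic dynamics and delays, this same controller–simulator–feedback structure is what the robotics framework described here formalizes.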

“The framework that we currently use to think about how the brain controls our actions is one that we have borrowed from robotics: we use controllers, simulators, and sensory measurements to control machines and train operators,” explains Reza Shadmehr, a professor at the Johns Hopkins School of Medicine who was not involved with the study. “That framework has largely influenced how we imagine our brain controlling our movements.”

Jazayeri and colleagues wondered whether the same framework might explain the control principles governing mental states in the absence of any movement.

“When we’re simply sitting, thoughts and images run through our heads and, fundamental to intellect, we can control them,” explains lead author Seth Egger, a former postdoctoral associate in the Jazayeri lab and now at Duke University.

“We wanted to find out what’s happening between our ears when we are engaged in thinking,” says Egger.

Imagine, for example, a sign language interpreter keeping up with a fast speaker. To track speech accurately, the interpreter continuously anticipates where the speech is going, rapidly adjusting when the actual words deviate from the prediction. The interpreter could be using an internal model to anticipate upcoming words, using feedback to make adjustments on the fly.

1-2-3…Go

Hypothesizing about how the components of an internal model function in scenarios such as translation is one thing. Cleanly measuring and proving the existence of these elements is much more complicated as the activity of the controller, simulator, and feedback are intertwined. To tackle this problem, Jazayeri and colleagues devised a clever task with primate models in which the controller, simulator, and feedback act at distinct times.

In this task, called “1-2-3-Go,” the animal sees three consecutive flashes (1, 2, and 3) that form a regular beat, and learns to make an eye movement (Go) when it anticipates the fourth flash should occur. During the task, researchers measured neural activity in a region of the frontal cortex they had previously linked to the timing of movement.
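The computational logic of the task fits in a few lines. This hypothetical snippet simply extrapolates the measured beat – the animals, of course, perform this computation implicitly:

```python
def predict_fourth_flash(t1, t2, t3):
    """Given the times of three isochronous flashes, predict when the
    fourth would occur by extrapolating the measured beat.
    (An illustrative sketch, not the study's model.)"""
    interval = ((t2 - t1) + (t3 - t2)) / 2.0   # average measured beat
    return t3 + interval

# Flashes at 0.0, 0.5, and 1.0 seconds predict a fourth flash at 1.5 s:
# predict_fourth_flash(0.0, 0.5, 1.0) → 1.5
```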

Jazayeri and colleagues had clear predictions about when the controller would act (between the third flash and “Go”) and when feedback would be engaged (with each flash of light). The key surprise came when researchers saw evidence for the simulator anticipating the third flash. This unexpected neural activity had dynamics that resembled those of the controller, but was not associated with a response. In other words, the researchers uncovered a covert plan that functions as the simulator, thus uncovering all three elements of an internal model for a mental process: the planning and anticipation of “Go” in the “1-2-3-Go” sequence.

“Jazayeri’s work is important because it demonstrates how to study mental simulation in animals,” explains Shadmehr, “and where in the brain that simulation is taking place.”

Having found how and where to measure an internal model in action, Jazayeri and colleagues now plan to ask whether these control strategies can explain how primates effortlessly generalize their knowledge from one behavioral context to another. For example, how does an interpreter rapidly adjust when someone with widely different speech habits takes the podium? This line of investigation promises to shed light on high-level mental capacities of the primate brain that simpler animals seem to lack, that go awry in mental disorders, and that designers of artificial intelligence systems so fondly seek.

What is the social brain?

As part of our Ask the Brain series, Anila D’Mello, a postdoctoral fellow in John Gabrieli’s lab, answers the question, “What is the social brain?”

_____

Anila D’Mello is the Simons Center for the Social Brain Postdoctoral Fellow in John Gabrieli’s lab at the McGovern Institute.

“Knock Knock.”
“Who’s there?”
“The Social Brain.”
“The Social Brain, who?”

Call and response jokes, like the “Knock Knock” joke above, leverage our common understanding of how a social interaction typically proceeds. Joke telling allows us to interact socially with others based on our shared experiences and understanding of the world. But where do these abilities “live” in the brain and how does the social brain develop?

Neuroimaging and lesion studies have identified a network of brain regions that support social interaction, including the ability to understand and partake in jokes – we refer to this as the “social brain.” This social brain network is made up of multiple regions throughout the brain that together support complex social interactions. Within this network, each region likely contributes to a specific type of social processing. The right temporo-parietal junction, for instance, is important for thinking about another person’s mental state, whereas the amygdala is important for the interpretation of emotional facial expressions and fear processing. Damage to these brain regions can have striking effects on social behaviors. One recent study even found that individuals with bigger amygdala volumes had larger and more complex social networks!

Though social interaction is such a fundamental human trait, we aren’t born with a prewired social brain.

Much of our social ability is grown and honed over time through repeated social interactions. Brain networks that support social interaction continue to specialize into adulthood. Neuroimaging work suggests that though newborn infants may have all the right brain parts to support social interaction, these regions may not yet be specialized or connected in the right way. This means that early experiences and environments can have large influences on the social brain. For instance, social neglect, especially very early in development, can have negative impacts on social behaviors and on how the social brain is wired. One prominent example is that of children raised in orphanages or institutions, who are sometimes faced with limited adult interaction or access to language. Children raised in these conditions are more likely to have social challenges including difficulties forming attachments. Prolonged lack of social stimulation also alters the social brain in these children resulting in changes in amygdala size and connections between social brain regions.

The social brain is not just a result of our environment. Genetics and biology also contribute to the social brain in ways we don’t yet fully understand. For example, autistic individuals may experience difficulties with social interaction and communication. This may include challenges with things like understanding the punchline of a joke. These challenges have led to the hypothesis that there may be differences in the social brain network in autism. However, despite documented behavioral differences in social tasks, there is conflicting brain imaging evidence for whether the social brain network differs between people with and without autism.

Examples such as that of autism imply that the reality of the social brain is probably much more complex than the story painted here. It is likely that social interaction calls upon many different parts of the brain, even beyond those that we have termed the “social brain,” that must work in concert to support this highly complex set of behaviors. These include regions of the brain important for listening, seeing, speaking, and moving. In addition, it’s important to remember that the social brain and regions that make it up do not stand alone. Regions of the social brain also play an intimate role in language, humor, and other cognitive processes.

“Knock Knock”
“Who’s there?”
“The Social Brain”
“The Social Brain, who?”
“I just told you…didn’t you read what I wrote?”

Anila D’Mello earned her bachelor’s degree in psychology from Georgetown University in 2012, and went on to receive her PhD in Behavior, Cognition, and Neuroscience from American University in 2017. She joined the Gabrieli lab as a postdoc in 2017 and studies the neural correlates of social communication in autism.

_____

Do you have a question for The Brain? Ask it here.

Can I rewire my brain?

As part of our Ask the Brain series, Halie Olson, a graduate student in the labs of John Gabrieli and Rebecca Saxe, pens her answer to the question, “Can I rewire my brain?”

_____

Yes, kind of, sometimes – it all depends on what you mean by “rewiring” the brain.

Halie Olson, a graduate student in the Gabrieli and Saxe labs.

If you’re asking whether you can remove all memories of your ex from your head, then no. (That’s probably for the best – just watch Eternal Sunshine of the Spotless Mind.) However, if you’re asking whether you can teach an old dog new tricks – tricks that have a physical implementation in the brain – then yes.

To embrace the analogy that “rewiring” alludes to, let’s imagine you live in an old house with outlets in less-than-optimal locations. You really want your brand-new TV to be plugged in on the far side of the living room, but there is no outlet to be found. So you call up your electrician, she pops over, and moves some wires around in the living room wall to give you a new outlet. No sweat!

Local changes in neural connectivity happen throughout the lifespan. With over 100 billion neurons and 100 trillion connections – or synapses – between these neurons in the adult human brain, it is unsurprising that some pathways end up being more important than others. When we learn something new, the connections between relevant neurons communicating with each other are strengthened. To paraphrase Donald Hebb, one of the most influential psychologists of the twentieth century, “neurons that fire together, wire together” – by forming new synapses or more efficiently connecting the ones that are already there. This ability to rewire neural connections at a local level is a key feature of the brain, enabling us to tailor our neural infrastructure to our needs.
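Hebb's principle can be illustrated with a toy weight-update rule – a textbook sketch, not a model of any real synapse, with all names and numbers invented for the example:

```python
def hebbian_update(w, pre, post, lr=0.1):
    """'Neurons that fire together, wire together': strengthen the
    weight between a pre- and postsynaptic neuron in proportion to
    their coincident activity. (Minimal textbook rule; parameters
    are illustrative.)"""
    return w + lr * pre * post

# Repeated paired activity strengthens the connection:
w = 0.0
for _ in range(10):
    w = hebbian_update(w, pre=1.0, post=1.0)
# w has grown from 0.0 to 1.0

# Without coincident firing, nothing changes:
w_silent = hebbian_update(0.0, pre=1.0, post=0.0)   # stays 0.0
```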

Plasticity in our brain allows us to learn, adjust, and thrive in our environments.

We can also see this plasticity in the brain at a larger scale. My favorite example of “rewiring” in the brain is when children learn to read. Our brains did not evolve to enable us to read – there is no built-in “reading region” that magically comes online when a child enters school. However, if you stick a proficient reader in an MRI scanner, you will see a region in the left lateral occipitotemporal sulcus (that is, the back bottom left of your cortex) that is particularly active when you read written text. Before children learn to read, this region – known as the visual word form area – is not exceptionally interested in words, but as children get acquainted with written language and start connecting letters with sounds, it becomes selective for familiar written language – no matter the font, CaPItaLIZation, or size.

Now, let’s say that you wake up in the middle of the night with a desire to move your oven and stovetop from the kitchen into your swanky new living room with the TV. You call up your electrician – she tells you this is impossible, and to stop calling her in the middle of the night.

Similarly, your brain comes with a particular infrastructure – a floorplan, let’s call it – that cannot be easily adjusted when you are an adult. Large lesions tend to have large consequences. For instance, an adult who suffers a serious stroke in their left hemisphere will likely struggle with language, a condition called aphasia. Young children’s brains, on the other hand, can sometimes rewire in profound ways. An entire half of the brain can be damaged early on with minimal functional consequences. So if you’re going for a remodel? Better do it really early.

Plasticity in our brain allows us to learn, adjust, and thrive in our environments. It also gives neuroscientists like me something to study – since clearly I would fail as an electrician.

Halie Olson earned her bachelor’s degree in neurobiology from Harvard College in 2017. She is currently a graduate student in MIT’s Department of Brain and Cognitive Sciences working with John Gabrieli and Rebecca Saxe. She studies how early life experiences and environments impact brain development, particularly in the context of reading and language, and what this means for children’s educational outcomes.

_____

Do you have a question for The Brain? Ask it here.

Hearing through the clatter

In a busy coffee shop, our eardrums are inundated with sound waves – people chatting, the clatter of cups, music playing – yet our brains somehow manage to untangle relevant sounds, like a barista announcing that our “coffee is ready,” from insignificant noise. A new McGovern Institute study sheds light on how the brain accomplishes the task of extracting meaningful sounds from background noise – findings that could one day help to build artificial hearing systems and aid development of targeted hearing prosthetics.

“These findings reveal a neural correlate of our ability to listen in noise, and at the same time demonstrate functional differentiation between different stages of auditory processing in the cortex,” explains Josh McDermott, an associate professor of brain and cognitive sciences at MIT, a member of the McGovern Institute and the Center for Brains, Minds and Machines, and the senior author of the study.

The auditory cortex, a part of the brain that responds to sound, has long been known to have distinct anatomical subregions, but the role these areas play in auditory processing has remained a mystery. In their study published today in Nature Communications, McDermott and former graduate student Alex Kell discovered that these subregions respond differently to the presence of background noise, suggesting that auditory processing occurs in steps that progressively home in on and isolate a sound of interest.

Background check

Previous studies have shown that the primary and non-primary subregions of the auditory cortex respond to sound with different dynamics, but these studies were largely based on brain activity in response to speech or simple synthetic sounds (such as tones and clicks). Little was known about how these regions might work to subserve everyday auditory behavior.

To test these subregions under more realistic conditions, McDermott and Kell, who is now a postdoctoral researcher at Columbia University, assessed changes in human brain activity while subjects listened to natural sounds with and without background noise.

While lying in an MRI scanner, subjects listened to 30 different natural sounds, ranging from meowing cats to ringing phones, that were presented alone or embedded in real-world background noise such as heavy rain.

“When I started studying audition,” explains Kell, “I started just sitting around in my day-to-day life, just listening, and was astonished at the constant background noise that seemed to usually be filtered out by default. Most of these noises tended to be pretty stable over time, suggesting we could experimentally separate them. The project flowed from there.”

To their surprise, Kell and McDermott found that the primary and non-primary regions of the auditory cortex responded differently to natural sound depending upon whether background noise was present.

Primary auditory cortex (outlined in white) responses change (blue) when background noise is present, whereas non-primary activity is robust to background noise (yellow). Image: Alex Kell

They found that activity of the primary auditory cortex is altered when background noise is present, suggesting that this region has not yet differentiated between meaningful sounds and background noise. Non-primary regions, however, respond similarly to natural sounds irrespective of whether noise is present, suggesting that cortical signals generated by sound are transformed or “cleaned up” to remove background noise by the time they reach the non-primary auditory cortex.
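The logic of such a comparison can be illustrated with a toy analysis: correlate a region's response pattern across a set of sounds heard alone against its pattern for the same sounds in noise. The response vectors below are invented purely for illustration – a noise-robust region yields a high correlation; a noise-sensitive region does not:

```python
def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical response magnitudes (one value per natural sound) for
# a region, measured with the sound alone vs. embedded in noise.
clean = [1.0, 0.2, 0.8, 0.4, 0.6]
in_noise_nonprimary = [0.9, 0.3, 0.8, 0.4, 0.7]   # barely changed
in_noise_primary = [0.5, 0.6, 0.4, 0.7, 0.5]      # strongly altered

# pearson(clean, in_noise_nonprimary) is near 1 (noise-robust);
# pearson(clean, in_noise_primary) is much lower (noise-sensitive).
```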

“We were surprised by how big the difference was between primary and non-primary areas,” explained Kell, “so we ran a bunch more subjects but kept seeing the same thing. We had a ton of questions about what might be responsible for this difference, and that’s why we ended up running all these follow-up experiments.”

A general principle

Kell and McDermott went on to test whether these responses were specific to particular sounds, and discovered that the effect remained stable no matter the source or type of sound. Music, speech, or a squeaky toy all activated the non-primary cortex region similarly, whether or not background noise was present.

The authors also tested whether attention is relevant. Even when the researchers sneakily distracted subjects with a visual task in the scanner, the cortical subregions responded to meaningful sound and background noise in the same way, showing that attention is not driving this aspect of sound processing. In other words, even when we are focused on reading a book, our brain is diligently sorting the sound of our meowing cat from the patter of heavy rain outside.

Future directions

The McDermott lab is now building computational models of the so-called “noise robustness” found in the Nature Communications study and Kell is pursuing a finer-grained understanding of sound processing in his postdoctoral work at Columbia, by exploring the neural circuit mechanisms underlying this phenomenon.

By gaining a deeper understanding of how the brain processes sound, the researchers hope their work will contribute to improved diagnosis and treatment of hearing dysfunction. Such research could help to reveal the origins of listening difficulties that accompany developmental disorders or age-related hearing loss. For instance, if hearing loss results from dysfunction in sensory processing, this could be caused by abnormal noise robustness in the auditory cortex. Normal noise robustness might instead suggest that there are impairments elsewhere in the brain, for example a breakdown in higher executive function.

“In the future,” McDermott says, “we hope these noninvasive measures of auditory function may become valuable tools for clinical assessment.”

Call for Nominations: 2020 Scolnick Prize in Neuroscience

The McGovern Institute is now accepting nominations, until December 15, 2019, for the Scolnick Prize in Neuroscience, which recognizes an outstanding discovery or significant advance in any field of neuroscience.

About the Scolnick Prize

The prize is named in honor of Edward M. Scolnick, who stepped down as president of Merck Research Laboratories in December 2002 after holding Merck’s top research post for 17 years. The prize, which is endowed through a gift from Merck to the McGovern Institute, consists of a $150,000 award, plus an inscribed gift. The recipient presents a public lecture at MIT, hosted by the McGovern Institute and followed by a dinner in Spring 2020.

Nomination Process

Candidates for the award must be nominated by individuals affiliated with universities, hospitals, medical schools, or research institutes, with a background in neuroscience. Self-nomination is not permitted. Each nomination should include a biosketch or CV of the nominee and a letter of nomination with a summary and analysis of the nominee’s major contributions to the field of neuroscience. Up to two representative reprints will be accepted. The winner, selected by a committee appointed by the director of the McGovern Institute, will be announced in January 2020.

More information about the Scolnick Prize, including details about the nomination process, selection committee, and past Scolnick Prize recipients, can be found on our website.

submit nomination

Finding the brain’s compass

The world is constantly bombarding our senses with information, but the ways in which our brain extracts meaning from this information remains elusive. How do neurons transform raw visual input into a mental representation of an object – like a chair or a dog?

In work published today in Nature Neuroscience, MIT neuroscientists have identified a brain circuit in mice that distills “high-dimensional” complex information about the environment into a simple abstract object in the brain.

“There are no degree markings in the external world; our current head direction has to be extracted, computed, and estimated by the brain,” explains Ila Fiete, an associate member of the McGovern Institute and senior author of the paper. “The approaches we used allowed us to demonstrate the emergence of a low-dimensional concept, essentially an abstract compass in the brain.”

This abstract compass, according to the researchers, is a one-dimensional ring that represents the current direction of the head relative to the external world.

Schooling fish

Trying to show that a data cloud has a simple shape, like a ring, is a bit like watching a school of fish. By tracking one or two sardines, you might not see a pattern. But if you could map all of the sardines, and transform the noisy dataset into points representing the positions of the whole school of sardines over time, and where each fish is relative to its neighbors, a pattern would emerge. This model would reveal a ring shape, a simple shape formed by the activity of hundreds of individual fish.

Fiete, who is also an associate professor in MIT’s Department of Brain and Cognitive Sciences, used a similar approach, called topological modeling, to transform the activity of large populations of noisy neurons into a data cloud the shape of a ring.
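The idea behind this kind of analysis can be sketched with synthetic data. Assuming idealized cosine-tuned head-direction cells – an illustration invented for this example, not the study's recordings or its actual topological method – the high-dimensional population activity traces out a ring that a simple two-dimensional projection reveals:

```python
import numpy as np

# Hypothetical population of head-direction cells, each preferring a
# different direction. As the head sweeps through all angles, the
# 50-dimensional population activity traces out a 1-D ring.
n_cells, n_samples = 50, 360
prefs = np.linspace(0, 2 * np.pi, n_cells, endpoint=False)
angles = np.linspace(0, 2 * np.pi, n_samples, endpoint=False)

# activity[t, i]: firing of cell i when the head points at angles[t]
activity = np.cos(angles[:, None] - prefs[None, :])

# Project the (centered) population activity onto its top two
# principal components.
centered = activity - activity.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
proj = centered @ vt[:2].T

# Every projected point sits at (nearly) the same distance from the
# origin: the data cloud is a ring, not a blob.
radii = np.linalg.norm(proj, axis=1)
```

The study's actual approach uses topological modeling rather than plain linear projection, which matters when the ring is embedded nonlinearly; this sketch only conveys the intuition of finding a low-dimensional shape in noisy high-dimensional activity.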

Simple and persistent ring

Previous work in fly brains revealed a physical ellipsoid ring of neurons representing changes in the direction of the fly’s head, and researchers suspected that such a system might also exist in mammals.

In this new mouse study, Fiete and her colleagues measured hours of neural activity from scores of neurons in the anterodorsal thalamic nucleus (ADN) – a region believed to play a role in spatial navigation – as the animals moved freely around their environment. They mapped how the neurons in the ADN circuit fired as the animal’s head changed direction.

Together these data points formed a cloud in the shape of a simple and persistent ring.

“In the absence of this ring,” Fiete explains, “we would be lost in the world.”

“This tells us a lot about how neural networks are organized in the brain,” explains Edvard Moser, Director of the Kavli Institute of Systems Neuroscience in Norway, who was not involved in the study. “Past data have indirectly pointed towards such a ring-like organization but only now has it been possible, with the right cell numbers and methods, to demonstrate it convincingly,” says Moser.

Their method for characterizing the shape of the data cloud allowed Fiete and colleagues to determine which variable the circuit was devoted to representing, and to decode this variable over time, using only the neural responses.

“The animal’s doing really complicated stuff,” explains Fiete, “but this circuit is devoted to integrating the animal’s speed along a one-dimensional compass that encodes head direction. Without a manifold approach, which captures the whole state space, you wouldn’t know that this circuit of thousands of neurons is encoding only this one aspect of the complex behavior, and not encoding any other variables at the same time.”

Even during sleep, when the circuit is not being bombarded with external information, this circuit robustly traces out the same one-dimensional ring, as if dreaming of past head direction trajectories.

Further analysis revealed that the ring acts as an attractor. If neurons stray off trajectory, they are drawn back to it, quickly correcting the system. This attractor property of the ring means that the representation of head direction in abstract space is reliably stable over time, a key requirement if we are to understand and maintain a stable sense of where our head is relative to the world around us.
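The attractor property can be illustrated with a toy dynamical system in which the “ring” is the unit circle in a two-dimensional activity space. Everything here – the dynamics, the rate, the relaxation rule – is an invented illustration of the concept, not the circuit's actual mechanism:

```python
import math

def relax_to_ring(x, y, steps=50, rate=0.2):
    """Pull a perturbed 2-D state back to the unit circle while
    leaving its angle (the encoded head direction) unchanged.
    (Toy attractor dynamics, purely illustrative.)"""
    for _ in range(steps):
        r = math.hypot(x, y)
        x += rate * (1 - r) * (x / r)   # relax radius toward 1
        y += rate * (1 - r) * (y / r)
    return x, y

# A state knocked off the ring (radius 0.5) is drawn back to it,
# with its direction intact:
x, y = relax_to_ring(0.3, 0.4)
# math.hypot(x, y) is close to 1, and math.atan2(y, x) still equals
# math.atan2(0.4, 0.3)
```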

Shaping the future

Fiete’s work provides a first glimpse into how complex sensory information is distilled into a simple concept in the mind, and how that representation autonomously corrects errors, making it exquisitely stable.

But the implications of this study go beyond coding of head direction.

“Similar organization is probably present for other cognitive functions so the paper is likely to inspire numerous new studies,” says Moser.

Fiete sees these analyses and related studies carried out by colleagues at the Norwegian University of Science and Technology, Princeton University, the Weizmann Institute, and elsewhere as fundamental to the future of neural decoding studies.

With this approach, she explains, it is possible to extract abstract representations of the mind from the brain, potentially even thoughts and dreams.

“We’ve found that the brain deconstructs and represents complex things in the world with simple shapes,” explains Fiete. “Manifold-level analysis can help us to find those shapes, and they almost certainly exist beyond head direction circuits.”

Do thoughts have mass?

As part of our Ask the Brain series, we received the question, “Do thoughts have mass?” The following is a guest blog post by Michal De-Medonsa, technical associate and manager of the Jazayeri lab, who tapped into her background in philosophy to answer this intriguing question.

_____

Jazayeri lab manager (and philosopher) Michal De-Medonsa.

To answer the question, “Do thoughts have mass?” we must, like any good philosopher, define something that already has a definition – “thoughts.”

Logically, we can assert that thoughts are either metaphysical or physical (beyond that, we run out of options). If our definition of thought is metaphysical, it is safe to say that metaphysical thoughts do not have mass, since they are by definition not physical, and mass is a property of physical things. However, if we define a thought as a physical thing, it becomes a little trickier to determine whether or not it has mass.

A physical definition of thoughts falls into (at least) two subgroups – physical processes and physical parts. Take driving a car, for example – a parts definition describes the doors, motor, etc., and has mass. A process definition – a car being driven, turning the wheel, moving from point A to point B, etc. – does not have mass. The process of driving is a physical process that involves moving physical matter, but we wouldn’t say that the act of driving has mass. The car itself, however, is an example of physical matter, and as any cyclist in the city of Boston is well aware – cars have mass. It’s clear that if we define a thought as a process, it does not have mass, and if we define a thought as physical parts, it does have mass – so, which one is it? In order to resolve our issue, we have to be incredibly precise with our definition. Is a thought a process or parts? That is, is a thought more like driving or more like a car?

In order to resolve our issue, we have to be incredibly precise with our definition of the word thought.

Both physical definitions (process and parts) have merit. For a parts definition, we can look at what is required for a thought – neurons, electrical signals, neurochemicals, etc. This type of definition becomes quite imprecise and limiting. It doesn’t seem too problematic to say that the neurons, neurochemicals, etc. are themselves the thought, but this style of definition starts to fall apart when we try to include all the parts involved (e.g. blood flow, connective tissue, outside stimuli). When we look at a face, the stimuli received by the visual cortex are part of the thought – is the face part of a thought? When we look at our phone, is the phone itself part of a thought? A parts definition either needs an arbitrary limit, or we end up having to include all possible parts involved in the thought, ending up with an incredibly convoluted and effectively useless definition.

A process definition is more versatile and precise, and it allows us to include all the physical parts in a more elegant way. We can now say that all the moving parts are included in the process without saying that they themselves are the thought. That is, we can say blood flow is included in the process without saying that blood flow itself is part of the thought. It doesn’t sound ridiculous to say that a phone is part of the thought process. If we subscribe to the parts definition, however, we’re forced to say that part of the mass of a thought comes from the mass of a phone. A process definition allows us to be precise without being convoluted, and allows us to include outside influences without committing to absurd definitions.

Typical of a philosophical endeavor, we’re left with more questions and no simple answer. However, we can walk away with three conclusions.

  1. A process definition of “thought” allows for elegance and the involvement of factors outside the “vacuum” of our physical body; however, we lose out on some function by not describing a thought by its physical parts.
  2. The colloquial definition of “thought” breaks down once we invite a philosopher over to break it down, but this is to be expected – when we try to break something down, sometimes, it will break down. What we should be aware of is that if we want to use the word in a rigorous scientific framework, we need a rigorous scientific definition.
  3. Most importantly, it’s clear that we need to put a lot of work into defining exactly what we mean by “thought” – a job well suited to a scientifically-informed philosopher.

Michal De-Medonsa earned her bachelor’s degree in neuroscience and philosophy from Johns Hopkins University in 2012 and went on to receive her master’s degree in history and philosophy of science at the University of Pittsburgh in 2015. She joined the Jazayeri lab in 2018 as a lab manager/technician and spends most of her free time rock climbing, doing standup comedy, and woodworking at the MIT Hobby Shop. 


Ed Boyden wins premier Royal Society honor

Edward S. Boyden, the Y. Eva Tan Professor in Neurotechnology at MIT, has been awarded the 2019 Croonian Medal and Lecture by the Royal Society. Twenty-four medals and awards are announced by the Royal Society each year, honoring exceptional researchers who are making outstanding contributions to science.

“The Royal Society gives an array of medals and awards to scientists who have done exceptional, ground-breaking work,” explained Sir Venki Ramakrishnan, President of the Royal Society. “This year, it is again a pleasure to see these awards bestowed on scientists who have made such distinguished and far-reaching contributions in their fields. I congratulate and thank them for their efforts.”

Boyden wins the medal and lecture in recognition of his research that is expanding our understanding of the brain. This includes his critical role in the development of optogenetics, a technique for controlling brain activity with light, and his invention of expansion microscopy. Croonian Medal laureates include notable luminaries of science and neurobiology.

“It is a great honor to be selected to receive this medal, especially since it was also given to people such as Santiago Ramon y Cajal, the founder of modern neuroscience,” says Boyden. “This award reflects the great work of many fantastic students, postdocs, and collaborators who I’ve had the privilege to work with over the years.”

The award includes an invitation to deliver the premier British lecture in the biological sciences, given annually at the Royal Society in London. At the lecture, the winner is awarded a medal and a gift of £10,000. This announcement comes shortly after Boyden was co-awarded the Warren Alpert Prize for his role in developing optogenetics.

History of the Croonian Medal and Lecture

William Croone, FRS (photo credit: Royal College of Physicians, London)

The lectureship was conceived by William Croone FRS, one of the original Fellows of the Society based in London. Among the papers left on his death in 1684 were plans to endow two lectureships, one at the Royal Society and the other at the Royal College of Physicians. His widow later bequeathed the means to carry out the scheme. The lecture series began in 1738.


Ed Boyden holds the titles of Investigator, McGovern Institute; Y. Eva Tan Professor in Neurotechnology at MIT; Leader, Synthetic Neurobiology Group, MIT Media Lab; Professor, Biological Engineering, Brain and Cognitive Sciences, MIT Media Lab; Co-Director, MIT Center for Neurobiological Engineering; Member, MIT Center for Environmental Health Sciences, Computational and Systems Biology Initiative, and Koch Institute.

Ed Boyden receives 2019 Warren Alpert Prize

The 2019 Warren Alpert Foundation Prize has been awarded to four scientists, including Ed Boyden, for pioneering work that launched the field of optogenetics, a technique that uses light-sensitive channels and pumps to control the activity of neurons in the brain with a flick of a switch. He receives the prize alongside Karl Deisseroth, Peter Hegemann, and Gero Miesenböck, as outlined by The Warren Alpert Foundation in their announcement.

Harnessing light and genetics, the approach illuminates and modulates the activity of neurons, enables study of brain function and behavior, and helps reveal activity patterns that can overcome brain diseases.

Boyden’s work was key to envisioning and developing optogenetics, now a core method in neuroscience. The method allows brain circuits linked to complex behavioral processes, such as those involved in decision-making, feeding, and sleep, to be unraveled in genetic models. It is also helping to elucidate the mechanisms underlying neuropsychiatric disorders, and has the potential to inspire new strategies to overcome brain disorders.

“It is truly an honor to be included among the extremely distinguished list of winners of the Alpert Award,” says Boyden, the Y. Eva Tan Professor in Neurotechnology at the McGovern Institute, MIT. “To me personally, it is exciting to see the relatively new field of neurotechnology recognized. The brain implements our thoughts and feelings. It makes us who we are. These mysteries and challenges require new technologies to make the brain understandable and repairable. It is a great honor that our technology of optogenetics is being thus recognized.”

While they were students, Boyden and fellow awardee Karl Deisseroth brainstormed about how microbial opsins could be used to mediate optical control of neural activity. In mid-2004, the pair collaborated to show that microbial opsins can be used to optically control neural activity. Upon launching his lab at MIT, Boyden’s team developed the first optogenetic silencing tool, the first effective optogenetic silencing in live mammals, noninvasive optogenetic silencing, and single-cell optogenetic control.

“The discoveries made by this year’s four honorees have fundamentally changed the landscape of neuroscience,” said George Q. Daley, dean of Harvard Medical School. “Their work has enabled scientists to see, understand and manipulate neurons, providing the foundation for understanding the ultimate enigma—the human brain.”

Beyond optogenetics, Boyden has pioneered transformative technologies that image, record, and manipulate complex systems, including expansion microscopy, robotic patch clamping, and even shrinking objects to the nanoscale. He was elected this year to the ranks of the National Academy of Sciences, and selected as an HHMI Investigator. Boyden has received numerous awards for this work, including the 2018 Gairdner International Prize and the 2016 Breakthrough Prize in Life Sciences.

The Warren Alpert Foundation, in association with Harvard Medical School, honors scientists whose work has improved the understanding, prevention, treatment or cure of human disease. Prize recipients are selected by the foundation’s scientific advisory board, which is composed of distinguished biomedical scientists and chaired by the dean of Harvard Medical School. The honorees will share a $500,000 prize and will be recognized at a daylong symposium on Oct. 3 at Harvard Medical School.

Ed Boyden holds the titles of Investigator, McGovern Institute; Y. Eva Tan Professor in Neurotechnology at MIT; Leader, Synthetic Neurobiology Group, Media Lab; Associate Professor, Biological Engineering, Brain and Cognitive Sciences, Media Lab; Co-Director, MIT Center for Neurobiological Engineering; Member, MIT Center for Environmental Health Sciences, Computational and Systems Biology Initiative, and Koch Institute.

New CRISPR platform expands RNA editing capabilities

CRISPR-based tools have revolutionized our ability to target disease-linked genetic mutations. CRISPR technology comprises a growing family of tools that can manipulate genes and their expression, including by targeting DNA with the enzymes Cas9 and Cas12 and targeting RNA with the enzyme Cas13. This collection offers different strategies for tackling mutations. Targeting disease-linked mutations in RNA, which is relatively short-lived, would avoid making permanent changes to the genome. In addition, some cell types, such as neurons, are difficult to edit with CRISPR/Cas9, and new strategies are needed to treat devastating diseases that affect the brain.

McGovern Institute Investigator and Broad Institute of MIT and Harvard core member Feng Zhang and his team have now developed one such strategy, called RESCUE (RNA Editing for Specific C to U Exchange), described in the journal Science.

Zhang and his team, including first co-authors Omar Abudayyeh and Jonathan Gootenberg (both now McGovern Fellows), made use of a deactivated Cas13 to guide RESCUE to targeted cytosine bases on RNA transcripts, and used a novel, evolved, programmable enzyme to convert unwanted cytosine into uridine — thereby directing a change in the RNA instructions. RESCUE builds on REPAIR, a technology developed by Zhang’s team that changes adenine bases into inosine in RNA.

RESCUE significantly expands the landscape that CRISPR tools can target to include modifiable positions in proteins, such as phosphorylation sites. Such sites act as on/off switches for protein activity and are notably found in signaling molecules and cancer-linked pathways.

“To treat the diversity of genetic changes that cause disease, we need an array of precise technologies to choose from. By developing this new enzyme and combining it with the programmability and precision of CRISPR, we were able to fill a critical gap in the toolbox,” says Zhang, the James and Patricia Poitras Professor of Neuroscience at MIT. Zhang also has appointments in MIT’s departments of Brain and Cognitive Sciences and Biological Engineering.

Expanding the reach of RNA editing to new targets

The previously developed REPAIR platform used the RNA-targeting CRISPR/Cas13 to direct the active domain of an RNA editor, ADAR2, to specific RNA transcripts, where it could convert the nucleotide base adenine to inosine, or letters A to I. Zhang and colleagues took the REPAIR fusion and evolved it in the lab until it could change cytosine to uridine, or C to U.

RESCUE can be guided to any RNA of choice, then perform a C-to-U edit through the evolved ADAR2 component of the platform. The team took the new platform into human cells, showing that they could target natural RNAs in the cell as well as 24 clinically relevant mutations in synthetic RNAs. They then further optimized RESCUE to reduce off-target editing, while minimally disrupting on-target editing.
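Conceptually, the guided edit can be pictured as a find-and-substitute operation on the transcript: locate the guide’s landing site, then flip one base at a known offset. The sketch below is a toy software analogy, not the biochemistry; the transcript, guide sequence, and offset are all hypothetical, chosen only to illustrate how a guided C-to-U change leaves the rest of the RNA untouched.

```python
def rescue_edit(rna: str, guide: str, target_offset: int) -> str:
    """Toy model of a guided C-to-U edit.

    Find where the (hypothetical) guide lands on the transcript, then
    convert the cytosine at a given offset within that site to uridine.
    """
    site = rna.find(guide)
    if site == -1:
        raise ValueError("guide does not match the transcript")
    pos = site + target_offset
    if rna[pos] != "C":
        raise ValueError(f"position {pos} holds {rna[pos]!r}, not a C")
    # Only the targeted base changes; the rest of the RNA is untouched.
    return rna[:pos] + "U" + rna[pos + 1:]

# Illustrative sequence only -- not a real transcript.
transcript = "AUGGCUCGCAAGUAA"
edited = rescue_edit(transcript, guide="CGCAAG", target_offset=0)
print(edited)  # AUGGCUUGCAAGUAA
```

A real deployment would of course contend with off-target sites and editing efficiency, which is exactly what the optimization described above addresses.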

New targets in sight

Expanded targeting by RESCUE means that sites regulating the activity and function of many proteins through post-translational modifications, such as phosphorylation, glycosylation, and methylation, can now be more readily targeted for editing.

A major advantage of RNA editing is its reversibility, in contrast to changes made at the DNA level, which are permanent. Thus, RESCUE could be deployed transiently in situations where a modification may be desirable temporarily, but not permanently. To demonstrate this, the team showed that in human cells, RESCUE can target specific sites in the RNA encoding β-catenin that are known to be phosphorylated on the protein product, leading to a temporary increase in β-catenin activation and cell growth. If such a change were made permanent, it could predispose cells to uncontrolled growth and cancer, but with RESCUE, transient cell growth could potentially stimulate wound healing in response to acute injuries.

The researchers also targeted a pathogenic gene variant, APOE4. The APOE4 allele has consistently emerged as a genetic risk factor for the development of late-onset Alzheimer’s disease. APOE4 differs from APOE2, which is not a risk factor, at just two positions (both C in APOE4 vs. U in APOE2). Zhang and colleagues introduced the risk-associated APOE4 RNA into cells and showed that RESCUE can convert its signature C’s to the APOE2 sequence, essentially converting a risk variant to a non-risk variant.

To facilitate additional work that will push RESCUE toward the clinic as well as enable researchers to use RESCUE as a tool to better understand disease-causing mutations, the Zhang lab plans to share the RESCUE system broadly, as they have with previously developed CRISPR tools. The technology will be freely available for academic research through the non-profit plasmid repository Addgene. Additional information can be found on the Zhang lab’s webpage.

Support for the study was provided by the Phillips family; J. and P. Poitras; the Poitras Center for Psychiatric Disorders Research; the Hock E. Tan and K. Lisa Yang Center for Autism Research; Robert Metcalfe; David Cheng; and an NIH F30 NRSA (1F30-CA210382) to Omar Abudayyeh. F.Z. is a New York Stem Cell Foundation–Robertson Investigator. F.Z. is supported by NIH grants 1R01-HG009761, 1R01-MH110049, and 1DP1-HL141201; the Howard Hughes Medical Institute; and the New York Stem Cell Foundation and G. Harold and Leila Mathers Foundations.