A “golden era” to study the brain

As an undergraduate, Mitch Murdock was a rare science-humanities double major, specializing in both English and molecular, cellular, and developmental biology at Yale University. Today, as a doctoral student in the MIT Department of Brain and Cognitive Sciences, he sees obvious ways that his English education expanded his horizons as a neuroscientist.

“One of my favorite parts of English was trying to explore interiority, and how people have really complicated experiences inside their heads,” Murdock explains. “I was excited about trying to bridge that gap between internal experiences of the world and that actual biological substrate of the brain.”

Though he can see those connections now, it wasn’t until after Yale that Murdock became interested in brain sciences. As an undergraduate, he was in a traditional molecular biology lab. He even planned to stay there after graduation as a research technician; fortunately, though, he says his advisor Ron Breaker encouraged him to explore the field. That’s how Murdock ended up in a new lab run by Conor Liston, an associate professor at Weill Cornell Medicine, who studies how factors such as stress and sleep regulate the remodeling of brain circuits.

It was in Liston’s lab that Murdock was first exposed to neuroscience and began to see the brain as the biological basis of the philosophical questions about experience and emotion that interested him. “It was really in his lab where I thought, ‘Wow, this is so cool. I have to do a PhD studying neuroscience,’” Murdock laughs.

During his time as a research technician, Murdock examined the impact of chronic stress on brain activity in mice. Specifically, he was interested in ketamine, a fast-acting antidepressant with a high potential for abuse, in the hope that better understanding how ketamine works will help scientists find safer alternatives. He focused on dendritic spines, small protrusions on neurons that help transmit electrical signals between neurons and provide a physical substrate for memory storage. His findings, Murdock explains, suggested that ketamine works by restoring dendritic spines that can be lost after periods of chronic stress.

After three years at Weill Cornell, Murdock decided to pursue doctoral studies in neuroscience, hoping to continue some of the work he started with Liston. He chose MIT because of the research being done on dendritic spines in the lab of Elly Nedivi, the William R. (1964) and Linda R. Young Professor of Neuroscience in The Picower Institute for Learning and Memory.

Once again, though, the opportunity to explore a wider set of interests fortuitously led Murdock to a new passion. During lab rotations at the beginning of his PhD program, Murdock spent time shadowing a physician at Massachusetts General Hospital who was working with Alzheimer’s disease patients.

“Everyone knows that Alzheimer’s doesn’t have a cure. But I realized that, really, if you have Alzheimer’s disease, there’s very little that can be done,” he says. “That was a big wake-up call for me.”

After that experience, Murdock strategically planned his remaining lab rotations, eventually settling into the lab of Li-Huei Tsai, the Picower Professor of Neuroscience and the director of the Picower Institute. For the past five years, Murdock has worked with Tsai on various strands of Alzheimer’s research.

In one project, for example, members of the Tsai lab have shown how certain kinds of non-invasive light and sound stimulation induce brain activity that can improve memory loss in mouse models of Alzheimer’s. Scientists think that, during sleep, small movements in blood vessels drive cerebrospinal fluid into the brain, which, in turn, flushes out toxic metabolic waste. Murdock’s research suggests that certain kinds of stimulation might drive a similar process, flushing out waste that can exacerbate memory loss.

Much of his work is focused on the activity of single cells in the brain. Are certain neurons or types of neurons genetically predisposed to degenerate, or do they break down randomly? Why do certain subtypes of cells appear to be dysfunctional earlier on in the course of Alzheimer’s disease? How do changes in blood flow in vascular cells affect degeneration? All of these questions, Murdock believes, will help scientists better understand the causes of Alzheimer’s, which will translate eventually into developing cures and therapies.

To answer these questions, Murdock relies on new single-cell sequencing techniques that he says have changed the way we think about the brain. “This has been a big advance for the field, because we know there are a lot of different cell types in the brain, and we think that they might contribute differentially to Alzheimer’s disease risk,” says Murdock. “We can’t think of the brain as only about neurons.”

Murdock says that that kind of “big-picture” approach — thinking about the brain as a compilation of many different cell types that are all interacting — is the central tenet of his research. To look at the brain in the kind of detail that approach requires, Murdock works with Ed Boyden, the Y. Eva Tan Professor in Neurotechnology, a professor of biological engineering and brain and cognitive sciences at MIT, a Howard Hughes Medical Institute investigator, and a member of MIT’s McGovern Institute for Brain Research and Koch Institute for Integrative Cancer Research. Working with Boyden has allowed Murdock to use new technologies such as expansion microscopy and genetically encoded sensors to aid his research.

That kind of new technology, he adds, has helped blow the field wide open. “This is such a cool time to be a neuroscientist because the tools available now make this a golden era to study the brain.” That rapid intellectual expansion applies to the study of Alzheimer’s as well, including newly understood connections between the immune system and Alzheimer’s — an area in which Murdock says he hopes to continue after graduation.

Right now, though, Murdock is focused on a review paper synthesizing some of the latest research. Given the mountains of new Alzheimer’s work coming out each year, he admits that synthesizing all the data is a bit “crazy,” but he couldn’t be happier to be in the middle of it. “There’s just so much that we are learning about the brain from these new techniques, and it’s just so exciting.”

Personal pursuits

This story originally appeared in the Fall 2022 issue of BrainScan.

***

Many neuroscientists were drawn to their careers out of curiosity and wonder. Their deep desire to understand how the brain works drew them into the lab and keeps them coming back, digging deeper and exploring more each day. But for some, the work is more personal.

Several McGovern faculty say they entered their field because someone in their lives was dealing with a brain disorder that they wanted to better understand. They are committed to unraveling the basic biology of those conditions, knowing that knowledge is essential to guide the development of better treatments.

The distance from basic research to clinical progress is shortening, and many young neuroscientists hope not just to deepen scientific understanding of the brain, but to have direct impact on the lives of patients. Some want to know why people they love are suffering from neurological disorders or mental illness; others seek to understand the ways in which their own brains work differently than others. But above all, they want better treatments for people affected by such disorders.

Seeking answers

That’s true for Kian Caplan, a graduate student in MIT’s Department of Brain and Cognitive Sciences who was diagnosed with Tourette syndrome around age 13. At the time, learning that the repetitive, uncontrollable movements and vocal tics he had been making for most of his life were caused by a neurological disorder was something of a relief. But it didn’t take long for Caplan to realize his diagnosis came with few answers.

Graduate student Kian Caplan studies the brain circuits associated with Tourette syndrome and obsessive-compulsive disorder in Guoping Feng and Fan Wang’s labs at the McGovern Institute. Photo: Steph Stevens

Tourette syndrome has been estimated to occur in about six of every 1,000 children, but its neurobiology remains poorly understood.

“The doctors couldn’t really explain why I can’t control the movements and sounds I make,” he says. “They couldn’t really explain why my symptoms wax and wane, or why the tics I have aren’t always the same.”

That lack of understanding is not just frustrating for curious kids like Caplan. It means that researchers have been unable to develop treatments that target the root cause of Tourette syndrome. Drugs that dampen signaling in parts of the brain that control movement can help suppress tics, but not without significant side effects. Caplan has tried those drugs. For him, he says, “they’re not worth the suppression.”

Advised by Fan Wang and McGovern Associate Director Guoping Feng, Caplan is looking for answers. A mouse model of obsessive-compulsive disorder developed in Feng’s lab was recently found to exhibit repetitive movements similar to those of people with Tourette syndrome, and Caplan is working to characterize those tic-like movements. He will use the mouse model to examine the brain circuits underlying the two conditions, which often co-occur in people. Broadly, researchers think Tourette syndrome arises due to dysregulation of cortico-striatal-thalamo-cortical circuits, which connect distant parts of the brain to control movement. Caplan and Wang suspect that the brainstem — a structure found where the brain connects to the spinal cord, known for organizing motor movement into different modules — is probably involved, too.

Wang’s research group studies the brainstem’s role in movement, but she says that like most researchers, she hadn’t considered its role in Tourette syndrome until Caplan joined her lab. That’s one reason Caplan, who has long been a mentor and advocate for students with neurodevelopmental disorders, thinks neuroscience needs more neurodiversity.

“I think we need more representation in basic science research by the people who actually live with those conditions,” he says. Their experiences can lead to insights that may be inaccessible to others, he says, but significant barriers in academia often prevent this kind of representation. Caplan wants to see institutions make systemic changes to ensure that neurodiverse and otherwise minority individuals are able to thrive in academia. “I’m not an exception,” he says, “there should be more people like me here, but the present system makes that incredibly difficult.”

Overcoming adversity

Like Caplan, Lace Riggs faced significant challenges in her pursuit to study the brain. She grew up in Southern California’s Inland Empire, where issues of social disparity, chronic stress, drug addiction, and mental illness were a part of everyday life.

Postdoctoral fellow Lace Riggs studies the origins of neurodevelopmental conditions in Guoping Feng’s lab at the McGovern Institute. Photo: Lace Riggs

“Living in severe poverty and relying on government assistance without access to adequate education and resources led everyone I know and love to suffer tremendously, myself included,” says Riggs, a postdoctoral fellow in the Feng lab.

“There are not a lot of people like me who make it to this stage,” says Riggs, who has lost friends and family members to addiction, mental illness, and suicide. “There’s a reason for that,” she adds. “It’s really, really difficult to get through the educational system and to overcome socioeconomic barriers.”

Today, Riggs is investigating the origins of neurodevelopmental conditions, hoping to pave the way to better treatments for brain disorders by uncovering the molecular changes that alter the structure and function of neural circuits.

Riggs says that the adversities she faced early in life offered valuable insights in the pursuit of these goals. She first became interested in the brain because she wanted to understand how our experiences have a lasting impact on who we are — including in ways that leave people vulnerable to psychiatric problems.

“While the need for more effective treatments led me to become interested in psychiatry, my fascination with the brain’s unique ability to adapt is what led me to neuroscience,” says Riggs.

After finishing high school, Riggs attended California State University in San Bernardino and became the only member of her family to attend university or attempt a four-year degree. Today, she spends her days working with mice that carry mutations linked to autism or ADHD in humans, studying the animals’ behavior and monitoring their neural activity. She expects that aberrant neural circuit activity in these conditions may also contribute to mood disorders, whose origins are harder to tease apart because they often arise when genetic and environmental factors intersect. Ultimately, Riggs says, she wants to understand how our genes dictate whether an experience will alter neural signaling and impact mental health in a long-lasting way.

Riggs uses patch clamp electrophysiology to record the strength of inhibitory and excitatory synaptic input onto individual neurons (white arrow) in an animal model of autism. Image: Lace Riggs

“If we understand how these long-lasting synaptic changes come about, then we might be able to leverage these mechanisms to develop new and more effective treatments.”

While the turmoil of her childhood is in the past, Riggs says it is not forgotten — in part, because of its lasting effects on her own mental health. She talks openly about her ongoing struggle with social anxiety and complex post-traumatic stress disorder because she is passionate about dismantling the stigma surrounding these conditions. “It’s something I have to deal with every day,” Riggs says. That means coping with symptoms like difficulty concentrating, hypervigilance, and heightened sensitivity to stress. “It’s like a constant hum in the background of my life; it never stops,” she says.

“I urge all of us to strive, not only to make scientific discoveries to move the field forward,” says Riggs, “but to improve the accessibility of this career to those whose lived experiences are required to truly accomplish that goal.”

Modeling the social mind

Typically, it would take two graduate students to do the research that Setayesh Radkani is doing.

Driven by an insatiable curiosity about the human mind, she is working on two PhD thesis projects in two different cognitive neuroscience labs at MIT. For one, she is studying punishment as a social tool to influence others. For the other, she is uncovering the neural processes underlying social learning — that is, learning from others. By piecing together these two research programs, Radkani is hoping to gain a better understanding of the mechanisms underpinning social influence in the mind and brain.

Radkani lived in Iran for most of her life, growing up alongside her younger brother in Tehran. The two spent a lot of time together and have long been each other’s best friends. Her father is a civil engineer, and her mother is a midwife. Her parents always encouraged her to explore new things and follow her own path, even if it wasn’t quite what they imagined for her. And her uncle helped cultivate her sense of curiosity, teaching her to “always ask why” as a way to understand how the world works.

Growing up, Radkani most loved learning about human psychology and using math to model the world around her. But she thought it was impossible to combine her two interests. Prioritizing math, she pursued a bachelor’s degree in electrical engineering at the Sharif University of Technology in Iran.

Then, late in her undergraduate studies, Radkani took a psychology course and discovered the field of cognitive neuroscience, in which scientists mathematically model the human mind and brain. She also spent a summer working in a computational neuroscience lab at the Swiss Federal Institute of Technology in Lausanne. Seeing a way to combine her interests, she decided to pivot and pursue the subject in graduate school.

An experience leading a project in her engineering ethics course during her final year of undergrad further helped her discover some of the questions that would eventually form the basis of her PhD. The project investigated why some students cheat and how to change this.

“Through this project I learned how complicated it is to understand the reasons that people engage in immoral behavior, and even more complicated than that is how to devise policies and react in these situations in order to change people’s attitudes,” Radkani says. “It was this experience that made me realize that I’m interested in studying the human social and moral mind.”

She began looking into social cognitive neuroscience research and stumbled upon a relevant TED talk by Rebecca Saxe, the John W. Jarve Professor in Brain and Cognitive Sciences at MIT, who would eventually become one of Radkani’s research advisors. Radkani knew immediately that she wanted to work with Saxe. But she needed to first get into the BCS PhD program at MIT, a challenging obstacle given her minimal background in the field.

After two application cycles and a year’s worth of graduate courses in cognitive neuroscience, Radkani was accepted into the program. But to come to MIT, she had to leave her family behind. Coming from Iran, Radkani has a single-entry visa, making it difficult for her to travel outside the U.S. She hasn’t been able to visit her family since starting her PhD and won’t be able to until at least after she graduates. Her visa also limits her research contributions, restricting her from attending conferences outside the U.S. “That is definitely a huge burden on my education and on my mental health,” she says.

Still, Radkani is grateful to be at MIT, indulging her curiosity in the human social mind. And she’s thankful for her supportive family, who she calls over FaceTime every day.

Modeling how people think about punishment

In Saxe’s lab, Radkani is researching how people approach and react to punishment, through behavioral studies and neuroimaging. By synthesizing these findings, she’s developing a computational model of the mind that characterizes how people make decisions in situations involving punishment, such as when a parent disciplines a child, when someone punishes their romantic partner, or when the criminal justice system sentences a defendant. With this model, Radkani says she hopes to better understand “when and why punishment works in changing behavior and influencing beliefs about right and wrong, and why sometimes it fails.”

Punishment isn’t a new research topic in cognitive neuroscience, Radkani says, but in previous studies, scientists have often only focused on people’s behavior in punitive situations and haven’t considered the thought processes that underlie those behaviors. Characterizing these thought processes, though, is key to understanding whether punishment in a situation can be effective in changing people’s attitudes.

People bring their prior beliefs into a punitive situation. Apart from moral beliefs about the appropriateness of different behaviors, “you have beliefs about the characteristics of the people involved, and you have theories about their intentions and motivations,” Radkani says. “All those come together to determine what you do or how you are influenced by punishment,” given the circumstances. Punishers decide a suitable punishment based on their interpretation of the situation, in light of their beliefs. Targets of punishment then decide whether they’ll change their attitude as a result of the punishment, depending on their own beliefs. Even outside observers make decisions, choosing whether to keep or change their moral beliefs based on what they see.

To capture these decision-making processes, Radkani is developing a computational model of the mind for punitive situations. The model mathematically represents people’s beliefs and how they interact with certain features of the situation to shape their decisions. The model then predicts a punisher’s decisions, and how punishment will influence the target and observers. Through this model, Radkani will provide a foundational understanding of how people think in various punitive situations.

Researching the neural mechanisms of social learning

In parallel, working in the lab of Professor Mehrdad Jazayeri, Radkani is studying social learning, uncovering its underlying neural processes. Through social learning, people learn from other people’s experiences and decisions, and incorporate this socially acquired knowledge into their own decisions or beliefs.

Humans are extraordinary in their social learning abilities; however, our primary form of learning, shared by all other animals, is learning from self-experience. To investigate how learning from others is similar to or different from learning from our own experiences, Radkani has designed a two-player video game that involves both types of learning. During the game, she and her collaborators in Jazayeri’s lab record neural activity in the brain. By analyzing these neural measurements, they plan to uncover the computations carried out by neural circuits during social learning, and compare those to learning from self-experience.

Radkani first became curious about this comparison as a way to understand why people sometimes draw contrasting conclusions from very similar situations. “For example, if I get Covid from going to a restaurant, I’ll blame the restaurant and say it was not clean,” Radkani says. “But if I hear the same thing happen to my friend, I’ll say it’s because they were not careful.” Radkani wanted to know the root causes of this mismatch in how other people’s experiences affect our beliefs and judgments differently from our own similar experiences, particularly because it can lead to “errors that color the way that we judge other people,” she says.

By combining her two research projects, Radkani hopes to better understand how social influence works, particularly in moral situations. From there, she has a slew of research questions that she’s eager to investigate, including: How do people choose who to trust? And which types of people tend to be the most influential? As Radkani’s research grows, so does her curiosity.

Studies of autism tend to exclude women, researchers find

In recent years, researchers who study autism have made an effort to include more women and girls in their studies. Despite these efforts, however, most studies of autism consistently enroll small numbers of female subjects or exclude them altogether, according to a new study from MIT.

The researchers found that a screening test commonly used to determine eligibility for studies of autism consistently winnows out a much higher percentage of women than men, creating a “leaky pipeline” that results in severe underrepresentation of women in studies of autism.

This lack of representation makes it more difficult to develop useful interventions or provide accurate diagnoses for girls and women, the researchers say.

“I think the findings favor having a more inclusive approach and widening the lens to end up being less biased in terms of who participates in research,” says John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology and a professor of brain and cognitive sciences at MIT. “The more we understand autism in men and women and nonbinary individuals, the better services and more accurate diagnoses we can provide.”

Gabrieli, who is also a member of MIT’s McGovern Institute for Brain Research, is the senior author of the study, which appears in the journal Autism Research. Anila D’Mello, a former MIT postdoc who is now an assistant professor at the University of Texas Southwestern, is the lead author of the paper. MIT Technical Associate Isabelle Frosch, Research Coordinator Cindy Li, and Research Specialist Annie Cardinaux are also authors of the paper.

Gabrieli lab researchers Annie Cardinaux (left), Anila D’Mello (center), Cindy Li (right), and Isabelle Frosch (not pictured) have uncovered sex biases in ASD research. Photo: Steph Stevens

Screening out females

Autism spectrum disorders are diagnosed based on observation of traits such as repetitive behaviors and difficulty with language and social interaction. Doctors may use a variety of screening tests to help them make a diagnosis, but these screens are not required.

For research studies of autism, it is routine to use a screening test called the Autism Diagnostic Observation Schedule (ADOS) to determine eligibility for the study. This test, which assesses social interaction, communication, play, and repetitive behaviors, provides a quantitative score in each category, and only participants who reach certain scores qualify for inclusion in studies.

While doing a study exploring how quickly the brains of autistic adults adapt to novel events in the environment, scientists in Gabrieli’s lab began to notice that the ADOS appeared to have unequal effects on male and female participation in research. As the study progressed, D’Mello noticed some significant brain differences between the male and female subjects in the study.

To investigate these differences further, D’Mello tried to find more female participants using an MIT database of autistic adults who have expressed interest in participating in research studies. However, when she sorted through the subjects, she found that only about half of the women in the database had met the ADOS cutoff scores typically required for inclusion in autism studies, compared to 80 percent of the males.

“We realized then that there’s a discrepancy and that the ADOS is essentially screening out who eventually participated in research,” D’Mello says. “We were really surprised at how many males we retained and how many females we lost to the ADOS.”

To see if this phenomenon was more widespread, the researchers looked at six publicly available datasets, which include more than 40,000 adults who have been diagnosed as autistic. For some of these datasets, participants were screened with ADOS to determine their eligibility to participate in studies, while for others, a “community diagnosis” — diagnosis from a doctor or other health care provider — was sufficient.

The researchers found that in datasets that required ADOS screening for eligibility, the ratio of male to female participants ended up being around 8:1, while in those that required only a community diagnosis the ratios ranged from about 2:1 to 1:1.
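The compounding effect of a screening step can be sketched with rough arithmetic. The pass rates below are the approximate figures reported for the MIT database (about 80 percent of males versus about 50 percent of females meeting ADOS cutoffs); the starting pool sizes are purely illustrative:

```python
# Rough sketch of how a screening step compounds an existing sex skew.
# Pass rates approximate the figures quoted above (~80% of males vs ~50%
# of females met ADOS cutoffs); the starting pools are invented numbers.
def post_screen_ratio(males, females, male_pass=0.80, female_pass=0.50):
    """Male:female ratio among participants who clear the screen."""
    return (males * male_pass) / (females * female_pass)

# A recruitment pool already skewed 4:1 toward males...
print(round(post_screen_ratio(400, 100), 1))  # 4:1 pool -> 6.4:1 after screening
# ...versus a balanced pool.
print(round(post_screen_ratio(100, 100), 1))  # 1:1 pool -> 1.6:1 after screening
```

Because an 80 percent versus 50 percent pass rate multiplies any existing male:female ratio by 1.6, differential screening layered on an already skewed recruitment pool helps push datasets toward the high ratios seen in ADOS-screened studies.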

Previous studies have found differences between behavioral patterns in autistic men and women, but the ADOS test was originally developed using a largely male sample, which may explain why it often excludes women from research studies, D’Mello says.

“There were few females in the sample that was used to create this assessment, so it might be that it’s not great at picking up the female phenotype, which may differ in certain ways — primarily in domains like social communication,” she says.

Effects of exclusion

Failure to include more women and girls in studies of autism may contribute to shortcomings in the definitions of the disorder, the researchers say.

“The way we think about it is that the field evolved perhaps an implicit bias in how autism is defined, and it was driven disproportionately by analysis of males, and recruitment of males, and so on,” Gabrieli says. “So, the definition doesn’t fit as well, on average, with the different expression of autism that seems to be more common in females.”

This implicit bias has led to documented difficulties in receiving a diagnosis for girls and women, even when their symptoms are the same as those presented by autistic boys and men.

“Many females might be missed altogether in terms of diagnoses, and then our study shows that in the research setting, what is already a small pool gets whittled down at a much larger rate than that of males,” D’Mello says.

Excluding girls and women from this kind of research study can lead to treatments that don’t work as well for them, and it contributes to the perception that autism doesn’t affect women as much as men.

“The goal is that research should directly inform treatment, therapies, and public perception,” D’Mello says. “If the research is saying that there aren’t females with autism, or that the brain basis of autism only looks like the patterns established in males, then you’re not really helping females as much as you could be, and you’re not really getting at the truth of what the disorder might be.”

The researchers now plan to further explore some of the gender and sex-based differences that appear in autism, and how they arise. They also plan to expand the gender categories that they include. In the current study, the surveys that each participant filled out asked them to choose male or female, but the researchers have updated their questionnaire to include nonbinary and transgender options.

The research was funded by the Hock E. Tan and K. Lisa Yang Center for Autism Research, the Simons Center for the Social Brain at MIT, and the National Institutes of Mental Health.

Making and breaking habits

As part of our Ask the Brain series, science writer Shafaq Zia explores the question, “How are habits formed in the brain?”

____

Have you ever wondered why it is so hard to break free of bad habits like nail biting or obsessive social networking?

When we repeat an action over and over again, the behavioral pattern becomes automated in our brain, according to Jill R. Crittenden, molecular biologist and scientific advisor at the McGovern Institute for Brain Research at MIT. For over a decade, Crittenden worked as a research scientist in the lab of Ann Graybiel, where one of the key questions scientists are working to answer is: How are habits formed?

Making habits

To understand how certain actions get wired into our neural pathways, this team of McGovern researchers experimented with rats trained to run a maze for a reward. If the rats turned left, they received rich chocolate milk; if they turned right, only sugar water. With this setup, the scientists wanted to see whether the animals could “learn to associate a cue with which direction they should turn in the maze in order to get the chocolate milk reward.”

Over time, the rats grew extremely habitual in their behavior; “they always turned the correct direction and the places where their paws touched, in a fairly long maze, were exactly the same every time,” said Crittenden.

This isn’t a coincidence. When we’re first learning to do something, the frontal lobe and basal ganglia of the brain are highly active and doing a lot of calculations. These brain regions work together to associate behaviors with thoughts, emotions, and, most importantly, motor movements. But when we repeat an action over and over again, like the rats running down the maze, our brains become more efficient and fewer neurons are required to achieve the goal. This means the more you do something, the easier it becomes to carry out, because the behavior is literally etched into our brains as motor movements.

But habits are complicated and they come in many different flavors, according to Crittenden. “I think we don’t have a great handle on how the differences [in our many habits] are separable neurobiologically, and so people argue a lot about how do you know that something’s a habit.”

The easiest way for scientists to test this in rodents is to see whether the animal engages in the behavior even in the absence of reward. In this particular experiment, the researchers took away the reward, chocolate milk, to see whether the rats would continue to run the maze correctly. To take it a step further, they mixed the chocolate milk with lithium chloride, which upsets the rats’ stomachs. Despite all this, the rats continued to run down the maze and turn left toward the chocolate milk, as they had learned to do over and over again.

Breaking habits

So does that mean once a habit is formed, it is impossible to shake it? Not quite. But it is tough. Rewards are a key building block to forming habits because our dopamine levels surge when we learn that an action is unexpectedly rewarded. For example, when the rats first learn to run down the maze, they’re motivated to receive the chocolate milk.

But things get complicated once the habit is formed. Researchers have found that this dopamine surge in response to reward ceases after a behavior becomes a habit. Instead, the brain begins to release dopamine at the first cue or action that it previously learned would lead to the reward, so we are motivated to engage in the full behavioral sequence even when the reward is no longer there.
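
This migration of the dopamine signal from the reward back to the earliest predictive cue is the classic signature of temporal-difference (TD) learning models of dopamine. The toy simulation below (an illustrative sketch, not the researchers’ model; all parameters are arbitrary) shows the effect: the prediction error is large at the reward on the first trial and has moved to the cue by the last.

```python
def td_simulation(n_steps=10, n_trials=500, alpha=0.1):
    """Tabular TD(0) on a fixed cue -> ... -> reward sequence.

    An unpredictable cue (state 0) starts the trial and a reward of 1.0
    arrives after the final state. Returns the per-step prediction
    errors (a stand-in for phasic dopamine) on the first and last
    trials; the first entry of each profile is the response to cue onset.
    """
    V = [0.0] * (n_steps + 1)          # V[n_steps] is terminal, value 0
    first = last = None
    for trial in range(n_trials):
        deltas = [V[0]]                # cue is unpredicted: surprise = V(cue)
        for t in range(n_steps):
            r = 1.0 if t == n_steps - 1 else 0.0
            delta = r + V[t + 1] - V[t]    # TD prediction error
            V[t] += alpha * delta
            deltas.append(delta)
        if trial == 0:
            first = deltas
        last = deltas
    return first, last
```

Running it, `first` peaks at the final (reward) step while `last` peaks at cue onset, mirroring how the dopamine surge shifts from the chocolate milk itself to the cue that predicts it.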

This means we don’t have as much self-control as we think we do, which may also be the reason why it’s so hard to break the cycle of addiction. “People will report that they know this is bad for them. They don’t want it. And nevertheless, they select that action,” said Crittenden.

One common method to break the behavior, in this case, is called extinction. This is where psychologists try to weaken the association between the cue and the reward that led to habit formation in the first place. For example, if the rat no longer associates the cue to run down the maze with a reward, it will stop engaging in that behavior.

So the next time you beat yourself up over being unable to stick to a diet or sleep at a certain time, give yourself some grace and know that with consistency, a new, healthier habit can be born.

How the brain generates rhythmic behavior

Many of our bodily functions, such as walking, breathing, and chewing, are controlled by brain circuits called central oscillators, which generate rhythmic firing patterns that regulate these behaviors.

MIT neuroscientists have now discovered the neuronal identity and mechanism underlying one of these circuits: an oscillator that controls the rhythmic back-and-forth sweeping of tactile whiskers, or whisking, in mice. This is the first time that any such oscillator has been fully characterized in mammals.

The MIT team found that the whisking oscillator consists of a population of inhibitory neurons in the brainstem that fires rhythmic bursts during whisking. As each neuron fires, it also inhibits some of the other neurons in the network, allowing the overall population to generate a synchronous rhythm that retracts the whiskers from their protracted positions.

“We have defined a mammalian oscillator molecularly, electrophysiologically, functionally, and mechanistically,” says Fan Wang, an MIT professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research. “It’s very exciting to see a clearly defined circuit and mechanism of how rhythm is generated in a mammal.”

Wang is the senior author of the study, which appears today in Nature. The lead authors of the paper are MIT research scientists Jun Takatoh and Vincent Prevosto.

Rhythmic behavior

Most of the research that clearly identified central oscillator circuits has been done in invertebrates. For example, Eve Marder’s lab at Brandeis University found cells in the stomatogastric ganglion in lobsters and crabs that generate oscillatory activity to control rhythmic motion of the digestive tract.

Characterizing oscillators in mammals, especially in awake behaving animals, has proven to be highly challenging. The oscillator that controls walking is believed to be distributed throughout the spinal cord, making it difficult to precisely identify the neurons and circuits involved. The oscillator that generates rhythmic breathing is located in a part of the brainstem called the pre-Bötzinger complex, but the exact identity of the oscillator neurons is not fully understood.

“There haven’t been detailed studies in awake behaving animals, where one can record from molecularly identified oscillator cells and manipulate them in a precise way,” Wang says.

Whisking is a prominent rhythmic exploratory behavior in many mammals, which use their tactile whiskers to detect objects and sense textures. In mice, whiskers extend and retract at a frequency of about 12 cycles per second. Several years ago, Wang’s lab set out to identify the cells and the mechanism that control this oscillation.

To find the location of the whisking oscillator, the researchers traced back from the motor neurons that innervate whisker muscles. Using a modified rabies virus that infects axons, the researchers were able to label a group of cells presynaptic to these motor neurons in a part of the brainstem called the vibrissa intermediate reticular nucleus (vIRt). This finding was consistent with previous studies showing that damage to this part of the brain eliminates whisking.

The researchers then found that about half of these vIRt neurons express a protein called parvalbumin, and that this subpopulation of cells drives the rhythmic motion of the whiskers. When these neurons are silenced, whisking activity is abolished.

Next, the researchers recorded electrical activity from these parvalbumin-expressing vIRt neurons in the brainstem of awake mice, a technically challenging task, and found that these neurons indeed fire bursts of activity only during the whisker retraction period. Because these neurons provide inhibitory synaptic input to whisker motor neurons, it follows that rhythmic whisking is generated by a constant motor neuron protraction signal interrupted by the rhythmic retraction signal from these oscillator cells.
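
This push-pull arrangement is easy to see in a toy simulation (illustrative only: the ~12 Hz burst rhythm is imposed by hand here rather than generated by a recurrent network, and all parameters are made up). A motor pool receives a constant protraction drive that rhythmic inhibitory bursts repeatedly interrupt, producing a back-and-forth sweep at the burst frequency.

```python
def simulate_whisking(freq_hz=12.0, duty=0.4, dt=0.001, t_max=1.0,
                      protraction_drive=1.0, inhibition=3.0, tau=0.02):
    """Toy model of the proposed mechanism: a constant protraction
    drive to whisker motor neurons is periodically interrupted by
    inhibitory bursts (standing in for vIRt oscillator output).
    Returns the simulated whisker angle over time (arbitrary units).
    """
    period = 1.0 / freq_hz
    angle, trace = 0.0, []
    for i in range(int(t_max / dt)):
        t = i * dt
        bursting = (t % period) < duty * period       # vIRt burst phase
        drive = protraction_drive - (inhibition if bursting else 0.0)
        angle += dt / tau * (drive - angle)           # leaky integration
        trace.append(angle)
    return trace

# The trace sweeps up (protraction) between bursts and down (retraction)
# during each burst, at about freq_hz cycles per second.
```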

“That was a super satisfying and rewarding moment, to see that these cells are indeed the oscillator cells, because they fire rhythmically, they fire in the retraction phase, and they’re inhibitory neurons,” Wang says.

A maximum projection image showing tracked whiskers on the mouse muzzle. The right (control) side shows the back-and-forth rhythmic sweeping of the whiskers, while on the experimental side, where the whisking oscillator neurons are silenced, the whiskers move very little. Image: Wang Lab

“New principles”

The oscillatory bursting pattern of vIRt cells is initiated at the start of whisking. When the whiskers are not moving, these neurons fire continuously. When the researchers blocked vIRt neurons from inhibiting each other, the rhythm disappeared, and instead the oscillator neurons simply increased their rate of continuous firing.

This type of network, known as a recurrent inhibitory network, differs from the oscillators that have been seen in the lobster stomatogastric ganglion, in which neurons intrinsically generate their own rhythm.

“Now we have found a mammalian network oscillator that is formed by all inhibitory neurons,” Wang says.

The MIT scientists also collaborated with a team of theorists led by David Golomb at Ben-Gurion University, Israel, and David Kleinfeld at the University of California at San Diego. The theorists created a detailed computational model outlining how whisking is controlled, which fits well with all experimental data. A paper describing that model is appearing in an upcoming issue of Neuron.

Wang’s lab now plans to investigate other types of oscillatory circuits in mice, including those that control chewing and licking.

“We are very excited to find oscillators of these feeding behaviors and compare and contrast to the whisking oscillator, because they are all in the brain stem, and we want to know whether there’s some common theme or if there are many different ways to generate oscillators,” she says.

The research was funded by the National Institutes of Health.

Microscopy technique reveals hidden nanostructures in cells and tissues

Inside a living cell, proteins and other molecules are often tightly packed together. These dense clusters can be difficult to image because the fluorescent labels used to make them visible can’t wedge themselves in between the molecules.

MIT researchers have now developed a novel way to overcome this limitation and make those “invisible” molecules visible. Their technique allows them to “de-crowd” the molecules by expanding a cell or tissue sample before labeling the molecules, which makes the molecules more accessible to fluorescent tags.

This method, which builds on a widely used technique known as expansion microscopy previously developed at MIT, should allow scientists to visualize molecules and cellular structures that have never been seen before.

“It’s becoming clear that the expansion process will reveal many new biological discoveries. If biologists and clinicians have been studying a protein in the brain or another biological specimen, and they’re labeling it the regular way, they might be missing entire categories of phenomena,” says Edward Boyden, the Y. Eva Tan Professor in Neurotechnology, a professor of biological engineering and brain and cognitive sciences at MIT, a Howard Hughes Medical Institute investigator, and a member of MIT’s McGovern Institute for Brain Research and Koch Institute for Integrative Cancer Research.

Using this technique, Boyden and his colleagues showed that they could image a nanostructure found in the synapses of neurons. They also imaged the structure of Alzheimer’s-linked amyloid beta plaques in greater detail than has been possible before.

“Our technology, which we named expansion revealing, enables visualization of these nanostructures, which previously remained hidden, using hardware easily available in academic labs,” says Deblina Sarkar, an assistant professor in the Media Lab and one of the lead authors of the study.

The senior authors of the study are Boyden; Li-Huei Tsai, director of MIT’s Picower Institute for Learning and Memory; and Thomas Blanpied, a professor of physiology at the University of Maryland. Other lead authors include Jinyoung Kang, an MIT postdoc, and Asmamaw Wassie, a recent MIT PhD recipient. The study appears today in Nature Biomedical Engineering.

De-crowding

Imaging a specific protein or other molecule inside a cell requires labeling it with a fluorescent tag carried by an antibody that binds to the target. Antibodies are about 10 nanometers long, while typical cellular proteins are usually about 2 to 5 nanometers in diameter, so if the target proteins are too densely packed, the antibodies can’t get to them.

This has been an obstacle to traditional imaging and also to the original version of expansion microscopy, which Boyden first developed in 2015. In the original version of expansion microscopy, researchers attached fluorescent labels to molecules of interest before they expanded the tissue. The labeling was done first, in part because the researchers had to use an enzyme to chop up proteins in the sample so the tissue could be expanded. This meant that the proteins couldn’t be labeled after the tissue was expanded.

To overcome that obstacle, the researchers had to find a way to expand the tissue while leaving the proteins intact. They used heat instead of enzymes to soften the tissue, allowing the tissue to expand 20-fold without being destroyed. Then, the separated proteins could be labeled with fluorescent tags after expansion.
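
A quick back-of-the-envelope calculation shows why physical expansion is so powerful (the ~300 nm figure below is an assumed typical diffraction limit for conventional light microscopy, not a number from the study):

```python
def effective_resolution_nm(diffraction_limit_nm=300.0, expansion_factor=20.0):
    """Expanding a sample by a linear factor moves features that were
    d nanometers apart to d * factor apart, so a conventional
    microscope's effective resolution improves by that same factor."""
    return diffraction_limit_nm / expansion_factor

# With 20-fold expansion, an assumed ~300 nm diffraction limit becomes
# an effective ~15 nm, comparable in scale to a ~10 nm antibody label.
```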

With so many more proteins accessible for labeling, the researchers were able to identify tiny cellular structures within synapses, the connections between neurons that are densely packed with proteins. They labeled and imaged seven different synaptic proteins, which allowed them to visualize, in detail, “nanocolumns” consisting of calcium channels aligned with other synaptic proteins. These nanocolumns, which are believed to help make synaptic communication more efficient, were first discovered by Blanpied’s lab in 2016.

“This technology can be used to answer a lot of biological questions about dysfunction in synaptic proteins, which are involved in neurodegenerative diseases,” Kang says. “Until now there has been no tool to visualize synapses very well.”

New patterns

The researchers also used their new technique to image amyloid beta, a peptide that forms plaques in the brains of Alzheimer’s patients. Using brain tissue from mice, the researchers found that amyloid beta forms periodic nanoclusters, which had not been seen before. These clusters of amyloid beta also include potassium channels. The researchers also found amyloid beta molecules that formed helical structures along axons.

“In this paper, we don’t speculate as to what that biology might mean, but we show that it exists. That is just one example of the new patterns that we can see,” says Margaret Schroeder, an MIT graduate student who is also an author of the paper.

Sarkar says that she is fascinated by the nanoscale biomolecular patterns that this technology unveils. “With a background in nanoelectronics, I have developed electronic chips that require extremely precise alignment, in the nanofab. But when I see that in our brain Mother Nature has arranged biomolecules with such nanoscale precision, that really blows my mind,” she says.

Boyden and his group members are now working with other labs to study cellular structures such as protein aggregates linked to Parkinson’s and other diseases. In other projects, they are studying pathogens that infect cells and molecules that are involved in aging in the brain. Preliminary results from these studies have also revealed novel structures, Boyden says.

“Time and time again, you see things that are truly shocking,” he says. “It shows us how much we are missing with classical unexpanded staining.”

The researchers are also working on modifying the technique so they can image up to 20 proteins at a time, and on adapting the process so that it can be used on human tissue samples.

Sarkar and her team, on the other hand, are developing tiny wirelessly powered nanoelectronic devices which could be distributed in the brain. They plan to integrate these devices with expansion revealing. “This can combine the intelligence of nanoelectronics with the nanoscopy prowess of expansion technology, for an integrated functional and structural understanding of the brain,” Sarkar says.

The research was funded by the National Institutes of Health, the National Science Foundation, the Ludwig Family Foundation, the JPB Foundation, the Open Philanthropy Project, John Doerr, Lisa Yang and the Tan-Yang Center for Autism Research at MIT, the U.S. Army Research Office, Charles Hieken, Tom Stocky, Kathleen Octavio, Lore McGovern, Good Ventures, and HHMI.

These neurons have food on the brain

A gooey slice of pizza. A pile of crispy French fries. Ice cream dripping down a cone on a hot summer day. When you look at any of these foods, a specialized part of your visual cortex lights up, according to a new study from MIT neuroscientists.

This newly discovered population of food-responsive neurons is located in the ventral visual stream, alongside populations that respond specifically to faces, bodies, places, and words. The unexpected finding may reflect the special significance of food in human culture, the researchers say.

“Food is central to human social interactions and cultural practices. It’s not just sustenance,” says Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience and a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines. “Food is core to so many elements of our cultural identity, religious practice, and social interactions, and many other things that humans do.”

The findings, based on an analysis of a large public database of human brain responses to a set of 10,000 images, raise many additional questions about how and why this neural population develops. In future studies, the researchers hope to explore how people’s responses to certain foods might differ depending on their likes and dislikes, or their familiarity with certain types of food.

MIT postdoc Meenakshi Khosla is the lead author of the paper, along with MIT research scientist N. Apurva Ratan Murty. The study appears today in the journal Current Biology.

Visual categories

More than 20 years ago, while studying the ventral visual stream, the part of the brain that recognizes objects, Kanwisher discovered cortical regions that respond selectively to faces. Later, she and other scientists discovered other regions that respond selectively to places, bodies, or words. Most of those areas were discovered when researchers specifically set out to look for them. However, that hypothesis-driven approach can limit what you end up finding, Kanwisher says.

“There could be other things that we might not think to look for,” she says. “And even when we find something, how do we know that that’s actually part of the basic dominant structure of that pathway, and not something we found just because we were looking for it?”

To try to uncover the fundamental structure of the ventral visual stream, Kanwisher and Khosla decided to analyze a large, publicly available dataset of full-brain functional magnetic resonance imaging (fMRI) responses from eight human subjects as they viewed thousands of images.

“We wanted to see when we apply a data-driven, hypothesis-free strategy, what kinds of selectivities pop up, and whether those are consistent with what had been discovered before. A second goal was to see if we could discover novel selectivities that either haven’t been hypothesized before, or that have remained hidden due to the lower spatial resolution of fMRI data,” Khosla says.

To do that, the researchers applied a mathematical method that allows them to discover neural populations that can’t be identified from traditional fMRI data. An fMRI image is made up of many voxels — three-dimensional units that represent a cube of brain tissue. Each voxel contains hundreds of thousands of neurons, and if some of those neurons belong to smaller populations that respond to one type of visual input, their responses may be drowned out by other populations within the same voxel.

The new analytical method, which Kanwisher’s lab has previously used on fMRI data from the auditory cortex, can tease out responses of neural populations within each voxel of fMRI data.
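
The paper’s analysis method is specialized, but the core idea, unmixing each voxel’s response into a small number of shared underlying components, can be sketched with a generic non-negative matrix factorization. Everything below, including the choice of NMF itself, is an illustrative stand-in rather than the study’s actual algorithm:

```python
import numpy as np

def nmf(X, k, n_iter=500, seed=0):
    """Factor a non-negative (voxels x images) response matrix as
    X ~ W @ H: W holds each voxel's mixture weights over k components
    and H holds each component's response profile across images.
    Uses Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(seed)
    W = rng.random((X.shape[0], k)) + 1e-3
    H = rng.random((k, X.shape[1])) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-9)
        W *= (X @ H.T) / (W @ (H @ H.T) + 1e-9)
    return W, H

# Toy demo: 60 "voxels", each a mixture of 3 latent response profiles
# measured over 40 images; the factorization recovers the mixture.
rng = np.random.default_rng(1)
X = rng.random((60, 3)) @ rng.random((3, 40))
W, H = nmf(X, k=3)
relative_error = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```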

Using this approach, the researchers found four populations that corresponded to previously identified clusters that respond to faces, places, bodies, and words. “That tells us that this method works, and it tells us that the things that we found before are not just obscure properties of that pathway, but major, dominant properties,” Kanwisher says.

Intriguingly, a fifth population also emerged, and this one appeared to be selective for images of food.

“We were first quite puzzled by this because food is not a visually homogenous category,” Khosla says. “Things like apples and corn and pasta all look so unlike each other, yet we found a single population that responds similarly to all these diverse food items.”

The food-specific population, which the researchers call the ventral food component (VFC), appears to be spread across two clusters of neurons, located on either side of the fusiform face area (FFA). The fact that the food-specific populations are spread out between other category-specific populations may help explain why they have not been seen before, the researchers say.

“We think that food selectivity had been harder to characterize before because the populations that are selective for food are intermingled with other nearby populations that have distinct responses to other stimulus attributes. The low spatial resolution of fMRI prevents us from seeing this selectivity because the responses of different neural populations get mixed in a voxel,” Khosla says.

“The technique which the researchers used to identify category-sensitive cells or areas is impressive, and it recovered known category-sensitive systems, making the food category findings most impressive,” says Paul Rozin, a professor of psychology at the University of Pennsylvania, who was not involved in the study. “I can’t imagine a way for the brain to reliably identify the diversity of foods based on sensory features. That makes this all the more fascinating, and likely to clue us in about something really new.”

Food vs non-food

The researchers also used the data to train a computational model of the VFC, based on previous models Murty had developed for the brain’s face and place recognition areas. This allowed the researchers to run additional experiments and predict the responses of the VFC. In one experiment, they fed the model matched images of food and non-food items that looked very similar — for example, a banana and a yellow crescent moon.

“Those matched stimuli have very similar visual properties, but the main attribute in which they differ is edible versus inedible,” Khosla says. “We could feed those arbitrary stimuli through the predictive model and see whether it would still respond more to food than non-food, without having to collect the fMRI data.”

They could also use the computational model to analyze much larger datasets, consisting of millions of images. Those simulations helped to confirm that the VFC is highly selective for images of food.

From their analysis of the human fMRI data, the researchers found that in some subjects, the VFC responded slightly more to processed foods such as pizza than unprocessed foods like apples. In the future they hope to explore how factors such as familiarity and like or dislike of a particular food might affect individuals’ responses to that food.

They also hope to study when and how this region becomes specialized during early childhood, and what other parts of the brain it communicates with. Another question is whether this food-selective population will be seen in other animals, such as monkeys, which do not attach the cultural significance to food that humans do.

The research was funded by the National Institutes of Health, the National Eye Institute, and the National Science Foundation through the MIT Center for Brains, Minds, and Machines.

MIT scientists discover new antiviral defense system in bacteria

Bacteria use a variety of defense strategies to fight off viral infection, and some of these systems have led to groundbreaking technologies, such as CRISPR-based gene-editing. Scientists predict there are many more antiviral weapons yet to be found in the microbial world.

A team led by researchers at the Broad Institute of MIT and Harvard and the McGovern Institute for Brain Research at MIT has discovered and characterized one of these unexplored microbial defense systems. They found that certain proteins in bacteria and archaea (together known as prokaryotes) detect viruses in surprisingly direct ways, recognizing key parts of the viruses and causing the single-celled organisms to commit suicide to quell the infection within a microbial community. The study is the first time this mechanism has been seen in prokaryotes and shows that organisms across all three domains of life — bacteria, archaea, and eukaryotes (which includes plants and animals) — use pattern recognition of conserved viral proteins to defend against pathogens.

The study appears in Science.

“This work demonstrates a remarkable unity in how pattern recognition occurs across very different organisms,” said senior author Feng Zhang, who is a core institute member at the Broad, the James and Patricia Poitras Professor of Neuroscience at MIT, a professor of brain and cognitive sciences and biological engineering at MIT, and an investigator at MIT’s McGovern Institute and the Howard Hughes Medical Institute. “It’s been very exciting to integrate genetics, bioinformatics, biochemistry, and structural biology approaches in one study to understand this fascinating molecular system.”

Microbial armory

In an earlier study, the researchers scanned data on the DNA sequences of hundreds of thousands of bacteria and archaea, which revealed several thousand genes harboring signatures of microbial defense. In the new study, they homed in on a handful of these genes encoding enzymes that are members of the STAND ATPase family of proteins, which in eukaryotes are involved in the innate immune response.

In humans and plants, the STAND ATPase proteins fight infection by recognizing patterns in a pathogen itself or in the cell’s response to infection. In the new study, the researchers wanted to know if the proteins work the same way in prokaryotes to defend against infection. The team chose a few STAND ATPase genes from the earlier study, delivered them to bacterial cells, and challenged those cells with bacteriophage viruses. The cells underwent a dramatic defensive response and survived.

The scientists next wondered which part of the bacteriophage triggers that response, so they delivered viral genes to the bacteria one at a time. Two viral proteins elicited an immune response: the portal, a part of the virus’s capsid shell, which contains viral DNA; and the terminase, the molecular motor that helps assemble the virus by pushing the viral DNA into the capsid. Each of these viral proteins activated a different STAND ATPase to protect the cell.

The finding was striking and unprecedented. Most known bacterial defense systems work by sensing viral DNA or RNA, or cellular stress due to the infection. These bacterial proteins were instead directly sensing key parts of the virus.

The team next showed that bacterial STAND ATPase proteins could recognize diverse portal and terminase proteins from different phages. “It’s surprising that bacteria have these highly versatile sensors that can recognize all sorts of different phage threats that they might encounter,” said co-first author Linyi Gao, a junior fellow in the Harvard Society of Fellows and a former graduate student in the Zhang lab.

Structural analysis

For a detailed look at how the microbial STAND ATPases detect the viral proteins, the researchers used cryo-electron microscopy to examine their molecular structure when bound to the viral proteins. “By analyzing the structure, we were able to precisely answer a lot of the questions about how these things actually work,” said co-first author Max Wilkinson, a postdoctoral researcher in the Zhang lab.

The team saw that the portal or terminase protein from the virus fits within a pocket in the STAND ATPase protein, with each STAND ATPase protein grasping one viral protein. The STAND ATPase proteins then group together in sets of four known as tetramers, which brings together key parts of the bacterial proteins called effector domains. This activates the proteins’ endonuclease function, shredding cellular DNA and killing the cell.

The tetramers bound viral proteins from other bacteriophages just as tightly, demonstrating that the STAND ATPases sense the viral proteins’ three-dimensional shape, rather than their sequence. This helps explain how one STAND ATPase can recognize dozens of different viral proteins. “Regardless of sequence, they all fit like a hand in a glove,” said Wilkinson.

STAND ATPases in humans and plants also work by forming multi-unit complexes that activate specific functions in the cell. “That’s the most exciting part of this work,” said co-first author Jonathan Strecker, a postdoctoral researcher in the Zhang lab. “To see this across the domains of life is unprecedented.”

The research was funded in part by the National Institutes of Health, the Howard Hughes Medical Institute, Open Philanthropy, the Edward Mallinckrodt, Jr. Foundation, the Poitras Center for Psychiatric Disorders Research, the Hock E. Tan and K. Lisa Yang Center for Autism Research, the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics in Neuroscience, the Phillips family, J. and P. Poitras, and the BT Charitable Foundation.

Why do we dream?

As part of our Ask the Brain series, science writer Shafaq Zia answers the question, “Why do we dream?”

_____

As the story goes, Albert Einstein once dreamt that he was walking through a farm where he found a herd of cows against an electric fence. When the farmer switched on the fence, the cows suddenly jumped back, all at the same time. But to the farmer, who was standing at the other end of the field, they seemed to have jumped one after another, in a wave formation. Einstein woke up, and the Theory of Relativity was born.

Dreaming is one of the oldest biological phenomena; for as long as humans have slept, they’ve dreamt. But through most of our history, dreams have remained a mystery, leaving scientists, philosophers, and artists alike searching for meaning.

In many aboriginal cultures, such as the Ese’Eja community in the Peruvian Amazon, dreaming is a sacred practice for gaining knowledge, or solving a problem, through the dream narrative. But in the last century or so, technological advancements have allowed neuroscientists to take up dreams as a matter of scientific inquiry in order to answer a much-pondered question — what is the purpose of dreaming?

Falling asleep

The human brain is a fascinating place. It is composed of approximately 80 billion neurons, and it is their combined electrical chatter that generates oscillations known as brain waves. There are five types of brain waves — alpha, beta, theta, delta, and gamma — each indicating a different state along the spectrum between sleep and wakefulness.

Using EEG, a test that records electrical activity in the brain, scientists have identified that when we’re awake, our brain emits beta and gamma waves. These tend to have a stimulating effect and help us remain actively engaged in mental activities.

The differently named frequency bands of neural oscillations, or brainwaves: delta, theta, alpha, beta, and gamma.
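
The conventional (approximate) frequency boundaries of these bands can be captured in a small lookup table; the cutoffs below are common textbook values, which vary somewhat from lab to lab, and are not taken from this article:

```python
# Approximate, conventional EEG band boundaries in Hz (labs vary).
BANDS_HZ = {
    "delta": (0.5, 4.0),
    "theta": (4.0, 8.0),
    "alpha": (8.0, 12.0),
    "beta": (12.0, 30.0),
    "gamma": (30.0, 100.0),
}

def classify_band(freq_hz):
    """Return the name of the band containing freq_hz, or None if it
    falls outside the conventional ranges."""
    for name, (lo, hi) in BANDS_HZ.items():
        if lo <= freq_hz < hi:
            return name
    return None
```

For example, the ~10 Hz rhythms linked to relaxed attention fall in the alpha band, while active mental engagement shows up in the beta and gamma ranges.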

But during the transition to sleep, the number of beta waves lowers significantly and the brain produces high levels of alpha waves. These waves regulate attention and help filter out distractions. A recent study led by McGovern Institute Director Robert Desimone showed that people can actually enhance their attention by controlling their own alpha brain waves using neurofeedback training. It’s unknown how long these effects might last and whether this kind of control could be achieved with other types of brain waves, but the researchers are now planning additional studies to explore these questions.

Alpha waves are also produced when we daydream, meditate, or listen to the sound of rain. As our minds wander, many parts of the brain are engaged, including a specialized system called the “default mode network.” Disturbances in this network, explains Susan Whitfield-Gabrieli, a professor of psychology at Northeastern University and a McGovern Institute research affiliate, have been linked to various brain disorders including schizophrenia, depression and ADHD. By identifying the brain circuits associated with mind wandering, she says, we can begin to develop better treatment options for people suffering from these disorders.

Finally, as we enter a dreamlike state, the prefrontal cortex of the brain, responsible for keeping impulses in check, slowly grows less active. This is when there’s a surge in theta waves that opens an unconstrained window of consciousness; there is little censorship from the mind, allowing for visceral dreams and creative thoughts.

The dreaming brain

“Every time you learn something, it happens so quickly,” said Dheeraj Roy, postdoctoral fellow in Guoping Feng’s lab at the McGovern Institute. “The brain is continuously recording information, but how do you take a break and then make sense of it all?”

This is where dreams come in, says Roy. During sleep, newly formed memories are gradually stabilized into a more permanent form of long-term storage in the brain. Dreaming, he says, is influenced by the consolidation of these memories during sleep. Most dreams are made up of experiences, thoughts, emotions, places, and people we have already encountered in our lives. But, during dreaming, bits and pieces of these memories seem to be reorganized to create a particularly bizarre scenario: you’re talking to your sister when it suddenly begins to rain roses and you’re dancing at a New Year’s party.

This reorganization may not be so random; as the brain processes memories, it pulls together the ones that seem related to each other. Perhaps you dreamt of your sister because you were at a store recently where a candle smelled like her rose-scented perfume, which reminded you of the time you made a New Year’s resolution to spend less money on flowers.

Some brain disorders, like Parkinson’s disease, have been associated with vivid, unpleasant dreams and erratic brain wave patterns. Researchers at the McGovern Institute hope that a better understanding of the mechanics of the brain – including neural circuits and brain waves – will help people with Parkinson’s and other brain disorders.

So perhaps dreams aren’t instilled with meaning, symbolism, and wisdom in the way we’ve always imagined, and they simply reflect important biological processes taking place in our brain. But with all that science has uncovered about dreaming and the ways in which it links to creativity and memory, the magical essence of this universal human experience remains untainted.

_____

Do you have a question for The Brain? Ask it here.