Can I rewire my brain?

As part of our Ask the Brain series, Halie Olson, a graduate student in the labs of John Gabrieli and Rebecca Saxe, pens her answer to the question, “Can I rewire my brain?”

_____

Yes, kind of, sometimes – it all depends on what you mean by “rewiring” the brain.

Halie Olson, a graduate student in the Gabrieli and Saxe labs.

If you’re asking whether you can remove all memories of your ex from your head, then no. (That’s probably for the best – just watch Eternal Sunshine of the Spotless Mind.) However, if you’re asking whether you can teach a dog new tricks – that have a physical implementation in the brain – then yes.

To embrace the analogy that “rewiring” alludes to, let’s imagine you live in an old house with outlets in less-than-optimal locations. You really want your brand-new TV to be plugged in on the far side of the living room, but there is no outlet to be found. So you call up your electrician, she pops over, and moves some wires around in the living room wall to give you a new outlet. No sweat!

Local changes in neural connectivity happen throughout the lifespan. With over 100 billion neurons and 100 trillion connections – or synapses – between these neurons in the adult human brain, it is unsurprising that some pathways end up being more important than others. When we learn something new, the connections between relevant neurons communicating with each other are strengthened. To paraphrase Donald Hebb, one of the most influential psychologists of the twentieth century, “neurons that fire together, wire together” – by forming new synapses or more efficiently connecting the ones that are already there. This ability to rewire neural connections at a local level is a key feature of the brain, enabling us to tailor our neural infrastructure to our needs.
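
As a concrete illustration of Hebb’s rule, the sketch below updates a single synaptic weight in proportion to coincident pre- and postsynaptic activity. The learning rate, firing rates, and two-neuron setup are invented for the example; this is a cartoon of the principle, not a model from any study mentioned here.

```python
import numpy as np

rng = np.random.default_rng(0)
learning_rate = 0.01          # assumed value, for illustration only
w = 0.0                       # strength of the synapse from neuron A to neuron B

for _ in range(1000):
    pre = rng.random()                        # firing rate of neuron A (0 to 1)
    post = 0.8 * pre + 0.2 * rng.random()     # neuron B tends to fire when A fires
    w += learning_rate * pre * post           # "fire together, wire together"

print(f"synaptic weight after 1000 steps of correlated firing: {w:.2f}")
```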

We can also see this plasticity in the brain at a larger scale. My favorite example of “rewiring” in the brain is when children learn to read. Our brains did not evolve to enable us to read – there is no built-in “reading region” that magically comes online when a child enters school. However, if you stick a proficient reader in an MRI scanner, you will see a region in the left lateral occipitotemporal sulcus (that is, the back bottom left of your cortex) that is particularly active when you read written text. Before children learn to read, this region – known as the visual word form area – is not exceptionally interested in words, but as children get acquainted with written language and start connecting letters with sounds, it becomes selective for familiar written language – no matter the font, CaPItaLIZation, or size.

Now, let’s say that you wake up in the middle of the night with a desire to move your oven and stovetop from the kitchen into your swanky new living room with the TV. You call up your electrician – she tells you this is impossible, and to stop calling her in the middle of the night.

Similarly, your brain comes with a particular infrastructure – a floorplan, let’s call it – that cannot be easily adjusted when you are an adult. Large lesions tend to have large consequences. For instance, an adult who suffers a serious stroke in their left hemisphere will likely struggle with language, a condition called aphasia. Young children’s brains, on the other hand, can sometimes rewire in profound ways. An entire half of the brain can be damaged early on with minimal functional consequences. So if you’re going for a remodel? Better do it really early.

Plasticity in our brain allows us to learn, adjust, and thrive in our environments. It also gives neuroscientists like me something to study – since clearly I would fail as an electrician.

Halie Olson earned her bachelor’s degree in neurobiology from Harvard College in 2017. She is currently a graduate student in MIT’s Department of Brain and Cognitive Sciences working with John Gabrieli and Rebecca Saxe. She studies how early life experiences and environments impact brain development, particularly in the context of reading and language, and what this means for children’s educational outcomes.

_____

Do you have a question for The Brain? Ask it here.

Hearing through the clatter

In a busy coffee shop, our eardrums are inundated with sound waves – people chatting, the clatter of cups, music playing – yet our brains somehow manage to untangle relevant sounds, like a barista announcing that our “coffee is ready,” from insignificant noise. A new McGovern Institute study sheds light on how the brain accomplishes the task of extracting meaningful sounds from background noise – findings that could one day help to build artificial hearing systems and aid development of targeted hearing prosthetics.

“These findings reveal a neural correlate of our ability to listen in noise, and at the same time demonstrate functional differentiation between different stages of auditory processing in the cortex,” explains Josh McDermott, an associate professor of brain and cognitive sciences at MIT, a member of the McGovern Institute and the Center for Brains, Minds and Machines, and the senior author of the study.

The auditory cortex, a part of the brain that responds to sound, has long been known to have distinct anatomical subregions, but the role these areas play in auditory processing has remained a mystery. In their study published today in Nature Communications, McDermott and former graduate student Alex Kell discovered that these subregions respond differently to the presence of background noise, suggesting that auditory processing occurs in steps that progressively home in on and isolate a sound of interest.

Background check

Previous studies have shown that the primary and non-primary subregions of the auditory cortex respond to sound with different dynamics, but these studies were largely based on brain activity in response to speech or simple synthetic sounds (such as tones and clicks). Little was known about how these regions might work to subserve everyday auditory behavior.

To test these subregions under more realistic conditions, McDermott and Kell, who is now a postdoctoral researcher at Columbia University, assessed changes in human brain activity while subjects listened to natural sounds with and without background noise.

While lying in an MRI scanner, subjects listened to 30 different natural sounds, ranging from meowing cats to ringing phones, that were presented alone or embedded in real-world background noise such as heavy rain.

“When I started studying audition,” explains Kell, “I started just sitting around in my day-to-day life, just listening, and was astonished at the constant background noise that seemed to usually be filtered out by default. Most of these noises tended to be pretty stable over time, suggesting we could experimentally separate them. The project flowed from there.”

To their surprise, Kell and McDermott found that the primary and non-primary regions of the auditory cortex responded differently to natural sound depending upon whether background noise was present.

Primary auditory cortex (outlined in white) responses change (blue) when background noise is present, whereas non-primary activity is robust to background noise (yellow). Image: Alex Kell

They found that activity in the primary auditory cortex was altered when background noise was present, suggesting that this region had not yet differentiated meaningful sounds from background noise. Non-primary regions, however, responded similarly to natural sounds whether or not noise was present, suggesting that cortical signals generated by sound are transformed, or “cleaned up,” to remove background noise by the time they reach the non-primary auditory cortex.
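
One way to picture this comparison is sketched below: for each region, correlate its responses to the same set of sounds presented with and without background noise. A high correlation means the region’s responses are “robust” to noise. The response values and region labels here are invented for illustration; this is not the authors’ analysis code.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sounds = 30                                   # 30 natural sounds, as in the study

# Hypothetical response vectors (one value per sound) for two regions.
clean = rng.random(n_sounds)                    # responses to sounds presented alone
primary_noisy = 0.4 * clean + 0.6 * rng.random(n_sounds)       # altered by noise
nonprimary_noisy = 0.95 * clean + 0.05 * rng.random(n_sounds)  # nearly unchanged

def noise_robustness(clean_resp, noisy_resp):
    """Correlation between clean and noisy responses: closer to 1 = more robust."""
    return np.corrcoef(clean_resp, noisy_resp)[0, 1]

print("primary-like region:    ", round(noise_robustness(clean, primary_noisy), 2))
print("non-primary-like region:", round(noise_robustness(clean, nonprimary_noisy), 2))
```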

“We were surprised by how big the difference was between primary and non-primary areas,” explained Kell, “so we ran a bunch more subjects but kept seeing the same thing. We had a ton of questions about what might be responsible for this difference, and that’s why we ended up running all these follow-up experiments.”

A general principle

Kell and McDermott went on to test whether these responses were specific to particular sounds, and discovered that the effect held no matter the source or type of sound. Music, speech, or a squeaky toy all activated the non-primary cortex region similarly, whether or not background noise was present.

The authors also tested whether attention is relevant. Even when the researchers sneakily distracted subjects with a visual task in the scanner, the cortical subregions responded to meaningful sound and background noise in the same way, showing that attention is not driving this aspect of sound processing. In other words, even when we are focused on reading a book, our brain is diligently sorting the sound of our meowing cat from the patter of heavy rain outside.

Future directions

The McDermott lab is now building computational models of the so-called “noise robustness” found in the Nature Communications study and Kell is pursuing a finer-grained understanding of sound processing in his postdoctoral work at Columbia, by exploring the neural circuit mechanisms underlying this phenomenon.

By gaining a deeper understanding of how the brain processes sound, the researchers hope their work will contribute to improved diagnosis and treatment of hearing dysfunction. Such research could help to reveal the origins of listening difficulties that accompany developmental disorders or age-related hearing loss. For instance, if hearing loss results from dysfunction in sensory processing, this could manifest as abnormal noise robustness in the auditory cortex. Normal noise robustness might instead suggest that the impairment lies elsewhere in the brain, for example a breakdown in higher executive function.

“In the future,” McDermott says, “we hope these noninvasive measures of auditory function may become valuable tools for clinical assessment.”

Benefits of mindfulness for middle schoolers

Two new studies from investigators at the McGovern Institute at MIT suggest that mindfulness — the practice of focusing one’s awareness on the present moment — can enhance academic performance and mental health in middle schoolers. The researchers found that more mindfulness correlates with better academic performance, fewer suspensions from school, and less stress.

“By definition, mindfulness is the ability to focus attention on the present moment, as opposed to being distracted by external things or internal thoughts. If you’re focused on the teacher in front of you, or the homework in front of you, that should be good for learning,” says John Gabrieli, the Grover M. Hermann Professor in Health Sciences and Technology, a professor of brain and cognitive sciences, and a member of MIT’s McGovern Institute for Brain Research.

The researchers also showed, for the first time, that mindfulness training can alter brain activity in students. Sixth-graders who received mindfulness training not only reported feeling less stressed, but their brain scans revealed reduced activation of the amygdala, a brain region that processes fear and other emotions, when they viewed images of fearful faces.

Together, the findings suggest that offering mindfulness training in schools could benefit many students, says Gabrieli, who is the senior author of both studies.

“We think there is a reasonable possibility that mindfulness training would be beneficial for children as part of the daily curriculum in their classroom,” he says. “What’s also appealing about mindfulness is that there are pretty well-established ways of teaching it.”

In the moment

Both studies were performed at charter schools in Boston. In one of the papers, which appears today in the journal Behavioral Neuroscience, the MIT team studied about 100 sixth-graders. Half of the students received mindfulness training every day for eight weeks, while the other half took a coding class. The mindfulness exercises were designed to encourage students to pay attention to their breath, and to focus on the present moment rather than thoughts of the past or the future.

Students who received the mindfulness training reported that their stress levels went down after the training, while the students in the control group did not. Students in the mindfulness training group also reported fewer negative feelings, such as sadness or anger, after the training.

About 40 of the students also participated in brain imaging studies before and after the training. The researchers measured activity in the amygdala as the students looked at pictures of faces expressing different emotions.

At the beginning of the study, before any training, students who reported higher stress levels showed more amygdala activity when they saw fearful faces. This is consistent with previous research showing that the amygdala can be overactive in people who experience more stress, leading them to have stronger negative reactions to adverse events.

“There’s a lot of evidence that an overly strong amygdala response to negative things is associated with high stress in early childhood and risk for depression,” Gabrieli says.

After the mindfulness training, students showed a smaller amygdala response when they saw the fearful faces, consistent with their reports that they felt less stressed. This suggests that mindfulness training could potentially help prevent or mitigate mood disorders linked with higher stress levels, the researchers say.

Richard Davidson, a professor of psychology and psychiatry at the University of Wisconsin, says that the findings suggest there could be great benefit to implementing mindfulness training in middle schools.

“This is really one of the very first rigorous studies with children of that age to demonstrate behavioral and neural benefits of a simple mindfulness training,” says Davidson, who was not involved in the study.

Evaluating mindfulness

In the other paper, which appeared in the journal Mind, Brain, and Education in June, the researchers did not perform any mindfulness training but used a questionnaire to evaluate mindfulness in more than 2,000 students in grades 5-8. The questionnaire was based on the Mindfulness Attention Awareness Scale, which is often used in mindfulness studies on adults. Participants are asked to rate how strongly they agree with statements such as “I rush through activities without being really attentive to them.”

The researchers compared the questionnaire results with students’ grades, their scores on statewide standardized tests, their attendance rates, and the number of times they had been suspended from school. Students who showed more mindfulness tended to have better grades and test scores, as well as fewer absences and suspensions.

“People had not asked that question in any quantitative sense at all, as to whether a more mindful child is more likely to fare better in school,” Gabrieli says. “This is the first paper that says there is a relationship between the two.”

The researchers now plan to do a full school-year study, with a larger group of students across many schools, to examine the longer-term effects of mindfulness training. Shorter programs like the two-month training used in the Behavioral Neuroscience study would most likely not have a lasting impact, Gabrieli says.

“Mindfulness is like going to the gym. If you go for a month, that’s good, but if you stop going, the effects won’t last,” he says. “It’s a form of mental exercise that needs to be sustained.”

The research was funded by the Walton Family Foundation, the Poitras Center for Psychiatric Disorders Research at the McGovern Institute for Brain Research, and the National Council of Science and Technology of Mexico. Camila Caballero ’13, now a graduate student at Yale University, is the lead author of the Mind, Brain, and Education study. Caballero and MIT postdoc Clemens Bauer are lead authors of the Behavioral Neuroscience study. Additional collaborators were from the Harvard Graduate School of Education, Transforming Education, Boston Collegiate Charter School, and Calmer Choice.

A new way to deliver drugs with pinpoint targeting

Most pharmaceuticals must either be ingested or injected into the body to do their work. Either way, it takes some time for them to reach their intended targets, and they also tend to spread out to other areas of the body. Now, researchers at the McGovern Institute at MIT and elsewhere have developed a minimally invasive system to deliver medical treatments that can be released at precise times and that ultimately could also deliver those drugs to specifically targeted areas, such as a specific group of neurons in the brain.

The new approach is based on the use of tiny magnetic particles enclosed within a hollow bubble of lipids (fatty molecules) filled with water, known as a liposome. The drug of choice is encapsulated within these bubbles, and can be released by applying a magnetic field to heat up the particles, allowing the drug to escape from the liposome and into the surrounding tissue.

The findings are reported today in the journal Nature Nanotechnology in a paper by MIT postdoc Siyuan Rao, Associate Professor Polina Anikeeva, and 14 others at MIT, Stanford University, Harvard University, and the Swiss Federal Institute of Technology in Zurich.

“We wanted a system that could deliver a drug with temporal precision, and could eventually target a particular location,” Anikeeva explains. “And if we don’t want it to be invasive, we need to find a non-invasive way to trigger the release.”

Magnetic fields, which can easily penetrate through the body — as demonstrated by detailed internal images produced by magnetic resonance imaging, or MRI — were a natural choice. The hard part was finding materials that could be triggered to heat up by using a very weak magnetic field (about one-hundredth the strength of that used for MRI), in order to prevent damage to the drug or surrounding tissues, Rao says.

Rao came up with the idea of taking magnetic nanoparticles, which had already been shown to be capable of being heated by placing them in a magnetic field, and packing them into these spheres called liposomes. These are like little bubbles of lipids, which naturally form a spherical double layer surrounding a water droplet.

Electron microscope image shows the actual liposome, the white blob at center, with its magnetic particles showing up in black at its center.
Image courtesy of the researchers

When placed inside a high-frequency but low-strength magnetic field, the nanoparticles heat up, warming the lipids and making them undergo a transition from solid to liquid, which makes the layer more porous — just enough to let some of the drug molecules escape into the surrounding areas. When the magnetic field is switched off, the lipids re-solidify, preventing further releases. Over time, this process can be repeated, thus releasing doses of the enclosed drug at precisely controlled intervals.

The drug carriers were engineered to be stable inside the body at the normal body temperature of 37 degrees Celsius, but able to release their payload of drugs at a temperature of 42 degrees. “So we have a magnetic switch for drug delivery,” and that amount of heat is small enough “so that you don’t cause thermal damage to tissues,” says Anikeeva, who also holds appointments in the departments of Materials Science and Engineering and the Brain and Cognitive Sciences.
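
As a toy illustration of this on/off switch, the sketch below steps through time, warms a simulated particle while a field is applied, and releases drug only above the melting transition. The 37 °C and 42 °C figures come from the article; every other number (heating and cooling rates, release rate, pulse timing) is made up for the example.

```python
# Toy model of magnetically triggered release: the field heats the particles,
# and drug escapes only while the liposome is above its melting transition.
BODY_TEMP = 37.0        # degrees C, from the article
RELEASE_TEMP = 42.0     # degrees C, from the article
HEAT_RATE = 1.0         # assumed degrees C per minute while the field is on
COOL_RATE = 2.0         # assumed degrees C per minute while the field is off
RELEASE_RATE = 0.05     # assumed fraction of remaining drug released per minute

field_on = set(range(10, 20)) | set(range(40, 50))   # two hypothetical field pulses
temp, drug_left = BODY_TEMP, 1.0
release_minutes = []

for minute in range(60):
    if minute in field_on:
        temp = min(temp + HEAT_RATE, RELEASE_TEMP + 1.0)
    else:
        temp = max(temp - COOL_RATE, BODY_TEMP)
    if temp >= RELEASE_TEMP:                 # lipids melt, membrane becomes porous
        drug_left -= drug_left * RELEASE_RATE
        release_minutes.append(minute)

print(f"release occurred during {len(release_minutes)} minutes; "
      f"{1 - drug_left:.2f} of the drug was delivered")
```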

In principle, this technique could also be used to guide the particles to specific, pinpoint locations in the body, using gradients of magnetic fields to push them along, but that aspect of the work is an ongoing project. For now, the researchers have been injecting the particles directly into the target locations, and using the magnetic fields to control the timing of drug releases. “The technology will allow us to address the spatial aspect,” Anikeeva says, but that has not yet been demonstrated.

This could enable very precise treatments for a wide variety of conditions, she says. “Many brain disorders are characterized by erroneous activity of certain cells. When neurons are too active or not active enough, that manifests as a disorder, such as Parkinson’s, or depression, or epilepsy.” If a medical team wanted to deliver a drug to a specific patch of neurons and at a particular time, such as when an onset of symptoms is detected, without subjecting the rest of the brain to that drug, this system “could give us a very precise way to treat those conditions,” she says.

Rao says that making these nanoparticle-activated liposomes is actually quite a simple process. “We can prepare the liposomes with the particles within minutes in the lab,” she says, and the process should be “very easy to scale up” for manufacturing. And the system is broadly applicable for drug delivery: “we can encapsulate any water-soluble drug,” and with some adaptations, other drugs as well, she says.

One key to developing this system was perfecting and calibrating a way of making liposomes of a highly uniform size and composition. This involves mixing a water base with the fatty acid lipid molecules and magnetic nanoparticles and homogenizing them under precisely controlled conditions. Anikeeva compares it to shaking a bottle of salad dressing to get the oil and vinegar mixed, but controlling the timing, direction, and strength of the shaking to ensure precise mixing.

Anikeeva says that while her team has focused on neurological disorders, as that is their specialty, the drug delivery system is actually quite general and could be applied to almost any part of the body, for example to deliver cancer drugs, or even to deliver painkillers directly to an affected area instead of delivering them systemically and affecting the whole body. “This could deliver it to where it’s needed, and not deliver it continuously,” but only as needed.

Because the magnetic particles themselves are similar to those already in widespread use as contrast agents for MRI scans, the regulatory approval process for their use may be simplified, as their biological compatibility has largely been proven.

The team included researchers in MIT’s departments of Materials Science and Engineering and Brain and Cognitive Sciences, as well as the McGovern Institute for Brain Research, the Simons Center for Social Brain, and the Research Laboratory of Electronics; the Harvard University Department of Chemistry and Chemical Biology and the John A. Paulson School of Engineering and Applied Sciences; Stanford University; and the Swiss Federal Institute of Technology in Zurich. The work was supported by the Simons Postdoctoral Fellowship, the U.S. Defense Advanced Research Projects Agency, the Bose Research Grant, and the National Institutes of Health.

Call for Nominations: 2020 Scolnick Prize in Neuroscience

The McGovern Institute is now accepting nominations for the Scolnick Prize in Neuroscience, which recognizes an outstanding discovery or significant advance in any field of neuroscience. Nominations are due by December 15, 2019.

About the Scolnick Prize

The prize is named in honor of Edward M. Scolnick, who stepped down as president of Merck Research Laboratories in December 2002 after holding Merck’s top research post for 17 years. The prize, which is endowed through a gift from Merck to the McGovern Institute, consists of a $150,000 award, plus an inscribed gift. The recipient presents a public lecture at MIT, hosted by the McGovern Institute and followed by a dinner in Spring 2020.

Nomination Process

Candidates for the award must be nominated by individuals affiliated with universities, hospitals, medical schools, or research institutes, with a background in neuroscience. Self-nomination is not permitted. Each nomination should include a biosketch or CV of the nominee and a letter of nomination with a summary and analysis of the nominee’s major contributions to the field of neuroscience. Up to two representative reprints will be accepted. The winner, selected by a committee appointed by the director of the McGovern Institute, will be announced in January 2020.

More information about the Scolnick Prize, including details about the nomination process, selection committee, and past Scolnick Prize recipients, can be found on our website.

Finding the brain’s compass

The world is constantly bombarding our senses with information, but the ways in which our brain extracts meaning from this information remain elusive. How do neurons transform raw visual input into a mental representation of an object – like a chair or a dog?

In work published today in Nature Neuroscience, MIT neuroscientists have identified a brain circuit in mice that distills “high-dimensional” complex information about the environment into a simple abstract object in the brain.

“There are no degree markings in the external world; our current head direction has to be extracted, computed, and estimated by the brain,” explains Ila Fiete, an associate member of the McGovern Institute and senior author of the paper. “The approaches we used allowed us to demonstrate the emergence of a low-dimensional concept, essentially an abstract compass in the brain.”

This abstract compass, according to the researchers, is a one-dimensional ring that represents the current direction of the head relative to the external world.

Schooling fish

Trying to show that a data cloud has a simple shape, like a ring, is a bit like watching a school of fish. By tracking one or two sardines, you might not see a pattern. But if you could map all of the sardines, and transform the noisy dataset into points representing the positions of the whole school of sardines over time, and where each fish is relative to its neighbors, a pattern would emerge. This model would reveal a ring shape, a simple shape formed by the activity of hundreds of individual fish.

Fiete, who is also an associate professor in MIT’s Department of Brain and Cognitive Sciences, used a similar approach, called topological modeling, to transform the activity of large populations of noisy neurons into a data cloud the shape of a ring.
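
The sketch below is a hedged illustration of that general approach, not the study’s actual pipeline: it simulates a population of noisy head-direction-tuned neurons, treats each moment’s population activity as a point in a high-dimensional space, and projects the cloud into two dimensions, where it traces out a ring. The tuning curves, noise level, and use of PCA are all assumptions made for the example.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
n_neurons, n_timepoints = 50, 2000

# Simulated neurons, each tuned to a different preferred head direction.
preferred = np.linspace(0, 2 * np.pi, n_neurons, endpoint=False)

# A slowly turning head direction: a biased random walk covering several full turns.
head_direction = np.cumsum(rng.normal(0.02, 0.1, n_timepoints)) % (2 * np.pi)

# Population activity: each neuron fires most near its preferred direction, plus noise.
activity = np.exp(2 * np.cos(head_direction[:, None] - preferred[None, :]))
activity += rng.normal(0, 0.5, activity.shape)

# Each timepoint is one point in a 50-dimensional cloud; project it into 2D.
embedding = PCA(n_components=2).fit_transform(activity)

# If the cloud is ring-shaped, points sit at a roughly constant distance from the center.
radii = np.linalg.norm(embedding - embedding.mean(axis=0), axis=1)
print(f"radius spread (std / mean): {radii.std() / radii.mean():.2f}")   # small value = ring-like
```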

Simple and persistent ring

Previous work in fly brains revealed a physical ellipsoid ring of neurons representing changes in the direction of the fly’s head, and researchers suspected that such a system might also exist in mammals.

In this new mouse study, Fiete and her colleagues measured hours of neural activity from scores of neurons in the anterodorsal thalamic nucleus (ADN) – a region believed to play a role in spatial navigation – as the animals moved freely around their environment. They mapped how the neurons in the ADN circuit fired as the animal’s head changed direction.

Together these data points formed a cloud in the shape of a simple and persistent ring.

“In the absence of this ring,” Fiete explains, “we would be lost in the world.”

“This tells us a lot about how neural networks are organized in the brain,” explains Edvard Moser, Director of the Kavli Institute for Systems Neuroscience in Norway, who was not involved in the study. “Past data have indirectly pointed towards such a ring-like organization, but only now has it been possible, with the right cell numbers and methods, to demonstrate it convincingly,” says Moser.

Their method for characterizing the shape of the data cloud allowed Fiete and colleagues to determine which variable the circuit was devoted to representing, and to decode this variable over time, using only the neural responses.

“The animal’s doing really complicated stuff,” explains Fiete, “but this circuit is devoted to integrating the animal’s speed along a one-dimensional compass that encodes head direction. Without a manifold approach, which captures the whole state space, you wouldn’t know that this circuit of thousands of neurons is encoding only this one aspect of the complex behavior, and not encoding any other variables at the same time.”

Even during sleep, when the circuit is not being bombarded with external information, this circuit robustly traces out the same one-dimensional ring, as if dreaming of past head direction trajectories.

Further analysis revealed that the ring acts as an attractor. If neurons stray off trajectory, they are drawn back to it, quickly correcting the system. This attractor property means that the representation of head direction in abstract space is reliably stable over time, a key requirement for maintaining a stable sense of where our head is pointing relative to the world around us.
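
Here is a minimal sketch of that error-correcting behavior, with invented connectivity and parameters rather than the model or analysis from the paper: a bump of activity on a ring of model neurons is knocked off the ring with noise, and locally excitatory, broadly inhibitory recurrent connections pull the population back to a clean bump in roughly the same place.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100                                         # model neurons arranged on a ring
angles = np.linspace(0, 2 * np.pi, n, endpoint=False)

# Assumed connectivity: nearby neurons excite each other, distant neurons inhibit.
diff = angles[:, None] - angles[None, :]
W = np.exp(3 * np.cos(diff)) / np.exp(3) - 0.3

# A clean bump of activity encoding one head direction, then perturbed by noise.
bump = np.exp(3 * np.cos(angles - np.pi)) / np.exp(3)
state = bump + rng.normal(0, 0.3, n)

for _ in range(50):
    state = np.maximum(W @ state / n, 0)        # local excitation + inhibition, rectified
    state = state / state.sum() * bump.sum()    # global normalization keeps total activity fixed

print(f"correlation with the original clean bump: {np.corrcoef(state, bump)[0, 1]:.2f}")
```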

Shaping the future

Fiete’s work provides a first glimpse into how complex sensory information is distilled into a simple concept in the mind, and how that representation autonomously corrects errors, making it exquisitely stable.

But the implications of this study go beyond coding of head direction.

“Similar organization is probably present for other cognitive functions so the paper is likely to inspire numerous new studies,” says Moser.

Fiete sees these analyses and related studies carried out by colleagues at the Norwegian University of Science and Technology, Princeton University, the Weizmann Institute, and elsewhere as fundamental to the future of neural decoding studies.

With this approach, she explains, it is possible to extract abstract representations of the mind from the brain, potentially even thoughts and dreams.

“We’ve found that the brain deconstructs and represents complex things in the world with simple shapes,” explains Fiete. “Manifold-level analysis can help us to find those shapes, and they almost certainly exist beyond head direction circuits.”

Do thoughts have mass?

As part of our Ask the Brain series, we received the question, “Do thoughts have mass?” The following is a guest blog post by Michal De-Medonsa, technical associate and manager of the Jazayeri lab, who tapped into her background in philosophy to answer this intriguing question.

_____

Jazayeri lab manager (and philosopher) Michal De-Medonsa.

To answer the question, “Do thoughts have mass?” we must, like any good philosopher, define something that already has a definition – “thoughts.”

Logically, we can assert that thoughts are either metaphysical or physical (beyond that, we run out of options). If our definition of thought is metaphysical, it is safe to say that metaphysical thoughts do not have mass, since they are by definition not physical, and mass is a property of physical things. However, if we define a thought as a physical thing, it becomes a little trickier to determine whether or not it has mass.

A physical definition of thoughts falls into (at least) two subgroups – physical processes and physical parts. Take driving a car, for example – a parts definition describes the doors, motor, etc., and has mass. A process definition – a car being driven, turning the wheel, moving from point A to point B, etc. – does not have mass. The process of driving is a physical process that involves moving physical matter, but we wouldn’t say that the act of driving has mass. The car itself, however, is an example of physical matter, and as any cyclist in the city of Boston is well aware – cars have mass. It’s clear that if we define a thought as a process, it does not have mass, and if we define a thought as physical parts, it does have mass – so, which one is it? In order to resolve our issue, we have to be incredibly precise with our definition. Is a thought a process or parts? That is, is a thought more like driving or more like a car?

In order to resolve our issue, we have to be incredibly precise with our definition of the word thought.

Both physical definitions (process and parts) have merit. For a parts definition, we can look at what is required for a thought – neurons, electrical signals, neurochemicals, etc. This type of definition becomes quite imprecise and limiting. It doesn’t seem too problematic to say that the neurons, neurochemicals, etc. are themselves the thought, but this style of definition starts to fall apart when we try to include all the parts involved (e.g. blood flow, connective tissue, outside stimuli). When we look at a face, the stimuli received by the visual cortex are part of the thought – is the face part of a thought? When we look at our phone, is the phone itself part of a thought? A parts definition either needs an arbitrary limit, or we end up having to include all possible parts involved in the thought, ending up with an incredibly convoluted and effectively useless definition.

A process definition is more versatile and precise, and it allows us to include all the physical parts in a more elegant way. We can now say that all the moving parts are included in the process without saying that they themselves are the thought. That is, we can say blood flow is included in the process without saying that blood flow itself is part of the thought. It doesn’t sound ridiculous to say that a phone is part of the thought process. If we subscribe to the parts definition, however, we’re forced to say that part of the mass of a thought comes from the mass of a phone. A process definition allows us to be precise without being convoluted, and allows us to include outside influences without committing to absurd definitions.

Typical of a philosophical endeavor, we’re left with more questions and no simple answer. However, we can walk away with three conclusions.

  1. A process definition of “thought” allows for elegance and the involvement of factors outside the “vacuum” of our physical body; however, we lose out on some function by not describing a thought by its physical parts.
  2. The colloquial definition of “thought” breaks down once we invite a philosopher over to break it down, but this is to be expected – when we try to break something down, sometimes, it will break down. What we should be aware of is that if we want to use the word in a rigorous scientific framework, we need a rigorous scientific definition.
  3. Most importantly, it’s clear that we need to put a lot of work into defining exactly what we mean by “thought” – a job well suited to a scientifically-informed philosopher.

Michal De-Medonsa earned her bachelor’s degree in neuroscience and philosophy from Johns Hopkins University in 2012 and went on to receive her master’s degree in history and philosophy of science at the University of Pittsburgh in 2015. She joined the Jazayeri lab in 2018 as a lab manager/technician and spends most of her free time rock climbing, doing standup comedy, and woodworking at the MIT Hobby Shop. 

_____

Do you have a question for The Brain? Ask it here.

Brain region linked to altered social interactions in autism model

Although psychiatric disorders can be linked to particular genes, the brain regions and mechanisms underlying particular disorders are not well-understood. Mutations or deletions of the SHANK3 gene are strongly associated with autism spectrum disorder (ASD) and a related rare disorder called Phelan-McDermid syndrome. Mice with SHANK3 mutations also display some of the traits associated with autism, including avoidance of social interactions, but the brain regions responsible for this behavior have not been identified.

A new study by neuroscientists at MIT and colleagues in China provides clues to the neural circuits underlying social deficits associated with ASD. The paper, published in Nature Neuroscience, found that structural and functional impairments in the anterior cingulate cortex (ACC) of SHANK3 mutant mice are linked to altered social interactions.

“Neurobiological mechanisms of social deficits are very complex and involve many brain regions, even in a mouse model,” explains Guoping Feng, the James W. and Patricia T. Poitras Professor at MIT and one of the senior authors of the study. “These findings add another piece of the puzzle to mapping the neural circuits responsible for this social deficit in ASD models.”

The Nature Neuroscience paper is the result of a collaboration between Feng, who is also an investigator at MIT’s McGovern Institute and a senior scientist in the Broad Institute’s Stanley Center for Psychiatric Research, and Wenting Wang and Shengxi Wu at the Fourth Military Medical University, Xi’an, China.

Previous studies have implicated a number of brain regions in social interactions, including the prefrontal cortex (PFC) and its projections to the nucleus accumbens and habenula, but these studies failed to definitively link the PFC to the altered social interactions seen in SHANK3 knockout mice.

In the new study, the authors instead focused on the ACC, a brain region noted for its role in social functions in humans and animal models. The ACC is also known to play a role in fundamental cognitive processes, including cost-benefit calculation, motivation, and decision making.

In mice lacking SHANK3, the researchers found structural and functional disruptions at the synapses, or connections, between excitatory neurons in the ACC. The researchers went on to show that the loss of SHANK3 in excitatory ACC neurons alone was enough to disrupt communication between these neurons and led to unusually reduced activity of these neurons during behavioral tasks reflecting social interaction.

Having implicated these ACC neurons in social preferences and interactions in SHANK3 knockout mice, the authors then tested whether activating these same neurons could rescue these behaviors. Using optogenetics and specific drugs, the researchers activated the ACC neurons and found improved social behavior in the SHANK3 mutant mice.

“Next, we are planning to explore brain regions downstream of the ACC that modulate social behavior in normal mice and models of autism,” explains Wenting Wang, co-corresponding author on the study. “This will help us to better understand the neural mechanisms of social behavior, as well as social deficits in neurodevelopmental disorders.”

Previous clinical studies reported that anatomical structures in the ACC were altered and/or dysfunctional in people with ASD, an initial indication that the findings from SHANK3 mice may also hold true in these individuals.

The research was funded, in part, by the Natural Science Foundation of China. Guoping Feng was supported by NIMH grant no. MH097104, the Poitras Center for Psychiatric Disorders Research at the McGovern Institute at MIT, and the Hock E. Tan and K. Lisa Yang Center for Autism Research at the McGovern Institute at MIT.

Four new faces in the School of Science faculty

This fall, the School of Science will welcome four new members joining the faculty in the departments of Biology, Brain and Cognitive Sciences, and Chemistry.

Evelina Fedorenko investigates how our brains process language. She has developed novel analytic approaches for functional magnetic resonance imaging (fMRI) and other brain imaging techniques to help answer the questions of how the language processing network functions and how it relates to other networks in the brain. She works with both neurotypical individuals and individuals with brain disorders. Fedorenko joins the Department of Brain and Cognitive Sciences as an assistant professor. She received her BA from Harvard University in linguistics and psychology and then completed her doctoral studies at MIT in 2007. After graduating from MIT, Fedorenko worked as a postdoc and then as a research scientist at the McGovern Institute for Brain Research. In 2014, she joined the faculty at Massachusetts General Hospital and Harvard Medical School, where she was an associate researcher and an assistant professor, respectively. She is also a member of the McGovern Institute.

Morgan Sheng focuses on the structure, function, and turnover of synapses, the junctions that allow communication between brain cells. His discoveries have improved our understanding of the molecular basis of cognitive function and diseases of the nervous system, such as autism, Alzheimer’s disease, and dementia. Being both a physician and a scientist, he incorporates genetic as well as biological insights to aid the study and treatment of mental illnesses and neurodegenerative diseases. He rejoins the Department of Brain and Cognitive Sciences (BCS), returning as a professor of neuroscience, a position he also held from 2001 to 2008. At that time, he was a member of the Picower Institute for Learning and Memory, a joint appointee in the Department of Biology, and an investigator of the Howard Hughes Medical Institute. Sheng earned his PhD from Harvard University in 1990, completed a postdoc at the University of California at San Francisco in 1994, and finished his medical training with a residency in London in 1986. From 1994 to 2001, he researched molecular and cellular neuroscience at Massachusetts General Hospital and Harvard Medical School. From 2008 to 2019 he was vice president of neuroscience at Genentech, a leading biotech company. In addition to his faculty appointment in BCS, Sheng is core institute member and co-director of the Stanley Center for Psychiatric Research at the Broad Institute of MIT and Harvard, as well as an affiliate member of the McGovern Institute and the Picower Institute.

Seychelle Vos studies genome organization and its effect on gene expression at the intersection of biochemistry and genetics. Vos uses X-ray crystallography, cryo-electron microscopy, and biophysical approaches to understand how transcription is physically coupled to the genome’s organization and structure. She joins the Department of Biology as an assistant professor after completing a postdoc at the Max Planck Institute for Biophysical Chemistry. Vos received her BS in genetics in 2008 from the University of Georgia and her PhD in molecular and cell biology in 2013 from the University of California at Berkeley.

Xiao Wang is a chemist and molecular engineer working to improve our understanding of biology and human health. She focuses on brain function and dysfunction, producing and applying new chemical, biophysical, and genomic tools at the molecular level. Previously, she focused on RNA modifications and how they impact cellular function. Wang is joining MIT as an assistant professor in the Department of Chemistry. She was previously a postdoc of the Life Science Research Foundation at Stanford University. Wang received her BS in chemistry and molecular engineering from Peking University in 2010 and her PhD in chemistry from the University of Chicago in 2015. She is also a core member of the Broad Institute of MIT and Harvard.

Ed Boyden wins premier Royal Society honor

Edward S. Boyden, the Y. Eva Tan Professor in Neurotechnology at MIT, has been awarded the 2019 Croonian Medal and Lecture by the Royal Society. Twenty-four medals and awards are announced by the Royal Society each year, honoring exceptional researchers who are making outstanding contributions to science.

“The Royal Society gives an array of medals and awards to scientists who have done exceptional, ground-breaking work,” explained Sir Venki Ramakrishnan, President of the Royal Society. “This year, it is again a pleasure to see these awards bestowed on scientists who have made such distinguished and far-reaching contributions in their fields. I congratulate and thank them for their efforts.”

Boyden wins the medal and lecture in recognition of his research that is expanding our understanding of the brain. This includes his critical role in the development of optogenetics, a technique for controlling brain activity with light, and his invention of expansion microscopy. Croonian Medal laureates include notable luminaries of science and neurobiology.

“It is a great honor to be selected to receive this medal, especially since it was also given to people such as Santiago Ramon y Cajal, the founder of modern neuroscience,” says Boyden. “This award reflects the great work of many fantastic students, postdocs, and collaborators who I’ve had the privilege to work with over the years.”

The award includes an invitation to deliver the premier British lecture in the biological sciences, given annually at the Royal Society in London. At the lecture, the winner is awarded a medal and a gift of £10,000. This announcement comes shortly after Boyden was co-awarded the Warren Alpert Prize for his role in developing optogenetics.

History of the Croonian Medal and Lecture

William Croone, FRS. Photo credit: Royal College of Physicians, London

The lectureship was conceived by William Croone FRS, one of the original Fellows of the Society based in London. Among the papers left on his death in 1684 were plans to endow two lectureships, one at the Royal Society and the other at the Royal College of Physicians. His widow later bequeathed the means to carry out the scheme. The lecture series began in 1738.

Ed Boyden holds the titles of Investigator, McGovern Institute; Y. Eva Tan Professor in Neurotechnology at MIT; Leader, Synthetic Neurobiology Group, MIT Media Lab; Professor, Biological Engineering, Brain and Cognitive Sciences, MIT Media Lab; Co-Director, MIT Center for Neurobiological Engineering; Member, MIT Center for Environmental Health Sciences, Computational and Systems Biology Initiative, and Koch Institute.