Artificial neural networks model face processing in autism

Many of us easily recognize emotions expressed in others’ faces. A smile may mean happiness, while a frown may indicate anger. Autistic people often have a more difficult time with this task. It’s unclear why. But new research, published today in The Journal of Neuroscience, sheds light on the inner workings of the brain to suggest an answer. And it does so using a tool that opens new pathways to modeling the computation in our heads: artificial intelligence.

Researchers have primarily suggested two brain areas where the differences might lie. A region on the side of the primate (including human) brain called the inferior temporal (IT) cortex contributes to facial recognition. Meanwhile, a deeper region called the amygdala receives input from the IT cortex and other sources and helps process emotions.

Kohitij Kar, a research scientist in the lab of MIT Professor James DiCarlo, hoped to zero in on the answer. (DiCarlo, the Peter de Florez Professor in the Department of Brain and Cognitive Sciences, is a member of the McGovern Institute for Brain Research and director of MIT’s Quest for Intelligence.)

Kar began by looking at data provided by two other researchers: Shuo Wang, at Washington University in St. Louis, and Ralph Adolphs, at the California Institute of Technology. In one experiment, they showed images of faces to autistic adults and to neurotypical controls. The images had been generated by software to vary on a spectrum from fearful to happy, and the participants judged, quickly, whether the faces depicted happiness. Compared with controls, autistic adults required higher levels of happiness in the faces to report them as happy.

Modeling the brain

Kar, who is also a member of the Center for Brains, Minds and Machines, trained an artificial neural network, a complex mathematical function inspired by the brain’s architecture, to perform the same task. The network contained layers of units that roughly resemble biological neurons that process visual information. These layers process information as it passes from an input image to a final judgment indicating the probability that the face is happy. Kar found that the network’s behavior more closely matched the neurotypical controls than it did the autistic adults.
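The study itself used deep networks pretrained on large image sets; purely as an illustration of the idea, here is a minimal Python sketch, with made-up random weights and a stand-in "image," of how layered units turn an input into a probability that a face is happy:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Stand-in for a face image: a flat vector of 64 pixel intensities.
image = rng.random(64)

# Two layers of units, then a single decision node whose output is
# the probability that the face is happy. (Weights are random here;
# a real model's weights are learned from data.)
W1, b1 = 0.1 * rng.normal(size=(32, 64)), np.zeros(32)
W2, b2 = 0.1 * rng.normal(size=(16, 32)), np.zeros(16)
w_out = 0.1 * rng.normal(size=16)

h1 = relu(W1 @ image + b1)      # early layer
h2 = relu(W2 @ h1 + b2)         # final layer (the "IT-like" stage)
p_happy = sigmoid(w_out @ h2)   # decision node

print(f"P(happy) = {p_happy:.3f}")
```

In the real model the layers are convolutional and trained on the task; the sketch only shows the layered input-to-judgment structure described above.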

The network also served two more interesting functions. First, Kar could dissect it. He stripped off layers and retested its performance, measuring the difference between how well it matched controls and how well it matched autistic adults. This difference was greatest when the output was based on the last network layer. Previous work has shown that this layer in some ways mimics the IT cortex, which sits near the end of the primate brain’s ventral visual processing pipeline. Kar’s results implicate the IT cortex in differentiating neurotypical controls from autistic adults.

The other function is that the network can be used to select images that might be more efficient in autism diagnoses. If the difference between how closely the network matches neurotypical controls versus autistic adults is greater when judging one set of images versus another set of images, the first set could be used in the clinic to detect autistic behavioral traits. “These are promising results,” Kar says. Better models of the brain will come along, “but oftentimes in the clinic, we don’t need to wait for the absolute best product.”

Next, Kar evaluated the role of the amygdala. Again, he used data from Wang and colleagues. They had used electrodes to record the activity of neurons in the amygdala of people undergoing surgery for epilepsy as they performed the face task. The team found that they could predict a person’s judgment based on these neurons’ activity. Kar re-analyzed the data, this time controlling for the ability of the IT-cortex-like network layer to predict whether a face truly was happy. Now, the amygdala provided very little information of its own. Kar concludes that the IT cortex is the driving force behind the amygdala’s role in judging facial emotion.

Noisy networks

Finally, Kar trained separate neural networks to match the judgments of neurotypical controls and autistic adults. He looked at the strengths or “weights” of the connections between the final layers and the decision nodes. The weights in the network matching autistic adults, both the positive or “excitatory” and negative or “inhibitory” weights, were weaker than in the network matching neurotypical controls. This suggests that sensory neural connections in autistic adults might be noisy or inefficient.

To further test the noise hypothesis, which is popular in the field, Kar added various levels of fluctuation to the activity of the final layer in the network modeling autistic adults. Within a certain range, added noise greatly increased the similarity between its performance and that of the autistic adults. Adding noise to the control network did much less to improve its similarity to the control participants. This further suggests that sensory perception in autistic people may be the result of a so-called “noisy” brain.
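The noise manipulation can be sketched in miniature. The activations, readout weights, and noise levels below are invented for illustration; the only point is the mechanism: Gaussian fluctuations added to final-layer activity before the decision readout change the network's average judgment.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Invented final-layer activations for one face, plus a readout
# weight vector mapping that activity to a happiness judgment.
final_layer = rng.normal(size=16)
w_readout = rng.normal(size=16)

def mean_judgment(activations, noise_sd, n_trials=2000):
    """Average P(happy) across trials, with Gaussian noise of the given
    standard deviation added to the final layer before the readout."""
    noise = rng.normal(scale=noise_sd, size=(n_trials, activations.size))
    return sigmoid((activations + noise) @ w_readout).mean()

for sd in (0.0, 0.5, 1.0):
    print(f"noise sd={sd:.1f}: mean P(happy) = {mean_judgment(final_layer, sd):.3f}")
```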

Computational power

Looking forward, Kar sees several uses for computational models of visual processing. They can be further prodded, providing hypotheses that researchers might test in animal models. “I think facial emotion recognition is just the tip of the iceberg,” Kar says. They can also be used to select or even generate diagnostic content. Artificial intelligence could be used to generate content like movies and educational materials that optimally engages autistic children and adults. One might even tweak facial and other relevant pixels in what autistic people see in augmented reality goggles, work that Kar plans to pursue in the future.

Ultimately, Kar says, the work helps to validate the usefulness of computational models, especially image-processing neural networks. They formalize hypotheses and make them testable. Does one model or another better match behavioral data? “Even if these models are very far off from brains, they are falsifiable, rather than people just making up stories,” he says. “To me, that’s a more powerful version of science.”

Convenience-sized RNA editing

Last year, researchers at MIT’s McGovern Institute discovered and characterized Cas7-11, the first CRISPR enzyme capable of making precise, guided cuts to strands of RNA without harming cells in the process. Now, working with collaborators at the University of Tokyo, the same team has revealed that Cas7-11 can be shrunk to a more compact version, making it an even more viable option for editing the RNA inside living cells. The new, compact Cas7-11 was described today in the journal Cell along with a detailed structural analysis of the original enzyme.

“When we looked at the structure, it was clear there were some pieces that weren’t needed which we could actually remove,” says McGovern Fellow Omar Abudayyeh, who led the new work with McGovern Fellow Jonathan Gootenberg and collaborator Hiroshi Nishimasu from the University of Tokyo. “This makes the enzyme small enough that it fits into a single viral vector for therapeutic applications.”

The authors, who also include postdoctoral researcher Nathan Zhou from the McGovern Institute and Kazuki Kato from the University of Tokyo, see the new three-dimensional structure of Cas7-11 as a rich resource to answer questions about the basic biology of the enzyme and reveal other ways to tweak its function in the future.

Targeting RNA

McGovern Fellows Jonathan Gootenberg and Omar Abudayyeh in their lab. Photo: Caitlin Cunningham

Over the past decade, the CRISPR-Cas9 genome editing technology has given researchers the ability to modify the genes inside human cells—a boon for both basic research and the development of therapeutics to reverse disease-causing genetic mutations. But CRISPR-Cas9 only works to alter DNA, and for some research and clinical purposes, editing RNA is more effective or useful.

A cell retains its DNA for life, and passes an identical copy to daughter cells as it duplicates, so any changes to DNA are relatively permanent. However, RNA is a more transient molecule, transcribed from DNA and degraded not long after.

“There are lots of positives about being able to permanently change DNA, especially when it comes to treating an inherited genetic disease,” Gootenberg says. “But for an infection, an injury or some other temporary disease, being able to temporarily modify a gene through RNA targeting makes more sense.”

Until Abudayyeh, Gootenberg and their colleagues discovered and characterized Cas7-11, the only enzyme that could target RNA had a messy side effect: when it recognized a particular gene, the enzyme—Cas13—began cutting up all the RNA around it. This property makes Cas13 effective for diagnostic tests, where it is used to detect the presence of a piece of RNA, but not very useful for therapeutics, where targeted cuts are required.

The discovery of Cas7-11 opened the doors to a more precise form of RNA editing, analogous to the Cas9 enzyme for DNA. However, the massive Cas7-11 protein was too big to fit inside a single viral vector—the empty shell of a virus that researchers typically use to deliver gene editing machinery into patients’ cells.

Structural insight

To determine the overall structure of Cas7-11, Abudayyeh, Gootenberg and Nishimasu used cryo-electron microscopy, which shines beams of electrons on frozen protein samples and measures how the beams are transmitted. The researchers knew that Cas7-11 was like an amalgamation of five separate Cas enzymes, fused into one single gene, but were not sure exactly how those parts folded and fit together.

“The really fascinating thing about Cas7-11, from a fundamental biology perspective, is that it should be all these separate pieces that come together, but instead you have a fusion into one gene,” Gootenberg says. “We really didn’t know what that would look like.”

The structure of Cas7-11, caught in the act of binding both its target RNA strand and the guide RNA, which directs that binding, revealed how the pieces assembled and which parts of the protein were critical to recognizing and cutting RNA. This kind of structural insight is key to figuring out how to make Cas7-11 carry out targeted jobs inside human cells.

The structure also illuminated a section of the protein that wasn’t serving any apparent functional role. This finding suggested the researchers could remove it, re-engineering Cas7-11 to make it smaller without taking away its ability to target RNA. Abudayyeh and Gootenberg tested the impact of removing different bits of this section, resulting in a new compact version of the protein, dubbed Cas7-11S. With Cas7-11S in hand, they packaged the system inside a single viral vector, delivered it into mammalian cells and efficiently targeted RNA.

The team is now planning future studies on other proteins that interact with Cas7-11 in the bacteria that it originates from, and also hopes to continue working towards the use of Cas7-11 for therapeutic applications.

“Imagine you could have an RNA gene therapy, and when you take it, it modifies your RNA, but when you stop taking it, that modification stops,” Abudayyeh says. “This is really just the beginning of enabling that tool set.”

This research was funded, in part, by the McGovern Institute Neurotechnology Program, K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics in Neuroscience, G. Harold & Leila Y. Mathers Charitable Foundation, MIT John W. Jarve (1978) Seed Fund for Science Innovation, FastGrants, Basis for Supporting Innovative Drug Discovery and Life Science Research Program, JSPS KAKENHI, Takeda Medical Research Foundation, and Inamori Research Institute for Science.

A voice for change — in Spanish

Jessica Chomik-Morales had a bicultural childhood. She was born in Boca Raton, Florida, where her parents had come seeking a better education for their daughter than she would have access to in Paraguay. But when she wasn’t in school, Chomik-Morales was back in that small South American country with her family. One of the consequences of growing up in two cultures was an early interest in human behavior. “I was always in observer mode,” Chomik-Morales says, recalling how she would tune in to the nuances of social interactions in order to adapt and fit in.

Today, that fascination with human behavior is driving Chomik-Morales as she works with MIT professor of cognitive science Laura Schulz and Walter A. Rosenblith Professor of Cognitive Neuroscience and McGovern Institute for Brain Research investigator Nancy Kanwisher as a post-baccalaureate research scholar, using functional brain imaging to investigate how the brain recognizes and understands causal relationships. Since arriving at MIT last fall, she’s worked with study volunteers to collect functional MRI (fMRI) scans and used computational approaches to interpret the images. She’s also refined her own goals for the future.

Jessica Chomik-Morales (right) with postdoctoral associate Héctor De Jesús-Cortés. Photo: Steph Stevens

She plans to pursue a career in clinical neuropsychology, which will merge her curiosity about the biological basis of behavior with a strong desire to work directly with people. “I’d love to see what kind of questions I could answer about the neural mechanisms driving outlier behavior using fMRI coupled with cognitive assessment,” she says. And she’s confident that her experience in MIT’s two-year post-baccalaureate program will help her get there. “It’s given me the tools I need, and the techniques and methods and good scientific practice,” she says. “I’m learning that all here. And I think it’s going to make me a more successful scientist in grad school.”

The road to MIT

Chomik-Morales’s path to MIT was not a straightforward trajectory through the U.S. school system. When her mom, and later her dad, were unable to return to the U.S., she started eighth grade in the capital city of Asunción. It did not go well. She spent nearly every afternoon in the principal’s office, and soon her father was encouraging her to return to the United States. “You are an American,” he told her. “You have a right to the educational system there.”

Back in Florida, Chomik-Morales became a dedicated student, even while she worked assorted jobs and shuffled between the homes of families who were willing to host her. “I had to grow up,” she says. “My parents are sacrificing everything just so I can have a chance to be somebody. People don’t get out of Paraguay often, because there aren’t opportunities and it’s a very poor country. I was given an opportunity, and if I waste that, then that is disrespect not only to my parents, but to my lineage, to my country.”

As she graduated from high school and went on to earn a degree in cognitive neuroscience at Florida Atlantic University, Chomik-Morales found herself experiencing things that were completely foreign to her family. Though she spoke daily with her mom via WhatsApp, it was hard to share what she was learning in school or what she was doing in the lab. And while they celebrated her academic achievements, Chomik-Morales knew they didn’t really understand them. “Neither of my parents went to college,” she says. “My mom told me that she never thought twice about learning about neuroscience. She had this misconception that it was something that she would never be able to digest.”

Chomik-Morales believes that the wonders of neuroscience are for everybody. But she also knows that Spanish speakers like her mom have few opportunities to hear the kinds of accessible, engaging stories that might draw them in. So she’s working to change that. With support from the McGovern Institute and the National Science Foundation-funded Science and Technology Center for Brains, Minds, and Machines, Chomik-Morales is hosting and producing a weekly podcast called “Mi Última Neurona” (“My Last Neuron”), which brings conversations with neuroscientists to Spanish speakers around the world.

Listeners hear how researchers at MIT and other institutions are exploring big concepts like consciousness and neurodegeneration, and learn about the approaches they use to study the brain in humans, animals, and computational models. Chomik-Morales wants listeners to get to know neuroscientists on a personal level too, so she talks with her guests about their career paths, their lives outside the lab, and often, their experiences as immigrants in the United States.

After recording an interview with Chomik-Morales that delved into science, art, and the educational system in his home country of Peru, postdoc Arturo Deza thinks “Mi Última Neurona” has the potential to inspire Spanish speakers in Latin America, as well as immigrants in other countries. “Even if you’re not a scientist, it’s really going to captivate you and you’re going to get something out of it,” he says. To that point, Chomik-Morales’s mother has quickly become an enthusiastic listener, and even begun seeking out resources to learn more about the brain on her own.

Chomik-Morales hopes the stories her guests share on “Mi Última Neurona” will inspire a future generation of Hispanic neuroscientists. She also wants listeners to know that a career in science doesn’t have to mean leaving their country behind. “Gain whatever you need to gain from outside, and then, if it’s what you desire, you’re able to go back and help your own community,” she says. With “Mi Última Neurona,” she adds, she feels she is giving back to her roots.

How do illusions trick the brain?

As part of our Ask the Brain series, Jarrod Hicks, a graduate student in Josh McDermott’s lab, and Dana Boebinger, a postdoctoral researcher at the University of Rochester (and former graduate student in Josh McDermott’s lab), answer the question, “How do illusions trick the brain?”

_____

Graduate student Jarrod Hicks studies how the brain processes sound. Photo: M.E. Megan Hicks

Imagine you’re a detective. Your job is to visit a crime scene, observe some evidence, and figure out what happened. However, there are often multiple stories that could have produced the evidence you observe. Thus, to solve the crime, you can’t just rely on the evidence in front of you – you have to use your knowledge about the world to make your best guess about the most likely sequence of events. For example, if you discover cat hair at the crime scene, your prior knowledge about the world tells you it’s unlikely that a cat is the culprit. Instead, a more likely explanation is that the culprit might have a pet cat.

Although it might not seem like it, this kind of detective work is what your brain is doing all the time. As your senses send information to your brain about the world around you, your brain plays the role of detective, piecing together each bit of information to figure out what is happening in the world. The information from your senses usually paints a pretty good picture of things, but sometimes when this information is incomplete or unclear, your brain is left to fill in the missing pieces with its best guess of what should be there. This means that what you experience isn’t actually what’s out there in the world, but rather what your brain thinks is out there. The consequence of this is that your perception of the world can depend on your experience and assumptions.
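This "best guess" logic is, at bottom, Bayes' rule: weigh each explanation's prior plausibility against how well it accounts for the evidence. A toy Python calculation, with made-up numbers for the cat-hair example:

```python
# Made-up numbers, purely to illustrate the logic: the detective (and,
# on this view, the brain) combines prior beliefs with the likelihood
# of the evidence under each hypothesis.
prior = {"a cat did it": 0.001, "the culprit owns a cat": 0.30}
likelihood = {"a cat did it": 1.0, "the culprit owns a cat": 0.9}  # P(cat hair | hypothesis)

# Bayes' rule: the posterior is proportional to prior * likelihood.
unnormalized = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

for h, p in posterior.items():
    print(f"P({h} | cat hair) = {p:.3f}")
```

Even though "a cat did it" explains the hair perfectly, its tiny prior means the pet-owner explanation wins, just as in the story above.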

Optical illusions

Optical illusions are a great way of showing how our expectations and assumptions affect what we perceive. For example, look at the squares labeled “A” and “B” in the image below.

Checkershadow illusion. Image: Edward H. Adelson

Is one of them lighter than the other? Although most people would agree that the square labeled “B” is much lighter than the one labeled “A,” the two squares are actually the exact same color. You perceive the squares differently because your brain knows, from experience, that shadows tend to make things appear darker than what they actually are. So, despite the squares being physically identical, your brain thinks “B” should be lighter.

Auditory illusions

Tricks of perception are not limited to optical illusions. There are also several dramatic examples of how our expectations influence what we hear. For example, listen to the mystery sound below. What do you hear?

Mystery sound

Because you’ve probably never heard a sound quite like this before, your brain has very little idea about what to expect. So, although you clearly hear something, it might be very difficult to make out exactly what that something is. This mystery sound is something called sine-wave speech, and what you’re hearing is essentially a very degraded recording of someone speaking.

Now listen to a “clean” version of this speech in the audio clip below:

Clean speech

You probably hear a person saying, “the floor was quite slippery.” Now listen to the mystery sound above again. After listening to the original audio, your brain has a strong expectation about what you should hear when you listen to the mystery sound again. Even though you’re hearing the exact same mystery sound as before, you experience it completely differently. (Audio clips courtesy of University of Sussex).


Dana Boebinger describes the science of illusions in this McGovern Minute.

Subjective perceptions

These illusions have been specifically designed by scientists to fool your brain and reveal principles of perception. However, there are plenty of real-life situations in which your perceptions strongly depend on expectations and assumptions. For example, imagine you’re watching TV when someone begins to speak to you from another room. Because the noise from the TV makes it difficult to hear the person speaking, your brain might have to fill in the gaps to understand what’s being said. In this case, different expectations about what is being said could cause you to hear completely different things.

Which phrase do you hear?

Listen to the clip below to hear a repeating loop of speech. As the sound plays, try listening for different phrases and notice how what you hear changes.

Because the audio is somewhat ambiguous, the phrase you perceive depends on which phrase you listen for. So even though it’s the exact same audio each time, you can perceive something totally different! (Note: the original audio recording is from a football game in which the fans were chanting, “that is embarrassing!”)

Illusions like the ones above are great reminders of how subjective our perceptions can be. In order to make sense of the messy information coming in from our senses, our brains are constantly trying to fill in the blanks with their best guess of what’s out there. Because of this guesswork, our perceptions depend on our experiences, leading each of us to perceive and interact with the world in a way that’s uniquely ours.

Jarrod Hicks is a PhD candidate in the Department of Brain and Cognitive Sciences at MIT working with Josh McDermott in the Laboratory for Computational Audition. He studies sound segregation, a key aspect of real-world hearing in which a sound source of interest is estimated amid a mixture of competing sources. He is broadly interested in teaching/outreach, psychophysics, computational approaches to represent stimulus spaces, and neural coding of high-level sensory representations.

_____

Do you have a question for The Brain? Ask it here.

Seven from MIT elected to American Academy of Arts and Sciences for 2022

Seven MIT faculty members are among more than 250 leaders from academia, the arts, industry, public policy, and research elected to the American Academy of Arts and Sciences, the academy announced Thursday.

One of the nation’s most prestigious honorary societies, the academy is also a leading center for independent policy research. Members contribute to academy publications, as well as studies of science and technology policy, energy and global security, social policy and American institutions, the humanities and culture, and education.

Those elected from MIT this year are:

  • Alberto Abadie, professor of economics and associate director of the Institute for Data, Systems, and Society
  • Regina Barzilay, the School of Engineering Distinguished Professor for AI and Health
  • Roman Bezrukavnikov, professor of mathematics
  • Michale S. Fee, the Glen V. and Phyllis F. Dorflinger Professor and head of the Department of Brain and Cognitive Sciences
  • Dina Katabi, the Thuan and Nicole Pham Professor
  • Ronald T. Raines, the Roger and Georges Firmenich Professor of Natural Products Chemistry
  • Rebecca R. Saxe, the John W. Jarve Professor of Brain and Cognitive Sciences

“We are celebrating a depth of achievements in a breadth of areas,” says David Oxtoby, president of the American Academy. “These individuals excel in ways that excite us and inspire us at a time when recognizing excellence, commending expertise, and working toward the common good is absolutely essential to realizing a better future.”

Since its founding in 1780, the academy has elected leading thinkers from each generation, including George Washington and Benjamin Franklin in the 18th century, Maria Mitchell and Daniel Webster in the 19th century, and Toni Morrison and Albert Einstein in the 20th century. The current membership includes more than 250 Nobel and Pulitzer Prize winners.

What words can convey

From search engines to voice assistants, computers are getting better at understanding what we mean. That’s thanks to language processing programs that make sense of a staggering number of words, without ever being told explicitly what those words mean. Such programs infer meaning instead through statistics—and a new study reveals that this computational approach can assign many kinds of information to a single word, just like the human brain.

The study, published April 14, 2022, in the journal Nature Human Behaviour, was co-led by Gabriel Grand, a graduate student at MIT’s Computer Science and Artificial Intelligence Laboratory, and Idan Blank, an assistant professor at the University of California, Los Angeles, and supervised by McGovern Investigator Ev Fedorenko, a cognitive neuroscientist who studies how the human brain uses and understands language, and Francisco Pereira at the National Institute of Mental Health. Fedorenko says the rich knowledge her team was able to find within computational language models demonstrates just how much can be learned about the world through language alone.

Early language models

The research team began its analysis of statistics-based language processing models in 2015, when the approach was new. Such models derive meaning by analyzing how often pairs of words co-occur in texts and using those relationships to assess the similarities of words’ meanings. For example, such a program might conclude that “bread” and “apple” are more similar to one another than they are to “notebook,” because “bread” and “apple” are often found in proximity to words like “eat” or “snack,” whereas “notebook” is not.
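As a toy illustration of this approach (a six-sentence corpus instead of billions of words, and raw co-occurrence counts instead of trained embeddings), counting which words appear together already separates "bread" and "apple" from "notebook":

```python
import numpy as np
from itertools import combinations

# Tiny toy corpus; a real model would be trained on billions of words.
corpus = [
    "eat bread for a snack", "eat an apple as a snack",
    "bread is food to eat", "an apple is food to eat",
    "write in a notebook", "a notebook for notes",
]

vocab = sorted({w for line in corpus for w in line.split()})
index = {w: i for i, w in enumerate(vocab)}

# Count how often each pair of words co-occurs in the same sentence.
counts = np.zeros((len(vocab), len(vocab)))
for line in corpus:
    for a, b in combinations(line.split(), 2):
        counts[index[a], index[b]] += 1
        counts[index[b], index[a]] += 1

def similarity(w1, w2):
    """Cosine similarity of two words' co-occurrence vectors."""
    u, v = counts[index[w1]], counts[index[w2]]
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

print(similarity("bread", "apple"))     # high: shared contexts like "eat", "snack"
print(similarity("bread", "notebook"))  # low: mostly disjoint contexts
```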

The models were clearly good at measuring words’ overall similarity to one another. But most words carry many kinds of information, and their similarities depend on which qualities are being evaluated. “Humans can come up with all these different mental scales to help organize their understanding of words,” explains Grand, a former undergraduate researcher in the Fedorenko lab. For example, he says, “dolphins and alligators might be similar in size, but one is much more dangerous than the other.”

Grand and Idan Blank, who was then a graduate student at the McGovern Institute, wanted to know whether the models captured that same nuance. And if they did, how was the information organized?

To learn how the information in such a model stacked up to humans’ understanding of words, the team first asked human volunteers to score words along many different scales: Were the concepts those words conveyed big or small, safe or dangerous, wet or dry? Then, having mapped where people position different words along these scales, they looked to see whether language processing models did the same.

Grand explains that distributional semantic models use co-occurrence statistics to organize words into a huge, multidimensional matrix. The more similar words are to one another, the closer they are within that space. The dimensions of the space are vast, and there is no inherent meaning built into its structure. “In these word embeddings, there are hundreds of dimensions, and we have no idea what any dimension means,” he says. “We’re really trying to peer into this black box and say, ‘is there structure in here?’”

Word vectors in the category ‘animals’ (blue circles) are orthogonally projected (light-blue lines) onto the feature subspace for ‘size’ (red line), defined as the vector difference between the embeddings of ‘large’ and ‘small’ (red circles). The three dimensions in this figure are arbitrary and were chosen via principal component analysis to enhance visualization (the original GloVe word embedding has 300 dimensions, and the projection happens in that space). Image: Fedorenko lab

Specifically, they asked whether the semantic scales they had asked their volunteers to use were represented in the model. So they looked to see where words in the space lined up along vectors defined by the extremes of those scales. Where did dolphins and tigers fall on a line from “big” to “small,” for example? And were they closer together along that line than they were on a line representing danger (“safe” to “dangerous”)?
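The projection itself is simple linear algebra. In this sketch the 3-D "embeddings" are invented for illustration (real GloVe vectors have 300 dimensions), but the operation is the one described: project each word vector onto the difference between the two poles of a scale.

```python
import numpy as np

# Invented 3-D "embeddings"; real GloVe vectors have 300 dimensions,
# but the projection works the same way in any dimension.
vec = {
    "big":       np.array([ 1.0,  0.1, 0.0]),
    "small":     np.array([-1.0,  0.1, 0.0]),
    "safe":      np.array([ 0.0,  1.0, 0.2]),
    "dangerous": np.array([ 0.0, -1.0, 0.2]),
    "dolphin":   np.array([ 0.6,  0.7, 0.3]),
    "tiger":     np.array([ 0.7, -0.8, 0.1]),
}

def position(word, lo, hi):
    """Scalar position of `word` along the axis running from `lo` to `hi`."""
    axis = vec[hi] - vec[lo]
    return vec[word] @ axis / np.linalg.norm(axis)

size_dolphin = position("dolphin", "small", "big")
size_tiger = position("tiger", "small", "big")
danger_dolphin = position("dolphin", "safe", "dangerous")
danger_tiger = position("tiger", "safe", "dangerous")

print(f"size:   dolphin={size_dolphin:+.2f}, tiger={size_tiger:+.2f}")  # similar
print(f"danger: dolphin={danger_dolphin:+.2f}, tiger={danger_tiger:+.2f}")  # far apart
```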

Across more than 50 sets of word categories and semantic scales, they found that the model had organized words very much like the human volunteers. Dolphins and tigers were judged to be similar in terms of size, but far apart on scales measuring danger or wetness. The model had organized the words in a way that represented many kinds of meaning—and it had done so based entirely on the words’ co-occurrences.

That, Fedorenko says, tells us something about the power of language. “The fact that we can recover so much of this rich semantic information from just these simple word co-occurrence statistics suggests that this is one very powerful source of learning about things that you may not even have direct perceptual experience with.”

Three from MIT awarded 2022 Paul and Daisy Soros Fellowships for New Americans

MIT graduate student Fernanda De La Torre, alumna Trang Luu ’18, SM ’20, and senior Syamantak Payra are recipients of the 2022 Paul and Daisy Soros Fellowships for New Americans.

De La Torre, Luu, and Payra are among 30 New Americans selected from a pool of over 1,800 applicants. The fellowship honors the contributions of immigrants and children of immigrants by providing $90,000 in funding for graduate school.

Students interested in applying to the P.D. Soros Fellowship for future years may contact Kim Benard, associate dean of distinguished fellowships in Career Advising and Professional Development.

Fernanda De La Torre

Fernanda De La Torre is a PhD student in the Department of Brain and Cognitive Sciences. With Professor Josh McDermott, she studies how we integrate vision and sound, and with Professor Robert Yang, she develops computational models of imagination.

De La Torre spent her early childhood with her younger sister and grandmother in Guadalajara, Mexico. At age 12, she crossed the Mexican border to reunite with her mother in Kansas City, Missouri. Shortly after, an abusive home environment forced De La Torre to leave her family and support herself throughout her early teens.

Despite her difficult circumstances, De La Torre excelled academically in high school. By winning various scholarships that would discreetly accept applications from undocumented students, she was able to continue her studies in computer science and mathematics at Kansas State University. There, she became intrigued by the mysteries of the human mind. During college, De La Torre received invaluable mentorship from her former high school principal, Thomas Herrera, who helped her become documented through the Violence Against Women Act. Her college professor, William Hsu, supported her interests in artificial intelligence and encouraged her to pursue a scientific career.

After her undergraduate studies, De La Torre won a post-baccalaureate fellowship from the Department of Brain and Cognitive Sciences at MIT, where she worked with Professor Tomaso Poggio on the theory of deep learning. She then transitioned into the department’s PhD program. Beyond contributing to scientific knowledge, De La Torre plans to use science to create spaces where all people, including those from backgrounds like her own, can innovate and thrive.

She says: “Immigrants face many obstacles, but overcoming them gives us a unique strength: We learn to become resilient, while relying on friends and mentors. These experiences foster both the desire and the ability to pay it forward to our community.”

Trang Luu

Trang Luu graduated from MIT with a BS in mechanical engineering in 2018, and a master of engineering degree in 2020. Her Soros award will support her graduate studies at Harvard University in the MBA/MS engineering sciences program.

Born in Saigon, Vietnam, Luu was 3 when her family immigrated to Houston, Texas. Watching her parents’ efforts to make a living in a land where they did not understand the culture or speak the language well, Luu wanted to alleviate hardship for her family. She took full responsibility for her education and found mentors to help her navigate the American education system. At home, she assisted her family in making and repairing household items, which fueled her excitement for engineering.

As an MIT undergraduate, Luu focused on assistive technology projects, applying her engineering background to solve problems impeding daily living. These projects included a new adaptive socket liner for below-the-knee amputees in Kenya, Ethiopia, and Thailand; a walking stick adapter for wheelchairs; a computer head pointer for patients with limited arm mobility; a safer makeshift cook stove design for street vendors in South Africa; and a quicker method to test new drip irrigation designs. As a graduate student in MIT D-Lab under the direction of Professor Daniel Frey, Luu was awarded a National Science Foundation Graduate Research Fellowship. In her graduate studies, Luu researched methods to improve evaporative cooling devices for off-grid farmers to reduce rapid fruit and vegetable deterioration.

These projects strengthened Luu’s commitment to innovating new technology and devices for people struggling with basic daily tasks. During her senior year, Luu collaborated on developing a working prototype of a wearable device that noninvasively reduces hand tremors associated with Parkinson’s disease or essential tremor. Observing patients’ joy after their tremors stopped compelled Luu and three co-founders to continue developing the device after college. Four years later, Encora Therapeutics has accomplished major milestones, including Breakthrough Device designation by the U.S. Food and Drug Administration.

Syamantak Payra

Hailing from Houston, Texas, Syamantak Payra is a senior majoring in electrical engineering and computer science, with minors in public policy and entrepreneurship and innovation. He will be pursuing a PhD in engineering at Stanford University, with the goal of creating new biomedical devices that can help improve daily life for patients worldwide and enhance health care outcomes for decades to come.

Payra’s parents had emigrated from India, and he grew up immersed in his grandparents’ rich Bengali culture. As a high school student, he conducted projects with NASA engineers at Johnson Space Center, experimented at home with his scientist parents, and competed in spelling bees and science fairs across the United States. Through these avenues and activities, Syamantak not only gained perspectives on bridging gaps between people, but also found passions for language, scientific discovery, and teaching others.

After watching his grandmother struggle with asthma and chronic obstructive pulmonary disease and losing his baby brother to brain cancer, Payra devoted himself to trying to use technology to solve health-care challenges. Payra’s proudest accomplishments include building a robotic leg brace for his paralyzed teacher and conducting free literacy workshops and STEM outreach programs that reached nearly a thousand underprivileged students across the Greater Houston Area.

At MIT, Payra has worked in Professor Yoel Fink’s research laboratory, creating digital sensor fibers that have been woven into intelligent garments that can assist in diagnosing illnesses, and in Professor Joseph Paradiso’s research laboratory, where he contributed to next-generation spacesuit prototypes that better protect astronauts on spacewalks. Payra’s research has been published in multiple scientific journals, and he was inducted into the National Gallery for America’s Young Inventors.

Clinical trials bring first CRISPR-based therapies to patients

Nearly ten years ago, Feng Zhang and other pioneering scientists developed CRISPR, a revolutionary technology that quickly became biologists’ preferred method of editing DNA. Biologists, computer scientists, and engineers in Zhang’s lab are continuing to explore natural CRISPR systems and expand researchers’ gene-editing toolkit. But for their long-term goal of using those tools to improve health, clinical collaboration is essential.

Clinical trials are rarely led by academic researchers; licensing agreements and partnerships with industry are usually essential to transform laboratory findings into advances that impact patients. Editas Medicine, a company co-founded by Zhang, aims to use CRISPR to correct disease-causing genetic errors inside patient cells—and two of Editas’s experimental CRISPR-based therapies have reached clinical trials.

One is a treatment for sickle cell anemia, a disorder in which a single genetic mutation disrupts the production of hemoglobin, creating misshapen red blood cells that can’t carry oxygen efficiently. With CRISPR, that mutation can be corrected in stem cells isolated from a patient’s blood. The CRISPR-modified cells are then returned to the patient, where they are expected to generate healthy red blood cells. The same strategy may also be effective for treating another inherited blood disorder, transfusion-dependent beta thalassemia.

Editas is pursuing a similar strategy to correct the mutation that causes Leber congenital amaurosis, an inherited form of blindness—but in that case, the CRISPR-based therapy is delivered directly to cells inside the body. The experimental treatment uses a viral vector to introduce CRISPR to the retina of the eye, where a gene mutation impairs the function of light-sensitive photoreceptors. Clinical trial participants received their first treatments in 2020, and in 2021, the company announced that some patients had experienced improvements to their vision.

Unexpected synergy

This story originally appeared in the Spring 2022 issue of BrainScan.

***

Recent results from cognitive neuroscientist Nancy Kanwisher’s lab have left her pondering the role of music in human evolution. “Music is this big mystery,” she says. “Every human society that’s been studied has music. No other animals have music in the way that humans do. And nobody knows why humans have music at all. This has been a puzzle for centuries.”

MIT neuroscientist and McGovern Investigator Nancy Kanwisher. Photo: Jussi Puikkonen/KNAW

Some biologists and anthropologists have reasoned that since there’s no clear evolutionary advantage for humans’ unique ability to create and respond to music, these abilities must have emerged when humans began to repurpose other brain functions. To appreciate song, they’ve proposed, we draw on parts of the brain dedicated to speech and language. It makes sense, Kanwisher says: music and language are both complex, uniquely human ways of communicating. “It’s very sensible to think that there might be common machinery,” she says. “But there isn’t.”

That conclusion is based on her team’s 2015 discovery of neurons in the human brain that respond only to music. They first became clued in to these music-sensitive cells when they asked volunteers to listen to a diverse panel of sounds inside an MRI scanner. Functional brain imaging picked up signals suggesting that some neurons were specialized to detect only music, but the broad map of brain activity generated by fMRI couldn’t pinpoint those cells.

Singing in the brain

Kanwisher’s team wanted to know more, but neuroscientists who study the human brain can’t always probe its circuitry with the exactitude of their colleagues who study the brains of mice or rats. They can’t insert electrodes into human brains to monitor the neurons they’re interested in. Neurosurgeons, however, sometimes do — and thus, collaborating with neurosurgeons has created unique opportunities for Kanwisher and other McGovern investigators to learn about the human brain.

Kanwisher’s team collaborated with clinicians at Albany Medical Center to work with patients undergoing monitoring prior to surgical treatment for epilepsy. Before operating, a neurosurgeon must identify the spot in their patient’s brain that is triggering seizures. This means inserting electrodes into the brain to monitor specific areas over a few days or weeks. The electrodes they implant pinpoint activity far more precisely, both spatially and temporally, than an MRI. And with patients’ permission, researchers like Kanwisher can take advantage of the information they collect.

“The intracranial recording from human brains that’s possible from collaboration with neurosurgeons is extremely precious to us,” Kanwisher says. “All of the research is kind of opportunistic, on whatever the surgeons are doing for clinical reasons. But sometimes we get really lucky and the electrodes are right in an area where we have long-standing scientific questions that those data can answer.”

Song-selective neural population (yellow) in the “inflated” human brain. Image: Sam Norman-Haignere

The unexpected discovery of song-specific neurons, led by postdoctoral researcher Sam Norman-Haignere, who is now an assistant professor at the University of Rochester Medical Center, emerged from such a collaboration. The team worked with patients at Albany Medical Center whose presurgical monitoring encompassed the auditory-processing part of the brain that they were curious about. Sure enough, certain electrodes picked up activity only when patients were listening to music. The data indicated that in some of those locations, it didn’t matter what kind of music was playing: the cells fired in response to a range of sounds that included flute solos, heavy metal, and rap. But other locations became active exclusively in response to vocal music. “We did not have that hypothesis at all,” Kanwisher says. “It really took our breath away.”

When that discovery is considered along with findings from McGovern colleague Ev Fedorenko, who has shown that the brain’s language-processing regions do not respond to music, Kanwisher says it’s now clear that music and language are segregated in the human brain. The origins of our unique appreciation for music, however, remain a mystery.

Clinical advantage

Clinical collaborations are also important to researchers in Ann Graybiel’s lab, who rely largely on model organisms like mice and rats to investigate the fine details of neural circuits. Working with clinicians helps keep them focused on answering questions that matter to patients.

In studying how the brain makes decisions, the Graybiel lab has zeroed in on connections that are vital for making choices that carry both positive and negative consequences. This is the kind of decision-making that you might call on when considering whether to accept a job that pays more but will be more demanding than your current position, for example. In experiments with rats, mice, and monkeys, they’ve identified different neurons dedicated to triggering opposing actions, “approach” or “avoid,” in these complex decision-making tasks. They’ve also found evidence that both age and stress change how the brain deals with these kinds of decisions.

In work led by former Graybiel lab research scientist Ken-ichi Amemori, the team has worked with psychiatrist Diego Pizzagalli at McLean Hospital to learn what happens in the human brain when people make these complex decisions.

By monitoring brain activity as people made decisions inside an MRI scanner, the team identified regions that lit up when people chose to “approach” or “avoid.” They also found parallel activity patterns in monkeys that performed the same task, supporting the relevance of animal studies to understanding this circuitry.

In people diagnosed with major depression, however, the brain responded to approach-avoidance conflict somewhat differently. Certain areas were not activated as strongly as they were in people without depression, regardless of whether subjects ultimately chose to “approach” or “avoid.” The team suspects that some of these differences might reflect a stronger tendency toward avoidance, in which potential rewards are less influential for decision-making, while an individual is experiencing major depression.

The brain activity associated with approach-avoidance conflict in humans appears to align with what Graybiel’s team has seen in mice, although clinical imaging cannot reveal nearly as much detail about the involved circuits. Graybiel says that gives her confidence that what they are learning in the lab, where they can manipulate and study neural circuits with precision, is important. “I think there’s no doubt that this is relevant to humans,” she says. “I want to get as far into the mechanisms as possible, because maybe we’ll hit something that’s therapeutically valuable, or maybe we will really get an intuition about how parts of the brain work. I think that will help people.”

Developing brain needs cannabinoid receptors after birth

Doctors warn that marijuana use during pregnancy may have harmful effects on the development of a fetus, in part because the cannabinoid receptors activated by the drug are known to be critical for enabling a developing brain to wire up properly. Now, scientists at MIT’s McGovern Institute have learned that cannabinoid receptors’ critical role in brain development does not end at birth.

In today’s online issue of the journal eNeuro, scientists led by McGovern investigator Ann Graybiel report that mice need the cannabinoid receptor CB1R to establish connections within the brain’s dopamine system that take shape soon after birth. The finding raises concern that marijuana use by nursing moms, who pass the CB1R-activating compound THC to their infants when they breastfeed, might interfere with brain development by disrupting cannabinoid signaling.

“This is a real change to one of the truly important systems in the brain—a major controller of our dopamine,” Graybiel says. Dopamine exerts a powerful influence over our motivations and behavior, and changes to the dopamine system contribute to disorders from Parkinson’s disease to addiction. Thus, the researchers say, it is vital to understand whether postnatal drug exposure might put developing dopamine circuits at risk.

Brain bouquets

Cannabinoid receptors in the brain are important mediators of mood, memory, and pain. Graybiel’s lab became interested in CB1R due to its dysregulation in Huntington’s and Parkinson’s diseases, both of which impair the brain’s ability to control movement and other functions. While investigating the receptor’s distribution in the brain, they discovered that in adult mice, CB1R is abundant within small compartments in the striatum called striosomes. The receptor was particularly concentrated within the neurons that connect striosomes to a dopamine-rich area of the brain called the substantia nigra, via structures that Graybiel’s team has dubbed striosome-dendron bouquets.

Striosome-dendron bouquets are easy to overlook within the densely connected network of the brain. But when the cells that make up the bouquets are labeled with a fluorescent protein, the bouquets become visible—and their appearance is striking, says Jill Crittenden, a research scientist in Graybiel’s lab.

Striosomal neurons form these bouquets by reaching into the substantia nigra, whose cells use dopamine to influence movement, motivation, learning, and habit formation. Clusters of dopamine-producing neurons form dendrites there that intertwine tightly with incoming axons from the striosomal neurons. The resulting structures, whose intimately associated cells resemble the bundled stems of a floral bouquet, establish so many connections that they give striosomal neurons potent control over dopamine signaling.

By tracking the bouquets’ emergence in newborn mice, Graybiel’s team found that they form in the first week after birth, a period during which striosomal neurons are ramping up production of CB1R. Mice genetically engineered to lack CB1R, however, can’t make these elaborate but orderly bouquets. Without the receptor, fibers from striosomes extend into the substantia nigra, but fail to form the tightly intertwined “bouquet stems” that facilitate extensive connections with their targets. This disorganized structure is apparent as soon as bouquets arise in the brains of young pups and persists into adulthood. “There aren’t those beautiful, strong fibers anymore,” Crittenden says. “This suggests that those very strong controllers over the dopamine system function abnormally when you interfere with cannabinoid signaling.”

The finding was a surprise. Without zeroing in on striosome-dendron bouquets, it would be easy to miss CB1R’s impact on the dopamine system, Crittenden says. Plus, she adds, prior studies of the receptor’s role in development largely focused on fetal development. The new findings reveal that the cannabinoid system continues to guide the formation of brain circuits after birth.

Graybiel notes that funds from generous donors, including the Broderick Fund for Phytocannabinoid Research at MIT, the Saks Kavanaugh Foundation, the Kristin R. Pressman and Jessica J. Pourian ‘13 Fund, Mr. Robert Buxton, and the William N. & Bernice E. Bumpus Foundation, enabled her team’s studies of CB1R’s role in shaping striosome-dendron bouquets.

Now that they have shown that CB1R is needed for postnatal brain development, it will be important to determine the consequences of disrupting cannabinoid signaling during this critical period—including whether passing THC to a nursing baby impacts the brain’s dopamine system.