Nidhi Seethapathi was first drawn to using powerful yet simple models to understand elaborate patterns when she learned about Newton’s laws of motion as a high school student in India. She was fascinated by the idea that wonderfully complex behaviors can arise from a set of objects that follow a few elementary rules.
Now an assistant professor at MIT, Seethapathi seeks to capture the intricacies of movement in the real world, using computational modeling as well as input from theory and experimentation. “[Theoretical physicist and Nobel laureate] Richard Feynman ’39 once said, ‘What I cannot create, I do not understand,’” Seethapathi says. “In that same spirit, the way I try to understand movement is by building models that move the way we do.”
Models of locomotion in the real world
Seethapathi—who holds a shared faculty position between the Department of Brain and Cognitive Sciences and the Department of Electrical Engineering and Computer Science’s Faculty of Artificial Intelligence + Decision-Making, which is housed in the Schwarzman College of Computing and the School of Engineering—recalls a moment during her undergraduate years studying mechanical engineering in Mumbai when a professor asked students to pick an aspect of movement to examine in detail. While most of her peers chose to analyze machines, Seethapathi selected the human hand. She was astounded by its versatility, she says, and by the number of variables, referred to by scientists as “degrees of freedom,” that are needed to characterize routine manual tasks. The assignment made her realize that she wanted to explore the diverse ways in which the entire human body can move.
Also an investigator at the McGovern Institute for Brain Research, Seethapathi pursued graduate research at The Ohio State University Movement Lab, where her goal was to identify the key elements of human locomotion. At that time, most people in the field were analyzing simple movements, she says, “but I was interested in broadening the scope of my models to include real-world behavior. Given that movement is so ubiquitous, I wondered: What can this model say about everyday life?”
After earning her PhD from Ohio State in 2018, Seethapathi continued this line of research as a postdoctoral fellow at the University of Pennsylvania. New computer vision tools to track human movement from video footage had just entered the scene, and during her time at UPenn, Seethapathi sought to expand her skill set to include computer vision and applications to movement rehabilitation.
At MIT, Seethapathi continues to extend the range of her studies of human movement, looking at how locomotion can evolve as people grow and age, and how it can adapt to anatomical changes and even adjust to shifts in weather, which can alter ground conditions. Her investigations now encompass other species as part of an effort to determine how creatures with different morphologies and habitats regulate their movements.
The models Seethapathi and her team create make predictions about human movements that can later be verified or refuted by empirical tests. While relatively simple experiments can be carried out on treadmills, her group is developing measurement systems incorporating wearable sensors and video-based sensing to measure movement data that have traditionally been hard to obtain outside the laboratory.
Although Seethapathi says she is primarily driven to uncover the fundamental principles that govern movement behavior, she believes her work also has practical applications.
“When people are treated for a movement disorder, the goal is to impact their movements in the real world,” she says. “We can use our predictive models to see how a particular intervention will affect a person’s trajectory. The hope is that our models can help put the individual on the right track to recovery as early as possible.”
Eight MIT faculty members are among more than 250 leaders from academia, the arts, industry, public policy, and research elected to the American Academy of Arts and Sciences, the academy announced April 19.
One of the nation’s most prestigious honorary societies, the academy is also a leading center for independent policy research. Members contribute to academy publications, as well as studies of science and technology policy, energy and global security, social policy and American institutions, the humanities and culture, and education.
Those elected from MIT in 2023 are:
Arnaud Costinot, professor of economics;
James J. DiCarlo, Peter de Florez Professor of Brain and Cognitive Sciences, director of the MIT Quest for Intelligence, and McGovern Institute Investigator;
Piotr Indyk, the Thomas D. and Virginia W. Cabot Professor of Electrical Engineering and Computer Science;
Senthil Todadri, professor of physics;
Evelyn N. Wang, Ford Professor of Engineering (on leave) and director of the Department of Energy’s Advanced Research Projects Agency-Energy;
Boleslaw Wyslouch, professor of physics and director of the Laboratory for Nuclear Science and Bates Research and Engineering Center;
Yukiko Yamashita, professor of biology and core member of the Whitehead Institute; and
Wei Zhang, professor of mathematics.
“With the election of these members, the academy is honoring excellence, innovation, and leadership and recognizing a broad array of stellar accomplishments. We hope every new member celebrates this achievement and joins our work advancing the common good,” says David W. Oxtoby, president of the academy.
Since its founding in 1780, the academy has elected leading thinkers from each generation, including George Washington and Benjamin Franklin in the 18th century, Maria Mitchell and Daniel Webster in the 19th century, and Toni Morrison and Albert Einstein in the 20th century. The current membership includes more than 250 Nobel and Pulitzer Prize winners.
Real-time feedback about brain activity can help adolescents with depression or anxiety quiet their minds, according to a new study from MIT scientists. The researchers, led by McGovern research affiliate Susan Whitfield-Gabrieli, have used functional magnetic resonance imaging (fMRI) to show patients what’s happening in their brain as they practice mindfulness inside the scanner and to encourage them to focus on the present. They report in the journal Molecular Psychiatry that doing so settles down neural networks that are associated with symptoms of depression.
McGovern research affiliate Susan Whitfield-Gabrieli in the Martinos Imaging Center.
“We know this mindfulness meditation is really good for kids and teens, and we think this real-time fMRI neurofeedback is really a way to engage them and provide a visual representation of how they’re doing,” says Whitfield-Gabrieli. “And once we train people how to do mindfulness meditation, they can do it on their own at any time, wherever they are.”
The approach could be a valuable tool to alleviate or prevent depression in young people, which has been on the rise in recent years and escalated alarmingly during the Covid-19 pandemic. “This has gone from bad to catastrophic, in my perspective,” Whitfield-Gabrieli says. “We have to think out of the box and come up with some really innovative ways to help.”
Default mode network
Mindfulness meditation, in which practitioners focus their awareness on the present moment, can modulate activity within the brain’s default mode network, which is so named because it is most active when a person is not focused on any particular task. Two hubs within the default mode network, the medial prefrontal cortex and the posterior cingulate cortex, are of particular interest to Whitfield-Gabrieli and her colleagues, due to a potential role in the symptoms of depression and anxiety.
“These two core hubs are very engaged when we’re thinking about the past or the future and we’re not really engaged in the present moment,” she explains. “If we’re in a healthy state of mind, we may be reminiscing about the past or planning for the future. But if we’re depressed, that reminiscing may turn into rumination or obsessively rehashing the past. If we’re particularly anxious, we may be obsessively worrying about the future.”
Whitfield-Gabrieli explains that these key hubs are often hyperconnected in people with anxiety and depression. The more tightly correlated the activity of the two regions, the worse a person’s symptoms are likely to be. Mindfulness, she says, can help interrupt that hyperconnectivity.
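In practice, hyperconnectivity of this kind is often quantified simply as the correlation between the two regions’ activity over time. Below is a minimal, hypothetical sketch of that measure in Python — not the study’s actual analysis pipeline — assuming the two regional fMRI time series have already been extracted:

```python
import numpy as np

def connectivity(mpfc: np.ndarray, pcc: np.ndarray) -> float:
    """Pearson correlation between two regional fMRI time series.

    Higher values mean tighter mPFC-PCC coupling, which in this line
    of work tends to track worse depression and anxiety symptoms.
    """
    return float(np.corrcoef(mpfc, pcc)[0, 1])

# Toy example with simulated time series (300 fMRI volumes).
rng = np.random.default_rng(0)
shared = rng.standard_normal(300)              # common fluctuation
mpfc = shared + 0.5 * rng.standard_normal(300)
pcc = shared + 0.5 * rng.standard_normal(300)
print(f"mPFC-PCC connectivity: {connectivity(mpfc, pcc):.2f}")
```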
“Mindfulness really helps to focus on the now, which just precludes all of this mind wandering and repetitive negative thinking,” she explains. In fact, she and her colleagues have found that mindfulness practice can reduce stress and improve attention in children. But she acknowledges that it can be difficult to engage young people and help them focus on the practice.
Tuning the mind
To help people visualize the benefits of their mindfulness practice, the researchers developed a game that can be played while an MRI scanner tracks a person’s brain activity. On a screen inside the scanner, the participant sees a ball and two circles. The circle at the top of the screen represents a desirable state in which the activity of the brain’s default mode network has been reduced, and the activity of a network the brain uses to focus on attention-demanding tasks—the frontal parietal network—has increased. An initial fMRI scan identifies these networks in each individual’s brain, creating a customized mental map on which the game is based.
“They’re training their brain to tune their mind. And they love it.” – Susan Whitfield-Gabrieli
As the person practices mindfulness meditation, which they learn prior to entering the scanner, the default mode network in the brain quiets while the frontal parietal network activates. When the scanner detects this change, the ball moves and eventually enters its target. With an initial success, the target shrinks, encouraging even more focus. When the participant’s mind wanders from their task, the default mode network activation increases (relative to the frontal parietal network) and the ball moves down towards the second circle, which represents an undesirable state. “Basically, they’re just moving this ball with their brain,” Whitfield-Gabrieli says. “They’re training their brain to tune their mind. And they love it.”
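In outline, the feedback signal is just the relative activation of the two networks mapped onto the ball’s position. Here is a deliberately simplified sketch of that mapping — our illustration, not the team’s real-time software, and assuming per-network activation estimates are already available from the scanner:

```python
import numpy as np

def ball_position(dmn: float, fpn: float, gain: float = 1.0) -> float:
    """Map relative network activation to a ball height in [-1, 1].

    dmn, fpn: current activation estimates for the default mode and
    frontal parietal networks (e.g., mean signal across each network's
    voxels, identified in the participant's initial localizer scan).
    Returns values near +1 when the FPN dominates (ball at the upper
    target) and near -1 when the DMN dominates (ball at the lower circle).
    """
    return float(np.tanh(gain * (fpn - dmn)))

# Toy feedback loop over a few simulated activation samples.
rng = np.random.default_rng(1)
for t in range(5):
    dmn, fpn = rng.random(), rng.random()
    print(f"t={t}: DMN={dmn:.2f} FPN={fpn:.2f} -> ball at {ball_position(dmn, fpn):+.2f}")
```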
Nine individuals between the ages of 17 and 19 with a history of major depression or anxiety disorders tried this new approach to mindfulness training, and for each of them, Whitfield-Gabrieli’s team saw a reduction in connectivity within the default mode network. Now they are working to determine whether an electroencephalogram, in which brain activity is measured with noninvasive electrodes, can be used to provide similar neurofeedback during mindfulness training—an approach that could be more accessible for broad clinical use.
Whitfield-Gabrieli notes that hyperconnectivity in the default mode network is also associated with psychosis, and she and her team have found that mindfulness meditation with real-time fMRI feedback can help reduce symptoms in adults with schizophrenia. Future studies are planned to investigate how the method impacts teens’ ability to establish a mindfulness practice and its potential effects on depression symptoms.
Researchers at the McGovern Institute and the Broad Institute of MIT and Harvard have harnessed a natural bacterial system to develop a new protein delivery approach that works in human cells and animals. The technology, described today in Nature, can be programmed to deliver a variety of proteins, including ones for gene editing, to different cell types. The system could potentially be a safe and efficient way to deliver gene therapies and cancer therapies.
Led by McGovern Institute investigator and Broad Institute core member Feng Zhang, the team took advantage of a tiny syringe-like injection structure, produced by a bacterium, that naturally binds to insect cells and injects a protein payload into them. The researchers used the artificial intelligence tool AlphaFold to engineer these syringe structures to deliver a range of useful proteins to both human cells and cells in live mice.
“This is a really beautiful example of how protein engineering can alter the biological activity of a natural system,” said Joseph Kreitz, the study’s first author and a graduate student in Zhang’s lab. “I think it substantiates protein engineering as a useful tool in bioengineering and the development of new therapeutic systems.”
“Delivery of therapeutic molecules is a major bottleneck for medicine, and we will need a deep bench of options to get these powerful new therapies into the right cells in the body,” added Zhang. “By learning from how nature transports proteins, we were able to develop a new platform that can help address this gap.”
Zhang is senior author on the study and is also the James and Patricia Poitras Professor of Neuroscience at MIT and an investigator at the Howard Hughes Medical Institute.
Injection via contraction
Graduate student Joseph Kreitz holds a 3D printed bacteriophage. Photo: Steph Stevens
Symbiotic bacteria use the roughly 100-nanometer-long syringe-like machines to inject proteins into host cells to help adjust the biology of their surroundings and enhance their survival. These machines, called extracellular contractile injection systems (eCISs), consist of a rigid tube inside a sheath that contracts, driving a spike on the end of the tube through the cell membrane. This forces protein cargo inside the tube to enter the cell.
On the outside of one end of the eCIS are tail fibers that recognize specific receptors on the cell surface and latch on. Previous research has shown that eCISs can naturally target insect and mouse cells, but Kreitz thought it might be possible to modify them to deliver proteins to human cells by reengineering the tail fibers to bind to different receptors.
Using AlphaFold, which predicts a protein’s structure from its amino acid sequence, the researchers redesigned tail fibers of an eCIS produced by Photorhabdus bacteria to bind to human cells. By reengineering another part of the complex, the scientists tricked the syringe into delivering a protein of their choosing, in some cases with remarkably high efficiency.
The team made eCISs that targeted cancer cells expressing the EGF receptor and showed that they killed almost 100 percent of the cells, but did not affect cells without the receptor. Though efficiency depends in part on the receptor the system is designed to target, Kreitz says that the findings demonstrate the promise of the system with thoughtful engineering.
Photorhabdus virulence cassettes (green) binding to insect cells (blue) prior to injection of payload proteins. Image: Joseph Kreitz | McGovern Institute, Broad Institute
The researchers also used an eCIS to deliver proteins to the brain in live mice — where it didn’t provoke a detectable immune response, suggesting that eCISs could one day be used to safely deliver gene therapies to humans.
Packaging proteins
Kreitz says the eCIS system is versatile, and the team has already used it to deliver a range of cargos including base editor proteins (which can make single-letter changes to DNA), proteins that are toxic to cancer cells, and Cas9, a large DNA-cutting enzyme used in many gene editing systems.
Cancer cells killed by programmed Photorhabdus virulence cassettes (PVCs), imaged with a scanning electron microscope. Image: Joseph Kreitz | McGovern Institute, Broad Institute
In the future, Kreitz says researchers could engineer other components of the eCIS system to tune other properties, or to deliver other cargos such as DNA or RNA. He also wants to better understand the function of these systems in nature.
“We and others have shown that this type of system is incredibly diverse across the biosphere, but they are not very well characterized,” Kreitz said. “And we believe this type of system plays really important roles in biology that are yet to be explored.”
This work was supported in part by the National Institutes of Health, Howard Hughes Medical Institute, Poitras Center for Psychiatric Disorders Research at MIT, Hock E. Tan and K. Lisa Yang Center for Autism Research at MIT, K. Lisa Yang and Hock E. Tan Molecular Therapeutics Center at MIT, K. Lisa Yang Brain-Body Center at MIT, Broad Institute Programmable Therapeutics Gift Donors, The Pershing Square Foundation, William Ackman, Neri Oxman, J. and P. Poitras, Kenneth C. Griffin, BT Charitable Foundation, the Asness Family Foundation, the Phillips family, D. Cheng, and R. Metcalfe.
Artificial intelligence seems to have gotten a lot smarter recently. AI technologies are increasingly integrated into our lives — improving our weather forecasts, finding efficient routes through traffic, personalizing the ads we see and our experiences with social media.
Watercolor image of a robot with a human brain, created using the AI system DALL·E 2.
But with the debut of powerful new chatbots like ChatGPT, millions of people have begun interacting with AI tools that seem convincingly human-like. Neuroscientists are taking note — and beginning to dig into what these tools tell us about intelligence and the human brain.
The essence of human intelligence is hard to pin down, let alone engineer. McGovern scientists say there are many kinds of intelligence, and as humans, we call on many different kinds of knowledge and ways of thinking. ChatGPT’s ability to carry on natural conversations with its users has led some to speculate the computer model is sentient, but McGovern neuroscientists insist that the AI technology cannot think for itself.
Still, they say, the field may have reached a turning point.
“I still don’t believe that we can make something that is indistinguishable from a human. I think we’re a long way from that. But for the first time in my life I think there is a small, nonzero chance that it may happen in the next year,” says McGovern founding member Tomaso Poggio, who has studied both human intelligence and machine learning for more than 40 years.
Different sort of intelligence
Developed by the company OpenAI, ChatGPT is an example of a deep neural network, a type of machine learning system that has made its way into virtually every aspect of science and technology. These models learn to perform various tasks by identifying patterns in large datasets. ChatGPT works by scouring texts and detecting and replicating the ways language is used. Drawing on language patterns it finds across the internet, ChatGPT can design you a meal plan, teach you about rocket science, or write a high school-level essay about Mark Twain. With all of the internet as a training tool, models like this have gotten so good at what they do, they can seem all-knowing.
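The training objective behind all of this is next-word prediction. The toy example below makes the idea concrete by simply counting which word tends to follow which — a hypothetical miniature, not how ChatGPT is actually implemented, since real models replace the counts with billions of learned parameters:

```python
from collections import Counter, defaultdict

# Tiny "training corpus" for a count-based next-word predictor.
corpus = "the cat sat on the mat . the cat ran . the dog sat on the rug .".split()

following: defaultdict[str, Counter] = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1          # count each observed continuation

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))   # 'cat' — it follows 'the' more often than any other word
```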
“Engineers have been inventing some of these forms of intelligence since the beginning of the computers. ChatGPT is one. But it is very far from human intelligence.” – Tomaso Poggio
Nonetheless, language models have a restricted skill set. Play with ChatGPT long enough and it will surely give you some wrong information, even if its fluency makes its words deceptively convincing. “These models don’t know about the world, they don’t know about other people’s mental states, they don’t know how things are beyond whatever they can gather from how words go together,” says Postdoctoral Associate Anna Ivanova, who works with McGovern Investigators Evelina Fedorenko and Nancy Kanwisher as well as Jacob Andreas in MIT’s Computer Science and Artificial Intelligence Laboratory.
Such a model, the researchers say, cannot replicate the complex information processing that happens in the human brain. That doesn’t mean language models can’t be intelligent — but theirs is a different sort of intelligence than our own. “I think that there is an infinite number of different forms of intelligence,” says Poggio. “Engineers have been inventing some of these forms of intelligence since the beginning of the computers. ChatGPT is one. But it is very far from human intelligence.”
Under the hood
Just as there are many forms of intelligence, there are also many types of deep learning models — and McGovern researchers are studying the internals of these models to better understand the human brain.
A watercolor painting of a robot generated by DALL·E 2.
“These AI models are, in a way, computational hypotheses for what the brain is doing,” Kanwisher says. “Up until a few years ago, we didn’t really have complete computational models of what might be going on in language processing or vision. Once you have a way of generating actual precise models and testing them against real data, you’re kind of off and running in a way that we weren’t ten years ago.”
Artificial neural networks echo the design of the brain in that they are made of densely interconnected networks of simple units that organize themselves — but Poggio says it’s not yet entirely clear how they work.
No one expects that brains and machines will work in exactly the same ways, though some types of deep learning models are more humanlike in their internals than others. For example, a computer vision model developed by McGovern Investigator James DiCarlo responds to images in ways that closely parallel the activity in the visual cortex of animals who are seeing the same thing. DiCarlo’s team can even use their model’s predictions to create an image that will activate specific neurons in an animal’s brain.
“We shouldn’t just automatically assume that if we trained a deep network on a task, that it’s going to look like the brain.” – Ila Fiete
Still, there is reason to be cautious in interpreting what artificial neural networks tell us about biology. “We shouldn’t just automatically assume that if we trained a deep network on a task, that it’s going to look like the brain,” says McGovern Associate Investigator Ila Fiete. Fiete acknowledges that it’s tempting to think of neural networks as models of the brain itself due to their architectural similarities — but she says so far, that idea remains largely untested.
McGovern Institute Associate Investigator Ila Fiete builds theoretical models of the brain. Photo: Caitlin Cunningham
She and her colleagues recently experimented with neural networks that estimate an object’s position in space by integrating information about its changing velocity.
In the brain, specialized neurons known as grid cells carry out this calculation, keeping us aware of where we are as we move through the world. Other researchers had reported that not only can neural networks do this successfully, but that those that do include components that behave remarkably like grid cells. They had argued that the need to do this kind of path integration must be the reason our brains have grid cells — but Fiete’s team found that artificial networks don’t need to mimic the brain to accomplish this brain-like task. They found that many neural networks can solve the same problem without grid cell-like elements.
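The underlying task is straightforward to state: position is accumulated velocity over time. The snippet below is a toy version of that path-integration problem — our illustration of the task itself, not Fiete’s models or code:

```python
import numpy as np

def path_integrate(velocities: np.ndarray, start: np.ndarray, dt: float = 0.1) -> np.ndarray:
    """Estimate 2D position at every step by accumulating velocity.

    velocities: (T, 2) array of velocity samples.
    Returns a (T, 2) array of positions; a network trained on this task
    must reproduce the same trajectory from the velocity stream alone.
    """
    return start + np.cumsum(velocities * dt, axis=0)

rng = np.random.default_rng(2)
v = rng.standard_normal((100, 2))       # a random 2D velocity stream
trajectory = path_integrate(v, start=np.zeros(2))
print("final position estimate:", trajectory[-1])
```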
One way investigators might generate deep learning models that do work like the brain is to give them a problem that is so complex that there is only one way of solving it, Fiete says.
Language, she acknowledges, might be that complex.
“This is clearly an example of a super-rich task,” she says. “I think on that front, there is a hope that they’re solving such an incredibly difficult task that maybe there is a sense in which they mirror the brain.”
Language parallels
In Fedorenko’s lab, where researchers are focused on identifying and understanding the brain’s language processing circuitry, they have found that some language models do, in fact, mimic certain aspects of human language processing. Many of the most effective models are trained to do a single task: make predictions about word use. That’s what your phone is doing when it suggests words for your text message as you type. Models that are good at this, it turns out, can apply this skill to carrying on conversations, composing essays, and using language in other useful ways. Neuroscientists have found evidence that humans, too, rely on word prediction as a part of language processing.
Fedorenko and her team compared the activity of language models to the brain activity of people as they read or listened to words, sentences, and stories, and found that some models were a better match to human neural responses than others. “The models that do better on this relatively unsophisticated task — just guess what comes next — also do better at capturing human neural responses,” Fedorenko says.
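A common way to make such comparisons — sketched here under our own assumptions, not necessarily the lab’s exact pipeline — is to fit a linear “encoding model” from a language model’s internal activations to measured brain responses, then score how well it predicts responses to held-out stimuli:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Simulated stand-ins: one activation vector per sentence from a language
# model, and the corresponding fMRI response of a language region.
rng = np.random.default_rng(3)
model_acts = rng.standard_normal((200, 50))          # 200 sentences x 50 features
brain_resp = model_acts @ rng.standard_normal(50) + rng.standard_normal(200)

X_tr, X_te, y_tr, y_te = train_test_split(model_acts, brain_resp, random_state=0)
enc = Ridge(alpha=1.0).fit(X_tr, y_tr)

# Predictivity: correlation between predicted and actual held-out responses.
# A model whose activations predict neural data better is a "better match."
r = np.corrcoef(enc.predict(X_te), y_te)[0, 1]
print(f"held-out model-to-brain predictivity: r = {r:.2f}")
```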
A watercolor painting of a language model, generated by DALL·E 2.
It’s a compelling parallel, suggesting computational models and the human brain may have arrived at a similar solution to a problem, even in the face of the biological constraints that have shaped the latter. For Fedorenko and her team, it’s sparked new ideas that they will explore, in part, by modifying existing language models — possibly to more closely mimic the brain.
With so much still unknown about how both human and artificial neural networks learn, Fedorenko says it’s hard to predict what it will take to make language models work and behave more like the human brain. One possibility they are exploring is training a model in a way that more closely mirrors the way children learn language early in life.
Another question, she says, is whether language models might behave more like humans if they had a more limited recall of their own conversations. “All of the state-of-the-art language models keep track of really, really long linguistic contexts. Humans don’t do that,” she says.
Chatbots can retain long strings of dialogue, using those words to tailor their responses as a conversation progresses, she explains. Humans, on the other hand, must cope with a more limited memory. While we can keep track of information as it is conveyed, we only store a string of about eight words as we listen or read. “We get linguistic input, we crunch it up, we extract some kind of meaning representation, presumably in some more abstract format, and then we discard the exact linguistic stream because we don’t need it anymore,” Fedorenko explains.
Language models aren’t able to fill in gaps in conversation with their own knowledge and awareness in the same way a person can, Ivanova adds. “That’s why so far they have to keep track of every single input word,” she says. “If we want a model that models specifically the [human] language network, we don’t need to have this large context window. It would be very cool to train those models on those short windows of context and see if it’s more similar to the language network.”
Multimodal intelligence
Despite these parallels, Fedorenko’s lab has also shown that there are plenty of things language circuits do not do. The brain calls on other circuits to solve math problems, write computer code, and carry out myriad other cognitive processes. Their work makes it clear that in the brain, language and thought are not the same.
That’s borne out by what cognitive neuroscientists like Kanwisher have learned about the functional organization of the human brain, where circuit components are dedicated to surprisingly specific tasks, from language processing to face recognition.
“The upshot of cognitive neuroscience over the last 25 years is that the human brain really has quite a degree of modular organization,” Kanwisher says. “You can look at the brain and say, ‘what does it tell us about the nature of intelligence?’ Well, intelligence is made up of a whole bunch of things.”
In generating this image from the text prompt, “a watercolor painting of a woman looking in a mirror and seeing a robot,” DALL·E 2 incorrectly placed the woman (not the robot) in the mirror, highlighting one of the weaknesses of current deep learning models.
In January, Fedorenko, Kanwisher, Ivanova, and colleagues shared an extensive analysis of the capabilities of large language models. After assessing models’ performance on various language-related tasks, they found that despite their mastery of linguistic rules and patterns, such models don’t do a good job using language in real-world situations. From a neuroscience perspective, that kind of functional competence is distinct from formal language competence, calling on not just language-processing circuits but also parts of the brain that store knowledge of the world, reason, and interpret social interactions.
Language is a powerful tool for understanding the world, they say, but it has limits.
“If you train on language prediction alone, you can learn to mimic certain aspects of thinking,” Ivanova says. “But it’s not enough. You need a multimodal system to carry out truly intelligent behavior.”
The team concluded that while AI language models do a very good job using language, they are incomplete models of human thought. For machines to truly think like humans, Ivanova says, they will need a combination of different neural nets all working together, in the same way different networks in the human brain work together to achieve complex cognitive tasks in the real world.
It remains to be seen whether such models would excel in the tech world, but they could prove valuable for revealing insights into human cognition — perhaps in ways that will inform engineers as they strive to build systems that better replicate human intelligence.
The McGovern Institute announced today that the 2023 Edward M. Scolnick Prize in Neuroscience will be awarded to neurobiologist Yang Dan. Dan holds the Nan Fung Life Sciences Chancellor’s Chair in Neuroscience at the University of California, Berkeley, and has been a Howard Hughes Investigator since 2008. The Scolnick Prize is awarded annually by the McGovern Institute for outstanding achievements in neuroscience.
“Yang Dan’s systems-level experimentation to identify the cell types and circuits that control sleep cycles represents the highest level of neuroscience research,” says Robert Desimone, McGovern Institute director and chair of the selection committee. “Her work has defined precise mechanisms for how motor behaviors are suppressed during sleep and activated during arousal, with potential implications for the design of more targeted sedatives and the treatment of sleep disorders.”
Significance of sleep
Dan received a BS in Physics in 1988 from Peking University in China. She then moved to the US to obtain her PhD in neurobiology from Columbia University in 1994, under the mentorship of Professor Mu-Ming Poo. Her doctoral research focused on mechanisms of plasticity at the neuromuscular synapse and was published in Science, Nature, and Neuron. During this time, she showed that the quantal release of neurotransmitters is not unique to neuronal cell types and, as one example, that retrograde signaling from muscle cells regulates the synaptic strength of the neuromuscular junction. For her postdoctoral training, Dan joined Clay Reid’s lab at The Rockefeller University and then accompanied Reid’s move to Harvard Medical School a short time later. Within just over two years, Dan had collected and analyzed neuronal recording data to support and develop key computational models of visual information coding – her two papers describing this work have been cited, together, over 900 times.
Yang Dan started her own laboratory in January 1997 when she joined the faculty of UC Berkeley’s Department of Molecular and Cell Biology as an assistant professor; she became a full professor in 2005. Dan’s lab became known for discoveries of how sensory inputs, especially visual inputs, are processed by the brain to influence behavior. Using electrophysiological recordings in model animals and computational analyses, her group worked out rules for how synaptic plasticity and neural connectivity, at the microcircuit and brain-wide level, contribute to learning and goal-directed behaviors.
Sleep recordings in various animal models and humans, shown in a research review by Yang Dan (2019 Annual Review of Neuroscience). (a) In nonmammalian animals such as jellyfish, Caenorhabditis elegans, Drosophila, and zebrafish, locomotor assay is used to measure sleep. (b) Examples of mouse EEG and EMG recordings during wakefulness and NREM and REM sleep. (c) Example polysomnography recordings from a healthy human subject during wakefulness and NREM (stage 3) and phasic REM sleep.
The Dan lab carved out a new research direction upon their discovery of mechanisms controlling rapid eye movement (REM) sleep, a state in which the brain is active and neuroplastic despite minimal sensory input. In their 2015 Nature paper, Dan’s group showed that, in mice, optogenetic activation of inhibitory neurons that project forward from the brainstem to the middle of the brain can instantaneously induce REM sleep. Since then, the Dan lab has published nearly a dozen primary research papers on the sleep-wake cycle that capitalize on the latest neural engineering techniques to record and control specific cell types and circuits in the brain. Most recently, she reported the discovery of neurons in the midbrain that receive wide-ranging inputs to coordinate active suppression of movement during REM and non-REM sleep with the release of movement during arousal. This circuit is key to the ability, known to exist in most animals, to experience sleep and even vivid dreams without acting them out. Dan’s discoveries are paving the way to a holistic understanding, from the molecular to macrocircuit levels, of how our bodies regulate sleep, an evolutionarily conserved behavior that is essential for survival.
Awards and honors
Dan was appointed as a Howard Hughes Medical Institute Investigator in 2008 and elected to the US National Academy of Sciences in 2018. She was awarded the Li Ka Shing Women in Science Award in 2007 and a Research Award for Innovation in Neuroscience from the Society for Neuroscience in 2009. She teaches summer courses at institutes around the world and has mentored 16 graduate students and 27 postdoctoral researchers, 25 of whom now run their own independent laboratories. Currently, Dan serves on the editorial boards of top-ranked science journals including Cell, Neuron, PNAS, and Current Opinion in Neurobiology.
Yang Dan will be awarded the Scolnick Prize on Wednesday, June 7, 2023. At 4:00 pm on that day, she will deliver a lecture titled “The how and why of sleep,” to be followed by a reception at the McGovern Institute, 43 Vassar Street (building 46, room 3002) in Cambridge. The event is free and open to the public.
What does a healthy relationship between neuroscience and society look like? How do we set the conditions for that relationship to flourish? Researchers and staff at the McGovern Institute and the MIT Museum have been exploring these questions with a five-month planning grant from the Dana Foundation.
Between October 2022 and March 2023, the team tested the potential for an MIT Center for Neuroscience and Society through a series of MIT-sponsored events that were attended by students and faculty of nearby Cambridge Public Schools. The goal of the project was to learn more about what happens when the distinct fields of neuroscience, ethics, and public engagement are brought together to work side-by-side.
Gabrieli lab members Sadie Zacharek (left) and Shruti Nishith (right) demonstrate how the MRI mock scanner works with a student volunteer from the Cambridge Public Schools. Photo: Emma Skakel, MIT Museum
Middle schoolers visit McGovern
Over four days in February, more than 90 sixth graders from Rindge Avenue Upper Campus (RAUC) in Cambridge, Massachusetts, visited the McGovern Institute and participated in hands-on experiments and discussions about the ethical, legal, and social implications of neuroscience research. RAUC is one of four middle schools in the city of Cambridge with an economically, racially, and culturally diverse student population. The middle schoolers interacted with an MIT team led by McGovern Scientific Advisor Jill R. Crittenden, including seventeen McGovern neuroscientists, three MIT Museum outreach coordinators, and neuroethicist Stephanie Bird, a member of the Dana Foundation planning grant team.
“It is probably the only time in my life I will see a real human brain.” – RAUC student
The students participated in nine activities each day, including trials of brain-machine interfaces, close-up examinations of preserved human brains, a tour of McGovern’s imaging center in which students watched as their teacher’s brain was scanned, and a visit to the MIT Museum’s interactive Artificial Intelligence Gallery.
Imagine-IT, a brain-machine interface designed by a team of middle school students during a visit to the McGovern Institute.
To close out their visit, students worked in groups alongside experts to invent brain-computer interfaces designed to improve or enhance human abilities. At each step, students were introduced to ethical considerations through consent forms, questions regarding the use of animal and human brains, and the possible impacts of their own designs on individuals and society.
“I admit that prior to these four days, I would’ve been indifferent to the inclusion of children’s voices in a discussion about technically complex ethical questions, simply because they have not yet had any opportunity to really understand how these technologies work,” says one researcher involved in the visit. “But hearing the students’ questions and ideas has changed my perspective. I now believe it is critically important that all age groups be given a voice when discussing socially relevant issues, such as the ethics of brain computer interfaces or artificial intelligence.”
For more information on the proposed MIT Center for Neuroscience and Society, visit the MIT Museum website.
EG (a pseudonym) is an accomplished woman in her early 60s: she is a college graduate and has an advanced professional degree. She has a stellar vocabulary—in the 98th percentile, according to tests—and has mastered a foreign language (Russian) to the point that she sometimes dreams in it.
She also has, likely since birth, been missing her left temporal lobe, a part of the brain known to be critical for language.
In 2016, EG contacted McGovern Institute Investigator Evelina Fedorenko, who studies the computations and brain regions that underlie language processing, to see if her team might be interested in including her in their research.
“EG didn’t know about her missing temporal lobe until age 25, when she had a brain scan for an unrelated reason,” says Fedorenko, the Frederick A. (1971) and Carole J. Middleton Career Development Associate Professor of Neuroscience at MIT. “As with many cases of early brain damage, she had no linguistic or cognitive deficits, but brains like hers are invaluable for understanding how cognitive functions reorganize in the tissue that remains.”
“I told her we definitely wanted to study her brain.” – Ev Fedorenko
Previous studies have shown that language processing relies on an interconnected network of frontal and temporal regions in the left hemisphere of the brain. EG’s unique brain presented an opportunity for Fedorenko’s team to explore how language develops in the absence of the temporal part of these core language regions.
Greta Tuckute, a graduate student in the Fedorenko lab, is the first author of the Neuropsychologia study. Photo: Caitlin Cunningham
Their results appeared recently in the journal Neuropsychologia. They found, for the first time, that temporal language regions appear to be critical for the emergence of frontal language regions in the same hemisphere — meaning, without a left temporal lobe, EG’s intact frontal lobe did not develop a capacity for language.
They also reveal much more: EG’s language system resides happily in her right hemisphere. “Our findings provide both visual and statistical proof of the brain’s remarkable plasticity, its ability to reorganize, in the face of extensive early damage,” says Greta Tuckute, a graduate student in the Fedorenko lab and first author of the paper.
In an introduction to the study, EG herself puts the social implications of the findings starkly. “Please do not call my brain abnormal, that creeps me out,” she says. “My brain is atypical. If not for accidentally finding these differences, no one would pick me out of a crowd as likely to have these, or any other differences that make me unique.”
How we process language
The frontal and temporal lobes are part of the cerebrum, the largest part of the brain. The cerebrum controls many functions, including the five senses, language, working memory, personality, movement, learning, and reasoning. It is divided into two hemispheres, the left and the right, by a deep longitudinal fissure. The two hemispheres communicate via a thick bundle of nerve fibers called the corpus callosum. Each hemisphere comprises four main lobes—frontal, parietal, temporal, and occipital. Core parts of the language network reside in the frontal and temporal lobes.
Core parts of the language network (shown in teal) reside in the left frontal and temporal lobes. Image: Ev Fedorenko
In most individuals, the language system develops in both the right and left hemispheres, with the left side dominant from an early age. The frontal lobe develops more slowly than the temporal lobe. Together, the interconnected frontal and temporal language areas enable us to understand and produce words, phrases, and sentences.
How, then, did EG, with no left temporal lobe, come to speak, comprehend, and remember verbal information (even a foreign language!) with such proficiency?
Simply put, the right hemisphere took over: “EG has a completely well-functioning neurotypical-like language system in her right hemisphere,” says Tuckute. “It is incredible that a person can use a single hemisphere—and the right hemisphere at that, which in most people is not the dominant hemisphere where language is processed—and be perfectly fine.”
Journey into EG’s brain
In the study, the researchers conducted two scans of EG’s brain using functional magnetic resonance imaging (fMRI), one in 2016 and one in 2019, and had her complete a range of behavioral tests. fMRI measures the level of blood oxygenation across the brain and can be used to make inferences about where neural activity is taking place. The researchers also scanned the brains of 151 “neurotypical” people. The large number of participants, combined with robust task paradigms and rigorous statistical analyses, made it possible to draw conclusions from a single case such as EG.
Magnetic resonance image of EG’s brain showing missing left temporal lobe. Image: Fedorenko Lab
Fedorenko is a staunch advocate of the single case study approach—common in medicine but not currently in neuroscience. “Unusual brains—and unusual individuals more broadly—can provide critical insights into brain organization and function that we simply cannot gain by looking at more typical brains.” Studying individual brains with fMRI, however, requires paradigms that work robustly at the single-brain level. This is not true of most paradigms used in the field, which require averaging many brains together to obtain an effect. Developing individual-level fMRI paradigms for language research has been the focus of Fedorenko’s early work, although the main reason for doing so had nothing to do with studying atypical brains: individual-level analyses are simply better—they are more sensitive and their results are more interpretable and meaningful.
“Looking at high-quality data in an individual participant versus looking at a group-level map is akin to using a high-precision microscope versus looking with a naked myopic eye, when all you see is a blur,” she wrote in an article published in Current Opinion in Behavioral Sciences in 2021. Having developed and validated such paradigms, though, is now allowing Fedorenko and her group to probe interesting brains.
While in the scanner, each participant performed a task that Fedorenko began developing more than a decade ago. They were presented with a series of words that form real, meaningful sentences, and with a series of “nonwords”—strings of letters that are pronounceable but without meaning. In typical brains, language areas respond more strongly when participants read sentences compared to when they read nonword sequences.
Similarly, in response to the real sentences, the language regions in EG’s right frontal and temporal lobes lit up—they were bursting with activity—while the left frontal lobe regions remained silent. In the neurotypical participants, the language regions in both the left and right frontal and temporal lobes lit up, with the left areas outshining the right.
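At analysis time, the key measurement in this paradigm is a within-subject contrast: each region’s mean response to sentences minus its mean response to nonwords. Here is a minimal sketch of that computation — our simplification of a standard fMRI contrast, not the lab’s full pipeline:

```python
import numpy as np

def language_contrast(sent_resp: np.ndarray, nonword_resp: np.ndarray) -> float:
    """Sentences-minus-nonwords contrast for one region in one participant.

    sent_resp, nonword_resp: trial- or block-level responses (e.g., mean
    percent signal change). A reliably positive contrast is the signature
    of a language-selective region.
    """
    return float(sent_resp.mean() - nonword_resp.mean())

# Simulated block responses for one region.
rng = np.random.default_rng(4)
sentences = 1.0 + 0.3 * rng.standard_normal(20)
nonwords = 0.4 + 0.3 * rng.standard_normal(20)
print(f"sentences > nonwords contrast: {language_contrast(sentences, nonwords):.2f}")
```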
fMRI showing EG’s language activation on the brain surface. The right frontal lobe shows robust activations, while the left frontal lobe does not have any language responsive areas. Image: Fedorenko lab
“EG showed a very strong response in the right temporal and frontal regions that process language,” says Tuckute. “And if you look at the controls, whose language dominant hemisphere is in the left, EG’s response in her right hemisphere was similar—or even higher—compared to theirs, just on the opposite side.”
Leaving no stone unturned, the researchers next asked whether the lack of language responses in EG’s left frontal lobe might be due to a general lack of response to cognitive tasks rather than just to language. So they conducted a non-language, working-memory task: they had EG and the neurotypical participants perform arithmetic addition problems while in the scanner. In typical brains, this task elicits responses in frontal and parietal areas in both hemispheres.
Not only did regions of EG’s right frontal lobe light up in response to the task; those in her left frontal lobe did, too. “Both EG’s language-dominant (right) hemisphere, and her non-language-dominant (left) hemisphere showed robust responses to this working-memory task,” says Tuckute. “So, yes, there’s definitely cognitive processing going on there. This selective lack of language responses in EG’s left frontal lobe led us to conclude that, for language, you need the temporal language region to ‘wire up’ the frontal language region.”
Next steps
In science, the answer to one question opens the door to untold more. “In EG, language took over a large chunk of the right frontal and temporal lobes,” says Fedorenko. “So what happens to the functions that in neurotypical individuals generally live in the right hemisphere?”
Many of those, she says, are social functions. The team has already tested EG on social tasks and is currently exploring how those social functions cohabit with the language ones in her right hemisphere. How can they all fit? Do some of the social functions have to migrate to other parts of the brain? They are also working with EG’s family: they have now scanned EG’s three siblings (one of whom is missing most of her right temporal lobe; the other two are neurotypical) and her father (also neurotypical).
The “Interesting Brains Project” website details current projects, findings, and ways to participate.
The project has now grown to include many other individuals with interesting brains, who contacted Fedorenko after some of this work was covered by news outlets. It promises to provide unique insights into how our plastic brains reorganize and adapt to various circumstances.
MIT’s K. Lisa Yang Center for Bionics has entered into a collaboration with the Government of Sierra Leone to strengthen the capabilities and services of that country’s orthotic and prosthetic (O&P) sector. Tens of thousands of people in Sierra Leone are in need of orthotic braces and artificial limbs, but access to such specialized medical care in this African nation is limited.
The agreement, reached between MIT, the Center for Bionics, and Sierra Leone’s Ministry of Health and Sanitation (MoHS), provides a detailed memorandum of understanding and intentions that will begin as a four-year program. The collaborators aim to strengthen Sierra Leone’s O&P sector through six key objectives: data collection and clinic operations, education, supply chain, infrastructure, new technologies, and mobile delivery of services.
Project Objectives
Data Collection and Clinic Operations: collect comprehensive data on epidemiology, need, utilization, and access for O&P services across the country
Education: create an inclusive education and training program for the people of Sierra Leone, to enable sustainable and independent operation of O&P services
Supply Chain: establish supply chains for prosthetic and orthotic components, parts, and materials for fabrication of devices
Infrastructure: prepare infrastructure (e.g., physical space, sufficient water, power and internet) to support increased production and services
New Technologies: develop and translate innovative technologies with potential to improve O&P clinic operations and management, patient mobility, and the design or fabrication of devices
Mobile Delivery: support outreach services and mobile delivery of care for patients in rural and difficult-to-reach areas
Working together, MIT’s bionics center and Sierra Leone’s MoHS aim to sustainably double the production and distribution of O&P services at Sierra Leone’s National Rehabilitation Centre and Bo Clinics over the next four years.
The team of MIT scientists who will be implementing this novel collaboration is led by Hugh Herr, MIT Professor of Media Arts and Sciences. Herr, himself a double amputee, serves as co-director of the K. Lisa Yang Center for Bionics, and heads the renowned Biomechatronics research group at the MIT Media Lab.
“From educational services, to supply chain, to new technology, this important MOU with the government of Sierra Leone will enable the Center to develop a broad, integrative approach to the orthotic and prosthetic sector within Sierra Leone, strengthening services and restoring much needed care to its citizens,” notes Professor Herr.
Sierra Leone’s Honorable Minister of Health Dr. Austin Demby also states: “As the Ministry of Health and Sanitation continues to galvanize efforts towards the attainment of Universal Health Coverage through the life stages approach, this collaboration will foster access, innovation and capacity building in the Orthotic and Prosthetic division. The ministry is pleased to work with and learn from MIT over the next four years in building resilient health systems, especially for vulnerable groups.”
“Our team at MIT brings together expertise across disciplines from global health systems to engineering and design,” added Francesca Riccio-Ackerman, the graduate student lead for the MIT Sierra Leone project. “This allows us to craft an innovative strategy with Sierra Leone’s Ministry of Health and Sanitation. Together we aim to improve available orthotic and prosthetic care for people with disabilities.”
The K. Lisa Yang Center for Bionics at the Massachusetts Institute of Technology pioneers transformational bionic interventions across a broad range of conditions affecting the body and mind. Based on fundamental scientific principles, the Center seeks to develop neural and mechanical interfaces for human-machine communications; integrate these interfaces into novel bionic platforms; perform clinical trials to accelerate the deployment of bionic products by the private sector; and leverage novel and durable, but affordable, materials and manufacturing processes to ensure equitable access to the latest bionic technology by all impacted individuals, especially those in developing countries.
Sierra Leone’s Ministry of Health and Sanitation is responsible for health service delivery across the country, as well as regulation of the health sector to meet the health needs of its citizenry.
This year’s holiday video (shown above) was inspired by Ev Fedorenko’s July 2022 Nature Neuroscience paper, which found similar patterns of brain activation and language selectivity across speakers of 45 different languages.
Universal language network
Ev Fedorenko uses the widely translated book “Alice in Wonderland” to test brain responses to different languages. Photo: Caitlin Cunningham
Over several decades, neuroscientists have created a well-defined map of the brain’s “language network,” or the regions of the brain that are specialized for processing language. Found primarily in the left hemisphere, this network includes regions within Broca’s area, as well as in other parts of the frontal and temporal lobes. Although roughly 7,000 languages are currently spoken and signed across the globe, the vast majority of those mapping studies have been done in English speakers as they listened to or read English texts.
To truly understand the cognitive and neural mechanisms that allow us to learn and process such diverse languages, Fedorenko and her team scanned the brains of speakers of 45 different languages while they listened to Alice in Wonderland in their native language. The results show that the speakers’ language networks appear to be essentially the same as those of native English speakers, suggesting that the location and key properties of the language network are universal.
The many languages of McGovern
English may be the primary language used by McGovern researchers, but more than 35 other languages are spoken by scientists and engineers at the McGovern Institute. Our holiday video features 30 of these researchers saying Happy New Year in their native (or learned) language. Below is the complete list of languages included in our video, along with more about the speaker of each language and the meaning behind their new year’s greeting.
American Sign Language
Kian Caplan (Feng lab)
Nationality: American
Other languages spoken: English
American Sign Language (ASL) serves as the predominant sign language of Deaf communities in the United States and most of English-speaking Canada. Imaging studies have shown that ASL activates the brain’s language network in the same way that spoken languages do.
“In high school, I had a teacher who was fluent in ASL and exposed me to the beautiful language,” says Caplan. “She inspired me to take three semesters of ASL in college, taught by a professor who was hard of hearing. It wasn’t until then that I began to appreciate Deaf history and culture, and had the opportunity to communicate with members of this wonderful community.”
Caplan goes on to explain that “ASL is not signed English, it is a different language with its own sets of grammar rules. Across the US, there are accents of sign language just like spoken languages, such as variations in signs used. Each country also has their own sign language, it is not universal (although there is technically a ‘Universal Sign Language’).”
Arabic
Ubadah Sabbagh (Feng lab)
Nationality: Syrian
Other languages spoken: English
Arabic, Sabbagh’s first language, is a Semitic language spoken across a large area including North Africa, most of the Arabian Peninsula, and other parts of the Middle East.
“Since this McGovern project is on language, I’d like to share a verse from one of my favorite Arabic poets, Mahmoud Darwish,” says Sabbagh. “He wrote on his relationship to language, and addressing it directly he said,
يا لغتي ساعديني على الاقتباس لأحتضن الكون.
يا لغتي! هل أكون أنا ما تكونين؟ أم أنت – يا لغتي – ما أكون؟
‘O my language, empower me to learn and so that I may embrace the universe.
O my language, will I become what you’ll become, or are you what becomes of me?'”
Bengali
Kohitij “Ko” Kar (DiCarlo lab)
Nationality: Indian
Other languages spoken: English, Hindi
Bengali, or Bangla, is an Indo-Aryan language native to the Bengal region of South Asia. It is the official, national, and most widely spoken language of Bangladesh and the second most widely spoken of the 22 scheduled languages of India.
“Like many other regional languages (and nations) around the world, Bengalis also have their own calendar. We are still in 1429 🙂 So the greeting I spoke is used a lot during our new year day, which is usually on April 15 (India), April 14 (Bangladesh),” says Kar.
Cantonese
Karen Pang (Anikeeva lab)
Nationality: Chinese (Hong Kong)
Other languages spoken: English, Mandarin
Like other Chinese dialects, Cantonese uses different tones to distinguish words. “Cantonese has nine tones,” says Pang, who was born and raised in Hong Kong.
Danish
Greta Tuckute (Fedorenko lab)
Nationality: Lithuanian and Danish
Other languages spoken: English, French, Lithuanian
“Right before midnight, most Danes will climb up on chairs, tables, or pretty much any elevated surface in order to jump down from it when the clock strikes twelve,” says Tuckute, who was born in Lithuania and moved to Denmark at age two. “It is considered good luck to ‘jump’ into the new year.”
Dothraki
Jessica Chomik-Morales (Kanwisher lab)
Nationality: American
Other languages spoken: English, Spanish
Dothraki is the constructed language (conlang) from the fantasy novel series “A Song of Ice and Fire” and its television adaptation “Game of Thrones.” It is spoken by the Dothraki, a nomadic people in the series’s fictional world. The Fedorenko lab has found that conlangs activate the language network the same way natural languages do.
“I have loved ‘Game of Thrones’ since reading the series in the sixth grade,” says Chomik-Morales. “The Dothraki are these incredible, ferocious warriors that fight on horseback in this fictional world and I can imagine they’d know how to throw a good celebration for New Year’s.”
French
Antoine De Comité (Seethapathi lab)
Nationality: Belgian
Other languages spoken: Dutch, English
“The French language has a lot of funny features,” says De Comité. “Almost all the time, we don’t pronounce the letter ‘h’ when it’s in a word. Also, there is no genuine word with a ‘w’ in French; they’re all borrowed from other languages.”
German
Marie Manthey (Anikeeva lab)
Nationality: German
Other languages spoken: English, French (beginner), Spanish (beginner)
“In Germany, depending on where you are living and what dialect you are speaking, we have slightly different sayings for Happy New Year,” explains Manthey. “My family is from around Hamburg and northwest Lower Saxony, where ‘Prosit Neujahr’ is more typical. One thing that is a tradition in my family and in many German families is to watch the show ‘Dinner for One’ on New Year’s Eve. It’s a 15-minute British comedy sketch from the 1960s about a woman named Miss Sophie who celebrates her 90th birthday by inviting her four closest friends to dinner. However, Miss Sophie has outlived all of these friends, so her butler James is forced to impersonate the guests throughout the four-course meal. ‘Dinner for One’ is not really well known in Great Britain, but it airs on New Year’s Eve in German-speaking countries and Scandinavia.”
Greek
Konstantinos Kagias (Boyden lab)
Nationality: Greek
Other languages spoken: English, French
Greek, the official language of Greece and Cyprus, has the longest documented history of any Indo-European language, spanning thousands of years of written records.
“Each of the main words in the Greek New Year’s greeting ‘Καλη Χρονια Σε Όλους’ is the root of a few English words,” says Kagias, who has spoken the language his whole life. “Examples include calisthenics, California, chronology, chronic, and holistic.”
Hebrew
Tamar Regev (Fedorenko lab)
Nationality: Israeli
Other languages spoken: English, Spanish
“The new Jewish year is actually around September and is called ‘Rosh HaShana,’ or head of the year,” explains Regev. “This is when we say Shana Tova, eat pomegranates and apple with honey (to make the new year sweet).”
Hindi
Sugandha “Su” Sharma (Fiete/Tenenbaum labs)
Nationality: Indian, Canadian
Other languages spoken: English, Punjabi
Hindi is the preferred official language of India and is spoken as a first language by nearly 425 million people and as a second language by some 120 million more. Sharma was born and raised in India (specifically Amritsar, Punjab), and her family spoke both Hindi and Punjabi. She also learned both languages in school while growing up.
Irish (Gaeilge)
Maedbh King (Ghosh lab)
Nationality: Irish
Other languages spoken: English, French (intermediate), German (beginner)
“Although Irish is an official language of Ireland, it is not spoken by a majority of people on a day-to-day basis,” explains King. “However, Irish is taught in schools from kindergarten through high school so most people have a basic understanding of the language. I attended Irish immersion schools through high school as did most of my immediate and extended family on my mom’s side. There are certain regions of the country, known as ‘Gaeltachts’, where Irish is the primary language of the people. If you visit these regions, it is common to hear the language spoken by all members of the community, and road signs are generally only in Irish, which can be confusing for tourists!”
“The phrase I spoke in the video, ‘Go mbeirimid beo ag an am seo arís,’ directly translates to ‘May we live to see this time again next year.‘ It would typically be written on a New Year’s greeting card, or more commonly spoken as a New Year’s toast after one (or two or three) beers.”
Italian
Michelangelo “Michi” Naim (Yang lab)
Nationality: Italian
Other languages spoken: English, Hebrew
“Italian is a beautiful language with its rolled r’s, round vowels, and melodic rhythm,” says Naim. “We celebrate the New Year with a big dinner (we constantly think about food) and we light fireworks at midnight and drink Prosecco.”
Japanese
Atsushi Takahashi (Martinos Imaging Center)
Nationality: Canadian, American
Other languages spoken: English, French, Danish (beginner), Mandarin (beginner)
The Japanese language is spoken natively by about 128 million people, primarily by Japanese people and primarily in Japan, the only country where it is the national language. Takahashi, who was born in Ireland, learned Japanese from his father.
Kashmiri
Saima Malik Moraleda (Fedorenko lab)
Nationality: Spanish
Other languages spoken: Arabic (beginner), Catalan, English, French, Hindi/Urdu, Spanish
Kashmiri is spoken in Kashmir, a region split between India and Pakistan in the northwestern Indian subcontinent.
“While Kashmiri is spoken by approximately 8 million people, only a small percentage knows how to read and write it,” says Moraleda, whose father spoke Kashmiri in her childhood home. “I was lucky that Harvard started offering a Kashmiri course last year, so I’ve finally started to learn to read a language I have known since I was born,” she adds. “There are three different scripts for it, none of which are standardized. I ended up picking the Romanized script for the greeting since that’s what the youth use when texting.”
Klingon
Maya Taliaferro (Fedorenko lab)
Nationality: American
Other languages spoken: English, Japanese
Klingon is the constructed language (conlang) spoken by the Klingons in the Star Trek universe. As a conlang, Klingon has no real regional specificity and therefore has speakers from all over the world: where there are fans of Star Trek, there can be Klingon speakers. Fictionally, however, it originates on the planet Qo’noS, home of the Klingon people. The Fedorenko lab has found that conlangs activate the language network the same way natural languages do.
“While Klingon is a relatively niche language with an estimated 50-60 fluent speakers, anyone can learn it by taking a course on Duolingo or joining the Klingon Language Institute,” says Taliaferro, whose father is a “huge fan” of Star Trek.
Konkani
Rahul Brito (Ghosh lab)
Nationality: American
Other languages spoken: English, French (beginner)
Konkani is primarily spoken in Konkan, India, a coastal region that includes parts of the modern west-coast states of Goa, Karnataka, Maharashtra, and Kerala. Although Brito’s extended family speaks Konkani, he does not speak it himself.
“To learn how to say ‘happy new year,’ I had to ask my mom (who did not remember), my aunt in India (who did not know for sure), and then her friend (who sent me a voice recording),” says Brito.
Korean
Jaeyoung Yoon (Harnett lab)
Nationality: Korean
Other languages spoken: English, Italian (beginner)
Korean is the native language for about 80 million people, mostly of Korean descent. Yoon was born in South Korea and has spoken the language his entire life.
Mandarin
Yiting “Veronica” Su (Desimone lab)
Nationality: Chinese
Other languages spoken: English
Chinese New Year, also called Lunar New Year, is an annual 15-day festival in China and Chinese communities around the world that begins with the new moon that occurs sometime between January 21 and February 20 according to Western calendars. Festivities last until the following full moon.
“In my culture, we celebrate the new year by cleaning and decorating the house with red things, offering sacrifices to ancestors, exchanging red envelopes and other gifts, watching lion and dragon dances, and of course, eating food at family reunion dinners!” says Su.
Marathi
Aalok Sathe (Fedorenko lab)
Nationality: Indian
Other languages spoken: English, Hindi, Sanskrit
Marathi is an Indo-Aryan language predominantly spoken in the central-west and coastal regions of India.
“We typically celebrate the new year in March/April by raising a gudhi in a window or a balcony of the home and by drawing colorful rangoli on the floor outside of entrances to homes and other establishments like schools and offices,” says Sathe. “The gudhi is a kind of flag made from a long wooden stick with a festive cloth, mango and neem leaves, marigold flowers, sugar crystals, and an upside-down silver/copper vessel on top to hold everything in place. This day also symbolizes the day Rama returned from a 14-year exile after defeating Ravana. Rama was a king whose dynasty and story (Ramayana) finds mention in mythologies of many cultures of South and East Asia including India, Nepal, Tibet, Thailand, Indonesia, the Philippines, and more. Some also consider this the day Brahma created the universe.”
Marwari
Vinayak “Vin” Agarwal (McDermott lab)
Nationality: Indian
Other languages spoken: English, Hindi
Marwari is spoken in the Indian state of Rajasthan, where Agarwal grew up. Rajasthan is the largest Indian state by area and is located on India’s northwestern side, where it comprises most of the Thar Desert, or Great Indian Desert.
Nepali
Sujaya Neupane (Jazayeri lab)
Nationality: Nepalese, Canadian
Other languages spoken: English, Hindi
Nepali is an Indo-Aryan language native to the Himalayan region of South Asia. It is the official, and most widely spoken, language of Nepal, where Neupane was born and raised.
Persian (Farsi)
Yasaman Bagherzadeh (Desimone lab)
Nationality: Iranian
Other languages spoken: English
Persian, or Farsi, is spoken in Iran, Afghanistan, and Tajikistan. In Iran, 68% of the population speaks Persian as a first language.
“The new year and the first day of the Iranian calendar is different from most parts of the world,” explains Bagherzadeh. “The first day of the Iranian calendar falls on the March equinox, the first day of spring, around March 21. We call it ‘Nowruz,’ which means new day. Nowruz has its origins in the Iranian religion of Zoroastrianism and has thus been rooted in the traditions of the Iranian people for over 3,000 years. We celebrate Nowruz by cleaning our house (we call it home shaking), buying new clothes for the new year, visiting friends and family, and preparing food. Instead of a Christmas tree, we have the Haft-sin. Typically, before the arrival of Nowruz, family members gather around the Haft-sin table and await the exact moment of the March equinox to celebrate the New Year. The number 7 and the letter S are related to the seven Ameshasepantas mentioned in the Zend-Avesta. They relate to the four elements of fire, earth, air, and water, and the three life forms of humans, animals, and plants.”
Polish
Julia Dziubek (Harnett lab)
Nationality: Polish
Other languages spoken: English, German
“In Poland, we believe that the way you spend the last twelve days of your year will represent how you will spend the twelve months of the new year,” explains Dziubek. “For people who do not spend their last twelve days well, we have another belief,” she adds. “The way you spend your New Year’s Eve will determine how you will spend your new year.”
Portuguese
Willian De Faria (Kanwisher lab)
Nationality: Brazilian
Other languages spoken: English, Spanish
Portuguese is a western Romance language originating in the Iberian Peninsula of Europe. Approximately 274 million people speak Portuguese, and it is usually listed as the sixth most spoken language in the world. Today, Portuguese is spoken in the Iberian Peninsula, South America, and parts of Africa. The countries where Portuguese is the primary native language are Portugal, Brazil, Angola, and São Tomé e Príncipe, and it is the primary administrative language of other countries, such as Mozambique and Cabo Verde.
“Fun fact,” says De Faria, who was born in Brazil and lived there until he was six. “It is easier for Portuguese native speakers to learn Spanish than the other way around. Also, Portuguese is a well-represented language in New England! Aside from immigrants from Portugal, lots of Lusophone communities have called Massachusetts, Rhode Island, and Connecticut home. Many of these communities have Brazilian and Cabo Verdean origins. To note, Cabo Verdeans speak a beautiful Portuguese-based creole.”
Russian
Elvira Kinzina (AbuGoot lab)
Nationality: Russian
Other languages spoken: Arabic (beginner), English
Russian is an East Slavic language mainly spoken across Russia with over 258 million total speakers worldwide.
Saint Lucian Creole French (Kwéyòl)
Quilee Simeon (Yang lab)
Nationality: Saint Lucian
Other languages spoken: English
Saint Lucian Creole French (Kwéyòl), known locally as Patwa, is the French-based creole widely spoken in Saint Lucia, where Simeon was born. Though English is the country’s official language, Kwéyòl is the national vernacular, and the government and media houses present information in both.
Spanish
Raul Mojica Soto-Albors (Harnett lab)
Nationality: Puerto Rican, American
Other languages spoken: English
“In Puerto Rico, most people speak Spanglish – a combination of Spanish and English,” explains Soto-Albors, who was born in Puerto Rico. “We constantly switch words up in a single sentence when speaking, with a seemingly arbitrary yet consistent set of rules.”
Regarding his new year’s greeting, Soto-Albors says, “It is common (more as a courtesy for acquaintances, service workers, and anyone you won’t see until after the new year) for people to wish each other ‘Feliz Navidad y próspero año nuevo,’ which roughly translates to ‘Merry Christmas and Happy New Year,’ or, literally, ‘Merry Christmas and have a prosperous new year.’”
Tamil
Karthik Srinivasan (Desimone lab)
Nationality: Indian
Other languages spoken: English, Hindi, and, to varying degrees of comprehension and spoken ability, Malayalam, Telugu, and Kannada (the other three major languages of the Dravidian language family)
Tamil is a Dravidian language natively spoken by the Tamil people of South Asia. Roughly 70 million people are native Tamil speakers. Tamil is an official language of the Indian state of Tamil Nadu, the sovereign nations of Sri Lanka and Singapore, and the Indian territory of Puducherry. According to Srinivasan, “Tamil is one of the classical languages of India, with literature dating back to antiquity and before (at least ~1500 BCE, if not earlier). It is possibly the world’s oldest continuously spoken language and culture with written records.”
Urdu
Syed Suleman Abbas Zaidi
Nationality: Pakistani
Other languages spoken: English
Urdu is an Indo-Aryan language spoken chiefly in South Asia. It is the national language of Pakistan, where it is also an official language alongside English. Similar to celebrations in the United States, Pakistanis ring in the new year with lots of fireworks, says Zaidi.