When he turned his ankle five years ago as an undergraduate playing pickup basketball at the University of Illinois, Wei-Chen (Eric) Wang SM ’22 knew his life would change in certain ways. For one thing, Wang, then a computer science major, wouldn’t be playing basketball anytime soon. He also assumed, correctly, that he would require physical therapy (PT).
What he did not foresee was that this minor injury would influence his career trajectory. While lying on the PT bench, Wang began to wonder: “Can I replicate what the therapist is doing using a robot?” It was an idle thought at the time. Today, however, his research involves robots and movement, closely related to what had seemed a passing fancy.
Wang continued his focus on computer science as an MIT graduate student, receiving his master’s in 2022 before deciding to pursue work of a more applied nature. He met Nidhi Seethapathi, who had joined MIT’s faculty a few months earlier as an assistant professor in electrical engineering and computer science and in brain and cognitive sciences, and was intrigued by the notion of creating robots that could illuminate the key principles of movement—knowledge that might someday help people regain the ability to move comfortably after injury, stroke, or disease.
As the first PhD student in Seethapathi’s group and a MathWorks Fellow, Wang is charged with building machine learning-based models that can accurately predict and reproduce human movements. He will then use computer-simulated environments to visualize and evaluate the performance of these models.
To begin, he needs to gather data about specific human movements. One potential data collection method involves the placement of sensors or markers on different parts of the body to pinpoint their precise positions at any given moment. He can then try to calculate those positions in the future, as dictated by the equations of motion in physics.
The other method relies on computer vision-powered software that can automatically convert video footage to motion data. Wang prefers the latter approach, which he considers more natural. “We just look at what humans are doing and try to learn from that directly,” he explains. That’s also where machine learning comes in. “We use machine-learning tools to extract data from the video, and those data become the input to our model,” he adds. The model, in this case, is just another term for the robot brain.
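A minimal sketch of how video-derived motion data might feed such a model. The shapes and array names here are illustrative, not from the lab's actual pipeline; the idea is simply that pose keypoints extracted from video become input-target pairs for a predictive model.

```python
import numpy as np

# Illustrative only: pretend a video pose estimator has already produced
# a time series of 2D keypoints with shape (frames, joints, 2).
rng = np.random.default_rng(0)
keypoints = rng.standard_normal((100, 17, 2))  # 100 frames, 17 body joints

# Turn the motion stream into supervised training pairs:
# input = pose at time t, target = pose at time t+1.
X = keypoints[:-1].reshape(99, -1)  # each pose flattened to a 34-dim vector
y = keypoints[1:].reshape(99, -1)   # the model learns to predict the next pose

print(X.shape, y.shape)  # (99, 34) (99, 34)
```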
The near-term goal is not to make robots more natural, Wang notes. “We’re using [simulated] robots to understand how humans are moving and eventually to explain any kind of movement—or at least that’s the hope. That said, based on the general principles we’re able to abstract, we might someday build robots that can move more naturally.”
Wang is also collaborating on a project headed by postdoctoral fellow Antoine De Comité that focuses on robotic retrieval of objects—the movements required to remove books from a library shelf, for example, or to grab a drink from a refrigerator. While robots routinely excel at tasks such as grasping an object on a tabletop, performing naturalistic movements in three dimensions remains challenging.
Wang describes a video shown by a Stanford University scientist in which a robot destroyed a refrigerator while attempting to extract a beer. He and De Comité hope for better results with robots that have undergone reinforcement learning—an approach using deep learning in which desired motions are rewarded or reinforced whereas unwanted motions are discouraged.
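The reward-and-penalty idea behind reinforcement learning can be sketched with a toy reward function. The weights, thresholds, and function name below are invented for illustration and are not from the actual project; they simply show how desired motions (reaching the object) are rewarded while unwanted ones (smashing the refrigerator) are discouraged.

```python
import numpy as np

def reward(gripper_pos, target_pos, collided):
    """Toy reward for a retrieval task: get close to the object, never
    hit the structure around it. Weights are invented for illustration."""
    distance = np.linalg.norm(np.asarray(gripper_pos) - np.asarray(target_pos))
    r = -distance       # closer to the object is better
    if collided:
        r -= 10.0       # hitting the shelf or fridge is heavily penalized
    if distance < 0.05:
        r += 5.0        # bonus for actually reaching the object
    return r

# A careful approach scores far higher than a crash at the same distance.
print(reward([0.0, 0.0, 0.04], [0.0, 0.0, 0.0], collided=False))  # ≈ 4.96
print(reward([0.0, 0.0, 0.04], [0.0, 0.0, 0.0], collided=True))   # ≈ -5.04
```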
If they succeed in designing a robot that can safely retrieve a beer, Wang says, then more important and delicate tasks could be within reach. Someday, a robot at PT might guide a patient through knee exercises or apply ultrasound to an arthritic elbow.
Nidhi Seethapathi was first drawn to using powerful yet simple models to understand elaborate patterns when she learned about Newton’s laws of motion as a high school student in India. She was fascinated by the idea that wonderfully complex behaviors can arise from a set of objects that follow a few elementary rules.
Now an assistant professor at MIT, Seethapathi seeks to capture the intricacies of movement in the real world, using computational modeling as well as input from theory and experimentation. “[Theoretical physicist and Nobel laureate] Richard Feynman ’39 once said, ‘What I cannot create, I do not understand,’” Seethapathi says. “In that same spirit, the way I try to understand movement is by building models that move the way we do.”
Models of locomotion in the real world
Seethapathi—who holds a shared faculty position between the Department of Brain and Cognitive Sciences and the Department of Electrical Engineering and Computer Science’s Faculty of Artificial Intelligence + Decision-Making, which is housed in the Schwarzman College of Computing and the School of Engineering—recalls a moment during her undergraduate years studying mechanical engineering in Mumbai when a professor asked students to pick an aspect of movement to examine in detail. While most of her peers chose to analyze machines, Seethapathi selected the human hand. She was astounded by its versatility, she says, and by the number of variables, referred to by scientists as “degrees of freedom,” that are needed to characterize routine manual tasks. The assignment made her realize that she wanted to explore the diverse ways in which the entire human body can move.
Also an investigator at the McGovern Institute for Brain Research, Seethapathi pursued graduate research at The Ohio State University Movement Lab, where her goal was to identify the key elements of human locomotion. At that time, most people in the field were analyzing simple movements, she says, “but I was interested in broadening the scope of my models to include real-world behavior. Given that movement is so ubiquitous, I wondered: What can this model say about everyday life?”
After earning her PhD from Ohio State in 2018, Seethapathi continued this line of research as a postdoctoral fellow at the University of Pennsylvania. New computer vision tools to track human movement from video footage had just entered the scene, and during her time at UPenn, Seethapathi sought to expand her skillset to include computer vision and applications to movement rehabilitation.
At MIT, Seethapathi continues to extend the range of her studies of human movement, looking at how locomotion can evolve as people grow and age, and how it can adapt to anatomical changes and even adjust to shifts in weather, which can alter ground conditions. Her investigations now encompass other species as part of an effort to determine how creatures with different morphologies and habitats regulate their movements.
The models Seethapathi and her team create make predictions about human movements that can later be verified or refuted by empirical tests. While relatively simple experiments can be carried out on treadmills, her group is developing measurement systems incorporating wearable sensors and video-based sensing to measure movement data that have traditionally been hard to obtain outside the laboratory.
Although Seethapathi says she is primarily driven to uncover the fundamental principles that govern movement behavior, she believes her work also has practical applications.
“When people are treated for a movement disorder, the goal is to impact their movements in the real world,” she says. “We can use our predictive models to see how a particular intervention will affect a person’s trajectory. The hope is that our models can help put the individual on the right track to recovery as early as possible.”
Artificial intelligence seems to have gotten a lot smarter recently. AI technologies are increasingly integrated into our lives — improving our weather forecasts, finding efficient routes through traffic, personalizing the ads we see and our experiences with social media.
But with the debut of powerful new chatbots like ChatGPT, millions of people have begun interacting with AI tools that seem convincingly human-like. Neuroscientists are taking note — and beginning to dig into what these tools tell us about intelligence and the human brain.
The essence of human intelligence is hard to pin down, let alone engineer. McGovern scientists say there are many kinds of intelligence, and as humans, we call on many different kinds of knowledge and ways of thinking. ChatGPT’s ability to carry on natural conversations with its users has led some to speculate the computer model is sentient, but McGovern neuroscientists insist that the AI technology cannot think for itself.
Still, they say, the field may have reached a turning point.
“I still don’t believe that we can make something that is indistinguishable from a human. I think we’re a long way from that. But for the first time in my life I think there is a small, nonzero chance that it may happen in the next year,” says McGovern founding member Tomaso Poggio, who has studied both human intelligence and machine learning for more than 40 years.
Different sort of intelligence
Developed by the company OpenAI, ChatGPT is an example of a deep neural network, a type of machine learning system that has made its way into virtually every aspect of science and technology. These models learn to perform various tasks by identifying patterns in large datasets. ChatGPT works by scouring texts and detecting and replicating the ways language is used. Drawing on language patterns it finds across the internet, ChatGPT can design you a meal plan, teach you about rocket science, or write a high school-level essay about Mark Twain. With all of the internet as a training tool, models like this have gotten so good at what they do, they can seem all-knowing.
Nonetheless, language models have a restricted skill set. Play with ChatGPT long enough and it will surely give you some wrong information, even if its fluency makes its words deceptively convincing. “These models don’t know about the world, they don’t know about other people’s mental states, they don’t know how things are beyond whatever they can gather from how words go together,” says Postdoctoral Associate Anna Ivanova, who works with McGovern Investigators Evelina Fedorenko and Nancy Kanwisher as well as Jacob Andreas in MIT’s Computer Science and Artificial Intelligence Laboratory.
Such a model, the researchers say, cannot replicate the complex information processing that happens in the human brain. That doesn’t mean language models can’t be intelligent — but theirs is a different sort of intelligence than our own. “I think that there is an infinite number of different forms of intelligence,” says Poggio. “Engineers have been inventing some of these forms of intelligence since the beginning of the computers. ChatGPT is one. But it is very far from human intelligence.”
Under the hood
Just as there are many forms of intelligence, there are also many types of deep learning models — and McGovern researchers are studying the internals of these models to better understand the human brain.
“These AI models are, in a way, computational hypotheses for what the brain is doing,” Kanwisher says. “Up until a few years ago, we didn’t really have complete computational models of what might be going on in language processing or vision. Once you have a way of generating actual precise models and testing them against real data, you’re kind of off and running in a way that we weren’t ten years ago.”
Artificial neural networks echo the design of the brain in that they are made of densely interconnected networks of simple units that organize themselves — but Poggio says it’s not yet entirely clear how they work.
No one expects that brains and machines will work in exactly the same ways, though some types of deep learning models are more humanlike in their internals than others. For example, a computer vision model developed by McGovern Investigator James DiCarlo responds to images in ways that closely parallel the activity in the visual cortex of animals who are seeing the same thing. DiCarlo’s team can even use their model’s predictions to create an image that will activate specific neurons in an animal’s brain.
Still, there is reason to be cautious in interpreting what artificial neural networks tell us about biology. “We shouldn’t just automatically assume that if we trained a deep network on a task, that it’s going to look like the brain,” says McGovern Associate Investigator Ila Fiete. Fiete acknowledges that it’s tempting to think of neural networks as models of the brain itself due to their architectural similarities — but she says so far, that idea remains largely untested.
She and her colleagues recently experimented with neural networks that estimate an object’s position in space by integrating information about its changing velocity.
In the brain, specialized neurons known as grid cells carry out this calculation, keeping us aware of where we are as we move through the world. Other researchers had reported that not only can neural networks do this successfully, but that those that do include components that behave remarkably like grid cells. They had argued that the need to do this kind of path integration must be the reason our brains have grid cells — but Fiete’s team found that artificial networks don’t need to mimic the brain to accomplish this brain-like task. They found that many neural networks can solve the same problem without grid cell-like elements.
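The path-integration task itself is simple to state: accumulate velocity over time to recover position. A minimal sketch of the computation (the numbers are arbitrary; a trained network would receive the velocity stream as input and output the running position estimate):

```python
import numpy as np

# Path integration: recover position by accumulating velocity over time.
dt = 0.1  # seconds per step
velocities = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 2.0], [-1.0, 0.0]])

position = np.zeros(2)
for v in velocities:
    position += v * dt  # discrete-time integration: x(t+dt) = x(t) + v*dt

print(position)  # ≈ [0.1, 0.2]
```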
One way investigators might generate deep learning models that do work like the brain is to give them a problem that is so complex that there is only one way of solving it, Fiete says.
Language, she acknowledges, might be that complex.
“This is clearly an example of a super-rich task,” she says. “I think on that front, there is a hope that they’re solving such an incredibly difficult task that maybe there is a sense in which they mirror the brain.”
In Fedorenko’s lab, where researchers are focused on identifying and understanding the brain’s language processing circuitry, they have found that some language models do, in fact, mimic certain aspects of human language processing. Many of the most effective models are trained to do a single task: make predictions about word use. That’s what your phone is doing when it suggests words for your text message as you type. Models that are good at this, it turns out, can apply this skill to carrying on conversations, composing essays, and using language in other useful ways. Neuroscientists have found evidence that humans, too, rely on word prediction as a part of language processing.
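The word-prediction task can be illustrated with a toy counting model. Real language models learn this with deep networks over vast corpora; the sketch below only captures the task itself, using an invented ten-word corpus.

```python
from collections import Counter, defaultdict

# Count which word follows each word in a tiny corpus, then predict the
# most frequent continuation.
corpus = "the cat sat on the mat and the cat slept".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # cat ("cat" follows "the" twice; "mat" once)
```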
Fedorenko and her team compared the activity of language models to the brain activity of people as they read or listened to words, sentences, and stories, and found that some models were a better match to human neural responses than others. “The models that do better on this relatively unsophisticated task — just guess what comes next — also do better at capturing human neural responses,” Fedorenko says.
It’s a compelling parallel, suggesting computational models and the human brain may have arrived at a similar solution to a problem, even in the face of the biological constraints that have shaped the latter. For Fedorenko and her team, it’s sparked new ideas that they will explore, in part, by modifying existing language models — possibly to more closely mimic the brain.
With so much still unknown about how both human and artificial neural networks learn, Fedorenko says it’s hard to predict what it will take to make language models work and behave more like the human brain. One possibility they are exploring is training a model in a way that more closely mirrors the way children learn language early in life.
Another question, she says, is whether language models might behave more like humans if they had a more limited recall of their own conversations. “All of the state-of-the-art language models keep track of really, really long linguistic contexts. Humans don’t do that,” she says.
Chatbots can retain long strings of dialogue, using those words to tailor their responses as a conversation progresses, she explains. Humans, on the other hand, must cope with a more limited memory. While we can keep track of information as it is conveyed, we only store a string of about eight words as we listen or read. “We get linguistic input, we crunch it up, we extract some kind of meaning representation, presumably in some more abstract format, and then we discard the exact linguistic stream because we don’t need it anymore,” Fedorenko explains.
Language models aren’t able to fill in gaps in conversation with their own knowledge and awareness in the same way a person can, Ivanova adds. “That’s why so far they have to keep track of every single input word,” she says. “If we want a model that models specifically the [human] language network, we don’t need to have this large context window. It would be very cool to train those models on those short windows of context and see if it’s more similar to the language network.”
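The short-context idea the researchers describe can be sketched in a few lines. The function name and example sentence are hypothetical; the eight-word window echoes the figure quoted above.

```python
def human_like_context(tokens, window=8):
    """Keep only the most recent `window` tokens, discarding the rest,
    unlike a chatbot that conditions on the full dialogue history."""
    return tokens[-window:]

dialogue = ("we get linguistic input we crunch it up "
            "we extract meaning and we discard the stream").split()

print(human_like_context(dialogue))  # only the last 8 words survive
```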
Despite these parallels, Fedorenko’s lab has also shown that there are plenty of things language circuits do not do. The brain calls on other circuits to solve math problems, write computer code, and carry out myriad other cognitive processes. Their work makes it clear that in the brain, language and thought are not the same.
That’s borne out by what cognitive neuroscientists like Kanwisher have learned about the functional organization of the human brain, where circuit components are dedicated to surprisingly specific tasks, from language processing to face recognition.
“The upshot of cognitive neuroscience over the last 25 years is that the human brain really has quite a degree of modular organization,” Kanwisher says. “You can look at the brain and say, ‘what does it tell us about the nature of intelligence?’ Well, intelligence is made up of a whole bunch of things.”
In January, Fedorenko, Kanwisher, Ivanova, and colleagues shared an extensive analysis of the capabilities of large language models. After assessing models’ performance on various language-related tasks, they found that despite their mastery of linguistic rules and patterns, such models don’t do a good job using language in real-world situations. From a neuroscience perspective, that kind of functional competence is distinct from formal language competence, calling on not just language-processing circuits but also parts of the brain that store knowledge of the world, reason, and interpret social interactions.
Language is a powerful tool for understanding the world, they say, but it has limits.
“If you train on language prediction alone, you can learn to mimic certain aspects of thinking,” Ivanova says. “But it’s not enough. You need a multimodal system to carry out truly intelligent behavior.”
The team concluded that while AI language models do a very good job using language, they are incomplete models of human thought. For machines to truly think like humans, Ivanova says, they will need a combination of different neural nets all working together, in the same way different networks in the human brain work together to achieve complex cognitive tasks in the real world.
It remains to be seen whether such models would excel in the tech world, but they could prove valuable for revealing insights into human cognition — perhaps in ways that will inform engineers as they strive to build systems that better replicate human intelligence.
What does a healthy relationship between neuroscience and society look like? How do we set the conditions for that relationship to flourish? Researchers and staff at the McGovern Institute and the MIT Museum have been exploring these questions with a five-month planning grant from the Dana Foundation.
Between October 2022 and March 2023, the team tested the potential for an MIT Center for Neuroscience and Society through a series of MIT-sponsored events that were attended by students and faculty of nearby Cambridge Public Schools. The goal of the project was to learn more about what happens when the distinct fields of neuroscience, ethics, and public engagement are brought together to work side-by-side.
Middle schoolers visit McGovern
Over four days in February, more than 90 sixth graders from Rindge Avenue Upper Campus (RAUC) in Cambridge, Massachusetts, visited the McGovern Institute and participated in hands-on experiments and discussions about the ethical, legal, and social implications of neuroscience research. RAUC is one of four middle schools in the city of Cambridge with an economically, racially, and culturally diverse student population. The middle schoolers interacted with an MIT team led by McGovern Scientific Advisor Jill R. Crittenden, including seventeen McGovern neuroscientists, three MIT Museum outreach coordinators, and neuroethicist Stephanie Bird, a member of the Dana Foundation planning grant team.
“It is probably the only time in my life I will see a real human brain.” – RAUC student
The students participated in nine activities each day, including trials of brain-machine interfaces, close-up examinations of preserved human brains, a tour of McGovern’s imaging center in which students watched as their teacher’s brain was scanned, and a visit to the MIT Museum’s interactive Artificial Intelligence Gallery.
To close out their visit, students worked in groups alongside experts to invent brain-computer interfaces designed to improve or enhance human abilities. At each step, students were introduced to ethical considerations through consent forms, questions regarding the use of animal and human brains, and the possible impacts of their own designs on individuals and society.
“I admit that prior to these four days, I would’ve been indifferent to the inclusion of children’s voices in a discussion about technically complex ethical questions, simply because they have not yet had any opportunity to really understand how these technologies work,” says one researcher involved in the visit. “But hearing the students’ questions and ideas has changed my perspective. I now believe it is critically important that all age groups be given a voice when discussing socially relevant issues, such as the ethics of brain computer interfaces or artificial intelligence.”
For more information on the proposed MIT Center for Neuroscience and Society, visit the MIT Museum website.
A new study from researchers at MIT and Brown University characterizes several properties that emerge during the training of deep classifiers, a type of artificial neural network commonly used for classification tasks such as image classification, speech recognition, and natural language processing.
In the study, the authors focused on two types of deep classifiers: fully connected deep networks and convolutional neural networks (CNNs).
A previous study examined the structural properties that develop in large neural networks at the final stages of training. That study focused on the last layer of the network and found that deep networks trained to fit a training dataset will eventually reach a state known as “neural collapse.” When neural collapse occurs, the network maps multiple examples of a particular class (such as images of cats) to a single template of that class. Ideally, the templates for each class should be as far apart from each other as possible, allowing the network to accurately classify new examples.
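The geometry of neural collapse described above, with within-class variability vanishing while class templates stay separated, can be illustrated with synthetic features. Everything here is toy data constructed to show the collapsed regime, not output from a trained network.

```python
import numpy as np

# Synthetic features at (near-)collapse: every example of a class sits
# almost exactly on its class template, and templates are far apart.
rng = np.random.default_rng(0)
templates = np.array([[10.0, 0.0], [0.0, 10.0], [-10.0, 0.0]])  # one per class
features = np.concatenate(
    [t + 0.01 * rng.standard_normal((50, 2)) for t in templates]
)
labels = np.repeat([0, 1, 2], 50)

# Within-class scatter: average distance from each feature to its class mean.
means = np.array([features[labels == c].mean(axis=0) for c in range(3)])
within = np.mean(np.linalg.norm(features - means[labels], axis=1))
between = np.linalg.norm(means[0] - means[1])  # separation of two templates

print(within < 0.1, between > 10.0)  # True True: collapsed yet well separated
```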
An MIT group based at the MIT Center for Brains, Minds and Machines studied the conditions under which networks can achieve neural collapse. Deep networks that have the three ingredients of stochastic gradient descent (SGD), weight decay regularization (WD), and weight normalization (WN) will display neural collapse if they are trained to fit their training data. The MIT group has taken a theoretical approach — as compared to the empirical approach of the earlier study — proving that neural collapse emerges from the minimization of the square loss using SGD, WD, and WN.
Co-author and MIT McGovern Institute postdoc Akshay Rangamani states, “Our analysis shows that neural collapse emerges from the minimization of the square loss with highly expressive deep neural networks. It also highlights the key roles played by weight decay regularization and stochastic gradient descent in driving solutions towards neural collapse.”
Weight decay is a regularization technique that prevents the network from over-fitting the training data by reducing the magnitude of the weights. Weight normalization scales the weight matrices of a network so that they have a similar scale. Low rank refers to a property of a matrix where it has a small number of non-zero singular values. Generalization bounds offer guarantees about the ability of a network to accurately predict new examples that it has not seen during training.
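These definitions can be unpacked in a few lines. The matrices and hyperparameters below are arbitrary toy values chosen only to make each definition concrete.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))  # a toy weight matrix

# Weight decay: each gradient step also shrinks the weights toward zero.
lr, wd = 0.1, 0.01
grad = np.zeros_like(W)                # pretend the loss gradient is zero
W_decayed = W - lr * (grad + wd * W)   # the weights still shrink slightly

# Weight normalization: rescale the matrix to a fixed (unit) norm.
W_normed = W / np.linalg.norm(W)

# Low rank: few non-zero singular values. A rank-1 example:
low_rank = np.outer([1.0, 2.0, 3.0, 4.0], [1.0, 0.0, 1.0, 0.0])
n_nonzero = np.sum(np.linalg.svd(low_rank, compute_uv=False) > 1e-9)

print(n_nonzero)  # 1
```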
The authors found that the same theoretical observation that predicts a low-rank bias also predicts the existence of an intrinsic SGD noise in the weight matrices and in the output of the network. This noise is not generated by the randomness of the SGD algorithm but by an interesting dynamic trade-off between rank minimization and fitting of the data, which provides an intrinsic source of noise similar to what happens in dynamical systems in the chaotic regime. Such a random-like search may be beneficial for generalization because it may prevent over-fitting.
“Interestingly, this result validates the classical theory of generalization showing that traditional bounds are meaningful. It also provides a theoretical explanation for the superior performance in many tasks of sparse networks, such as CNNs, with respect to dense networks,” comments co-author and MIT McGovern Institute postdoc Tomer Galanti. In fact, the authors prove new norm-based generalization bounds for CNNs with localized kernels, that is, networks with sparse connectivity in their weight matrices.
In this case, generalization can be orders of magnitude better than for densely connected networks, a finding that runs counter to a number of recent papers expressing doubts about past approaches to generalization. Thus far, the fact that CNNs, not dense networks, represent the success story of deep learning has been almost completely ignored by machine learning theory; the theory presented here suggests that this is an important part of why deep networks work as well as they do.
“This study provides one of the first theoretical analyses covering optimization, generalization, and approximation in deep networks and offers new insights into the properties that emerge during training,” says co-author Tomaso Poggio, the Eugene McDermott Professor at the Department of Brain and Cognitive Sciences at MIT and co-director of the Center for Brains, Minds and Machines. “Our results have the potential to advance our understanding of why deep learning works as well as it does.”
This year’s holiday video (shown above) was inspired by Ev Fedorenko’s July 2022 Nature Neuroscience paper, which found similar patterns of brain activation and language selectivity across speakers of 45 different languages.
Universal language network
Over several decades, neuroscientists have created a well-defined map of the brain’s “language network,” or the regions of the brain that are specialized for processing language. Found primarily in the left hemisphere, this network includes regions within Broca’s area, as well as in other parts of the frontal and temporal lobes. Although roughly 7,000 languages are currently spoken and signed across the globe, the vast majority of those mapping studies have been done in English speakers as they listened to or read English texts.
To truly understand the cognitive and neural mechanisms that allow us to learn and process such diverse languages, Fedorenko and her team scanned the brains of speakers of 45 different languages while they listened to Alice in Wonderland in their native language. The results show that the speakers’ language networks appear to be essentially the same as those of native English speakers — which suggests that the location and key properties of the language network appear to be universal.
The many languages of McGovern
English may be the primary language used by McGovern researchers, but more than 35 other languages are spoken by scientists and engineers at the McGovern Institute. Our holiday video features 30 of these researchers saying Happy New Year in their native (or learned) language. Below is the complete list of languages included in our video. Expand each accordion to learn more about the speaker of that particular language and the meaning behind their new year’s greeting.
American Sign Language
Kian Caplan (Feng lab)
Other languages spoken: English
American Sign Language (ASL) serves as the predominant sign language of Deaf communities in the United States and most of English-speaking Canada. Imaging studies have shown that ASL activates the brain’s language network in the same way that spoken languages do.
“In high school, I had a teacher who was fluent in ASL and exposed me to the beautiful language,” says Caplan. “She inspired me to take three semesters of ASL in college, taught by a professor who was hard of hearing. It wasn’t until then that I began to appreciate Deaf history and culture, and had the opportunity to communicate with members of this wonderful community.”
Caplan goes on to explain that “ASL is not signed English; it is a different language with its own set of grammar rules. Across the US, there are accents of sign language just like in spoken languages, such as variations in the signs used. Each country also has its own sign language; it is not universal (although there is technically a ‘Universal Sign Language’).”
Arabic, Sabbagh’s first language, is a Semitic language spoken across a large area including North Africa, most of the Arabian Peninsula, and other parts of the Middle East.
“Since this McGovern project is on language, I’d like to share a verse from one of my favorite Arabic poets, Mahmoud Darwish,” says Sabbagh. “He wrote on his relationship to language, and addressing it directly he said,
يا لغتي ساعديني على الاقتباس لأحتضن الكون.
يا لغتي! هل أكون أنا ما تكونين؟ أم أنت – يا لغتي – ما أكون؟
‘O my language, empower me to learn and so that I may embrace the universe.
O my language, will I become what you’ll become, or are you what becomes of me?'”
Kohitij “Ko” Kar (DiCarlo lab)
Other languages spoken: English, Hindi
Bengali, or Bangla, is an Indo-Aryan language native to the Bengal region of South Asia. It is the official, national, and most widely spoken language of Bangladesh and the second most widely spoken of the 22 scheduled languages of India.
“Like many other regional languages (and nations) around the world, Bengalis also have their own calendar. We are still in 1429 🙂 So the greeting I spoke is used a lot during our new year day, which is usually on April 15 (India), April 14 (Bangladesh),” says Kar.
Karen Pang (Anikeeva lab)
Nationality: Chinese (Hong Kong)
Other languages spoken: English, Mandarin
Like other Chinese dialects, Cantonese uses different tones to distinguish words. “Cantonese has nine tones,” says Pang, who was born and raised in Hong Kong.
Greta Tuckute (Fedorenko lab)
Nationality: Lithuanian and Danish
Other languages spoken: English, French, Lithuanian
“Right before midnight, most Danes will climb up on chairs, tables, or pretty much any elevated surface in order to jump down from it when the clock strikes twelve,” says Tuckute, who was born in Lithuania and moved to Denmark at age two. “It is considered good luck to ‘jump’ into the new year.”
Jessica Chomik-Morales (Kanwisher lab)
Other languages spoken: English, Spanish
Dothraki is the constructed language (conlang) from the fantasy novel series “A Song of Ice and Fire” and its television adaptation “Game of Thrones.” It is spoken by the Dothraki, a nomadic people in the series’s fictional world. The Fedorenko lab has found that conlangs activate the language network the same way natural languages do.
“I have loved ‘Game of Thrones’ since reading the series in the sixth grade,” says Chomik-Morales. “The Dothraki are these incredible, ferocious warriors that fight on horseback in this fictional world and I can imagine they’d know how to throw a good celebration for New Year’s.”
Antoine De Comité (Seethapathi lab)
Other languages spoken: Dutch, English
“The French language has a lot of funny features,” says De Comité. “Almost all the time, we don’t pronounce the letter ‘h’ when it’s in a word. Also, there is no genuine word with a ‘w’ in French, they’re all borrowed from other languages.”
Marie Manthey (Anikeeva lab)
Other languages spoken: English, French (beginner), Spanish (beginner)
“In Germany, depending on where you are living and what dialect you are speaking, we have slightly different sayings for Happy New Year,” explains Manthey. “My family is from around Hamburg and north-west Lower Saxony, where ‘Prosit Neujahr’ is more typical. One thing that is a tradition in my family and in many German families is to watch the show ‘Dinner for One’ on New Year’s Eve. It’s a 15-minute British comedy sketch from the 1960s about a woman named Miss Sophie who celebrates her 90th birthday by inviting her four closest friends to dinner. However, Miss Sophie has outlived all of these friends, so her butler James is forced to impersonate the guests throughout the four course meal. ‘Dinner for One’ is not really well known in Great Britain, but it airs on New Year’s Eve in German-speaking countries and Scandinavia.”
Konstantinos Kagias (Boyden lab)
Other languages spoken: English, French
Greek, the official language of Greece and Cyprus, has the longest documented history of any Indo-European language, spanning thousands of years of written records.
“Each of the main words in the Greek New Year’s greeting ‘Καλη Χρονια Σε Όλους’ is the root of a few English words,” says Kagias, who has spoken the language his whole life. “Examples include calisthenics, California, chronology, chronic, and holistic.”
Tamar Regev (Fedorenko lab)
Other languages spoken: English, Spanish
“The new Jewish year is actually around September and is called ‘Rosh HaShana,’ or head of the year,” explains Regev. “This is when we say Shana Tova, eat pomegranates and apple with honey (to make the new year sweet).”
Sugandha “Su” Sharma (Fiete/Tenenbaum labs)
Nationality: Indian, Canadian
Other languages spoken: English, Punjabi
Hindi is the preferred official language of India and is spoken as a first language by nearly 425 million people and as a second language by some 120 million more. Sharma was born and raised in India (specifically Amritsar, Punjab), and her family spoke both Hindi and Punjabi. She also learned both languages in school while growing up.
Maedbh King (Ghosh lab)
Other languages spoken: English, French (intermediate), German (beginner)
“Although Irish is an official language of Ireland, it is not spoken by a majority of people on a day-to-day basis,” explains King. “However, Irish is taught in schools from kindergarten through high school so most people have a basic understanding of the language. I attended Irish immersion schools through high school as did most of my immediate and extended family on my mom’s side. There are certain regions of the country, known as ‘Gaeltachts’, where Irish is the primary language of the people. If you visit these regions, it is common to hear the language spoken by all members of the community, and road signs are generally only in Irish, which can be confusing for tourists!”
“The phrase I spoke in the video, ‘Go mbeirimid beo ag an am seo arís,’ directly translates to ‘May we live to see this time again next year.’ It would typically be written on a New Year’s greeting card, or more commonly spoken as a New Year’s toast after one (or two or three) beers.”
“Italian is a beautiful language with its rolled r’s, round vowels, and melodic rhythm,” says Naim. “We celebrate the New Year with a big dinner (we constantly think about food) and we light fireworks at midnight and drink Prosecco.”
Atsushi Takahashi (Martinos Imaging Center)
Nationality: Canadian, American
Other languages spoken: English, French, Danish (beginner), Mandarin (beginner)
The Japanese language is spoken natively by about 128 million people, primarily by Japanese people and primarily in Japan, the only country where it is the national language. Takahashi, who was born in Ireland, learned Japanese from his father.
Saima Malik Moraleda (Fedorenko lab)
Other languages spoken: Arabic (beginner), Catalan, English, French, Hindi/Urdu, Spanish
Kashmiri is spoken in Kashmir, a region split between India and Pakistan in the northwestern Indian subcontinent.
“While Kashmiri is spoken by approximately 8 million people, only a small percentage knows how to read and write it,” says Moraleda, whose father spoke Kashmiri in her childhood home. “I was lucky that Harvard started offering a Kashmiri course last year, so I’ve finally started to learn to read a language I have known since I was born,” she adds. “There are three different scripts for it, none of which are standardized. I ended up picking the Romanized script for the greeting since that’s what the youth use when texting.”
Maya Taliaferro (Fedorenko lab)
Other languages spoken: English, Japanese
Klingon is the constructed language (conlang) spoken by the Klingons in the Star Trek universe. As a conlang, Klingon has no real regional specificity and therefore has speakers from all over the world. Where there are fans of Star Trek there can be Klingon speakers. Fictionally, however, it originates on the planet Qo’noS where the Klingon people are from. The Fedorenko lab has found that conlangs activate the language network the same way natural languages do.
“While Klingon is a relatively niche language with an estimated 50-60 fluent speakers, anyone can learn it by taking a course on Duolingo/joining the Klingon Language Institute,” says Taliaferro, whose father is a “huge fan” of Star Trek.
Rahul Brito (Ghosh lab)
Other languages spoken: English, French (beginner)
Konkani is primarily spoken in Konkan, a region on the west coast of India that includes parts of the modern states of Goa, Karnataka, Maharashtra, and Kerala. Although Brito’s extended family speaks Konkani, he does not speak it himself.
“To learn how to say ‘happy new year,’ I had to ask my mom (who did not remember), my aunt in India (who did not know for sure), and then her friend (who sent me a voice recording),” says Brito.
Jaeyoung Yoon (Harnett lab)
Other languages spoken: English, Italian (beginner)
Korean is the native language for about 80 million people, mostly of Korean descent. Yoon was born in South Korea and has spoken the language his entire life.
Yiting “Veronica” Su (Desimone lab)
Other languages spoken: English
Chinese New Year, also called Lunar New Year, is an annual 15-day festival in China and Chinese communities around the world that begins with the new moon that occurs sometime between January 21 and February 20 according to Western calendars. Festivities last until the following full moon.
“In my culture, we celebrate the new year by cleaning and decorating the house with red things, offering sacrifices to ancestors, exchanging red envelopes and other gifts, watching lion and dragon dances, and of course, eating food at family reunion dinners!” says Su.
Aalok Sathe (Fedorenko lab)
Other languages spoken: English, Hindi, Sanskrit
Marathi is an Indo-Aryan language predominantly spoken in the central-west and coastal regions of India.
“We typically celebrate the new year in March/April by raising a gudhi in a window or a balcony of the home and by drawing colorful rangoli on the floor outside of entrances to homes and other establishments like schools and offices,” says Sathe. “The gudhi is a kind of flag made from a long wooden stick with a festive cloth, mango and neem leaves, marigold flowers, sugar crystals, and an upside-down silver/copper vessel on top to hold everything in place. This day also symbolizes the day Rama returned from a 14-year exile after defeating Ravana. Rama was a king whose dynasty and story (Ramayana) finds mention in mythologies of many cultures of South and East Asia including India, Nepal, Tibet, Thailand, Indonesia, the Philippines, and more. Some also consider this the day Brahma created the universe.”
Vinayak “Vin” Agarwal (McDermott lab)
Other languages spoken: English, Hindi
Marwari is spoken in the Indian state of Rajasthan, where Agarwal grew up. Rajasthan is the largest Indian state by area and is located on India’s northwestern side, where it comprises most of the Thar Desert, or Great Indian Desert.
Sujaya Neupane (Jazayeri lab)
Nationality: Nepalese, Canadian
Other languages spoken: English, Hindi
Nepali is an Indo-Aryan language native to the Himalayan region of South Asia. It is the official, and most widely spoken, language of Nepal, where Neupane was born and raised.
Yasaman Bagherzadeh (Desimone lab)
Other languages spoken: English
The Persian language, also known as Farsi, is spoken in Iran, Afghanistan, and Tajikistan. In Iran, 68% of the population speaks Persian as a first language.
“The new year and the first day of the Iranian calendar is different from most parts of the world,” explains Bagherzadeh. “The first day of the Iranian calendar falls on the March equinox, the first day of spring, around 21 March. We call it ‘Nowruz’ which means new day. The day of Nowruz has its origins in the Iranian religion of Zoroastrianism and is thus rooted in the traditions of the Iranian people for over 3,000 years. We celebrate Nowruz by cleaning our house (we call it home shaking), buying new clothes for the new year, visiting friends and family, and food preparation. Instead of a Christmas tree, we have 7-sin. Typically, before the arrival of Nowruz, family members gather around the Haft-sin table and await the exact moment of the March equinox to celebrate the New Year. The number 7 and the letter S are related to the seven Ameshasepantas as mentioned in the Zend-Avesta. They relate to the four elements of Fire, Earth, Air, Water, and the three life forms of Humans, Animals and Plants.”
Julia Dziubek (Harnett lab)
Other languages spoken: English, German
“In Poland, we believe that the way you spend the last twelve days of your year will represent how you will spend the twelve months of the new year,” explains Dziubek. “For people who do not spend their last 12 days well, we have another belief,” she adds. “The way you spend your New Year’s Eve will determine how you will spend your new year.”
Willian De Faria (Kanwisher lab)
Other languages spoken: English, Spanish
Portuguese is a western Romance language originating in the Iberian Peninsula of Europe. Approximately 274 million people speak Portuguese, and it is usually listed as the sixth-most-spoken language in the world. Today, Portuguese is spoken in the Iberian Peninsula, South America, and parts of Africa. The countries where Portuguese is the primary native language are Portugal, Brazil, Angola, and São Tomé e Príncipe, and it is also the primary administrative language of countries such as Mozambique and Cabo Verde.
“Fun fact,” says De Faria, who was born in Brazil and lived there until he was six. “It is easier for Portuguese native speakers to learn Spanish than the other way around. Also, Portuguese is a well represented language in New England! Aside from immigrants from Portugal, lots of lusophone communities have called Massachusetts, Rhode Island, and Connecticut home. Many of these communities have Brazilian and Cabo Verdean origins. To note, Cabo Verdeans speak a beautiful Portuguese-based creole.”
Elvira Kinzina (AbuGoot lab)
Other languages spoken: Arabic (beginner), English
Russian is an East Slavic language mainly spoken across Russia with over 258 million total speakers worldwide.
Saint Lucian Creole French (Kwéyòl)
Quilee Simeon (Yang lab)
Nationality: Saint Lucian
Other languages spoken: English
Saint Lucian Creole French (Kwéyòl), known locally as Patwa, is the French-based Creole widely spoken in Saint Lucia, where Simeon was born. It is the vernacular language of the country and is spoken alongside the official language of English. Though Kwéyòl is not an official language, the government and media houses present information in Kwéyòl, alongside English.
Raul Mojica Soto-Albors (Harnett lab)
Nationality: Puerto Rican, American
Other languages spoken: English
“In Puerto Rico, most people speak Spanglish – a combination of Spanish and English,” explains Soto-Albors, who was born in Puerto Rico. “We constantly switch words up in a single sentence when speaking, with a seemingly arbitrary yet consistent set of rules.”
Regarding his new year’s greeting, Soto-Albors says, “it is common (more as a courtesy for acquaintances, service workers, and anyone you won’t see until after the new year) for people to wish each other ‘Feliz navidad y próspero año nuevo,’ which roughly translates to ‘Merry Christmas and Happy New Year,’ or, literally, ‘Merry Christmas and have a prosperous new year.’”
Karthik Srinivasan (Desimone lab)
Other languages spoken: English, Hindi, and to varying degrees of comprehension and spoken ability Malayalam, Telugu, and Kannada (the other three major languages of the Dravidian language family)
Tamil is a Dravidian language natively spoken by the Tamil people of South Asia. Roughly 70 million people are native Tamil speakers. Tamil is an official language of the Indian state of Tamil Nadu, the sovereign nations of Sri Lanka and Singapore, and the Indian territory of Puducherry. According to Srinivasan, “Tamil is one of the classical languages of India with literature dating back to antiquity and before (at least ~1500 BCE, if not earlier). It is possibly the oldest continuously spoken civilizational language and culture in the world with written records.”
Syed Suleman Abbas Zaidi
Other languages spoken: English
Urdu is an Indo-Aryan language spoken chiefly in South Asia. It is the national language of Pakistan, where it is also an official language alongside English. Similar to celebrations in the United States, Pakistanis ring in the new year with lots of fireworks, says Zaidi.
For a few days in November, the McGovern Institute hummed with invented languages. Strangers greeted one another in Esperanto; trivia games were played in High Valyrian; Klingon and Na’vi were heard inside MRI scanners. Creators and users of these constructed languages (conlangs) had gathered at MIT in the name of neuroscience. McGovern Institute investigator Evelina Fedorenko and her team wanted to know what happened in their brains when they heard and understood these “foreign” tongues.
The constructed languages spoken by attendees had all been created for specific purposes. Most, like the Na’vi language spoken in the movie Avatar, had given identity and voice to the inhabitants of fictional worlds, while Esperanto was created to reduce barriers to international communication. But despite their distinct origins, a familiar pattern of activity emerged when researchers scanned speakers’ brains. The brain, they found, processes constructed languages with the same network of areas it uses for languages that evolved naturally over centuries of use.
The meaning of language
“There’s all these things that people call language,” Fedorenko says. “Music is a kind of language and math is a kind of language.” But the brain processes these metaphorical languages differently than it does the languages humans use to communicate broadly about the world. To neuroscientists like Fedorenko, they can’t legitimately be considered languages at all. In contrast, she says, “these constructed languages seem really quite like natural languages.”
The “Brains on Conlangs” event that Fedorenko’s team hosted was part of its ongoing effort to understand the way language is generated and understood by the brain. Her lab and others have identified specific brain regions involved in linguistic processing, but it’s not yet clear how universal the language network is. Most studies of language cognition have focused on languages widely spoken in well-resourced parts of the world—primarily English, German, and Dutch. There are thousands of languages—spoken or signed—that have not been included.
Fedorenko and her team are deliberately taking a broader approach. “If we’re making claims about language as a whole, it’s kind of weird to make it based on a handful of languages,” she says. “So we’re trying to create tools and collect some data on as many languages as possible.”
So far, they have found that the language networks used by native speakers of dozens of different languages do share key architectural similarities. And by including a more diverse set of languages in their research, Fedorenko and her team can begin to explore how the brain makes sense of linguistic features that are not part of English or other well studied languages. The Brains on Conlangs event was a chance to expand their studies even further.
Nearly 50 speakers of Esperanto, Klingon, High Valyrian, Dothraki, and Na’vi attended Brains on Conlangs, drawn by the opportunity to connect with other speakers, hear from language creators, and contribute to the science. Graduate student Saima Malik-Moraleda and postbac research assistant Maya Taliaferro, along with other members of both the Fedorenko lab and brain and cognitive sciences professor Ted Gibson’s lab, and with help from Steve Shannon, Operations Manager of the Martinos Imaging Center, worked tirelessly to collect data from all participants. Two MRI scanners ran nearly continuously as speakers listened to passages in their chosen languages and researchers captured images of the brain’s response. To enable the research team to find the language-specific network in each person’s brain, participants also performed other tasks inside the scanner, including a memory task and listening to muffled audio in which the constructed languages were spoken, but unintelligible. They performed language tasks in English, as well.
Prior to the study, Fedorenko says, she had suspected constructed languages would activate the brain’s natural language-processing network, but she couldn’t be sure. Another possibility was that languages like Klingon and Esperanto would be handled instead by a problem-solving network known to be used when people work with some other so-called “languages,” like mathematics or computer programming. But once the data was in, the answer was clear. The five constructed languages included in the study all activated the brain’s language network.
That makes sense, Fedorenko says, because like natural languages, constructed languages enable people to communicate by associating words or signs with objects and ideas. Any language is essentially a way of mapping forms to meanings, she says. “You can construe it as a set of memories of how a particular sequence of sounds corresponds to some meaning. You’re learning meanings of words and constructions, and how to put them together to get more complex meanings. And it seems like the brain’s language system is very well suited for that set of computations.”
Many people barely consider how their bodies move — at least not until movement becomes more difficult due to injury or disease. But the McGovern scientists who are working to understand human movement and restore it after it has been lost know that the way we move is an engineering marvel.
Muscles, bones, brain, and nerves work together to navigate and interact with an ever-changing environment, making constant but often imperceptible adjustments to carry out our goals. It’s an efficient and highly adaptable system, and the way it’s put together is not at all intuitive, says Hugh Herr, a new associate investigator at the Institute.
That’s why Herr, who also co-directs MIT’s new K. Lisa Yang Center for Bionics, looks to biology to guide the development of artificial limbs that aim to give people the same agency, control, and comfort of natural limbs. McGovern Associate Investigator Nidhi Seethapathi, who like Herr joined the Institute in September, is also interested in understanding human movement in all its complexity. She is coming at the problem from a different direction, using computational modeling to predict how and why we move the way we do.
Moving through change
The computational models that Seethapathi builds in her lab aim to predict how humans will move under different conditions. If a person is placed in an unfamiliar environment and asked to navigate a course under time pressure, what path will they take? How will they move their limbs, and what forces will they exert? How will their movements change as they become more comfortable on the terrain?
Seethapathi uses the principles of robotics to build models that answer these questions, then tests them by placing real people in the same scenarios and monitoring their movements. So far, that has mostly meant inviting study subjects to her lab, but as she expands her models to predict more complex movements, she will begin monitoring people’s activity in the real world, over longer time periods than laboratory experiments typically allow.
Seethapathi’s hope is that her findings will inform the way doctors, therapists, and engineers help patients regain control over their movements after an injury or stroke, or learn to live with movement disorders like Parkinson’s disease. To make a real difference, she stresses, it’s important to bring studies of human movement out of the lab, where subjects are often limited to simple tasks like walking on a treadmill, into more natural settings. “When we’re talking about doing physical therapy, neuromotor rehabilitation, robotic exoskeletons — any way of helping people move better — we want to do it in the real world, for everyday, complex tasks,” she says.
Seethapathi’s work is already revealing how the brain directs movement in the face of competing priorities. For example, she has found that when people are given a time constraint for traveling a particular distance, they walk faster than their usual, comfortable pace — so much so that they often expend more energy than necessary and arrive at their destination a bit early. Her models suggest that people pick up their pace more than they need to because humans’ internal estimations of time are imprecise.
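The logic of that finding can be sketched with a toy model. Everything below is an illustrative assumption rather than the lab's published model: the quadratic cost-of-transport function, the constants, and the "pessimistic clock" mechanism (padding the time estimate because internal timing is imprecise) are all invented for the sketch.

```python
import math

# Assumed cost coefficients and distance (arbitrary units, illustrative only).
A, B = 2.0, 1.5
D = 100.0  # distance to cover

def energy(v, d=D):
    """Total energy to walk distance d at constant speed v, assuming
    metabolic power of the form A + B*v**2 (a common toy model)."""
    return (A + B * v**2) / v * d

# Setting d/dv[(A + B*v**2)/v] = 0 gives the energy-optimal speed.
v_opt = math.sqrt(A / B)

def speed_under_deadline(t_deadline, clock_noise=0.15):
    """Pick the slowest speed that still meets the deadline, padded by a
    pessimistic internal time estimate (the assumed source of hurrying)."""
    v_needed = D / (t_deadline * (1 - clock_noise))
    return max(v_opt, v_needed)

v = speed_under_deadline(t_deadline=70.0)
print(f"energy-optimal speed: {v_opt:.2f}, chosen speed: {v:.2f}")
print(f"extra energy spent: {energy(v) - energy(v_opt):.1f} (arbitrary units)")
print(f"arrival time: {D / v:.1f} vs deadline 70.0")
```

With these made-up numbers the walker picks a speed above the energy optimum, spends extra energy, and arrives early, which is the qualitative pattern described above.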
Her team is also learning how movements change as a person becomes familiar with an environment or task. She says people find an efficient way to move through a lot of practice. “If you’re walking in a straight line for a very long time, then you seem to pick the movement that is optimal for that long-distance walk,” she explains. But in the real world, things are always changing — both in the body and in the environment. So Seethapathi models how people behave when they must move in a new way or navigate a new environment. “In these kinds of conditions, people eventually wind up on an energy-optimal solution,” she says. “But initially, they pick something that prevents them from falling down.”
To capture the complexity of human movement, Seethapathi and her team are devising new tools that will let them monitor people’s movements outside the lab. They are also drawing on data from other fields, from architecture to physical therapy, and even from studies of other animals. “If I have general principles, they should be able to tell me how modifications in the body or in how the brain is connected to the body would lead to different movements,” she says. “I’m really excited about generalizing these principles across timescales and species.”
Building new bodies
In Herr’s lab, a deepening understanding of human movement is helping drive the development of increasingly sophisticated artificial limbs and other wearable robots. The team designs devices that interface directly with a user’s nervous system, so they are not only guided by the brain’s motor control systems, but also send information back to the brain.
Herr, a double amputee with two artificial legs of his own, says prosthetic devices are getting better at replicating natural movements, guided by signals from the brain. Mimicking the design and neural signals found in biology can even give those devices much of the extraordinary adaptability of natural human movement. As an example, Herr notes that his legs effortlessly navigate varied terrain. “There’s adaptive, stabilizing features, and the machine doesn’t have to detect every pothole and pebble and banana peel on the ground, because the morphology and the nervous system control is so inherently adaptive,” he says.
But, he notes, the field of bionics is in its infancy, and there’s lots of room for improvement. “It’s only a matter of time before a robotic knee, for example, can be as good as the biological knee or better,” he says. “But the problem is the human attached to that knee won’t feel it’s their knee until they can feel it, and until their central nervous system has complete agency over that knee,” he says. “So if you want to actually build new bodies and not just more and more powerful tools for humans, you have to link to the brain bidirectionally.”
Herr’s team has found that surgically restoring natural connections between pairs of muscles that normally work in opposition to move a limb, such as the arm’s biceps and triceps, gives the central nervous system signals about how that limb is moving, even when a natural limb is gone. The idea takes a cue from the work of McGovern Emeritus Investigator Emilio Bizzi, who found that the coordinated activation of groups of muscles by the nervous system, called muscle synergies, is important for motor control.
“When a person thinks and moves their phantom limb, those muscle pairings move dynamically, so they feel, in a natural way, the limb moving — even though the limb is not there,” Herr explains. He adds that when those proprioceptive signals communicate instead how an artificial limb is moving, a person experiences “great agency and ownership” of that limb. Now, his group is working to develop sensors that detect and relay information usually processed by sensory neurons in the skin, so prosthetic devices can also perceive pressure and touch.
At the same time, they’re working to improve the mechanical interface between wearable robots and the body to optimize comfort and fit — whether that’s by using detailed anatomical imaging to guide the design of an individual’s device or by engineering devices that integrate directly with a person’s skeleton. There’s no “average” human, Herr says, and effective technologies must meet individual needs, not just for fit, but also for function. He also stresses the importance of planning for cost-effective mass production, because the need for these technologies is so great.
“The amount of human suffering caused by the lack of technology to address disability is really beyond comprehension,” he says. He expects tremendous progress in the growing field of bionics in the coming decades, but he’s impatient. “I think in 50 years, when scientists look back to this era, it’ll be laughable,” he says. “I’m always anxiously wanting to be in the future.”
Bipolar disorder often begins in childhood or adolescence, triggering dramatic mood shifts and intense emotions that cause problems at home and school. But the condition is often overlooked or misdiagnosed until patients are older. New research suggests that machine learning, a type of artificial intelligence, could help by identifying children who are at risk of bipolar disorder so doctors are better prepared to recognize the condition if it develops.
On October 13, 2022, researchers led by McGovern Institute investigator John Gabrieli and collaborators at Massachusetts General Hospital reported in the Journal of Psychiatric Research that when presented with clinical data on nearly 500 children and teenagers, a machine learning model was able to identify about 75 percent of those who were later diagnosed with bipolar disorder. The approach performs better than any other method of predicting bipolar disorder, and could be used to develop a simple risk calculator for health care providers.
Gabrieli says such a tool would be particularly valuable because bipolar disorder is less common in children than conditions like major depression, with which it shares symptoms, and attention-deficit/hyperactivity disorder (ADHD), with which it often co-occurs. “Humans are not well tuned to watch out for rare events,” he says. “If you have a decent measure, it’s so much easier for a machine to identify than humans. And in this particular case, [the machine learning prediction] was surprisingly robust.”
Detecting bipolar disorder
Mai Uchida, Director of Massachusetts General Hospital’s Child Depression Program, says that nearly two percent of youth worldwide are estimated to have bipolar disorder, but diagnosing pediatric bipolar disorder can be challenging. A certain amount of emotional turmoil is to be expected in children and teenagers, and even when moods become seriously disruptive, children with bipolar disorder are often initially diagnosed with major depression or ADHD. That’s a problem, because the medications used to treat those conditions often worsen the symptoms of bipolar disorder. Tailoring treatment to a diagnosis of bipolar disorder, in contrast, can lead to significant improvements for patients and their families. “When we can give them a little bit of ease and give them a little bit of control over themselves, it really goes a long way,” Uchida says.
In fact, a poor response to antidepressants or ADHD medications can help point a psychiatrist toward a diagnosis of bipolar disorder. So too can a child’s family history, in addition to their own behavior and psychiatric history. But, Uchida says, “it’s kind of up to the individual clinician to pick up on these things.”
Uchida and Gabrieli wondered whether machine learning, which can find patterns in large, complex datasets, could focus in on the most relevant features to identify individuals with bipolar disorder. To find out, they turned to data from a study that began in the 1990s. The study, headed by Joseph Biederman, Chief of the Clinical and Research Programs in Pediatric Psychopharmacology and Adult ADHD at Massachusetts General Hospital, had collected extensive psychiatric assessments of hundreds of children with and without ADHD, then followed those individuals for ten years.
To explore whether machine learning could find predictors of bipolar disorder within that data, Gabrieli, Uchida, and colleagues focused on 492 children and teenagers without ADHD, who were recruited to the study as controls. Over the ten years of the study, 45 of those individuals developed bipolar disorder.
Within the data collected at the study’s outset, the machine learning model was able to find patterns that associated with a later diagnosis of bipolar disorder. A few behavioral measures turned out to be particularly relevant to the model’s predictions: children and teens with combined problems with attention, aggression, and anxiety were most likely to later be diagnosed with bipolar disorder. These indicators were all picked up by a standard assessment tool called the Child Behavior Checklist.
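To make the mechanics concrete, here is an entirely illustrative sketch of this kind of model: synthetic data and a from-scratch logistic regression, not the study's dataset, features, or published classifier. The three scores stand in for checklist-style measures of attention, aggression, and anxiety, and the label-generating rule is an invented assumption.

```python
import random
from math import exp

random.seed(0)

def make_child():
    """Synthetic subject: three 0-10 symptom scores, plus a label generated
    so that jointly elevated scores raise risk (an assumed relationship)."""
    scores = [random.uniform(0, 10) for _ in range(3)]  # attention, aggression, anxiety
    risk = 0.25 * scores[0] + 0.25 * scores[1] + 0.2 * scores[2] - 4.5
    return scores, 1 if risk + random.gauss(0, 1) > 0 else 0

data = [make_child() for _ in range(500)]

def sigmoid(z):
    return 1.0 / (1.0 + exp(-max(min(z, 30.0), -30.0)))

# Train a logistic regression with plain per-sample gradient descent.
w, b = [0.0, 0.0, 0.0], 0.0
for _ in range(300):
    for x, y in data:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        w = [wi - 0.01 * (p - y) * xi for wi, xi in zip(w, x)]
        b -= 0.01 * (p - y)

# Sensitivity: fraction of true cases flagged at a 0.5 threshold.
cases = [x for x, y in data if y == 1]
hits = sum(sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > 0.5 for x in cases)
print(f"flagged {hits} of {len(cases)} children who developed the condition")
```

A real risk calculator would of course be built from validated clinical assessments and evaluated prospectively; the sketch only shows how a model can turn a profile of scores into a risk flag.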
Uchida and Gabrieli say the machine learning model could be integrated into the medical record system to help pediatricians and child psychiatrists catch early warning signs of bipolar disorder. “The information that’s collected could alert a clinician to the possibility of a bipolar disorder developing,” Uchida says. “Then at least they’re aware of the risk, and they may be able to maybe pick up on some of the deterioration when it’s happening and think about either referring them or treating it themselves.”
by Asaf Shtull-Trauring | Brain and Cognitive Sciences
The Society for Neuroscience (SfN) has awarded the Swartz Prize for Theoretical and Computational Neuroscience to Ila Fiete, professor in the Department of Brain and Cognitive Sciences, associate member of the McGovern Institute for Brain Research, and director of the K. Lisa Yang Integrative Computational Neuroscience Center. The SfN, the world’s largest neuroscience organization, announced that Fiete received the prize for her breakthrough research modeling hippocampal grid cells, a component of the navigational system of the mammalian brain.
“Fiete’s body of work has already significantly shaped the field of neuroscience and will continue to do so for the foreseeable future,” states the announcement from SfN.
“Fiete is considered one of the strongest theorists of her generation who has conducted highly influential work demonstrating that grid cell networks have attractor-like dynamics,” says Hollis Cline, a professor at the Scripps Research Institute of California and head of the Swartz Prize selection committee.
Grid cells are found in the cortex of all mammals. Their unique firing properties, creating a neural representation of our surroundings, allow us to navigate the world. Fiete and collaborators developed computational models showing how interactions between neurons can lead to the formation of periodic lattice-like firing patterns of grid cells and stabilize these patterns to create spatial memory. They showed that as we move around in space, these neural patterns can integrate velocity signals to provide a constantly updated estimate of our position, as well as detect and correct errors in the estimated position.
Fiete also proposed that multiple copies of these patterns at different spatial scales enable efficient and high-capacity representation. She and her colleagues then worked with multiple experimental groups to design tests and establish rare evidence that these pattern-forming mechanisms underlie memory dynamics in the brain.
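The capacity argument behind multi-scale periodic codes can be illustrated with a toy example. This sketch is not Fiete's actual model; it simply shows how representing a position only by its phase within each of a few small periods ("grid modules" with invented spacings) yields an unambiguous range far larger than any single period.

```python
# Toy illustration of a multi-scale periodic ("grid-like") code.
# Each module reports only the position's phase within its own period;
# combining modules of different scale disambiguates a much larger range.
from math import lcm

PERIODS = [31, 37, 41]  # hypothetical module spacings (pairwise coprime)

def encode(position):
    """Phase of an integer position within each module's period."""
    return tuple(position % p for p in PERIODS)

def decode(phases):
    """Brute-force lookup of the unique position matching all phases."""
    for x in range(lcm(*PERIODS)):
        if encode(x) == phases:
            return x
    return None

# Any single module is ambiguous beyond its own period (at most 41 units),
# but together the modules distinguish lcm(31, 37, 41) = 47,027 positions.
assert decode(encode(12345)) == 12345
print(lcm(*PERIODS))
```

The capacity grows multiplicatively with the number of modules while the neural cost grows only additively, which is the essence of the efficiency claim.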
“I’m truly honored to receive the Swartz Prize,” says Fiete. “This prize recognizes my group’s efforts to decipher the circuit-level mechanisms of cognitive functions involving navigation, integration, and memory. It also recognizes, in its focus, the bearing-of-fruit of dynamical circuit models from my group and others that explain how individually simple elements combine to generate the longer-lasting memory states and complex computations of the brain. I am proud to be able to represent, in some measure, the work of my incredible students, postdocs, collaborators, and intellectual mentors. I am indebted to them and grateful for the chance to work together.”
According to the SfN announcement, Fiete has contributed to the field in many other ways, including modeling “how entorhinal cortex could interact with the hippocampus to efficiently and robustly store large numbers of memories and developed a remarkable method to discern the structure of intrinsic dynamics in neuronal circuits.” This modeling led to the discovery of an internal compass that tracks the direction of one’s head, even in the absence of external sensory input.
“Recently, Fiete’s group has explored the emergence of modular organization, a line of work that elucidates how grid cell modularity and general cortical modules might self-organize from smooth genetic gradients,” states the SfN announcement. Fiete and her research group have shown that even if the biophysical properties underlying grid cells of different scale are mostly similar, continuous variations in these properties can result in discrete groupings of grid cells, each with a different function.
Fiete was recognized with the Swartz Prize, which includes a $30,000 award, during the SfN annual meeting in San Diego.
Other recent MIT winners of the Swartz Prize include Professor Emery Brown (2020) and Professor Tomaso Poggio (2014).