MIT response to Wall Street Journal opinion essay

Following is an open statement in response to “Is MIT’s Research Helping the Chinese Military?”, an opinion essay by Michelle Bethel posted by the Wall Street Journal on Dec. 10, 2021. The statement is issued jointly by Prof. Robert Desimone, director of the McGovern Institute for Brain Research at MIT; Prof. Nergis Mavalvala, dean of MIT’s School of Science; and Prof. Maria T. Zuber, vice president for research at MIT.

Ms. Bethel is absolutely right that research relationships with institutions in China require the most serious care and consideration. MIT brings a thorough and rigorous approach to these matters.

First let us be clear about the work of the MIT McGovern Institute for Brain Research. Of the dozens of research projects currently under way at the McGovern Institute, there is one active research collaboration with China. It involves better identifying and ultimately developing treatments for severe forms of autism or neurological disorders that often render individuals unable to speak and frequently require lifelong care. That project was thoroughly vetted and approved by the U.S. National Institutes of Health in 2019. MIT receives no funding from China for this research, and all findings will be published in peer-reviewed journals, meaning that they are open to medical researchers anywhere in the world. This is the collaboration with the Shenzhen Institute of Advanced Technology that Ms. Bethel referenced in vague terms.

This does not eliminate general concerns about how research may be conducted or used, however. That’s why MIT has strong processes for evaluating and managing the risks of research involving countries, including China, whose behavior affects U.S. national and economic security. Every proposed engagement that involves an organization or funding source from China, once it has been evaluated for compliance with U.S. law and regulation, is further reviewed by committees of senior administrators to consider risks related to national security, economic competitiveness, and civil and human rights. Projects have been variously turned down, modified, or approved under this process.

Ms. Bethel raises important points with respect to U.S.-China relations – but not with respect to the work of the McGovern Institute. We regret that Ms. Bethel felt it necessary to step away from the McGovern, but we respect her views and remain in conversation with her. We note that two other members of the McGovern family, including the McGovern Institute’s co-founder and another daughter, continue to proudly serve on the McGovern board. We are grateful to all three family members.

MIT Future Founders Initiative announces prize competition to promote female entrepreneurs in biotech

In a fitting sequel to its entrepreneurship “boot camp” educational lecture series last fall, the MIT Future Founders Initiative has announced the MIT Future Founders Prize Competition, supported by Northpond Ventures, and named the MIT faculty cohort that will participate in this year’s competition. The Future Founders Initiative was established in 2020 to promote female entrepreneurship in biotech.

Despite increasing representation at MIT, female science and engineering faculty found biotech startups at a disproportionately low rate compared with their male colleagues, according to research led by the initiative’s founders, MIT Professor Sangeeta Bhatia, MIT Professor and President Emerita Susan Hockfield, and MIT Amgen Professor of Biology Emerita Nancy Hopkins. In addition to highlighting systemic gender imbalances in the biotech pipeline, the initiative’s founders emphasize that the dearth of female biotech entrepreneurs represents lost opportunities for society as a whole — a bottleneck in the proliferation of publicly accessible medical and technological innovation.

“A very common myth is that representation of women in the pipeline is getting better with time … We can now look at the data … and simply say, ‘that’s not true’,” said Bhatia, who is the John and Dorothy Wilson Professor of Health Sciences and Technology and Electrical Engineering and Computer Science, and a member of MIT’s Koch Institute for Integrative Cancer Research and the Institute for Medical Engineering and Science, in an interview for the March/April 2021 MIT Faculty Newsletter. “We need new solutions. This isn’t just about waiting and being optimistic.”

Made possible by generous funding from Northpond Labs, the research- and development-focused affiliate of Northpond Ventures, and inspired by the success of other MIT prize incentive competitions such as the Climate Tech and Energy Prize, the MIT Future Founders Prize Competition will be structured as a learning cohort in which participants will be supported in commercializing their existing inventions with instruction in market assessments, fundraising, and business capitalization, as well as other programming. The program, which is being run as a partnership between the MIT School of Engineering and the Martin Trust Center for MIT Entrepreneurship, provides hands-on opportunities to learn from industry leaders about their experiences, ranging from licensing technology to creating early startup companies. Bhatia and Kit Hickey, an entrepreneur-in-residence at the Martin Trust Center and senior lecturer at the MIT Sloan School of Management, are co-directors of the program.

“The competition is an extraordinary effort to increase the number of female faculty who translate their research and ideas into real-world applications through entrepreneurship,” says Anantha Chandrakasan, dean of the MIT School of Engineering and Vannevar Bush Professor of Electrical Engineering and Computer Science. “Our hope is that this likewise serves as an opportunity for participants to gain exposure to, and experience with, the many ways in which they could achieve commercial impact through their research.”

At the end of the program, the cohort members will pitch their ideas to a selection committee composed of MIT faculty, biotech founders, and venture capitalists. The grand prize winner will receive $250,000 in discretionary funds, and two runners-up will each receive $100,000. The winners will be announced at a showcase event, at which the entire cohort will present their work. All participants will also receive a $10,000 stipend for participating in the competition.

“The biggest payoff is not identifying the winner of the competition,” says Bhatia. “Really, what we are doing is creating a cohort … and then, at the end, we want to create a lot of visibility for these women and make them ‘top of mind’ in the community.”

The Selection Committee members for the MIT Future Founders Prize Competition are:

  • Bill Aulet, professor of the practice in the MIT Sloan School of Management and managing director of the Martin Trust Center for MIT Entrepreneurship
  • Sangeeta Bhatia, the John and Dorothy Wilson Professor of Health Sciences and Technology and Electrical Engineering and Computer Science at MIT; a member of MIT’s Koch Institute for Integrative Cancer Research and the Institute for Medical Engineering and Science; and founder of Hepregen, Glympse Bio, and Satellite Bio
  • Kit Hickey, senior lecturer in the MIT Sloan School of Management and entrepreneur-in-residence at the Martin Trust Center
  • Susan Hockfield, MIT president emerita and professor of neuroscience
  • Andrea Jackson, director at Northpond Ventures
  • Harvey Lodish, professor of biology and biomedical engineering at MIT and founder of Genzyme, Millennium, and Rubius
  • Fiona Murray, associate dean for innovation and inclusion in the MIT Sloan School of Management; the William Porter Professor of Entrepreneurship; co-director of the MIT Innovation Initiative; and faculty director of the MIT Legatum Center
  • Amy Schulman, founding CEO of Lyndra Therapeutics and partner at Polaris Partners
  • Nandita Shangari, managing director at Novartis Venture Fund

“As an investment firm dedicated to supporting entrepreneurs, we are acutely aware of the limited number of companies founded and led by women in academia. We believe humanity should be benefiting from brilliant ideas and scientific breakthroughs from women in science, which could address many of the world’s most pressing problems. Together with MIT, we are providing an opportunity for women faculty members to enhance their visibility and gain access to the venture capital ecosystem,” says Andrea Jackson, director at Northpond Ventures.

“This first cohort is representative of the unrealized opportunity this program is designed to capture. While it will take a while to build a robust community of connections and role models, I am pleased and confident this program will make entrepreneurship more accessible and inclusive to our community, which will greatly benefit society,” says Susan Hockfield, MIT president emerita.

The MIT Future Founders Prize Competition cohort members were selected from schools across MIT, including the School of Science, the School of Engineering, and the Media Lab within the School of Architecture and Planning. They are:

Polina Anikeeva is professor of materials science and engineering and brain and cognitive sciences, an associate member of the McGovern Institute for Brain Research, and the associate director of the Research Laboratory of Electronics. She is particularly interested in advancing the possibility of future neuroprosthetics through biologically informed materials synthesis, modeling, and device fabrication. Anikeeva earned her BS in biophysics from St. Petersburg State Polytechnic University and her PhD in materials science and engineering from MIT.

Natalie Artzi is principal research scientist in the Institute for Medical Engineering and Science and an assistant professor in the Department of Medicine at Brigham and Women’s Hospital. Through the development of smart materials and medical devices, her research seeks to “personalize” medical interventions based on the specific presentation of diseased tissue in a given patient. She earned both her BS and PhD in chemical engineering from the Technion-Israel Institute of Technology.

Laurie A. Boyer is professor of biology and biological engineering in the Department of Biology. By studying how diverse molecular programs cross-talk to regulate the developing heart, she seeks to develop new therapies that can help repair cardiac tissue. She earned her BS in biomedical science from Framingham State University and her PhD from the University of Massachusetts Medical School.

Tal Cohen is associate professor in the departments of Civil and Environmental Engineering and Mechanical Engineering. She wields her understanding of how materials behave when they are pushed to their extremes to tackle engineering challenges in medicine and industry. She earned her BS, MS, and PhD in aerospace engineering from the Technion-Israel Institute of Technology.

Canan Dagdeviren is assistant professor of media arts and sciences and the LG Career Development Professor of Media Arts and Sciences. Her research focus is on creating new sensing, energy harvesting, and actuation devices that can be stretched, wrapped, folded, twisted, and implanted onto the human body while maintaining optimal performance. She earned her BS in physics engineering from Hacettepe University, her MS in materials science and engineering from Sabanci University, and her PhD in materials science and engineering from the University of Illinois at Urbana-Champaign.

Ariel Furst is the Raymond (1921) & Helen St. Laurent Career Development Professor in the Department of Chemical Engineering. Her research addresses challenges in global health and sustainability, utilizing electrochemical methods and biomaterials engineering. She is particularly interested in new technologies that detect and treat disease. Furst earned her BS in chemistry at the University of Chicago and her PhD at Caltech.

Kristin Knouse is assistant professor in the Department of Biology and the Koch Institute for Integrative Cancer Research. She develops tools to investigate the molecular regulation of organ injury and regeneration directly within a living organism with the goal of uncovering novel therapeutic avenues for diverse diseases. She earned her BS in biology from Duke University and her MD and PhD through the Harvard-MIT MD-PhD Program.

Elly Nedivi is the William R. (1964) & Linda R. Young Professor of Neuroscience at the Picower Institute for Learning and Memory with joint appointments in the departments of Brain and Cognitive Sciences and Biology. Through her research of neurons, genes, and proteins, Nedivi focuses on elucidating the cellular mechanisms that control plasticity in both the developing and adult brain. She earned her BS in biology from Hebrew University and her PhD in neuroscience from Stanford University.

Ellen Roche is associate professor in the Department of Mechanical Engineering and the Institute for Medical Engineering and Science, and the W.M. Keck Career Development Professor in Biomedical Engineering. Borrowing principles and design forms she observes in nature, Roche works to develop implantable therapeutic devices that assist cardiac and other biological function. She earned her bachelor’s degree in biomedical engineering from the National University of Ireland at Galway, her MS in bioengineering from Trinity College Dublin, and her PhD from Harvard University.

A key brain region responds to faces similarly in infants and adults

Within the visual cortex of the adult brain, a small region is specialized to respond to faces, while nearby regions show strong preferences for bodies or for scenes such as landscapes.

Neuroscientists have long hypothesized that it takes many years of visual experience for these areas to develop in children. However, a new MIT study suggests that these regions form much earlier than previously thought. In a study of babies ranging in age from two to nine months, the researchers identified areas of the infant visual cortex that already show strong preferences for either faces, bodies, or scenes, just as they do in adults.

“These data push our picture of development, making babies’ brains look more similar to adults, in more ways, and earlier than we thought,” says Rebecca Saxe, the John W. Jarve Professor of Brain and Cognitive Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the new study.

Using functional magnetic resonance imaging (fMRI), the researchers collected usable data from more than 50 infants, a far greater number than any research lab has been able to scan before. This allowed them to examine the infant visual cortex in a way that had not been possible until now.

“This is a result that’s going to make a lot of people have to really grapple with their understanding of the infant brain, the starting point of development, and development itself,” says Heather Kosakowski, an MIT graduate student and the lead author of the study, which appears today in Current Biology.

MIT graduate student Heather Kosakowski prepares an infant for an MRI scan at the Martinos Imaging Center. Photo: Caitlin Cunningham

Distinctive regions

More than 20 years ago, Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience at MIT, used fMRI to discover the fusiform face area: a small region of the visual cortex that responds much more strongly to faces than any other kind of visual input.

Since then, Kanwisher and her colleagues have also identified parts of the visual cortex that respond to bodies (the extrastriate body area, or EBA), and scenes (the parahippocampal place area, or PPA).

“There is this set of functionally very distinctive regions that are present in more or less the same place in pretty much every adult,” says Kanwisher, who is also a member of MIT’s Center for Brains, Minds, and Machines, and an author of the new study. “That raises all these questions about how these regions develop. How do they get there, and how do you build a brain that has such similar structure in each person?”

One way to try to answer those questions is to investigate when these highly selective regions first develop in the brain. A longstanding hypothesis is that it takes several years of visual experience for these regions to gradually become selective for their specific targets. Scientists who study the visual cortex have found similar selectivity patterns in children as young as 4 or 5 years old, but there have been few studies of children younger than that.

In 2017, Saxe and one of her graduate students, Ben Deen, reported the first successful use of fMRI to study the brains of awake infants. That study, which included data from nine babies, suggested that while infants did have areas that respond to faces and scenes, those regions were not yet highly selective. For example, the fusiform face area did not show a strong preference for human faces over every other kind of input, including human bodies or the faces of other animals.

However, that study was limited by the small number of subjects, and also by its reliance on an fMRI coil that the researchers had developed especially for babies, which did not offer imaging at as high a resolution as the coils used for adults.

For the new study, the researchers wanted to try to get better data, from more babies. They built a new scanner that is more comfortable for babies and also more powerful, with resolution similar to that of fMRI scanners used to study the adult brain.

After going into the specialized scanner, along with a parent, the babies watched videos that showed either faces, body parts such as kicking feet or waving hands, objects such as toys, or natural scenes such as mountains.

The researchers recruited nearly 90 babies for the study and collected usable fMRI data from 52 of them, half of whom contributed higher-resolution data collected using the new coil. Their analysis revealed that specific regions of the infant visual cortex show highly selective responses to faces, body parts, and natural scenes, in the same locations where those responses are seen in the adult brain. The selectivity for natural scenes, however, was not as strong as for faces or body parts.
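To make the idea of “selectivity” concrete, here is a minimal sketch of one standard way to quantify a category preference from fMRI responses: compare a region’s response to its preferred category against its average response to everything else. The response values and the particular index below are illustrative assumptions, not numbers or methods from this study.

```python
# Minimal sketch: quantifying category selectivity from fMRI responses.
# The per-condition response amplitudes below are hypothetical.
import numpy as np

responses = {"faces": 1.8, "bodies": 0.6, "objects": 0.5, "scenes": 0.4}

def selectivity_index(resp, preferred):
    """(preferred - mean of others) / (preferred + mean of others); 1 = fully selective."""
    pref = resp[preferred]
    others = np.mean([v for k, v in resp.items() if k != preferred])
    return (pref - others) / (pref + others)

print(f"face selectivity: {selectivity_index(responses, 'faces'):.2f}")  # ~0.57
```

A strongly face-selective region would score high on this index for faces; an unselective region would score near zero.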

The infant brain

The findings suggest that scientists’ conception of how the infant brain develops may need to be revised to accommodate the observation that these specialized regions start to resemble those of adults sooner than anyone had expected.

“The thing that is so exciting about these data is that they revolutionize the way we understand the infant brain,” Kosakowski says. “A lot of theories have grown up in the field of visual neuroscience to accommodate the view that you need years of development for these specialized regions to emerge. And what we’re saying is actually, no, you only really need a couple of months.”

Because their data on the area of the brain that responds to scenes was not as strong as for the other locations they looked at, the researchers now plan to pursue additional studies of that region, this time showing babies images on a much larger screen that will more closely mimic the experience of being within a scene. For that study, they plan to use near-infrared spectroscopy (NIRS), a non-invasive imaging technique that doesn’t require the participant to be inside a scanner.

“That will let us ask whether young babies have robust responses to visual scenes that we underestimated in this study because of the visual constraints of the experimental setup in the scanner,” Saxe says.

The researchers are now further analyzing the data they gathered for this study in hopes of learning more about how development of the fusiform face area progresses from the youngest babies they studied to the oldest. They also hope to perform new experiments examining other aspects of cognition, including how babies’ brains respond to language and music.

The research was funded by the National Science Foundation, the National Institutes of Health, the McGovern Institute, and the Center for Brains, Minds, and Machines.

Study finds a striking difference between neurons of humans and other mammals

McGovern Institute Investigator Mark Harnett. Photo: Justin Knight

Neurons communicate with each other via electrical impulses, which are produced by ion channels that control the flow of ions such as potassium and sodium. In a surprising new finding, MIT neuroscientists have shown that human neurons have a much smaller number of these channels than expected, compared to the neurons of other mammals.

The researchers hypothesize that this reduction in channel density may have helped the human brain evolve to operate more efficiently, allowing it to divert resources to other energy-intensive processes that are required to perform complex cognitive tasks.

“If the brain can save energy by reducing the density of ion channels, it can spend that energy on other neuronal or circuit processes,” says Mark Harnett, an associate professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Harnett and his colleagues analyzed neurons from 10 different mammals in the most extensive electrophysiological study of its kind, and identified a “building plan” that holds true for every species they looked at — except for humans. They found that as the size of neurons increases, the density of channels found in the neurons also increases.

However, human neurons proved to be a striking exception to this rule.

“Previous comparative studies established that the human brain is built like other mammalian brains, so we were surprised to find strong evidence that human neurons are special,” says former MIT graduate student Lou Beaulieu-Laroche.

Beaulieu-Laroche is the lead author of the study, which appears today in Nature.

A building plan

Neurons in the mammalian brain can receive electrical signals from thousands of other cells, and that input determines whether or not they will fire an electrical impulse called an action potential. In 2018, Harnett and Beaulieu-Laroche discovered that human and rat neurons differ in some of their electrical properties, primarily in parts of the neuron called dendrites — tree-like antennas that receive and process input from other cells.

One of the findings from that study was that human neurons had a lower density of ion channels than neurons in the rat brain. The researchers were surprised by this observation, as ion channel density was generally assumed to be constant across species. In their new study, Harnett and Beaulieu-Laroche decided to compare neurons from several different mammalian species to see if they could find any patterns that governed the expression of ion channels. They studied two types of voltage-gated potassium channels and the HCN channel, which conducts both potassium and sodium, in layer 5 pyramidal neurons, a type of excitatory neuron found in the brain’s cortex.

Former McGovern Institute graduate student Lou Beaulieu-Laroche is the lead author of the 2021 Nature paper.

They were able to obtain brain tissue from 10 mammalian species: Etruscan shrews (one of the smallest known mammals), gerbils, mice, rats, guinea pigs, ferrets, rabbits, marmosets, and macaques, as well as human tissue removed from patients with epilepsy during brain surgery. This variety allowed the researchers to cover a range of cortical thicknesses and neuron sizes across the mammalian kingdom.

The researchers found that in nearly every mammalian species they looked at, the density of ion channels increased as the size of the neurons went up. The one exception to this pattern was in human neurons, which had a much lower density of ion channels than expected.

The increase in channel density across species was surprising, Harnett says, because the more channels there are, the more energy is required to pump ions in and out of the cell. However, it started to make sense once the researchers began thinking about the number of channels in the overall volume of the cortex, he says.

In the tiny brain of the Etruscan shrew, which is packed with very small neurons, there are more neurons in a given volume of tissue than in the same volume of tissue from the rabbit brain, which has much larger neurons. But because the rabbit neurons have a higher density of ion channels, the density of channels in a given volume of tissue is the same in both species, as it is in all of the other nonhuman species the researchers analyzed.
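A toy calculation, with invented numbers rather than the paper’s measurements, shows how this invariance can hold: species with fewer, larger neurons per unit volume compensate with more channels per neuron, so the product stays fixed, while the (hypothetical) human value falls below the shared budget.

```python
# Illustrative numbers only, not data from the study: channel "budget" per
# unit volume = neurons per mm^3 x channel conductance per neuron.
species = {
    #                 neurons/mm^3, channels per neuron (arbitrary units)
    "Etruscan shrew": (400_000, 1.0),
    "rabbit":         (80_000, 5.0),    # larger neurons, more channels each
    "human":          (30_000, 4.0),    # below the ~13.3 the trend would predict
}
for name, (n_per_mm3, ch_per_neuron) in species.items():
    print(f"{name:15s} channels per mm^3: {n_per_mm3 * ch_per_neuron:,.0f}")
# shrew and rabbit land on the same budget; the human value comes out lower
```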

“This building plan is consistent across nine different mammalian species,” Harnett says. “What it looks like the cortex is trying to do is keep the numbers of ion channels per unit volume the same across all the species. This means that for a given volume of cortex, the energetic cost is the same, at least for ion channels.”

Energy efficiency

The human brain represents a striking deviation from this building plan, however. Instead of the increased ion channel density that the building plan would predict, the researchers found a dramatically lower density than expected for a given volume of brain tissue.

The researchers believe this lower density may have evolved as a way to expend less energy on pumping ions, which allows the brain to use that energy for something else, like creating more complicated synaptic connections between neurons or firing action potentials at a higher rate.

“We think that humans have evolved out of this building plan that was previously restricting the size of cortex, and they figured out a way to become more energetically efficient, so you spend less ATP per volume compared to other species,” Harnett says.

He now hopes to study where that extra energy might be going, and whether there are specific gene mutations that help neurons of the human cortex achieve this high efficiency. The researchers are also interested in exploring whether primate species that are more closely related to humans show similar decreases in ion channel density.

The research was funded by the Natural Sciences and Engineering Research Council of Canada, a Friends of the McGovern Institute Fellowship, the National Institute of General Medical Sciences, the Paul and Daisy Soros Fellows Program, the Dana Foundation David Mahoney Neuroimaging Grant Program, the National Institutes of Health, the Harvard-MIT Joint Research Grants Program in Basic Neuroscience, and Susan Haar.

Other authors of the paper include Norma Brown, an MIT technical associate; Marissa Hansen, a former post-baccalaureate scholar; Enrique Toloza, a graduate student at MIT and Harvard Medical School; Jitendra Sharma, an MIT research scientist; Ziv Williams, an associate professor of neurosurgery at Harvard Medical School; Matthew Frosch, an associate professor of pathology and health sciences and technology at Harvard Medical School; Garth Rees Cosgrove, director of epilepsy and functional neurosurgery at Brigham and Women’s Hospital; and Sydney Cash, an assistant professor of neurology at Harvard Medical School and Massachusetts General Hospital.

McGovern Institute Director receives highest honor from the Society for Neuroscience

The Society for Neuroscience will present its highest honor, the Ralph W. Gerard Prize in Neuroscience, to McGovern Institute Director Robert Desimone at its annual meeting today.

The Gerard Prize is named for neuroscientist Ralph W. Gerard who helped establish the Society for Neuroscience, and honors “outstanding scientists who have made significant contributions to neuroscience throughout their careers.” Desimone will share the $30,000 prize with Vanderbilt University neuroscientist Jon Kaas.

Desimone is being recognized for his career contributions to understanding cortical function in the visual system. His seminal work on attention extends across decades, including the discovery of a neural basis for covert attention in the temporal cortex and the creation of the biased competition model, which holds that attention is biased toward task-relevant material. More recent work revealed how synchronized brain rhythms help enhance visual processing. Desimone also helped discover both face cells and neural populations that identify objects even when the size or location of the object changes. His long list of contributions includes mapping the extrastriate visual cortex, publishing the first report of columns for motion processing outside the primary visual cortex, and discovering how the temporal cortex retains memories. Desimone’s work has moved the field from broad strokes of input and output to a more nuanced understanding of cortical function that allows the brain to make sense of the environment.

At its annual meeting, beginning today, the Society will honor Desimone and other leading researchers who have made significant contributions to neuroscience — including the understanding of cognitive processes, drug addiction, neuropharmacology, and theoretical models — with this year’s Outstanding Achievement Awards.

“The Society is honored to recognize this year’s awardees, whose groundbreaking research has revolutionized our understanding of the brain, from the level of the synapse to the structure and function of the cortex, shedding light on how vision, memory, perception of touch and pain, and drug addiction are organized in the brain,” said SfN President Barry Everitt. “This exceptional group of neuroscientists has made fundamental discoveries, paved the way for new therapeutic approaches, and introduced new tools that will lay the foundation for decades of research to come.”

Giving robots social skills

Robots can deliver food on a college campus and hit a hole-in-one on the golf course, but even the most sophisticated robot can’t perform basic social interactions that are critical to everyday human life.

MIT researchers have now incorporated certain social interactions into a framework for robotics, enabling machines to understand what it means to help or hinder one another, and to learn to perform these social behaviors on their own. In a simulated environment, a robot watches its companion, guesses what task it wants to accomplish, and then helps or hinders this other robot based on its own goals.

The researchers also showed that their model creates realistic and predictable social interactions. When human viewers watched videos of these simulated robots interacting with one another, they mostly agreed with the model about what type of social behavior was occurring.

Enabling robots to exhibit social skills could lead to smoother and more positive human-robot interactions. For instance, a robot in an assisted living facility could use these capabilities to help create a more caring environment for elderly individuals. The new model may also enable scientists to measure social interactions quantitatively, which could help psychologists study autism or analyze the effects of antidepressants.

“Robots will live in our world soon enough, and they really need to learn how to communicate with us on human terms. They need to understand when it is time for them to help and when it is time for them to see what they can do to prevent something from happening. This is very early work and we are barely scratching the surface, but I feel like this is the first very serious attempt for understanding what it means for humans and machines to interact socially,” says Boris Katz, principal research scientist and head of the InfoLab Group in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and a member of the Center for Brains, Minds, and Machines (CBMM).

Joining Katz on the paper are co-lead author Ravi Tejwani, a research assistant at CSAIL; co-lead author Yen-Ling Kuo, a CSAIL PhD student; Tianmin Shu, a postdoc in the Department of Brain and Cognitive Sciences; and senior author Andrei Barbu, a research scientist at CSAIL and CBMM. The research will be presented at the Conference on Robot Learning in November.

A social simulation

To study social interactions, the researchers created a simulated environment where robots pursue physical and social goals as they move around a two-dimensional grid.

A physical goal relates to the environment. For example, a robot’s physical goal might be to navigate to a tree at a certain point on the grid. A social goal involves guessing what another robot is trying to do and then acting based on that estimation, like helping another robot water the tree.

The researchers use their model to specify what a robot’s physical goals are, what its social goals are, and how much emphasis it should place on one over the other. The robot is rewarded for actions it takes that get it closer to accomplishing its goals. If a robot is trying to help its companion, it adjusts its reward to match that of the other robot; if it is trying to hinder, it adjusts its reward to be the opposite. The planner, an algorithm that decides which actions the robot should take, uses this continually updating reward to guide the robot to carry out a blend of physical and social goals.
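As a rough sketch of this reward blending (the function name, weighting scheme, and values here are our own illustration, not the paper’s actual formulation), a helping robot adds its companion’s estimated reward to its own, while a hindering robot subtracts it:

```python
# Hypothetical sketch of blending physical and social rewards.
def blended_reward(physical_reward, other_agent_reward, social_weight, helping=True):
    """A helping robot adds the companion's estimated reward; a hindering
    robot subtracts it. social_weight trades off social vs. physical goals."""
    social_term = other_agent_reward if helping else -other_agent_reward
    return physical_reward + social_weight * social_term

# A robot that values helping twice as much as its own physical goal:
print(blended_reward(physical_reward=1.0, other_agent_reward=0.5,
                     social_weight=2.0, helping=True))    # 2.0
print(blended_reward(1.0, 0.5, 2.0, helping=False))       # 0.0
```

A planner maximizing this continually updated blended reward would then choose actions that serve both kinds of goals at once.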

“We have opened a new mathematical framework for how you model social interaction between two agents. If you are a robot, and you want to go to location X, and I am another robot and I see that you are trying to go to location X, I can cooperate by helping you get to location X faster. That might mean moving X closer to you, finding another better X, or taking whatever action you had to take at X. Our formulation allows the planner to discover the ‘how’; we specify the ‘what’ in terms of what social interactions mean mathematically,” says Tejwani.

Blending a robot’s physical and social goals is important to create realistic interactions, since humans who help one another have limits to how far they will go. For instance, a rational person likely wouldn’t just hand a stranger their wallet, Barbu says.

The researchers used this mathematical framework to define three types of robots. A level 0 robot has only physical goals and cannot reason socially. A level 1 robot has physical and social goals but assumes all other robots only have physical goals. Level 1 robots can take actions based on the physical goals of other robots, like helping and hindering. A level 2 robot assumes other robots have social and physical goals; these robots can take more sophisticated actions like joining in to help together.

Evaluating the model

To see how their model compared to human perspectives about social interactions, the researchers created 98 different scenarios with robots at levels 0, 1, and 2. Twelve humans watched 196 video clips of the robots interacting, and were then asked to estimate the physical and social goals of those robots.

In most instances, their model agreed with what the humans thought about the social interactions that were occurring in each frame.

“We have this long-term interest, both to build computational models for robots, but also to dig deeper into the human aspects of this. We want to find out what features from these videos humans are using to understand social interactions. Can we make an objective test for your ability to recognize social interactions? Maybe there is a way to teach people to recognize these social interactions and improve their abilities. We are a long way from this, but even just being able to measure social interactions effectively is a big step forward,” Barbu says.

Toward greater sophistication

The researchers are working on developing a system with 3D agents in an environment that allows many more types of interactions, such as the manipulation of household objects. They are also planning to modify their model to include environments where actions can fail.

The researchers also want to incorporate a neural network-based robot planner into the model, which learns from experience and performs faster. Finally, they hope to run an experiment to collect data about the features humans use to determine if two robots are engaging in a social interaction.

“Hopefully, we will have a benchmark that allows all researchers to work on these social interactions and inspire the kinds of science and engineering advances we’ve seen in other areas such as object and action recognition,” Barbu says.

“I think this is a lovely application of structured reasoning to a complex yet urgent challenge,” says Tomer Ullman, assistant professor in the Department of Psychology at Harvard University and head of the Computation, Cognition, and Development Lab, who was not involved with this research. “Even young infants seem to understand social interactions like helping and hindering, but we don’t yet have machines that can perform this reasoning at anything like human-level flexibility. I believe models like the ones proposed in this work, that have agents thinking about the rewards of others and socially planning how best to thwart or support them, are a good step in the right direction.”

This research was supported by the Center for Brains, Minds, and Machines; the National Science Foundation; the MIT CSAIL Systems that Learn Initiative; the MIT-IBM Watson AI Lab; the DARPA Artificial Social Intelligence for Successful Teams program; the U.S. Air Force Research Laboratory; the U.S. Air Force Artificial Intelligence Accelerator; and the Office of Naval Research.

A connectome for cognition

The lateral prefrontal cortex is a particularly well-connected part of the brain. Neurons there communicate with processing centers throughout the rest of the brain, gathering information and sending commands to implement executive control over behavior. Now, scientists at MIT’s McGovern Institute have mapped these connections and revealed an unexpected order within them: The lateral prefrontal cortex, they’ve found, contains maps of other major parts of the brain’s cortex.

The researchers, led by postdoctoral researcher Rui Xu and McGovern Institute Director Robert Desimone, report that the lateral prefrontal cortex contains a set of maps that represent the major processing centers in the other parts of the cortex, including the temporal and parietal lobes. Their organization likely supports the lateral prefrontal cortex’s roles managing complex functions such as attention and working memory, which require integrating information from multiple sources and coordinating activity elsewhere in the brain. The findings are published November 4, 2021, in the journal Neuron.

Topographic maps

The layout of the maps, which allows certain regions of the lateral prefrontal cortex to directly interact with multiple areas across the brain, indicates that this part of the brain is particularly well positioned for its role. “This function of integrating and then sending back control signals to appropriate levels in the processing hierarchies of the brain is clearly one of the reasons that prefrontal cortex is so important for cognition and executive control,” says Desimone.

In many parts of the brain, neurons’ physical organization has been found to reflect the information represented there. For example, individual neurons’ positions within the visual cortex mirror the layout of the cells in the retina from which they receive input, such that the spatial pattern of neuronal activity in this part of the brain provides an approximate view of the image seen by the eyes. If you fixate on the first letter of a word, for instance, the next letters in the word will map to sequential locations in the visual cortex. Likewise, the arm and hand are mapped to adjacent locations in the somatosensory cortex, where the brain receives sensory information from the skin.

Topographic maps such as these, which have been found primarily in brain regions involved in sensory and motor processing, offer clues about how information is stored and processed in the brain. Neuroscientists have hoped that topographic maps within the lateral prefrontal cortex would provide insight into the complex cognitive processes that are carried out there—but such maps have been elusive.

Previous anatomical studies had given little indication of how different parts of the brain communicate preferentially with specific locations within the prefrontal cortex to give rise to regional specialization of cognitive functions. Recently, however, the Desimone lab identified two areas within the lateral prefrontal cortex of monkeys with specific roles in focusing an animal’s visual attention. Knowing that some spots within the lateral prefrontal cortex were wired for specific functions, they wondered if others were, too. They decided they needed a detailed map of the connections emanating from this part of the brain, and devised a plan to plot connectivity from hundreds of points within the lateral prefrontal cortex.

Cortical connectome

To generate a wiring diagram, or connectome, Xu used functional MRI to monitor activity throughout a monkey’s brain as he stimulated specific points within its lateral prefrontal cortex. He moved systematically through the brain region, stimulating points spaced as close as one millimeter apart, and noting which parts of the brain lit up in response. Ultimately, the team collected data from about 100 sites for each of two monkeys.

As the data accumulated, clear patterns emerged. Different regions within the lateral prefrontal cortex formed orderly connections with each of five processing centers throughout the brain. Points within each of these maps connected to sites with the same relative positions in the distant processing centers. Because some parts of the lateral prefrontal cortex are wired to interact with more than one processing center, these maps overlap, positioning the prefrontal cortex to integrate information from different sources.
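One simple way to test for this kind of orderly mapping (a sketch of the general idea, using synthetic coordinates rather than the study’s data) is to check whether stimulation sites that are near each other in the prefrontal cortex activate sites that are near each other in a target area:

```python
# Synthetic test of topography: if a projection preserves relative positions,
# pairwise distances among stimulation sites should correlate with pairwise
# distances among their peak-response sites in the target area.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
stim_xy = rng.uniform(0, 10, size=(100, 2))                 # stimulation sites (mm)
target_xy = 0.8 * stim_xy + rng.normal(0, 0.5, (100, 2))    # orderly projection + noise

r, p = pearsonr(pdist(stim_xy), pdist(target_xy))
print(f"distance correlation r = {r:.2f}")   # high r suggests a topographic map
```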

The team found significant overlap, for example, between the maps of the temporal cortex, a part of the brain that uses visual information to recognize objects, and the parietal cortex, which computes the spatial relationships between objects. “It is mapping objects and space together in a way that would integrate the two systems,” explains Desimone. “And then on top of that, it has other maps of other brain systems that are partially overlapping with that—so they’re all sort of coming together.”

Desimone and Xu say the new connectome will help guide further investigations of how the prefrontal cortex orchestrates complex cognitive processes. “I think this really gives us a direction for the future, because we now need to understand the cognitive concepts that are mapped there,” Desimone says.

Already, they say, the connectome offers encouragement that a deeper understanding of complex cognition is within reach. “This topographic connectivity gives the lateral prefrontal some specific advantage to serve its function,” says Xu. “This suggests that lateral prefrontal cortex has a fine organization, just like the more studied parts of the brain, so the approaches that have been used to study these other regions may also benefit the studies of high-level cognition.”

Artificial intelligence sheds light on how the brain processes language

In the past few years, artificial intelligence models of language have become very good at certain tasks. Most notably, they excel at predicting the next word in a string of text; this technology helps search engines and texting apps predict the next word you are going to type.

The most recent generation of predictive language models also appears to learn something about the underlying meaning of language. These models can not only predict the word that comes next, but also perform tasks that seem to require some degree of genuine understanding, such as question answering, document summarization, and story completion.
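As a concrete illustration of the prediction task itself, the snippet below asks a publicly available model (GPT-2, standing in here for the larger models discussed in the study) for its most probable next words after a prompt; the prompt is our own example:

```python
# Next-word prediction with GPT-2 (a stand-in for larger models like GPT-3).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

prompt = "The cat sat on the"
with torch.no_grad():
    logits = lm(**tok(prompt, return_tensors="pt")).logits
probs = torch.softmax(logits[0, -1], dim=-1)     # distribution over the next token
top = torch.topk(probs, k=5)
print([(tok.decode(int(i)), round(float(p), 3))
       for i, p in zip(top.indices, top.values)])
```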

Such models were designed to optimize performance for the specific function of predicting text, without attempting to mimic anything about how the human brain performs this task or understands language. But a new study from MIT neuroscientists suggests the underlying function of these models resembles the function of language-processing centers in the human brain.

Computer models that perform well on other types of language tasks do not show this similarity to the human brain, offering evidence that the human brain may use next-word prediction to drive language processing.

“The better the model is at predicting the next word, the more closely it fits the human brain,” says Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience, a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines (CBMM), and an author of the new study. “It’s amazing that the models fit so well, and it very indirectly suggests that maybe what the human language system is doing is predicting what’s going to happen next.”

Joshua Tenenbaum, a professor of computational cognitive science at MIT and a member of CBMM and MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL); and Evelina Fedorenko, the Frederick A. and Carole J. Middleton Career Development Associate Professor of Neuroscience and a member of the McGovern Institute, are the senior authors of the study, which appears this week in the Proceedings of the National Academy of Sciences.

Martin Schrimpf, an MIT graduate student who works in CBMM, is the first author of the paper.

Making predictions

The new, high-performing next-word prediction models belong to a class of models called deep neural networks. These networks contain computational “nodes” that form connections of varying strength, and layers that pass information between each other in prescribed ways.

Over the past decade, scientists have used deep neural networks to create models of vision that can recognize objects as well as the primate brain does. Research at MIT has also shown that the underlying function of visual object recognition models matches the organization of the primate visual cortex, even though those computer models were not specifically designed to mimic the brain.

In the new study, the MIT team used a similar approach to compare language-processing centers in the human brain with language-processing models. The researchers analyzed 43 different language models, including several that are optimized for next-word prediction. These include a model called GPT-3 (Generative Pre-trained Transformer 3), which, given a prompt, can generate text similar to what a human would produce. Other models were designed to perform different language tasks, such as filling in a blank in a sentence.

As each model was presented with a string of words, the researchers measured the activity of the nodes that make up the network. They then compared these patterns to activity in the human brain, measured in subjects performing three language tasks: listening to stories, reading sentences one at a time, and reading sentences in which one word is revealed at a time. These human datasets included functional magnetic resonance (fMRI) data and intracranial electrocorticographic measurements taken in people undergoing brain surgery for epilepsy.
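The general recipe for such model-to-brain comparisons, sketched below under our own simplifying assumptions (this is not the authors’ exact pipeline), is to extract a model’s internal activations for each stimulus and fit a regularized linear mapping from those activations to the measured responses, scored on held-out data. GPT-2 stands in for GPT-3, and random numbers stand in for real fMRI data:

```python
# Sketch: map language-model hidden states to (here, fake) brain responses.
import numpy as np
import torch
from transformers import GPT2Model, GPT2Tokenizer
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2").eval()

sentences = [f"This is stimulus sentence number {i}." for i in range(40)]
feats = []
with torch.no_grad():
    for s in sentences:
        out = model(**tok(s, return_tensors="pt"), output_hidden_states=True)
        feats.append(out.hidden_states[6][0].mean(0).numpy())  # mid-layer, token mean
X = np.stack(feats)
y = np.random.default_rng(0).normal(size=(len(sentences), 50))  # stand-in voxels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
print("held-out R^2:", RidgeCV().fit(X_tr, y_tr).score(X_te, y_te))
# with real data, a high held-out score indicates brain-like representations
```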

They found that the best-performing next-word prediction models had activity patterns that very closely resembled those seen in the human brain. Activity in those same models was also highly correlated with human behavioral measures, such as how fast people were able to read the text.

“We found that the models that predict the neural responses well also tend to best predict human behavior responses, in the form of reading times. And then both of these are explained by the model performance on next-word prediction. This triangle really connects everything together,” Schrimpf says.

“A key takeaway from this work is that language processing is a highly constrained problem: The best solutions to it that AI engineers have created end up being similar, as this paper shows, to the solutions found by the evolutionary process that created the human brain. Since the AI network didn’t seek to mimic the brain directly — but does end up looking brain-like — this suggests that, in a sense, a kind of convergent evolution has occurred between AI and nature,” says Daniel Yamins, an assistant professor of psychology and computer science at Stanford University, who was not involved in the study.

Game changer

One of the key computational features of predictive models such as GPT-3 is an element known as a forward one-way predictive transformer. This kind of transformer is able to make predictions of what is going to come next, based on previous sequences. A significant feature of this transformer is that it can make predictions based on a very long prior context (hundreds of words), not just the last few words.
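In generic terms (this is standard transformer machinery, not code from the study), “one-way” prediction is enforced with a triangular attention mask, so each position can draw on all earlier positions, however distant, but never on later ones:

```python
# Causal (one-way) attention mask: position i may attend only to j <= i.
import torch

seq_len = 5
scores = torch.randn(seq_len, seq_len)   # raw attention scores, one row per position
mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
weights = torch.softmax(scores.masked_fill(mask, float("-inf")), dim=-1)
print(weights)   # each row i has zero weight on future positions j > i
```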

Scientists have not found any brain circuits or learning mechanisms that correspond to this type of processing, Tenenbaum says. However, the new findings are consistent with previously proposed hypotheses that prediction is one of the key functions in language processing, he says.

“One of the challenges of language processing is the real-time aspect of it,” he says. “Language comes in, and you have to keep up with it and be able to make sense of it in real time.”

The researchers now plan to build variants of these language processing models to see how small changes in their architecture affect their performance and their ability to fit human neural data.

“For me, this result has been a game changer,” Fedorenko says. “It’s totally transforming my research program, because I would not have predicted that in my lifetime we would get to these computationally explicit models that capture enough about the brain so that we can actually leverage them in understanding how the brain works.”

The researchers also plan to try to combine these high-performing language models with some computer models Tenenbaum’s lab has previously developed that can perform other kinds of tasks such as constructing perceptual representations of the physical world.

“If we’re able to understand what these language models do and how they can connect to models which do things that are more like perceiving and thinking, then that can give us more integrative models of how things work in the brain,” Tenenbaum says. “This could take us toward better artificial intelligence models, as well as giving us better models of how more of the brain works and how general intelligence emerges, than we’ve had in the past.”

The research was funded by a Takeda Fellowship; the MIT Shoemaker Fellowship; the Semiconductor Research Corporation; the MIT Media Lab Consortia; the MIT Singleton Fellowship; the MIT Presidential Graduate Fellowship; the Friends of the McGovern Institute Fellowship; the MIT Center for Brains, Minds, and Machines, through the National Science Foundation; the National Institutes of Health; MIT’s Department of Brain and Cognitive Sciences; and the McGovern Institute.

Other authors of the paper are Idan Blank PhD ’16 and graduate students Greta Tuckute, Carina Kauf, and Eghbal Hosseini.

Five with MIT ties elected to the National Academy of Medicine for 2021

The National Academy of Medicine (NAM) has announced the election of 100 new members for 2021, including two MIT faculty members and three additional Institute affiliates.

Faculty honorees include Linda G. Griffith, a professor in the MIT departments of Biological Engineering and Mechanical Engineering; and Feng Zhang, a professor in the MIT departments of Brain and Cognitive Sciences and Biological Engineering. Guillermo Antonio Ameer ScD ’99, a professor of biomedical engineering and surgery at Northwestern University; Darrell Gaskin SM ’87, a professor of health policy and management at Johns Hopkins University; and Vamsi Mootha, an institute member of the Broad Institute of MIT and Harvard and former student in the Harvard-MIT Program in Health Sciences and Technology, were also honored.

The new inductees were elected through a process that recognizes individuals who have made major contributions to the advancement of the medical sciences, health care, and public health. Election to the academy is considered one of the highest honors in the fields of health and medicine and recognizes individuals who have demonstrated outstanding professional achievement and commitment to service.

Griffith, the School of Engineering Professor of Teaching Innovation and director of the Center for Gynepathology Research at MIT, is credited for her longstanding leadership in research, education, and medical translation. Specifically, the NAM recognizes her pioneering work in tissue engineering, biomaterials, and systems biology, including the development of the first “liver chip” technology. Griffith is also recognized for inventing 3D biomaterials printing and organotypic models for systems gynopathology, and for the establishment of the biological engineering department at MIT.

The academy recognizes Zhang, the Patricia and James Poitras ’63 Professor in Neuroscience at MIT, for revolutionizing molecular biology and powering transformative leaps forward in our ability to study and treat human diseases. Zhang, who also is an investigator at the Howard Hughes Medical Institute and the McGovern Institute for Brain Research, and a core member of the Broad Institute of MIT and Harvard, is specifically credited for the discovery of novel microbial enzymes and their development as molecular technologies, including optogenetics and CRISPR-mediated genome editing. The academy also commends Zhang for his outstanding mentoring and professional services.

Ameer, the Daniel Hale Williams Professor of Biomedical Engineering and Surgery at the Northwestern University Feinberg School of Medicine, earned his Doctor of Science degree from the MIT Department of Chemical Engineering in 1999. A professor of biomedical engineering and of surgery who is also the director of the Center for Advanced Regenerative Engineering, he is cited by the NAM “For pioneering contributions to regenerative engineering and medicine through the development, dissemination, and translation of citrate-based biomaterials, a new class of biodegradable polymers that enabled the commercialization of innovative medical devices approved by the U.S. Food and Drug Administration for use in a variety of surgical procedures.”

Gaskin, the William C. and Nancy F. Richardson Professor in Health Policy and Management, Bloomberg School of Public Health at Johns Hopkins University, earned his Master of Science degree from the MIT Department of Economics in 1987. A health economist who advances community, neighborhood, and market-level policies and programs that reduce health disparities, he is cited by the NAM “For his work as a leading health economist and health services researcher who has advanced fundamental understanding of the role of place as a driver in racial and ethnic health disparities.”

Mootha, the founding co-director of the Broad Institute’s Metabolism Program, is a professor of systems biology and medicine at Harvard Medical School and a professor in the Department of Molecular Biology at Massachusetts General Hospital. An alumnus of the Harvard-MIT Program in Health Sciences and Technology and former postdoc with the Whitehead Institute for Biomedical Research, Mootha is an expert in the mitochondrion, the “powerhouse of the cell,” and its role in human disease. The NAM cites Mootha “For transforming the field of mitochondrial biology by creatively combining modern genomics with classical bioenergetics.”

Established in 1970 by the National Academy of Sciences, the NAM addresses critical issues in health, science, medicine, and related policy and inspires positive actions across sectors. NAM works alongside the National Academy of Sciences and National Academy of Engineering to provide independent, objective analysis and advice to the nation and conduct other activities to solve complex problems and inform public policy decisions. The National Academies of Sciences, Engineering, and Medicine also encourage education and research, recognize outstanding contributions to knowledge, and increase public understanding of STEMM. With their election, NAM members make a commitment to volunteer their service in National Academies activities.

Data transformed

With the tools of modern neuroscience, data accumulates quickly. Recording devices listen in on the electrical conversations between neurons, picking up the voices of hundreds of cells at a time. Microscopes zoom in to illuminate the brain’s circuitry, capturing thousands of images of cells’ elaborately branched paths. Functional MRIs detect changes in blood flow to map activity within a person’s brain, generating a complete picture by compiling hundreds of scans.

“When I entered neuroscience about 20 years ago, data were extremely precious, and ideas, as the expression went, were cheap. That’s no longer true,” says McGovern Associate Investigator Ila Fiete. “We have an embarrassment of wealth in the data but lack sufficient conceptual and mathematical scaffolds to understand it.”

Fiete will lead the McGovern Institute’s new K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center, whose scientists will create mathematical models and other computational tools to confront the current deluge of data and advance our understanding of the brain and mental health. The center, funded by a $24 million donation from philanthropist Lisa Yang, will take a uniquely collaborative approach to computational neuroscience, integrating data from MIT labs to explain brain function at every level, from the molecular to the behavioral.

“Driven by technologies that generate massive amounts of data, we are entering a new era of translational neuroscience research,” says Yang, whose philanthropic investment in MIT research now exceeds $130 million. “I am confident that the multidisciplinary expertise convened by this center will revolutionize how we synthesize this data and ultimately understand the brain in health and disease.”

Data integration

Fiete says computation is particularly crucial to neuroscience because the brain is so staggeringly complex. Its billions of neurons, which are themselves complicated and diverse, interact with one another through trillions of connections.

“Conceptually, it’s clear that all these interactions are going to lead to pretty complex things. And these are not going to be things that we can explain in stories that we tell,” Fiete says. “We really will need mathematical models. They will allow us to ask about what changes when we perturb one or several components — greatly accelerating the rate of discovery relative to doing those experiments in real brains.”

By representing the interactions between the components of a neural circuit, a model gives researchers the power to explore those interactions, manipulate them, and predict the circuit’s behavior under different conditions.

“You can observe these neurons in the same way that you would observe real neurons. But you can do even more, because you have access to all the neurons and you have access to all the connections and everything in the network,” explains computational neuroscientist and McGovern Associate Investigator Guangyu Robert Yang (no relation to Lisa Yang), who joined MIT as a junior faculty member in July 2021.
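The article does not spell out what the center’s models look like, but the points about perturbation and total observability can be made with a toy example. The sketch below, a minimal recurrent rate network written in Python with NumPy, is purely illustrative: every parameter is invented, and its standard tanh rate dynamics stand in for far richer models. It records every unit at every time step, then silences one neuron to measure how the rest of the circuit responds, an experiment that is trivial in silico and painstaking in a real brain.

```python
import numpy as np

# Minimal, illustrative rate network (not the ICoN Center's actual models).
# Dynamics: tau * dr/dt = -r + tanh(W r + I), integrated with Euler steps.
rng = np.random.default_rng(0)
N, T, dt, tau = 50, 500, 0.1, 1.0
W = rng.normal(0, 1.2 / np.sqrt(N), (N, N))  # random recurrent weights
I = rng.normal(0, 0.5, N)                    # constant external drive

def simulate(perturb=None):
    """Return the activity of every unit at every time step: the kind of
    complete access to a circuit that no real recording provides."""
    r = np.zeros(N)
    trace = np.empty((T, N))
    for t in range(T):
        r = r + dt / tau * (-r + np.tanh(W @ r + I))
        if perturb is not None:
            unit, value = perturb
            r[unit] = value                  # clamp one neuron's rate
        trace[t] = r
    return trace

baseline = simulate()
lesioned = simulate(perturb=(0, 0.0))        # silence unit 0 throughout

# Because every neuron and connection is visible, the downstream effect of
# the perturbation can be read off directly rather than inferred.
change = np.abs(baseline[-1, 1:] - lesioned[-1, 1:]).mean()
print(f"mean change in other units after silencing unit 0: {change:.4f}")
```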

Many neuroscience models represent specific functions or parts of the brain. But with advances in computation and machine learning, along with the widespread availability of experimental data with which to test and refine models, “there’s no reason that we should be limited to that,” he says.

Robert Yang’s team at the McGovern Institute is working to develop models that integrate multiple brain areas and functions. “The brain is not just about vision, just about cognition, just about motor control,” he says. “It’s about all of these things. And all these areas, they talk to one another.” Likewise, he notes, it’s impossible to separate the molecules in the brain from their effects on behavior, although those aspects of neuroscience have traditionally been studied independently by researchers with vastly different expertise.

The ICoN Center will bridge these divides, bringing together neuroscientists and software engineers to deal with all types of data about the brain. To foster interdisciplinary collaboration, every postdoctoral fellow and engineer at the center will work with multiple faculty mentors. Working in three closely interacting scientific cores, fellows will develop computational technologies for analyzing molecular data, neural circuits, and behavior, such as tools to identify patterns in neural recordings or automate the analysis of human behavior to aid psychiatric diagnoses. These technologies will also help researchers model neural circuits, ultimately transforming data into knowledge and understanding.
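The article does not specify what these pattern-finding tools will look like. As one hedged illustration of the simplest version of the idea, the Python sketch below applies principal component analysis, a workhorse of population-recording analysis, to simulated spike counts; the dataset, its dimensions, and the three hidden “patterns” are all invented for the example.

```python
import numpy as np
from sklearn.decomposition import PCA

# Illustrative stand-in for a population recording: spike counts from
# 200 neurons over 1,000 time bins, generated from three hidden patterns.
rng = np.random.default_rng(1)
n_bins, n_neurons, n_latents = 1000, 200, 3
latents = rng.normal(size=(n_bins, n_latents))      # hidden population signals
loadings = rng.normal(size=(n_latents, n_neurons))  # how each neuron mixes them
counts = latents @ loadings + rng.normal(scale=2.0, size=(n_bins, n_neurons))

# PCA, one of the simplest pattern-finding tools used on neural recordings,
# summarizes the shared structure in a handful of components.
pca = PCA(n_components=10).fit(counts)
print("variance explained by the first three components:",
      pca.explained_variance_ratio_[:3].round(3))
```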

“Lisa is focused on helping the scientific community realize its goals in translational research,” says Nergis Mavalvala, dean of the School of Science and the Curtis and Kathleen Marble Professor of Astrophysics. “With her generous support, we can accelerate the pace of research by connecting the data to the delivery of tangible results.”

Computational modeling

In its first five years, the ICoN Center will prioritize four areas of investigation: episodic memory and exploration, including functions like navigation and spatial memory; complex or stereotypical behavior, such as the perseverative behaviors associated with autism and obsessive-compulsive disorder; cognition and attention; and sleep. The goal, Fiete says, is to model the neuronal interactions that underlie these functions so that researchers can predict what will happen when something changes — when certain neurons become more active or when a genetic mutation is introduced, for example. When paired with experimental data from MIT labs, the center’s models will help explain not just how these circuits work, but also how they are altered by genes, the environment, aging, and disease.

These focus areas encompass circuits and behaviors often affected by psychiatric disorders and neurodegeneration, and models will give researchers new opportunities to explore their origins and potential treatment strategies. “I really think that the future of treating disorders of the mind is going to run through computational modeling,” says McGovern Associate Investigator Josh McDermott.

In McDermott’s lab, researchers are modeling the brain’s auditory circuits. “If we had a perfect model of the auditory system, we would be able to understand why when somebody loses their hearing, auditory abilities degrade in the very particular ways in which they degrade,” he says. Then, he says, that model could be used to optimize hearing aids by predicting how the brain would interpret sound altered in various ways by the device.

Similar opportunities will arise as researchers model other brain systems, McDermott says, noting that computational models help researchers grapple with a dauntingly vast realm of possibilities. “There’s lots of different ways the brain can be set up, and lots of different potential treatments, but there is a limit to the number of neuroscience or behavioral experiments you can run,” he says. “Doing experiments on a computational system is cheap, so you can explore the dynamics of the system in a very thorough way.”
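To make McDermott’s point concrete, here is a hedged sketch of that kind of cheap computational experimentation, reusing the toy rate network from the earlier example: a single loop sweeps the network’s recurrent gain across seven settings and summarizes the resulting dynamics, mapping the model’s behavior in seconds. The gain parameter and the summary statistic are illustrative choices, not anything prescribed by the center.

```python
import numpy as np

# Sketch of cheap in-silico experimentation: sweep the recurrent gain g of
# a toy rate network and summarize the dynamics at each setting. In a real
# brain each setting would be its own experiment; here it is one loop.
rng = np.random.default_rng(2)
N, T, dt = 50, 500, 0.1
W = rng.normal(0, 1.0 / np.sqrt(N), (N, N))
I = rng.normal(0, 0.5, N)

def final_rate_spread(g):
    """Run the network at gain g and return the spread of its final rates."""
    r = np.zeros(N)
    for _ in range(T):
        r = r + dt * (-r + np.tanh(g * (W @ r) + I))
    return r.std()

for g in np.linspace(0.5, 2.0, 7):
    print(f"gain {g:.2f}: final-rate spread {final_rate_spread(g):.4f}")
```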

The ICoN Center will speed the development of the computational tools that neuroscientists need, both for basic understanding of the brain and clinical advances. But Fiete hopes for a culture shift within neuroscience, as well. “There are a lot of brilliant students and postdocs who have skills that are mathematics and computational and modeling based,” she says. “I think once they know that there are these possibilities to collaborate to solve problems related to psychiatric disorders and how we think, they will see that this is an exciting place to apply their skills, and we can bring them in.”