Machines that learn language more like kids do

Children learn language by observing their environment, listening to the people around them, and connecting the dots between what they see and hear. Among other things, this helps children establish their language’s word order, such as where subjects and verbs fall in a sentence.

In computing, learning language is the task of syntactic and semantic parsers. These systems are trained on human-annotated sentences that describe the structure and meaning behind words. Parsers are becoming increasingly important for web searches, natural-language database querying, and voice-recognition systems such as Alexa and Siri. Soon, they may also be used for home robotics.

But gathering the annotation data can be time-consuming and difficult for less common languages. Additionally, humans don’t always agree on the annotations, and the annotations themselves may not accurately reflect how people naturally speak.

In a paper being presented at this week’s Empirical Methods in Natural Language Processing conference, MIT researchers describe a parser that learns through observation to more closely mimic a child’s language-acquisition process, which could greatly extend the parser’s capabilities. To learn the structure of language, the parser observes captioned videos, with no other information, and associates the words with recorded objects and actions. Given a new sentence, the parser can then use what it’s learned about the structure of the language to accurately predict a sentence’s meaning, without the video.

This “weakly supervised” approach — meaning it requires limited training data — mimics how children can observe the world around them and learn language, without anyone providing direct context. The approach could expand the types of data and reduce the effort needed for training parsers, according to the researchers. A few directly annotated sentences, for instance, could be combined with many captioned videos, which are easier to come by, to improve performance.

In the future, the parser could be used to improve natural interaction between humans and personal robots. A robot equipped with the parser, for instance, could constantly observe its environment to reinforce its understanding of spoken commands, including when the spoken sentences aren’t fully grammatical or clear. “People talk to each other in partial sentences, run-on thoughts, and jumbled language. You want a robot in your home that will adapt to their particular way of speaking … and still figure out what they mean,” says co-author Andrei Barbu, a researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Center for Brains, Minds, and Machines (CBMM) within MIT’s McGovern Institute.

The parser could also help researchers better understand how young children learn language. “A child has access to redundant, complementary information from different modalities, including hearing parents and siblings talk about the world, as well as tactile information and visual information, [which help him or her] to understand the world,” says co-author Boris Katz, a principal research scientist and head of the InfoLab Group at CSAIL. “It’s an amazing puzzle, to process all this simultaneous sensory input. This work is part of a bigger piece to understand how this kind of learning happens in the world.”

Co-authors on the paper are: first author Candace Ross, a graduate student in the Department of Electrical Engineering and Computer Science and CSAIL, and a researcher in CBMM; Yevgeni Berzak PhD ’17, a postdoc in the Computational Psycholinguistics Group in the Department of Brain and Cognitive Sciences; and CSAIL graduate student Battushig Myanganbayar.

Visual learner

For their work, the researchers combined a semantic parser with a computer-vision component trained in object, human, and activity recognition in video. Semantic parsers are generally trained on sentences annotated with code that ascribes meaning to each word and the relationships between the words. Some have been trained on still images or computer simulations.

The new parser is the first to be trained using video, Ross says. In part, videos are more useful in reducing ambiguity. If the parser is unsure about, say, an action or object in a sentence, it can reference the video to clear things up. “There are temporal components — objects interacting with each other and with people — and high-level properties you wouldn’t see in a still image or just in language,” Ross says.

The researchers compiled a dataset of about 400 videos depicting people carrying out a number of actions, including picking up an object or putting it down, and walking toward an object. Participants on the crowdsourcing platform Mechanical Turk then provided 1,200 captions for those videos. They set aside 840 video-caption examples for training and tuning, and used 360 for testing. One advantage of using vision-based parsing is “you don’t need nearly as much data — although if you had [the data], you could scale up to huge datasets,” Barbu says.

In training, the researchers gave the parser the objective of determining whether a sentence accurately describes a given video. They fed the parser a video and matching caption. The parser extracts possible meanings of the caption as logical mathematical expressions. The sentence, “The woman is picking up an apple,” for instance, may be expressed as: λxy. woman x, pick_up x y, apple y.
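The logical form above can be sketched in code. What follows is an illustrative toy, not the authors' system: it represents the caption's candidate meaning as a conjunction of predicates over two variables and checks whether some assignment of detected entities satisfies it. The labels and actions below are made-up stand-ins for what a vision component might report.

```python
# Illustrative sketch: the caption "The woman is picking up an apple" as
# the logical form λxy. woman(x) ∧ pick_up(x, y) ∧ apple(y), tested
# against hypothetical detections from one video.
from itertools import permutations

# Hypothetical detections: two tracked entities and one observed action.
labels = {0: {"woman", "person"}, 1: {"apple", "object"}}
actions = {("pick_up", 0, 1)}

def meaning(x, y):
    """Candidate meaning of the caption as a conjunction of predicates."""
    return "woman" in labels[x] and "apple" in labels[y] and ("pick_up", x, y) in actions

# The expression is true of the video if some assignment of detected
# entities to the variables x and y satisfies every predicate.
satisfied = any(meaning(x, y) for x, y in permutations(labels, 2))
print(satisfied)  # True
```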

Those expressions and the video are fed into the computer-vision algorithm, called “Sentence Tracker,” developed by Barbu and other researchers. The algorithm looks at each video frame to track how objects and people transform over time, to determine if actions are playing out as described. In this way, it determines if the meaning is possibly true of the video.

Connecting the dots

The expression with the most closely matching representations for objects, humans, and actions becomes the most likely meaning of the caption. The expression, initially, may refer to many different objects and actions in the video, but the set of possible meanings serves as a training signal that helps the parser continuously winnow down possibilities. “By assuming that all of the sentences must follow the same rules, that they all come from the same language, and seeing many captioned videos, you can narrow down the meanings further,” Barbu says.

In short, the parser learns through passive observation: To determine if a caption is true of a video, the parser by necessity must identify the highest probability meaning of the caption. “The only way to figure out if the sentence is true of a video [is] to go through this intermediate step of, ‘What does the sentence mean?’ Otherwise, you have no idea how to connect the two,” Barbu explains. “We don’t give the system the meaning for the sentence. We say, ‘There’s a sentence and a video. The sentence has to be true of the video. Figure out some intermediate representation that makes it true of the video.’”
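The selection step described above can be sketched as a simple argmax over candidate meanings. This is a hedged toy, not the paper's method: the candidate logical forms and the compatibility scores are invented, standing in for what the Sentence Tracker would compute from a real video.

```python
# Sketch of the training signal: among several candidate meanings for a
# caption, keep the one the (hypothetical) vision component scores as
# most consistent with the video.

def best_meaning(candidates, video_score):
    """Return the candidate logical form the video supports most strongly."""
    return max(candidates, key=video_score)

# Toy stand-ins: candidate parses of one caption, and made-up scores for
# how well each matches the tracked objects and actions.
candidates = ["pick_up(woman, apple)", "put_down(woman, apple)", "approach(woman, chair)"]
scores = {"pick_up(woman, apple)": 0.92, "put_down(woman, apple)": 0.31, "approach(woman, chair)": 0.05}
print(best_meaning(candidates, scores.get))  # pick_up(woman, apple)
```

Repeating this selection over many captioned videos is what lets the parser winnow down which meanings the language's rules permit.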

From this training, the parser builds a syntactic and semantic grammar for the words it has learned. Given a new sentence, the parser no longer requires videos, but leverages its grammar and lexicon to determine sentence structure and meaning.

Ultimately, this process is learning “as if you’re a kid,” Barbu says. “You see the world around you and hear people speaking to learn meaning. One day, I can give you a sentence and ask what it means and, even without a visual, you know the meaning.”

“This research is exactly the right direction for natural language processing,” says Stefanie Tellex, a professor of computer science at Brown University who focuses on helping robots use natural language to communicate with humans. “To interpret grounded language, we need semantic representations, but it is not practical to make them available at training time. Instead, this work captures representations of compositional structure using context from captioned videos. This is the paper I have been waiting for!”

In future work, the researchers are interested in modeling interactions, not just passive observations. “Children interact with the environment as they’re learning. Our idea is to have a model that would also use perception to learn,” Ross says.

This work was supported, in part, by the CBMM, the National Science Foundation, a Ford Foundation Graduate Research Fellowship, the Toyota Research Institute, and the MIT-IBM Brain-Inspired Multimedia Comprehension project.

Future Forward: Leadership Lessons from Patrick McGovern

More than half a century ago in a small gray house in Newton, Massachusetts, Patrick McGovern ’59 started what would eventually become the global publishing, research and technology investment powerhouse IDG. In the year 2000, he became a world-renowned philanthropist with his establishment of MIT’s McGovern Institute for Brain Research, one of the top neuroscience institutes in the world.

In the new book Future Forward: Leadership Lessons from Patrick McGovern, the Visionary Who Circled the Globe and Built a Technology Media Empire, author Glenn Rifkin details the legendary principles that McGovern relied on to drive the success of both IDG and the McGovern Institute: forge a clear mission that brings together everyone at all levels in an organization; empower employees to make decisions and propose new ideas; and create invigorating, positive atmospheres that bring out the best in people.

These lessons and more are detailed in Future Forward, available now at bookstores everywhere.

Tracking down changes in ADHD

Attention deficit hyperactivity disorder (ADHD) is marked by difficulty maintaining focus on tasks, and increased activity and impulsivity. These symptoms ultimately interfere with the ability to learn and function in daily tasks, but the source of the problem could lie at different levels of brain function, and it is hard to parse out exactly what is going wrong.

A new study co-authored by McGovern Institute Associate Investigator Michael Halassa has managed to develop tasks that dissociate lower-level from higher-level brain functions, so that disruptions to these processes can be assessed separately in ADHD. The results of this study, carried out in collaboration with co-corresponding authors Wei Ji Ma, Andra Mihali, and researchers from New York University, illuminate how brain function is disrupted in ADHD and highlight a role for perceptual deficits in this condition.

The underlying deficit in ADHD has largely been attributed to executive function — higher order processing and the ability of the brain to integrate information and focus attention. But there have been some hints, largely through reports from those with ADHD, that the very ability to accurately receive sensory information might be altered. Some people with ADHD, for example, have reported impaired visual function and even changes in color processing. Cleanly separating these perceptual brain functions from the impact of higher order cognitive processes has proven difficult, however. It is not clear whether people with and without ADHD encode visual signals received by the eye in the same way.

“We realized that psychiatric diagnoses in general are based on clinical criteria and patient self-reporting,” says Halassa, who is also a board certified psychiatrist and an assistant professor in MIT’s Department of Brain and Cognitive Sciences. “Psychiatric diagnoses are imprecise, but neurobiology is progressing to the point where we can use well-controlled parameters to standardize criteria, and relate disorders to circuits,” he explains. “If there are problems with attention, is it the spotlight of attention itself that’s affected in ADHD, or the ability of a person to control where this spotlight is focused?”

To test how people with and without ADHD encode visual signals in the brain, Halassa, Ma, Mihali, and collaborators devised a perceptual encoding task in which subjects were asked to provide answers to simple questions about the orientation and color of lines and shapes on a screen. The simplicity of this test aimed to remove high-level cognitive input and provide a measure of accurate perceptual coding.

To measure higher-level executive function, the researchers provided subjects with rules about which features and screen areas were relevant to the task, and they switched relevance throughout the test. They monitored whether subjects cognitively adapted to the switch in rules – an indication of higher-order brain function. The authors also analyzed psychometric curve parameters, common in psychophysics, but not yet applied to ADHD.

“These psychometric parameters give us specific information about the parts of sensory processing that are being affected,” explains Halassa. “So, if you were to put on sunglasses, that would shift threshold, indicating that input is being affected, but this wouldn’t necessarily affect the slope of the psychometric function. If the slope is affected, this starts to reflect difficulty in seeing a line or color. In other words, these tests give us a finer readout of behavior, and how to map this onto particular circuits.”
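The distinction Halassa draws can be illustrated with a standard logistic psychometric function, where the threshold sets the curve's horizontal position (the sunglasses case) and the slope sets how sharply performance rises with stimulus strength. This is a generic textbook sketch with illustrative parameter values, not the analysis from the study.

```python
# Sketch of the two psychometric parameters: threshold (position) and
# slope (steepness) of a logistic psychometric function.
import math

def psychometric(stimulus, threshold, slope):
    """Probability of a correct response as a function of stimulus strength."""
    return 1.0 / (1.0 + math.exp(-slope * (stimulus - threshold)))

# Shifting the threshold moves the whole curve, so the same stimulus now
# falls below the 50% point (input is attenuated, like sunglasses)...
p_normal = psychometric(1.0, threshold=1.0, slope=4.0)   # exactly 0.5 at threshold
p_shifted = psychometric(1.0, threshold=1.5, slope=4.0)  # harder: below 0.5
# ...while a shallower slope blurs discrimination around the threshold,
# reflecting genuine difficulty telling stimuli apart.
p_shallow = psychometric(1.5, threshold=1.0, slope=1.0)
print(p_normal, p_shifted < p_normal, p_shallow)
```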

The authors found that changes in visual perception were robustly associated with ADHD, and these changes were also correlated with cognitive function. Individuals with more clinically severe ADHD scored lower on executive function, and basic perception also tracked with these clinical records of disease severity. The authors could even sort ADHD from control subjects, based on their perceptual variability alone. All of this goes to say that changes in perception itself are clearly present in this ADHD cohort, and that they decline alongside changes in executive function.

“This was unexpected,” points out Halassa. “We didn’t expect so much to be explained by lower sensitivity to stimuli, and to see that these tasks become harder as cognitive pressure increases. It wasn’t clear that cognitive circuits might influence processing of stimuli.”

Understanding the true basis of changes in behavior in disorders such as ADHD can be hard to tease apart, but the study gives more insight into changes in the ADHD brain, and supports the idea that quantitative follow up on self-reporting by patients can drive a stronger understanding — and possible targeted treatment — of such disorders. Testing a larger number of ADHD patients and validating these measures on a larger scale is now the next research priority.

Meeting of the minds

In the summer of 2006, before their teenage years began, Mahdi Ramadan and Alexi Choueiri were spirited from their homes amid political unrest in Lebanon. Evacuated on short notice by the U.S. Marines, they were among 2,000 refugees transported to the U.S. on the aircraft carrier USS Nashville.

The two never met in their homeland, nor on the transatlantic journey, and after arriving in the U.S. they went their separate ways. Ramadan and his family moved to Seattle, Washington. Choueiri’s family settled in Chandler, Arizona, where they already had some extended family.

Yet their paths converged 11 years later as graduate students in MIT’s Department of Brain and Cognitive Sciences (BCS). One day last fall, on a walk across campus, Ramadan and Choueiri slowly unraveled their connection. With increasing excitement, they narrowed it down by year, by month, and eventually, by boat, to discover just how close their lives had once come to one another.

Lebanon, the only Middle Eastern country without a desert, enjoys a lush, Mediterranean climate. Amid this natural beauty, though, the country struggles under the weight of deep political and cultural divides that sometimes erupt into conflict.

Despite different Lebanese cultural backgrounds — Ramadan’s family is Muslim and Choueiri’s Christian — they have had remarkably similar experiences as refugees from Lebanon. Both credit those experiences with motivating their interest in neuroscience. Questions about human behavior — How do people form beliefs about the world? Can those beliefs really change? — led them to graduate work at MIT.

In pursuit of knowledge

When they first immigrated to the U.S., school symbolized survival for Ramadan and Choueiri. Not only was education a mode of improving their lives and supporting their families, it was a search for objectivity in their recently upended worlds.

As the family’s primary English speaker, Ramadan became a bulwark for his family in their new country, especially in medical matters; his little sister, Ghida, has cerebral palsy. Though his family has limited financial resources, he emphasizes that both he and his sister have been constantly supported by their parents in pursuit of their educations.

In fact, Ramadan feels motivated by Ghida’s determination to complete her degree in occupational therapy: “That to me is really inspirational, her resilience in the face of her disability and in the face of assumptions that people make about capability. She’s really sassy, she’s really witty, she’s really funny, she’s really intelligent, and she doesn’t see her disability as a disability. She actually thinks it’s an advantage — it actually motivated her to pursue [her education] even more.”

Ramadan hopes his own educational journey, from a low-income evacuee to a neuroscience PhD, can show others like him that success is possible.

Choueiri also relied on academics to adapt to his new world in Arizona. Even in Lebanon, he remembers taking solace from a chaotic world in his education, and once in the U.S., he dove headfirst into his studies.

Choueiri’s hometown in Arizona sometimes felt homogenous, so coming to MIT has been a staggering — and welcome — experience. “The diversity here is phenomenal: meeting people from different cultures, upbringings, countries,” he says. “I love making friends from all over and learning their stories. Being a neuroscientist, I like to know how they were brought up and how their ideas were formed. … It’s like Disneyland for me. I feel like I’m coming to Disneyland every day and high-fiving Mickey Mouse.”

At home at MIT

Ramadan and Choueiri revel in the freedom of thought they have found in their academic home here. They say they feel taken seriously as students and, more importantly, as thinkers. The BCS department values interdisciplinary thought, and cultivates extracurricular student activities like philosophy discussion groups, the development of neuroscience podcasts, and independent, student-led lectures on myriad neuroscience-adjacent topics.

Both students were drawn to neuroscience not only by their experiences as Lebanese-Americans, but by trying to make sense of what happened to them at a young age.

Ramadan became interested in neuroplasticity through self-observation. “You know that feeling of childhood you have where everything is magical and you’re not really aware of things around you? I feel like when I immigrated to the U.S., that feeling went away and I had to become extra-aware of everything because I had to adapt so quickly. So, something that intrigued me about neuroscience is how the brain is able to adapt so quickly and how different experiences can shape and rewire your brain.”

Now in his second year, Ramadan plans to pursue his interest in neuroplasticity in Professor Mehrdad Jazayeri’s lab at the McGovern Institute by investigating how learning changes the brain’s underlying neural circuits; understanding the physical mechanism of plasticity has application to both disease states and artificial intelligence.

Choueiri, a third-year student in the program, is a member of Professor Ed Boyden’s lab at the McGovern Institute. While his interest in neuroscience was similarly driven by his experience as an evacuee, his approach is outward-looking, focused on making sense of people’s choices. Ultimately, the brain controls human ability to perceive, learn, and choose through physiological changes; Choueiri wants to understand not just the human brain, but also the human condition — and to use that understanding to alleviate pain and suffering.

“Growing up in Lebanon, with different religions and war … I became fundamentally interested in human behavior, irrationality, and conflict, and how can we resolve those things … and maybe there’s an objective way to really make sense of where these differences are coming from,” he says. In the Synthetic Neurobiology Group, Choueiri’s research involves developing neurotechnologies to map the molecular interactions of the brain, to reveal the fundamental mechanisms of brain function and repair dysfunction.

Shared identities

As evacuees, Ramadan and Choueiri left their country without notice and without saying goodbye. However, in other ways, their experience was not unlike an immigrant experience. This sometimes makes identifying as a refugee in the current political climate complex, as refugees from Syria and other war-ravaged regions struggle to make a home in the U.S. Still, both believe that sharing their personal experience may help others in difficult positions to see that they do belong in the U.S., and at MIT.

Despite their American identity, Ramadan and Choueiri also share a palpable love for Lebanese culture. They extol the diversity of Lebanese cuisine, which is served mezze-style, making meals an experience full of variety, grilled food, and yogurt dishes. The Lebanese diaspora is another source of great pride for them. Though the population of Lebanon is less than 5 million, as many as 14 million people of Lebanese descent live abroad.

It’s all the more remarkable, then, that Ramadan and Choueiri intersected at MIT, some 6,000 miles from their homeland. The bond they have forged since, through their common heritage, experiences, and interests, is deeply meaningful to both of them.

“I was so happy to find another student who has this story because it allows me to reflect back on those experiences and how they changed me,” says Ramadan. “It’s like a mirror image. … Was it a coincidence, or were our lives so similar that they led to this point?”

This story was written by Bridget E. Begg at MIT’s Office of Graduate Education.

Study reveals how the brain overcomes its own limitations

Imagine trying to write your name so that it can be read in a mirror. Your brain has all of the visual information you need, and you’re a pro at writing your own name. Still, this task is very difficult for most people. That’s because it requires the brain to perform a mental transformation that it’s not familiar with: using what it sees in the mirror to accurately guide your hand to write backward.

MIT neuroscientists have now discovered how the brain tries to compensate for its poor performance in tasks that require this kind of complicated transformation. As it also does in other types of situations where it has little confidence in its own judgments, the brain attempts to overcome its difficulties by relying on previous experiences.

“If you’re doing something that requires a harder mental transformation, and therefore creates more uncertainty and more variability, you rely on your prior beliefs and bias yourself toward what you know how to do well, in order to compensate for that variability,” says Mehrdad Jazayeri, the Robert A. Swanson Career Development Professor of Life Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

This strategy actually improves overall performance, the researchers report in their study, which appears in the Oct. 24 issue of the journal Nature Communications. Evan Remington, a McGovern Institute postdoc, is the paper’s lead author, and technical assistant Tiffany Parks is also an author on the paper.

Noisy computations

Neuroscientists have known for many decades that the brain does not faithfully reproduce exactly what the eyes see or what the ears hear. Instead, there is a great deal of “noise” — random fluctuations of electrical activity in the brain, which can come from uncertainty or ambiguity about what we are seeing or hearing. This uncertainty also comes into play in social interactions, as we try to interpret the motivations of other people, or when recalling memories of past events.

Previous research has revealed many strategies that help the brain to compensate for this uncertainty. Using a framework known as Bayesian integration, the brain combines multiple, potentially conflicting pieces of information and values them according to their reliability. For example, if given information by two sources, we’ll rely more on the one that we believe to be more credible.
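The reliability weighting described above is often formalized as inverse-variance weighting of two Gaussian estimates. The sketch below is a minimal textbook illustration of that idea, with made-up numbers, not a computation from the study.

```python
# Minimal sketch of reliability-weighted (Bayesian) cue combination: two
# noisy estimates are averaged with weights inversely proportional to
# their variances, so the more reliable source counts for more.

def combine(mu_a, var_a, mu_b, var_b):
    """Inverse-variance-weighted fusion of two Gaussian estimates."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    mu = (w_a * mu_a + w_b * mu_b) / (w_a + w_b)
    var = 1.0 / (w_a + w_b)  # the fused estimate is more reliable than either alone
    return mu, var

# Illustrative: the first source is four times as reliable as the second,
# so the fused estimate lands much closer to it.
mu, var = combine(10.0, 1.0, 14.0, 4.0)
print(mu, var)  # 10.8 0.8
```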

In other cases, such as making movements when we’re uncertain exactly how to proceed, the brain will rely on an average of its past experiences. For example, when reaching for a light switch in a dark, unfamiliar room, we’ll move our hand toward a certain height and close to the doorframe, where past experience suggests a light switch might be located.

All of these strategies have been previously shown to work together to increase bias toward a particular outcome, which makes our overall performance better because it reduces variability, Jazayeri says.

Noise can also occur in the mental conversion of sensory information into a motor plan. In many cases, this is a straightforward task in which noise plays a minimal role — for example, reaching for a mug that you can see on your desk. However, for other tasks, such as the mirror-writing exercise, this conversion is much more complicated.

“Your performance will be variable, and it’s not because you don’t know where your hand is, and it’s not because you don’t know where the image is,” Jazayeri says. “It involves an entirely different form of uncertainty, which has to do with processing information. The act of performing mental transformations of information clearly induces variability.”

That type of mental conversion is what the researchers set out to explore in the new study. To do that, they asked subjects to perform three different tasks. For each one, they compared subjects’ performance in a version of the task where mapping sensory information to motor commands was easy, and a version where an extra mental transformation was required.

In one example, the researchers first asked participants to draw a line the same length as a line they were shown, which was always between 5 and 10 centimeters. In the more difficult version, they were asked to draw a line 1.5 times longer than the original line.

The results from this set of experiments, as well as the other two tasks, showed that in the version that required difficult mental transformations, people altered their performance using the same strategies that they use to overcome noise in sensory perception and other realms. For example, in the line-drawing task, in which the participants had to draw lines ranging from 7.5 to 15 centimeters, depending on the length of the original line, they tended to draw lines that were closer to the average length of all the lines they had previously drawn. This made their responses overall less variable and also more accurate.

“This regression to the mean is a very common strategy for making performance better when there is uncertainty,” Jazayeri says.
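The benefit of this strategy can be checked with a small simulation. The sketch below is an illustrative model, not the study's analysis: responses to a noisy measurement are pulled partway toward the mean of the stimulus range, trading a little bias for a larger drop in variability. The noise level is an assumption; the stimulus range matches the harder line-drawing task.

```python
# Sketch of regression to the mean: biasing noisy responses toward the
# average of past stimuli lowers overall mean squared error.
import random

random.seed(0)
LOW, HIGH = 7.5, 15.0             # target line lengths in the harder task, in cm
prior_mean = (LOW + HIGH) / 2.0
noise_sd = 2.0                    # made-up internal (computation) noise

def error(shrink):
    """Mean squared error when responses are pulled toward the prior mean."""
    total = 0.0
    trials = 10_000
    for _ in range(trials):
        stimulus = random.uniform(LOW, HIGH)
        noisy = stimulus + random.gauss(0.0, noise_sd)
        response = shrink * prior_mean + (1.0 - shrink) * noisy
        total += (response - stimulus) ** 2
    return total / trials

# Pulling responses partway toward the mean beats trusting the noisy
# estimate alone (shrink = 0).
print(error(0.0) > error(0.4))  # True
```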

Noise reduction

The new findings led the researchers to hypothesize that when people get very good at a task that requires complex computation, the noise will become smaller and less detrimental to overall performance. That is, people will trust their computations more and stop relying on averages.

“As it gets easier, our prediction is the bias will go away, because that computation is no longer a noisy computation,” Jazayeri says. “You believe in the computation; you know the computation is working well.”

The researchers now plan to further study whether people’s biases decrease as they learn to perform a complicated task better. In the experiments they performed for the Nature Communications study, they found some preliminary evidence that trained musicians performed better in a task that involved producing time intervals of a specific duration.

The research was funded by the Alfred P. Sloan Foundation, the Esther A. and Joseph Klingenstein Fund, the Simons Foundation, the McKnight Endowment Fund for Neuroscience, and the McGovern Institute.

Monitoring electromagnetic signals in the brain with MRI

Researchers commonly study brain function by monitoring two types of electromagnetism — electric fields and light. However, most methods for measuring these phenomena in the brain are very invasive.

MIT engineers have now devised a new technique to detect either electrical activity or optical signals in the brain using a minimally invasive sensor for magnetic resonance imaging (MRI).

MRI is often used to measure changes in blood flow that indirectly represent brain activity, but the MIT team has devised a new type of MRI sensor that can detect tiny electrical currents, as well as light produced by luminescent proteins. (Electrical impulses arise from the brain’s internal communications, and optical signals can be produced by a variety of molecules developed by chemists and bioengineers.)

“MRI offers a way to sense things from the outside of the body in a minimally invasive fashion,” says Aviad Hai, an MIT postdoc and the lead author of the study. “It does not require a wired connection into the brain. We can implant the sensor and just leave it there.”

This kind of sensor could give neuroscientists a spatially accurate way to pinpoint electrical activity in the brain. It can also be used to measure light, and could be adapted to measure chemicals such as glucose, the researchers say.

Alan Jasanoff, an MIT professor of biological engineering, brain and cognitive sciences, and nuclear science and engineering, and an associate member of MIT’s McGovern Institute for Brain Research, is the senior author of the paper, which appears in the Oct. 22 issue of Nature Biomedical Engineering. Postdocs Virginia Spanoudaki and Benjamin Bartelle are also authors of the paper.

Detecting electric fields

Jasanoff’s lab has previously developed MRI sensors that can detect calcium and neurotransmitters such as serotonin and dopamine. In this paper, they wanted to expand their approach to detecting biophysical phenomena such as electricity and light. Currently, the most accurate way to monitor electrical activity in the brain is by inserting an electrode, which is very invasive and can cause tissue damage. Electroencephalography (EEG) is a noninvasive way to measure electrical activity in the brain, but this method cannot pinpoint the origin of the activity.

To create a sensor that could detect electromagnetic fields with spatial precision, the researchers realized they could use an electronic device — specifically, a tiny radio antenna.

MRI works by detecting radio waves emitted by the nuclei of hydrogen atoms in water. These signals are usually detected by a large radio antenna within an MRI scanner. For this study, the MIT team shrank the radio antenna down to just a few millimeters in size so that it could be implanted directly into the brain to receive the radio waves generated by water in the brain tissue.

The sensor is initially tuned to the same frequency as the radio waves emitted by the hydrogen atoms. When the sensor picks up an electromagnetic signal from the tissue, its tuning changes and the sensor no longer matches the frequency of the hydrogen atoms. When this happens, a weaker image arises when the sensor is scanned by an external MRI machine.
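The detuning readout described above can be caricatured with a resonance curve: pickup of the hydrogen signal falls off as the antenna's resonant frequency is pulled away from the tissue's emission frequency. The Lorentzian line shape and all numbers below are textbook assumptions for illustration, not parameters from the paper.

```python
# Illustrative sketch: relative sensitivity of a resonant antenna as a
# function of how far a local field has detuned it from the hydrogen
# (Larmor) frequency, using a simple Lorentzian approximation.

def pickup(f_signal, f_resonance, bandwidth):
    """Relative sensitivity of a resonant antenna (Lorentzian line shape)."""
    detuning = (f_signal - f_resonance) / bandwidth
    return 1.0 / (1.0 + detuning ** 2)

LARMOR = 400.0  # MHz; hypothetical scanner frequency

tuned = pickup(LARMOR, f_resonance=400.0, bandwidth=1.0)    # matched: full signal
detuned = pickup(LARMOR, f_resonance=402.0, bandwidth=1.0)  # shifted by a local signal
print(tuned, detuned)  # 1.0 0.2 -> the image darkens where the sensor detunes
```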

The researchers demonstrated that the sensors can pick up electrical signals similar to those produced by action potentials (the electrical impulses fired by single neurons), or local field potentials (the sum of electrical currents produced by a group of neurons).

“We showed that these devices are sensitive to biological-scale potentials, on the order of millivolts, which are comparable to what biological tissue generates, especially in the brain,” Jasanoff says.

The researchers performed additional tests in rats to study whether the sensors could pick up signals in living brain tissue. For those experiments, they designed the sensors to detect light emitted by cells engineered to express the protein luciferase.

Normally, luciferase’s exact location cannot be determined when it is deep within the brain or other tissues, so the new sensor offers a way to expand the usefulness of luciferase and more precisely pinpoint the cells that are emitting light, the researchers say. Luciferase is commonly engineered into cells along with another gene of interest, allowing researchers to determine whether the genes have been successfully incorporated by measuring the light produced.

Smaller sensors

One major advantage of this sensor is that it does not need to carry any kind of power supply, because the radio signals that the external MRI scanner emits are enough to power the sensor.

Hai, who will be joining the faculty at the University of Wisconsin at Madison in January, plans to further miniaturize the sensors so that more of them can be injected, enabling the imaging of light or electrical fields over a larger brain area. In this paper, the researchers performed modeling showing that a 250-micron sensor (a few tenths of a millimeter) should be able to detect electrical activity on the order of 100 millivolts, similar in magnitude to a neural action potential.

Jasanoff’s lab is interested in using this type of sensor to detect neural signals in the brain, and they envision that it could also be used to monitor electromagnetic phenomena elsewhere in the body, including muscle contractions or cardiac activity.

“If the sensors were on the order of hundreds of microns, which is what the modeling suggests is in the future for this technology, then you could imagine taking a syringe and distributing a whole bunch of them and just leaving them there,” Jasanoff says. “What this would do is provide many local readouts by having sensors distributed all over the tissue.”

The research was funded by the National Institutes of Health.

Electrical properties of dendrites help explain our brain’s unique computing power

Neurons in the human brain receive electrical signals from thousands of other cells, and long neural extensions called dendrites play a critical role in incorporating all of that information so the cells can respond appropriately.

Using hard-to-obtain samples of human brain tissue, MIT neuroscientists have now discovered that human dendrites have different electrical properties from those of other species. Their studies reveal that electrical signals weaken more as they flow along human dendrites, resulting in a higher degree of electrical compartmentalization, meaning that small sections of dendrites can behave independently from the rest of the neuron.

These differences may contribute to the enhanced computing power of the human brain, the researchers say.

“It’s not just that humans are smart because we have more neurons and a larger cortex. From the bottom up, neurons behave differently,” says Mark Harnett, the Fred and Carole Middleton Career Development Assistant Professor of Brain and Cognitive Sciences. “In human neurons, there is more electrical compartmentalization, and that allows these units to be a little bit more independent, potentially leading to increased computational capabilities of single neurons.”

Harnett, who is also a member of MIT’s McGovern Institute for Brain Research, and Sydney Cash, an assistant professor of neurology at Harvard Medical School and Massachusetts General Hospital, are the senior authors of the study, which appears in the Oct. 18 issue of Cell. The paper’s lead author is Lou Beaulieu-Laroche, a graduate student in MIT’s Department of Brain and Cognitive Sciences.

Neural computation

Dendrites can be thought of as analogous to transistors in a computer, performing simple operations using electrical signals. Dendrites receive input from many other neurons and carry those signals to the cell body. If stimulated enough, a neuron fires an action potential — an electrical impulse that then stimulates other neurons. Large networks of these neurons communicate with each other to generate thoughts and behavior.

The structure of a single neuron often resembles a tree, with many branches bringing in information that arrives far from the cell body. Previous research has found that the strength of electrical signals arriving at the cell body depends, in part, on how far they travel along the dendrite to get there. As the signals propagate, they become weaker, so a signal that arrives far from the cell body has less of an impact than one that arrives near the cell body.
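
This distance-dependent weakening can be sketched with the standard passive cable-theory approximation, in which a steady-state signal decays exponentially with distance from its input site. The length constant below is an assumed illustrative value; real dendrites vary widely and also carry active conductances that this sketch ignores.

```python
import math

def voltage_at_soma(v_input_mv, distance_um, length_constant_um=500.0):
    """Steady-state passive attenuation along a dendrite (cable-theory sketch).

    The length constant (lambda) is an assumed illustrative value, not a
    measured property of any real neuron.
    """
    return v_input_mv * math.exp(-distance_um / length_constant_um)

# A synaptic input arriving near the cell body loses little amplitude...
near = voltage_at_soma(10.0, distance_um=100.0)

# ...while the same input arriving far out on a long dendrite is much weaker.
far = voltage_at_soma(10.0, distance_um=1000.0)

print(near, far)
```

Under this approximation, doubling the travel distance squares the attenuation factor, which is why the longer dendrites of human neurons compartmentalize inputs more strongly.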

Dendrites in the cortex of the human brain are much longer than those in rats and most other species, because the human cortex has evolved to be much thicker than that of other species. In humans, the cortex makes up about 75 percent of the total brain volume, compared to about 30 percent in the rat brain.

Although the human cortex is two to three times thicker than that of rats, it maintains the same overall organization, consisting of six distinctive layers of neurons. Neurons from layer 5 have dendrites long enough to reach all the way to layer 1, meaning that human dendrites have had to elongate as the human brain has evolved, and electrical signals have to travel that much farther.

In the new study, the MIT team wanted to investigate how these length differences might affect dendrites’ electrical properties. They were able to compare electrical activity in rat and human dendrites, using small pieces of brain tissue removed from epilepsy patients undergoing surgical removal of part of the temporal lobe. In order to reach the diseased part of the brain, surgeons also have to take out a small chunk of the anterior temporal lobe.

With the help of MGH collaborators Cash, Matthew Frosch, Ziv Williams, and Emad Eskandar, Harnett’s lab was able to obtain samples of the anterior temporal lobe, each about the size of a fingernail.

Evidence suggests that the anterior temporal lobe is not affected by epilepsy, and the tissue appears normal when examined with neuropathological techniques, Harnett says. This part of the brain appears to be involved in a variety of functions, including language and visual processing, but is not critical to any one function; patients are able to function normally after it is removed.

Once the tissue was removed, the researchers placed it in a solution very similar to cerebrospinal fluid, with oxygen flowing through it. This allowed them to keep the tissue alive for up to 48 hours. During that time, they used a technique known as patch-clamp electrophysiology to measure how electrical signals travel along dendrites of pyramidal neurons, which are the most common type of excitatory neurons in the cortex.

These experiments were performed primarily by Beaulieu-Laroche. Harnett’s lab, among others, has previously done this kind of experiment in rodent dendrites, but his team is the first to analyze the electrical properties of human dendrites.

Unique features

The researchers found that because human dendrites cover longer distances, a signal flowing along a human dendrite from layer 1 to the cell body in layer 5 is much weaker when it arrives than a signal flowing along a rat dendrite from layer 1 to layer 5.

They also showed that human and rat dendrites have the same number of ion channels, which regulate the current flow, but these channels occur at a lower density in human dendrites as a result of the dendrite elongation. They also developed a detailed biophysical model that shows that this density change can account for some of the differences in electrical activity seen between human and rat dendrites, Harnett says.
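
The density argument itself is simple arithmetic: the same total channel count spread over a longer dendrite necessarily yields a lower density per unit length. The numbers below are purely hypothetical, chosen only to illustrate the relationship.

```python
# Hypothetical illustrative numbers -- not measurements from the study.
total_channels = 1_000_000
rat_dendrite_um = 1_000      # shorter dendrite
human_dendrite_um = 2_500    # elongated dendrite (cortex ~2-3x thicker)

rat_density = total_channels / rat_dendrite_um      # channels per micron
human_density = total_channels / human_dendrite_um  # channels per micron

print(rat_density, human_density)  # same total count, lower human density
```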

Nelson Spruston, senior director of scientific programs at the Howard Hughes Medical Institute Janelia Research Campus, described the researchers’ analysis of human dendrites as “a remarkable accomplishment.”

“These are the most carefully detailed measurements to date of the physiological properties of human neurons,” says Spruston, who was not involved in the research. “These kinds of experiments are very technically demanding, even in mice and rats, so from a technical perspective, it’s pretty amazing that they’ve done this in humans.”

The question remains, how do these differences affect human brainpower? Harnett’s hypothesis is that because of these differences, which allow more regions of a dendrite to influence the strength of an incoming signal, individual neurons can perform more complex computations on the information.

“If you have a cortical column that has a chunk of human or rodent cortex, you’re going to be able to accomplish more computations faster with the human architecture versus the rodent architecture,” he says.

There are many other differences between human neurons and those of other species, Harnett adds, making it difficult to tease out the effects of dendritic electrical properties. In future studies, he hopes to explore further the precise impact of these electrical properties, and how they interact with other unique features of human neurons to produce more computing power.

The research was funded by the Natural Sciences and Engineering Research Council of Canada, the Dana Foundation David Mahoney Neuroimaging Grant Program, and the National Institutes of Health.

Fujitsu Laboratories and MIT’s Center for Brains, Minds and Machines broaden partnership

Fujitsu Laboratories Ltd. and MIT’s Center for Brains, Minds and Machines (CBMM) have announced a multi-year philanthropic partnership focused on advancing the science and engineering of intelligence while supporting the next generation of researchers in this emerging field. The new commitment follows on several years of collaborative research among scientists at the two organizations.

Founded in 1968, Fujitsu Laboratories has conducted a wide range of basic and applied research in the areas of next-generation services, computer servers, networks, electronic devices, and advanced materials. CBMM, a multi-institutional, National Science Foundation-funded science and technology center focused on the interdisciplinary study of intelligence, was established in 2013 and is headquartered at MIT’s McGovern Institute for Brain Research. CBMM is also the foundation of “The Core” of the MIT Quest for Intelligence, launched earlier this year. The partnership between the two organizations started in March 2017, when Fujitsu Laboratories sent a visiting scientist to CBMM.

“A fundamental understanding of how humans think, feel, and make decisions is critical to developing revolutionary technologies that will have a real impact on societal problems,” said Shigeru Sasaki, CEO of Fujitsu Laboratories. “The partnership between MIT’s Center for Brains, Minds and Machines and Fujitsu Laboratories will help advance critical R&D efforts in both human intelligence and the creation of next-generation technologies that will shape our lives,” he added.

The new Fujitsu Laboratories Co-Creation Research Fund, established with a philanthropic gift from Fujitsu Laboratories, will fuel new, innovative and challenging projects in areas of interest to both Fujitsu and CBMM, including the basic study of computations underlying visual recognition and language processing, creation of new machine learning methods, and development of the theory of deep learning. Alongside funding for research projects, Fujitsu Laboratories will also fund fellowships, beginning in 2019, for graduate students attending CBMM’s summer course, contributing to the future of research and society on a long-term basis. The intensive three-week course gives advanced students from universities worldwide a “deep end” introduction to the problem of intelligence. These students will later have the opportunity to travel to Fujitsu Laboratories in Japan or its overseas locations in the U.S., Canada, U.K., Spain, and China to meet with Fujitsu researchers.

“CBMM faculty, students, and fellows are excited for the opportunity to work alongside scientists from Fujitsu to make advances in complex problems of intelligence, both real and artificial,” said CBMM’s director Tomaso Poggio, who is also an investigator at the McGovern Institute and the Eugene McDermott Professor in MIT’s Department of Brain and Cognitive Sciences. “Both Fujitsu Laboratories and MIT are committed to creating revolutionary tools and systems that will transform many industries, and to do that we are first looking to the extraordinary computations made by the human mind in everyday life.”

As part of the partnership, Poggio will be a featured keynote speaker at the Fujitsu Laboratories Advanced Technology Symposium on Oct. 9. In addition, Tomotake Sasaki, a former visiting scientist and current research affiliate in the Poggio Lab, will continue to collaborate with CBMM scientists and engineers on reinforcement learning and deep learning research projects. Moyuru Yamada, a visiting scientist in the lab of Professor Josh Tenenbaum, is also studying the computational model of human cognition and exploring its industrial applications. Moreover, Fujitsu Laboratories is planning to invite CBMM researchers to its offices in Japan and overseas and arrange internships for interested students.

Model helps robots navigate more like humans do

When moving through a crowd to reach some end goal, humans can usually navigate the space safely without thinking too much. They can learn from the behavior of others and note any obstacles to avoid. Robots, on the other hand, struggle with such navigational concepts.

MIT researchers have now devised a way to help robots navigate environments more like humans do. Their novel motion-planning model lets robots determine how to reach a goal by exploring the environment, observing other agents, and exploiting what they’ve learned before in similar situations. A paper describing the model was presented at this week’s IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

Popular motion-planning algorithms will create a tree of possible decisions that branches out until it finds good paths for navigation. A robot that needs to navigate a room to reach a door, for instance, will create a step-by-step search tree of possible movements and then execute the best path to the door, considering various constraints. One drawback, however, is that these algorithms rarely learn: Robots can’t leverage information about how they or other agents acted previously in similar environments.
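
A minimal sketch of this kind of tree-based planner is shown below. It grows an RRT-style search tree in a plain 2-D square; the start, goal, step size, and free-space check are all hypothetical stand-ins, not the planners benchmarked in the paper.

```python
import math
import random

def rrt_plan(start, goal, is_free, step=0.5, max_iters=2000, goal_tol=0.5):
    """Minimal RRT-style sketch: grow a tree of collision-free steps toward
    random samples until a node lands near the goal."""
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iters):
        # Goal-biased sampling: occasionally aim straight at the goal.
        sample = goal if random.random() < 0.1 else (
            random.uniform(0.0, 10.0), random.uniform(0.0, 10.0))
        # Extend the tree's nearest node one step toward the sample.
        i = min(range(len(nodes)), key=lambda j: math.dist(nodes[j], sample))
        d = math.dist(nodes[i], sample)
        if d == 0:
            continue
        new = (nodes[i][0] + step * (sample[0] - nodes[i][0]) / d,
               nodes[i][1] + step * (sample[1] - nodes[i][1]) / d)
        if not is_free(new):
            continue
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:
            # Walk back up the tree to recover the path.
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None

# An open 10x10 "room" with no obstacles, purely for illustration.
path = rrt_plan((1.0, 1.0), (9.0, 9.0),
                is_free=lambda p: 0.0 <= p[0] <= 10.0 and 0.0 <= p[1] <= 10.0)
```

Note that every call starts from scratch: the tree built on one run tells the planner nothing about the next, which is the drawback the researchers set out to fix.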

“Just like when playing chess, these decisions branch out until [the robots] find a good way to navigate. But unlike chess players, [the robots] explore what the future looks like without learning much about their environment and other agents,” says co-author Andrei Barbu, a researcher at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Center for Brains, Minds, and Machines (CBMM) within MIT’s McGovern Institute. “The thousandth time they go through the same crowd is as complicated as the first time. They’re always exploring, rarely observing, and never using what’s happened in the past.”

The researchers developed a model that combines a planning algorithm with a neural network that learns to recognize paths that could lead to the best outcome, and uses that knowledge to guide the robot’s movement in an environment.

In their paper, “Deep sequential models for sampling-based planning,” the researchers demonstrate the advantages of their model in two settings: navigating through challenging rooms with traps and narrow passages, and navigating areas while avoiding collisions with other agents. A promising real-world application is helping autonomous cars navigate intersections, where they have to quickly evaluate what others will do before merging into traffic. The researchers are currently pursuing such applications through the Toyota-CSAIL Joint Research Center.

“When humans interact with the world, we see an object we’ve interacted with before, or are in some location we’ve been to before, so we know how we’re going to act,” says Yen-Ling Kuo, a PhD student in CSAIL and first author on the paper. “The idea behind this work is to add to the search space a machine-learning model that knows from past experience how to make planning more efficient.”

Boris Katz, a principal research scientist and head of the InfoLab Group at CSAIL, is also a co-author on the paper.

Trading off exploration and exploitation

Traditional motion planners explore an environment by rapidly expanding a tree of decisions that eventually blankets an entire space. The robot then looks at the tree to find a way to reach the goal, such as a door. The researchers’ model, however, offers “a tradeoff between exploring the world and exploiting past knowledge,” Kuo says.

The learning process starts with a few examples. A robot using the model is trained on a few ways to navigate similar environments. The neural network learns what makes these examples succeed by interpreting the environment around the robot, such as the shape of the walls, the actions of other agents, and features of the goals. In short, the model “learns that when you’re stuck in an environment, and you see a doorway, it’s probably a good idea to go through the door to get out,” Barbu says.

The model combines the exploration behavior from earlier methods with this learned information. The underlying planner, called RRT*, was developed by MIT professors Sertac Karaman and Emilio Frazzoli. (It’s a variant of a widely used motion-planning algorithm known as Rapidly-exploring Random Trees, or RRT.) The planner creates a search tree while the neural network mirrors each step and makes probabilistic predictions about where the robot should go next. When the network makes a prediction with high confidence, based on learned information, it guides the robot on a new path. If the network doesn’t have high confidence, it lets the robot explore the environment instead, like a traditional planner.
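
The confidence-gated choice between the learned model and plain exploration can be sketched as follows. The interface, the threshold, and the toy stand-ins for the network and the uniform sampler are all hypothetical; the paper’s actual network and integration with RRT* are more involved.

```python
import random

def choose_next_sample(predict, explore, confidence_threshold=0.8):
    """Confidence-gated sampling sketch (hypothetical interface): follow the
    learned model when it is confident, otherwise fall back to uniform
    exploration as a traditional planner would."""
    point, confidence = predict()
    if confidence >= confidence_threshold:
        return point   # exploit: bias the tree toward the predicted region
    return explore()   # explore: draw an uninformed uniform sample

# Toy stand-ins for the learned model and the uniform sampler.
confident_model = lambda: ((9.0, 9.0), 0.95)
uniform = lambda: (random.uniform(0.0, 10.0), random.uniform(0.0, 10.0))

sample = choose_next_sample(confident_model, uniform)
print(sample)  # the confident prediction wins here
```

Because the gate falls back to uniform sampling whenever confidence is low, the planner keeps the completeness of ordinary tree search while spending most of its samples where past experience says paths are likely to be found.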

For example, the researchers demonstrated the model in a simulation known as a “bug trap,” where a 2-D robot must escape from an inner chamber through a central narrow channel and reach a location in a surrounding larger room. Blind alleys on either side of the channel can get robots stuck. In this simulation, the robot was trained on a few examples of how to escape different bug traps. When faced with a new trap, it recognizes features of the trap, escapes, and continues to search for its goal in the larger room. The neural network helps the robot find the exit to the trap, identify the dead ends, and gives the robot a sense of its surroundings so it can quickly find the goal.

Results in the paper are based on the chance that a path is found within a given time, the total length of the path that reached a given goal, and how consistent the paths were. In both simulations, the researchers’ model more quickly plotted paths that were far shorter and more consistent than those of a traditional planner.

“This model is interesting because it allows a motion planner to adapt to what it sees in the environment,” says Stephanie Tellex, an assistant professor of computer science at Brown University, who was not involved in the research. “This can enable dramatic improvements in planning speed by customizing the planner to what the robot knows. Most planners don’t adapt to the environment at all. Being able to traverse long, narrow passages is notoriously difficult for a conventional planner, but they can solve it. We need more ways that bridge this gap.”

Working with multiple agents

In another experiment, the researchers trained and tested the model in navigating environments with multiple moving agents, which is a useful test for autonomous cars, especially navigating intersections and roundabouts. In the simulation, several agents are circling an obstacle. A robot agent must successfully navigate around the other agents, avoid collisions, and reach a goal location, such as an exit on a roundabout.

“Situations like roundabouts are hard, because they require reasoning about how others will respond to your actions, how you will then respond to theirs, what they will do next, and so on,” Barbu says. “You eventually discover your first action was wrong, because later on it will lead to a likely accident. This problem gets exponentially worse the more cars you have to contend with.”

Results indicate that the researchers’ model can capture enough information about the future behavior of the other agents (cars) to cut off the process early, while still making good decisions in navigation. This makes planning more efficient. Moreover, they only needed to train the model on a few examples of roundabouts with only a few cars. “The plans the robots make take into account what the other cars are going to do, as any human would,” Barbu says.

Going through intersections or roundabouts is one of the most challenging scenarios facing autonomous cars. This work might one day let cars learn how humans behave and how to adapt to drivers in different environments, according to the researchers. This is the focus of the Toyota-CSAIL Joint Research Center work.

“Not everybody behaves the same way, but people are very stereotypical. There are people who are shy, people who are aggressive. The model recognizes that quickly and that’s why it can plan efficiently,” Barbu says.

More recently, the researchers have been applying this work to robots with manipulators that face similarly daunting challenges when reaching for objects in ever-changing environments.