Igor Mordatch is working to build machines that can carry on a conversation. So are a great many others: in Silicon Valley, chatbot is now a bona fide buzzword.
But Mordatch is different. He’s not a linguist. He doesn’t deal in the techniques typically applied to language. He’s a roboticist who began his career as an animator. He spent time at Pixar and worked on Toy Story 3, in between stints as an academic at places like Stanford and the University of Washington, where he taught robots to move like humans. “Creating movement from scratch is what I was always interested in,” he says. Now, all this expertise is coming together in an unexpected way.
Born in Ukraine and raised in Toronto, the 31-year-old is now a visiting researcher at OpenAI, the artificial intelligence lab started by Tesla founder Elon Musk and Y Combinator president Sam Altman. There, Mordatch is exploring a new path to machines that can converse not only with humans, but with each other. He’s building virtual worlds where software bots learn to create their own language out of necessity.

As detailed in a research paper published by OpenAI this week, Mordatch and his collaborators created a world where bots are charged with completing certain tasks, like moving themselves to a particular landmark. The world is simple: just a big white square, all of two dimensions, and the bots are colored shapes, a green, red, or blue circle. But the point of this universe is more complex. The world allows the bots to create their own language as a way of collaborating, helping each other complete those tasks.
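Going only by the article's description, the setup can be pictured as a tiny data structure: colored circle bots on a flat 2-D plane, each with a landmark it must reach. The names and fields below are illustrative guesses, not the paper's actual code.

```python
import math
from dataclasses import dataclass

# A rough sketch of the world as described: colored circles on a
# two-dimensional white square, each tasked with reaching a landmark.
@dataclass
class Bot:
    color: str            # "green", "red", or "blue"
    x: float
    y: float
    goal: tuple           # (x, y) of the landmark this bot must reach

def task_done(bot, tol=0.1):
    """Success is simply: did the bot end up at its landmark?"""
    gx, gy = bot.goal
    return math.hypot(bot.x - gx, bot.y - gy) < tol

bot = Bot("green", 0.0, 0.0, goal=(1.0, 1.0))
print(task_done(bot))     # still at the start, far from the landmark
bot.x, bot.y = 1.0, 1.0   # after moving (the learned policy is omitted)
print(task_done(bot))     # now within tolerance of the goal
```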
All this happens through what’s called reinforcement learning, the same fundamental technique that underpinned AlphaGo, the machine from Google’s DeepMind lab that cracked the ancient game of Go. Basically, the bots navigate their world through extreme trial and error, carefully keeping track of what works and what doesn’t as they reach for a reward, like arriving at a landmark. If a particular action helps them achieve that reward, they know to keep doing it. In this same way, they learn to build their own language. Telling each other where to go helps them all get places more quickly. As Mordatch says: “We can reduce the success of dialogue to: Did you end up getting to the green can or not?”
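The trial-and-error loop above can be sketched with a toy version of reinforcement learning (tabular Q-learning, a standard textbook method, not OpenAI's actual setup): a lone bot on a five-cell strip tries actions at random, keeps a running score for what worked, and ends up with the habit of heading toward the landmark. Every number here is an illustrative assumption.

```python
import random

N = 5              # positions 0..4; the landmark sits at position 4
ACTIONS = (-1, 1)  # step left or step right

# the bot's memory: a score for every (position, action) pair
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

def step(state, action):
    nxt = min(max(state + action, 0), N - 1)
    return nxt, (1.0 if nxt == N - 1 else 0.0)   # reward only at the landmark

random.seed(0)
for _ in range(500):                  # many episodes of pure trial and error
    s = 0
    for _ in range(20):
        a = random.choice(ACTIONS)    # explore at random...
        nxt, r = step(s, a)
        # ...but keep track of what works: nudge the score toward the
        # reward plus the best score reachable from the next position
        Q[(s, a)] += 0.5 * (r + 0.9 * max(Q[(nxt, b)] for b in ACTIONS) - Q[(s, a)])
        s = nxt
        if r:                         # reached the landmark: episode over
            break

# the learned habit: from every cell, the higher-scoring action is "go right"
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N - 1)}
print(policy)
```

The multi-agent, language-building version in the paper is far richer, but the core loop is the same: act, observe the reward, and reinforce whatever moved the bot closer to it.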
Successful Language Building
To build their language, the bots assign random abstract characters to simple concepts they learn as they navigate their virtual world. They assign characters to each other, to locations or objects in the virtual world, and to actions like “go to” or “look at.” Mordatch and his colleagues hope that as these bot languages become more complex, related techniques can then translate them into languages like English. That is a long way off—at least as a practical piece of software—but another OpenAI researcher is already working on this kind of “translator bot.” Ultimately, Mordatch says, these methods can give machines a deeper grasp of language, actually show them why language exists—and that provides a springboard to real conversation, a computer interface that computer scientists have long dreamed of but never actually pulled off.
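How arbitrary characters can come to carry shared meaning is often illustrated with a two-bot "signaling game", a classic setup that is only loosely related to the paper's model. In this hedged sketch, a speaker sees a landmark color and utters a meaningless character; a listener hears it and picks a landmark; both are rewarded only when the listener picks correctly, so the characters gradually acquire agreed-upon meanings.

```python
import random

LANDMARKS = ("green", "red", "blue")
SYMBOLS = ("#", "@", "%")            # abstract characters, meaningless at first

# propensity weights: speaker maps landmark -> symbol, listener symbol -> landmark
speak = {l: {c: 1.0 for c in SYMBOLS} for l in LANDMARKS}
listen = {c: {l: 1.0 for l in LANDMARKS} for c in SYMBOLS}

def sample(weights):
    """Pick a key with probability proportional to its weight."""
    return random.choices(list(weights), weights=weights.values())[0]

random.seed(1)
for _ in range(5000):
    target = random.choice(LANDMARKS)     # the task: get to this landmark
    word = sample(speak[target])          # speaker utters a character
    guess = sample(listen[word])          # listener interprets it
    if guess == target:                   # shared reward reinforces both bots
        speak[target][word] += 1.0
        listen[word][guess] += 1.0
    else:                                 # mild punishment for ambiguous usage
        speak[target][word] *= 0.9
        listen[word][guess] *= 0.9

# the emergent lexicon: each landmark ends up with a preferred character
lexicon = {l: max(speak[l], key=speak[l].get) for l in LANDMARKS}
print(lexicon)
```

Nothing assigns meanings in advance; the mapping emerges purely because agreeing on symbols is the only way for both bots to collect the reward, which is the intuition behind letting language arise "out of necessity".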