Precious data for a baby AI. (OpenAI)
Virtual reality companies like Oculus and HTC mainly market their devices for entertainment: Play in a virtual world, or be immersed in 360 video. But researchers at OpenAI, the non-profit AI firm backed by Elon Musk, have found another use for the technology: instructing robots how to move.
By recording an action in virtual reality just once, a system of two AI algorithms was able to decode and replicate the action with a physical machine, according to a post on OpenAI’s blog today. The action demonstrated was simple—stacking blocks—but this kind of learning through imitation could be more widely applicable to robots that carry boxes or flip burgers.
OpenAI’s system works using two AI algorithms: one that interprets where everything is, and one that infers what action is happening and how. The researchers call these the vision and imitation networks. Usually, such algorithms need thousands of real-world examples: images of machines looking at blocks, to learn where the blocks are, and then recordings of real machines picking up blocks, to learn how to perform the action itself.
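The two-stage pipeline described above can be sketched in miniature. This is not OpenAI's code; the function names are illustrative stand-ins, with the trained neural networks replaced by trivial hand-written logic, just to show how the vision output feeds the imitation policy.

```python
import numpy as np

def vision_network(image):
    """Stand-in for a trained CNN: locate a block in an image.

    Here we fake it by thresholding bright pixels and returning
    their centroid as a single (x, y) block position."""
    ys, xs = np.nonzero(image > 0.5)
    return np.array([xs.mean(), ys.mean()])

def imitation_network(demo_positions, current_position):
    """Stand-in for the imitation policy: given block positions seen
    in a (VR) demonstration and the block's current position, output
    a motion command toward the next demonstrated waypoint."""
    target = demo_positions[0]           # next waypoint from the demo
    return target - current_position     # simple proportional action

# Fake 8x8 camera image with one bright "block" pixel at row 2, col 5.
image = np.zeros((8, 8))
image[2, 5] = 1.0

position = vision_network(image)                       # -> [5., 2.]
action = imitation_network([np.array([6.0, 3.0])], position)
print(position, action)                                # action points toward the demo waypoint
```

The point of the split is that each stage can be trained separately: the vision stage only needs labeled images, while the imitation stage only needs demonstrations expressed as positions.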
OpenAI’s breakthrough is accomplishing all of that through virtual simulation. The machine was able to accurately copy a human’s VR action in the real world, without ever having moved before.
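One reason simulation alone can work is that a simulator can render endless training scenes with randomized appearance (lighting, colors, camera pose), and it always knows the ground-truth block positions, so labels come for free. The sketch below illustrates that idea; the parameter names and ranges are invented for illustration, not OpenAI's actual configuration.

```python
import random

def random_scene():
    """Generate one randomized simulated scene description.

    In a real pipeline this would drive a renderer; here we just
    record the randomized parameters and the ground-truth label."""
    return {
        # Ground truth the simulator knows exactly -- free supervision.
        "block_position": (random.uniform(0.0, 1.0), random.uniform(0.0, 1.0)),
        # Randomized appearance, so a vision model never overfits
        # to one particular look.
        "light_intensity": random.uniform(0.3, 1.0),
        "block_color": tuple(random.random() for _ in range(3)),
        "camera_jitter": random.gauss(0.0, 0.05),
    }

dataset = [random_scene() for _ in range(1000)]
print(len(dataset), "simulated training scenes")
```

A network trained on enough such randomized scenes has a chance of treating the real world as just one more variation, which is how a robot can act correctly "without ever having moved before."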
“Nothing in our technique is specific to blocks,” says Josh Tobin, a researcher at OpenAI, in a video. “This system is an early prototype that will form the backbone of the general-purpose robotics systems we’re developing here at OpenAI.”
The system is expected to work well with other rigid objects, OpenAI communications and strategy director Jack Clark tells Quartz. But computers struggle to simulate fluid or flexible objects accurately, so the data needed to train robots to handle such objects is harder to generate. The OpenAI researchers plan to work on objects beyond blocks, as well as on robots’ ability to carry objects instead of just stacking them.