All it took was a graphics card, a game engine, a software engineer, and AI.
Creating A Virtual World
Video game designers routinely pour thousands of hours of labor into painstakingly creating complex 3D objects and environments. But that could soon become much less labor intensive, thanks to a new technique that dreams up trippy video games inspired by real-life video.
NVIDIA showed off new tech, which can produce compelling but glitchy gameplay based on real-life video footage, at the NeurIPS AI conference in Montreal.
The algorithm isn’t quite as impressive as it sounds. A traditional game engine still rendered the underlying virtual environment, but the on-screen graphics were generated frame by frame by the AI.
The result: a playable video game demo that allows you to drive a car down a series of city blocks. It doesn’t sound like much, but it does suggest a future in which deep learning could be used to create studio-quality gaming content — making video games a lot easier to produce.
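The division of labor described above can be sketched in code. In this hypothetical toy version (not NVIDIA's actual system), the game engine emits a per-frame semantic layout of the scene, and a learned generator turns each layout into an RGB frame; here the "generator" is just a stand-in color lookup table.

```python
import numpy as np

# Colors for each semantic label the engine can emit (assumed for illustration).
LABELS = {0: (128, 64, 128),   # road
          1: (70, 70, 70),     # building
          2: (0, 0, 142)}      # car

def engine_layout(height, width, frame_idx):
    """Stand-in for the game engine: emit a semantic label map per frame."""
    layout = np.zeros((height, width), dtype=np.uint8)
    layout[: height // 2, :] = 1              # buildings on the top half
    layout[height // 2 :, :] = 0              # road on the bottom half
    car_x = (frame_idx * 4) % width           # a "car" sliding along the road
    layout[height // 2 :, car_x : car_x + 8] = 2
    return layout

def neural_render(layout):
    """Stand-in for the trained generator: semantic label map -> RGB frame."""
    frame = np.zeros((*layout.shape, 3), dtype=np.uint8)
    for label, color in LABELS.items():
        frame[layout == label] = color
    return frame

# Render a short clip: the engine drives the scene, the "AI" paints the pixels.
frames = [neural_render(engine_layout(64, 96, i)) for i in range(10)]
```

In the real demo, a neural network trained on dashcam footage plays the role of `neural_render`, which is why the same engine-side scene can come out looking like a real city street.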
Rendering The Future
It’s an impressive use of machine learning technology, but the results aren’t quite photorealistic — at least not yet. And to be fair, the demo was created by only one engineer at NVIDIA.
“It’s proof-of-concept rather than a game that’s fun to play,” Bryan Catanzaro, NVIDIA’s vice president of applied deep learning, told The Verge.
But don’t expect to see this in consumer video games anytime soon: Catanzaro told The Verge it could take decades before this kind of technology is actually used by game designers.