Smartphones are doing things we didn’t think possible only a few years ago, such as video editing and helping drivers navigate. But can tools to create images for virtual and augmented reality actually fit into such a tiny form factor?
The answer, according to Min H. Kim, associate professor of computer science at KAIST in South Korea, is a confident “yes.” Kim -- along with Diego Gutierrez, professor of computer science at Universidad de Zaragoza in Spain, and two graduate students -- has developed software for a standard smartphone that can create 3D models.
To reproduce a physical object in AR/VR, both its appearance and its 3D geometry must be replicated, a task that has traditionally required either specialized hardware or manual artwork, Kim said.
Single-camera methods used so far capture only the 3D geometry of objects, not their textured appearance, Kim said. "Using only 3D geometry cannot reproduce the realistic appearance of the object in the AR/VR environment,” he said. “Our technique can capture high-quality 3D geometry as well as its material appearance so that the objects can be realistically rendered in any virtual environment."
What’s more, Kim added, the team’s algorithm can do the job without specialized systems, such as light stages or commercial 3D scanners that are “extremely expensive and hard to operate.”
For their experiments capturing a variety of objects with different geometries and surface textures -- including metal, wood, plastic, ceramic, resin and paper -- the team members took up to 200 images or video frames from as many angles as possible, without having to keep the camera at a fixed distance from the subject. “The distance does not affect our performance significantly since we take into account the distance between the object and the camera in our mathematical model,” Kim said.
The images are stitched together using a process called numerical optimization. “We first formulate a mathematical model that describes how light transports from a light source to an observer via the surfaces of the object,” he said. “Then we find the parameters of the mathematical model that best describe the input photographs using numerical optimization. Finally, we use a physically based photorealistic rendering technique to render the appearance of the object accurately.”
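The idea Kim describes can be illustrated with a deliberately minimal sketch: render a surface with a simple shading model, then recover the model parameter that best explains the "photographs" by least-squares fitting. This toy example assumes a single Lambertian albedo with known geometry and lighting -- far simpler than the team's actual appearance model, and every name in it is hypothetical.

```python
import numpy as np

def render(albedo, normals, light_dir):
    """Lambertian shading: intensity = albedo * max(0, n . l)."""
    return albedo * np.clip(normals @ light_dir, 0.0, None)

rng = np.random.default_rng(0)

# Unit surface normals seen across many photos of the object.
normals = rng.normal(size=(200, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
light_dir = np.array([0.0, 0.0, 1.0])  # known light direction

# Simulate the input photographs: true albedo plus camera noise.
true_albedo = 0.7
observations = render(true_albedo, normals, light_dir)
observations += rng.normal(scale=0.01, size=observations.shape)

# Numerical optimization: pick the albedo minimizing the squared error
# between the model's prediction and the photographs. For this linear
# model the least-squares solution is closed-form.
shading = np.clip(normals @ light_dir, 0.0, None)
estimated = (shading @ observations) / (shading @ shading)

print(f"recovered albedo: {estimated:.3f}")
```

The recovered value lands very close to 0.7: the same fit-model-to-photos loop, scaled up to full spatially varying reflectance and geometry, is what the numerical optimization in the team's pipeline performs.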
The research was demonstrated at ACM SIGGRAPH Asia in December 2018 using a Nikon D7000 digital camera and the built-in camera on an Android mobile phone.
According to Kim, the team has been contacted by companies interested in taking this technology to market, though he declined to share details.