VR faces huge production and consumer challenges and is still evolving at a rapid rate. So why is the industry already talking about standardization?
I recently had the pleasure of sitting on a panel at Light Reading’s 2018 Big Communications Event in Austin, Texas. My fellow panelists, Arianne Hinds, Ph.D. (principal architect, video and standards strategy at CableLabs), and Ozgur Oyman, Ph.D. (from Intel and a board member of the VR Industry Forum group), provided a very lively discussion on the topic of VR, MR, and AR. During the course of the talk, Hinds explained how CableLabs is working on a container for VR based on existing technology pioneered in Hollywood’s visual effects (VFX) industry.
First, let me state that I am a big fan of technology. What Hinds and CableLabs are doing is very innovative and nothing short of remarkable. During her explanation of the container format, she said it will massively reduce the amount of bandwidth required to deliver VR while still maintaining incredible resolution. And, on top of that, it won’t need a headset to render. Of course, it’s still very nascent and relies on display technologies that have not yet been commercialized. But it’s a bold step forward in an industry that is still fraught with significant challenges to consumer adoption.
Yet, as she talked about this new development, I couldn’t help but cringe. VR, as the next generation (or even evolution) of video content, has numerous issues that need to be solved right now. First, there is production. It requires significantly more manual processes to create and publish VR video content than it does for 2D. For example, in a 360° environment, how do you hide the camera crew and equipment? And much of the production requires specialized stitching software (still in its infancy) to make the final product. Outside of expensive volumetric video studios (like Intel’s recently announced Intel Studios) that can create more fluid VR content without the need to stitch, creating high-quality VR videos is still hard, complex, and expensive.
Second, there is delivery. If you want to stream VR video, you really need 8K or higher resolution (to reduce pixelation), which requires next-generation, head-mounted displays. And even with the newer codecs (e.g., HEVC and AV1), we are still talking about 10GB or more of bandwidth. And when that VR video is in a chunked HTTP format, there’s the latency to contend with, which can, unfortunately, result in nausea.
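To put those delivery numbers in perspective, here is a rough back-of-envelope sketch of what an 8K VR stream demands. The frame size, frame rate, and bits-per-pixel figure below are illustrative assumptions (compressed bits-per-pixel varies widely by codec and content), not measurements from any specific deployment:

```python
# Back-of-envelope bitrate estimate for streaming 8K VR video.
# All parameters are illustrative assumptions, not measured figures.

def streaming_bitrate_mbps(width, height, fps, bits_per_pixel):
    """Approximate compressed bitrate in megabits per second.

    bits_per_pixel is a rough post-compression rate; modern codecs
    such as HEVC often land somewhere around 0.05-0.1 bpp for
    high-quality content, though the real figure varies widely.
    """
    return width * height * fps * bits_per_pixel / 1e6

# Assumed 8K equirectangular frame (7680x4320) at 60 fps,
# 0.06 bits per pixel after compression.
rate = streaming_bitrate_mbps(7680, 4320, 60, 0.06)
print(f"~{rate:.0f} Mbps")  # on the order of a hundred-plus Mbps
```

Even under these generous compression assumptions, the sustained bitrate sits far above what a typical broadband connection comfortably delivers today, which is the heart of the delivery problem.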
Finally, there’s the cost. Yes, there are some reasonably priced prosumer cameras hitting the market, but it’s still an investment. But with VR camera technology changing at such a rapid pace, gear acquired this year could easily be obsolete in 12 months’ time. And these are only some of the technical challenges!
But, now we are talking about developing a VR container? Is that even a problem to solve right now?
Although, again, I laud CableLabs’ efforts here, there is a danger in trying to bring standardization too soon to an industry that is still trying to figure itself out, one in which the technology is evolving at breakneck speed. Perhaps this container will help galvanize the industry, but I fear that it will divide people into camps: those who support the CableLabs container and those who don’t. Will we end up seeing the kind of container fragmentation we have in HTTP chunked streaming? If we do, VR video adoption may be severely hampered.
And, on top of that, there are no content owners involved in the effort. One of the requirements to be a member of CableLabs is to be an operator. So it’s possible that members like Comcast could bring their NBC folks to the table, but, in speaking with Hinds, I learned that is not happening. Yet it’s not the operators who will implement this container in video production workflows. Shouldn’t content owners have a say in how the container is developed, given that they’ll be the ones who employ it?
I see the merits of the effort—the VR industry needs technology guidance. I just don’t agree with the timing.