New imaging compression technology, 10K 360-degree content and 3D human body reconstruction.
The Fraunhofer Heinrich Hertz Institute (HHI) is presenting its latest innovations in the field of immersive imaging technologies at NAB 2018 in Las Vegas.
As resolutions and frame rates increase, so too does the volume of data that needs to be streamed and compressed. Current compression standards such as High Efficiency Video Coding (HEVC) are only just capable of managing the task, and this is where Fraunhofer HHI comes in.
As part of the Joint Video Experts Team (JVET), a collaboration between the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG), Fraunhofer HHI submitted a proposal for a cutting-edge coding technology to address the problem, with the goal of inclusion in the final standard by 2020.
At NAB 2018, Fraunhofer HHI will be showing the codec to the public for the first time. It already delivers significant coding-efficiency improvements over HEVC for content ranging from standard High Definition (HD) to High Dynamic Range Ultra-HD. This improved efficiency also makes it far better suited to 360-degree video and virtual reality (VR) applications. In addition, Fraunhofer HHI will present the world’s first demonstration of continuous live streaming of 360-degree VR video at a resolution beyond 4K.
This demonstration will be made possible thanks to high-resolution VR 360-degree 10K video capturing and live rendering from the Fraunhofer HHI Omnicam-360, tile-based live encoding with the Fraunhofer HHI HEVC encoder, packaging according to the MPEG-OMAF Viewport-Dependent Media Profile and high-quality playback on VR glasses and TV screens.
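The key idea behind the viewport-dependent profile is that only the tiles covering the viewer's current field of view need to be streamed at full quality, while the rest can be fetched at a lower bitrate. A minimal sketch of that tile-selection step, assuming a simple equirectangular layout split into vertical tile columns (the function names and tiling scheme are illustrative, not the actual OMAF or Fraunhofer HHI implementation):

```python
def angular_diff(a, b):
    """Smallest absolute difference between two angles, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def tiles_in_viewport(yaw_deg, fov_deg=90.0, num_tiles=8):
    """Return the indices of equirectangular tile columns that overlap a
    viewport centered at yaw_deg with horizontal field of view fov_deg.
    A client would request these tiles in high quality and the rest in
    a low-quality fallback representation."""
    tile_width = 360.0 / num_tiles
    selected = []
    for i in range(num_tiles):
        center = -180.0 + (i + 0.5) * tile_width
        # A tile overlaps the viewport if its center is within
        # half the FOV plus half the tile width of the view direction.
        if angular_diff(center, yaw_deg) <= fov_deg / 2.0 + tile_width / 2.0:
            selected.append(i)
    return selected

# Looking straight ahead selects the middle tiles:
print(tiles_in_viewport(0.0))    # → [2, 3, 4, 5]
# Looking backwards wraps around the ±180° seam:
print(tiles_in_viewport(180.0))  # → [0, 1, 6, 7]
```

As the viewer turns, the selected set changes and the client switches which tile streams it downloads at full quality, which is what keeps a 10K panorama streamable over ordinary connections.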
All of this is made possible by the continued improvements and developments Fraunhofer HHI has made within the immersive technology sector.
Elsewhere, Fraunhofer HHI has developed a new, uniquely integrated 360-degree multi-camera capture and lighting system for creating highly realistic volumetric video content of moving persons. A set of 16 stereo cameras captures 3D information from all angles, which is then processed into a natural, dynamic 3D representation of the person.
The process is fully automatic, meaning the generated meshes can be integrated directly into VR and augmented reality (AR) applications.
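At a high level, each stereo camera pair yields a depth map, and the per-camera depth is back-projected to 3D points before the 16 views are fused into a single mesh. A minimal sketch of the back-projection step, assuming a basic pinhole camera model (this is a textbook illustration, not Fraunhofer HHI's actual reconstruction pipeline):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (in meters) into camera-space 3D points
    using pinhole intrinsics (focal lengths fx, fy; principal point cx, cy).
    In a multi-camera rig, each camera's points would next be transformed
    into a shared world frame and fused into one surface."""
    h, w = depth.shape
    # Pixel coordinate grids: u runs along columns, v along rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # shape (h, w, 3)

# A flat 2 m depth plane back-projects to points spread around the optical axis:
pts = depth_to_points(np.full((2, 3), 2.0), fx=2.0, fy=2.0, cx=1.0, cy=1.0)
print(pts.shape)  # → (2, 3, 3)
```

Doing this for all 16 stereo pairs and merging the results is what turns the raw camera footage into a watertight, animatable 3D model of the subject.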