InterDigital’s video technology development and advanced research capabilities are among the strongest in the video industry. Over the years, our engineers have contributed numerous video coding, storage and delivery innovations and solutions to video standards. InterDigital’s video standards and platforms team actively participates in state-of-the-art video standardization efforts led by international organizations such as ISO/IEC MPEG, ITU-T VCEG, JVET and 3GPP SA4. InterDigital has been involved in the development of High Efficiency Video Coding (HEVC), officially approved as ITU-T H.265 | ISO/IEC 23008-2, and led the standardization of the scalable extension of HEVC (SHVC).
InterDigital is also a lead partner in next-generation video coding technology development, in areas including standard dynamic range, high dynamic range and wide color gamut, and 360-degree video coding. The industry-leading adaptive 360-degree video streaming platform developed by our engineers delivered real-time, high-quality 4K video with very low latency at MWC 2018 and the 5G-Coral VR trial. Our areas of expertise encompass versatile video coding standards, immersive media representation and delivery, video broadcasting and multicasting, real-time interactive video applications, and optimized video delivery over 5G networks.
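As a hedged illustration of the viewport-adaptive tiling idea behind such 360-degree streaming platforms, the minimal sketch below fetches tiles inside the viewer's field of view at a high bitrate and the rest at a low bitrate. The tile grid, bitrates and function names are illustrative assumptions, not the actual platform's design.

```python
import math

# Minimal sketch of viewport-adaptive 360-degree tile streaming:
# tiles covering the viewer's current viewport are fetched at high
# quality, the rest at low quality. Grid and bitrates are assumed.

TILE_COLS, TILE_ROWS = 8, 4          # equirectangular frame split into tiles
HIGH_KBPS, LOW_KBPS = 8000, 1000     # per-tile bitrate ladder (illustrative)

def tile_center(col, row):
    """Yaw/pitch (degrees) of a tile's center on the sphere."""
    yaw = (col + 0.5) / TILE_COLS * 360.0 - 180.0
    pitch = 90.0 - (row + 0.5) / TILE_ROWS * 180.0
    return yaw, pitch

def angular_distance(yaw1, pitch1, yaw2, pitch2):
    """Great-circle angle (degrees) between two view directions."""
    y1, p1, y2, p2 = map(math.radians, (yaw1, pitch1, yaw2, pitch2))
    cos_d = (math.sin(p1) * math.sin(p2) +
             math.cos(p1) * math.cos(p2) * math.cos(y1 - y2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_d))))

def select_tile_bitrates(view_yaw, view_pitch, fov_deg=110.0):
    """Map each tile to a bitrate based on the current viewport."""
    plan = {}
    for row in range(TILE_ROWS):
        for col in range(TILE_COLS):
            yaw, pitch = tile_center(col, row)
            dist = angular_distance(view_yaw, view_pitch, yaw, pitch)
            plan[(col, row)] = HIGH_KBPS if dist <= fov_deg / 2 else LOW_KBPS
    return plan

# Example: viewer looking slightly left and up.
print(select_tile_bitrates(view_yaw=-30.0, view_pitch=15.0))
```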
The InterDigital Artificial Intelligence Lab specializes in adapting and applying the latest advances in Artificial Intelligence (AI) to the creation and delivery of next generation content – and to innovating how these advances can be applied at home for the benefit of consumers everywhere.
We harness the deep learning revolution to build novel services and cognitive apps on resource-constrained devices such as OTT boxes and IoT devices. This includes transparently monitoring user interactions with OTT services to analyze interests and enhance and personalize the user experience.
InterDigital R&I’s Artificial Intelligence Lab transforms large-scale workflow and asset data collected during VFX production into business insights by building predictive analytics that accurately forecast artist and computing resource needs, improve VFX production performance and optimize the digital asset lifecycle.
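As a toy illustration of such predictive analytics, the sketch below fits an ordinary least-squares model on invented historical shot metadata to forecast render hours. The features, data and helper names are assumptions for illustration only, not the Lab's actual models.

```python
import numpy as np

# Toy example: forecast render-farm hours for new VFX shots from
# simple shot metadata. Production models would be far richer.

# Historical shots: [frame_count, fx_element_count] -> render hours
X = np.array([[120, 3], [240, 5], [96, 2], [480, 9], [300, 6]], float)
y = np.array([14.0, 31.0, 10.0, 62.0, 38.0])

# Ordinary least squares with an intercept term.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def forecast_render_hours(frame_count, fx_elements):
    """Predict compute hours for a planned shot (hypothetical helper)."""
    return float(np.dot([frame_count, fx_elements, 1.0], coef))

print(f"{forecast_render_hours(200, 4):.1f} render hours forecast")
```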
InterDigital R&I’s Home Experience Lab is committed to improving the user experience in the home and developing future connected home technologies. Use cases include audio/video/media consumption, entertainment (including cloud gaming), and IoT and Smart City technology.
To do so, we leverage our best-in-class expertise in communication and networking (5G, WiFi, virtualization and SDR), video and VR streaming, cloud computing and gaming, machine learning and AI (including visualization for better in-home understanding and control), and edge computing.
The Home Experience Lab explores ways to offer the most advanced AI-based services in the home while minimizing cloud access, to provide the utmost responsiveness, preserve privacy and seamlessly empower users. To do so, we abstract underlying complexity and work on transparently interfacing CE devices to distribute processing and enhance capabilities.
A main area of focus is home connectivity, supporting user mobility at high throughput and low latency with possible QoS constraints across a large number of devices. Key technologies include local processing capabilities (CPU, video processing, model inference and training) and storage, and innovative AI solutions to steer traffic toward the best connectivity in a multi-radio environment.
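As a hedged illustration of AI-driven traffic steering, the sketch below uses a simple epsilon-greedy bandit to pick among radio links based on observed throughput and latency. The link names, reward shape and parameters are assumptions for illustration, not a description of our deployed algorithm.

```python
import random

# Epsilon-greedy link steering: pick the radio link whose observed
# reward (throughput penalized by latency) has been best so far,
# while occasionally exploring alternatives.

class LinkSteerer:
    def __init__(self, links, epsilon=0.1):
        self.links = list(links)
        self.epsilon = epsilon
        self.counts = {l: 0 for l in links}
        self.avg_reward = {l: 0.0 for l in links}

    def choose(self):
        """Explore a random link occasionally, otherwise exploit the best."""
        if random.random() < self.epsilon:
            return random.choice(self.links)
        return max(self.links, key=lambda l: self.avg_reward[l])

    def report(self, link, throughput_mbps, latency_ms):
        """Update the running reward estimate after a measurement."""
        reward = throughput_mbps - 0.5 * latency_ms  # toy QoS trade-off
        self.counts[link] += 1
        n = self.counts[link]
        self.avg_reward[link] += (reward - self.avg_reward[link]) / n

steerer = LinkSteerer(["wifi_2g", "wifi_5g", "5g_nr"])
link = steerer.choose()
steerer.report(link, throughput_mbps=120.0, latency_ms=8.0)
```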
With users and applications demanding ever-lower latency, especially for VR, InterDigital R&I is developing ultra-low-latency streaming solutions that can meet the most demanding application needs, including volumetric streaming, AR/VR, cloud gaming, unicast/broadcast/multicast delivery and V2X.
At the crossroads of data fusion, machine learning and multi-modal sensing, we develop technologies that synergistically sense the home and provide a platform for new in-home capabilities and services. We combine off-the-shelf sensors with the newest unobtrusive and privacy-aware technologies such as geophones and radar.
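A minimal late-fusion sketch, assuming illustrative modalities, activities and weights, shows how per-modality confidences could be combined to infer an in-home activity; none of the names reflect an actual product.

```python
# Late fusion across sensing modalities: each modality reports a
# per-activity confidence, and a weighted sum picks the most likely
# activity. Weights would normally be learned, not hand-set.

MODALITY_WEIGHTS = {"radar": 0.5, "geophone": 0.3, "audio": 0.2}

def fuse(scores_by_modality):
    """scores_by_modality: {modality: {activity: confidence in [0,1]}}."""
    fused = {}
    for modality, scores in scores_by_modality.items():
        w = MODALITY_WEIGHTS.get(modality, 0.0)
        for activity, conf in scores.items():
            fused[activity] = fused.get(activity, 0.0) + w * conf
    return max(fused, key=fused.get), fused

activity, fused = fuse({
    "radar":    {"walking": 0.8, "sitting": 0.1},
    "geophone": {"walking": 0.6, "sitting": 0.2},
    "audio":    {"walking": 0.3, "sitting": 0.4},
})
print(activity, fused)
```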
InterDigital R&I’s Imaging Science Lab partners with key players across the film and consumer electronics industries to develop cutting-edge tools to analyze, process, represent, compress and render content – enabling the production and delivery of high-quality real and virtual content.
Key areas on which the Imaging Science Lab is focusing include:
The Imaging Science Lab develops new technologies to improve compression efficiency and makes key contributions to the new joint ISO/IEC MPEG and ITU-T VCEG standard. We are also conducting advanced research into the use of deep learning to develop disruptive video codec solutions.
InterDigital R&I’s Imaging Science Lab is actively involved in the development of a complete joint SDR/HDR solution, ensuring backward compatibility and optimal delivery and rendering on next generation displays.
A focus of research at the Imaging Science Lab is the development of new display functionalities and their associated usage at home.
Point Cloud technology is one of the most exciting and versatile visual technologies being developed today. InterDigital R&I is developing pioneering compression solutions for point clouds, leveraging their joint geometry-and-video nature. This includes active participation in the new MPEG Point Cloud Compression (PCC) standard.
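To make the "joint geometry and video" idea concrete, here is a simplified sketch in the spirit of projection-based point cloud compression (as in MPEG V-PCC): points are projected onto 2D depth and color maps that a standard 2D video codec can then compress. Real V-PCC segments the cloud into patches; this single front-facing projection is an illustrative simplification.

```python
import numpy as np

# Project a point cloud onto 2D geometry (depth) and attribute
# (color) maps, plus an occupancy map, so a 2D video codec such as
# HEVC can compress them frame by frame.

def project_point_cloud(points, colors, resolution=256):
    """points: (N,3) floats in [0,1); colors: (N,3) uint8."""
    depth_map = np.full((resolution, resolution), np.inf, dtype=np.float32)
    color_map = np.zeros((resolution, resolution, 3), dtype=np.uint8)
    occupancy = np.zeros((resolution, resolution), dtype=bool)

    us = (points[:, 0] * resolution).astype(int).clip(0, resolution - 1)
    vs = (points[:, 1] * resolution).astype(int).clip(0, resolution - 1)
    for u, v, z, c in zip(us, vs, points[:, 2], colors):
        if z < depth_map[v, u]:          # keep the nearest point per pixel
            depth_map[v, u] = z
            color_map[v, u] = c
            occupancy[v, u] = True
    return depth_map, color_map, occupancy

rng = np.random.default_rng(0)
pts = rng.random((1000, 3)).astype(np.float32)
cols = rng.integers(0, 256, (1000, 3), dtype=np.uint8)
depth, color, occ = project_point_cloud(pts, cols)
print(f"occupied pixels: {occ.sum()}")
```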
InterDigital R&I builds on its heritage of developing cutting-edge solutions for Technicolor, a leader in production services, with a range of technologies that will eventually see broad application beyond specialized services. These include AI-based capture to assist VFX artists, animated face rig extraction (in collaboration with Max Planck Institute), digital make-up, and color management in VFX.
InterDigital’s Immersive Lab develops today’s solutions for tomorrow’s interactive media environment. Through the application of innovations in computer graphics, computer vision, video processing and optics, we offer professionals and end-users alike the solutions they need to amplify their immersive experiences. Focus areas include:
Light Field capture, editing and rendering, with a view to defining more powerful image representations; potential usage includes mobile phones with multiple cameras.
New enabling technologies for guiding light at the nanoscale, including abrupt light deviation and near-field beam forming; potential applications include AR glasses, displays and camera sensors.
FACET (Facial Animation Control for Expression Transfer) is a proprietary tool we developed that streamlines 3D facial animation for VFX and animation artists. We also developed a fully automatic pipeline that creates a fully rigged CG character, ready to be animated with a complete facial expression system, from a photogrammetric capture rig.
Augmented Reality solutions allow users to view and interact with virtual objects inserted into the real-world environment. We characterize lighting parameters to better blend virtual objects into the real environment, and use an Augmented Reality shared server (AR Hub) to build, store and share advanced scene descriptions, enabling device localization in the environment, remote interaction with real objects and advanced augmented reality experiences.
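As an illustrative sketch of what such a shared scene description might contain, the example below models anchors, virtual objects attached to them, and a lighting estimate, serialized for sharing between devices via the hub. All class and field names are hypothetical, not the AR Hub's actual schema.

```python
import json
from dataclasses import dataclass, asdict, field

# Hypothetical shared scene description: anchors with poses, virtual
# objects expressed relative to anchors, and a lighting estimate used
# to blend virtual content into the real scene.

@dataclass
class Anchor:
    anchor_id: str
    position: tuple      # (x, y, z) in the shared world frame, meters
    rotation: tuple      # quaternion (x, y, z, w)

@dataclass
class VirtualObject:
    object_id: str
    anchor_id: str       # pose is expressed relative to this anchor
    model_uri: str
    offset: tuple = (0.0, 0.0, 0.0)

@dataclass
class SceneDescription:
    scene_id: str
    anchors: list = field(default_factory=list)
    objects: list = field(default_factory=list)
    ambient_light_lux: float = 0.0   # estimated lighting for blending

    def to_json(self):
        """Serialize for sharing between devices via the hub."""
        return json.dumps(asdict(self))

scene = SceneDescription("living_room")
scene.anchors.append(Anchor("table", (0.0, 0.7, 1.2), (0, 0, 0, 1)))
scene.objects.append(VirtualObject("lamp", "table", "models/lamp.glb"))
scene.ambient_light_lux = 320.0
print(scene.to_json())
```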
The development of Virtual Reality capabilities and applications is a fertile area for new research. InterDigital R&I’s Immersive Lab is pioneering technologies in VR-based advanced collaborative production environments, game engine technologies that enable the creation and viewing of high-end visuals in near real time, social VR, and VR interactivity.