Posts by Roya Stephens

An interview with InterDigital’s Diana Pani

The 3GPP wireless standards are vital to our work and shape much of the core of our business at InterDigital. In light of the novel Coronavirus and its impact on communities in every corner of the globe, we want to explore the potential impact the pandemic will have on the 3GPP releases scheduled for the coming years, namely Release 16 this year and Release 17 next year. As a future-looking company, we believe it’s important to consider the foreseeable impacts the pandemic might have on the long-awaited 5G rollout and its evolution.
 
This week, InterDigital’s Communications team had a virtual chat with Senior Director of 5G Standards and Research Diana Pani, an active contributor to the standardization of radio access protocols within 3GPP, former RAN2 Vice Chair, and current chair for 3GPP 5G sessions, to better understand the state of 3GPP standards and outlook for 5G. Read on below.
 
This interview has been edited for length and clarity. 
 
IDCC: Diana, like many others at InterDigital, you have been directly involved with 3GPP standards work for years. We understand that Release 16 and Release 17 have now been pushed back three months due to the global pandemic?
 
Diana: Yes, Rel-17 and some aspects of Rel-16 have been officially pushed back by 3 months.
 
The key point is that the Rel-16 ASN.1 freeze date in June remains unchanged, but the functional freeze date has been postponed from March until June. ASN.1 is used to define the message syntax of certain protocols between the network and devices. Typically, we have three months between the functional freeze and the ASN.1 freeze, which allows us to do a thorough review and make corrections to both the functional aspects and the ASN.1. Given the importance of completing the ASN.1 freeze and the release on time, 3GPP working groups are doing the ASN.1 review and completing the remaining functional aspects in parallel. The main target is to complete Rel-16 on time, as scheduled. This of course increases the chance of finding issues that cannot be solved in a backward-compatible manner, but that risk is always present, and we have means to deal with it.
 
From my perspective, 3GPP has not shifted Release 16 completion. There is a huge effort by the 3GPP community to keep the freeze dates by using virtual meetings and progressing discussions by email and conference calls. The plenary session in June, where the freeze was, and still is, scheduled, was shifted by two weeks to allow more time to finalize the corrections and make Rel-16 more stable.
 
IDCC: Will the plenary session take place in the virtual space somehow to follow social distancing practices?
 
Diana: Yes. The March plenary took place about a week ago, and was all done by email.  The June plenary is also expected to be done in the virtual space. 
 
3GPP has actually been holding these virtual meetings since February, and every working group in 3GPP has tried different methods to move things forward. So, for example, RAN1 meetings, which address physical layer aspects, were done purely by email over two weeks, instead of the typical one-week in-person meetings. The RAN2 working group, which I'm involved in, also met over two weeks. In addition to email discussions we also had conference calls, which actually helped our progress significantly. Other groups are considering introducing conference calls in follow-up virtual meetings.
 
The virtual meetings in February allowed the groups to make a surprising amount of progress and complete an important part of the work.  This is why I think we will maintain the June timeline for Release 16. I know people were very skeptical about how much progress could be made over email and conference calls, but in the end, I think we were pretty productive. Of course, we were much less efficient than before, but we were still very productive.
 
IDCC: Why do you think these efforts were so productive?
 
Diana: Two aspects contributed to the progress we made. First, quite simply, everybody knew we were in an unusual situation because of the pandemic. Secondly, we are at the end of Release 16, so whatever remains open is likely to be very specific and detailed, while most of the more complicated and controversial issues that required face-to-face discussion were already completed by the end of last year. For Release 16, we were left with several small but detailed issues, and given the global circumstances, delegates were more willing to compromise and finish the release for the good of the whole industry, rather than fighting for specific individual objectives. There was an atmosphere of people wanting to compromise and move things forward, which was very nice to see. Of course, the virtual meetings were a lot more work for delegates and leadership, but 3GPP leadership did a great job organizing and facilitating the discussions in a way that encouraged progress and consensus.
 
IDCC: Like the rest of the world, we don't know how long this pandemic will endure, or how long we're going to have to practice social distancing. What kind of impact do you think this will have on the overall standards development outlook for the next couple of years?
 
Diana: That's a very good question and it's been a topic of discussion with 3GPP leadership. Leadership has suggested and hopes that we'll be back to normal functioning by August. I and a few others proposed that we should be more conservative and prepare as if we'll have no more face-to-face meetings until the end of the year and will need to continue meeting in a virtual space. What we're trying to do in 3GPP is find ways to make meetings more efficient. Every group is exchanging ideas on how we can make progress, assuming that we may have to do this virtually for a year.
 
IDCC: We've discussed Release 16, but what about Release 17? That release was shifted to December 2021, correct?
 
Diana: Yes. Rel-17 has been shifted by three months. When the first meetings were cancelled in February – coincidentally when Release 17 meetings were supposed to start – 3GPP leadership decided not to conduct any Release 17 work until groups could meet again face-to-face. The rationale behind that decision is that the beginning of a release always produces a lot of diverging views, and it's difficult to reach consensus unless you're having a coffee, a chat, or explaining the technical details face-to-face.
 
However, given the current projections for the pandemic, 3GPP leadership has decided that we will start Release 17 over virtual meetings.
 
IDCC: Do you think the pandemic will have a significant impact on the timelines or the efficiency of the standardization process? We don't know the timeline for the pandemic, and probably won't for some time -- how significant do you think that impact will be? 
 
Diana: It depends on how long it lasts, of course. I think it's inevitable that it will have an impact and delay things. Like I said, virtual meetings are not as efficient as meeting face-to-face. Whatever we could achieve in one week of face-to-face meetings now requires two weeks of emails and conference calls. Even then, I don't think we achieve 50 percent of what we were achieving face-to-face in one week.
 
So of course, it's going to delay things, but at the same time, it might also force a prioritization of our features. Maybe 3GPP would consider prioritizing some of the most important features and re-scope Rel-17 work to complete them on time. The alternative is to slow down the entire release schedule and prolong the implementation of feature improvements in future releases. The other option being considered is hosting additional ‘ad-hoc’ meetings in January next year.
 
IDCC: Early industry analysis suggests that consumer demand for 5G wireless services may fall somewhat because consumers impacted economically by the pandemic may not have as much money to spend on services. Do you think the other aspects of 5G will follow suit? Given the 5G consumer focus on enhanced mobile broadband (eMBB) use cases, and increasing enterprise focus on ultra-reliable low latency (URLLC) and the massive machine type communication (mMTC) 5G use cases, will the pandemic’s effects be felt equally across all three corners of the spectrum?
 
Diana: I don't think it will necessarily impact one use case more than others. I think what could be impacted is the priority in which things are developed.  
 
The pandemic has inevitably impacted life as we know it, and certain things like remote diagnostics, surgery, etc. that require URLLC may become more important and necessary than ever with 5G.  Proximity detection, gaming, AR/VR, virtualization, etc. may also become very important and go up the priority list. At the same time, there are certain things within eMBB that still need to be improved to support some of the high data rate requirements of emerging use cases.
 
IDCC: Isn’t the eMBB use case increasingly important right now because so many people are in home isolation watching Netflix and streaming video, causing some video services to throttle down their streaming speeds because the demand is so high?
 
Diana: Right, but don't forget that some of the capacity issues are actually on the network side and not really on the wireless side.
 
IDCC: That's true.
 
Diana: I personally feel it's very difficult to know which use cases will be impacted. The way 3GPP works is, if they have time, they will address several use cases simultaneously, because they always prepare for the future. We're preparing for use cases that most customers don't even have in mind yet – everything we do today will not be deployed for another four years, at least.
 
So, I think the short-term impact won't be felt immediately. If we must prioritize, that's where we may feel the impact. The operators and industry players will let us know what's essential to them so that we can focus our attention accordingly. 
 
This scenario could actually re-scope Release 17 a little bit, but as of now, 3GPP is not planning on revisiting the scope of Rel-17. The plan is that 3GPP will get things done on time with this shift. For example, they are already considering adding new meetings during the year or next year (virtual, of course), in an effort to adhere to this new three-month timeline shift. 
 
IDCC: Thank you for sharing these considerations. To switch topics a bit, will the pandemic have any impacts on spectrum we should address?
 
Diana: I know that some of the International Telecommunication Union (ITU) meetings where future use of certain spectrum is discussed have been delayed, and some of the spectrum auctions are being postponed as well. That certainly could delay some of the operators from getting the spectrum they need for deployment.
 
IDCC: Do you mean, that is because they don't have all of the spectrum they need right now for those deployments?
 
Diana: Well, I think some operators have already gotten some spectrum, but they also rely on future spectrum to expand and be able to provide all the services that they've promised or want to provide. So far, operators have purchased some spectrum both in the mmWave spectrum and below 6GHz spectrum.  However, additional spectrum will be dependent on further auctions and availability. Until then, operators cannot plan for further deployments.   
 
That might cause some delays, but most operators already have one part of the spectrum to kick off their initial 5G deployments and further 5G enhancements.
 
IDCC: Finally, what does this pandemic tell us about the importance of the wireless industry – and 5G – to the world?
 
Diana: First of all, I think it shows the importance of being able to stay connected, especially during these critical times and while the majority of the world population is in full isolation. It's one of the first times I started to truly appreciate the criticality and importance of having the technology we have today – to allow us to function remotely for a large number of aspects.  We can stay in touch with family and friends, work from home, learn online, be diagnosed remotely without going to the hospital, and be able to do almost anything from our phones.
 
If you look at what is going on right now, we see that 5G is being used for health monitoring, remote diagnostics for doctors, and 5G robots used in hospitals in Wuhan to protect staff from the virus. We'll even see an importance placed on supporting video gaming. Virtualization, a key feature of 5G, is also proving extremely important nowadays because everything has been moving towards the cloud and it is what allows us to function remotely. I think everybody now understands the importance of being able to be virtual and have remote capabilities. And 5G offers all those opportunities.
 
*****
 
Forward-Looking Statements

This blog contains forward-looking statements within the meaning of Section 21E of the Securities Exchange Act of 1934, as amended. Such statements include information regarding the company’s current expectations with respect to the impact the coronavirus pandemic will have on 3GPP wireless standards, the timeline for their development, and demand for 5G services. Words such as "expects," "projects," "forecast," “anticipates,” and variations of such words or similar expressions are intended to identify such forward-looking statements.

Forward-looking statements are subject to risks and uncertainties. Actual outcomes could differ materially from those expressed in or anticipated by such forward-looking statements due to a variety of factors, including, but not limited to, the duration and long-term scope of the ongoing coronavirus pandemic and its potential impacts on standards-setting organizations and the company’s business. We undertake no duty to update publicly any forward-looking statement, whether as a result of new information, future events or otherwise except as may be required by applicable law, regulation or other competent legal authority.

InterDigital is a registered trademark of InterDigital, Inc.
For more information, visit: www.interdigital.com.

December 16, 2019 / IEEE / Posted By: Roya Stephens

This week, InterDigital became the latest donor to the IEEE Information Theory Goldsmith Lecture Program, an award created to highlight the achievements of exceptional female researchers early in their careers, while providing opportunities to acknowledge and publicize their work. With our donation to the program, established this year in honor of Dr. Andrea Goldsmith, InterDigital stands alongside Microsoft, Intel, Nokia, Google and others across the tech ecosystem in elevating female researchers and supporting diversity and inclusion both within and outside of our business.

InterDigital knows that a diversity of backgrounds and perspectives enables us to approach big problems, conduct complex research, and develop effective solutions that work across ecosystems and experiences, despite our modest size. We feel a strong responsibility to acknowledge the benefits of diversity in engineering and innovation, while also encouraging gender diversity of researchers in engineering and the field of information theory.

As part of the Goldsmith Lecture Program, each yearly award recipient will deliver a lecture of her choice to students and postdoctoral researchers at one of the IEEE Information Theory Society’s (ITSoc’s) Schools of Information Theory. In addition to driving inclusion and diversity of thought within the IEEE and the ITSoc, the Goldsmith Lecture Program helps provide more visibility and acknowledgment of the contributions of female researchers to technology. The impact of this program extends beyond IEEE and InterDigital, giving female researchers a platform to share their work while inspiring and encouraging new, more diverse students to explore and innovate with technology.

InterDigital is proud to support the Goldsmith Lecture Program and we congratulate the 2020 award recipient, Ayfer Özgür. Ayfer is currently an assistant professor in Stanford University’s Electrical Engineering Department, and conducted her postdoctoral research with the Algorithmic Research on Networked Information Group.

To learn more about the IEEE Information Theory Society and the Goldsmith Lecture Program, please visit: https://www.itsoc.org/honors/goldsmith-lecture

October 22, 2019 / Posted By: Roya Stephens

As a company dedicated to advanced research and development in wireless and video, we know that our innovations are only enhanced by partnerships with industry leaders and respected academic institutions. That’s why InterDigital is so proud that Dr. Mohammed El-Hajjar has been awarded the Royal Academy of Engineering (RAEng) Industrial Fellowship to work alongside InterDigital to support the evolution of 6G technology.

Dr. El-Hajjar, a professor at the School of Electronics and Computer Science at the University of Southampton, was recently awarded the prestigious RAEng Industrial Fellowship to spearhead a joint project with InterDigital to advance the research and design of transmission receivers for 6G wireless systems. Just 19 professors and researchers received Industrial Fellowships this year, based on the caliber of their research proposals, direct support from an industry partner, and a clear and significant impact on the industry. Other engineering fellowships were awarded for innovative research in plastic waste recycling, carbon dioxide capture, 3D-reconstructed human skin, and more.

The Royal Academy of Engineering Industrial Fellowship

Being awarded the coveted RAEng Industrial Fellowship is an excellent acknowledgment of the proposed benefit our research will bring to industry and validation of InterDigital’s industry leadership in developing the foundations for 5G and 6G networks.

“InterDigital is proud to work alongside Dr. Mohammed El-Hajjar on this project and join the more than 50 industrial partners that have supported RAEng Industrial Fellowship recipients over the past five years,” said Dr. Alain Mourad, Director Engineering R&D at InterDigital. “This is a nice validation of InterDigital’s industry leadership in developing the foundations for 5G and 6G networks alongside our global partners and top-of-class university professors.”

During his fellowship with InterDigital, Dr. El-Hajjar’s research will build upon several years of collaboration with InterDigital to advance the research and design of wireless transceivers for 6G systems. Specifically, Dr. El-Hajjar will design and develop new signal processing techniques based on the concept of Holographic Multiple-Input Multiple-Output (MIMO), which enables unprecedented data rates up to Terabits per second whilst mitigating the challenges of complexity, energy consumption and cost of large antenna arrays in Massive MIMO. The value of this collaborative research will be foundational for the long-term evolution of 5G into 6G.

“With mobile subscribers continuing to demonstrate an insatiable demand for data and billions of smart wireless devices predicted in future services for smart homes, cities, transport, healthcare and environments, the explosive demand for wireless access will soon surpass the data transfer capacity of existing mobile systems,” said Dr. El-Hajjar. “Achieving the vision of fiber-like wireless data rates relies on efficiently harnessing the benefits of massive MIMO and millimeter wave frequencies. A major challenge for achieving this vision is the design trade-off of the underlying cost, complexity and performance requirements of massive MIMO in future wireless communications.”

As a result, Dr. El-Hajjar’s research on Holographic MIMO will improve upon the current, state-of-the-art Massive MIMO framework. Today’s 5G New Radio (NR) networks have largely adopted Massive MIMO, a concept in which base stations are equipped with an array of antennas to simultaneously serve many terminals with the same time-frequency resource. Massive MIMO utilizes hybrid digital-analogue beamforming, in which the number of users, or streams, depends on the number of available radio frequency chains. While Massive MIMO has enabled high energy and spectral efficiency, scalability to the number of base station antennas, and the ability to employ simple data processing at the transmitter and receiver edge, this method faces several hardware impairments. Namely, hybrid beamforming requires a significant number of radio frequency chains and faces inaccuracies in the angular resolution of phase shifters in analogue beamforming.
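The stream-count limit described above can be illustrated with a short sketch. This is not InterDigital's or Dr. El-Hajjar's design, just a minimal NumPy illustration of hybrid digital-analogue beamforming under assumed dimensions (64 antennas, 8 RF chains, 4 streams): the analog stage is restricted to unit-modulus phase-shifter weights, and the number of independent streams the effective precoder can carry is capped by the number of RF chains.

```python
import numpy as np

# Illustrative dimensions (hypothetical, not taken from the article):
n_antennas, n_rf_chains, n_streams = 64, 8, 4  # streams <= RF chains

rng = np.random.default_rng(0)

# Analog beamformer: phase shifters only, so every entry has unit modulus.
phases = rng.uniform(0, 2 * np.pi, size=(n_antennas, n_rf_chains))
F_analog = np.exp(1j * phases) / np.sqrt(n_antennas)

# Digital precoder: unconstrained complex weights, one column per stream.
F_digital = (rng.standard_normal((n_rf_chains, n_streams))
             + 1j * rng.standard_normal((n_rf_chains, n_streams)))

# Effective precoder applied at the antenna array.
F_hybrid = F_analog @ F_digital

# Its rank -- and hence the number of independent user streams --
# can never exceed the number of RF chains in the middle dimension.
assert np.linalg.matrix_rank(F_hybrid) <= n_rf_chains
print(F_hybrid.shape)  # (64, 4)
```

The factorization makes the hardware trade-off concrete: adding streams requires adding costly RF chains, which is exactly the constraint Holographic MIMO aims to relax.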

Holographic MIMO

Dr. El-Hajjar and InterDigital’s joint project will center around the concept of Holographic MIMO, a new and dynamic beamforming technique that uses a software-defined antenna to help lower the costs, size, weight, and power requirements of wireless communications. In other words, the Holographic MIMO method implements a phased antenna array in a conformable and affordable way so that each antenna array has a single radio frequency input and a distribution network to vary the directivity of the beamforming. Utilizing machine learning tools within the Holographic MIMO design ensures a high level of adaptability and reduction of signal overhead at the transmitter and receiver levels, while enabling support for Massive MIMO that is 10 times greater than what is available in 5G NR today.

The Holographic MIMO technique will be foundational to the long-term evolution of 5G into 6G networks. Though this technique has a timeframe of five to ten years before it matures and can be implemented in future iterations of 5G NR, our collaborative research will enable unprecedented data rates while mitigating the challenges of cost, complexity, and energy consumption presented by large antenna arrays in Massive MIMO situations. This fellowship project also aligns with InterDigital’s ongoing research on meta-materials based large intelligent surfaces with the 6G Flagship program at the University of Oulu, as large intelligent surfaces include large antenna arrays that would require techniques like Holographic MIMO to support efficient and operational beamforming.

The year-long Industrial Fellowship will run until September 2020, and InterDigital’s collaboration with the University of Southampton on Beyond 5G intelligent holographic MIMO extends through 2022 as part of InterDigital’s sponsorship of a three-year PhD studentship program at the university.

September 26, 2019 / Posted By: Roya Stephens

InterDigital marked its debut at the IBC trade show in Amsterdam by showcasing five cutting-edge video demonstrations and taking home an award for Best in Show for the Digital Double technology  

A first impression is a lasting one. At last week’s International Broadcasting Convention (IBC) trade show in Amsterdam, InterDigital not only made its debut as a company with expertise and advanced research in wireless and video technologies, but also left a lasting impression with our award-winning technologies and contributions to immersive video.

Engineers from InterDigital's Home Experience, Imaging Science, and Immersive Labs at IBC 2019

 

Throughout the week, engineers from InterDigital R&I’s Home Experience, Immersive, and Imaging Science Labs in Rennes, France displayed their contributions to next-generation video coding standards, volumetric video frameworks, compression schemes, and streaming applications, as well as a cutting-edge tool to automate the creation of digital avatars in VFX, gaming, VR, and other video applications. At the end of the five-day convention, InterDigital received a prestigious prize and recognition of our significant work to enable immersive video and streaming capabilities of the future.

 

InterDigital Wins Best of Show for the Digital Double  

InterDigital received the IBC Best of Show award, presented by TVB Europe for innovations and outstanding products in media and entertainment, for our cutting-edge “Digital Double” technology. Developed in InterDigital’s Immersive Lab, the Digital Double tool improves upon the traditionally time- and labor-intensive 3D avatar creation process to automatically create a person’s digital avatar in less than 30 minutes! Although the Digital Double technology completely automates the avatar creation process, it also gives users the option to make stops and manually finetune the avatar at each step. Using a rig of 14 cameras, the technology computes a full 3D mesh of a person’s face and upper body from the cameras’ images to create more human-like avatars and a precise set of facial expressions for animation.

Bernard Denis and Fabien Danieau hold the IBC Best of Show award for the Digital Double
As we enter the 5G era of ultra-low latency and high bandwidth, video viewers will desire, and be able to enjoy, more immersive video experiences, and our Digital Double tool will become increasingly important to content producers.    

The Best in Show award recognized the Digital Double’s potential to enhance immersive video opportunities of the future, where individuals could see themselves in real-time as a character in a film or on television, or even virtually participate in a game show alongside a presenter, contestants, and audience on screen. The Digital Double technology started at the highest end of the market, in Hollywood film production, and is likely to eventually make its way into the consumer mainstream.

The Digital Double’s foundational facial animation control for expression transfer (FACET) technology has already been used by production companies like Disney and Paramount in blockbuster films such as The Jungle Book remake and The Shape of Water. We are excited to explore this award-winning tech’s applications in virtual reality, gaming, and other immersive experiences where an individual’s digital avatar can be adapted to each context.

 

 

InterDigital’s Contributions to Tech Innovation in the Digital Domain

In addition to the Digital Double technology, InterDigital’s Research and Innovation teams displayed their advanced research to support next-generation and future video streaming capabilities. Laurent Depersin, Director of the InterDigital R&I Home Experience Lab, provided an overview of InterDigital’s contributions to video innovation during a panel discussion on “Technological Innovation in the Digital Domain.” Laurent spoke alongside peers from VoiceInteraction and Haivision to explore the innovations needed to support high-resolution and data-intensive applications for the video content of the future. You may view Laurent’s panel discussion here.

During his presentation, Laurent outlined new video applications that drive the need for technological innovation, as well as InterDigital’s Home Experience Lab’s commitment to develop technologies that both connect and improve user experience in the home. Laurent identified mass increases in video consumption, the popularity of interactive and immersive content like VR and gaming, and the trend towards ultra-high bandwidth and ultra-low latency content in the form of immersive communication and 8K video, as the key drivers of InterDigital’s innovative work in video technology.

Laurent Depersin outlines technological innovation in the digital domain
 

Versatile Video Coding: Improving on the High-Efficiency Video Coding (HEVC) Standard

Lionel Oisel demonstrates the enhanced capabilities of the VVC standard

InterDigital’s demonstration on Versatile Video Coding (VVC), presented by Michel Kerdranvat and Imaging Science Lab Director Lionel Oisel, reflects our work to develop cutting-edge tools that analyze, process, present, compress, and render content to improve the production and delivery of high-quality images.

The InterDigital R&I lab’s contributions to the VVC standard enhance the video compression efficiency of the existing High-Efficiency Video Coding (HEVC) standard published in 2013. Specifically, the demonstration compared the HEVC and VVC video standards and showed how VVC can compress and improve video delivery by lowering the bandwidth and bitrate required for Standard Dynamic Range (SDR), High Dynamic Range (HDR) and immersive, 360-degree video content.

 

The Need for Point Cloud Compression for Immersive Video  

The InterDigital Imaging Science Lab’s demo on Point Cloud Compression, presented by Céline Guede and Ralf Schaefer, built upon the HEVC video coding standard to showcase the vital need for video compression mechanisms to enjoy increasingly immersive and interactive video experiences in VR, AR, and 3D imagery.        

Point Clouds are sets of tiny “points” grouped together to make a 3D image. The Point Cloud has become a popular method for AR and VR video composition, 3D cultural heritage and modeling, and geographic maps for autonomous cars. While this method has many benefits, each Point Cloud video frame typically has 800,000 points, which translates to roughly 1,500 MBps uncompressed – a massive amount of video bandwidth. To address this challenge, our Imaging Science Lab has participated in the development of a Point Cloud Compression method being standardized in MPEG to support widespread industry adoption of the Point Cloud format for immersive video. InterDigital showcased its video-based Point Cloud Compression capabilities in a Point Cloud-created AR video demo streamed to a commercially available smartphone in real time. This technique will support the crisp, low-latency deployment of immersive video experiences through existing network infrastructure and devices.
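A quick back-of-envelope calculation shows why uncompressed point cloud video is so demanding. The per-point attribute sizes and frame rate below are illustrative assumptions, not figures from the article; richer attributes (normals, reflectance, higher bit depths) push the rate well beyond this into the range the article cites.

```python
# Back-of-envelope uncompressed data rate for point cloud video.
# Assumptions (illustrative): 10-bit x/y/z geometry plus 8-bit R/G/B
# color per point, at 30 frames per second.
points_per_frame = 800_000
bits_per_point = 3 * 10 + 3 * 8   # geometry + color = 54 bits
fps = 30

bits_per_second = points_per_frame * bits_per_point * fps
megabytes_per_second = bits_per_second / 8 / 1e6
print(round(megabytes_per_second))  # 162 (MB/s, before compression)
```

Even under these modest assumptions, the raw stream is far beyond what consumer networks deliver, which is why MPEG's video-based compression of the geometry and color planes is essential.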

Ralf Schaefer displays Point Cloud compression on a commercially available smartphone
 

The Challenges and Potential for Volumetric Video  

In concert with our efforts to compress and deliver high bandwidth video, InterDigital R&I’s Immersive Lab also demonstrated its innovative work to enhance immersive experiences that meet our interactive media demands. To give context to the importance of its technological contributions, Immersive Lab Technical Area Leader Valérie Allié delivered a presentation on the challenges and potential of volumetric video and the various applications in which it might be deployed.  

   

Valérie Allié delivers a presentation on the opportunities of volumetric video content

 

Volumetric video is hailed as the next generation of video content where users can feel the sensations of depth and parallax for more natural and immersive video experiences. As AR, VR, and 3D video become a more mainstream consumer demand, providers will require tools to deliver the metadata necessary to produce a fluid, immersive or mixed reality video experience from the perspective of each viewer. As a result, content providers may face challenges in maintaining high video quality while supporting user viewpoint adaptation and low latency.  

MPEG Metadata for Immersive Video: A Roadmap for Volumetric Video Distribution  

Valérie Allié and Julian Fleureau’s demo on MPEG Metadata for Immersive Video outlined both the steps to create volumetric video and the requisite format for its distribution. Unlike flat 2D video experiences, volumetric video is much larger and cannot be streamed over traditional networks. In addition, volumetric video requires the capture of real video through camera rigs, the development of computer-generated content, the creation of a composite film sequence using VFX tools, and the interpolation of a video’s view to create a smooth, unbroken rendering of immersive content from the user’s point of view.                    

 

Addressing the Challenges of Six Degrees of Freedom (6DoF) Streaming

Visitor experiences InterDigital's 6DoF streaming video capabilities on a VR Headset

The significance of the MPEG codec for immersive and volumetric video was put on display in the InterDigital R&I Home Experience Lab’s Six Degrees of Freedom (6DoF) streaming demo, presented by Charline Taibi and Rémi Houdaille. 6DoF refers to the six movements of a viewer in a 3D context, including heave for up and down movements, sway for left and right movements, surge for back and forward movements, yaw for rotation along the normal axis, pitch for rotation along the transverse axis, and roll for rotation along the longitudinal axis.
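The six degrees of freedom listed above can be captured in a small data structure. This is a hypothetical sketch (the names and units are ours, not from InterDigital's demo) of the pose a 6DoF renderer would track for each viewer.

```python
from dataclasses import dataclass

@dataclass
class Pose6DoF:
    """A viewer pose with six degrees of freedom (illustrative units)."""
    # Translational degrees of freedom, in meters:
    sway: float    # left/right
    heave: float   # up/down
    surge: float   # back/forward
    # Rotational degrees of freedom, in radians:
    yaw: float     # rotation about the normal (vertical) axis
    pitch: float   # rotation about the transverse axis
    roll: float    # rotation about the longitudinal axis

# A 6DoF streaming client would re-request and re-project content
# whenever the viewer's pose changes.
viewer = Pose6DoF(sway=0.1, heave=0.0, surge=-0.3, yaw=0.5, pitch=0.0, roll=0.0)
print(viewer.surge)  # -0.3
```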

Using a computer-generated video streamed through a VR headset, the demonstration showed how the standards and codecs developed by InterDigital’s labs can be utilized to stream fully immersive volumetric video with six degrees of freedom over current network infrastructure.

The demonstration achieved a seamless and immersive experience by streaming only content from the viewers’ point of view.  

InterDigital left a lasting impression on all who visited our IBC booth and networking hub and experienced the Research and Innovation Labs’ innovative demos. We are excited to play a role in the pioneering compression solutions and streaming capabilities that will drive and enable the immersive video experiences of the future.