Ask An Engineer: What’s Next for Mixed Reality?

Jul 2020/ Posted By: Roya Stephens

<p><em>Caroline Baillard, Immersive Lab Technical Area Leader, outlines the new technologies and workflows shaping the future of mixed reality</em></p>
<p><img style="float: left; margin: 10px 15px 15px 0;" src="https://www.interdigital.com/resources/Newsletter/Caroline.png" alt="InterDigital" width="190" height="253" /></p>
<p>I was recently asked, "What is mixed reality?" It&rsquo;s a simple question, but the answers vary based on who you ask.</p>
<p>While augmented reality (AR) has been in the public consciousness for years now &ndash; and virtual reality (VR) for decades &ndash; mixed reality (MR) is not yet widely understood. The rise of AR, coupled with the further development of VR, has set the stage for MR, a technology that encompasses both its predecessors but goes further than each.</p>
<p>In practical terms, MR provides more than AR's ability to overlay virtual content onto the real world, and more than VR's ability to display interactive and immersive virtual content &ndash; MR is about integrating the two. By blending virtual elements into the real world, we create an entirely new environment &ndash; one that mixes the real with the virtual.</p>
<p>To bring the technology into greater mainstream use and enable next-generation mixed reality experiences, we must develop new technologies and reimagine their workflows. These technologies and workflows address two key needs for MR, the first being the need to make MR experiences more realistic and immersive. The second is a need to equip MR technology with context awareness, which enables the experience to adapt both to the user <em>and</em> the environment.</p>
<p>These technology developments are a response to two major trends in MR. The first trend is, quite simply, the increasing importance of consumer AR. Industrial use cases for AR showcased significant early potential for this technology, but consumer AR will bring maturing MR technologies into the home and the mainstream. The second trend is the rise of the AR cloud. Significant developments in cloud computing and networking have enabled AR experiences to be delivered remotely, and this capability will serve as a vital underpinning of MR. The AR cloud enables people to collaborate and share information in a centralized way &ndash; for example, in 3D-modeled scenes &ndash; to experiment with that information and share knowledge.</p>
<h3>The Technologies</h3>
<p>To bring MR capabilities to the mainstream, a few technologies must be addressed. The overall goal for MR is to develop visual enhancements and allow for personalization of our environment, but how do we do that? Scene modeling holds an important key.</p>
<p>Scene modeling technologies allow users to make a detailed, 3D virtual model of a real physical space &ndash; a task far more difficult than it sounds. The first challenge is modeling indoor spaces, which are the more tractable case because they are constrained by clear boundaries. Outdoor spaces are even more challenging to model, because current technology struggles with such large, unconstrained environments.</p>
<p>There are several factors to consider when modeling a boundary-constrained indoor space, including the 3D geometry of the room, the placement and configuration of furniture and other objects, the color, texture, and reflectiveness of surfaces, and the location, type, and intensity of light sources, among others. All these factors create a very complex scene model &ndash; and that is just for relatively fixed objects. Adding avatars and other people into the scene can dramatically increase the complexity of the modeling, as it introduces new dynamics to the room based on the user's movement throughout the space.</p>
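<p>As a rough illustration of what such a scene model has to track, the sketch below captures a few of the factors listed above as simple data structures. The field names and types are illustrative assumptions for this article, not InterDigital's actual scene format.</p>
<pre><code># Hypothetical sketch of an indoor scene model; names and fields are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Surface:
    vertices: list            # 3D geometry of the surface mesh
    albedo: tuple             # base color (R, G, B)
    reflectiveness: float     # 0.0 = matte, 1.0 = mirror-like
    texture: str = ""         # texture map reference, if any

@dataclass
class LightSource:
    position: tuple           # (x, y, z) in room coordinates
    kind: str                 # "point", "area", "directional", ...
    intensity: float          # luminous intensity, in arbitrary units

@dataclass
class SceneModel:
    surfaces: list = field(default_factory=list)         # walls, floor, tabletops, ...
    lights: list = field(default_factory=list)
    movable_objects: dict = field(default_factory=dict)  # furniture, avatars, people

room = SceneModel()
room.surfaces.append(Surface(vertices=[], albedo=(0.8, 0.8, 0.75), reflectiveness=0.1))
room.lights.append(LightSource(position=(2.0, 1.5, 2.4), kind="point", intensity=800.0))
</code></pre>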
<p>Further complicating this scenario is interaction management &ndash; the next stage in technology development and where the "mixed" in mixed reality begins to shine. When virtual objects interact with the real environment (through occlusions, reflections, collisions, etc.), or when a remote user can begin to interact with the virtual space that has been modeled, the experience becomes far more immersive &ndash; and the modeling becomes an increasingly complex task, as addressed below.</p>
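<p>To make the interaction-management idea concrete, here is a minimal, hypothetical sketch of one such task: detecting collisions between virtual objects and modeled real-world geometry using axis-aligned bounding boxes. A real system would work on full meshes and also handle occlusions, reflections, and lighting; every name and number here is an assumption for illustration.</p>
<pre><code># Toy collision check between virtual objects and real-scene geometry.
def boxes_overlap(box_a, box_b):
    """Each box is ((min_x, min_y, min_z), (max_x, max_y, max_z)) in room coordinates."""
    (a_min, a_max), (b_min, b_max) = box_a, box_b
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i] for i in range(3))

def find_collisions(virtual_objects, real_surfaces):
    """Report every virtual object that touches a modeled real-world surface."""
    return [(obj, surf)
            for obj, obj_box in virtual_objects.items()
            for surf, surf_box in real_surfaces.items()
            if boxes_overlap(obj_box, surf_box)]

# Example: a virtual book resting on (and therefore touching) a real desk top.
virtual = {"virtual_book": ((0.4, 0.4, 0.75), (0.6, 0.6, 0.80))}
real = {"desk_top": ((0.0, 0.0, 0.70), (1.2, 0.8, 0.75))}
print(find_collisions(virtual, real))   # [('virtual_book', 'desk_top')]
</code></pre>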
<h3>Workflows</h3>
<p>In addition to technologies, certain workflows must be developed to enable MR. The nature of mixed reality is such that it benefits from running on a platform based on a central server, which we refer to as an "AR Hub". The architecture of the AR Hub server platform delivers a range of benefits, including centralized data management, multi-user control, higher quality of experience in terms of latency and customization, and a greater degree of privacy. The AR Hub can be installed at a user's location to form the core of the hardware required to deliver MR in a home, for instance. In this way, the MR experience would be primarily tied to the Hub, and critical components of the system would not be limited by variations in personal computing capabilities or network inconsistencies.</p>
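<p>A minimal sketch of that central-server idea follows: one process owns the authoritative scene state, and every user session reads and writes through it. The class and method names are purely illustrative, not an InterDigital API.</p>
<pre><code># Toy stand-in for an "AR Hub": one authoritative scene shared by all sessions.
class ARHub:
    def __init__(self):
        self.scene = {}       # object_id -> pose and other state, shared by all users
        self.sessions = {}    # user_id -> per-device information

    def register(self, user_id, device_capabilities):
        self.sessions[user_id] = {"device": device_capabilities, "viewpoint": None}

    def update_object(self, user_id, object_id, pose):
        # Centralized data management: a single authoritative copy of each object,
        # so every connected user sees the same change without device-side merging.
        self.scene[object_id] = {"pose": pose, "last_editor": user_id}

    def snapshot_for(self, user_id):
        # A real hub would render or filter the scene for this user's viewpoint
        # and device capabilities; here we simply return the shared state.
        return dict(self.scene)

hub = ARHub()
hub.register("alice", {"display": "AR glasses"})
hub.update_object("alice", "virtual_lamp", pose=(1.0, 0.2, 0.8))
print(hub.snapshot_for("alice"))
</code></pre>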
<p>Like all things, the AR Hub model is not without its challenges. The model is dependent upon the maturation of earlier technologies &ndash; specifically advanced scene analysis and modeling, 3D geometry, texture and light source mapping, and other features that are computationally intensive and require dedicated resources. In addition, MR faces unique challenges because of the standard it must meet to feel convincing: each time an object is moved within an MR space &ndash; whether it's a book on the desk, or a chair, or a person &ndash; the impact of that change on geometry, appearance, and semantics must be taken into account and the rendering of the scene must be adjusted.</p>
<h3>Making It Real</h3>
<p>Despite the current challenges with this model, maturation should dramatically improve the realism of mixed reality experiences. For example, when a user is able to remotely alter and engage with a virtual object &ndash; for instance, pick up a virtual book on the desk, interact with it, and then set it down in a different place &ndash; it makes the experience feel more real. When we can enhance the lighting in a room or change the appearance of the furniture, it makes the experience feel more personal. Adding, replacing, or entirely removing an item from the experience exceeds the capabilities of augmented or virtual reality, and starts to become something more.</p>
<p>An important final stepping stone to next-generation MR is the way the innovations described above will make MR more context aware, meaning that the MR experience can adapt to the environment and the user. Context awareness is integral to making the technology truly usable in interactive settings. From this MR model and the related workflows, new applications and prototypes will continue to be built. In the first wave of innovation, we will likely see MR used to enhance interactive games, training, education, healthcare, interior design, building management, and more.</p>
<p>Yet, these MR vertical applications are just the tip of the iceberg. MR will lead us towards a spatial web of opportunities, and it will change our everyday life. If we continue on this path of research and innovation, MR devices will be capable of replacing smartphones as the predominant form factor for communication, changing the way we access and browse the internet for information and content. MR will enable seamless interaction with information, both in the way we search for information about anything we see in a space, and in how we receive and visualize it. But we still have a lot of exciting work to do before we get there.</p>

InterDigital, Blacknut, Nvidia Demonstrate Cloud Gaming Solution with AI-enabled User Interface

Jun 2020/ Posted By: Roya Stephens

<p>The world&rsquo;s first cloud gaming solution with an AI- and machine learning-enabled user interface, developed by InterDigital and presented in partnership with Blacknut and in cooperation with Nvidia, marks the first time an AI/ML-driven user interface has been used with a live cloud gaming solution, without the use of any wearables.</p>
<p>The technology demonstrates the incredible potential of integrating localized and far-Edge enabled AI capabilities into home gaming experiences. The solution leverages unique technologies, including real-time video analysis on home and local edge devices, dynamic adaptation to available compute resources, and shared AI models managed through an in-home AI hub to implement a cutting-edge gaming experience.</p>
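<p>The sketch below illustrates, in simplified form, what dynamic adaptation to available compute resources can look like: deciding whether a model runs on a home device or a nearby edge node based on measured capacity and the latency budget. The device names, costs, and thresholds are invented for illustration and are not taken from the demonstration.</p>
<pre><code># Hypothetical placement decision for an AI model across home and edge devices.
def place_inference(model_cost_gflops, latency_budget_ms, devices):
    """devices: list of dicts like {"name": ..., "free_gflops": ..., "rtt_ms": ...}."""
    candidates = [d for d in devices
                  if d["free_gflops"] >= model_cost_gflops and d["rtt_ms"] < latency_budget_ms]
    # Among devices that can host the model in time, prefer the lowest round trip.
    return min(candidates, key=lambda d: d["rtt_ms"])["name"] if candidates else None

placement = place_inference(
    model_cost_gflops=40,
    latency_budget_ms=30,
    devices=[{"name": "home_ai_hub", "free_gflops": 25, "rtt_ms": 2},
             {"name": "far_edge_gpu", "free_gflops": 200, "rtt_ms": 12}],
)
print(placement)   # "far_edge_gpu" under these made-up numbers
</code></pre>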
<p><img src="InterDigital, Blacknut, Nvidia Demonstrate Cloud Gaming Solution with AI-enabled User Interface" alt="" /><img src="https://www.interdigital.com/resources/uploads/blog_posts/Diagram_for_Cloud_Gaming_with_Far_Edge_AI.jpg" alt="" width="806" height="456" /></p>
<p>Users play a first-person view (FPV) snowboarding game displayed on a commercial television. Without the need for a joystick or handheld controller, players&rsquo; movements and interactions are tracked by AI processing of live video capture. The user&rsquo;s presence is detected with an AI model, and his or her body movements are matched with the snowboarder in the game, in real time, using InterDigital&rsquo;s low-latency Edge AI running on a local AI accelerator.</p>
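<p>In the same spirit, the hypothetical snippet below shows how estimated body keypoints might be mapped to game input once the player has been detected; the keypoint names and lean threshold are assumptions, not details of the actual demo.</p>
<pre><code># Illustrative mapping from pose-estimation keypoints to a snowboarding control.
def snowboard_action(keypoints):
    """keypoints: dict of joint name -> (x, y) in normalized image coordinates."""
    hips_x = (keypoints["left_hip"][0] + keypoints["right_hip"][0]) / 2
    shoulders_x = (keypoints["left_shoulder"][0] + keypoints["right_shoulder"][0]) / 2
    lean = shoulders_x - hips_x          # positive = leaning right, negative = left
    if lean > 0.05:
        return "carve_right"
    if lean < -0.05:
        return "carve_left"
    return "straight"

# One frame's keypoints would come from a pose-estimation model running on the
# local AI accelerator; the values here are made up.
frame_keypoints = {"left_hip": (0.48, 0.60), "right_hip": (0.52, 0.60),
                   "left_shoulder": (0.55, 0.35), "right_shoulder": (0.62, 0.35)}
print(snowboard_action(frame_keypoints))   # carve_right
</code></pre>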
<p><iframe src="https://www.interdigital.com/embedded/ai-cloud-gaming-solutions-demo" frameborder="0" width="640" height="360" type="text/html" allowfullscreen="allowfullscreen" webkitallowfullscreen="webkitallowfullscreen" mozallowfullscreen="mozallowfullscreen"></iframe></p>
<p>This groundbreaking solution addresses the challenges of ensuring the lowest possible end-to-end latency from gesture capture to game action, while accelerating inference of concurrent AI models serving multiple applications to deliver a more interactive and seamless gaming experience.</p>
<p>&nbsp;</p>
<p><a href="https://www.interdigital.com/resources/uploads/blog_posts/InterDigital-Cloud_Gaming_FarEDGE_AI_Deck.pdf"><img src="https://www.interdigital.com/resources/uploads/blog_posts/Cloud_Gaming_Deck.png" alt="" width="806" height="456" /></a></p>
<p>&nbsp;<a href="https://www.interdigital.com/resources/uploads/blog_posts/InterDigital-Cloud_Gaming_FarEDGE_AI_Deck.pdf">Cloud Gaming with Far Edge AI Deck</a></p>
<p>&nbsp;</p>
<p><a href="https://www.interdigital.com/resources/uploads/blog_posts/InterDigital-Cloud_Gaming_FarEDGE_AI_Deck.pdf"><img src="https://www.interdigital.com/resources/uploads/blog_posts/InterDigital-Cloud_Gaming_FarEDGE_AI_Deck.pdf" alt="" /></a></p>

Glass to Glass: Assessing the Video Ecosystem through the Lens of Crisis

May 2020/ Posted By: Roya Stephens

<p>Coronavirus has left millions of people confined to their homes and home offices, resulting in a massive shift to virtual work and online engagement, driving demand for streaming and OTT content to record highs, and exposing new hurdles for video streaming and delivery.</p>
<p>InterDigital recently sponsored a webinar discussion in conjunction with&nbsp;<a href="https://www.fiercevideo.com/">FierceVideo</a> that examined the innovations defining today's video ecosystem and explored how this global pandemic might shape demand for, and access to, interactive video experiences. The webinar featured perspectives from four industry experts &ndash; InterDigital&rsquo;s Alan Stein, ATEME&rsquo;s Micka&euml;l Raulet, Sinclair Broadcast Group&rsquo;s Mark Aitken, and Ultra HD Forum&rsquo;s Benjamin Schwarz &ndash; each representing different facets of the video standards, technology, and delivery arenas.</p>
<p><span style="text-decoration: underline;"><strong> Video Access over Quality </strong></span></p>
<p>With so many aspects of our lives dependent on video technologies even before the pandemic took hold of the world, the unique circumstances of the coronavirus are causing the industry and consumers to look at video technologies in new ways. We all knew that the coronavirus would spark change in the ecosystem, but several changes are happening in unexpected ways. InterDigital's VP of Technology&nbsp;<a href="https://www.interdigital.com/talent/?id=23">Alan Stein</a> remarked that most people assumed video "would just work" as our lives shifted online, a shift that could not have happened 10 years ago. Many schools, for instance, "turned to online education in a matter of weeks," Stein added. He noted that there have been reductions in "quality of experience, quality of service, and even a little downtick in quality of the video itself, but access became the main focus" for schools.</p>
<p><span style="text-decoration: underline;"><strong>New Content Trends </strong></span></p>
<p>Viewing habits have also shifted almost entirely away from live sports, previously a huge consumer of bandwidth. "All of a sudden a whole lot of those pipes have gone empty," said <a href="https://ultrahdforum.org/">Benjamin Schwarz, Communications Chair of the Ultra HD Forum</a>. "But as we stopped watching all of that sport, a lot of us are watching a lot more news, which doesn't really fill the pipes up in the same way," so broadcasting companies have had to adjust their approach accordingly. Despite the very obvious economic downturn, the entertainment industry has largely maintained its resilience to the global crisis, and with many more people spending more time at home, content viewing has understandably increased since the pandemic began.</p>
<p><span style="text-decoration: underline;"><strong>Broadband Access </strong></span></p>
<p>Regional differences in broadband access can have a significant impact on some users' viewing experiences, which creates unique challenges for technology providers and policymakers around the world. Consumers don&rsquo;t often think about the amount of broadband necessary to enable video experiences. "When you have sufficient broadband for video, particularly upstream video, you don't think twice," said Stein. "The people who provision and configure networks may have to respond accordingly," he added, if their networks experience bottlenecks that could impact their users' ability to have their desired video experiences.</p>
<p><span style="text-decoration: underline;"><strong>Video Standards Development</strong></span></p>
<p>Panelists also briefly assessed the intersection of standards, wireless, and broadcasting, which have faced a series of interesting challenges and opportunities since the pandemic emerged. Coronavirus "has been delaying the 3GPP standards," said <a href="https://www.ateme.com/">Micka&euml;l Raulet, VP of Innovation at ATEME</a>. "But at the same time, people are looking at new markets for 5G broadcast." You may learn more about how the pandemic is&nbsp;<a href="https://www.interdigital.com/post/ask-an-engineer-what-impact-will-the-coronavirus-pandemic-have-on-3gpp-wireless-standards">impacting 3GPP standards</a> development here.</p>
<p><span style="text-decoration: underline;"><strong> Compression is Key to Broadcast HD </strong></span></p>
<p>On the production side, local news, in particular, is changing considerably, as news anchors are increasingly delivering the evening news and weather reports from their home living rooms. It's not just a change in appearance, as <a href="http://sbgi.net/">Mark Aitken, SVP of Advanced Technology for Sinclair Broadcast Group</a> observed. Broadcasters are using newer tools and new models "like scalable HEVC: the ability to take the best of HEVC in the compression domain, to offer hybrid services where over the air broadcast at HD becomes a true enhancement channel with an IT backbone."</p>
<p>As the video ecosystem showcases its flexibility and resilience in the face of the coronavirus pandemic, it is important to recognize that the glass-to-glass video industry is actively tackling the challenge, and may change as a result. We will continue to watch these trends, propose solutions, and find new opportunities to drive immersive video experiences in the years to come.</p>
<p>You may download and watch the full webinar discussion <a href="https://www.interdigital.com/webinars/glass-to-glass-assessing-the-video-ecosystem-through-the-lens-of-crisis">here</a>.</p>

Ask An Engineer: What Impact will the Coronavirus Pandemic have on 3GPP Wireless Standards?

Apr 2020/ Posted By: Roya Stephens

<p><em>An interview with InterDigital&rsquo;s Diana Pani</em></p>
<p><img style="float: left; margin: 10px 15px 15px 0;" src="https://s3.amazonaws.com/files.interdigital.com/55bd0288af0b0930ba599bd0c4b7ca38/resources/uploads/direct_editor_uploads/DianaPani.jpg" alt="InterDigital" width="176" height="235" />The 3GPP wireless standards are vital to our work and shape much of the core of our business at InterDigital. In light of the novel Coronavirus and its impact on communities in every corner of our globe, we want to explore the potential impact the pandemic will have on the 3GPP releases scheduled for the coming years, namely Release 16 this year and Release 17 next year. As a future-looking company, we believe it&rsquo;s important to consider the foreseeable impacts the pandemic might have on the long awaited 5G rollout and its evolution.<br />&nbsp;<br />This week, InterDigital&rsquo;s Communications team had a virtual chat with Senior Director of 5G Standards and Research Diana Pani, an active contributor to the standardization of radio access protocols within 3GPP, former RAN2 Vice Chair, and current chair for 3GPP 5G sessions, to better understand the state of 3GPP standards and outlook for 5G. Read on below.<br />&nbsp;<br /><em>This interview has been edited for length and clarity.</em>&nbsp; <br />&nbsp;<br /><strong>IDCC:</strong> Diana, like many others at InterDigital, you have been directly involved with 3GPP standards work for years. We understand that with Release 16 and 17 have now been pushed back three months, due to the global pandemic?<br />&nbsp;<br /><strong>Diana:</strong> Yes, Rel-17 and some aspects of Rel-16 have been officially pushed back by 3 months.<br />&nbsp;<br />The key and most important aspect to consider is that <strong>Rel-16 ASN.1 freeze date in June remains unchanged</strong>, but the functional freeze date has been postponed from March until June. ASN.1 is used to define message syntax of certain protocols between the network and devices. Typically, we have three months between the functional freeze and ASN.1 freeze in order to allow us to do a thorough review and make corrections to &nbsp;both functional aspects and to ASN.1. Given the importance of completing the ASN.1 freeze and the release on time, 3GPP working groups are doing both ASN.1 review and completing some remaining functional aspects in parallel.&nbsp;&nbsp; The main target is to complete Rel-16 on time as scheduled.&nbsp;&nbsp; This of course increases the chance of finding issues that cannot be solved in a non-backward compatible manner, but this risk is always present, and we have means to deal with it.&nbsp;<br />&nbsp;<br />From my perspective, 3GPP has not shifted Release 16 completion. There is a huge effort by the 3GPP community to keep the freeze dates by using virtual meetings and progressing discussions by email and conference calls. The plenary session in June, when the freeze was and is still scheduled, was shifted by two weeks to allow more time to finalize the corrections and make Rel-16 more stable.<br />&nbsp;<br /><strong>IDCC:</strong> Will the plenary session take place in the virtual space somehow to follow social distancing practices?<br />&nbsp;<br /><strong>Diana:</strong> Yes. The March plenary took place about a week ago, and was all done by email. &nbsp;The June plenary is also expected to be done in the virtual space.&nbsp;<br />&nbsp;<br />3GPP has actually been doing these virtual meetings since February, and every working group in 3GPP has tried different methods to move things forward. 
So, for example, RAN1 meetings, which address physical layer aspects, were done purely by email over two weeks, instead of the typical one-week in-person meetings. The RAN2 working group, which I'm involved in, also took place over two weeks. In addition to email discussions, we also had conference calls, which actually helped our progress significantly. Other groups are considering introducing conference calls in follow-up virtual meetings.<br />&nbsp;<br />The virtual meetings in February allowed the groups to make a surprising amount of progress and complete an important part of the work. This is why I think we will maintain the June timeline for Release 16. I know people were very skeptical about how much progress could be made over email and conference calls, but in the end, I think we were pretty productive. Of course, we were much less efficient than before, but we were still very productive.<br />&nbsp;<br /><strong>IDCC:</strong> Why do you think these efforts were so productive?<br />&nbsp;<br /><strong>Diana:</strong> There are two aspects that contributed to the progress we made. First, quite simply, everybody knew we were in an unusual situation because of the pandemic. Secondly, we are at the end of the Release 16 cycle, so whatever remains as an open issue is likely to be very specific and detailed, while most of the more complicated and controversial issues that require face-to-face discussion were already completed by the end of last year. For Release 16, we were left with several small but detailed issues, and given the global circumstance, delegates were more willing to compromise and finish the release for the good of the whole industry, rather than fighting for specific individual objectives. There was a nice atmosphere of people wanting to compromise and progress things, which was very nice to see. Of course, the virtual meetings were a lot more work for delegates and leadership, but 3GPP leadership did a great job organizing and facilitating the discussions in a way to encourage progress and consensus.<br />&nbsp;<br /><strong>IDCC:</strong> Like the rest of the world, we don't know how long this pandemic will endure, or how long we're going to have to practice social distancing. What kind of impact do you think this will have on the overall standards development outlook for the next couple of years?<br />&nbsp;<br /><strong>Diana:</strong> That's a very good question and it's been a topic of discussion with 3GPP leadership. Leadership has suggested and hopes that we'll be back to normal functioning by August. I and a few others proposed that we should be more conservative and prepare as if we'll have no more face-to-face meetings until the end of the year and will need to continue meeting in a virtual space. What we're trying to do in 3GPP is find ways to make meetings more efficient. Every group is exchanging ideas on how we can make progress, assuming that we may have to do this virtually for a year.<br />&nbsp;<br /><strong>IDCC:</strong> We've discussed Release 16, but what about Release 17? That release was shifted to December 2021, correct?<br />&nbsp;<br /><strong>Diana:</strong> Yes. Rel-17 has been shifted by three months. When the first meetings were cancelled in February &ndash; coincidentally when Release 17 meetings were supposed to start &ndash; 3GPP leadership decided not to conduct any Release 17 work until groups could meet again face-to-face.
The rationale behind that decision is that the beginning of a release always produces a lot of diverging views, and it's difficult to reach consensus unless you're having a coffee, a chat, or explaining the technical details face-to-face.<br />&nbsp;<br />However, given the current projections for the pandemic, 3GPP leadership has decided that we will start Release 17 over virtual meetings.<br />&nbsp;<br /><strong>IDCC:</strong> Do you think the pandemic will have a significant impact on the timelines or the efficiency of the standardization process? We don't know the timeline for the pandemic, and probably won't for some time &ndash; how significant do you think that impact will be?<br />&nbsp;<br /><strong>Diana:</strong> It depends on how long it will last, of course. I think it's inevitable that it will have an impact and delay things. Like I said, virtual meetings are not as efficient as meeting face-to-face. Whatever we could achieve in one week of face-to-face meetings now requires two weeks of emails and conference calls. Even then, I don't think we can achieve even 50 percent of what we were achieving face-to-face in one week.<br />&nbsp;<br />So of course, it's going to delay things, but at the same time, it might also force a prioritization of our features. Maybe 3GPP would consider prioritizing some of the most important features and re-scope Rel-17 work to complete them on time. The alternative is to slow down the entire release schedule and prolong the implementation of feature improvements in future releases. The other option being considered is hosting additional &lsquo;ad-hoc&rsquo; meetings in January next year.<br />&nbsp;<br /><strong>IDCC:</strong> Early industry analysis suggests that consumer demand for 5G wireless services may fall somewhat because consumers impacted economically by the pandemic may not have as much money to spend on services. Do you think the other aspects of 5G will follow suit? Given the 5G consumer focus on enhanced mobile broadband (eMBB) use cases, and increasing enterprise focus on ultra-reliable low-latency communication (URLLC) and massive machine type communication (mMTC) use cases, will the pandemic&rsquo;s effects be felt equally across all three corners of the spectrum?<br />&nbsp;<br /><strong>Diana:</strong> I don't think it will necessarily impact one use case more than others. I think what could be impacted is the priority in which things are developed.<br />&nbsp;<br />The pandemic has inevitably impacted life as we know it, and certain things like remote diagnostics, surgery, etc. that require URLLC may become more important and necessary than ever with 5G. Proximity detection, gaming, AR/VR, virtualization, etc. may also become very important and go up the priority list.
At the same time, there are certain things within eMBB that still need to be improved to support some of the high data rate requirements of emerging use cases.<br />&nbsp;<br /><strong>IDCC:</strong> Isn&rsquo;t the eMBB use case increasingly important right now because so many people are in home isolation watching Netflix and streaming video, causing some video services to throttle down their streaming speeds because the demand is so high?<br />&nbsp;<br /><strong>Diana:</strong> Right, but don't forget that some of the capacity issues are actually on the network side and not really on the wireless side.<br />&nbsp;<br /><strong>IDCC:</strong> That's true.<br />&nbsp;<br /><strong>Diana:</strong> I personally feel it's very difficult to know which use cases will be impacted. The way 3GPP works is, if they have time, they will address several use cases simultaneously, because they always prepare for the future. We're preparing for use cases that most customers don't even have in mind yet &ndash; everything we do today will not be deployed for another four years, at least.<br />&nbsp;<br />So, I think the short-term impact won't be felt immediately. If we must prioritize, that's where we may feel the impact. The operators and industry players will let us know what's essential to them so that we can focus our attention accordingly.&nbsp;<br />&nbsp;<br />This scenario could actually re-scope Release 17 a little bit, but as of now, 3GPP is not planning on revisiting the scope of Rel-17. The plan is that 3GPP will get things done on time with this shift. For example, they are already considering adding new meetings during the year or next year (virtual, of course), in an effort to adhere to this new three-month timeline shift.&nbsp;<br />&nbsp;<br /><strong>IDCC:</strong> Thank you for sharing these considerations. To switch topics a bit, will the pandemic have any impacts on spectrum we should address?<br />&nbsp;<br /><strong>Diana:</strong> I know that some of the International Telecommunication Union (ITU) meetings where future use of certain spectrum is discussed have been delayed, and some of the spectrum auctions are being postponed as well. That certainly could delay some of the operators from getting the spectrum they need for deployment.<br />&nbsp;<br /><strong>IDCC:</strong> Do you mean, that is because they don't have all of the spectrum they need right now for those deployments?<br />&nbsp;<br /><strong>Diana:</strong> Well, I think some operators have already gotten some spectrum, but they also rely on future spectrum to expand and be able to provide all the services that they've promised or want to provide. So far, operators have purchased some spectrum both in the mmWave spectrum and below 6GHz spectrum.&nbsp; However, additional spectrum will be dependent on further auctions and availability. Until then, operators cannot plan for further deployments.&nbsp; &nbsp;<br />&nbsp;<br />That might cause some delays, but most operators already have one part of the spectrum to kick off their initial 5G deployments and further 5G enhancements.<br />&nbsp;<br /><strong>IDCC:</strong> Finally, what does this pandemic tell us about the importance of the wireless industry &ndash; and 5G &ndash; to the world?<br />&nbsp;<br /><strong>Diana:</strong> First of all, I think it shows the importance of being able to stay connected, especially during these critical times and while the majority of the world population is in full isolation. 
It's one of the first times I started to truly appreciate the criticality and importance of having the technology we have today &ndash; to allow us to function remotely for a large number of aspects.&nbsp; We can stay in touch with family and friends, work from home, learn online, be diagnosed remotely without going to the hospital, and be able to do almost anything from our phones.<br />&nbsp;<br />If you look at what is going on right now, we see that 5G is being used for health monitoring, remote diagnostics for doctors, and 5G robots used in hospitals in Wuhan to protect staff from the virus. We'll even see an importance placed on supporting video gaming. Virtualization, a key feature of 5G, is also proving extremely important nowadays because everything has been moving towards the cloud and it is what allows us to function remotely. I think everybody now understands the importance of being able to be virtual and have remote capabilities. And 5G offers all those opportunities.<br />&nbsp;<br />*****<br />&nbsp;<br /><strong>Forward-Looking Statements</strong></p>
<p>This blog contains forward-looking statements within the meaning of Section 21E of the Securities Exchange Act of 1934, as amended. Such statements include information regarding the company&rsquo;s current expectations with respect to the impact the coronavirus pandemic will have on 3GPP wireless standards, the timeline for their development, and demand for 5G services. Words such as "expects," "projects," "forecast," &ldquo;anticipates,&rdquo; and variations of such words or similar expressions are intended to identify such forward-looking statements.</p>
<p>Forward-looking statements are subject to risks and uncertainties. Actual outcomes could differ materially from those expressed in or anticipated by such forward-looking statements due to a variety of factors, including, but not limited to, the duration and long-term scope of the ongoing coronavirus pandemic and its potential impacts on standards-setting organizations and the company&rsquo;s business. We undertake no duty to update publicly any forward-looking statement, whether as a result of new information, future events or otherwise except as may be required by applicable law, regulation or other competent legal authority.</p>
<p>InterDigital is a registered trademark of InterDigital, Inc.<br />For more information, visit:&nbsp;<a title="InterDigital.com" href="https://www.interdigital.com">www.interdigital.com</a>.</p>

InterDigital Donation to IEEE Goldsmith Lecture Program Supports Diversity, Women in Engineering

Dec 2019/ Posted By: Roya Stephens

<p>This week, InterDigital became the latest donor to the IEEE Information Theory Goldsmith Lecture Program, an award created to highlight the achievements of exceptional female researchers early in their careers, while providing opportunities to acknowledge and publicize their work. With our donation to the program, established this year in honor of Dr. Andrea Goldsmith, InterDigital stands alongside Microsoft, Intel, Nokia, Google, and others across the tech ecosystem in elevating female researchers and supporting diversity and inclusion both within and outside of our business.</p>
<p>InterDigital knows that a diversity of backgrounds and perspectives enables us to approach big problems, conduct complex research, and develop effective solutions that work across ecosystems and experiences, despite our modest size. We feel a strong responsibility to acknowledge the benefits of diversity in engineering and innovation, while also encouraging gender diversity of researchers in engineering and the field of information theory.</p>
<p>As part of the Goldsmith Lecture Program, each yearly award recipient will deliver a lecture of her choice to students and postdoctoral researchers at one of the IEEE Information Theory Society&rsquo;s (ITSoc&rsquo;s) Schools of Information Theory. In addition to driving inclusion and diversity of thought within the IEEE and the ITSoc, the Goldsmith Lecture Program helps provide more visibility and acknowledgment of the contributions of female researchers to technology. The impact of this program extends beyond IEEE and InterDigital, giving female researchers a platform to share their work while inspiring and encouraging new, more diverse students to explore and innovate with technology.</p>
<p>InterDigital is proud to support the Goldsmith Lecture Program, and we congratulate the 2020 award recipient, Ayfer &Ouml;zg&uuml;r. Ayfer is currently an assistant professor within Stanford University&rsquo;s Electrical Engineering Department, and conducted her postdoctoral research with the Algorithmic Research on Networked Information Group.</p>
<p>To learn more about the IEEE Information Theory Society and the Goldsmith Lecture Program, please visit:&nbsp;<a href="https://www.itsoc.org/honors/goldsmith-lecture">https://www.itsoc.org/honors/goldsmith-lecture</a></p>

InterDigital’s Cutting-Edge 6G Research Gets a Boost with Royal Academy of Engineering Industrial Fellowship

Oct 2019/ Posted By: Roya Stephens

<p><img style="float: left; margin: 10px 15px 15px 0;" src="https://www.interdigital.com/resources/img/DrMohammedElHajjar_photo.png" alt="" /></p>
<p>As a company dedicated to advanced research and development in wireless and video, we know that our innovations are only enhanced by partnerships with industry leaders and respected academic institutions. That&rsquo;s why InterDigital is so proud that Dr. Mohammed El-Hajjar has been awarded the Royal Academy of Engineering (RAEng) Industrial Fellowship to work alongside InterDigital to support the evolution of 6G technology.</p>
<p>Dr. El-Hajjar, a professor at the School of Electronics and Computer Science at the University of Southampton, was recently awarded the prestigious RAEng Industrial Fellowship to spearhead a joint project with InterDigital to advance the research and design of transceivers for 6G wireless systems. Just 19 professors and researchers were awarded Industrial Fellowships this year, based on the caliber of their research proposals, direct support from an industry partner, and a clear and significant impact on industry. Other engineering fellowships were awarded for innovative research in plastic waste recycling, carbon dioxide capture, 3D-reconstructed human skin, and more.</p>
<h3>The Royal Academy of Engineering Industrial Fellowship</h3>
<p>Being awarded the coveted RAEng Industrial Fellowship is an excellent acknowledgment of the proposed benefit our research will bring to industry and validation of InterDigital&rsquo;s industry leadership in developing the foundations for 5G and 6G networks.</p>
<p>&ldquo;InterDigital is proud to work alongside Dr. Mohammed El-Hajjar on this project and join the more than 50 industrial partners that have supported RAEng Industrial Fellowship recipients over the past five years,&rdquo; said Dr. Alain Mourad, Director of Engineering R&amp;D at InterDigital. &ldquo;This is a nice validation of InterDigital&rsquo;s industry leadership in developing the foundations for 5G and 6G networks alongside our global partners and top-of-class university professors.&rdquo;</p>
<p>During his fellowship with InterDigital, Dr. El-Hajjar&rsquo;s research will build upon several years of collaboration with InterDigital to advance the research and design of wireless transceivers for 6G systems. Specifically, Dr. El-Hajjar will design and develop new signal processing techniques based on the concept of Holographic Multiple-Input Multiple-Output (MIMO), which enables unprecedented data rates up to Terabits per second whilst mitigating the challenges of complexity, energy consumption and cost of large antenna arrays in Massive MIMO. The value of this collaborative research will be foundational for the long-term evolution of 5G into 6G.</p>
<p>&ldquo;With mobile subscribers continuing to demonstrate an insatiable demand for data and billions of smart wireless devices predicted in future services for smart homes, cities, transport, healthcare and environments, the explosive demand for wireless access will soon surpass the data transfer capacity of existing mobile systems,&rdquo; said Dr. El-Hajjar. &ldquo;Achieving the vision of fiber-like wireless data rates relies on efficiently harnessing the benefits of massive MIMO and millimeter wave frequencies. A major challenge for achieving this vision is the design trade-off of the underlying cost, complexity and performance requirements of massive MIMO in future wireless communications.&rdquo;</p>
<p>As a result, Dr. El-Hajjar&rsquo;s research on Holographic MIMO will improve upon the current, state-of-the-art Massive MIMO framework. Today&rsquo;s 5G New Radio (NR) networks have largely adopted Massive MIMO, a concept in which base stations are equipped with an array of antennas to simultaneously serve many terminals with the same time-frequency resource. Massive MIMO utilizes hybrid digital-analogue beamforming, in which the number of users, or streams, depends on the number of available radio frequency chains. While Massive MIMO has enabled high energy and spectral efficiency, scalability to the number of base station antennas, and the ability to employ simple data processing at the transmitter and receiver edge, this method faces several hardware impairments. Namely, hybrid beamforming requires a significant number of radio frequency chains and faces inaccuracies in the angular resolution of phase shifters in analogue beamforming.</p>
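<p>The RF-chain constraint can be seen in a small NumPy sketch of hybrid digital-analogue precoding: the digital (baseband) stage precodes at most as many streams as there are RF chains, while the analogue stage applies only phase shifts across the full antenna array. The dimensions below are illustrative and not drawn from any particular system.</p>
<pre><code># Minimal hybrid beamforming sketch: 64 antennas fed by only 4 RF chains.
import numpy as np

n_ant, n_rf, n_streams = 64, 4, 4       # antennas, RF chains, data streams
rng = np.random.default_rng(0)

s = rng.standard_normal(n_streams) + 1j * rng.standard_normal(n_streams)   # symbols

F_bb = rng.standard_normal((n_rf, n_streams)) + 1j * rng.standard_normal((n_rf, n_streams))
F_bb /= np.linalg.norm(F_bb)            # digital precoder, one column per stream

phases = rng.uniform(0, 2 * np.pi, size=(n_ant, n_rf))
F_rf = np.exp(1j * phases)              # analogue precoder: unit-modulus phase shifters

x = F_rf @ (F_bb @ s)                   # signal radiated from the antenna array
assert x.shape == (n_ant,)              # 64 antenna feeds from just 4 RF chains
</code></pre>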
<h3>Holographic MIMO</h3>
<p>Dr. El-Hajjar and InterDigital&rsquo;s joint project will center around the concept of Holographic MIMO, a new and dynamic beamforming technique that uses a software-defined antenna to help lower the costs, size, weight, and power requirements of wireless communications. In other words, the Holographic MIMO method implements a phased antenna array in a conformable and affordable way so that each antenna array has a single radio frequency input and a distribution network to vary the directivity of the beamforming. Utilizing machine learning tools within the Holographic MIMO design ensures a high level of adaptability and reduction of signal overhead at the transmitter and receiver levels, while enabling support for Massive MIMO that is 10 times greater than what is available in 5G NR today.</p>
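<p>The underlying principle of steering directivity purely through a phase distribution can be illustrated with a generic uniform linear array. The sketch below is a textbook beam-steering example under simple assumptions (half-wavelength spacing, ideal phase shifters); it is not InterDigital's Holographic MIMO design.</p>
<pre><code># Generic beam steering: the per-element phase distribution sets the array's directivity.
import numpy as np

n_elems, spacing = 32, 0.5              # elements, spacing in wavelengths
target_deg = 25.0                       # desired beam direction

n = np.arange(n_elems)
steer = np.exp(-1j * 2 * np.pi * spacing * n * np.sin(np.radians(target_deg)))

angles = np.radians(np.linspace(-90, 90, 721))
# Array factor: the array's response across all look angles for these weights.
af = np.abs(steer @ np.exp(1j * 2 * np.pi * spacing * np.outer(n, np.sin(angles))))
print(f"peak response at {np.degrees(angles[af.argmax()]):.1f} degrees")   # ~25.0
</code></pre>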
<p>The Holographic MIMO technique will be foundational to the long-term evolution of 5G into 6G networks. Though this technique has a timeframe of five to ten years before it matures and can be implemented in future iterations of 5G NR, our collaborative research will enable unprecedented data rates while mitigating the challenges of cost, complexity, and energy consumption presented by large antenna arrays in Massive MIMO situations. This fellowship project also aligns with InterDigital&rsquo;s ongoing research on meta-materials based large intelligent surfaces with the 6G Flagship program at the University of Oulu, as large intelligent surfaces include large antenna arrays that would require techniques like Holographic MIMO to support efficient and operational beamforming.</p>
<p>The year-long Industrial Fellowship will run until September 2020, and InterDigital&rsquo;s collaboration with the University of Southampton on Beyond 5G intelligent holographic MIMO extends through 2022 as part of InterDigital&rsquo;s sponsorship of a three-year PhD studentship program at the university.</p>

InterDigital R&I Recognized for Immersive Video Technology at Inaugural IBC Showcase

Sep 2019/ Posted By: Roya Stephens

<p><em>InterDigital marked its debut at the IBC trade show in Amsterdam by showcasing five cutting-edge video demonstrations and taking home a Best of Show award for its Digital Double technology</em></p>
<p>A first impression is a lasting one. At last week&rsquo;s International Broadcasting Convention (IBC) trade show in Amsterdam, InterDigital not only made its debut as a company with expertise and advanced research in wireless <em>and</em> video technologies, but also left a lasting impression with our award-winning technologies and contributions to immersive video.</p>
<p style="text-align: center;"><img style="width: 95%;" src="https://www.interdigital.com/resources/img/group-photo-cropped.jpg" alt="Group Photo" /> <span style="text-align: center;"><em> Engineers from InterDigital's Home Experience, Imaging Science, and Immersive Labs at IBC 2019</em></span></p>
<p>&nbsp;</p>
<p>Throughout the week, engineers from InterDigital R&amp;I&rsquo;s Home Experience, Immersive, and Imaging Science Labs in Rennes, France displayed their contributions to next-generation video coding standards, volumetric video frameworks, compression schemes, and streaming applications, as well as a cutting-edge tool to automate the creation of digital avatars in VFX, gaming, VR, and other video applications. At the end of the five-day convention, InterDigital received a prestigious prize and recognition of our significant work to enable immersive video and streaming capabilities of the future.</p>
<p>&nbsp;</p>
<h2><strong>InterDigital Wins Best of Show for the Digital Double</strong> &nbsp;</h2>
<p>InterDigital received the IBC Best of Show award, presented by TVB Europe for innovations and outstanding products in media and entertainment, for our cutting-edge &ldquo;Digital Double&rdquo; technology. Developed in InterDigital&rsquo;s Immersive Lab, the Digital Double tool improves upon the traditionally time- and labor-intensive 3D avatar creation process to automatically create a person&rsquo;s digital avatar in less than 30 minutes! Although the Digital Double technology completely automates the avatar creation process, it also gives users the option to pause and manually fine-tune the avatar at each step. Using a rig of 14 cameras, the technology computes a full 3D mesh of a person&rsquo;s face and upper body from the cameras&rsquo; images to create more human-like avatars and a precise set of facial expressions for animation.</p>
<div class="col1" style="float: left; margin-bottom: 45px; margin-right: 25px; width: 58%; text-align: center;"><img style="margin-bottom: 15px; margin-right: 25px;" src="https://www.interdigital.com/resources/img/2 - Digital Double Photo.jpg" alt="" align="left" /> <em> Bernard Denis and Fabien Danieau hold the IBC Best of Show award for the digital double</em></div>
<div class="col2" style="float: left; width: 38%;">As we enter the 5G era of ultra-low latency and high bandwidth, video viewers will desire, and be able to enjoy, more immersive video experiences, and our Digital Double tool will become increasingly important to content producers. <em>&nbsp;</em>&nbsp;&nbsp;
<p>The Best of Show award recognized the Digital Double&rsquo;s potential to enhance immersive video opportunities of the future, where individuals could see themselves in real time as a character in a film or on television, or even virtually participate in a game show alongside a presenter, contestants, and audience on screen. The Digital Double technology started at the highest end of the market, namely Hollywood film production, and is likely to eventually make its way into the consumer mainstream.</p>
</div>
<p>The Digital Double&rsquo;s foundational facial animation control for expression transfer (FACET) technology has already been used by production companies like Disney and Paramount in blockbuster films such as the <em>Jungle Book</em> remake and the <em>Shape of Water.</em> We are excited to explore this award-winning tech&rsquo;s applications in virtual reality, gaming, and other immersive experiences where an individual&rsquo;s digital avatar can be adapted to each context.</p>
<p>&nbsp;</p>
<div class="clear">&nbsp;</div>
<h2><strong>InterDigital&rsquo;s Contributions to Tech Innovation in the Digital Domain</strong></h2>
<div class="col1" style="float: left; width: 55%;">
<p>In addition to the Digital Double technology, InterDigital&rsquo;s Research and Innovation teams displayed their advanced research to support next-generation and future video streaming capabilities. Laurent Depersin, Director of the InterDigital R&amp;I Home Experience Lab, provided an overview of InterDigital&rsquo;s contributions to video innovations during a panel discussion on &ldquo;Technological Innovation in the Digital Domain.&rdquo; Laurent spoke alongside peers from VoiceInteraction and Haivision to explore the innovations needed to support high-resolution and data-intensive applications for the video content of the future. You may view Laurent&rsquo;s panel discussion <a href="https://ibc.gallery.video/ibctv/detail/video/6086156256001/ibc2019-content-everywhere-hub:-technological-innovation-in-the-digital-domain?">here</a>.</p>
<p>During his presentation, Laurent outlined new video applications that drive the need for technological innovation, as well as InterDigital&rsquo;s Home Experience Lab&rsquo;s commitment to develop technologies that both connect and improve user experience in the home. Laurent identified mass increases in video consumption, the popularity of interactive and immersive content like VR and gaming, and the trend towards ultra-high bandwidth and ultra-low latency content in the form of immersive communication and 8K video, as the key drivers of InterDigital&rsquo;s innovative work in video technology.</p>
</div>
<div class="col2" style="float: left; width: 45%; text-align: center;"><img style="float: right; margin: 15px 0px 15px 15px; width: 90%;" src="https://www.interdigital.com/resources/img/3 - Laurent Speaking Photo.jpg" alt="" /><em> Laurent Depersin outlines technological innovation in the digital domain</em></div>
<div class="clearfix">&nbsp;</div>
<h2><strong>Versatile Video Coding: Improving on the High-Efficiency Video Coding (HEVC) Standard</strong></h2>
<div class="col1" style="float: left; width: 43%; margin-right: 25px; text-align: center;"><img src="https://www.interdigital.com/resources/img/4 - VVC Demo Picture.jpg" alt="" align="left" /><em> Lionel Oisel demonstrates the enhanced capabilities of the VVC standard</em></div>
<div class="col2" style="float: left; width: 53%;">
<p>InterDigital&rsquo;s demonstration on Versatile Video Coding (VVC), presented by Michel Kerdranvat and Imaging Science Lab Director Lionel Oisel, reflects our work to develop cutting-edge tools that analyze, process, present, compress, and render content to improve the production and delivery of high-quality images.</p>
<p>The InterDigital R&amp;I lab&rsquo;s contribution to the VVC standard enhances the video compression efficiency of the existing High-Efficiency Video Coding (HEVC) standard published in 2013. Specifically, the demonstration compared the HEVC and VVC video standards and showed how VVC can improve video delivery by lowering the bandwidth and bitrate required for Standard Dynamic Range (SDR), High Dynamic Range (HDR), and immersive 360-degree video content; a rough sense of what that saving means in practice is sketched below.</p>
</div>
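<p>As a rough sense of what such a compression gain means for delivery, the illustrative calculation below applies VVC's commonly cited target of roughly 40-50 percent bitrate reduction over HEVC at comparable quality to some invented example bitrates; none of these numbers are measurements from the demonstration.</p>
<pre><code># Illustrative-only savings estimate; bitrates and the 45% saving are assumptions.
hevc_bitrates_mbps = {"1080p SDR": 8.0, "4K HDR": 25.0, "8K / 360-degree": 80.0}
assumed_vvc_saving = 0.45   # midpoint of the commonly cited 40-50% range

for content, hevc in hevc_bitrates_mbps.items():
    vvc = hevc * (1 - assumed_vvc_saving)
    print(f"{content}: HEVC ~{hevc:.0f} Mbps -> VVC ~{vvc:.1f} Mbps")
</code></pre>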
<div class="clearfix">&nbsp;</div>
<h2 style="margin-top: 25px;"><strong>The Need for Point Cloud Compression for Immersive Video</strong> &nbsp;</h2>
<p>The InterDigital Imaging Science Lab&rsquo;s demo on Point Cloud Compression, presented by C&eacute;line Guede and Ralf Schaefer, built upon the HEVC video coding standard to showcase the vital role of video compression in delivering the increasingly immersive and interactive video experiences expected in VR, AR, and 3D imagery.</p>
<div class="col1" style="float: left; width: 48%; margin-right: 25px;">
<p>Point Clouds are sets of tiny &ldquo;points&rdquo; grouped together to make a 3D image. Point Cloud has become a popular method for AR and VR video composition, 3D cultural heritage and modeling, and geographic maps for autonomous cars. While this method has many benefits, it is important to remember that each Point Cloud video frame typically has 800,000 points, which translates to roughly 1,500 Mbps (1.5 Gbps) uncompressed &ndash; a massive amount of video bandwidth (a back-of-the-envelope check follows below). To address this challenge, our Imaging Science Lab has participated in the development of a Point Cloud Compression method being standardized in MPEG to support widespread industry adoption of the Point Cloud format for immersive video. InterDigital showcased its video-based Point Cloud Compression capabilities in a Point Cloud-created AR video demo streamed to a commercially available smartphone in real time. This technique will support the crisp, low-latency deployment of immersive video experiences through existing network infrastructure and devices.</p>
</div>
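<p>The back-of-the-envelope check below reproduces the order of magnitude of the bandwidth figure quoted above from stated assumptions: 10-bit geometry coordinates and 8-bit color components per point at 30 frames per second, values typical of MPEG test content rather than an exact specification.</p>
<pre><code># Raw (uncompressed) point cloud bandwidth under simple per-point assumptions.
points_per_frame = 800_000
bits_per_point = 3 * 10 + 3 * 8      # x/y/z geometry + R/G/B color = 54 bits
frames_per_second = 30

raw_bps = points_per_frame * bits_per_point * frames_per_second
print(f"{raw_bps / 1e9:.2f} Gbit/s uncompressed")   # ~1.3 Gbit/s, the order of the ~1.5 Gbps above
</code></pre>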
<div class="col2" style="margin-bottom: 25px; float: left; text-align: center; width: 48%;"><img style="float: right; margin: 15px 0px 15px 15px;" src="https://www.interdigital.com/resources/img/vpcc-photo-cropped.jpg" alt="" /><em>Ralf Schaefer displays Point Cloud compression on a comercially available smartphone</em></div>
<div class="clear">&nbsp;</div>
<h2><strong>The Challenges and Potential for Volumetric Video</strong> <strong>&nbsp;</strong></h2>
<p>In concert with our efforts to compress and deliver high bandwidth video, InterDigital R&amp;I&rsquo;s Immersive Lab also demonstrated its innovative work to enhance immersive experiences that meet our interactive media demands. To give context to the importance of its technological contributions, Immersive Lab Technical Area Leader Val&eacute;rie Alli&eacute; delivered a presentation on the challenges and potential of volumetric video and the various applications in which it might be deployed. &nbsp;</p>
<p>&nbsp; &nbsp;</p>
<p style="text-align: center;"><img style="width: 95%;" src="https://www.interdigital.com/resources/img/6 - Valerie Speaking Photo.jpg" alt="" /></p>
<p style="text-align: center;"><em>Val&eacute;rie Alli&eacute; delivers a presentation on the opportunities of volumetric video content </em></p>
<p>&nbsp;</p>
<p>Volumetric video is hailed as the next generation of video content where users can feel the sensations of depth and parallax for more natural and immersive video experiences. As AR, VR, and 3D video become a more mainstream consumer demand, providers will require tools to deliver the metadata necessary to produce a fluid, immersive or mixed reality video experience from the perspective of each viewer. As a result, content providers may face challenges in maintaining high video quality while supporting user viewpoint adaptation and low latency. &nbsp;</p>
<h2><strong>MPEG Metadata for Immersive Video: A Roadmap for Volumetric Video Distribution</strong> <strong>&nbsp;</strong></h2>
<p>Val&eacute;rie Alli&eacute; and Julian Fleureau&rsquo;s demo on MPEG Metadata for Immersive Video outlined both the steps to create volumetric video and the requisite format for its distribution. Unlike flat 2D video experiences, volumetric video is much larger and cannot be streamed over traditional networks. In addition, volumetric video requires the capture of real video through camera rigs, the development of computer-generated content, the creation of a composite film sequence using VFX tools, and the interpolation of a video&rsquo;s view to create a smooth, unbroken rendering of immersive content from the user&rsquo;s point of view.</p>
<div class="clear">&nbsp;</div>
<h2><strong>Addressing the Challenges of Six Degrees of Freedom (6DoF) Streaming</strong></h2>
<div style="width: 45%; float: left; margin-right: 25px;"><img style="float: left; margin: 15px 15px 15px 0;" src="https://www.interdigital.com/resources/img/7 - 6DoF Demo Photo.jpg" alt="" />
<p style="text-align: center;"><em>Visitor experiences InterDigital's 6DoF streaming video capabilities on a VR Headset</em></p>
</div>
<div style="width: 50%; float: right;">
<p>The significance of the MPEG codec for immersive and volumetric video was put on display in the InterDigital R&amp;I Home Experience Lab&rsquo;s Six Degrees of Freedom (6DoF) streaming demo, presented by Charline Taibi and R&eacute;mi Houdaille. 6DoF refers to the six movements of a viewer in a 3D context, including heave for up and down movements, sway for left and right movements, surge for forward and back movements, yaw for rotation about the normal axis, pitch for rotation about the transverse axis, and roll for rotation about the longitudinal axis.</p>
<p>Using a computer-generated video streamed through a VR headset, the demonstration showed how the standards and codecs developed by InterDigital&rsquo;s labs can be utilized to stream fully immersive volumetric video with six degrees of freedom over current network infrastructure.</p>
<p>The demonstration achieved a seamless and immersive experience by streaming only content from the viewers&rsquo; point of view. &nbsp;</p>
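<p>For readers unfamiliar with the terminology, the small sketch below spells out those six degrees of freedom as a plain data structure and shows the kind of per-frame pose a viewpoint-dependent streaming client might report; the field values are arbitrary examples, not part of the demo.</p>
<pre><code># The six degrees of freedom as a simple pose record.
from dataclasses import dataclass

@dataclass
class Pose6DoF:
    surge: float   # forward/back translation
    sway: float    # left/right translation
    heave: float   # up/down translation
    roll: float    # rotation about the longitudinal (forward) axis
    pitch: float   # rotation about the transverse (left-right) axis
    yaw: float     # rotation about the normal (vertical) axis

viewer = Pose6DoF(surge=0.0, sway=0.2, heave=1.6, roll=0.0, pitch=-5.0, yaw=30.0)
# A 6DoF client would report this pose every frame so the server streams only
# the content visible from the viewer's current point of view.
</code></pre>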
<p>InterDigital left a lasting impression on all who visited our IBC booth and networking hub and experienced the Research and Innovation Labs&rsquo; innovative demos. We are excited to play a role in the pioneering compression solutions and streaming capabilities that will drive and enable the immersive video experiences of the future.</p>
</div>