Ask An Engineer: What’s Next for Mixed Reality?
Caroline Baillard, Immersive Lab Technical Area Leader, outlines new technologies and workflows shaping the future of mixed reality
I was recently asked, "What is mixed reality?" It’s a simple question, but the answers vary based on who you ask.
While augmented reality (AR) has been in the public consciousness for years now – and virtual reality (VR) for decades – mixed reality (MR) is not yet widely understood. The rise of AR, coupled with the further development of VR, has set the stage for MR, a technology that encompasses both its predecessors but goes further than each.
In practical terms, MR provides more than AR's ability to overlay virtual content onto the real world, and more than VR's ability to display interactive and immersive virtual content: MR is about integrating the two. By blending virtual elements into the real world, we create an entirely new environment – one that mixes the real with the virtual.
To bring the technology into greater mainstream use and enable next-generation mixed reality experiences, we must develop new technologies and reimagine their workflows. These technologies and workflows address two key needs for MR, the first being the need to make MR experiences more realistic and immersive. The second is a need to equip MR technology with context awareness, which enables the experience to adapt both to the user and the environment.
These technology developments are a response to two major trends in MR. The first trend is, quite simply, the increasing importance of consumer AR. Industrial use cases for AR showcased significant early potential for this technology, but consumer AR will bring maturing MR technologies into the home and the mainstream. The second trend is the rise of the AR cloud. Thanks to significant developments in cloud computing and networking, remotely delivered AR experiences will serve as a vital underpinning of MR. The AR cloud enables people to collaborate, share, and express information in a centralized way – for example in 3D-modeled scenes – to experiment with it and share knowledge.
To bring MR capabilities to the mainstream, a few technologies must be addressed. The overall goal for MR is to develop visual enhancements and allow for personalization of our environment, but how do we do that? Scene modeling holds an important key.
Scene modeling technologies allow users to make a detailed, 3D virtual model of a real physical space – a task far more difficult than it sounds. The first challenge is modeling indoor spaces, which are constrained by clear boundaries such as walls, floors, and ceilings. Surprisingly, outdoor spaces are even more challenging to model, because current technology struggles to adapt to such an unconstrained environment.
There are several factors to consider when modeling a boundary-constrained indoor space: the room's 3D geometry; the configuration and placement of furniture and other objects; the color, texture, and reflectiveness of surfaces; and the location, type, and intensity of light sources. All these factors create a very complex scene model – and that is just for relatively fixed objects. Adding avatars and other people into the scene dramatically increases the complexity of the modeling, as it introduces new dynamics to the room based on each person's movement through the space.
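To make that complexity concrete, the factors above can be sketched as a simple data structure. This is a minimal illustration only – every class and field name here is an assumption for the sake of the sketch, not a reference to any particular MR framework.

```python
from dataclasses import dataclass, field

@dataclass
class Surface:
    color: tuple           # RGB, e.g. (0.8, 0.8, 0.75)
    texture: str           # texture map identifier
    reflectiveness: float  # 0.0 (matte) to 1.0 (mirror)

@dataclass
class SceneObject:
    name: str
    position: tuple        # (x, y, z) in metres
    geometry: str          # mesh identifier
    surface: Surface
    movable: bool = False  # books and chairs move; walls do not

@dataclass
class LightSource:
    kind: str              # "point", "spot", "ambient", ...
    position: tuple
    intensity: float       # e.g. in lumens

@dataclass
class SceneModel:
    objects: list = field(default_factory=list)
    lights: list = field(default_factory=list)

# A tiny indoor scene: one desk under one point light.
room = SceneModel()
room.objects.append(SceneObject("desk", (1.0, 0.0, 2.0), "desk_mesh",
                                Surface((0.5, 0.3, 0.1), "oak", 0.2)))
room.lights.append(LightSource("point", (0.0, 2.4, 0.0), 800.0))
```

Even this toy model has to track geometry, surface properties, and lighting per object; a real room multiplies each of these by dozens of objects, and people moving through the space change the model continuously.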
Further complicating this scenario is interaction management – the next stage in technology development, and where the "mixed" in mixed reality begins to shine. When virtual objects interact with the real environment (through occlusions, reflections, collisions, etc.), or when a remote user begins to interact with the modeled virtual space, the experience becomes deeply immersive – and the underlying task becomes increasingly complex, as addressed below.
In addition to technologies, certain workflows must be developed to enable MR. The nature of mixed reality is such that it benefits from running on a platform based on a central server, which we refer to as an "AR Hub". The architecture of the AR Hub server platform delivers a range of benefits, including centralized data management, multi-user control, higher quality of experience in terms of latency and customization, and a greater degree of privacy. The AR Hub can be installed at a user's location to form the core of the hardware required to deliver MR in a home, for instance. In this way, the MR experience would be primarily tied to the Hub, and critical components of the system would not be limited by variations in personal computing capabilities or network inconsistencies.
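As a rough illustration of the centralized data management and multi-user control described above, an AR Hub can be thought of as a single authority that serializes scene updates from every connected user. The class below is a hypothetical sketch under that assumption – the names and API are invented for illustration, not a description of any shipping product.

```python
class ARHub:
    """Minimal sketch of a central hub holding one shared scene state.

    All clients send updates here, so every user renders from the same
    authoritative scene, and the data can stay on the local network
    rather than a remote cloud (the privacy benefit mentioned above).
    """
    def __init__(self):
        self.scene = {}   # object name -> position
        self.log = []     # ordered history of applied updates

    def apply_update(self, user: str, obj: str, position: tuple):
        # The hub is the single writer, so concurrent updates from
        # multiple users are applied in one well-defined order.
        self.scene[obj] = position
        self.log.append((user, obj, position))

    def snapshot(self) -> dict:
        # Every client fetches the same consistent view to render.
        return dict(self.scene)

hub = ARHub()
hub.apply_update("alice", "virtual_book", (0.2, 0.8, 1.0))
hub.apply_update("bob", "virtual_book", (0.5, 0.8, 1.0))
# Bob's later update wins; both users now see the book in the same place.
```

Because the hub, not each headset, owns the state, the experience degrades gracefully when one user's device or network connection is weaker than another's.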
Like all things, the AR Hub model is not without its challenges. It depends on the maturation of earlier technologies – advanced scene analysis and modeling, 3D geometry, texture and light-source mapping, and other computationally intensive features that require dedicated resources. In addition, MR faces a unique challenge in what its success demands: each time an object is moved within an MR space – whether a book on the desk, a chair, or a person – the impact of that change (on geometry, appearance, or semantics) must be taken into account and the rendering of the scene adjusted accordingly.
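That re-rendering requirement can be sketched as a dirty-region computation: when one object moves, only the objects whose bounds overlap its old or new position need their appearance (occlusions, shadows, reflections) recomputed. The code below is an illustrative simplification – axis-aligned boxes and no actual lighting – and all names in it are hypothetical.

```python
def overlaps(a, b):
    """True if two axis-aligned 3D boxes (min_corner, max_corner) intersect."""
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))

def affected_objects(moved, old_box, new_box, scene):
    """Objects needing re-rendering after `moved` goes old_box -> new_box."""
    dirty = {moved}
    for name, box in scene.items():
        if name != moved and (overlaps(box, old_box) or overlaps(box, new_box)):
            dirty.add(name)  # its occlusions or shadows may have changed
    return dirty

# Boxes are (min_corner, max_corner) tuples in metres.
scene = {
    "book":  ((0.0, 0.0, 0.0), (0.2, 0.05, 0.3)),
    "desk":  ((-0.5, -0.8, -0.5), (0.5, 0.0, 0.5)),
    "chair": ((2.0, -0.8, 2.0), (2.5, 0.2, 2.5)),
}
# Slide the book across the desk: the desk must be re-rendered,
# but the chair across the room is untouched and stays clean.
dirty = affected_objects("book", scene["book"],
                         ((0.3, 0.0, 0.0), (0.5, 0.05, 0.3)), scene)
```

A production renderer would propagate further (a moved lamp changes shadows everywhere), but the principle is the same: track what a change touches, and update only that.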
Making It Real
Despite the current challenges, the maturation of this model should dramatically improve the realism of mixed reality experiences. For example, when a user is able to remotely alter and engage with a virtual object – for instance, pick up a virtual book on the desk, interact with it, and then set it down in a different place – it makes the experience feel more real. When we can enhance the lighting in a room or change the appearance of the furniture, it makes the experience feel more personal. Adding, replacing, or entirely removing an item from the experience exceeds the capabilities of augmented or virtual reality alone, and starts to become something more.
An important final stepping stone to next-generation MR is the way the innovations described above will make MR more context aware, meaning that the experience can adapt to both the environment and the user. Context awareness is integral to making the technology truly usable in interactive settings. From this MR model and its related workflows, new applications and prototypes will continue to be built. In the first wave of innovation, we will likely see MR used to enhance interactive games, training, education, healthcare, interior design, building management, and more.
Yet these MR vertical applications are just the tip of the iceberg. MR will lead us towards a spatial web of opportunities, and it will change our everyday lives. If we continue on this path of research and innovation, MR devices will be capable of replacing smartphones as the predominant form factor for communication, transforming the way we access and browse the internet for information and content. MR will enable seamless interaction with information, both in the way we search for information about anything we see in a space and in the way we receive and visualize it. But we still have a lot of exciting work to do before we get there.