How to ensure a good depth perception on a light field display

Information captured by the eye allows the human brain to extract depth cues from the scene, to analyze them and to understand a complex 3D scene. In real life, all these depth cues are naturally present at the same time and mutually coherent. Rendering them on a display for several viewers is not straightforward, and no technology today can fully meet this requirement. By definition, a light field is a collection of light rays corresponding to different viewpoints of the same scene. A light field display should be able to render these different viewpoints to a single viewer or to multiple viewers, and its quality is measured by its ability to correctly render these views and hence the expected depth cues. In this paper we will define the technical requirements for a light field display to provide effective depth cues, such as binocular disparity, motion parallax and accommodation, at the pixel resolution expected by the eye. These requirements will be evaluated against existing technologies (e.g. integral imaging displays) and expected future ones (e.g. microLED displays). Simulations of light field displays will be proposed for different display sizes (smartphone, desktop or TV), resolutions per view and pixel pitches. These simulations illustrate the decisive impact of the pixel pitch on meeting the depth-cue requirements of a light field display. The last part of the paper will focus on data generation issues: how to generate such a number of views at the display side, and which data formats are suited to this purpose. The paper will be accompanied by supplementary videos illustrating the view generation topic with a dedicated data format optimized for current GPU processors. Video captures of early-stage light field displays will also be provided.
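As a rough illustration of the pixel-pitch argument, the short Python sketch below estimates the largest pixel pitch for which a single pixel subtends no more than the eye's angular resolution of roughly one arcminute. The viewing distances chosen for the smartphone, desktop and TV cases are assumed values for illustration only and are not taken from the paper's simulations.

```python
import math

# Back-of-the-envelope estimate (illustration only, not the paper's simulation):
# the largest pixel pitch for which one pixel subtends no more than the eye's
# angular resolution (about 1 arcminute) at a given viewing distance.

EYE_RESOLUTION_RAD = math.radians(1.0 / 60.0)  # ~1 arcmin of visual acuity

def max_pixel_pitch_mm(viewing_distance_mm: float) -> float:
    """Largest pixel pitch (mm) that stays below the eye's angular resolution."""
    return viewing_distance_mm * math.tan(EYE_RESOLUTION_RAD)

# Assumed viewing distances for the three display sizes mentioned in the paper.
scenarios = {
    "smartphone (300 mm)": 300.0,
    "desktop (600 mm)": 600.0,
    "TV (2000 mm)": 2000.0,
}

for name, distance_mm in scenarios.items():
    pitch_mm = max_pixel_pitch_mm(distance_mm)
    print(f"{name}: pixel pitch <= {pitch_mm * 1000:.0f} µm "
          f"(~{1.0 / pitch_mm:.0f} px/mm)")
```

Under these assumptions the pitch budget shrinks from a few hundred micrometres for a TV viewed at 2 m to below 100 µm for a handheld device, which is why the pixel pitch dominates the feasibility of rendering the expected depth cues.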