RESEARCH PAPER / Jan 2022
Immersive / AR/VR/MR,
Light Field,
Volumetric Imaging,
Machine Learning / Deep Learning / Artificial Intelligence
Recently, learning methods have been designed to create Multiplane Images (MPIs) for view synthesis. While MPIs are extremely powerful and facilitate high-quality renderings, they require a large amount of memory, making them impractical for many applications. In this paper, we propose a learning method that optimizes the available memory...
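The memory issue is easy to quantify: an MPI stores one RGBA layer per depth plane, so the footprint grows linearly with the number of planes, and rendering reduces to alpha-compositing those planes with the "over" operator. The sketch below illustrates this with assumed resolutions and plane count; it is not the paper's learning method.

```python
# Minimal sketch (not the paper's method): the memory footprint of an MPI and
# the back-to-front "over" compositing of its RGBA planes. Resolutions and the
# plane count are assumed values.
import numpy as np

H, W, D = 1080, 1920, 32                         # full-HD planes, 32 depth layers (assumed)
footprint = D * H * W * 4 * 4                    # RGBA, float32 -> bytes per MPI
print(f"{footprint / 2**20:.0f} MiB for a single {D}-plane MPI")   # ~1012 MiB

def composite_over(planes):
    """Alpha-composite MPI planes back to front (index 0 = farthest plane)."""
    out = np.zeros(planes.shape[1:3] + (3,), dtype=np.float32)
    for rgba in planes:                          # far to near
        rgb, a = rgba[..., :3], rgba[..., 3:4]
        out = rgb * a + out * (1.0 - a)          # standard "over" operator
    return out

small_mpi = np.random.rand(8, 90, 160, 4).astype(np.float32)   # small stand-in MPI
print(composite_over(small_mpi).shape)           # (90, 160, 3)
```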
Waveguide-based optical combiners for Augmented Reality (AR) glasses integrate several Surface Relief Gratings (SRGs) whose pitch can be as small as 200 nm for the blue wavelength. All SRG components exploit the first diffraction order to couple light in and out or to deviate it. We present...
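As a rough illustration of why the first diffraction order is the useful one, the grating equation below computes which orders of a 200 nm pitch SRG propagate inside a high-index waveguide and which of them exceed the total-internal-reflection limit. The waveguide index and incidence angle are assumed values chosen for illustration, not figures from the paper.

```python
# Illustrative sketch of the transmission grating equation for an SRG in-coupler.
# Wavelength matches the blue example in the text; waveguide index and incidence
# angle are assumed values.
import numpy as np

lam   = 460e-9               # blue wavelength in vacuum [m]
pitch = 200e-9               # SRG pitch [m]
n_wg  = 1.9                  # refractive index of the waveguide (assumed)
theta_i = np.deg2rad(30.0)   # incidence angle in air (assumed)

tir_limit = 1.0 / n_wg       # |sin(theta)| above this value -> total internal reflection

for m in (-1, 0, +1):
    # Grating equation (air -> waveguide): n_wg * sin(theta_m) = sin(theta_i) + m * lam / pitch
    s = (np.sin(theta_i) + m * lam / pitch) / n_wg
    if abs(s) > 1.0:
        print(f"order {m:+d}: evanescent (does not propagate)")
        continue
    theta_m = np.rad2deg(np.arcsin(s))
    guided = abs(s) > tir_limit
    print(f"order {m:+d}: {theta_m:+.1f} deg in the guide, "
          f"{'trapped by TIR (guided)' if guided else 'not guided'}")
```

With these illustrative numbers only the minus-first order both propagates and is trapped by total internal reflection, i.e. it is the order that actually couples light into the guide.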
Compressively sampled light field reconstruction using orthogonal frequency selection and refinement
This paper considers the compressive sensing framework as a way of overcoming the spatio-angular trade-off inherent to light field acquisition devices. We present a novel method to reconstruct a full 4D light field from a sparse set of data samples or measurements. The approach relies on the assumption that sparse...
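For intuition, the acquisition side of such a compressive framework can be sketched as keeping a random subset of ray samples of the 4D light field; the dimensions and sampling ratio below are assumptions, and the naive zero-filled estimate is only the baseline that a sparse-recovery method is meant to beat.

```python
# Minimal sketch of a compressive acquisition model for a 4D light field:
# keep only a random subset of ray samples. Shapes and sampling ratio are
# assumed for illustration; the paper's reconstruction replaces the naive
# zero-fill used here.
import numpy as np

U, V, X, Y = 9, 9, 64, 64                          # angular (U, V) and spatial (X, Y) sizes (assumed)
rng = np.random.default_rng(0)
lf = rng.random((U, V, X, Y)).astype(np.float32)   # stand-in 4D light field

ratio = 0.1                                        # fraction of rays actually measured
mask = rng.random(lf.shape) < ratio                # random sampling pattern
measurements = lf[mask]                            # sparse set of samples

print(f"{measurements.size} of {lf.size} rays kept "
      f"({100 * measurements.size / lf.size:.1f}%)")

# Zero-filled estimate: the trivial baseline a sparse-recovery method improves on.
zero_fill = np.zeros_like(lf)
zero_fill[mask] = measurements
```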
Information captured by the eye allows the human brain to extract depth cues, analyze them, and understand a complex 3D scene. In real life, all these depth cues are naturally present at the same time and are coherent. Rendering them on a display for several viewers...
In this paper we address the problem of view synthesis from large-baseline light fields by turning a sparse set of input views into a Multi-plane Image (MPI). Because available datasets are scarce, we propose a lightweight network that does not require extensive training. Unlike the latest approaches, our model does...
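For a purely horizontal camera translation and fronto-parallel planes, novel-view rendering from an MPI reduces to shifting each plane by its disparity before compositing. The sketch below uses that simplification (a full renderer warps each plane with a homography) and random data; it is not the paper's lightweight network.

```python
# Minimal sketch (not the paper's model): render a nearby view from an MPI by
# shifting each fronto-parallel plane by its disparity, then alpha-compositing.
# Valid only for a purely horizontal camera translation.
import numpy as np

def render_translated_view(mpi, disparities, baseline):
    """mpi: (D, H, W, 4) RGBA planes, far to near.
    disparities: (D,) per-plane disparity in pixels for a unit baseline.
    baseline: horizontal camera offset in baseline units."""
    out = np.zeros(mpi.shape[1:3] + (3,), dtype=np.float32)
    for rgba, d in zip(mpi, disparities):            # far to near
        shift = int(round(d * baseline))             # nearest-pixel shift for simplicity
        warped = np.roll(rgba, shift, axis=1)        # horizontal shift of the plane (wraps at borders)
        rgb, a = warped[..., :3], warped[..., 3:4]
        out = rgb * a + out * (1.0 - a)              # "over" compositing
    return out

D, H, W = 16, 128, 160
mpi = np.random.rand(D, H, W, 4).astype(np.float32)
disp = np.linspace(0.0, 8.0, D)                      # far planes move little, near planes a lot
novel_view = render_translated_view(mpi, disp, baseline=1.5)
print(novel_view.shape)                              # (128, 160, 3)
```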
In this work we propose a solution for the creation of a nanojet focusing component based on a combination of two dielectric materials capable of managing the position of the focused beam in the near zone. We demonstrate that the double-material design of the elements of metagratings can be used...
We present a new method for reconstructing a 4D light field from a random set of measurements. A 4D light field block can be represented by a sparse model in the Fourier domain. As such, the proposed algorithm reconstructs the light field, block by block, by selecting frequencies of the...
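A minimal version of this frequency-selection idea can be written as a greedy, orthogonal-matching-pursuit-style loop on a 1-D signal: repeatedly pick the Fourier atom best correlated with the residual on the measured samples, refit all selected frequencies, and finally evaluate the model on the whole block. Sizes, sparsity and sampling ratio below are assumptions, and the 4-D block case is reduced to 1-D for brevity.

```python
# Sketch of greedy Fourier-frequency selection (an OMP-style loop) on a 1-D
# signal, as a simplified stand-in for block-wise 4-D reconstruction.
import numpy as np

rng = np.random.default_rng(1)
N, K = 256, 5                                   # block length and number of active frequencies
freqs = rng.choice(N, size=K, replace=False)
coeffs = rng.standard_normal(K) + 1j * rng.standard_normal(K)
F = np.exp(2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)  # DFT synthesis matrix
x = (F[:, freqs] @ coeffs).real                 # ground-truth signal with a sparse spectrum

omega = rng.choice(N, size=N // 4, replace=False)   # indices of the measured samples
y = x[omega]

selected, residual = [], y.astype(complex)
for _ in range(2 * K):                              # a few more picks than the nominal sparsity
    corr = np.abs(F[omega].conj().T @ residual)     # match residual against every frequency atom
    corr[selected] = 0.0                            # never reselect a frequency
    selected.append(int(np.argmax(corr)))
    A = F[omega][:, selected]
    c, *_ = np.linalg.lstsq(A, y.astype(complex), rcond=None)  # refit all selected frequencies jointly
    residual = y - A @ c

x_hat = (F[:, selected] @ c).real                   # evaluate the sparse model on the full block
print(f"relative error: {np.linalg.norm(x - x_hat) / np.linalg.norm(x):.3f}")
```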
To capture light fields of large environments, the views would have to be sparse, i.e. with a relatively large distance between them. Such sparseness, however, makes subsequent processing much more difficult than it would be with dense light fields. This includes segmentation. In this paper,...
Presentation of a Light Field pipeline for Immersive Video Experiences
Light field acquisition devices allow capturing scenes with unmatched postprocessing possibilities. However, the huge amount of high-dimensional data poses challenging problems to light field processing in interactive time. In order to enable light field processing with a tractable complexity, in this paper, we address the problem of light field oversegmentation....
This paper describes a light field scalable compression scheme based on the sparsity of the angular Fourier transform of the light field. A subset of sub-aperture images (or views) is compressed using HEVC as a base layer and transmitted to the decoder. An entire light field is reconstructed from this...
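A toy version of the enhancement step, assuming the angular spectrum is band-limited (a stand-in for the sparsity model actually used), fits a few low angular frequencies per pixel to the decoded base-layer views by least squares and evaluates them at every angular position. All sizes, the base-layer subset and the band limit below are assumed values.

```python
# Toy sketch of the scalable idea: the decoded base-layer views sample the light
# field at a few angular positions; assuming a band-limited angular spectrum,
# the remaining views are synthesized per pixel by least squares.
import numpy as np

K, H, W = 9, 32, 32                       # angular positions and spatial size (assumed)
rng = np.random.default_rng(2)
light_field = rng.random((K, H, W)).astype(np.float32)   # random stand-in (a real LF is closer to band-limited)

base_idx = np.array([0, 2, 4, 6, 8])      # views carried by the HEVC base layer (assumed subset)
base_views = light_field[base_idx]        # what the decoder actually receives

F = 3                                     # number of low angular frequencies kept (band limit)
freqs = np.arange(-(F // 2), F // 2 + 1)  # e.g. [-1, 0, 1]
A = np.exp(2j * np.pi * np.outer(base_idx, freqs) / K)      # angular atoms at base positions
B = np.exp(2j * np.pi * np.outer(np.arange(K), freqs) / K)  # same atoms at every position

samples = base_views.reshape(len(base_idx), -1)             # (n_base, H*W)
coeffs = np.linalg.pinv(A) @ samples                        # per-pixel least-squares fit
reconstructed = (B @ coeffs).real.reshape(K, H, W)          # all K views: base + synthesized

print(reconstructed.shape)                                  # (9, 32, 32)
```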
Light-field (LF) is foreseen as an enabler for the next generation of 3D/AR/VR experiences. However, the lack of unified representation, storage and processing formats, varying LF acquisition systems, and capture-specific LF processing algorithms prevent cross-platform approaches and constrain the advancement and standardization of LF information. In this work we...
The quantity and diversity of data in Light-Field videos make this content valuable for many applications such as mixed and augmented reality or post-production in the movie industry. Some of these applications require a large parallax between the different views of the Light-Field, making multi-view capture a better option...
In this paper, we present a complete processing pipeline for focused plenoptic cameras. In particular, we propose 1) a new algorithm for microlens center calibration fully in the Fourier domain, 2) a novel algorithm for depth map computation using a stereo focal stack, and 3) a depth-based rendering algorithm that...
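To give an idea of what a fully Fourier-domain microlens calibration can exploit: the periodic microlens grid of a white (flat-field) image produces sharp peaks in its 2-D spectrum, and the peak location yields the grid pitch. The synthetic white image below is an assumption, and the paper's algorithm goes further (sub-pixel accuracy, grid rotation, etc.).

```python
# Sketch of the idea behind Fourier-domain microlens calibration: the periodic
# microlens pattern shows up as peaks in the 2-D spectrum of a white image, and
# the peak position gives the grid pitch. The white image is a toy model.
import numpy as np

N, pitch = 512, 16                              # sensor size and true microlens pitch in pixels (assumed)
yy, xx = np.mgrid[0:N, 0:N]
white = np.cos(2 * np.pi * xx / pitch) * np.cos(2 * np.pi * yy / pitch) + 1.0  # toy flat-field image

spectrum = np.abs(np.fft.fftshift(np.fft.fft2(white)))
spectrum[N // 2, N // 2] = 0.0                  # suppress the DC peak
ky, kx = np.unravel_index(np.argmax(spectrum), spectrum.shape)
fx = abs(kx - N // 2) / N                       # dominant horizontal frequency (cycles/pixel)
fy = abs(ky - N // 2) / N

print(f"estimated pitch: {1 / max(fx, fy):.1f} px (true: {pitch} px)")
```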
In this paper, we introduce a novel graph representation for interactive light field segmentation using Markov Random Field (MRF). The greatest barrier to the adoption of MRF for light field processing is the large volume of input data. The proposed graph structure exploits the redundancy in the ray space in order to reduce the...
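The redundancy argument can be sketched as follows: assuming purely horizontal baselines and a known reference-view disparity map (both simplifications not stated in the abstract), rays from different views that reproject onto the same reference pixel can share a single graph node, which shrinks the MRF accordingly.

```python
# Sketch of ray grouping: each ray is mapped, through a reference disparity map,
# to the reference-view pixel it observes, so rays seeing the same scene point
# can share one MRF node. Geometry and disparity are simplifying assumptions.
import numpy as np

V, H, W = 5, 48, 64                               # number of views and view size (assumed)
rng = np.random.default_rng(3)
disparity = rng.uniform(0.0, 3.0, size=(H, W))    # stand-in reference-view disparity map

ref = V // 2                                      # central view used as reference
ref_node = np.arange(H * W).reshape(H, W)         # one graph node per reference pixel
node_of_ray = np.full((V, H, W), -1, dtype=np.int64)

ys = np.broadcast_to(np.arange(H)[:, None], (H, W))
xs = np.broadcast_to(np.arange(W)[None, :], (H, W))
for v in range(V):
    shift = v - ref                                      # signed horizontal baseline of view v
    xv = np.round(xs + disparity * shift).astype(int)    # where each reference pixel lands in view v
    valid = (xv >= 0) & (xv < W)
    node_of_ray[v, ys[valid], xv[valid]] = ref_node[ys[valid], xs[valid]]

n_rays, n_nodes = node_of_ray.size, ref_node.size
print(f"{n_rays} rays share {n_nodes} nodes "
      f"({n_rays / n_nodes:.1f}x fewer MRF variables, occlusions ignored)")
```

Rays left unassigned here (reprojections falling outside the image, occluded points) would receive their own nodes in a complete construction.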
Light field imaging has recently been made available to the mass market by Lytro and Raytrix commercial cameras. Thanks to a grid of microlenses placed in front of the sensor, a plenoptic camera simultaneously captures several images of the scene under different viewing angles, providing an enormous advantage for post-capture applications, e.g., depth estimation and image refocusing. In...
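One of the post-capture applications mentioned above, refocusing, has a compact textbook formulation: shift each sub-aperture view in proportion to its angular offset and a chosen refocus slope, then average. The sketch below uses random data and nearest-pixel shifts and is not tied to any specific camera pipeline.

```python
# Sketch of post-capture refocusing by shift-and-add over sub-aperture views.
# The light field is a random stand-in and the slope value is arbitrary.
import numpy as np

U, V, H, W = 5, 5, 64, 64
rng = np.random.default_rng(4)
views = rng.random((U, V, H, W)).astype(np.float32)    # sub-aperture views indexed (u, v, y, x)

def refocus(views, slope):
    """Average the views after shifting each one by slope * (angular offset)."""
    U, V = views.shape[:2]
    cu, cv = (U - 1) / 2, (V - 1) / 2
    acc = np.zeros(views.shape[2:], dtype=np.float32)
    for u in range(U):
        for v in range(V):
            dy = int(round(slope * (u - cu)))          # nearest-pixel shifts for simplicity
            dx = int(round(slope * (v - cv)))
            acc += np.roll(views[u, v], (dy, dx), axis=(0, 1))
    return acc / (U * V)

image = refocus(views, slope=1.5)                      # the slope selects the in-focus depth
print(image.shape)                                     # (64, 64)
```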