Dataset and Pipeline for Multi-View Light Field Video

The quantity and diversity of data in Light-Field videos make this content valuable for many applications, such as mixed and augmented reality or post-production in the movie industry. Some of these applications require a large parallax between the views of the Light-Field, making multi-view capture a better option than plenoptic cameras. In this paper we propose a dataset and a complete pipeline for Light-Field video. The proposed algorithms are specifically tailored to process sparse, wide-baseline multi-view videos captured with a camera rig. Our pipeline includes geometric calibration, color homogenization, view pseudo-rectification and depth estimation. These building blocks are well known in the state of the art, but they must reach high accuracy to guarantee the success of the algorithms that consume our data. Along with this paper, we publish our Light-Field video dataset, which we believe will be of special interest to the community. We provide the original sequences, the calibration parameters and the pseudo-rectified views. Finally, we propose a depth-based rendering algorithm for Dynamic Perspective Rendering.
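The abstract does not detail the depth-based rendering step, but the general idea behind Dynamic Perspective Rendering from a depth map can be illustrated with a minimal sketch: back-project each source pixel to 3D using its depth and the calibrated intrinsics, transform it into a virtual camera pose, and re-project with z-buffering. The function and parameter names below (render_novel_view, T_src_to_dst, etc.) are hypothetical, and the naive z-buffered splatting is a generic stand-in rather than the authors' algorithm.

```python
import numpy as np

def render_novel_view(color, depth, K_src, K_dst, T_src_to_dst):
    """Forward-warp a source view into a novel viewpoint using its depth map.

    color        : (H, W, 3) source image
    depth        : (H, W) per-pixel depth along the source camera's z-axis
    K_src, K_dst : (3, 3) intrinsics of the source and virtual cameras
    T_src_to_dst : (4, 4) rigid transform from source- to virtual-camera frame
    """
    H, W = depth.shape
    # Pixel grid in homogeneous coordinates, shape (3, H*W).
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T

    # Back-project to 3D points in the source camera frame.
    rays = np.linalg.inv(K_src) @ pix
    pts_src = rays * depth.reshape(1, -1)
    pts_src_h = np.vstack([pts_src, np.ones((1, H * W))])

    # Transform into the virtual camera frame and project.
    pts_dst = (T_src_to_dst @ pts_src_h)[:3]
    proj = K_dst @ pts_dst
    z = proj[2]
    uv = np.round(proj[:2] / z).astype(int)

    # Z-buffered splat: the nearest point wins when several land on one pixel.
    out = np.zeros_like(color)
    zbuf = np.full((H, W), np.inf)
    valid = (z > 0) & (uv[0] >= 0) & (uv[0] < W) & (uv[1] >= 0) & (uv[1] < H)
    src_colors = color.reshape(-1, 3)
    for i in np.flatnonzero(valid):
        x, y = uv[0, i], uv[1, i]
        if z[i] < zbuf[y, x]:
            zbuf[y, x] = z[i]
            out[y, x] = src_colors[i]
    return out
```

In a wide-baseline rig such as the one described here, a practical renderer would additionally blend several source views and fill disocclusion holes; the sketch keeps only the core depth-driven warp.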