This paper presents an adaptive clipping technique with optimized syntax for the video coding Joint Exploration Model (JEM), which exploits the signal characteristics of the video sequence. The component-wise clipping bounds are coded for each slice. Two encoding methods that leverage the efficiency of the proposed technique are then described. The first models the errors induced by the clipping process in the Rate-Distortion Optimization. The second reduces the cost of transform coefficients by smoothing the residuals. Finally, experimental results are provided and several variants are discussed.
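The core idea can be sketched as follows: instead of clipping reconstructed samples to the fixed full range implied by the bit depth (e.g. [0, 1023] at 10 bits), each component is clipped to content-derived bounds signaled in the slice header. The function names and the min/max bound derivation below are illustrative assumptions, not the paper's exact syntax or encoder decision process.

```python
import numpy as np

def derive_bounds(component: np.ndarray) -> tuple[int, int]:
    """Encoder side (illustrative): derive component-wise clipping
    bounds from the actual sample range of the original signal.
    The real coded syntax and any bound refinement are defined in
    the paper, not reproduced here."""
    return int(component.min()), int(component.max())

def adaptive_clip(samples: np.ndarray, bounds: tuple[int, int]) -> np.ndarray:
    """Clip samples to the signaled per-slice, per-component bounds
    instead of the fixed bit-depth range [0, 2**bitdepth - 1]."""
    low, high = bounds
    return np.clip(samples, low, high)

# Hypothetical 10-bit luma samples whose true range is narrower
# than the full [0, 1023] range:
luma = np.array([0, 64, 512, 900, 1023])
bounds = derive_bounds(np.array([64, 900]))  # signaled in the slice header
clipped = adaptive_clip(luma, bounds)        # -> [64, 64, 512, 900, 900]
```

Out-of-range reconstruction errors (here the samples at 0 and 1023) are removed by the tighter bounds, which is what the RDO-based error modeling and residual-smoothing encoder methods described in the paper exploit.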
Adaptive Clipping in JEM