Structural Inpainting

Scene-agnostic visual inpainting remains very challenging despite progress in patch-based methods. Recently, Pathak et al. [26] introduced convolutional "context encoders" (CEs) for unsupervised feature learning through image-completion tasks. With the additional help of adversarial training, CEs have turned out to be a promising tool for completing complex structures in real inpainting problems. In the present paper, we propose to push this key ability further by relying on perceptual reconstruction losses at training time. We demonstrate the merit of the approach for structural inpainting on a wide variety of visual scenes, and confirm it through a user study. Combined with the optimization-based refinement of [32] using neural patches, our context encoder opens up new opportunities for prior-free visual inpainting.
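To make the training objective concrete, below is a minimal sketch of a perceptual (feature-reconstruction) loss such as the one referred to in the abstract, written in PyTorch. The choice of a frozen VGG16 truncated at relu3_3, the L1 distance between feature maps, and the loss weights in the final comment are illustrative assumptions, not the exact settings of the paper.

import torch
import torch.nn as nn
import torchvision.models as models

class PerceptualLoss(nn.Module):
    # Feature-reconstruction loss computed on activations of a frozen,
    # ImageNet-pretrained VGG16 (truncated at relu3_3 as an illustrative choice).
    def __init__(self, cut_index=16):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features
        self.extractor = nn.Sequential(*list(vgg.children())[:cut_index]).eval()
        for p in self.extractor.parameters():
            p.requires_grad = False  # the feature extractor is never updated
        self.distance = nn.L1Loss()

    def forward(self, completed, target):
        # Compare deep features of the completed image and the ground truth,
        # rather than raw pixels, to favor structurally plausible completions.
        return self.distance(self.extractor(completed), self.extractor(target))

# Usage sketch: completed and ground-truth images of shape (N, 3, H, W),
# normalized as expected by the VGG feature extractor.
x_completed = torch.rand(4, 3, 128, 128)
x_gt = torch.rand(4, 3, 128, 128)
perc_loss = PerceptualLoss()(x_completed, x_gt)

# In a full training loop this term would be combined with an adversarial loss
# from a discriminator D, e.g. (weights lambda_perc and lambda_adv are assumptions):
# total_loss = lambda_perc * perc_loss + lambda_adv * adversarial_loss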