We present a diminished reality application running live on consumer mobile devices. In our pre-observation-based approach, the clean 3D scene, free of undesired objects, is scanned beforehand and reconstructed as a high-resolution textured 3D model. At runtime, objects added within a region of interest are efficiently removed by projecting the previously captured background. Differences in illumination between scan time and runtime are compensated to obtain seamless results. The proposed approach requires no segmentation or manual input beyond the definition of the 3D region of interest to be diminished, and makes no particular assumptions about the background geometry. We demonstrate the potential of our approach on a variety of challenging unknown 3D scenes, including textured backgrounds, dynamic illumination conditions, and foreground objects partially occluding the diminished region. We provide details of our compute shader implementation to facilitate reimplementation by the community.