Capturing light fields of large environments requires them to be sparse, i.e., with relatively large distances between views. Such sparseness, however, makes subsequent processing, including segmentation, considerably more difficult than with dense light fields. In this paper, we address the problem of meaningful segmentation of a sparse planar light field, producing segments that are coherent across views. In addition, our method is unique in not assuming that all surfaces in the environment are perfect Lambertian reflectors, which further broadens its applicability. Our fully automatic segmentation pipeline leverages scene structure and does not require the user to navigate through the views to fix inconsistencies. The key idea is to combine coarse estimates, given by an over-segmentation of the scene into super-rays, with detailed ray-based processing. We demonstrate the merit of our algorithm through a novel approach to intrinsic light field decomposition that outperforms state-of-the-art methods.