360° video, which presents views consistent with the rotation of the viewer's head about three axes (roll, pitch, yaw), is the current approach for creating immersive video experiences. Nevertheless, a more natural, photorealistic experience is desired, one that supports the visual cues needed for coherent psycho-visual sensory fusion without the side effect of cyber-sickness. 360° video applications that additionally enable the user to translate in the x, y, and z directions are clearly the next frontier toward this goal. Such support for full Six Degrees of Freedom (6DoF) in next-generation immersive video is a natural application for light fields. However, a significant obstacle to the adoption of light field technologies is the large amount of data required to ensure that the light rays corresponding to the viewer's 6DoF position are properly delivered, whether drawn from captured light information or synthesized from available views. Experiments to improve known methods for view synthesis and depth estimation are therefore a fundamental next step toward establishing a reference framework within which compression technologies can be evaluated. This paper describes a testbed and experiments that enable smooth and artefact-free view transitions, which can later be used in a framework to study how best to compress the data.
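A 6DoF viewer pose couples the three rotational degrees (roll, pitch, yaw) with the three translational ones (x, y, z). As a minimal illustration only, not part of the paper's testbed, such a pose can be sketched as a 4×4 rigid transform that maps points from the viewer's frame to the world frame (function and variable names here are hypothetical):

```python
import math

def rot_x(a):  # rotation about x: roll
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):  # rotation about y: pitch
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):  # rotation about z: yaw
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul3(A, B):
    # 3x3 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def pose_matrix(roll, pitch, yaw, tx, ty, tz):
    """4x4 pose: R = Rz(yaw) · Ry(pitch) · Rx(roll), followed by translation."""
    R = matmul3(rot_z(yaw), matmul3(rot_y(pitch), rot_x(roll)))
    return [R[0] + [tx], R[1] + [ty], R[2] + [tz], [0, 0, 0, 1]]

def apply_pose(M, p):
    # Transform a 3D point by the homogeneous pose matrix M
    x, y, z = p
    return tuple(M[i][0] * x + M[i][1] * y + M[i][2] * z + M[i][3]
                 for i in range(3))
```

For example, a pure translation `pose_matrix(0, 0, 0, 1, 0, 0)` moves the origin to (1, 0, 0), while a 90° yaw rotates the point (1, 0, 0) onto (0, 1, 0); in a light field setting, each such pose determines which rays must be fetched or synthesized.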