With the explosion of Virtual Reality technologies, the production and usage of omnidirectional images (a.k.a. 360° images) presents new challenges in the domains of compression, transmission, and rendering. Evaluating the quality of images generated by these technologies is therefore paramount. As the exploration of 360° images within a Head-Mounted Display (HMD) is non-uniform, the current state of the art proposes saliency weighting of distortions (between a reference and an impaired version), thereby emphasizing impairments in frequently attended regions. So far, saliency maps have been generated by tracking head motion alone, under the assumption that viewport orientation is sufficient to determine saliency. The added value of eye-gaze tracking within the viewport has not yet been studied in this domain. In this work, an eye-tracking experiment is performed using an HMD, followed by a gaze analysis to characterize visual attention behavior within the viewport. Results suggest that most eye-gaze fixations fall rather far from the center of the viewport. Across contents and observers, gaze fixations are quasi-isotropically distributed in orientation. The average distance of gaze fixations from the viewport center (across contents and observers) varies between 14 and 20 visual degrees; these values correspond to a retinal eccentricity beyond the parafovea and perifovea, into the extra-perifoveal region. A foveation-based saliency weighting model centered at the middle of the viewport therefore holds in only 2.5% of the overall scenarios, and is consequently questionable. There is thus a need to refine saliency modeling and weighting for quality assessment in panoramic viewing.
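To make the weighting idea concrete, the following is a minimal sketch, not the authors' implementation, of saliency-weighted distortion pooling over a viewport, using an isotropic Gaussian centered at the viewport middle as the foveation-based weight that the results call into question. The function names, the Gaussian form, and the sigma and pixels-per-degree values are illustrative assumptions.

```python
import numpy as np

def foveation_weight(height, width, sigma_deg=7.0, deg_per_px=0.05):
    """Hypothetical foveation weight: an isotropic Gaussian centered at the
    viewport middle. sigma_deg and deg_per_px are assumed values, not taken
    from the paper."""
    ys, xs = np.mgrid[0:height, 0:width]
    cy, cx = (height - 1) / 2.0, (width - 1) / 2.0
    # Eccentricity of each pixel from the viewport center, in visual degrees.
    ecc = np.hypot(ys - cy, xs - cx) * deg_per_px
    return np.exp(-0.5 * (ecc / sigma_deg) ** 2)

def weighted_distortion(reference, impaired, weight):
    """One generic way to pool distortions: saliency-weighted mean squared
    error between a reference viewport and its impaired version."""
    err = (reference.astype(np.float64) - impaired.astype(np.float64)) ** 2
    return float(np.sum(weight * err) / np.sum(weight))

# Usage sketch on synthetic data: a viewport-centered weight down-weights
# the periphery, yet the reported fixations lie 14-20 degrees off-center.
ref = np.random.rand(960, 1080)
imp = ref + 0.01 * np.random.randn(960, 1080)
w = foveation_weight(*ref.shape)
print(weighted_distortion(ref, imp, w))
```

Under the abstract's findings, such a center-peaked weight would misrepresent where observers actually attend; an eye-tracked saliency map would replace `w` in the pooling above.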