Computational visual attention (VA) has been widely investigated over the last three decades, but conventional algorithms are not suitable for omnidirectional images, which often contain a significant amount of radial distortion. Only recently has a computational approach been proposed that processes images in the spherical (non-Euclidean) space and produces attention maps with a direction-independent, homogeneous response. This paper investigates how this spherical approach applies to real scenes and, in particular, to different omnidirectional visual sensors. The reported experiments cover omnidirectional images obtained from a multi-camera omnidirectional sensor as well as from parabolic and hyperbolic catadioptric image sensors.