Distributed coding of multiresolution omnidirectional images
This paper addresses the problem of compactly representing a 3D scene captured by distributed omnidirectional cameras. Since the images from the sensors are likely to be correlated in most practical scenarios, we build a distributed algorithm based on coding with side information. A reference image is processed with a wavelet transform and progressively encoded. The Wyner-Ziv images undergo a multiresolution decomposition, and the resulting bitplanes are channel encoded with LDPC codes. The central decoder then reconstructs the Wyner-Ziv images from the syndrome bits of the channel codes, using the reference omnidirectional image as side information. It also iteratively performs motion estimation on the 2-sphere to refine the side information. Experimental results demonstrate that distributed coding improves the rate-distortion performance for coding a set of omnidirectional images compared to independent coding solutions. The proposed method further extends to the decoding of multiple Wyner-Ziv images from a single reference omnidirectional image, and hence achieves a lower overall coding rate than disparity-based schemes. In addition, it requires neither explicit knowledge of the camera parameters nor precise calibration, which is particularly attractive in camera networks.