This paper presents a distributed coding scheme for the representation of 3D scenes captured by omnidirectional cameras. We consider a scenario with a pair of similar cameras that have equivalent bandwidth and computational resources. The images are captured at different viewpoints and encoded independently, while a joint decoder exploits the correlation between them for improved decoding quality. The distributed coding is built on a multiresolution representation of the spherical images, whose information is split into two partitions. The encoder transmits one partition after entropy coding, together with the syndrome bits resulting from the channel encoding of the other partition. The joint decoder exploits the intra-view correlation by predicting one partition from the other, and the inter-view correlation through motion estimation between images from different cameras. Experiments demonstrate that the distributed coding solution outperforms a scheme where the images are coded independently. Furthermore, the coding rate stays balanced between the cameras, which avoids the need for hierarchical relations between the vision sensors in camera networks.
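The syndrome-based principle underlying the scheme can be illustrated with a minimal Slepian-Wolf sketch. This is a toy example, not the paper's actual channel code or correlation model: it assumes a (7,4) Hamming parity-check matrix and side information that differs from the source in at most one bit, whereas the paper operates on partitions of multiresolution spherical coefficients. The encoder sends only the 3-bit syndrome of a 7-bit block; the joint decoder combines it with the correlated side information to recover the block exactly.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code: column i is the
# 3-bit binary representation of i+1, so a nonzero syndrome directly
# indexes the position of a single flipped bit.
H = np.array([[int(b) for b in format(i, "03b")] for i in range(1, 8)]).T

def syndrome(bits):
    """3-bit syndrome of a length-7 binary block (the 'compressed' data)."""
    return H @ bits % 2

def joint_decode(s_x, y):
    """Recover x from its syndrome s_x and correlated side information y,
    assuming x and y differ in at most one bit (toy correlation model)."""
    diff = (syndrome(y) + s_x) % 2      # syndrome of the error pattern x XOR y
    x_hat = y.copy()
    if diff.any():
        pos = int("".join(map(str, diff)), 2) - 1  # column index of H
        x_hat[pos] ^= 1                 # flip the single mismatched bit
    return x_hat

x = np.array([1, 0, 1, 1, 0, 0, 1])     # source block at one camera
y = x.copy()
y[4] ^= 1                               # side information: one bit flipped
x_rec = joint_decode(syndrome(x), y)
print((x_rec == x).all())               # → True
```

Only 3 syndrome bits are transmitted instead of the 7 source bits, and the decoder still recovers the block without error, which is the rate saving that distributed coding exploits; the paper replaces this toy code with channel coding of one image partition and motion-based prediction across views.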