This paper presents a novel rate allocation scheme for computing the 3D structure of a scene from compressed stereo images captured by a distributed vision sensor network. The images captured at different viewpoints are encoded independently with a balanced rate allocation. A central decoder jointly decodes the information from the encoders and computes the 3D geometry of the scene in the form of a depth map. We first consider the scenario of estimating the 3D geometry from views compressed with standard encoders, e.g., SPIHT. We observe that depth values are not accurately reconstructed in low-contrast regions or around weak edges. This is mainly due to the rate allocation scheme, which assigns bits based on the variance of the transform coefficients. We therefore propose a rate allocation scheme in which each encoder first identifies the low-contrast regions and then distributes the bits such that the visual information in those regions is preserved, while the approximation quality in the rest of the image is not significantly penalized. We adapt the SPIHT coding scheme to implement the proposed rate allocation methodology. Experimental results show that, for a given bit budget, the proposed encoding scheme reconstructs the 3D geometry with higher accuracy than the SPIHT, JPEG 2000, and JPEG coding schemes.
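To make the contrast between the two allocation strategies concrete, the sketch below illustrates (in a simplified, hypothetical form, not the paper's actual SPIHT-based encoder) how a purely variance-driven bit allocation starves low-variance blocks, and how reserving a minimum budget for blocks flagged as low-contrast changes the distribution. The function names, the per-block granularity, and the `low_thresh` and `floor` parameters are all assumptions made for illustration.

```python
# Illustrative sketch only: per-block bit allocation over a fixed budget.
# Thresholds and function names are hypothetical, not from the paper.
import math


def allocate_bits(block_variances, total_bits):
    """Classic variance-based allocation: each block receives the average
    share plus a correction proportional to its log-variance deviation,
    so low-variance (low-contrast) blocks receive fewer bits."""
    logs = [math.log2(max(v, 1e-9)) for v in block_variances]
    mean_log = sum(logs) / len(logs)
    base = total_bits / len(logs)
    return [base + 0.5 * (lg - mean_log) for lg in logs]


def allocate_bits_contrast_aware(block_variances, total_bits,
                                 low_thresh, floor):
    """Contrast-aware variant: blocks whose variance falls below
    `low_thresh` are treated as low-contrast and guaranteed `floor`
    bits; the remaining budget is spread over the other blocks by
    the same log-variance rule, so the total budget is unchanged."""
    low = [i for i, v in enumerate(block_variances) if v < low_thresh]
    high = [i for i in range(len(block_variances)) if i not in low]
    bits = [0.0] * len(block_variances)
    for i in low:
        bits[i] = floor
    remaining = total_bits - floor * len(low)
    if high:
        logs = [math.log2(max(block_variances[i], 1e-9)) for i in high]
        mean_log = sum(logs) / len(logs)
        base = remaining / len(high)
        for i, lg in zip(high, logs):
            bits[i] = base + 0.5 * (lg - mean_log)
    return bits
```

For example, with block variances `[0.5, 10, 50, 100]` and a 32-bit budget, the classic rule gives the first (low-contrast) block the smallest share, whereas the contrast-aware variant pins it to the chosen floor and redistributes the rest, keeping the total budget fixed.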