Abstract

This thesis investigates advanced signal processing concepts and their application to geometric processing and transformations of images and volumes. In the first part, we discuss the class of transformations that project volume data onto a plane using parallel line integrals, known as the X-ray transform. In computed tomography (CT), the problem is to reconstruct the volume from these projections. We consider a basic setup with parallel projection and a geometry model in which the CT scanner rotates around one main axis of the volume. In this case, the problem is separable and reduces to the reconstruction of parallel images (slices of the volume), each of which can be reconstructed from a series of 1D projections taken at different angular positions. The standard reconstruction algorithm is filtered back-projection (FBP). We propose an alternative discretization of the Radon transform and of its inverse, based on least-squares approximation and the convolution of splines, which improves the reconstruction quality significantly. Next, we discuss volume rendering based on the X-ray transform. The volume is represented by a multiresolution wavelet decomposition, and the wavelets are projected onto an adaptive multiresolution 2D grid, which speeds up the rendering process, especially at coarse scales.

In the second part of the thesis, we discuss transformations that warp images; in computer graphics, this is called texture mapping. Simple warps, such as shear, rotation, or zoom, can be computed by least-squares sampling, again using convolutions of splines. For more general warps, no simple continuous solution exists, if an analytical one exists at all. After a review of existing texture mapping methods, we propose a novel recursive one that minimizes information loss: the texture is reconstructed from the mapped image and compared to the original texture. This algorithm outperforms the existing methods in terms of quality, but its complexity can be very high. A multiresolution version of the algorithm keeps the storage requirements and computational complexity within an acceptable range.

Fast transmission of textures and 3D models over communication links requires low-bitrate, progressive compression. Very low bitrates can be achieved by coding a texture with only the information necessary for a given view of the 3D scene; if the view changes, the missing information is transmitted, which results in a progressive bitstream for an animated 3D scene. In contrast to recursive texture mapping, we do not back-project the texture, but instead use a heuristic that predicts the information loss. This concept is called view-dependent scalability, and we show how to apply it to DCT-based (as part of MPEG-4) and wavelet-based coders. Finally, we examine how to balance the bit budget between a jointly coded mesh and texture for the progressive, view-dependent transmission of a 3D model. An exhaustive search finds the rate-distortion-optimal path; marginal analysis finds a solution close to the optimum at a much lower cost (only two evaluated frames per step, compared to a full search).
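
For reference, the slice-by-slice reconstruction described above relies on the standard Radon (X-ray) transform and its filtered back-projection inverse. The equations below give the usual continuous-domain textbook form, not the spline-based least-squares discretization proposed in the thesis.

```latex
% Radon / X-ray transform of a slice f(x, y): line integral at angle \theta and offset t
\[
  p_\theta(t) = \int_{\mathbb{R}^2} f(x, y)\, \delta(x\cos\theta + y\sin\theta - t)\, \mathrm{d}x\, \mathrm{d}y
\]
% Filtered back-projection: ramp-filter each projection (filter h with frequency
% response |\omega|), then integrate the filtered projections over all angles
\[
  f(x, y) = \int_0^{\pi} \bigl(p_\theta * h\bigr)(x\cos\theta + y\sin\theta)\, \mathrm{d}\theta
\]
```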
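
To illustrate the flavour of the recursive texture mapping idea, here is a minimal Python sketch. It is only a plausible reading of the abstract, not the algorithm developed in the thesis: the helpers `forward_warp` and `backward_warp` are hypothetical resampling callables, and the residual feedback loop is an assumption about how the comparison with the original texture could drive the refinement.

```python
def refine_mapping(texture, forward_warp, backward_warp, n_iter=3):
    """Illustrative sketch: iteratively refine a warped texture.

    texture is assumed to be an image array; forward_warp / backward_warp are
    assumed callables that resample an image into the target view and back
    into texture space, respectively.
    """
    mapped = forward_warp(texture)                # initial texture mapping
    for _ in range(n_iter):
        reconstructed = backward_warp(mapped)     # reconstruct the texture from the mapped image
        residual = texture - reconstructed        # information lost by the mapping
        mapped = mapped + forward_warp(residual)  # feed the lost detail back into the mapped image
    return mapped
```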
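
The bit-allocation step at the end can be pictured as a greedy marginal analysis. The sketch below assumes, for simplicity, that the rendered distortion can be evaluated separately per component through two hypothetical callables; it is not the coder interface used in the thesis, only an illustration of why each step needs just two evaluated frames.

```python
def allocate_bits(total_budget, step, distortion_mesh, distortion_tex):
    """Greedy marginal-analysis split of a bit budget between mesh and texture.

    distortion_mesh(bits) and distortion_tex(bits) are assumed callables that
    return the distortion of the rendered model when `bits` are spent on that
    component; each call corresponds to one evaluated frame.
    """
    mesh_bits, tex_bits = 0, 0
    d_mesh = distortion_mesh(mesh_bits)
    d_tex = distortion_tex(tex_bits)
    while mesh_bits + tex_bits + step <= total_budget:
        # Only two frames are evaluated per step: one candidate per component.
        d_mesh_next = distortion_mesh(mesh_bits + step)
        d_tex_next = distortion_tex(tex_bits + step)
        if d_mesh - d_mesh_next >= d_tex - d_tex_next:
            mesh_bits, d_mesh = mesh_bits + step, d_mesh_next  # mesh drop is larger
        else:
            tex_bits, d_tex = tex_bits + step, d_tex_next      # texture drop is larger
    return mesh_bits, tex_bits
```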
