Tracking and Structure from Motion
Dense three-dimensional reconstruction of a scene from images is a challenging task. In the structure-from-motion approach, a key step is to compute depth maps, which encode the distance of objects in the scene to a moving camera. Usually, this is achieved by finding correspondences in successive images and computing the distance by means of epipolar geometry. In this Master's thesis, a variational framework for solving the depth-from-motion problem for planar image sequences is proposed. Camera ego-motion estimation equations are derived and combined with the depth-from-motion estimation in a single algorithm. The method is successfully tested on synthetic images under general camera translation. Since it does not depend on the correspondence problem and is highly parallelizable, it is well suited to real-time implementation. Further work in this thesis includes a review of general variational methods in image processing, in particular TV-L1 optical flow, as well as its real-time implementation on the graphics processing unit.
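As background for the correspondence-based strategy the abstract contrasts with, the following is a minimal sketch (not part of the thesis) of how depth is classically recovered from two views of a laterally translating pinhole camera: a matched point's horizontal disparity determines its depth via triangulation. The function name and parameter values are illustrative assumptions.

```python
def depth_from_disparity(x1, x2, focal_px, baseline):
    """Triangulate depth from horizontal pixel disparity.

    For a pinhole camera translating by `baseline` along its x-axis,
    a point imaged at column x1 in the first view and x2 in the second
    has disparity d = x1 - x2 and depth Z = focal_px * baseline / d.
    """
    disparity = x1 - x2
    if disparity <= 0:
        raise ValueError("non-positive disparity: point at infinity or bad match")
    return focal_px * baseline / disparity


# Illustrative numbers: focal length 500 px, baseline 0.1 m.
# A point 10 m away then projects with disparity d = f*b/Z = 5 px.
z = depth_from_disparity(320.0, 315.0, focal_px=500.0, baseline=0.1)
print(z)  # 10.0
```

The variational approach proposed in the thesis avoids this explicit matching step, which is what makes it robust to the correspondence problem and amenable to parallel, real-time implementation.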