Iterative rigid body transformation estimation for visual 3-D object tracking
We present a novel yet simple 3-D stereo vision tracking algorithm that computes the position and orientation of an object from the locations of markers attached to it. The novelty of this algorithm is that it does not assume that the markers are tracked synchronously, which provides higher robustness to noise in the data, missing points, and outliers. The principle of the algorithm is to perform a simple gradient descent on the rigid body transformation describing the object's position and orientation. The algorithm is proven to converge to the correct solution, and it is illustrated in a simple experimental setup involving two USB cameras.
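To make the idea concrete, the sketch below shows a minimal, illustrative version of gradient descent on a rigid body transform (R, t), updated from one marker observation at a time rather than from a synchronized set of markers. It is not the authors' exact formulation: the function and parameter names (rodrigues, update, lr_rot, lr_trans) are illustrative, and it assumes the markers' coordinates in the object frame are known and that each triangulated marker position arrives individually.

```python
# Minimal sketch: asynchronous gradient descent on a rigid body transform.
# Assumptions (not from the paper): known object-frame marker positions,
# one noisy 3-D marker observation per step, constant learning rates.
import numpy as np

def skew(w):
    """3x3 skew-symmetric matrix [w]x such that [w]x v = w x v."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def rodrigues(w):
    """Rotation matrix exp([w]x) for a rotation vector w (Rodrigues formula)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = skew(k)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def update(R, t, m, x_obs, lr_rot=0.1, lr_trans=0.1):
    """One gradient step on 0.5 * ||R m + t - x_obs||^2 for a single marker.

    m     : marker position in the object frame
    x_obs : observed 3-D marker position (e.g. from stereo triangulation)
    """
    e = R @ m + t - x_obs              # residual for this single marker
    t_new = t - lr_trans * e           # gradient step on the translation
    grad_w = np.cross(R @ m, e)        # gradient w.r.t. a small rotation vector
    R_new = rodrigues(-lr_rot * grad_w) @ R
    return R_new, t_new

# Toy usage: recover a known transform from noisy, asynchronous marker readings.
rng = np.random.default_rng(0)
markers = rng.normal(size=(4, 3))                  # object-frame marker layout
R_true = rodrigues(np.array([0.2, -0.1, 0.3]))
t_true = np.array([0.5, -0.2, 1.0])

R_est, t_est = np.eye(3), np.zeros(3)
for _ in range(2000):
    i = rng.integers(len(markers))                 # markers arrive one at a time
    x_obs = R_true @ markers[i] + t_true + 0.01 * rng.normal(size=3)
    R_est, t_est = update(R_est, t_est, markers[i], x_obs)

print("estimated t:", np.round(t_est, 3), " true t:", np.round(t_true, 3))
```

Because each step uses a single marker, missing markers or outliers only affect individual updates, which is the intuition behind the robustness claim above; the exact convergence argument is given in the paper.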