Modeling of 2D+1 texture movies for video coding

We propose a novel model-based coding system for video. Model-based coding aims at improving compression gain by replacing non-informative image elements with perceptually equivalent models. Images containing large textured regions are ideal candidates. Texture movies are obtained by filming a static texture with a moving camera. Integrating the motion information into the generative texture process makes it possible to replace the “real” texture with a “visually equivalent” synthetic one, while preserving the correct motion perception. Global motion estimation is used to determine the movement of the camera and to identify the overlapping region between two successive frames. This information is then exploited for the generation of the texture movies. The proposed method for synthesizing 2D+1 texture movies can emulate any piece-wise linear trajectory. Compression performance is very encouraging: on this kind of video sequence, the proposed method improves the compression rate of a state-of-the-art MPEG-4 video coder by an order of magnitude while providing noticeably better perceptual quality. Importantly, the current implementation runs in real time on Intel PIII processors.
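To make the global-motion step concrete, the following is a minimal sketch of estimating a dominant translation between two successive frames and deriving the overlapping region. It uses phase correlation as the estimator and assumes a purely translational camera motion; the paper's actual global motion model and estimator are not specified here, so all function names and the choice of phase correlation are illustrative assumptions.

```python
import numpy as np

def estimate_global_translation(f1, f2):
    """Estimate the dominant (dx, dy) translation of frame f2 relative
    to frame f1 via phase correlation (one common form of global motion
    estimation; assumed here, not necessarily the paper's estimator)."""
    F1 = np.fft.fft2(f1)
    F2 = np.fft.fft2(f2)
    # Normalized cross-power spectrum; its inverse FFT peaks at the shift.
    cross = F2 * np.conj(F1)
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the frame size to negative values.
    h, w = f1.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dx, dy

def overlap_region(w, h, dx, dy):
    """Axis-aligned overlap rectangle (x0, y0, x1, y1) between a w-by-h
    frame and its successor shifted by (dx, dy); None if no overlap.
    Only this overlap must be consistent between frames; the rest of
    the new frame can be filled by the generative texture process."""
    x0, y0 = max(0, dx), max(0, dy)
    x1, y1 = min(w, w + dx), min(h, h + dy)
    if x0 >= x1 or y0 >= y1:
        return None
    return (x0, y0, x1, y1)
```

For a circularly shifted copy of a frame the phase-correlation peak recovers the shift exactly; for real footage the estimate is approximate and is typically refined or combined with a more general parametric motion model.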

Published in:
Image and Vision Computing, Special Issue on Generative Model Based Vision, vol. 21, no. 12, pp. 49-59

