Abstract

Recent developments in video coding technology on the one hand, and the continuous growth in size and bandwidth of lossy networks on the other, have created a whole new world of multimedia communications. However, today's networks, which are best-effort in nature, cannot guarantee the stringent delay constraints and bandwidth requirements imposed by many of these applications. The main challenge therefore remains to find efficient coding techniques that do not require retransmission and that ensure a good reconstruction quality even when pieces of information are missing. Multiple description coding (MDC) offers an elegant and competitive solution for data transmission over lossy packet-based networks, with a graceful degradation in quality as losses increase. In MDC, two or more representations of a source are generated in such a way that an acceptable quality is ensured even if only one description is received, while the quality further improves as more descriptions are combined.

In this thesis, we address several important issues in MDC. One of them is how to generate an arbitrary number of descriptions, as it has been suggested by many researchers that a scheme which adapts the number of descriptions to different loss scenarios can be of great benefit. Another interesting problem is how to combine the principles of multiple description coding with increasingly popular redundant signal expansions, since they are a natural candidate for MDC. Finally, our goal is to design a simple and efficient multiple description video coding scheme which uses the error resilience tools offered by the latest video coding standard, H.264/AVC.

We first address the generation of an arbitrary number of descriptions with the multiple description scalar quantization technique. Unlike existing solutions, whose complexity increases drastically as the number of descriptions grows, our solution remains very simple and is easily extendable to any number of descriptions. We show how the tradeoff between distortions can be easily controlled with very few parameters in our scheme. Finally, given the probability of losing a description and the total bitrate, we find the optimal number of descriptions which minimizes the average distortion, taken as the sum of distortions weighted by the corresponding probabilities.

Next, we address the multiple description coding problem with redundant dictionaries of functions, called atoms. Such dictionaries contain inherent redundancy, which can be efficiently exploited for MDC purposes. To do so, we cluster similar atoms together and represent each group by a molecule, taken as a weighted sum of the atoms in its cluster. Once a molecule is chosen as a good candidate for the signal representation, its children are distributed to different descriptions. To generate a description, we project the signal onto its set of chosen atoms. This gives us the sets of coefficients, which have to be quantized before transmission. To this end, we propose an adaptive quantization strategy which takes into account the importance of each atom, the properties of the dictionary and the expected loss probability. We apply these principles to an image communication scenario, where we use a modified version of the matching pursuit algorithm to extract the most important information about an image at the level of molecules, and the less important candidates at the level of atoms. The redundancy in our scheme is controlled by the number of descriptions and the number of elements taken from the level of molecules.
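To make the molecule/atom mechanism more concrete, the following is a minimal Python sketch of the description generation step only. It assumes a toy random dictionary, fixed groups of consecutive atoms playing the role of molecules, two descriptions, a matching-pursuit-like greedy selection of molecules, and a simple round-robin assignment of each molecule's children; all of these choices and parameter values are illustrative assumptions rather than the actual dictionary, clustering or algorithm used in the thesis, and the adaptive quantization of the coefficients is omitted.

    # Illustrative sketch (not the thesis algorithm): molecule-based splitting of a
    # redundant dictionary across several descriptions, followed by per-description
    # projection of the signal onto the chosen atoms.
    import numpy as np

    rng = np.random.default_rng(0)

    signal_len, num_atoms, atoms_per_molecule = 64, 48, 4
    num_descriptions = 2          # assumed number of descriptions
    num_molecules_used = 6        # controls redundancy together with num_descriptions

    # Redundant dictionary of unit-norm atoms (columns).
    atoms = rng.standard_normal((signal_len, num_atoms))
    atoms /= np.linalg.norm(atoms, axis=0)

    # "Molecules": here simply averages of fixed groups of (assumed similar) atoms.
    groups = [list(range(i, i + atoms_per_molecule))
              for i in range(0, num_atoms, atoms_per_molecule)]
    molecules = np.stack([atoms[:, g].mean(axis=1) for g in groups], axis=1)
    molecules /= np.linalg.norm(molecules, axis=0)

    signal = rng.standard_normal(signal_len)

    # Greedy (matching-pursuit-like) selection of the best molecules.
    residual = signal.copy()
    chosen_groups = []
    for _ in range(num_molecules_used):
        scores = np.abs(molecules.T @ residual)
        best = int(np.argmax(scores))
        chosen_groups.append(groups[best])
        residual -= (molecules[:, best] @ residual) * molecules[:, best]

    # Distribute the children (atoms) of each chosen molecule over the descriptions.
    description_atoms = [[] for _ in range(num_descriptions)]
    for g in chosen_groups:
        for k, atom_index in enumerate(g):
            description_atoms[k % num_descriptions].append(atom_index)

    # Each description projects the signal onto its own set of atoms
    # (least-squares projection); its coefficients would then be quantized and sent.
    for d, idx in enumerate(description_atoms):
        A = atoms[:, idx]
        coeffs, *_ = np.linalg.lstsq(A, signal, rcond=None)
        rec = A @ coeffs
        print(f"description {d}: {len(idx)} atoms, "
              f"SNR = {10 * np.log10(np.sum(signal**2) / np.sum((signal - rec)**2)):.1f} dB")

In this toy setup, each description carries a complementary subset of the children of the same molecules, which is what allows a coarse reconstruction from a single description and a refined one when both are received.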
Finally, we propose a standard-compatible two-description video coding scheme which uses redundant pictures, an error resilience tool included in H.264/AVC, to improve robustness to losses. In our implementation, redundant pictures are coarse versions of the primary pictures and are used to replace their possibly lost parts. If a primary picture is correctly received, its redundant version is simply discarded by the decoder. We propose a distortion model which, given the total bitrate and the network loss rate, tells us how to split the total rate between primary and redundant pictures such that the average distortion at the receiver is minimized. We show that at low loss rates it makes little sense to spend bits on redundant pictures, since the probability that they will be used as a replacement for primary pictures is low. As the loss rate increases, however, a good quality of the redundant pictures becomes more beneficial. Finally, we show how the reconstructed quality can be further improved if the reconstructions from both primary and redundant pictures are combined.
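As a purely numerical illustration of this rate-splitting question, the short Python sketch below uses hypothetical exponential distortion-rate curves and an independent-loss assumption, neither of which is the distortion model developed in the thesis. It only shows how, once such a model is available, the split of a total budget between primary and redundant pictures that minimizes the expected distortion E[D] = (1 - p) D_primary + p (1 - p) D_redundant + p^2 D_lost can be found by a simple search over candidate splits.

    # Illustrative sketch (hypothetical D(R) curves, not the thesis model): split a
    # total bitrate between primary and redundant pictures so that the expected
    # distortion at the receiver is minimized, for a given picture loss rate p.
    import numpy as np

    def distortion(rate_kbps, a=60.0, b=0.004):
        # Placeholder convex distortion-rate curve, D(R) = a * exp(-b * R).
        return a * np.exp(-b * rate_kbps)

    def expected_distortion(primary_rate, redundant_rate, p):
        # Assumes independent losses of primary and redundant pictures.
        # Primary received: decode the primary picture (redundant one is discarded).
        # Primary lost, redundant received: decode the coarse redundant picture.
        # Both lost: modeled here as decoding at zero rate.
        return ((1 - p) * distortion(primary_rate)
                + p * (1 - p) * distortion(redundant_rate)
                + p * p * distortion(0.0))

    total_rate = 1000.0  # kbps, assumed total budget
    for p in (0.01, 0.05, 0.1, 0.2):
        splits = np.linspace(0.0, 0.5, 501)  # fraction of the budget for redundant pictures
        costs = [expected_distortion(total_rate * (1 - s), total_rate * s, p) for s in splits]
        best = splits[int(np.argmin(costs))]
        print(f"loss rate {p:.2f}: spend {best:.1%} of the rate on redundant pictures")

With these placeholder numbers, the search allocates almost nothing to redundant pictures at a 1% loss rate and a considerably larger share of the budget at 20%, mirroring the qualitative behaviour described above.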
