Abstract

With the flood of information available today, the question of how to deal with high-dimensional data and signals, which are cumbersome to handle, compute with, and store, is highly important. One approach to reducing this flood is to find sparse signal representations: a signal that is a linear combination of a few elements from a pool of building blocks can be reduced to the few coefficients of this representation. If these building blocks form a basis, finding the sparse representation poses no problem, but unfortunately not many signal classes are sparse in a basis. Taking more building blocks, i.e. a redundant dictionary, increases the chances of having sparse representations, but actually finding them becomes very hard. This has led to the development of numerous strategies and algorithms for finding sparse representations, of varying complexity and success rate.

The first part of the thesis deals with two of these algorithms, Thresholding and Matching Pursuit, from a more theoretical point of view. It is shown that both greedy algorithms can be improved with a small trick that does not increase their complexity, and that, when their average rather than their worst-case performance is considered, they compare quite well to more complex methods.

The second part of the thesis treats questions concerning the whole dictionary and its properties. First, it gives further evidence that sparsity is useful by extending the concept of compressed sensing to signals that are sparse not in a basis but in a redundant dictionary. To record a sparse signal it is then not necessary to take as many measurements as the dimension of the signal, but only a multiple of the number of dictionary elements used to represent it. Next we show that dictionaries can not only provide sparse representations, but that their geometric properties can also be exploited to model data structures. Here we explain how to model different subclasses of a signal class by incoherent subspaces, present an algorithm for learning a dictionary made of such subspaces, and then use it for the classification of faces. Finally we turn back to the sparse representation problem and study the fundamental question of how to find a dictionary that provides sparse representations. We take up the idea of learning a dictionary via minimisation of a continuous cost function and provide conditions guaranteeing that the decomposition of a collection of training signals into a dictionary and a coefficient matrix constitutes a local minimum. We also analyse statistically when these conditions are fulfilled with high probability.
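To fix notation for the sketches below (the symbols are illustrative and not taken from the thesis itself): a signal s in R^d is called S-sparse in a dictionary D, a d x K matrix whose K >= d columns d_k are the building blocks, if

    s = Dx = \sum_{k \in \Lambda} x_k d_k,   with |\Lambda| = S << d,

so that s is fully described by the S index/coefficient pairs (k, x_k) instead of by its d entries.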
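As a concrete illustration of the two greedy algorithms studied in the first part, here is a minimal sketch in Python/NumPy. It assumes the dictionary columns are unit-norm, and all names are hypothetical rather than taken from the thesis: Thresholding keeps the atoms best correlated with the signal in one shot, while Matching Pursuit peels off one atom at a time from a residual.

    import numpy as np

    def thresholding(signal, dictionary, k):
        # Correlate the signal with every (unit-norm) dictionary column.
        correlations = dictionary.T @ signal
        # Keep the k atoms whose correlations are largest in magnitude.
        support = np.argsort(np.abs(correlations))[-k:]
        # Fit the coefficients on the selected support by least squares.
        coeffs, *_ = np.linalg.lstsq(dictionary[:, support], signal, rcond=None)
        return support, coeffs

    def matching_pursuit(signal, dictionary, n_iter):
        residual = signal.copy()
        coeffs = np.zeros(dictionary.shape[1])
        for _ in range(n_iter):
            # Pick the atom most correlated with the current residual ...
            correlations = dictionary.T @ residual
            k = np.argmax(np.abs(correlations))
            # ... record its contribution and subtract it from the residual.
            coeffs[k] += correlations[k]
            residual -= correlations[k] * dictionary[:, k]
        return coeffs, residual

Both routines are cheap (essentially one matrix-vector product per selection step), which is what makes their average-case behaviour attractive in comparison to more complex methods.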
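The compressed sensing statement can be phrased in the same illustrative notation: instead of sampling s = Dx in R^d directly, one records y = Ms = (MD)x with an m x d measurement matrix M, and the aim is to recover the sparse coefficient vector x, and hence s, from a number of measurements m proportional to the sparsity S (typically up to logarithmic factors) rather than to the dimension d.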
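The learning problem of the final part can likewise be sketched in this notation: collecting N training signals as the columns of a matrix Y, one looks for a decomposition Y ≈ DX into a dictionary D and a sparse coefficient matrix X, for instance by minimising a continuous cost such as

    min_{D, X} \|Y - DX\|_F^2   subject to the columns of X being sparse,

where the conditions provided in the thesis guarantee that a given pair (D, X) constitutes a local minimum. The concrete cost written here is a common example, not necessarily the one analysed in the thesis.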