Nonlinear data description with Principal Polynomial Analysis

Principal Component Analysis (PCA) has been widely used for manifold description and dimensionality reduction. The performance of PCA is hampered, however, when the data exhibit nonlinear feature relations. In this work, we propose a new framework for manifold learning based on a sequence of Principal Polynomials that capture the potentially nonlinear nature of the data. The proposed Principal Polynomial Analysis (PPA) is shown to generalize PCA. Unlike recently proposed nonlinear methods (e.g., spectral/kernel methods, projection pursuit techniques, and neural networks), PPA features are easily interpretable, and the method leads to a fully invertible transform, which is a desirable property for evaluating performance in dimensionality reduction. Successful performance of the proposed PPA is illustrated in dimensionality reduction, in compact representation of non-Gaussian image textures, and in multispectral image classification. © 2012 IEEE.
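To make the idea concrete, the sketch below shows one deflation step in the spirit described by the abstract: project the data onto a leading principal direction, then fit a polynomial that predicts the orthogonal residual from that projection, so the curved part of the manifold is captured rather than discarded. This is only an illustrative sketch under assumptions (the function name `ppa_step`, the polynomial degree, and the least-squares fitting choice are ours), not the authors' implementation.

```python
import numpy as np

def ppa_step(X, degree=2):
    """One illustrative PPA-like deflation step (assumed form, not the
    authors' code): PCA projection plus a polynomial fit of the residual."""
    Xc = X - X.mean(axis=0)
    # leading principal direction via SVD
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    v = Vt[0]
    alpha = Xc @ v                       # 1-D projection (the extracted feature)
    resid = Xc - np.outer(alpha, v)      # component orthogonal to v
    # least-squares polynomial fit of each residual coordinate on alpha
    A = np.vander(alpha, degree + 1)     # columns [alpha^d, ..., alpha, 1]
    W, *_ = np.linalg.lstsq(A, resid, rcond=None)
    resid_new = resid - A @ W            # subtract the predictable (curved) part
    return alpha, v, W, resid_new

# toy example: noisy parabola in 2-D, where plain PCA would leave a large residual
rng = np.random.default_rng(0)
t = rng.uniform(-1, 1, 200)
X = np.column_stack([t, t**2]) + 0.01 * rng.normal(size=(200, 2))

alpha, v, W, resid = ppa_step(X)
linear_resid = np.linalg.norm(X - X.mean(axis=0) - np.outer(alpha, v))
print("residual after PCA step:", linear_resid)
print("residual after polynomial fit:", np.linalg.norm(resid))
```

On this toy parabola, the polynomial absorbs most of what a single linear component leaves behind, which is the intuition behind chaining such steps into a sequence of principal polynomials; inverting the transform amounts to adding the fitted polynomial back and re-embedding along `v`.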

Published in:
2012 IEEE International Workshop on Machine Learning for Signal Processing, 1-6

Record created 2013-01-24, last modified 2018-12-03
