Laplacian Matrix Learning for Smooth Graph Signal Representation

The construction of a meaningful graph plays a crucial role in the emerging field of signal processing on graphs. In this paper, we address the problem of learning graph Laplacians, which is equivalent to learning graph topologies, such that the input data form graph signals with smooth variations on the resulting topology. We adopt a factor analysis model for the graph signals and impose a Gaussian probabilistic prior on the latent variables that control these graph signals. We show that the Gaussian prior leads to an efficient representation that favours the smoothness property of the graph signals, and propose an algorithm for learning graphs that enforces this property. Experiments demonstrate that the proposed framework can efficiently infer meaningful graph topologies from only the signal observations.
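The smoothness notion underlying this abstract is commonly measured by the Laplacian quadratic form x^T L x, which sums the weighted squared differences of a signal across the edges of the graph. The sketch below (a minimal illustration in Python/NumPy; the example graph and signals are hypothetical, not from the paper) shows how a slowly varying signal scores lower on this measure than a rapidly oscillating one.

```python
import numpy as np

# Adjacency matrix of a small 4-node path graph (weights chosen
# for illustration only; not data from the paper).
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Combinatorial graph Laplacian L = D - W, where D is the degree matrix.
D = np.diag(W.sum(axis=1))
L = D - W

def smoothness(x, L):
    """Laplacian quadratic form x^T L x: equals the sum over edges
    of w_ij * (x_i - x_j)^2, so small values indicate a signal that
    varies little between connected nodes."""
    return float(x @ L @ x)

x_smooth = np.array([1.0, 1.1, 1.2, 1.3])    # varies slowly along the path
x_rough  = np.array([1.0, -1.0, 1.0, -1.0])  # flips sign across every edge

print(smoothness(x_smooth, L))  # small value
print(smoothness(x_rough, L))   # much larger value
```

Graph learning approaches of the kind described in the abstract search for a Laplacian L under which the observed signals have small values of this quadratic form.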

Published in:
Proceedings of IEEE ICASSP
Presented at:
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brisbane, Australia, April, 2015

