Scalable sparse covariance estimation via self-concordance

We consider a class of convex minimization problems composed of a self-concordant function (such as the logdet function), a convex data-fidelity term h, and a regularizing, possibly non-smooth, function g. Problems of this type have recently attracted a great deal of interest, mainly due to their ubiquity in prominent applications. In this setting, where the gradient is only locally Lipschitz continuous, we analyze the convergence behavior of proximal Newton schemes while also allowing for inexact evaluations. We prove attractive convergence-rate guarantees and enhance state-of-the-art optimization schemes to accommodate these developments. Experimental results on sparse covariance estimation show the merits of our algorithm, both in recovery efficiency and in computational complexity.
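To make the composite model class concrete, the sketch below minimizes a well-known instance of "self-concordant smooth part + non-smooth regularizer": a graphical-lasso-style objective -logdet(Theta) + tr(S Theta) + lam*||Theta||_1. This is an illustrative assumption, not the paper's algorithm: it uses a plain first-order proximal scheme rather than the proximal Newton method analyzed in the paper, and the function names and safeguard are hypothetical.

```python
import numpy as np

def soft_threshold(X, t):
    # Entrywise proximal operator of t * ||.||_1.
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def prox_grad_logdet(S, lam, step=0.1, iters=200):
    # Illustrative proximal-gradient sketch (not the paper's proximal Newton
    # scheme) for: min_Theta  -logdet(Theta) + tr(S Theta) + lam * ||Theta||_1.
    # Gradient of the smooth part is S - inv(Theta).
    p = S.shape[0]
    Theta = np.eye(p)
    for _ in range(iters):
        grad = S - np.linalg.inv(Theta)
        Theta_next = soft_threshold(Theta - step * grad, step * lam)
        # Simple safeguard: damp the step if the iterate loses
        # positive definiteness (a hypothetical heuristic, not the
        # paper's line-search machinery).
        if np.linalg.eigvalsh(Theta_next).min() <= 1e-8:
            Theta_next = 0.5 * (Theta + Theta_next)
        Theta = Theta_next
    return Theta
```

For a diagonal input S = I, each diagonal entry of the solution satisfies -1/t + 1 + lam = 0, i.e. t = 1/(1 + lam), which gives a quick correctness check for the sketch.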

Published in:
Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence
Presented at:
Twenty-Eighth AAAI Conference on Artificial Intelligence, Quebec, Canada, July 27-31, 2014

Record created 2014-05-05, last modified 2018-03-17

