Fast hard thresholding with Nesterov's gradient method

We provide an algorithmic framework for structured sparse recovery which unifies combinatorial optimization with the non-smooth convex optimization framework of Nesterov [1, 2]. Our algorithm, dubbed Nesterov iterative hard thresholding (NIHT), is similar in spirit to algebraic pursuits (ALPS) [3]: we use the gradient information in the convex data-error objective to navigate over the nonconvex set of structured sparse signals. While ALPS features a priori approximation guarantees, we were only able to provide an online approximation guarantee for NIHT, i.e., the guarantee depends on the actual execution of the algorithm. Experiments nevertheless show that NIHT can empirically outperform ALPS and other state-of-the-art convex optimization-based algorithms in sparse recovery.
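For concreteness, below is a minimal Python sketch of the iteration the abstract describes: gradient steps on a least-squares data-error objective, combined with hard thresholding onto the sparse set and Nesterov-style extrapolation. The least-squares objective, plain s-sparsity as the structured model, and the fixed step-size rule are illustrative assumptions, not the paper's exact method.

import numpy as np

def hard_threshold(x, s):
    # Keep the s largest-magnitude entries of x, zero out the rest
    # (projection onto the set of s-sparse vectors).
    z = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    z[idx] = x[idx]
    return z

def niht(A, y, s, n_iter=500, step=None):
    # Sketch of Nesterov iterative hard thresholding for
    # min ||y - A x||_2^2 over s-sparse x.
    m, n = A.shape
    if step is None:
        # Conservative fixed step 1 / ||A||_2^2; the paper may well use
        # an adaptive rule or line search instead (an assumption here).
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(n)
    v = x.copy()   # momentum (look-ahead) point
    t = 1.0        # Nesterov extrapolation sequence
    for _ in range(n_iter):
        grad = A.T @ (A @ v - y)                     # gradient at the look-ahead point
        x_new = hard_threshold(v - step * grad, s)   # combinatorial projection
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        v = x_new + ((t - 1.0) / t_new) * (x_new - x)  # extrapolation step
        x, t = x_new, t_new
    return x

# Example: recover a 10-sparse signal from 80 Gaussian measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200)) / np.sqrt(80)
x0 = np.zeros(200)
x0[rng.choice(200, 10, replace=False)] = rng.standard_normal(10)
y = A @ x0
x_hat = niht(A, y, s=10)
print("relative error:", np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))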


Presented at:
Advances in Neural Information Processing Systems (NIPS) Workshops, Whistler, Canada, December 2010
Year:
2010