Inference for Generalized Linear Models via Alternating Directions and Bethe Free Energy Minimization

Generalized Linear Models (GLMs), in which a random vector x is observed through a noisy, possibly nonlinear, function of a linear transform z = Ax, arise in a range of applications in nonlinear filtering and regression. Approximate Message Passing (AMP) methods, based on loopy belief propagation, are a promising class of approaches for approximate inference in these models: they are computationally simple and general, and they admit precise analyses with testable conditions for optimality for large i.i.d. transforms A. However, the algorithms can easily diverge for general transforms. This paper presents a convergent approach to the generalized AMP (GAMP) algorithm based on direct minimization of a large-system-limit approximation of the Bethe Free Energy (LSL-BFE). The proposed method uses a double-loop procedure: the outer loop successively linearizes the LSL-BFE, and the inner loop minimizes the linearized LSL-BFE using the Alternating Direction Method of Multipliers (ADMM). The resulting method, called ADMM-GAMP, is similar in structure to the original GAMP method but includes an additional least-squares minimization. It is shown that, for strictly convex and smooth penalties, ADMM-GAMP is guaranteed to converge to a local minimum of the LSL-BFE, thus providing a convergent alternative to GAMP that is stable under arbitrary transforms. Simulations demonstrate the robustness of the method for non-convex penalties as well.
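The abstract's inner loop relies on a standard ADMM iteration: alternate a least-squares step, a proximal (penalty) step, and a dual update. The sketch below is not the paper's ADMM-GAMP algorithm; it is a minimal, hypothetical illustration of that generic ADMM pattern on a simple GLM-style problem (a LASSO objective, with soft-thresholding standing in for a penalty's proximal operator). The function name `admm_lasso` and all parameter choices are assumptions for illustration only.

```python
import numpy as np

def admm_lasso(A, y, lam, rho=1.0, iters=200):
    """Illustrative ADMM for min_x 0.5*||A x - y||^2 + lam*||x||_1
    via the splitting x = z. Not the paper's ADMM-GAMP updates."""
    m, n = A.shape
    z = np.zeros(n)
    u = np.zeros(n)
    # Cache the factor for the repeated least-squares step, which plays
    # a role analogous to ADMM-GAMP's extra least-squares minimization.
    AtA = A.T @ A + rho * np.eye(n)
    Aty = A.T @ y
    for _ in range(iters):
        # 1) Least-squares (x-minimization) step.
        x = np.linalg.solve(AtA, Aty + rho * (z - u))
        # 2) Proximal step: soft-thresholding is the prox of the l1 penalty.
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        # 3) Dual (scaled multiplier) update.
        u = u + x - z
    return z

# Small synthetic example: noiseless sparse recovery.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
x_true = np.zeros(20)
x_true[:3] = [1.0, -2.0, 1.5]
y = A @ x_true
x_hat = admm_lasso(A, y, lam=0.1)
```

With a well-conditioned i.i.d. Gaussian transform, as in the regime where AMP-style methods are well understood, this iteration converges reliably; the paper's contribution concerns stability when the transform A is arbitrary.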

Published in:
Proceedings of the 2015 IEEE International Symposium on Information Theory (ISIT'15), Hong Kong, People's Republic of China, 1640–1644

Record created 2017-03-02, last modified 2018-12-03
