Randomized Low-Rank Approximation of Monotone Matrix Functions
This work is concerned with computing low-rank approximations of a matrix function f(A) for a large symmetric positive semidefinite matrix A, a task that arises in, e.g., statistical learning and inverse problems. The application of popular randomized methods, such as the randomized singular value decomposition or the Nyström approximation, to f(A) requires multiplying f(A) with a few random vectors. A significant disadvantage of such an approach is that matrix-vector products with f(A) are considerably more expensive than matrix-vector products with A, even when carried out only approximately via, e.g., the Lanczos method. In this work, we present and analyze funNyström, a simple and inexpensive method that constructs a low-rank approximation of f(A) directly from a Nyström approximation of A, completely bypassing the need for matrix-vector products with f(A). It is sensible to use funNyström whenever f is monotone and satisfies f(0) = 0. Under the stronger assumption that f is operator monotone, which includes the matrix square root A^{1/2} and the matrix logarithm log(I + A), we derive probabilistic bounds for the error in the Frobenius, nuclear, and operator norms. These bounds confirm the numerical observation that funNyström tends to return an approximation that compares well with the best low-rank approximation of f(A). Furthermore, compared to existing methods, funNyström requires significantly fewer matrix-vector products with A to obtain a low-rank approximation of f(A), without sacrificing accuracy or reliability. Our method is also of interest when estimating quantities associated with f(A), such as the trace or the diagonal entries of f(A). In particular, we propose and analyze funNyström++, a combination of funNyström with the recently developed Hutch++ method for trace estimation.
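To make the construction concrete, the following is a minimal Python sketch of the funNyström idea as the abstract describes it: form a rank-k Nyström approximation of A from k matrix-vector products with A, eigendecompose it, and apply f to the resulting eigenvalues. The function name fun_nystrom, the Gaussian sketch, and the stabilizing shift are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def fun_nystrom(A, f, k, seed=None):
    """Sketch of funNystrom (hypothetical implementation).

    Builds a rank-k Nystrom approximation A_nys = U @ diag(lam) @ U.T of
    the SPSD matrix A using only k matrix-vector products with A, then
    returns (U, f(lam)) so that f(A) is approximated by U @ diag(f(lam)) @ U.T.
    The requirement f(0) = 0 ensures that eigenvalues of A discarded by
    the Nystrom approximation map to (near-)zero eigenvalues of f(A).
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    Omega = rng.standard_normal((n, k))             # Gaussian sketch matrix
    Y = A @ Omega                                   # k matvecs; the only access to A
    nu = np.finfo(Y.dtype).eps * np.linalg.norm(Y)  # small shift for numerical stability
    Y_nu = Y + nu * Omega
    C = cholesky(Omega.T @ Y_nu, lower=False)       # factor the k-by-k core matrix
    B = solve_triangular(C, Y_nu.T, trans='T', lower=False).T  # B = Y_nu @ inv(C)
    U, sigma, _ = np.linalg.svd(B, full_matrices=False)
    lam = np.maximum(sigma**2 - nu, 0.0)            # eigenvalues of the Nystrom approx
    return U, f(lam)                                # f(A) ~ U @ diag(f(lam)) @ U.T

# Example usage (assumed interface): rank-20 approximation of log(I + A)
# U, flam = fun_nystrom(A, np.log1p, k=20)
```

A crude trace estimate for f(A) then falls out for free as flam.sum(); the funNyström++ variant mentioned in the abstract would additionally correct such an estimate with Hutchinson-style stochastic trace estimation on the residual, in the spirit of Hutch++.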
WOS:001040782500006 · 2023-01-01 · Vol. 44, No. 2, pp. 894–918 · REVIEWED