Abstract

In this thesis, we present a Riemannian framework for the solution of high-dimensional optimization problems with an underlying low-rank tensor structure. Here, the high dimensionality refers to the size of the search space, while the cost function is scalar-valued. Such problems arise, for example, in the reconstruction of high-dimensional data sets and in the solution of parameter-dependent partial differential equations. As the degrees of freedom grow exponentially with the number of dimensions, the so-called curse of dimensionality, directly solving the optimization problem is computationally infeasible even for moderately high-dimensional problems. We constrain the optimization problem by assuming a low-rank tensor structure of the solution, which drastically reduces the degrees of freedom. We reformulate this constrained optimization as an optimization problem on a manifold, using the smooth embedded Riemannian manifold structure of the low-rank representations in the Tucker and tensor train formats. Exploiting this smooth structure, we derive efficient gradient-based optimization algorithms. In particular, we propose Riemannian conjugate gradient schemes for the solution of the tensor completion problem, in which we aim to reconstruct a high-dimensional data set for which the vast majority of entries are unknown. For the solution of linear systems, we show how to precondition the Riemannian gradient and leverage second-order information in an approximate Newton scheme. Finally, we describe a preconditioned alternating optimization scheme with subspace correction for the solution of high-dimensional symmetric eigenvalue problems.
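
As a rough illustration of the constrained formulation summarized above (the notation here is generic and not necessarily the thesis's own), the tensor completion problem restricted to a fixed-rank manifold can be stated as

\[
\min_{\mathcal{X} \in \mathcal{M}_{\mathbf{r}}} \; \frac{1}{2}\,\bigl\| P_\Omega(\mathcal{X}) - P_\Omega(\mathcal{A}) \bigr\|_F^2,
\qquad
\mathcal{M}_{\mathbf{r}} := \bigl\{ \mathcal{X} \in \mathbb{R}^{n_1 \times \cdots \times n_d} : \operatorname{rank}(\mathcal{X}) = \mathbf{r} \bigr\},
\]

where \(\mathcal{A}\) holds the known entries on the sampling set \(\Omega\), \(P_\Omega\) sets all entries outside \(\Omega\) to zero, and the rank is understood in the Tucker or tensor train sense. A Riemannian gradient scheme of the kind described in the abstract then projects the Euclidean gradient \(P_\Omega(\mathcal{X} - \mathcal{A})\) onto the tangent space of \(\mathcal{M}_{\mathbf{r}}\) at the current iterate and retracts the resulting step back onto the manifold.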
