Title: On low-rank approximability of solutions to high-dimensional operator equations and eigenvalue problems
Authors: Kressner, Daniel; Uschmajew, André
Published: 2016-04-01
DOI: 10.1016/j.laa.2015.12.016
Handle: https://infoscience.epfl.ch/handle/20.500.14299/125246
Web of Science: WOS:000370455800036
Keywords: Low-rank tensor approximation; High-dimensional equations; Singular value decay; Richardson iteration
Type: text::journal::journal article::research article

Abstract: Low-rank tensor approximation techniques attempt to mitigate the overwhelming complexity of linear algebra tasks arising from high-dimensional applications. In this work, we study the low-rank approximability of solutions to linear systems and eigenvalue problems on Hilbert spaces. Although this question is central to the success of all existing solvers based on low-rank tensor techniques, very few of the results available so far allow one to draw meaningful conclusions for higher dimensions. In this work, we develop a constructive framework to study low-rank approximability. One major assumption is that the involved linear operator admits a low-rank representation with respect to the chosen tensor format, a property that is known to hold in a number of applications. Simple conditions, which are shown to hold for a fairly general problem class, guarantee that our derived low-rank truncation error estimates do not deteriorate as the dimensionality increases. (C) 2015 Elsevier Inc. All rights reserved.
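
The following is a minimal numerical sketch, not taken from the paper, that illustrates the singular value decay the abstract alludes to: when the operator has a low-rank Kronecker structure (here a 2D discrete Laplacian) and the right-hand side has rank 1, the matricized solution of the linear system is well approximated at low rank. Grid size, variable names, and the chosen ranks are illustrative assumptions.

```python
import numpy as np

n = 64                                                   # 1D grid size (assumed)
A1 = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)    # 1D discrete Laplacian
I = np.eye(n)
A = np.kron(A1, I) + np.kron(I, A1)                      # 2D operator in rank-2 Kronecker form

b1 = np.random.rand(n)
b2 = np.random.rand(n)
b = np.kron(b1, b2)                                      # rank-1 right-hand side

x = np.linalg.solve(A, b)                                # reference solution of the linear system
X = x.reshape(n, n)                                      # matricization of the solution

s = np.linalg.svd(X, compute_uv=False)                   # singular values of the matricization
for r in (1, 2, 4, 8, 16):
    # relative error of the best rank-r truncation (Eckart-Young)
    err = np.sqrt(np.sum(s[r:] ** 2)) / np.sqrt(np.sum(s ** 2))
    print(f"rank {r:2d}: relative truncation error {err:.2e}")
```

In this setting the matricized solution solves a Sylvester equation with the 1D Laplacian, for which rapid singular value decay is a well-known effect; the printed truncation errors drop quickly with the rank, which is the kind of low-rank approximability the paper analyzes in the general high-dimensional case.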