Title: Sifting through the Noise: Universal First-Order Methods for Stochastic Variational Inequalities
Authors: Antonakopoulos, Kimon; Pethick, Thomas Michaelsen; Kavis, Ali; Mertikopoulos, Panayotis; Cevher, Volkan
Date: 2021-11-09 (2021)
Type: Conference paper (not in proceedings)
Keyword: ml-ai
URL: https://infoscience.epfl.ch/handle/20.500.14299/182945

Abstract:
We examine a flexible algorithmic framework for solving monotone variational inequalities in the presence of randomness and uncertainty. The proposed template encompasses a wide range of popular first-order methods, including dual averaging, dual extrapolation and optimistic gradient algorithms, both adaptive and non-adaptive. Our first result is that the algorithm achieves the optimal rates of convergence for cocoercive problems when the profile of the randomness is known to the optimizer: O(1/√T) for absolute noise profiles and O(1/T) for relative ones. Subsequently, we drop all prior knowledge requirements (the absolute/relative variance of the randomness affecting the problem, the operator's cocoercivity constant, etc.) and analyze an adaptive instance of the method that gracefully interpolates between the above rates, achieving O(1/√T) in the absolute case and O(1/T) in the relative case. To our knowledge, this is the first universality result of its kind in the literature and, somewhat surprisingly, it shows that an extra-gradient proxy step is not required to achieve optimal rates.
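As a rough illustration of the kind of method the abstract covers, the sketch below runs a stochastic optimistic gradient (past extra-gradient) iteration with an AdaGrad-style adaptive step-size on a synthetic cocoercive problem. This is a minimal sketch under stated assumptions, not the paper's exact algorithm: the operator F, the oracle's absolute/relative noise model, and the step-size rule gamma_t = gamma0 / sqrt(1 + sum_s ||g_s - g_{s-1}||^2) are illustrative choices, and the paper's adaptive policy may differ in detail.

```python
# Illustrative sketch (NOT the paper's exact method): adaptive optimistic
# gradient for an unconstrained monotone VI, i.e. finding x* with F(x*) = 0.
import numpy as np

rng = np.random.default_rng(0)

d = 20
M = rng.standard_normal((d, d))
A = M.T @ M / d + 0.1 * np.eye(d)  # symmetric positive definite => F is cocoercive
x_star = rng.standard_normal(d)
b = A @ x_star                     # chosen so that F(x_star) = 0

def F(x):
    return A @ x - b               # cocoercive linear operator

def oracle(x, sigma_abs, sigma_rel):
    """Stochastic oracle with additive (absolute) and multiplicative (relative) noise."""
    g = F(x)
    z = rng.standard_normal(d)
    return g + sigma_abs * z + sigma_rel * np.linalg.norm(g) * z

def adaptive_optimistic_gradient(T=5000, sigma_abs=0.0, sigma_rel=0.1, gamma0=0.5):
    x = np.zeros(d)
    g_prev = oracle(x, sigma_abs, sigma_rel)
    sq_sum = 0.0                   # accumulator driving the adaptive step-size
    for _ in range(T):
        gamma = gamma0 / np.sqrt(1.0 + sq_sum)
        # Optimistic step: extrapolate with the *past* gradient, so each
        # iteration needs a single oracle query (no extra-gradient proxy call).
        x_lead = x - gamma * g_prev
        g = oracle(x_lead, sigma_abs, sigma_rel)
        x = x - gamma * g
        sq_sum += float(np.linalg.norm(g - g_prev) ** 2)
        g_prev = g
    return x

x_hat = adaptive_optimistic_gradient()
print("residual ||F(x_hat)||:", np.linalg.norm(F(x_hat)))
```

Setting sigma_abs > 0 corresponds to the absolute-noise regime of the abstract (rate O(1/√T)), while sigma_rel > 0 with sigma_abs = 0 makes the noise vanish at the solution, the relative regime where O(1/T) is attainable; the adaptive step-size requires no knowledge of either variance or of the cocoercivity constant.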