Information-Theoretic Regret Bounds for Gaussian Process Optimization in the Bandit Setting
Many applications require optimizing an unknown, noisy function that is expensive to evaluate. We formalize this task as a multi-armed bandit problem, where the payoff function is either sampled from a Gaussian process (GP) or has low norm in a reproducing kernel Hilbert space. We resolve the important open problem of deriving regret bounds for this setting, which imply novel convergence rates for GP optimization. We analyze an intuitive Gaussian process upper confidence bound (GP-UCB) algorithm, and bound its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design. Moreover, by bounding the latter in terms of operator spectra, we obtain explicit sublinear regret bounds for many commonly used covariance functions. In some important cases, our bounds have surprisingly weak dependence on the dimensionality. In our experiments on real sensor data, GP-UCB compares favorably with other heuristic GP optimization approaches.
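The abstract's selection rule can be illustrated concretely: at each round GP-UCB picks the point maximizing the posterior mean plus a scaled posterior standard deviation. Below is a minimal sketch, assuming an RBF kernel, a finite candidate set, and a hypothetical noisy objective f; the schedule beta_t = 2 log(|D| t^2 pi^2 / (6 delta)) follows the paper's finite-decision-set regime, while the kernel, length scale, noise variance, and all function names here are illustrative choices, not the authors' implementation.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=0.2):
    """Squared-exponential covariance between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_posterior(X_obs, y_obs, X_cand, noise_var=0.01):
    """Exact GP posterior mean and standard deviation at the candidate points."""
    K = rbf_kernel(X_obs, X_obs) + noise_var * np.eye(len(X_obs))
    k_star = rbf_kernel(X_obs, X_cand)            # shape (n_obs, n_cand)
    mu = k_star.T @ np.linalg.solve(K, y_obs)
    v = np.linalg.solve(K, k_star)
    var = rbf_kernel(X_cand, X_cand).diagonal() - (k_star * v).sum(0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def gp_ucb(f, X_cand, T=30, noise_var=0.01, delta=0.1):
    """Sequentially pick x_t = argmax_x mu_{t-1}(x) + sqrt(beta_t) * sigma_{t-1}(x)."""
    X_obs, y_obs = [], []
    for t in range(1, T + 1):
        if not X_obs:
            idx = np.random.randint(len(X_cand))  # no data yet: pick at random
        else:
            mu, sigma = gp_posterior(np.array(X_obs), np.array(y_obs),
                                     X_cand, noise_var)
            # beta_t from the finite-set analysis (illustrative constants)
            beta_t = 2 * np.log(len(X_cand) * t**2 * np.pi**2 / (6 * delta))
            idx = int(np.argmax(mu + np.sqrt(beta_t) * sigma))
        x = X_cand[idx]
        X_obs.append(x)
        y_obs.append(f(x) + np.sqrt(noise_var) * np.random.randn())  # noisy payoff
    return np.array(X_obs), np.array(y_obs)

if __name__ == "__main__":
    X_cand = np.linspace(0, 1, 200)[:, None]      # finite 1-D candidate set
    f = lambda x: np.sin(6 * x[0])                # hypothetical objective
    X_obs, y_obs = gp_ucb(f, X_cand)
    print("best observed value:", y_obs.max())
```

The rule trades off exploitation (high mean mu) against exploration (high uncertainty sigma); the regret analysis summarized in the abstract bounds the cumulative cost of this trade-off via the maximal information gain of the kernel.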
Keywords: bandit problems; Bayesian prediction; experimental design; Gaussian process (GP); information gain; nonparametric statistics; online learning; regret bound; statistical learning; global optimization; consistency; algorithm