Infoscience
Lower Bounds on Regret for Noisy Gaussian Process Bandit Optimization

Scarlett, Jonathan • Bogunovic, Ilija • Cevher, Volkan
2017
Conference on Learning Theory (COLT)

In this paper, we consider the problem of sequentially optimizing a black-box function $f$ based on noisy samples and bandit feedback. We assume that $f$ is smooth in the sense of having a bounded norm in some reproducing kernel Hilbert space (RKHS), yielding a commonly-considered non-Bayesian form of Gaussian process bandit optimization. We provide algorithm-independent lower bounds on the simple regret, measuring the suboptimality of a single point reported after $T$ rounds, and on the cumulative regret, measuring the sum of regrets over the $T$ chosen points. For the isotropic squared-exponential kernel in $d$ dimensions, we find that an average simple regret of $\epsilon$ requires $T = \Omega\big(\frac{1}{\epsilon^2} (\log\frac{1}{\epsilon})^{d/2}\big)$, and the average cumulative regret is at least $\Omega\big( \sqrt{T(\log T)^{d/2}} \big)$, thus matching existing upper bounds up to the replacement of $d/2$ by $2d+O(1)$ in both cases. For the Matérn-$\nu$ kernel, we give analogous bounds of the form $\Omega\big( (\frac{1}{\epsilon})^{2+d/\nu}\big)$ and $\Omega\big( T^{\frac{\nu + d}{2\nu + d}} \big)$, and discuss the resulting gaps to the existing upper bounds.
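The two regret notions in the abstract can be illustrated with a toy sketch. Everything below is an illustrative assumption, not the paper's construction: the smooth test function `f`, the 1-D grid, and the naive uniform-sampling strategy merely stand in for an RKHS member and a bandit algorithm so that the definitions of simple and cumulative regret are concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # A smooth 1-D test function standing in for an RKHS member (assumption).
    return np.exp(-8.0 * (x - 0.3) ** 2)

T = 200          # number of rounds
sigma = 0.1      # noise standard deviation
x_grid = np.linspace(0.0, 1.0, 1001)
f_star = f(x_grid).max()                 # best attainable value on the grid

xs = rng.uniform(0.0, 1.0, size=T)       # queried points (naive strategy)
ys = f(xs) + sigma * rng.normal(size=T)  # noisy bandit feedback

# Cumulative regret: sum of per-round gaps f(x*) - f(x_t) over the T queries.
cumulative_regret = np.sum(f_star - f(xs))

# Simple regret: gap at the single point reported after T rounds
# (here: the query whose noisy observation was highest).
x_reported = xs[np.argmax(ys)]
simple_regret = f_star - f(x_reported)

print(f"cumulative regret: {cumulative_regret:.3f}")
print(f"simple regret:     {simple_regret:.3f}")
```

Since the reported point is one of the queried points and every per-round gap is non-negative, the simple regret never exceeds the cumulative regret; the paper's lower bounds quantify how fast either can shrink for any algorithm under the stated kernel assumptions.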

Type
conference paper not in proceedings
Author(s)
Scarlett, Jonathan  
Bogunovic, Ilija  
Cevher, Volkan
Date Issued
2017
Subjects
  • Gaussian processes
  • Bandits
  • Online optimization
  • Reproducing kernel Hilbert space
  • Lower bounds
  • Cumulative regret
  • Simple regret
  • Bayesian optimization
  • ml-ai

Editorial or Peer reviewed
REVIEWED

Written at
EPFL

EPFL units
LIONS  
Event name: Conference on Learning Theory (COLT)
Event place: Amsterdam, Netherlands
Event date: July 7-10, 2017

Available on Infoscience
May 31, 2017
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/138058