Adversarially Robust Optimization with Gaussian Processes

In this paper, we consider the problem of Gaussian process (GP) optimization with an added robustness requirement: The returned point may be perturbed by an adversary, and we require the function value to remain as high as possible even after this perturbation. This problem is motivated by settings in which the underlying functions during optimization and implementation stages are different, or when one is interested in finding an entire region of good inputs rather than only a single point. We show that standard GP optimization algorithms do not exhibit the desired robustness properties, and provide a novel confidence-bound based algorithm STABLEOPT for this purpose. We rigorously establish the required number of samples for STABLEOPT to find a near-optimal point, and we complement this guarantee with an algorithm-independent lower bound. We experimentally demonstrate several potential applications of interest using real-world data sets, and we show that STABLEOPT consistently succeeds in finding a stable maximizer where several baseline methods fail.
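The abstract gives no pseudocode, so the following is only a hedged one-dimensional sketch of the max-min confidence-bound idea it describes: select the point whose worst-case upper confidence bound over a perturbation ball is highest, then let the adversary pick the point in that ball minimizing the lower confidence bound. The function names (`robust_step`, `gp_posterior`), the grid discretization, the kernel, and all parameter values are illustrative assumptions, not the paper's actual STABLEOPT implementation.

```python
import numpy as np

def rbf(a, b, ls=0.2):
    """Squared-exponential kernel on 1-D inputs (illustrative choice)."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    """GP posterior mean and standard deviation at query points Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = np.diag(rbf(Xs, Xs) - Ks.T @ np.linalg.solve(K, Ks))
    return mu, np.sqrt(np.maximum(var, 0.0))

def robust_step(X, y, grid, eps, beta=2.0):
    """One max-min step: choose x maximizing the worst-case UCB over an
    eps-ball around it, then the adversarial point in that ball that
    minimizes the LCB (a sketch of the confidence-bound idea above)."""
    mu, sd = gp_posterior(X, y, grid)
    ucb, lcb = mu + beta * sd, mu - beta * sd
    worst_ucb = np.array([ucb[np.abs(grid - x) <= eps].min() for x in grid])
    x_idx = int(np.argmax(worst_ucb))
    ball = np.where(np.abs(grid - grid[x_idx]) <= eps)[0]
    d_idx = ball[int(np.argmin(lcb[ball]))]
    return grid[x_idx], grid[d_idx]

# Tiny demo on a toy objective (illustrative only).
grid = np.linspace(0.0, 1.0, 101)
f = lambda x: np.sin(6.0 * x)
X = np.array([0.1, 0.5, 0.9])
x_next, delta_next = robust_step(X, f(X), grid, eps=0.05)
```

Note the contrast with standard GP-UCB, which would maximize the UCB at a single point; the inner minimum over the eps-ball is what encodes the robustness requirement.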


Published in:
Advances in Neural Information Processing Systems 31 (NIPS 2018)
Presented at:
32nd Conference on Neural Information Processing Systems (NIPS), Montréal, Canada, Dec 2-8, 2018
Year:
2018
Publisher:
Neural Information Processing Systems (NIPS), La Jolla, CA
ISSN:
1049-5258
ISBN:
*****************
 Record created 2019-06-18, last modified 2019-08-12

