Best of Both Worlds: Regret Minimization versus Minimax Play
In this paper, we investigate the existence of online learning algorithms with bandit feedback that simultaneously guarantee O(1) regret compared to a given comparator strategy and Õ(√T) regret compared to any fixed strategy, where T is the number of rounds. We provide the first affirmative answer to this question whenever the comparator strategy has full support, i.e., places positive probability on every action. In the context of zero-sum games with min-max value zero, in both normal and extensive form, we show that our results make it possible to risk at most an O(1) loss while being able to gain Ω(T) against exploitable opponents, thereby combining the benefits of no-regret algorithms and minimax play.
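To make the two simultaneous guarantees concrete, here is a minimal sketch in standard regret notation; the symbols x_t, ℓ_t, p★, and the strategy set Δ are illustrative assumptions and are not taken from the record itself:

```latex
% Hedged sketch: x_t denotes the learner's (mixed) strategy at round t, \ell_t
% the loss vector observed through bandit feedback, p^\star the given
% full-support comparator, and \Delta the set of fixed strategies.
% All symbols are illustrative, not part of the original abstract.
\[
  \mathrm{Reg}_T(p) \;=\; \sum_{t=1}^{T} \langle x_t - p,\, \ell_t \rangle ,
\]
\[
  \mathrm{Reg}_T(p^\star) \;\le\; O(1)
  \quad\text{and}\quad
  \max_{p \in \Delta} \mathrm{Reg}_T(p) \;\le\; \widetilde{O}\!\bigl(\sqrt{T}\bigr)
  \quad\text{simultaneously.}
\]
```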
2025-07
Proceedings of Machine Learning Research; 267
2640-3498
REVIEWED
EPFL
| Event name | Event acronym | Event place | Event date |
| --- | --- | --- | --- |
| International Conference on Machine Learning | ICML 2025 | Vancouver, Canada | 2025-07-13 - 2025-07-19 |