Batch Universal Prediction
Large language models (LLMs) have recently gained much popularity due to their surprising ability to generate human-like English sentences. LLMs are essentially predictors, estimating the probability of a sequence of words given the past. It is therefore natural to evaluate their performance from a universal prediction perspective. To do so fairly, we introduce the notion of batch regret as a modification of the classical average regret, and we study its asymptotic value for add-constant predictors, in the case of memoryless sources and first-order Markov sources.
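As background for the abstract, the following is a minimal illustrative sketch (not taken from the paper) of a classical add-constant (add-beta) sequential predictor over a finite alphabet, together with its cumulative log-loss regret against the best memoryless model fit in hindsight; the constant beta=0.5 corresponds to the well-known Krichevsky-Trofimov estimator.

```python
import math
from collections import Counter

def add_constant_log_loss(seq, alphabet, beta=0.5):
    """Cumulative log-loss of the add-beta sequential predictor:
    P(x | past) = (count(x) + beta) / (n + beta * |alphabet|)."""
    counts = {a: 0 for a in alphabet}
    loss = 0.0
    for i, x in enumerate(seq):
        p = (counts[x] + beta) / (i + beta * len(alphabet))
        loss += -math.log(p)
        counts[x] += 1
    return loss

def hindsight_log_loss(seq):
    """Log-loss of the best i.i.d. (memoryless) model fit in hindsight
    (maximum-likelihood empirical distribution)."""
    n = len(seq)
    counts = Counter(seq)
    return -sum(c * math.log(c / n) for c in counts.values())

seq = "aababbabaaab"
regret = add_constant_log_loss(seq, "ab") - hindsight_log_loss(seq)
print(regret)  # nonnegative per-sequence regret
```

For a mixture predictor such as add-1/2, this regret is always nonnegative and grows logarithmically in the sequence length; the batch regret studied in the paper modifies this classical per-sequence notion.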
École Polytechnique Fédérale de Lausanne
École Polytechnique Fédérale de Lausanne
2024-07-07
ISBN: 979-8-3503-8284-6
Pages: 3552-3557
REVIEWED
EPFL
| Event name | Event acronym | Event place | Event date |
| --- | --- | --- | --- |
| ISIT 2024 | | Athens, Greece | 2024-07-07 - 2024-07-12 |