Authors: Bondaschi, Marco; Gastpar, Michael
Dates: 2025-07-08; 2025-07-08; 2025-07-07; 2024-07-07
DOI: 10.1109/isit57864.2024.10619270
URL: https://infoscience.epfl.ch/handle/20.500.14299/252042
Title: Batch Universal Prediction
Language: en
Type: conference paper (conference proceedings)

Abstract: Large language models (LLMs) have recently gained much popularity due to their surprising ability to generate human-like English sentences. LLMs are essentially predictors, estimating the probability of a sequence of words given the past. It is therefore natural to evaluate their performance from a universal prediction perspective. To do so fairly, we introduce the notion of batch regret as a modification of the classical average regret, and we study its asymptotic value for add-constant predictors, in the case of memoryless sources and first-order Markov sources.
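For readers unfamiliar with the predictors the abstract mentions: an add-constant (add-beta) predictor estimates the probability of the next symbol by adding a constant beta to each empirical count. The sketch below is illustrative only, not taken from the paper; the function name and the choice beta = 0.5 (the Krichevsky-Trofimov estimator) are assumptions for the example.

```python
from collections import Counter

def add_constant_prob(counts, alphabet_size, symbol, beta=0.5):
    """Add-constant estimate of P(symbol | past).

    counts: Counter of symbols seen so far; alphabet_size: |A|;
    beta: the added constant (0.5 gives the Krichevsky-Trofimov predictor).
    """
    n = sum(counts.values())
    return (counts.get(symbol, 0) + beta) / (n + beta * alphabet_size)

# Sequential use on a binary sequence: with no observations the estimate
# is uniform; each observed symbol then tilts the estimate toward it.
counts = Counter()
p0 = add_constant_prob(counts, 2, "a")   # 0.5 / 1.0 = 0.5
counts["a"] += 1
p1 = add_constant_prob(counts, 2, "a")   # 1.5 / 2.0 = 0.75
```

For a memoryless source, feeding each symbol of a sequence through such a predictor and summing the log-losses is exactly the setting in which classical average regret (and the paper's batch variant) is analyzed.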