Authors: Rosas, Fernando E.; Mediano, Pedro A. M.; Gastpar, Michael
Date issued: 2021-01-01
Record date: 2021-12-04
DOI: 10.1109/ITW46852.2021.9457579
Handle: https://infoscience.epfl.ch/handle/20.500.14299/183581
Web of Science: WOS:000713953900010
Title: Learning, compression, and leakage: Minimising classification error via meta-universal compression principles
Type: Conference paper
Keywords: supervised learning; universal compression; maximal leakage; normalised maximum likelihood; information

Abstract: Learning and compression are driven by the common aim of identifying and exploiting statistical regularities in data, which opens the door to fertile collaboration between the two areas. A promising family of compression techniques for learning scenarios is normalised maximum likelihood (NML) coding, which provides strong guarantees for the compression of small datasets, in contrast with more popular estimators whose guarantees hold only in the asymptotic limit. Here we consider an NML-based decision strategy for supervised classification problems, and show that it attains heuristic PAC learning when applied to a wide variety of models. Furthermore, we show that the misclassification rate of our method is upper bounded by the maximal leakage, a recently proposed metric quantifying the potential for data leakage in privacy-sensitive scenarios.
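Note: as background for the abstract, the two central quantities can be sketched from their standard definitions in the literature; the paper's exact formulation may differ. The NML distribution over a model class {p_theta} normalises the maximised likelihood by the Shtarkov sum, and the maximal leakage of Issa, Wagner and Kamath measures the worst-case multiplicative gain an adversary obtains about X from observing Y:

% Standard textbook definitions, not the paper's specific construction.
% NML distribution for a parametric model class {p_theta} over sequences x^n;
% the denominator is the Shtarkov sum, whose logarithm is the minimax regret.
\[
  p_{\mathrm{NML}}(x^n)
    = \frac{\max_{\theta} p_{\theta}(x^n)}
           {\sum_{z^n} \max_{\theta} p_{\theta}(z^n)}
\]
% Maximal leakage from X to Y (finite alphabets), as defined by Issa et al.:
\[
  \mathcal{L}(X \to Y)
    = \log \sum_{y} \max_{x \,:\, P_X(x) > 0} P_{Y \mid X}(y \mid x)
\]

Loosely, an NML-based decision strategy of the kind the abstract describes scores each candidate label by a maximised likelihood normalised across labels and predicts the highest-scoring one; the precise rule and its leakage-based error bound are given in the paper itself.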