Title: Fast Automatic Smoothing for Generalized Additive Models
Authors: El-Bachir, Yousra; Davison, Anthony C.
Date issued: 2019-01-01
Date deposited: 2020-01-26
URI: https://infoscience.epfl.ch/handle/20.500.14299/164931
Web of Science ID: WOS:000506403100013
Type: text::journal::journal article::research article

Abstract: Generalized additive models (GAMs) are regression models wherein parameters of probability distributions depend on input variables through a sum of smooth functions, whose degrees of smoothness are selected by L-2 regularization. Such models have become the de facto standard nonlinear regression models when interpretability and flexibility are required, but reliable and fast methods for automatic smoothing in large data sets are still lacking. We develop a general methodology for automatically learning the optimal degree of L-2 regularization for GAMs using an empirical Bayes approach. The smooth functions are penalized by hyper-parameters that are learned simultaneously by maximization of a marginal likelihood using an approximate expectation-maximization algorithm. The latter involves a double Laplace approximation at the E-step, and leads to an efficient M-step. Empirical analysis shows that the resulting algorithm is numerically stable, faster than the best existing methods, and achieves state-of-the-art accuracy. For illustration, we apply it to an important and challenging problem in the analysis of extremal data.

Subject categories: Automation & Control Systems; Computer Science, Artificial Intelligence; Computer Science

Keywords: automatic L-2 regularization; empirical Bayes; expectation-maximization algorithm; generalized additive model; Laplace approximation; marginal maximum likelihood; frequency-distribution; maximum-likelihood; regression; selection; parameter; scale
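The core idea of the abstract — treating the L-2 smoothing penalty as a prior precision and choosing it by maximizing a marginal likelihood — can be sketched in a deliberately simplified setting. The example below is not the paper's algorithm: it uses a Gaussian response with a plain ridge penalty on a truncated-power spline basis, where the marginal likelihood is available in closed form, so no Laplace approximation or EM iteration is needed. The basis, penalty, and known noise variance are all illustrative assumptions.

```python
# Hedged sketch: empirical-Bayes selection of a single L2 smoothing
# parameter for a Gaussian penalized-spline regression, by maximizing
# the closed-form marginal likelihood.  (The paper's method handles
# general GAM likelihoods and several penalties via an approximate EM
# algorithm with a double Laplace approximation; this Gaussian toy
# case is exact and much simpler.)
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

# Simulated data: smooth signal plus noise (illustrative choice).
n = 200
x = np.linspace(0.0, 1.0, n)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=n)

# Truncated-power cubic spline basis (a common, simple choice).
knots = np.linspace(0.05, 0.95, 20)
B = np.column_stack(
    [np.ones(n), x] + [np.clip(x - k, 0.0, None) ** 3 for k in knots]
)
p = B.shape[1]

sigma2 = 0.3 ** 2  # noise variance assumed known, for simplicity

def neg_log_marginal(log_lam):
    """-log p(y | lambda), up to a constant, for the Bayesian ridge model
    beta ~ N(0, (sigma2 / lambda) I),  y | beta ~ N(B beta, sigma2 I),
    whose marginal is y ~ N(0, sigma2 (I + B B^T / lambda))."""
    lam = np.exp(log_lam)
    C = sigma2 * (np.eye(n) + B @ B.T / lam)
    _, logdet = np.linalg.slogdet(C)
    return 0.5 * (logdet + y @ np.linalg.solve(C, y))

res = minimize_scalar(neg_log_marginal, bounds=(-10.0, 10.0), method="bounded")
lam_hat = np.exp(res.x)

# Posterior mean of the coefficients = ridge fit at the learned lambda.
beta_hat = np.linalg.solve(B.T @ B + lam_hat * np.eye(p), B.T @ y)
fit = B @ beta_hat
print(f"learned lambda: {lam_hat:.4g}")
```

Optimizing over log(lambda) keeps the search unconstrained in scale; in a full GAM, one such hyper-parameter per smooth term would be learned jointly, which is where the EM formulation of the paper becomes valuable.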