Title: MATHICSE Technical Report : Sparse Polynomial Chaos expansions using Variational Relevance Vector Machines
Authors: Tsilifis, Panagiotis; Papaioannou, Iason; Straub, Daniel; Nobile, Fabio
Group: MATHICSE Group
Date: 2020-01-09
DOI: 10.5075/epfl-MATHICSE-273648
Handle: https://infoscience.epfl.ch/handle/20.500.14299/164488
Type: text::working paper

Abstract: The challenges for non-intrusive Polynomial Chaos modeling lie in achieving computational efficiency and accuracy with a limited number of model simulations. These challenges can be addressed by enforcing sparsity in the series representation, retaining only the most important basis terms. In this work, we present a novel sparse Bayesian learning technique for obtaining sparse Polynomial Chaos expansions, which is based on a Relevance Vector Machine model and is trained using Variational Inference. The methodology shows great potential in high-dimensional, data-driven settings using relatively few data points, and achieves user-controlled sparsity levels comparable to those of other methods such as compressive sensing. The proposed approach is illustrated on two numerical examples: a synthetic response function explored for validation purposes, and a low-carbon steel plate with random Young's modulus and random loading, modeled with stochastic finite elements and 38 input random variables.

Keywords: Polynomial Chaos; sparse representations; variational inference; relevance vector machines; Kullback-Leibler divergence; hierarchical Bayesian model
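The abstract's core idea — keeping only the relevant terms of a Polynomial Chaos expansion via a Relevance Vector Machine — can be illustrated with a minimal sketch. The sketch below is not the paper's variational inference scheme; it uses the classical RVM fixed-point (type-II maximum likelihood) updates on a hypothetical one-dimensional synthetic response with a Gaussian input and a probabilists' Hermite polynomial basis, which is the natural PC basis for standard normal inputs. All problem sizes and tolerances are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

# Hypothetical synthetic response y = f(x) with Gaussian input x ~ N(0, 1)
rng = np.random.default_rng(0)
n, p = 60, 10                       # sample size and max polynomial degree (assumptions)
x = rng.standard_normal(n)
y = np.sin(x) + 0.3 * x**2 + 0.01 * rng.standard_normal(n)

# Design matrix of probabilists' Hermite polynomials He_0 .. He_p
Phi = np.column_stack([hermeval(x, np.eye(p + 1)[k]) for k in range(p + 1)])

# Classical RVM (sparse Bayesian learning) fixed-point updates:
# each coefficient gets its own prior precision alpha_k (ARD prior);
# irrelevant terms drive alpha_k to infinity and are pruned.
alpha = np.ones(p + 1)              # per-coefficient precisions
beta = 1.0 / np.var(y)              # noise precision
for _ in range(200):
    S = np.linalg.inv(np.diag(alpha) + beta * Phi.T @ Phi)  # posterior covariance
    mu = beta * S @ Phi.T @ y                               # posterior mean
    gamma = 1.0 - alpha * np.diag(S)                        # effective "well-determinedness"
    alpha = gamma / (mu**2 + 1e-12)
    beta = (n - gamma.sum()) / np.sum((y - Phi @ mu) ** 2)

kept = alpha < 1e4                  # retained (relevant) basis terms
mse = np.mean((y - Phi @ mu) ** 2)
```

After convergence, `kept` flags the basis terms whose precisions stayed finite, yielding a sparse expansion; the paper's contribution replaces these point-estimate updates with a variational posterior over the same hierarchical Bayesian model.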