000207540 001__ 207540
000207540 005__ 20180317092058.0
000207540 022__ $$a1053-587X
000207540 0247_ $$2doi$$a10.1109/TSP.2016.2634546
000207540 02470 $$2ISI$$a000393753400021
000207540 037__ $$aARTICLE
000207540 245__ $$aMulti-task additive models with shared transfer functions based on dictionary learning
000207540 260__ $$aPiscataway$$bInstitute of Electrical and Electronics Engineers$$c2017
000207540 269__ $$a2017
000207540 300__ $$a14
000207540 336__ $$aJournal Articles
000207540 520__ $$aAdditive models form a widely used class of regression models which represent the relation between covariates and response variables as the sum of low-dimensional transfer functions. Besides flexibility and accuracy, a key benefit of these models is their interpretability: the transfer functions provide visual means for inspecting the models and identifying domain-specific relations between inputs and outputs. However, in large-scale problems involving the prediction of many related tasks, learning additive models independently results in a loss of model interpretability, and can cause overfitting when training data is scarce. We introduce a novel multi-task learning approach which provides a corpus of accurate and interpretable additive models for a large number of related forecasting tasks. Our key idea is to share transfer functions across models in order to reduce the model complexity and ease the exploration of the corpus. We establish a connection with sparse dictionary learning and propose a new efficient fitting algorithm which alternates between sparse coding and transfer function updates. The former step is solved via an extension of Orthogonal Matching Pursuit, whose properties are analyzed using a novel recovery condition which extends existing results in the literature. The latter step is addressed using a traditional dictionary update rule. Experiments on real-world data demonstrate that our approach compares favorably to baseline methods while yielding an interpretable corpus of models, revealing structure among the individual tasks, and being more robust when training data is scarce. Our framework therefore extends the well-known benefits of additive models to common regression settings possibly involving thousands of tasks.
000207540 6531_ $$aAdditive models
000207540 6531_ $$anonparametric regression
000207540 6531_ $$adictionary learning
000207540 6531_ $$asparse representations
000207540 6531_ $$amulti-task learning
000207540 700__ $$0246320$$aFawzi, Alhussein$$g203034
000207540 700__ $$aSinn, Mathieu
000207540 700__ $$0241061$$aFrossard, Pascal$$g101475
000207540 773__ $$j65$$k5$$q1352-1365$$tIEEE Transactions on Signal Processing
000207540 8564_ $$s567456$$uhttps://infoscience.epfl.ch/record/207540/files/double.pdf$$yPreprint$$zPreprint
000207540 909CO $$ooai:infoscience.tind.io:207540$$particle$$pSTI
000207540 909C0 $$0252393$$pLTS4$$xU10851
000207540 917Z8 $$x203034
000207540 937__ $$aEPFL-ARTICLE-207540
000207540 973__ $$aEPFL$$rREVIEWED$$sPUBLISHED
000207540 980__ $$aARTICLE
000207540 981__ $$a225414