000192396 001__ 192396
000192396 005__ 20181001181855.0
000192396 037__ $$aARTICLE
000192396 245__ $$aAdaptive Sampling for Large Scale Boosting
000192396 269__ $$a2014
000192396 260__ $$c2014
000192396 336__ $$aJournal Articles
000192396 520__ $$aClassical Boosting algorithms, such as AdaBoost, build a strong classifier without concern for the computational cost. Some applications, in particular in computer vision, may involve millions of training examples and very large feature spaces. In such contexts, the training time of off-the-shelf Boosting algorithms may become prohibitive. Several methods exist to accelerate training, typically either by sampling the features or the examples used to train the weak learners. Even if some of these methods provide a guaranteed speed improvement, they offer no assurance of being more efficient than any other, given the same amount of time. The contributions of this paper are twofold: (1) a strategy to better deal with the increasingly common case where features come from multiple sources (e.g., color, shape, texture, etc. in the case of images) and therefore can be partitioned into meaningful subsets; (2) new algorithms which balance at every Boosting iteration the number of weak learners and the number of training examples to look at in order to maximize the expected loss reduction. Experiments in image classification and object recognition on four standard computer vision data sets show that the adaptive methods we propose outperform basic sampling and state-of-the-art bandit methods.
000192396 700__ $$0246031$$aDubout, Charles$$g160831
000192396 700__ $$aFleuret, Francois
000192396 773__ $$j15$$q1431-1453$$tJournal of Machine Learning Research
000192396 8564_ $$s548058$$uhttps://infoscience.epfl.ch/record/192396/files/Dubout_JMLR_2014.pdf$$yn/a$$zn/a
000192396 909C0 $$0252189$$pLIDIAP$$xU10381
000192396 909CO $$ooai:infoscience.tind.io:192396$$pSTI$$particle$$qGLOBAL_SET
000192396 917Z8 $$x148230
000192396 937__ $$aEPFL-ARTICLE-192396
000192396 970__ $$aDubout_JMLR_2014/LIDIAP
000192396 973__ $$aEPFL$$rREVIEWED$$sPUBLISHED
000192396 980__ $$aARTICLE