Report

# More Efficiency in Multiple Kernel Learning

An efficient and general multiple kernel learning (MKL) algorithm was recently proposed by Sonnenburg et al. (2006). This approach has opened new perspectives, since it makes MKL tractable for large-scale problems by iteratively reusing existing support vector machine code. However, this iterative algorithm needs numerous iterations before converging to a reasonable solution. In this paper, we address the MKL problem through an adaptive 2-norm regularization formulation: weights on each kernel matrix are included in the standard SVM empirical risk minimization problem, with an $\ell_1$ constraint to encourage sparsity. We propose an algorithm for solving this problem and provide a new insight into MKL algorithms based on block 1-norm regularization by showing that the two approaches are equivalent. Experimental results show that the resulting algorithm converges rapidly and that its efficiency compares favorably to other MKL algorithms.
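To make the formulation concrete, the following is a sketch of one standard way to write an adaptive 2-norm MKL problem of the kind the abstract describes; the symbols ($d_m$, $f_m$, $\mathcal{H}_m$, $C$) are generic notation introduced here for illustration, not taken from the report itself:

$$
\begin{aligned}
\min_{\{f_m\},\, b,\, \xi,\, d} \quad & \frac{1}{2}\sum_{m=1}^{M} \frac{1}{d_m}\,\lVert f_m \rVert_{\mathcal{H}_m}^{2} \;+\; C \sum_{i=1}^{n} \xi_i \\
\text{s.t.} \quad & y_i \Big( \sum_{m=1}^{M} f_m(x_i) + b \Big) \ge 1 - \xi_i, \qquad \xi_i \ge 0, \\
& \sum_{m=1}^{M} d_m = 1, \qquad d_m \ge 0 .
\end{aligned}
$$

Here each kernel $K_m$ carries a weight $d_m$, and the simplex ($\ell_1$) constraint on $d$ encourages a sparse kernel combination. Minimizing over $d$ with the other variables fixed gives $d_m \propto \lVert f_m \rVert_{\mathcal{H}_m}$, which is the usual route to showing equivalence with block 1-norm regularization.

Below is a minimal, hypothetical Python sketch of the wrapper idea the abstract alludes to: reuse an off-the-shelf SVM solver on the weighted kernel $\sum_m d_m K_m$ and alternate with a closed-form weight update. It is an illustration under the assumptions above, not the report's actual algorithm; the function name `mkl_alternate` and all parameters are invented for this example.

```python
import numpy as np
from sklearn.svm import SVC

def mkl_alternate(kernels, y, C=1.0, n_iter=20, tol=1e-4):
    """Toy alternating MKL sketch (illustrative only, not the report's algorithm).

    kernels : list of (n, n) precomputed Gram matrices K_m
    y       : labels in {-1, +1}
    Alternates between (i) a standard SVM trained on the weighted kernel
    sum_m d_m K_m and (ii) a closed-form update of the weights d, which
    tends to push some d_m toward zero (the l1-style sparsity effect).
    """
    M = len(kernels)
    d = np.full(M, 1.0 / M)                           # start from uniform weights
    for _ in range(n_iter):
        K = sum(w * Km for w, Km in zip(d, kernels))  # combined kernel
        svm = SVC(C=C, kernel="precomputed").fit(K, y)
        sv = svm.support_                             # support-vector indices
        alpha = svm.dual_coef_.ravel()                # alpha_i * y_i on the SVs
        # ||f_m||_{H_m} = d_m * sqrt(alpha^T K_m alpha), restricted to the SVs
        norms = np.array([w * np.sqrt(alpha @ Km[np.ix_(sv, sv)] @ alpha)
                          for w, Km in zip(d, kernels)])
        if norms.sum() == 0:
            break
        d_new = norms / norms.sum()                   # optimal d for fixed f_m
        if np.abs(d_new - d).max() < tol:
            d = d_new
            break
        d = d_new
    # refit once with the final weights
    K = sum(w * Km for w, Km in zip(d, kernels))
    svm = SVC(C=C, kernel="precomputed").fit(K, y)
    return d, svm
```

At prediction time the same weights would be used to combine the test Gram matrices before calling `svm.predict`, so the learned sparsity in $d$ directly selects which kernels matter.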

Note:

To appear in *Proceedings of the 24th International Conference on Machine Learning*, Corvallis, OR, 2007.

#### Reference

• LIDIAP-REPORT-2007-047