Abstract

We emphasize the need for input-dependent regularization in the context of conditional density models (also: discriminative models) like Gaussian process predictors. This can be achieved by a simple modification of the standard Bayesian data generation model underlying these techniques. Specifically, we allow the latent target function to be a priori dependent on the distribution of the input points. While the standard generation model results in robust predictors, data with missing labels is ignored, which can be wasteful if relevant prior knowledge is available. We show that discriminative models like Fisher kernel discriminants and Co-Training classifiers can be regarded as (approximate) Bayesian inference techniques under the modified generation model, and that the template Co-Training algorithm is related to a variant of the well-known Expectation-Maximization (EM) technique. We propose a template EM algorithm for the modified generation model which can be regarded as a generalization of Co-Training.
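
Since the abstract only summarizes the template EM algorithm, the sketch below is a deliberately simple stand-in, not the method proposed in the paper: semi-supervised EM for a two-class Gaussian mixture in NumPy. The function name semi_supervised_em and all modeling choices (two classes, shared unit-variance components, uniform initial mixing weights) are illustrative assumptions; the sketch only shows the behavior the modified generation model is designed to justify, namely that unlabeled inputs, rather than being ignored, regularize the fitted model.

import numpy as np

def semi_supervised_em(X_lab, y_lab, X_unl, n_iter=50):
    # EM for a two-class, unit-variance Gaussian mixture using both labeled
    # and unlabeled inputs. Labeled points keep hard (one-hot)
    # responsibilities; unlabeled points get soft responsibilities in the
    # E-step, so the input distribution of the unlabeled data shapes the fit.
    mu = np.stack([X_lab[y_lab == k].mean(axis=0) for k in (0, 1)])
    pi = np.array([0.5, 0.5])
    R_lab = np.eye(2)[y_lab]  # fixed responsibilities from the given labels
    for _ in range(n_iter):
        # E-step: class posteriors for the unlabeled points.
        log_p = -0.5 * ((X_unl[:, None, :] - mu[None]) ** 2).sum(-1) + np.log(pi)
        log_p -= log_p.max(axis=1, keepdims=True)  # stabilize before exp
        R_unl = np.exp(log_p)
        R_unl /= R_unl.sum(axis=1, keepdims=True)
        # M-step: means and mixing weights from all points, labeled or not.
        R = np.vstack([R_lab, R_unl])
        X = np.vstack([X_lab, X_unl])
        Nk = R.sum(axis=0)
        mu = (R.T @ X) / Nk[:, None]
        pi = Nk / Nk.sum()
    return mu, pi

# Toy usage: one labeled point per class, many unlabeled points.
X_lab = np.array([[0.0, 0.0], [4.0, 4.0]])
y_lab = np.array([0, 1])
X_unl = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 4.0])
mu, pi = semi_supervised_em(X_lab, y_lab, X_unl)

Clamping the labeled responsibilities while letting the M-step pool statistics from all points is the standard semi-supervised EM pattern; the paper's template EM plays an analogous role for conditional models under the modified generation model.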
