Abstract

A mixture of experts consists of a gating network that learns to partition the input space and of expert networks assigned to these different regions. This paper focuses on the choice of the gating network. First, a localized gating network based on a mixture of linear latent variable models is proposed, extending a gating network introduced by Xu et al. that is based on Gaussian mixture models. It is shown that this localized mixture of experts model can be trained with the Expectation-Maximization algorithm. The localized model is then compared on a set of classification problems with mixtures of experts that have single- or multi-layer perceptrons as the gating network. It is found that the standard mixture of experts with a feed-forward network as the gating network often outperforms the other models.
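To make the architecture concrete, the sketch below shows a minimal mixture-of-experts forward pass with a single-layer (linear) gating network that softly partitions the input space and linear experts whose outputs are combined with the gate weights. The parametrizations, dimensions, and random weights are illustrative assumptions for this sketch only; they are not the localized gating models or experiments described in the paper.

```python
import numpy as np

# Minimal mixture-of-experts forward pass (illustrative sketch; all sizes are assumptions).
rng = np.random.default_rng(0)

n_experts, d_in, d_out = 3, 4, 2

# Single-layer (linear) gating network: maps an input to one score per expert.
W_gate = rng.normal(size=(n_experts, d_in))

# One linear expert per region of the input space.
W_experts = rng.normal(size=(n_experts, d_out, d_in))

def softmax(z):
    z = z - z.max()           # numerical stability
    e = np.exp(z)
    return e / e.sum()

def moe_forward(x):
    # Gating network output: a soft partition of the input space.
    g = softmax(W_gate @ x)                           # shape (n_experts,)
    # Each expert's prediction for this input.
    y_experts = np.stack([W @ x for W in W_experts])  # shape (n_experts, d_out)
    # Mixture output: gate-weighted combination of the expert outputs.
    return g @ y_experts, g

x = rng.normal(size=d_in)
y, gate_weights = moe_forward(x)
print("gate weights:", gate_weights)
print("mixture output:", y)
```

Replacing the linear gate with a multi-layer perceptron, a Gaussian mixture model, or a mixture of linear latent variable models changes only how the gate weights g are computed; the gate-weighted combination of expert outputs stays the same.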
