Abstract

In this correspondence, we provide a transient analysis of an affinely constrained mixture method that adaptively combines the outputs of adaptive filters running in parallel on the same task. The affinely constrained mixture is adapted with a stochastic gradient update that minimizes the square of the prediction error. Although we carry out the transient analysis specifically for a combination of two equal-length adaptive filters learning a linear model from real-valued data, we also provide the final equations and the extensions needed to generalize the analysis to mixtures of more than two filters, to Newton-based updates for the mixture weights, to complex-valued data, and to unconstrained mixtures. The derivations are generic in that the constituent filters can be trained with any unbiased update, including the least-mean-squares (LMS) or recursive-least-squares (RLS) updates. The correspondence concludes with numerical examples and final remarks.
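As a rough illustration of the setup described above, the sketch below combines two LMS filters of equal length through an affine mixture y = λ·y1 + (1 − λ)·y2 and adapts λ with a stochastic gradient step on the squared prediction error. All numerical values (filter length, step sizes, noise level) are illustrative assumptions, not parameters from the analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear model: d[n] = w_true . u[n] + noise (assumed toy setup)
M = 5                                   # filter length (assumed)
N = 5000                                # number of samples (assumed)
w_true = rng.standard_normal(M)
U = rng.standard_normal((N, M))
d = U @ w_true + 0.1 * rng.standard_normal(N)

# Two constituent LMS filters with different step sizes (assumed values)
mu1, mu2 = 0.05, 0.005
w1 = np.zeros(M)
w2 = np.zeros(M)

# Affine mixture: weights lam and (1 - lam) sum to one, lam unconstrained
lam = 0.5
mu_lam = 0.1                            # mixture step size (assumed)

errs = []
for n in range(N):
    u = U[n]
    y1, y2 = w1 @ u, w2 @ u
    y = lam * y1 + (1.0 - lam) * y2     # affinely combined output
    e = d[n] - y                        # mixture prediction error
    # Stochastic gradient step on e^2 with respect to lam
    lam += mu_lam * e * (y1 - y2)
    # Constituent filters are trained independently on their own errors
    w1 += mu1 * (d[n] - y1) * u
    w2 += mu2 * (d[n] - y2) * u
    errs.append(e * e)

print("steady-state MSE estimate:", float(np.mean(errs[-500:])))
```

In this toy run the mixture tracks whichever constituent filter is currently better: early on λ drifts toward the fast filter, and near steady state it balances the two according to their excess errors, which is the qualitative behavior the transient analysis quantifies.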
