Abstract

Linear inverse problems with discrete data are equivalent to the estimation of the continuous-time input of a linear dynamical system from samples of its output. The solution obtained by means of regularization theory has the structure of a neural network similar to classical RBF networks. However, the basis functions depend in a nontrivial way on the specific linear operator to be inverted and the adopted regularization strategy. By resorting to the Bayesian interpretation of regularization, we show that such networks can be implemented rigorously and efficiently whenever the linear operator admits a state-space representation. An analytic expression is provided for the basis functions as well as for the entries of the matrix of the linear system used to compute the weights. Moreover, the weights can be computed in $O(N)$ operations by a suitable algorithm based on Kalman filtering. The results are illustrated through a deconvolution problem where the spontaneous secretory rate of Luteinizing Hormone (LH) of the hypophysis is reconstructed from measurements of plasma LH concentrations.
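The $O(N)$ claim rests on the fact that, once the operator has a state-space representation, the unknown input can be estimated by a single forward Kalman sweep. The following minimal sketch illustrates this idea for a scalar first-order system; it is not the paper's algorithm, and all numerical values (the system parameters `a`, `b`, the noise variances `r`, `q`, and the random-walk input prior) are illustrative assumptions.

```python
import numpy as np

def deconvolve_kalman(y, a=0.9, b=0.1, r=0.01, q=0.05):
    """Estimate the input u_k of x_{k+1} = a*x_k + b*u_k from noisy
    samples y_k = x_k + e_k, in O(N) via one Kalman-filter pass.

    The unknown input is modeled as a random walk and appended to the
    state, so the augmented state is z = [x, u]. All parameter values
    are illustrative, not taken from the paper.
    """
    F = np.array([[a, b], [0.0, 1.0]])  # augmented dynamics for [x, u]
    H = np.array([[1.0, 0.0]])          # only x is observed
    Q = np.diag([0.0, q])               # process noise drives u only
    z = np.zeros(2)
    P = np.eye(2)
    u_hat = np.empty(len(y))
    for k, yk in enumerate(y):          # single O(N) forward sweep
        # Prediction step
        z = F @ z
        P = F @ P @ F.T + Q
        # Measurement update
        S = H @ P @ H.T + r             # innovation variance
        K = (P @ H.T) / S               # Kalman gain
        z = z + (K * (yk - H @ z)).ravel()
        P = (np.eye(2) - K @ H) @ P
        u_hat[k] = z[1]                 # current input estimate
    return u_hat
```

Because each sample requires only a fixed number of small-matrix operations, the cost grows linearly with the number of data points, in contrast with the $O(N^3)$ cost of solving the regularization normal equations directly.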
