Abstract

We study the spatial-temporal sampling of a linear diffusion field and show that it is possible to compensate for an insufficient spatial sampling density by oversampling in time. Our work is motivated by an issue often encountered in sensor network sampling: increasing the temporal sampling rate is usually easier and less expensive than increasing the spatial sampling density of the network. For the case of sampling a diffusion field, we show that, to achieve this trade-off between spatial and temporal sampling, the spatial arrangement of the sensors must satisfy certain conditions. We provide the precise relationships between the achievable reduction in spatial sampling density, the required temporal oversampling rate, the spatial arrangement of the sensors, and the bounds on the condition numbers of the resulting sampling and reconstruction procedures.
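To make the underlying mechanism concrete, the following is a minimal sketch of a one-dimensional linear diffusion field; the notation (f, mu, f-hat) is illustrative and not taken from the paper. In the spatial Fourier domain, diffusion attenuates high frequencies over time, so successive temporal samples taken at a fixed, spatially sparse set of sensors observe progressively low-pass-filtered versions of the initial field.

% Sketch: the heat equation governing a linear diffusion field
% (illustrative notation, not the paper's)
\[
  \frac{\partial f(x,t)}{\partial t}
    = \mu \, \frac{\partial^2 f(x,t)}{\partial x^2},
  \qquad \mu > 0 .
\]
% Taking the spatial Fourier transform decouples the PDE into an
% independent first-order ODE per spatial frequency \omega, giving
\[
  \hat{f}(\omega, t) = \hat{f}(\omega, 0)\, e^{-\mu \omega^2 t}.
\]
% Each later temporal sample thus sees the initial spectrum through a
% narrower Gaussian low-pass kernel; combining samples taken at times
% t_1 < t_2 < \dots can recover spatial information that a sparse
% sensor grid alone would alias.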
