Abstract

In target tracking, fusing multi-modal sensor data under a power-performance trade-off is becoming increasingly important. Properly fusing multiple modalities can improve tracking performance while decreasing total power consumption. In this paper, we present a framework for tracking a target given joint acoustic and video observations from a co-located acoustic array and video camera. We demonstrate on field data that direction-of-arrival tracking improves significantly when video information is incorporated at time instants when the acoustic signal-to-noise ratio is low.
