Abstract

Most multi-camera systems assume a well-structured environment to detect and match objects across cameras: the cameras must be fixed and calibrated. In this work, a novel system is presented to detect and match any object in a network of uncalibrated fixed and mobile cameras. Objects are detected with the mobile cameras given only their observations from the fixed cameras; no training stage or training data is required. Detected objects are correctly matched across cameras, leading to a better understanding of the scene. A cascade of dense region descriptors is proposed to describe any object of interest. Various region descriptors are studied, such as color histograms, histograms of oriented gradients, Haar-wavelet responses, and covariance matrices of various features. The proposed descriptor outperforms existing approaches such as the scale-invariant feature transform (SIFT) and the speeded-up robust features (SURF). Moreover, a sparse scan of the image plane is proposed to reduce the search space of the detection and matching process, approaching near real-time performance. The approach is robust to changes in illumination, viewpoint, color distribution, and image quality, and it handles partial occlusions.
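
As an illustration of the covariance-matrix region descriptor mentioned in the abstract, the Python sketch below builds a small covariance descriptor from per-pixel features (position, intensity, and gradient magnitudes) and compares two regions with a log-eigenvalue dissimilarity. The feature set and metric are common choices for this family of descriptors and are assumptions here, not necessarily the exact configuration used in this work.

import numpy as np
from scipy.linalg import eigh

def covariance_descriptor(patch):
    """Covariance matrix of per-pixel features [x, y, I, |Ix|, |Iy|]
    computed over a grayscale patch (H x W array of floats)."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]              # pixel coordinates
    iy, ix = np.gradient(patch)              # vertical / horizontal derivatives
    feats = np.stack([xs, ys, patch, np.abs(ix), np.abs(iy)], axis=-1)
    f = feats.reshape(-1, feats.shape[-1])   # one feature vector per pixel
    return np.cov(f, rowvar=False)           # 5 x 5 symmetric positive-definite matrix

def covariance_distance(c1, c2, eps=1e-6):
    """Dissimilarity between two covariance descriptors:
    sqrt of the sum of squared log generalized eigenvalues."""
    d = c1.shape[0]
    vals = eigh(c1 + eps * np.eye(d), c2 + eps * np.eye(d), eigvals_only=True)
    return np.sqrt(np.sum(np.log(vals) ** 2))

# Example: compare a reference region observed by a fixed camera with a
# candidate region from a mobile camera (random data stands in for real patches).
rng = np.random.default_rng(0)
ref = covariance_descriptor(rng.random((32, 16)))
cand = covariance_descriptor(rng.random((32, 16)))
print(covariance_distance(ref, cand))

In a matching setting, candidate regions produced by the sparse scan of the mobile camera's image plane would each be scored against the reference descriptor in this way, with the lowest-distance candidate retained as the match.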
