
Thesis

Autonomous multisensor tracking surveillance system

Important steps are currently being taken in the world of video, especially in video processing. Research projects around the world are tackling all kinds of tracking problems and the segmentation of video images; others work on feature extraction and image enhancement, also using combinations of multiple sensors. The goal is always to help machines interpret the content of the images. Biometric identification and verification is also becoming more and more important. Why should machines interpret video images? They can work 24 hours a day and they are efficient. This becomes very important for systems with a growing number of sensors: with a large number of sensors, coordination and control become more and more difficult, not to mention installation and calibration. Such a system needs a good installation concept to be affordable for the end user, and the maintenance effort needs to be reduced. This leads to integrating more functionality into the sensor units than image interpretation alone; the system also has to assist the installer and configurator during the commissioning of the plant.

After developing video surveillance equipment for 10 years, the leading thought was that there must be a way to get complex algorithms out of the laboratories and into the field. This is the reason why I developed the so-called sensorcam. It was built on the know-how of the SEON1100 minidome (developed for Sensile Systems), a very small pan-tilt dome camera, and the SDR2100, a digital video recorder for the commercial video surveillance market. The result is a prototype of a sensorcam network which is able to fill the gap between algorithm research and commercial products, at minimal additional cost for the gained functionality. The goal of this work was to develop a network of pan-tilt cameras that is easy to install, calibrate, configure and maintain at an affordable price.
To this end, each device has to detect its neighbors' relative positions and send that information to a central controller, which assembles it into a complete floorplan of the sensorcam network. If such a floorplan can be constructed automatically, the system can serve as a commercial base platform for algorithms written for multisensor networks, such as people tracking over multiple cameras [21] or tracking of targets in cluttered [16] or even occluded scenes [18]. During the development of the camera network, the sensor technologies were selected for their price/performance ratio. The main question was which technology to use for the first contact between neighboring sensorcam devices. Usually there is no direct line of sight, so direct visual recognition would not work; simple radio-frequency signals provide no geometrical information, and an indoor GPS would be far too expensive. Infrared communication, by contrast, is quite a common technology: an infrared signal source is inexpensive and works over a distance of several meters. The decision was therefore taken to implement an active infrared transmission as the first contact between neighboring units. The distance to the floor also has to be measured in order to reconstruct the positions in 3D. Of the technologies already available, the ultrasonic method, which measures the traveling time of an ultrasonic wave packet, was chosen: it is in common use in new cars for parking assistance and is very robust and cheap. Other methods (laser or infrared ranging) were considered too expensive and an overkill. The automatic setup and calibration is done in two steps. Once all sensorcams are powered up, every unit except unit number one sends its unique address via the infrared transmitter towards the floor. Unit one scans its neighborhood with its infrared receiver and creates the first part of the floorplan; then the next unit scans its neighborhood, and so forth.
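The two measurements above can be sketched in a few lines. This is a minimal illustration only: the function names, the assumed speed of sound, and the simple straight-down scanning geometry are assumptions for the sketch, not details taken from the thesis.

```python
import math

SPEED_OF_SOUND_M_S = 343.0  # dry air at ~20 degrees C; an assumed constant


def ultrasonic_floor_distance(echo_round_trip_s):
    """Distance to the floor from the round-trip time of an ultrasonic
    wave packet (halved because the echo travels down and back up)."""
    return SPEED_OF_SOUND_M_S * echo_round_trip_s / 2.0


def neighbor_floor_offset(pan_deg, tilt_deg, floor_distance_m):
    """Rough horizontal offset of a neighbor's infrared spot on the floor,
    seen by the scanning unit at angles (pan, tilt) measured from straight
    down.  Returns (dx, dy) in meters relative to the scanning unit."""
    r = floor_distance_m * math.tan(math.radians(tilt_deg))
    return (r * math.cos(math.radians(pan_deg)),
            r * math.sin(math.radians(pan_deg)))
```

A spot detected at 45 degrees of tilt by a unit mounted 2.5 m above the floor, for example, lies 2.5 m away horizontally, which is the kind of coarse coordinate the first floorplan is built from.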
At the end, all the collected data is transferred to the central controller, where the complete floorplan of the installation site is assembled. This map of the sensorcams is not yet very precise and may need to be improved for applications that rely on precise position information. To fulfill that requirement, the sensorcams are equipped with a movable laser pointer, and a second initialization step starts. The laser pointer of sensorcam number one is moved to the positions where it detected the neighboring units; these units point their camera module straight downwards, and the laser dot is detected in the image. The offset of the laser dot from the center of the image is used to correct the relative coordinates of the corresponding unit. This procedure is repeated for every sensorcam, and the correction data is sent to the central controller for the final floorplan. The indoor test installation showed that the approach works very well under certain constraints. It also revealed some problems which will need a closer look before the final product is developed. The initialization procedure can also be optimized: for a commercial system it would need to be shorter, which could be achieved by parallelizing the different detection procedures. For large systems in particular, this would reduce the setup time significantly. The demonstration units also verify that it is possible to add this functionality without increasing the cost of the final product by more than ten percent; the combination of cheap sensors and corresponding signal sources makes that possible. The image processing tasks are not so complex that they could not be performed by a powerful IP camera solution based on a programmable DSP. No image processing at all is needed for the detection procedure if the operator is satisfied with a rough map of the camera network. Such a sensorcam system has numerous advantages when multisensor algorithms are used.
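The laser-dot correction step can be sketched as follows, assuming a simple pinhole camera pointing straight down. The focal-length parameter and the function names are illustrative assumptions for the sketch; the thesis does not specify the camera model used.

```python
def laser_dot_correction(dx_px, dy_px, camera_height_m, focal_length_px):
    """Convert the laser dot's pixel offset from the image center into a
    metric correction on the floor plane.  With the camera looking straight
    down, one pixel spans camera_height_m / focal_length_px meters on the
    floor (pinhole model; an assumption for this sketch)."""
    meters_per_pixel = camera_height_m / focal_length_px
    return dx_px * meters_per_pixel, dy_px * meters_per_pixel


def refine_position(coarse_xy, dot_offset_px, camera_height_m, focal_length_px):
    """Apply the laser-dot correction to the coarse position obtained
    from the infrared neighbor scan."""
    cx, cy = laser_dot_correction(dot_offset_px[0], dot_offset_px[1],
                                  camera_height_m, focal_length_px)
    return coarse_xy[0] + cx, coarse_xy[1] + cy
```

For instance, a dot seen 100 pixels right of center by a camera 2.5 m above the floor with an assumed 500-pixel focal length shifts the coarse estimate by 0.5 m; the corrected coordinates are what each unit would report back to the central controller for the final floorplan.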
The question is then whether these algorithms have to run locally on each sensorcam or on a powerful central image processing unit. This depends strongly on the complexity of the algorithm, on whether it can be ported to a DSP platform, and on the development costs; it has to be evaluated together with the algorithm developers to find a cost-effective solution.

    Thèse École polytechnique fédérale de Lausanne EPFL, n° 3505 (2006)
    Section de microtechnique
    Faculté des sciences et techniques de l'ingénieur
    Laboratoire de production microtechnique 2

    Public defense: 2006-10-4

