Abstract

Activity recognition systems based on body-worn motion sensors degrade in performance during deployment and at run time because the sensors are likely to change (e.g. through displacement or rotation), as happens in many real-life scenarios (e.g. a mobile phone carried in a pocket). Existing approaches to robustness either sacrifice information (e.g. by using rotation-invariant features) or reduce the weight of the anomalous sensors at the classifier fusion stage (adaptive fusion), thereby ignoring data that may still be perfectly meaningful, albeit different from the training data. We instead propose to rebuild, through adaptation, the classifier models of the sensors that have changed position, using a two-step approach: first, an anomaly detection algorithm automatically identifies which sensors are delivering unexpected data; then a system self-training process is triggered, in which the remaining classifiers retrain the classifiers of the “anomalous” sensors. We demonstrate the benefit of this approach on a real activity recognition dataset comprising data from 8 sensors for locomotion recognition. The approach achieves accuracy similar to that of the upper baseline, obtained by retraining the anomalous classifiers on the new data.
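The two-step approach can be sketched in code. The following is a minimal, illustrative Python sketch, assuming windowed features per sensor, one classifier per sensor, scikit-learn's IsolationForest as the anomaly detector, and majority-vote fusion to generate pseudo-labels; these choices, and all names and thresholds (fit_system, detect_anomalous_sensors, ANOMALY_FRACTION, ...), are assumptions for illustration, not implementation details taken from the paper.

# Minimal sketch of the two-step approach under the assumptions stated above.
import numpy as np
from scipy.stats import mode
from sklearn.ensemble import IsolationForest, RandomForestClassifier

# Assumed threshold: flag a sensor if more than half of its run-time
# windows look anomalous with respect to its training data.
ANOMALY_FRACTION = 0.5

def fit_system(train_features, train_labels):
    """Train one classifier and one anomaly detector per sensor.
    train_features: dict sensor_id -> (n_windows, n_features) array
    train_labels:   (n_windows,) integer-encoded activity labels
    """
    classifiers, detectors = {}, {}
    for sid, X in train_features.items():
        classifiers[sid] = RandomForestClassifier(n_estimators=100).fit(X, train_labels)
        detectors[sid] = IsolationForest(random_state=0).fit(X)
    return classifiers, detectors

def detect_anomalous_sensors(detectors, runtime_features):
    """Step 1: flag sensors whose run-time data departs from the training data."""
    anomalous = set()
    for sid, X in runtime_features.items():
        # IsolationForest.predict returns -1 for outliers, +1 for inliers.
        outlier_rate = np.mean(detectors[sid].predict(X) == -1)
        if outlier_rate > ANOMALY_FRACTION:
            anomalous.add(sid)
    return anomalous

def self_train(classifiers, runtime_features, anomalous):
    """Step 2: the remaining classifiers pseudo-label the run-time windows,
    and each anomalous sensor's classifier is retrained on its own
    (changed) data using those pseudo-labels."""
    healthy = [sid for sid in classifiers if sid not in anomalous]
    votes = np.stack([classifiers[sid].predict(runtime_features[sid])
                      for sid in healthy])
    pseudo_labels = mode(votes, axis=0, keepdims=False).mode
    for sid in anomalous:
        classifiers[sid] = RandomForestClassifier(n_estimators=100).fit(
            runtime_features[sid], pseudo_labels)
    return classifiers

In this sketch, the upper baseline mentioned in the abstract would correspond to replacing the pseudo-labels with the true labels of the run-time data.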
