Estimating Fusion Weights of a Multi-Camera Eye Tracking System by Leveraging User Calibration Data

Cross-ratio (CR)-based eye tracking has attracted considerable interest owing to its simple setup, yet its accuracy remains lower than that of model-based approaches. To improve estimation accuracy, a multi-camera setup can be exploited instead of the traditional single-camera configuration. The overall gaze point can then be computed by fusing the gaze information available from all cameras. This paper presents a real-time multi-camera eye tracking system in which gaze estimation relies on simple CR geometry. A novel weighted fusion method is proposed, which leverages the user calibration data to learn the fusion weights. Experimental results on real data show that the proposed method achieves a significant accuracy improvement over single-camera systems. The real-time system achieves an accuracy error of 0.82 degrees of visual angle with very little calibration data (5 points) under natural head movements, which is competitive with more complex model-based systems.
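The core idea of learning fusion weights from calibration data can be sketched as follows. This is a minimal illustration, not the paper's exact method: it assumes each camera yields a 2D gaze estimate per calibration point and fits a single global weight per camera by least squares against the known target positions; the function names and data layout are hypothetical.

```python
import numpy as np

def learn_fusion_weights(cam_estimates, targets):
    """Fit per-camera fusion weights by least squares (illustrative sketch).

    cam_estimates: array of shape (n_cams, n_points, 2), the gaze point
                   estimated by each camera at each calibration target.
    targets:       array of shape (n_points, 2), the known target positions.
    Returns an (n_cams,) weight vector w such that the weighted sum of
    per-camera estimates best matches the targets.
    """
    n_cams = cam_estimates.shape[0]
    # Arrange one column per camera: each row is one coordinate (x or y)
    # of one calibration point, so A @ w approximates the flattened targets.
    A = cam_estimates.transpose(1, 2, 0).reshape(-1, n_cams)
    b = targets.reshape(-1)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

def fuse(cam_estimates, w):
    """Fuse per-camera gaze estimates with the learned weights."""
    # Weighted sum over the camera axis -> shape (n_points, 2)
    return np.tensordot(w, cam_estimates, axes=(0, 0))
```

For example, with a 5-point calibration grid and two cameras whose estimates carry opposite horizontal biases, the fitted weights split evenly and the fused gaze cancels the bias. The paper's learned weights may be richer than this single global scalar per camera; the sketch only conveys why calibration data suffices to estimate them.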

Published in:
Proceedings of the Symposium on Eye Tracking Research & Applications
Presented at:
ACM Symposium on Eye Tracking Research & Applications, Charleston, SC, USA, March 14-17, 2016
New York: Association for Computing Machinery

Record created 2015-11-30, last modified 2018-03-17

