Estimating Fusion Weights of a Multi-Camera Eye Tracking System by Leveraging User Calibration Data
Cross-ratio (CR)-based eye tracking has attracted much interest due to its simple setup, yet its accuracy is lower than that of model-based approaches. To improve estimation accuracy, a multi-camera setup can be exploited instead of the traditional single-camera configuration. The overall gaze point can then be computed by fusing the gaze estimates available from all cameras. This paper presents a real-time multi-camera eye tracking system in which gaze estimation relies on simple CR geometry. A novel weighted fusion method is proposed that leverages the user calibration data to learn the fusion weights. Experiments conducted on real data show that the proposed method achieves a significant accuracy improvement over single-camera systems. The real-time system achieves an accuracy error of 0.82 degrees of visual angle with very little calibration data (5 points) under natural head movements, which is competitive with more complex model-based systems.
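The weighted-fusion idea can be illustrated with a minimal two-camera sketch. Everything below is an assumption for illustration only: the function names, the restriction to two cameras, and the closed-form least-squares weight are not the paper's actual algorithm, which learns fusion weights from the 5-point user calibration in its own way.

```python
# Hypothetical sketch: learn a single fusion weight w for a two-camera
# gaze tracker from calibration data, then fuse per-camera estimates.
# The closed-form least-squares solution here is an illustrative
# assumption, not the paper's method.

def learn_fusion_weight(gaze_cam1, gaze_cam2, targets):
    """Find w in [0, 1] minimizing sum_i ||w*g1_i + (1-w)*g2_i - t_i||^2.

    Closed form: with d_i = g1_i - g2_i and r_i = t_i - g2_i,
    w* = (sum_i d_i . r_i) / (sum_i d_i . d_i), clamped to [0, 1].
    """
    num = den = 0.0
    for g1, g2, t in zip(gaze_cam1, gaze_cam2, targets):
        d = (g1[0] - g2[0], g1[1] - g2[1])
        r = (t[0] - g2[0], t[1] - g2[1])
        num += d[0] * r[0] + d[1] * r[1]
        den += d[0] * d[0] + d[1] * d[1]
    return min(1.0, max(0.0, num / den)) if den else 0.5

def fuse(g1, g2, w):
    """Weighted fusion of two per-camera gaze estimates."""
    return (w * g1[0] + (1 - w) * g2[0], w * g1[1] + (1 - w) * g2[1])

# Synthetic 5-point calibration grid; the two cameras carry equal and
# opposite horizontal biases, so the learned fusion cancels them.
targets = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (0.5, 0.5)]
cam1 = [(x + 0.5, y) for x, y in targets]
cam2 = [(x - 0.5, y) for x, y in targets]
w = learn_fusion_weight(cam1, cam2, targets)
print(w)  # 0.5: each camera alone errs by 0.5; the fused estimate is exact
```

With more than two cameras the same idea generalizes to a weight vector solved by ordinary least squares over the calibration points.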
Preprint