Abstract

Successful navigation in a teleoperation scenario requires a good level of situational or environmental awareness. This paper presents the main features and capabilities of a new augmented virtuality-based system aimed at providing users with improved perception of the robot’s remote environment. To this end, a mixed-perspective exocentric display (ME3D) and a video-centric display (VC2D) are compared. Both interfaces were implemented on a mobile robot, and experiments were performed in a real working scenario. To assess this contribution, this work analyzes the teleoperation capability, performance, and human workload of users by means of the NASA-TLX (Task Load Index). The results show that participants experienced a reduction in driving workload and showed a high degree of acceptance of the proposed ME3D interface.

Details