Adapting Virtual Embodiment through Reinforcement Learning
In Virtual Reality, having a virtual body opens a wide range of possibilities, as the participant's avatar can be made to appear quite different from their real self depending on the targeted application (e.g., for perspective-taking). In addition, the system can partially manipulate the displayed avatar movement through some distortion to make the overall experience more enjoyable and effective (e.g., for training, exercising, or rehabilitation). Despite this potential, an excessive distortion may become noticeable and break the feeling of being embodied in the avatar. Past research has shown that individuals have a relatively high tolerance to movement distortions, but also that sensitivity to distortions varies greatly between individuals. In this paper, we propose a method that takes advantage of Reinforcement Learning (RL) to efficiently identify the magnitude of the maximum distortion that goes unnoticed by an individual (hereafter called the detection threshold). Through a controlled experiment with subjects, we show that the RL method finds a more robust detection threshold than the adaptive staircase method, i.e., it is better at preventing subjects from detecting the distortion when its amplitude is at or below the threshold. Finally, the associated majority-voting system enables the RL method to handle more noise in the forced-choice input than the adaptive staircase. This last feature is essential for future use with physiological signals, as these are even more susceptible to noise. It would then make it possible to calibrate embodiment individually and increase the effectiveness of the proposed interactions.
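For readers unfamiliar with the adaptive staircase baseline mentioned above, the following is a minimal, self-contained sketch of a simple 1-up/1-down staircase for estimating a detection threshold from forced-choice responses. The observer model, step size, and reversal counts are illustrative assumptions, not the paper's actual protocol or parameters.

```python
import random

def simulated_response(distortion, true_threshold=0.5, lapse_rate=0.0):
    """Hypothetical observer model (assumption, for illustration only):
    reports 'detected' when the distortion exceeds the observer's true
    threshold; lapse_rate flips the response to simulate noisy input."""
    detected = distortion > true_threshold
    if random.random() < lapse_rate:
        detected = not detected
    return detected

def staircase_threshold(start=1.0, step=0.05, reversals_needed=8,
                        true_threshold=0.5):
    """Simple 1-up/1-down adaptive staircase: decrease the distortion
    level after each detection, increase it after each miss, and
    estimate the threshold as the mean of the later reversal points."""
    level, direction = start, -1
    reversals = []
    while len(reversals) < reversals_needed:
        detected = simulated_response(level, true_threshold)
        new_direction = -1 if detected else +1
        if new_direction != direction:       # observer's answer flipped
            reversals.append(level)
            direction = new_direction
        level = max(0.0, level + new_direction * step)
    # Discard the first two reversals (warm-up) and average the rest.
    tail = reversals[2:]
    return sum(tail) / len(tail)

estimate = staircase_threshold()
```

With a noise-free observer the staircase oscillates around the true threshold, so the estimate lands within roughly one step size of it; with a non-zero `lapse_rate`, spurious reversals pull the estimate away, which is the noise-sensitivity the abstract contrasts against the RL method's majority voting.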
Files:
- TVCG2020_RL_Video.mp4 (Preprint, open access, CC BY, 75.66 MB, Video MP4, MD5: 4a587aebd419549ebe6d929356dcb0ad)
- TVCG__Adapting_Virtual_Embodiment_through_Reinforcement_Learning.pdf (Preprint, open access, CC BY, 8.15 MB, Adobe PDF, MD5: 9e1d70c2d590920d38083cee85c4aecf)