Abstract

Since [years], Virtual Reality has been presented as a discipline that could potentially benefit many applications. The principle of Virtual Reality consists in stimulating the user's senses so as to give the impression of being in another place, which they can explore and interact with. Today, most virtual reality systems create realistic visual and auditory environments. However, interaction with these environments cannot be considered natural. Indeed, we often use devices such as a mouse, keyboard, or joystick to move within or manipulate them. These interaction paradigms are in fact metaphors that do not resemble reality. In some situations they are practical and efficient; however, their lack of intuitiveness sometimes makes them limited or simply ineffective. To overcome this, researchers can use haptic devices. These are designed to simulate what is commonly called the "sense of touch", which more specifically includes the tactile, pain, and thermal senses, as well as proprioception. Proprioception is the knowledge gained by perceiving the relative positions of the limbs of one's own body. In this thesis, we focus in particular on the simulation of proprioception. Such haptic devices offer two advantages. First, they can give the user more information about the nature of virtual objects (size, weight, surface finish, rigidity, etc.). Second, they can provide interaction paradigms that are closer to reality (three-dimensional interaction in a three-dimensional world). However, the mechanics of haptic devices are complex. Moreover, proprioception is a sense that covers the entire body, which is a rather large surface. For this reason, haptic devices usually apply force feedback to a very small portion of the body, such as a fingertip. In addition to this hardware constraint, haptic research also faces software constraints.
Indeed, a haptic application requires significant computing resources to perform collision detection, dynamic animation of objects, and force-feedback computation. Moreover, this must be done at a refresh rate much higher than that of the visualization in order to produce a convincing result. In the first part of this thesis, we propose to increase the realism and complexity of haptic applications. To achieve this goal, we use a state-of-the-art commercial device that can acquire the posture and position of both hands and apply forces to the fingertips and wrists. We propose techniques to calibrate such devices and improve their comfort so that they can be integrated into Virtual Environments. However, a two-handed haptic device is not without drawbacks: it is much more complicated to compute forces on two hand models than on a single point or fingertip. For this reason, we propose a framework to optimize this computation. With it, we can create Virtual Environments in which an object can be grasped and is dynamically animated by the laws of physics. When an object is seized with both hands, the haptic rendering engine realistically computes the forces on both exoskeletons. The efficiency of our rendering makes it possible to apply these techniques to complex environments containing a significant number of objects. However, existing visual Virtual Environments are much more detailed than those seen in common haptic applications. In this thesis, we aim to reduce this gap. One of the problems is that these high-quality environments usually do not include specific haptic object properties, such as mass or material. We therefore propose software that allows even non-professionals to quickly and easily add this information to an environment. Our results show that our haptic rendering engine does not suffer from the large number of objects; they demonstrate that we have an efficient framework for integrating a two-handed haptic interface into a generic virtual environment.

In the second part, we evaluate the potential of these kinds of Virtual Reality systems in more detail. While most applications can in theory take advantage of haptic devices, practice shows that this is not always the case. Indeed, with experience, some metaphorical interaction paradigms remain more powerful than realistic ones. We therefore present and study the integration of our two-handed haptic interface in a variety of applications. Evaluations show that, depending on the application, reproducing reality is not always appropriate: in teleoperation, for instance, simulating a virtual haptic steering wheel is less efficient than providing a force-gesture interface. On the other hand, in virtual learning, the power of two-handed haptic manipulation is fully exploited and offers great advantages over standard techniques.
