
Abstract

Trainees currently learn surgical techniques directly on physical mock-ups, animals, or cadavers. For reasons of safety, cost, and ethics, not every possible strategy can be tried in pursuit of efficient learning. Virtual Reality (VR) based surgery training systems offer a complementary alternative in surgical training and education: they enable more effective and systematic training, provide objective assessment of technical competence, facilitate the teaching of rare cases, and allow future surgeons to be evaluated. To be a useful training tool, however, a surgery simulator must be both visually and physically realistic. This dissertation investigates training methods and fast modeling algorithms for surgery simulation. In contrast to the conventional training approach, we propose a strategy that decomposes complex surgical procedures into subtasks and employs multisensory learning cues. Given the computational constraints of real-time simulation, several physics-based approaches to modeling rigid and deformable objects, (self-)collision detection, and collision handling are introduced and implemented. The developed algorithms are first tested in the simulation of interventional radiology (IR) procedures. The simulation environment supports the most common procedures: guidewire and catheter navigation, contrast dye injection to visualize the vessels, balloon angioplasty, and stent placement. Visual details such as heartbeat and breathing are also modeled. Finally, we present a VR-based microsurgery simulator that applies and extends the above training strategy and algorithms. The training system demonstrates a complete vascular suturing procedure as well as a series of decomposed subtasks. Using a novel haptic forceps, users can learn fundamental microsurgery skills such as grasping, suture placement, needle insertion, and knot-tying.
