This thesis presents methods for visualizing a large crowd of virtual human characters in real time on a standard personal computer. The virtual humans are distributed across rendering fidelities: humans close to the viewer are detailed and expressive, while those farther away are rendered with progressively less detail. Fidelity partitioning starts with fully dynamic geometric humans, which can perform a full suite of animations, including ones computed on demand. Dynamic geometry is followed by static geometry, which supports a constrained set of animations, while the last and farthest fidelity from the viewer is image-based and static. Various rendering accelerations applicable to each fidelity are used and compared, such as caching schemes, levels of detail, shaders and state sorting. Memory usage and artistic workload are reduced using template humans that are instantiated with individual variations in animation, clothing, facial appearance and color combinations. An innovative constraining method for randomized colors is presented. With the ability to render large crowds comes the need for novel interaction methods, such as the CrowdBrush, an intuitive spraycan interface for crowds. In the area of virtual therapy, picking of individual humans is presented using the GazeMap. Several other applications are shown to demonstrate the versatility and flexibility of the crowd rendering engine.
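The distance-based fidelity partitioning described above can be illustrated with a minimal C++ sketch. This is not the thesis' actual implementation; the enum values, threshold constants, and function name are hypothetical and chosen only to show how a crowd renderer might assign each human to one of the three fidelities.

    // Minimal sketch of distance-based fidelity selection.
    // The thresholds and names below are illustrative assumptions.

    enum class Fidelity {
        DynamicGeometry,   // full animation, possibly computed on demand
        StaticGeometry,    // pre-baked geometry, constrained animation set
        ImageBased         // static impostors for the farthest humans
    };

    // Assumed distance thresholds (in scene units) separating the fidelities.
    constexpr float kDynamicRange = 15.0f;
    constexpr float kStaticRange  = 60.0f;

    // Pick a rendering fidelity for one virtual human from its
    // distance to the viewer.
    Fidelity selectFidelity(float distanceToViewer) {
        if (distanceToViewer < kDynamicRange) return Fidelity::DynamicGeometry;
        if (distanceToViewer < kStaticRange)  return Fidelity::StaticGeometry;
        return Fidelity::ImageBased;
    }

In such a scheme, the thresholds would typically be tuned per scene so that only a small number of humans near the camera occupy the most expensive dynamic-geometry fidelity.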