Abstract

There are two primary approaches to the behavioural animation of an Autonomous Virtual Agent (AVA). The first, the behavioural model, defines how the AVA reacts to the current state of its environment. In the second, the cognitive model, the AVA uses a thought process that allows it to deliberate over its possible actions. Despite the success of these approaches in several domains, they have two notable limitations, which we address in this thesis. First, cognitive models are traditionally very slow to execute, because a tree search over the mapping states → actions must be performed. As a consequence, an AVA can only make sub-optimal decisions, and the number of AVAs that can be used simultaneously in real time is limited. These constraints restrict cognitive models to a small set of candidate actions. Second, cognitive and behavioural models can act unexpectedly, producing undesirable behaviour in certain regions of the state space, because it may be impossible to test them exhaustively over the entire state space, especially when that space is continuous. This is worrisome for end-user applications involving AVAs, such as training simulators for cars and aircraft.

Our contributions include the design of novel learning methods for approximating behavioural and cognitive models. These methods address the problem of input selection with the help of a novel architecture, ALifeE, which integrates virtual sensors and perception, regardless of the machine learning technique utilized. The input dimensionality must be kept as small as possible because of the "curse of dimensionality", well known in machine learning; ALifeE thus simplifies and speeds up this part of the design process for the designer.
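To make the distinction concrete, the following is a minimal sketch, not taken from the thesis, contrasting the two approaches on a toy one-dimensional world; all names (simulate, cost, behavioural_policy, cognitive_policy) are illustrative assumptions. The cognitive model's depth-limited search branches over every action at every step, which is exactly what makes such models slow in practice.

```python
ACTIONS = (-1, 0, 1)  # move left, stay, move right

def simulate(state, action):
    """Hypothetical environment model: apply an action to a state."""
    agent, target = state
    return (agent + action, target)

def cost(state):
    """Distance to the target; lower is better."""
    agent, target = state
    return abs(target - agent)

def behavioural_policy(state):
    """Behavioural model: react to the current state with a fixed rule."""
    agent, target = state
    if target > agent:
        return 1
    if target < agent:
        return -1
    return 0

def cognitive_policy(state, depth=3):
    """Cognitive model: deliberate by searching the states -> actions
    tree to a fixed depth (3**depth leaves) and keeping the action
    whose best reachable outcome is cheapest."""
    def search(s, d):
        if d == 0:
            return cost(s)
        return min(search(simulate(s, a), d - 1) for a in ACTIONS)
    return min(ACTIONS, key=lambda a: search(simulate(state, a), depth - 1))

state = (0, 4)
print(behavioural_policy(state))  # reacts immediately -> 1
print(cognitive_policy(state))    # deliberates, then also picks 1
```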
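The thesis's remedy for the cost of deliberation is to approximate the model with a learned function. Here is a hedged sketch of that idea, continuing the toy example above; the use of scikit-learn's decision tree and the grid-sampling scheme are stand-in assumptions, since the approach is agnostic to the machine learning technique.

```python
from sklearn.tree import DecisionTreeClassifier

# Offline: label a sample of states with the slow cognitive model.
states = [(a, t) for a in range(-5, 6) for t in range(-5, 6)]
actions = [cognitive_policy(s) for s in states]

# Online: the trained classifier maps a state to an action without
# any tree search, so many AVAs can run simultaneously in real time.
fast_policy = DecisionTreeClassifier().fit(states, actions)
print(fast_policy.predict([(0, 4)])[0])  # same action as the search
```

In a realistic setting the raw state is high-dimensional, which is where ALifeE's virtual sensors and input selection come in: keeping the learner's input as small as possible mitigates the curse of dimensionality that the abstract refers to.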
