Title: Voice Enabled Interface for Interactive Tour Guide Robots
Authors: Prodanov, P.; Drygajlo, A.; Ramel, G.; Meisser, M.; Siegwart, R.
Laboratory: LTS1
Published: 2002
Record date: 2006-12-07
DOI: 10.1109/IRDS.2002.1043939
Handle: https://infoscience.epfl.ch/handle/20.500.14299/237578
Web of Science: WOS:000179289100216
Type: text::conference output::conference proceedings::conference paper

Abstract: This paper considers design methodologies for developing voice-enabled interfaces for tour-guide robots to be deployed at the Robotics Exposition of the Swiss National Exhibition (Expo.02). Human-robot voice communication presents new challenges for the design of fully autonomous mobile robots: interaction must be robot-initiated and must take place in a dynamic, adverse environment. We address these problems for a voice-enabled interface tailored to the limited computational resources of a single on-board processor, integrating smart speech signal acquisition, automatic speech recognition and synthesis, and a dialogue system into the multi-modal, multi-sensor interface of the Expo tour-guide robot. We also focus on particular issues that must be addressed in voice-based interaction when planning specific tasks and research experiments for Expo.02, where tour-guide robots will interact with hundreds of thousands of visitors over six months, seven days a week, ten hours per day.
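
The abstract describes integrating speech acquisition, automatic speech recognition, synthesis, and a dialogue system into one on-board, robot-initiated interface. The sketch below is a minimal, hypothetical Python outline of such an interaction loop; all class and method names (SpeechAcquisition, DialogueManager, etc.) are illustrative stand-ins under assumed stub components and do not reflect the paper's actual implementation.

from dataclasses import dataclass
from typing import Optional


@dataclass
class DialogueState:
    exhibit: str = "entrance"
    turns: int = 0


class SpeechAcquisition:
    """Stand-in for smart speech signal acquisition (e.g. a microphone front end)."""
    def capture(self) -> bytes:
        return b"\x00" * 1600  # placeholder audio frame


class SpeechRecognizer:
    """Stand-in for the on-board automatic speech recognizer."""
    def recognize(self, audio: bytes) -> Optional[str]:
        return "tell me about this exhibit" if audio else None


class DialogueManager:
    """Robot-initiated dialogue: the robot opens each exchange, then reacts."""
    def open_turn(self, state: DialogueState) -> str:
        return f"Welcome to the {state.exhibit}. Would you like a short tour?"

    def respond(self, utterance: str, state: DialogueState) -> str:
        state.turns += 1
        if "exhibit" in utterance:
            return f"This is the {state.exhibit} area of Expo.02."
        return "Please follow me to the next exhibit."


class SpeechSynthesizer:
    """Stand-in for text-to-speech output."""
    def say(self, text: str) -> None:
        print(f"[robot] {text}")


def interaction_loop(max_turns: int = 2) -> None:
    acquisition, asr = SpeechAcquisition(), SpeechRecognizer()
    dialogue, tts = DialogueManager(), SpeechSynthesizer()
    state = DialogueState()

    tts.say(dialogue.open_turn(state))       # robot initiates the conversation
    for _ in range(max_turns):
        audio = acquisition.capture()         # acquire a speech segment
        utterance = asr.recognize(audio)      # decode it to text
        if utterance is None:                 # no usable speech in a noisy hall
            tts.say("Sorry, I did not hear you.")
            continue
        tts.say(dialogue.respond(utterance, state))


if __name__ == "__main__":
    interaction_loop()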