This paper considers design methodologies for developing voice-enabled interfaces for tour-guide robots deployed at the Robotics Exposition of the Swiss National Exhibition (Expo.02). Human–robot voice communication presents new challenges for the design of fully autonomous mobile robots: interaction must be robot-initiated and must take place in a dynamic, adverse environment. We approached these general problems by developing a voice-enabled interface, tailored to the limited computational resources of a single on-board processor, that integrates smart speech signal acquisition, automatic speech recognition and synthesis, and a dialogue system into the multi-modal, multi-sensor interface of the Expo.02 tour-guide robot. We also focus on particular issues that had to be addressed in voice-based interaction when planning specific tasks and research experiments for Expo.02, where tour-guide robots interacted with hundreds of thousands of visitors over six months, seven days a week, ten hours per day.