The main task of a voice-enabled tour-guide robot in a mass-exhibition setting is to engage visitors in dialogue and provide as much exhibit information as possible in a limited time. In managing such a dialogue, the key issue is extracting the user's (visitor's) goal or intention at each dialogue state. Under mass-exhibition conditions, uncooperative visitors and the limitations of speech recognition in noisy acoustic environments may jeopardize user goal identification. In this paper, we introduce sequential dialogue repair techniques that exploit the inherent multimodality of the tour-guide robot in order to reduce the risk of the resulting communication failures. Bayesian networks that fuse acoustic and non-acoustic modalities during user goal identification serve as input to graphical models known as decision networks. Decision networks allow dialogue repair sequences to be defined as actions, and provide a decision-theoretic, utility-based strategy for selecting among them. The benefits of the proposed repair strategies are demonstrated through experiments with the dialogue system of RoboX, a tour-guide robot successfully deployed at the Swiss National Exhibition (Expo.02).
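The utility-based action selection described above can be sketched as follows. This is a minimal illustration, not the RoboX implementation: the action names, goal posterior, and utility values are all invented for the example. The posterior over user goals stands in for the output of the Bayesian network fusing acoustic and non-acoustic modalities; the decision rule picks the repair action with the highest expected utility.

```python
# Hedged sketch of decision-theoretic repair-action selection.
# All actions, probabilities, and utilities below are illustrative
# assumptions, not values from the RoboX system.

# Candidate dialogue repair actions (hypothetical names).
REPAIR_ACTIONS = ["re_ask_speech", "switch_to_buttons", "confirm_top_goal"]

# Posterior over user goals, e.g. as produced by a Bayesian network
# fusing speech recognition with non-acoustic cues (made-up numbers).
goal_posterior = {"exhibit_A": 0.5, "exhibit_B": 0.3, "no_goal": 0.2}

# Utility table U(action, goal): assumed payoffs for illustration.
utility = {
    ("re_ask_speech", "exhibit_A"): 0.6,
    ("re_ask_speech", "exhibit_B"): 0.6,
    ("re_ask_speech", "no_goal"): 0.1,
    ("switch_to_buttons", "exhibit_A"): 0.7,
    ("switch_to_buttons", "exhibit_B"): 0.7,
    ("switch_to_buttons", "no_goal"): 0.5,
    ("confirm_top_goal", "exhibit_A"): 0.9,
    ("confirm_top_goal", "exhibit_B"): 0.2,
    ("confirm_top_goal", "no_goal"): 0.0,
}

def expected_utility(action, posterior, utility):
    """EU(a) = sum over goals g of P(g) * U(a, g)."""
    return sum(p * utility[(action, g)] for g, p in posterior.items())

def select_repair_action(posterior, actions, utility):
    """Choose the repair action maximizing expected utility."""
    return max(actions, key=lambda a: expected_utility(a, posterior, utility))

best = select_repair_action(goal_posterior, REPAIR_ACTIONS, utility)
# With these assumed numbers, falling back to a non-acoustic modality
# (buttons) wins when no single goal hypothesis clearly dominates.
```

In a real decision network, the utilities would also depend on dialogue cost (time spent on repair), which is one motivation for sequencing repair actions rather than applying them all.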