Title: SLIM: Explicit Slot-Intent Mapping with BERT for Joint Multi-Intent Detection and Slot Filling
Authors: Cai, Fengyu; Zhou, Wanhao; Mi, Fei; Faltings, Boi
Publication date: 2022-01-01
Record date: 2023-01-16
DOI: 10.1109/ICASSP43922.2022.9747477
URL: https://infoscience.epfl.ch/handle/20.500.14299/193731
Web of Science ID: WOS:000864187907182
Type: conference paper

Abstract: Utterance-level intent detection and token-level slot filling are two key tasks for spoken language understanding (SLU) in task-oriented systems. Most existing approaches assume that only a single intent exists in an utterance. However, there are often multiple intents within an utterance in real-life scenarios. In this paper, we propose a multi-intent SLU framework, called SLIM, to jointly learn multi-intent detection and slot filling based on BERT. To fully exploit the existing annotation data and capture the interactions between slots and intents, SLIM introduces an explicit slot-intent classifier to learn the many-to-one mapping between slots and intents. Empirical results on three public multi-intent datasets demonstrate (1) the superior performance of SLIM compared to the current state of the art for SLU with multiple intents and (2) the benefits obtained from the slot-intent classifier.

Keywords: spoken language understanding; multi-intent classification; slot filling
Subject categories: Acoustics; Computer Science, Artificial Intelligence; Engineering, Electrical & Electronic; Computer Science; Engineering
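The abstract describes three prediction heads over a shared BERT encoder: multi-intent detection (several intents may fire per utterance), token-level slot filling, and an explicit slot-intent classifier assigning each slot to exactly one intent. A minimal sketch of that head structure is shown below; every dimension, weight, and variable name here is an illustrative stand-in (random weights instead of a fine-tuned BERT), not the paper's actual implementation.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def linear(vec, weights):
    # weights holds one row of coefficients per output unit
    return [sum(v * w for v, w in zip(vec, row)) for row in weights]

def rand_mat(rows, cols):
    return [[random.gauss(0, 1) for _ in range(cols)] for _ in range(rows)]

# Hypothetical sizes, chosen for illustration only
HIDDEN, N_INTENTS, N_SLOTS, SEQ_LEN = 8, 3, 5, 4

# Stand-ins for BERT encoder outputs: one [CLS] vector for the whole
# utterance, one hidden vector per token
h_cls = [random.gauss(0, 1) for _ in range(HIDDEN)]
tokens = [[random.gauss(0, 1) for _ in range(HIDDEN)] for _ in range(SEQ_LEN)]

# Randomly initialised head weights (a real model would learn these)
W_intent = rand_mat(N_INTENTS, HIDDEN)
W_slot = rand_mat(N_SLOTS, HIDDEN)
W_map = rand_mat(N_INTENTS, HIDDEN)

# (1) Multi-intent detection: one independent sigmoid per intent, so
# several intents can be active in the same utterance
intent_probs = [sigmoid(z) for z in linear(h_cls, W_intent)]
detected_intents = [i for i, p in enumerate(intent_probs) if p > 0.5]

# (2) Token-level slot filling: softmax over slot labels per token
slot_labels = [max(range(N_SLOTS), key=lambda s: softmax(linear(h, W_slot))[s])
               for h in tokens]

# (3) Slot-intent classifier: map each token (and hence its slot) to
# exactly one intent -- the many-to-one slot-to-intent mapping
slot_to_intent = [max(range(N_INTENTS), key=lambda i: softmax(linear(h, W_map))[i])
                  for h in tokens]

print(detected_intents, slot_labels, slot_to_intent)
```

In this sketch the three heads share the same token representations, which is what lets the slot-intent mapping couple the two tasks; the paper's trained model would produce these outputs from fine-tuned BERT features rather than random ones.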