Paper ID | SS-5.4
Paper Title | A CO-INTERACTIVE TRANSFORMER FOR JOINT SLOT FILLING AND INTENT DETECTION
Authors | Libo Qin, Tailu Liu, Wanxiang Che, Bingbing Kang, Sendong Zhao, Ting Liu, Harbin Institute of Technology, China
Session | SS-5: Domain Adaptation for Multimedia Signal Processing
Location | Gather.Town
Session Time | Wednesday, 09 June, 13:00 - 13:45
Presentation Time | Wednesday, 09 June, 13:00 - 13:45
Presentation | Poster
Topic | Special Sessions: Domain Adaptation for Multimedia Signal Processing
Abstract | Intent detection and slot filling are the two main tasks in building a spoken language understanding (SLU) system. The two tasks are closely related, and the information from one task can benefit the other. Previous studies either implicitly model the two tasks with a multi-task framework or explicitly consider only the single information flow from intent to slot. None of the prior approaches model the bidirectional connection between the two tasks simultaneously in a unified framework. In this paper, we propose a Co-Interactive Transformer that considers the cross-impact between the two tasks. Instead of adopting the self-attention mechanism of the vanilla Transformer, we propose a co-interactive module that captures this cross-impact by building a bidirectional connection between the two related tasks, so that the slot and intent representations can attend to the corresponding mutual information. Experimental results on two public datasets show that our model achieves state-of-the-art performance.
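The core idea in the abstract, replacing self-attention with a bidirectional cross-attention between the slot and intent representations, can be illustrated with a minimal numpy sketch. This is not the authors' implementation: it assumes single-head scaled dot-product attention without learned projections, and the hidden-state names (`H_slot`, `H_intent`) and sizes are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values):
    # Scaled dot-product attention where one task's states act as queries
    # over the other task's states (keys and values).
    d_k = keys_values.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d_k)   # (T, T)
    return softmax(scores) @ keys_values              # (T, d)

def co_interactive_layer(H_slot, H_intent):
    # Bidirectional connection: slot attends to intent, and intent attends
    # to slot, simultaneously (single-head sketch of the cross-impact idea).
    H_slot_new = cross_attention(H_slot, H_intent)
    H_intent_new = cross_attention(H_intent, H_slot)
    return H_slot_new, H_intent_new

rng = np.random.default_rng(0)
T, d = 6, 8                          # sequence length and hidden size (illustrative)
H_slot = rng.standard_normal((T, d))
H_intent = rng.standard_normal((T, d))
S, I = co_interactive_layer(H_slot, H_intent)
print(S.shape, I.shape)              # each stays (T, d), here (6, 8) (6, 8)
```

Because each direction uses the other task's states as keys and values, every updated slot vector is a convex combination of intent vectors and vice versa, which is one concrete way to realize the "attend to the corresponding mutual information" described above.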