Paper ID | HLT-11.5
Paper Title | Improving Cross-domain Slot Filling with Common Syntactic Structure
Authors | Luchen Liu, Xixun Lin, Peng Zhang, Chinese Academy of Sciences, China; Bin Wang, Xiaomi Inc., China
Session | HLT-11: Language Understanding 3: Speech Understanding - General Topics
Location | Gather.Town
Session Time | Thursday, 10 June, 13:00 - 13:45
Presentation Time | Thursday, 10 June, 13:00 - 13:45
Presentation | Poster
Topic | Human Language Technology: [HLT-UNDE] Spoken Language Understanding and Computational Semantics
Abstract |
Cross-domain slot filling is a challenging task in spoken language understanding due to differences in text genre across domains. In this paper, we address this task by exploiting the syntactic structures of user utterances, because these structures are readily available and can be shared between utterances from different domains. To this end, we propose a novel Syntactic Structure Encoder (SSE) module and incorporate it into a detection-prediction framework. SSE introduces a graph convolutional network (GCN) to learn common structures from multiple source domains, which facilitate better adaptation to the target domain. Experiments conducted on the SNIPS dataset show that our model significantly outperforms the state-of-the-art approach to cross-domain slot filling. Specifically, our model surpasses the best prior model by roughly 4 and 5 F1 points under the 20-example and 50-example settings, respectively.
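The abstract describes encoding an utterance's syntactic structure with a GCN. As a point of reference only, the sketch below shows the generic graph-convolution update over a dependency-parse adjacency matrix that such an encoder could build on; the paper's actual SSE module and its integration into the detection-prediction framework are not reproduced here, and the class and variable names are hypothetical.

```python
import torch
import torch.nn as nn


class SyntacticGCNLayer(nn.Module):
    """One graph convolution over a dependency-parse adjacency matrix.

    Minimal sketch, not the authors' SSE implementation: it only illustrates
    the standard update H' = ReLU(norm(A + I) H W) applied to token states.
    """

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.linear = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, token_states: torch.Tensor, adjacency: torch.Tensor) -> torch.Tensor:
        # adjacency: (batch, seq_len, seq_len) 0/1 matrix from a dependency parse;
        # add self-loops so each token keeps its own representation.
        adjacency = adjacency + torch.eye(adjacency.size(-1), device=adjacency.device)
        # Row-normalize so each token averages over its syntactic neighbours.
        degree = adjacency.sum(dim=-1, keepdim=True).clamp(min=1.0)
        normalized = adjacency / degree
        # Aggregate neighbour states, then apply a shared linear transform.
        return torch.relu(self.linear(normalized @ token_states))


if __name__ == "__main__":
    # Toy usage: one utterance of 5 tokens with 16-dim token states.
    states = torch.randn(1, 5, 16)
    adj = torch.zeros(1, 5, 5)
    adj[0, 0, 1] = adj[0, 1, 0] = 1.0  # e.g. a dependency edge between tokens 0 and 1
    layer = SyntacticGCNLayer(hidden_dim=16)
    print(layer(states, adj).shape)  # torch.Size([1, 5, 16])
```

Because the adjacency matrix depends only on the parse and not on domain-specific slot labels, the same layer can in principle be shared across source domains, which is the intuition behind learning common syntactic structure for transfer.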