Paper ID | HLT-4.5 |
Paper Title |
LEARNING TO SELECT CONTEXT IN A HIERARCHICAL AND GLOBAL PERSPECTIVE FOR OPEN-DOMAIN DIALOGUE GENERATION |
Authors |
Lei Shen, Institute of Computing Technology, Chinese Academy of Sciences, China; Haolan Zhan, Institute of Software, Chinese Academy of Sciences, China; Xin Shen, Australian National University, Australia; Yang Feng, Institute of Computing Technology, Chinese Academy of Sciences, China |
Session | HLT-4: Dialogue Systems 2: Response Generation |
Location | Gather.Town |
Session Time | Tuesday, 08 June, 14:00 - 14:45 |
Presentation Time | Tuesday, 08 June, 14:00 - 14:45 |
Presentation | Poster |
Topic |
Human Language Technology: [HLT-DIAL] Discourse and Dialog |
Abstract |
Open-domain multi-turn conversations mainly have three features: a hierarchical semantic structure, redundant information, and long-term dependency. Given these, selecting relevant context becomes a challenging step for multi-turn dialogue generation. However, existing methods cannot differentiate useful words and utterances that lie far from the response. Besides, previous work performs context selection based only on a state in the decoder, which lacks global guidance and may place focus on irrelevant or unnecessary information. In this paper, we propose a novel model with a hierarchical self-attention mechanism and distant supervision to not only detect relevant words and utterances at both short and long distances, but also discern related information globally when decoding. Both automatic and human evaluations on two public datasets show that our model significantly outperforms other baselines in terms of fluency, coherence, and informativeness. |
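The abstract describes a hierarchical (word-level then utterance-level) self-attention encoder over the dialogue context. The sketch below is only an illustration of that general idea, not the authors' implementation: the module name, pooling choice, and all hyperparameters are hypothetical, and the distant-supervision signal and decoder are omitted.

```python
# Illustrative sketch (not the paper's code): two-level self-attention that
# first attends over words within each utterance, then over utterance vectors,
# mirroring the "hierarchical" context-selection idea in the abstract.
import torch
import torch.nn as nn


class HierarchicalSelfAttention(nn.Module):
    def __init__(self, d_model=256, n_heads=4):
        super().__init__()
        # Word-level attention: weighs words inside each utterance.
        self.word_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Utterance-level attention: weighs utterances across dialogue turns.
        self.utt_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, context):
        # context: (batch, n_utts, n_words, d_model) word embeddings per turn
        b, u, w, d = context.shape
        words = context.view(b * u, w, d)
        words, _ = self.word_attn(words, words, words)        # word-level self-attention
        utt_vecs = words.mean(dim=1).view(b, u, d)            # pool words into utterance vectors
        ctx, _ = self.utt_attn(utt_vecs, utt_vecs, utt_vecs)  # utterance-level self-attention
        return ctx                                            # (batch, n_utts, d_model)


# Usage: random embeddings standing in for 2 dialogues of 3 turns, 10 words each.
if __name__ == "__main__":
    enc = HierarchicalSelfAttention()
    fake_context = torch.randn(2, 3, 10, 256)
    print(enc(fake_context).shape)  # torch.Size([2, 3, 256])
```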