Paper ID | HLT-4.4
Paper Title | HSAN: A HIERARCHICAL SELF-ATTENTION NETWORK FOR MULTI-TURN DIALOGUE GENERATION
Authors | Yawei Kong, Lu Zhang, Can Ma, Cong Cao; Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China
Session | HLT-4: Dialogue Systems 2: Response Generation
Location | Gather.Town
Session Time | Tuesday, 08 June, 14:00 - 14:45
Presentation Time | Tuesday, 08 June, 14:00 - 14:45
Presentation | Poster
Topic | Human Language Technology: [HLT-DIAL] Discourse and Dialog
Abstract | In a multi-turn dialogue system, response generation depends not only on the sentences in the context but also on the words within each utterance. Although many methods model the relationship between words and utterances, problems remain, such as a tendency to generate trivial responses. In this paper, we propose a hierarchical self-attention network, named HSAN, which attends to the important words and utterances in the context simultaneously. First, we use a hierarchical encoder to update the word and utterance representations with the corresponding position information. Second, the response representations are updated by the masked self-attention module in the decoder. Finally, the relevance between utterances and the response is computed by another self-attention module and used in the next response decoding step. In terms of both automatic metrics and human judgments, experimental results show that HSAN significantly outperforms all baselines on two common public datasets.
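The abstract describes a three-part pipeline: a hierarchical (word-level, then utterance-level) self-attention encoder with position information, a masked self-attention decoder over the response, and an utterance-response relevance attention feeding the next decoding step. The paper's actual implementation details are not reproduced here; the following is a minimal PyTorch sketch of that pipeline under stated assumptions. All module names, layer counts, dimensions, and the mean-pooling of words into utterance vectors are illustrative choices, not the authors' configuration.

```python
import torch
import torch.nn as nn

class HierarchicalSelfAttentionEncoder(nn.Module):
    """Word-level then utterance-level self-attention with learned positions (assumed design)."""
    def __init__(self, vocab_size, d_model=256, nhead=4, max_len=64, max_turns=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model, padding_idx=0)
        self.word_pos = nn.Embedding(max_len, d_model)   # word position information
        self.utt_pos = nn.Embedding(max_turns, d_model)  # utterance position information
        self.word_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True), num_layers=2)
        self.utt_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True), num_layers=2)

    def forward(self, context):
        # context: (batch, turns, seq_len) token ids; 0 is padding.
        # Assumes no utterance is entirely padding.
        b, t, s = context.shape
        x = self.embed(context) + self.word_pos(torch.arange(s, device=context.device))
        x = x.view(b * t, s, -1)
        pad = (context.view(b * t, s) == 0)
        words = self.word_encoder(x, src_key_padding_mask=pad)
        # Mean-pool word states into one vector per utterance (assumed pooling).
        keep = (~pad).unsqueeze(-1).float()
        utt = (words * keep).sum(1) / keep.sum(1).clamp(min=1.0)
        utt = utt.view(b, t, -1) + self.utt_pos(torch.arange(t, device=context.device))
        return self.utt_encoder(utt)  # (batch, turns, d_model)

class HSANDecoderBlock(nn.Module):
    """Masked self-attention over the response, then attention to the encoded utterances."""
    def __init__(self, d_model=256, nhead=4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                                nn.Linear(4 * d_model, d_model))
        self.n1, self.n2, self.n3 = (nn.LayerNorm(d_model) for _ in range(3))

    def forward(self, resp, utts):
        # Causal mask: each response position attends only to earlier positions.
        L = resp.size(1)
        causal = torch.triu(torch.ones(L, L, dtype=torch.bool, device=resp.device), 1)
        h, _ = self.self_attn(resp, resp, resp, attn_mask=causal)
        resp = self.n1(resp + h)
        # Utterance-response relevance: attend from response states over utterance states.
        h, _ = self.cross_attn(resp, utts, utts)
        resp = self.n2(resp + h)
        return self.n3(resp + self.ff(resp))

# Shape check on toy inputs (hypothetical sizes).
enc = HierarchicalSelfAttentionEncoder(vocab_size=1000)
dec = HSANDecoderBlock()
ctx = torch.randint(1, 1000, (2, 3, 10))  # 2 dialogues, 3 turns, 10 tokens each
resp = torch.randn(2, 7, 256)             # embedded partial response
print(dec(resp, enc(ctx)).shape)          # torch.Size([2, 7, 256])
```

In this sketch the relevance attention is realized as cross-attention from response states to utterance states, which is one plausible reading of "computed by another self-attention module and used in the next response decoding step"; the paper may wire it differently.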