Paper ID | HLT-14.3
Paper Title | DUALFORMER: A UNIFIED BIDIRECTIONAL SEQUENCE-TO-SEQUENCE LEARNING
Authors | Jen-Tzung Chien, Wei-Hsiang Chang, National Chiao Tung University, Taiwan
Session | HLT-14: Language Representations
Location | Gather.Town
Session Time | Thursday, 10 June, 14:00 - 14:45
Presentation Time | Thursday, 10 June, 14:00 - 14:45
Presentation | Poster
Topic | Human Language Technology: [HLT-MLMD] Machine Learning Methods for Language
Abstract
This paper presents a new dual domain mapping based on unified bidirectional sequence-to-sequence (seq2seq) learning. Traditionally, dual learning for domain mapping was constructed with an intrinsic connection in which the conditional generative models in the two directions were mutually leveraged and combined. The additional feedback from the reverse generation direction was used to regularize sequential learning in the original direction of domain mapping, which accordingly improved domain matching between the source and target sequences. However, the reconstruction of knowledge within the two domains was ignored, and the dual information carried by the separate models in the two training directions was not sufficiently exploited. To address this weakness, this study proposes a closed-loop seq2seq learning scheme in which domain mapping and domain knowledge are jointly learned. In particular, a new feature-level dual learning is incorporated to build a dualformer, where feature integration and feature reconstruction are further performed to bridge the dual tasks. Experiments demonstrate the merit of the proposed dualformer for machine translation based on multi-objective seq2seq learning.
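The abstract describes a multi-objective training criterion that combines forward and backward translation losses with feature-level reconstruction terms that close the loop between the two domains. The paper's implementation is not given here; the following is a minimal PyTorch-style sketch of such an objective under stated assumptions. The class name, the weights lambda_dual and lambda_recon, and the tensor shapes are all illustrative choices, not the authors' code.

```python
import torch
import torch.nn as nn

class DualSeq2SeqLoss(nn.Module):
    """Hypothetical multi-objective loss for closed-loop dual seq2seq learning:
    forward mapping + backward (dual) mapping + feature reconstruction."""

    def __init__(self, lambda_dual: float = 1.0, lambda_recon: float = 0.1):
        super().__init__()
        self.ce = nn.CrossEntropyLoss()   # token-level translation losses
        self.mse = nn.MSELoss()           # feature-level reconstruction losses
        self.lambda_dual = lambda_dual
        self.lambda_recon = lambda_recon

    def forward(self,
                fwd_logits: torch.Tensor,      # (batch, tgt_len, vocab)
                tgt_ids: torch.Tensor,         # (batch, tgt_len)
                bwd_logits: torch.Tensor,      # (batch, src_len, vocab)
                src_ids: torch.Tensor,         # (batch, src_len)
                src_feat: torch.Tensor,        # encoder features of source domain
                src_feat_recon: torch.Tensor,  # reconstructed source features
                tgt_feat: torch.Tensor,        # encoder features of target domain
                tgt_feat_recon: torch.Tensor   # reconstructed target features
                ) -> torch.Tensor:
        # Forward mapping: source -> target translation loss
        loss_fwd = self.ce(fwd_logits.transpose(1, 2), tgt_ids)
        # Backward (dual) mapping: target -> source translation loss
        loss_bwd = self.ce(bwd_logits.transpose(1, 2), src_ids)
        # Feature reconstruction in each domain closes the loop
        loss_recon = (self.mse(src_feat_recon, src_feat)
                      + self.mse(tgt_feat_recon, tgt_feat))
        return loss_fwd + self.lambda_dual * loss_bwd + self.lambda_recon * loss_recon
```

In this reading, the reconstruction term is what distinguishes the closed-loop scheme from conventional dual learning: rather than only exchanging feedback between the two translation directions, each domain's features must also be recoverable from the shared representation, so domain mapping and domain knowledge are optimized jointly.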