2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information

Paper Detail

Paper ID: HLT-18.4
Paper Title: GRAPH ATTENTION AND INTERACTION NETWORK WITH MULTI-TASK LEARNING FOR FACT VERIFICATION
Authors: Rui Yang, Runze Wang, Zhen-Hua Ling, National Engineering Laboratory for Speech and Language Information Processing, University of Science and Technology of China, China
Session: HLT-18: Language Understanding 6: Summarization and Comprehension
Location: Gather.Town
Session Time: Friday, 11 June, 13:00 - 13:45
Presentation Time: Friday, 11 June, 13:00 - 13:45
Presentation: Poster
Topic: Human Language Technology: [HLT-SDTM] Spoken Document Retrieval and Text Mining
Abstract: Fact verification is a challenging task which requires retrieving relevant sentences from plain text and then using these sentences as evidence to verify given claims. Conventional methods treat sentence selection and claim verification as separate subtasks in a pipeline. Claim verification models usually analyze the inference relationship between each retrieved sentence and the claim, and then aggregate the claim-sentence representations with graph-based reasoning methods such as graph attention networks (GAT). In this paper, we propose a graph attention and interaction network (GAIN) for claim verification. In addition to GAT, this model includes a graph interaction network (GIN), which considers the comparative relationships among all claim-sentence representations. More importantly, a multi-task learning strategy, which combines the objectives of both sentence selection and claim verification, is designed to train the GAIN model in order to utilize the supervision information of both subtasks. Experimental results on the FEVER dataset show that the GAIN model with multi-task learning achieves a FEVER score of 73.04%, outperforming other published models.
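To make the abstract's architecture concrete, the sketch below illustrates the general idea of combining graph attention, pairwise graph interaction, and a multi-task objective over claim-sentence nodes. It is a minimal PyTorch sketch, not the authors' implementation: the layer shapes, pooling choices, loss weighting, and names (GraphAttentionLayer, GraphInteractionLayer, GAINSketch) are illustrative assumptions, since the paper's exact GAT/GIN formulations are not given in the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphAttentionLayer(nn.Module):
    """Single-head attention over fully connected claim-sentence nodes (GAT-style, assumed)."""

    def __init__(self, dim):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)

    def forward(self, nodes):                         # nodes: (num_nodes, dim)
        scores = self.query(nodes) @ self.key(nodes).T / nodes.size(-1) ** 0.5
        attn = torch.softmax(scores, dim=-1)          # attention over neighbor nodes
        return F.relu(attn @ self.value(nodes))       # aggregated node states


class GraphInteractionLayer(nn.Module):
    """Pairwise comparison among all node representations (GIN-style interaction, assumed)."""

    def __init__(self, dim):
        super().__init__()
        self.compare = nn.Linear(2 * dim, dim)

    def forward(self, nodes):                         # nodes: (num_nodes, dim)
        n, d = nodes.shape
        left = nodes.unsqueeze(1).expand(n, n, d)     # node i in each (i, j) pair
        right = nodes.unsqueeze(0).expand(n, n, d)    # node j in each (i, j) pair
        pair = torch.cat([left, right], dim=-1)       # all pairwise combinations
        interactions = F.relu(self.compare(pair))     # compare each pair
        return interactions.mean(dim=1)               # pool over interaction partners


class GAINSketch(nn.Module):
    """Hypothetical GAIN-style model: GAT + GIN branches feeding two task heads."""

    def __init__(self, dim=768, num_labels=3):
        super().__init__()
        self.gat = GraphAttentionLayer(dim)
        self.gin = GraphInteractionLayer(dim)
        self.evidence_head = nn.Linear(2 * dim, 1)           # sentence selection head
        self.claim_head = nn.Linear(2 * dim, num_labels)     # claim verification head

    def forward(self, claim_sentence_reprs):
        # claim_sentence_reprs: (num_retrieved_sentences, dim), e.g. encoder outputs
        g_attn = self.gat(claim_sentence_reprs)
        g_inter = self.gin(claim_sentence_reprs)
        nodes = torch.cat([g_attn, g_inter], dim=-1)
        evidence_logits = self.evidence_head(nodes).squeeze(-1)
        claim_logits = self.claim_head(nodes.mean(dim=0, keepdim=True))
        return evidence_logits, claim_logits


# Multi-task objective: weighted sum of the two task losses (the 0.5 weight is an assumption).
model = GAINSketch()
reprs = torch.randn(5, 768)                           # 5 retrieved claim-sentence pairs
evidence_labels = torch.tensor([1., 0., 1., 0., 0.])  # which sentences are evidence
claim_label = torch.tensor([2])                       # e.g. SUPPORTED / REFUTED / NOT ENOUGH INFO
evidence_logits, claim_logits = model(reprs)
loss = F.cross_entropy(claim_logits, claim_label) + \
       0.5 * F.binary_cross_entropy_with_logits(evidence_logits, evidence_labels)
loss.backward()
```

The point of the joint loss is that gradients from the sentence-selection head and the claim-verification head both update the shared graph layers, which is how the multi-task strategy described in the abstract exploits supervision from both subtasks.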