2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information

Technical Program

Paper Detail

Paper ID: IVMSP-26.5
Paper Title: Cascade Attention Fusion for Fine-grained Image Captioning based on Multi-layer LSTM
Authors: Shuang Wang, Yun Meng, Yu Gu, Lei Zhang, Xiutiao Ye, Jingxian Tian, Licheng Jiao, Xidian University, China
Session: IVMSP-26: Attention for Vision
Location: Gather.Town
Session Time: Thursday, 10 June, 16:30 - 17:15
Presentation Time: Thursday, 10 June, 16:30 - 17:15
Presentation: Poster
Topic: Image, Video, and Multidimensional Signal Processing: [IVARS] Image & Video Analysis, Synthesis, and Retrieval
Abstract: Conventional visual attention-based image captioning approaches typically use image information alone to guide caption generation. Captions from these models tend to be coarse and ignore details in the image, such as objects, attributes, and the aspects that distinguish one image from another. In this paper, we propose a visual and semantic fusion network with a margin-based training guidance mechanism to generate fine-grained descriptions that capture more objects, attributes, and other distinguishing aspects of an image. In our model, the visual attention layer introduces more low-level visual information, while the semantic attention layer provides more high-level semantic attributes. Furthermore, the proposed margin-based loss encourages the model to produce more discriminative descriptions. Extensive experiments on the COCO and Flickr30K image captioning datasets validate our method and show its superior captioning performance: it achieves a state-of-the-art 70.6 CIDEr-D on Flickr30K and a competitive 123.5 CIDEr-D on MS-COCO.
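The abstract does not give the exact form of the margin-based loss; a common family of losses matching its description is the hinge-style ranking loss, which pushes the score of a matched image-caption pair above mismatched pairs by at least a fixed margin. The sketch below is an illustrative assumption of that family, not the authors' implementation; the function name, score inputs, and default margin are all hypothetical.

```python
def margin_ranking_loss(pos_score, neg_scores, margin=0.2):
    """Hinge-style ranking loss (illustrative sketch, not the paper's code).

    pos_score:  similarity score of the matched image-caption pair.
    neg_scores: scores of mismatched (negative) pairs.
    margin:     minimum required gap between positive and negative scores.

    Each negative incurs a penalty only when it scores within `margin`
    of the positive, which is what makes the resulting captions more
    discriminative between similar images.
    """
    return sum(max(0.0, margin - pos_score + s) for s in neg_scores)
```

For example, with a positive score of 0.9 and negatives of 0.5 and 0.85, only the second negative violates the 0.2 margin, contributing 0.15 to the loss.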