2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information

Technical Program

Paper Detail

Paper ID: IVMSP-7.1
Paper Title: SUPER-RESOLUTION AND INFECTION EDGE DETECTION CO-GUIDED LEARNING FOR COVID-19 CT SEGMENTATION
Authors: Yu Sang, Jinguang Sun, Simiao Wang, Liaoning Technical University, China; Heng Qi, Dalian University of Technology, China; Keqiu Li, Tianjin University, China
Session: IVMSP-7: Machine Learning for Image Processing I
Location: Gather.Town
Session Time: Wednesday, 09 June, 13:00 - 13:45
Presentation Time: Wednesday, 09 June, 13:00 - 13:45
Presentation: Poster
Topic: Image, Video, and Multidimensional Signal Processing: [IVTEC] Image & Video Processing Techniques
Abstract: In this paper, we propose a novel super-resolution and infection edge detection co-guided learning network for COVID-19 CT segmentation (CogSeg). Our CogSeg is a coherent framework consisting of two branches. Specifically, we use image super-resolution (SR) as an auxiliary task, which assists segmentation in recovering high-resolution representations. Moreover, we propose an infection edge detection guided region mutual information (RMI) loss, which uses the edge detection results of segmentation to explicitly maintain the high-order consistency between the segmentation prediction and the ground truth around infection edge pixels. Our CogSeg network effectively maintains high-resolution representations and leverages edge details to improve segmentation performance. When evaluated on two publicly available COVID-19 CT datasets, our CogSeg improves mIoU by 10.63 and 13.02 points over the established baseline method (i.e., U-Net). Moreover, our CogSeg achieves more appealing results, both quantitatively and qualitatively, than state-of-the-art methods.
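
The abstract describes a two-branch design: a segmentation branch co-trained with an auxiliary super-resolution branch over shared features, plus an edge-guided consistency term in the loss. The sketch below is a minimal PyTorch illustration of that idea only, not the authors' implementation: the module layout, channel sizes, class count, loss weights, and the Sobel-based edge weighting (standing in for the paper's edge-guided RMI loss) are all assumptions made for illustration.

```python
# Minimal sketch of a co-guided two-branch network (illustrative, not CogSeg itself).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoGuidedNet(nn.Module):
    def __init__(self, in_ch=1, num_classes=2, feat=64, scale=2):
        super().__init__()
        # Shared encoder feeding both branches (assumed structure).
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Segmentation branch: per-pixel class logits.
        self.seg_head = nn.Conv2d(feat, num_classes, 1)
        # Auxiliary SR branch: reconstructs a higher-resolution slice, pushing
        # the shared features to preserve high-resolution detail.
        self.sr_head = nn.Sequential(
            nn.Conv2d(feat, feat * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
            nn.Conv2d(feat, in_ch, 3, padding=1),
        )

    def forward(self, x):
        f = self.encoder(x)
        return self.seg_head(f), self.sr_head(f)

def sobel_edges(mask):
    """Rough edge map from a soft infection mask via Sobel filters (assumption)."""
    kx = torch.tensor([[[[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]]],
                      device=mask.device)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(mask, kx, padding=1)
    gy = F.conv2d(mask, ky, padding=1)
    return (gx.pow(2) + gy.pow(2) + 1e-6).sqrt()

def total_loss(seg_logits, sr_out, seg_gt, hr_img, lam_sr=0.5, lam_edge=1.0):
    # Segmentation and SR reconstruction terms (weights are placeholders).
    l_seg = F.cross_entropy(seg_logits, seg_gt)
    l_sr = F.l1_loss(sr_out, hr_img)
    # Edge-guided term: up-weight prediction/ground-truth disagreement around
    # predicted infection edges. Simplified stand-in for the edge-guided RMI loss.
    prob = seg_logits.softmax(1)[:, 1:2]
    edge_w = sobel_edges(prob).detach()
    gt = (seg_gt == 1).float().unsqueeze(1)
    l_edge = (edge_w * F.l1_loss(prob, gt, reduction='none')).mean()
    return l_seg + lam_sr * l_sr + lam_edge * l_edge
```

In a setup like this, the SR branch and its loss term are used only during training; at inference, only the encoder and segmentation head are needed, so the auxiliary task adds no test-time cost.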