Paper ID | IVMSP-1.4 |
Paper Title |
SSFENET: SPATIAL AND SEMANTIC FEATURE ENHANCEMENT NETWORK FOR OBJECT DETECTION |
Authors |
Tianyuan Wang, University of Chinese Academy of Sciences, China; Can Ma, Institute of Information Engineering, Chinese Academy of Sciences, China; Haoshan Su, University of Chinese Academy of Sciences, China; Weiping Wang, Institute of Information Engineering, Chinese Academy of Sciences, China |
Session | IVMSP-1: Object Detection 1 |
Location | Gather.Town |
Session Time: | Tuesday, 08 June, 13:00 - 13:45 |
Presentation Time: | Tuesday, 08 June, 13:00 - 13:45 |
Presentation | Poster |
Topic |
Image, Video, and Multidimensional Signal Processing: [IVARS] Image & Video Analysis, Synthesis, and Retrieval |
Abstract |
Current state-of-the-art object detectors generally use pre-trained classification networks to extract features and then employ feature pyramids to detect objects at different scales. However, classification networks favor translation invariance and discard location information, so directly fusing the extracted features degrades detection performance. In this paper, we present a novel network to address this dilemma, denoted the Spatial and Semantic Feature Enhancement Network (SSFENet). First, we introduce a Spatial Feature Enhancement Block that uses dilated convolution and weighted feature fusion to enhance the spatial information in the features. Second, in the low-level stage, our Semantic Feature Enhancement Block uses the backbone network of the high-level stage to obtain features with richer semantic information, and introduces only a small computational cost owing to shared convolution layers. Experimental results on the MS-COCO benchmark show that the proposed SSFENet significantly improves the mAP of commonly used object detectors. |
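The abstract names two ingredients of the Spatial Feature Enhancement Block: dilated convolution and weighted feature fusion. The paper does not give implementation details here, so the following is only a minimal illustrative sketch of those two generic operations (a naive 1-D dilated convolution, and a ReLU-then-normalize weighted fusion in the style of fast normalized fusion); the function names and the fusion scheme are assumptions, not the authors' code.

```python
import numpy as np

def weighted_fusion(features, raw_weights, eps=1e-4):
    """Fuse same-shape feature maps with non-negative, normalized weights.
    This is a generic 'fast normalized fusion' scheme, used here only as a
    plausible stand-in for the paper's weighted feature fusion."""
    w = np.maximum(np.asarray(raw_weights, dtype=float), 0.0)  # keep weights non-negative
    w = w / (w.sum() + eps)                                    # normalize so weights sum to ~1
    return sum(wi * f for wi, f in zip(w, features))

def dilated_conv1d(x, kernel, dilation=2):
    """Naive valid-mode 1-D dilated convolution: kernel taps are spaced
    `dilation` samples apart, enlarging the receptive field without
    adding parameters (the detectors use 2-D versions of this)."""
    k = len(kernel)
    span = (k - 1) * dilation + 1            # effective receptive field
    out = np.zeros(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * dilation] for j in range(k))
    return out
```

For example, fusing a map of ones with a map of twos under equal raw weights yields values near 1.5, and a two-tap kernel with dilation 2 sums samples two positions apart.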