Paper ID | IVMSP-25.3
Paper Title | Fine-Grained Pose Temporal Memory Module for Video Pose Estimation and Tracking
Authors | Chaoyi Wang, Shanghai Jiao Tong University, China; Yang Hua, Queen's University Belfast, United Kingdom; Tao Song, Zhengui Xue, Ruhui Ma, Shanghai Jiao Tong University, China; Neil Robertson, Queen's University Belfast, United Kingdom; Haibing Guan, Shanghai Jiao Tong University, China
Session | IVMSP-25: Tracking
Location | Gather.Town
Session Time | Thursday, 10 June, 16:30 - 17:15
Presentation Time | Thursday, 10 June, 16:30 - 17:15
Presentation | Poster
Topic | Image, Video, and Multidimensional Signal Processing: [IVARS] Image & Video Analysis, Synthesis, and Retrieval
Abstract | Video pose estimation and tracking has improved substantially with recent advances in image pose estimation. However, many challenging cases remain, such as body-part occlusion, fast body motion, camera zooming, and complex backgrounds. Most existing methods use temporal information only to obtain more precise human bounding boxes, or only in the tracking stage, and thus fail to improve the accuracy of pose estimation itself. To address these problems and exploit temporal information efficiently and effectively, we present a novel structure, the pose temporal memory module, which can be flexibly integrated into top-down pose estimation frameworks. The proposed module aggregates the temporal information stored in the pose temporal memory into the current frame's features. We also adapt compositional de-attention (CoDA) to handle the keypoint occlusion problem unique to this task, and propose a novel keypoint feature replacement to recover extreme detection errors under fine-grained, keypoint-level guidance. To verify the generality and effectiveness of the proposed method, we integrate our module into two widely used pose estimation frameworks and obtain notable improvements on the PoseTrack dataset at only a small additional computational cost.
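
The abstract does not specify how the memory read and aggregation are implemented; the sketch below is one plausible interpretation, assuming an attention-based read over a sliding window of past frame features with a residual merge into the current frame. The class name `PoseTemporalMemory`, the `memory_size` parameter, the 1x1-conv query/key/value projections, and all tensor shapes are illustrative assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn


class PoseTemporalMemory(nn.Module):
    """Hypothetical sketch of a pose temporal memory module (not the
    paper's implementation): past frame features are stored in a
    sliding-window memory and aggregated into the current frame's
    feature map via attention."""

    def __init__(self, channels: int, memory_size: int = 5):
        super().__init__()
        self.memory_size = memory_size
        self.memory: list[torch.Tensor] = []  # features of past frames
        self.query = nn.Conv2d(channels, channels, kernel_size=1)
        self.key = nn.Conv2d(channels, channels, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) feature map of the current frame.
        if not self.memory:
            self._update(feat)
            return feat

        b, c, h, w = feat.shape
        q = self.query(feat).flatten(2)                       # (B, C, HW)
        mem = torch.stack(self.memory, dim=1)                 # (B, T, C, H, W)
        t = mem.size(1)
        k = self.key(mem.flatten(0, 1)).view(b, t, c, -1)     # (B, T, C, HW)
        v = self.value(mem.flatten(0, 1)).view(b, t, c, -1)   # (B, T, C, HW)

        # Each current-frame position attends over all T*HW memory positions.
        k = k.permute(0, 2, 1, 3).flatten(2)                  # (B, C, T*HW)
        v = v.permute(0, 2, 1, 3).flatten(2)                  # (B, C, T*HW)
        attn = torch.softmax(
            q.transpose(1, 2) @ k / c ** 0.5, dim=-1)         # (B, HW, T*HW)
        agg = (attn @ v.transpose(1, 2)).transpose(1, 2)      # (B, C, HW)

        out = feat + agg.view(b, c, h, w)  # residual aggregation
        self._update(feat)
        return out

    def _update(self, feat: torch.Tensor) -> None:
        # Keep only the most recent `memory_size` frame features.
        self.memory.append(feat.detach())
        if len(self.memory) > self.memory_size:
            self.memory.pop(0)
```

In a top-down pipeline, such a module would sit between the backbone and the keypoint head, so that per-person heatmaps are predicted from temporally enriched features; where exactly the paper inserts it, and how CoDA and keypoint feature replacement interact with the memory, is not recoverable from the abstract alone.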