Paper ID | MMSP-8.5
Paper Title | DETECTION OF AUDIO-VIDEO SYNCHRONIZATION ERRORS VIA EVENT DETECTION
Authors | Joshua Ebenezer, University of Texas at Austin, United States; Yongjun Wu, Hai Wei, Sriram Sethuraman, Zongyi Liu, Amazon Prime Video, United States
Session | MMSP-8: Multimedia Retrieval and Signal Detection |
Location | Gather.Town |
Session Time | Friday, 11 June, 13:00 - 13:45
Presentation Time | Friday, 11 June, 13:00 - 13:45
Presentation | Poster
Topic | Multimedia Signal Processing: Human Centric Multimedia
Abstract |
We present a new method and a large-scale database for detecting audio-video synchronization errors in tennis videos. A deep network is trained to detect the visual signature of the tennis ball being hit by the racquet in the video stream. Another deep network is trained to detect the auditory signature of the same event in the audio stream. During evaluation, the audio network searches the audio stream for the audio event of the ball being hit. If the event is found in the audio, the neighboring interval in the video is searched for the corresponding visual signature. If the event is found in the audio stream but not in the video stream, an audio-video synchronization error is flagged. We developed a large-scale database of 504,300 frames from 6 hours of videos of tennis events, simulated A/V synchronization errors, and found that our method achieves high accuracy on the task.
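The flagging step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the event lists (assumed to be timestamps in seconds produced by the two detector networks), and the `window` parameter defining the "neighboring interval" are all hypothetical.

```python
# Hypothetical sketch of the A/V sync-error flagging logic from the abstract.
# audio_events / video_events: lists of "ball hit" event timestamps (seconds)
# produced by the audio and video detector networks, respectively.

def flag_sync_errors(audio_events, video_events, window=0.5):
    """For each audio 'ball hit' event, search the neighboring video
    interval (+/- window seconds) for a matching visual event; flag a
    synchronization error whenever no visual match is found."""
    errors = []
    for t_audio in audio_events:
        matched = any(abs(t_video - t_audio) <= window
                      for t_video in video_events)
        if not matched:
            errors.append(t_audio)  # audio event with no visual counterpart
    return errors
```

For example, with audio events at 1.0 s and 5.0 s but only one video event at 1.1 s, the second audio event has no visual counterpart within the window and is flagged.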