2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information

Paper Detail

Paper ID: IVMSP-10.1
Paper Title: DEEP SEMI-SUPERVISED METRIC LEARNING VIA IDENTIFICATION OF MANIFOLD MEMBERSHIPS
Authors: Furen Zhuang, Pierre Moulin, University of Illinois at Urbana-Champaign, United States
Session: IVMSP-10: Metric Learning and Interpretability
Location: Gather.Town
Session Time: Wednesday, 09 June, 13:00 - 13:45
Presentation Time: Wednesday, 09 June, 13:00 - 13:45
Presentation: Poster
Topic: Image, Video, and Multidimensional Signal Processing: [IVSMR] Image & Video Sensing, Modeling, and Representation
IEEE Xplore Open Preview: available in IEEE Xplore
Virtual Presentation: available in the Virtual Conference
Abstract: Three key challenges in semi-supervised metric learning are the difficulty of sampling loss-producing triplets, the difficulty of locating similar data that are far away from the anchor points, and the difficulty of making the model robust to noisy predicted pseudolabels. We propose a method which allows the use of class-representative anchors (proxies) and avoids the computational costs associated with triplet sampling. Our new semi-supervised metric learning method propagates labels along mutual nearest neighbor pairs, so that faraway similar data can be drawn to the anchors, while data not along these paths (and hence not on the same manifold as the anchors) can be pushed away from these anchors. By assessing the number of different labels propagated to the same point, we obtain an estimate of the probability that our pseudolabel prediction is accurate, and are hence able to attenuate the effect of uncertain pseudolabels on our model by factoring in the confidence of these predictions. We show the superiority of our method over various state-of-the-art methods on four diverse public datasets.
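The core idea the abstract describes can be illustrated with a minimal sketch: find mutual nearest-neighbor pairs in embedding space, propagate each labeled anchor's label along those edges, and use the agreement among the labels arriving at a point as a confidence estimate for its pseudolabel. This is an illustrative assumption-laden toy, not the authors' implementation; function names, the choice of `k`, and the BFS propagation scheme are all stand-ins.

```python
import numpy as np

def mutual_nn_pairs(X, k=2):
    """Return the set of mutual k-nearest-neighbor pairs in embedding space."""
    d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # a point is not its own neighbor
    nn = np.argsort(d, axis=1)[:, :k]    # k nearest neighbors of each point
    pairs = set()
    for i in range(len(X)):
        for j in nn[i]:
            if i in nn[j]:               # keep the pair only if mutual
                pairs.add((min(i, int(j)), max(i, int(j))))
    return pairs

def propagate_labels(X, labels, k=2):
    """Propagate anchor labels along mutual-NN edges (illustrative sketch).

    labels: dict {index: class} for the labeled anchors.
    Returns (pseudolabels, confidence), where confidence is the fraction
    of propagated votes agreeing with the majority label at each point.
    """
    adj = {i: [] for i in range(len(X))}
    for a, b in mutual_nn_pairs(X, k):
        adj[a].append(b)
        adj[b].append(a)
    votes = {i: [] for i in range(len(X))}
    for src, lab in labels.items():
        # breadth-first walk from each labeled anchor; each anchor
        # contributes at most one vote to every reachable point
        seen, frontier = {src}, [src]
        while frontier:
            nxt = []
            for u in frontier:
                for v in adj[u]:
                    if v not in seen:
                        seen.add(v)
                        votes[v].append(lab)
                        nxt.append(v)
            frontier = nxt
    pseudo, conf = {}, {}
    for i, vs in votes.items():
        if vs:
            vals, counts = np.unique(vs, return_counts=True)
            pseudo[i] = int(vals[np.argmax(counts)])   # majority vote
            conf[i] = counts.max() / counts.sum()      # vote agreement
    return pseudo, conf

# Two well-separated clusters, one labeled anchor in each (hypothetical data).
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.0],
              [5.0, 5.0], [5.1, 5.0], [5.2, 5.0]])
pseudo, conf = propagate_labels(X, {0: 0, 3: 1}, k=2)
```

Points reachable only through one cluster's mutual-NN edges receive only that cluster's label, so their confidence is 1; points not on any path to an anchor (not on its manifold) receive no pseudolabel at all, matching the "push away" case in the abstract.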