Paper ID | MLSP-4.5 |
Paper Title | Embedding Semantic Hierarchy in Discrete Optimal Transport for Risk Minimization |
Authors | Yubin Ge, University of Illinois Urbana-Champaign, United States; Site Li, Carnegie Mellon University, United States; Xuyang Li, Northeastern University, United States; Fangfang Fan, Wanqing Xie, Harvard University, United States; Jane You, Hong Kong Polytechnic University, China; Xiaofeng Liu, Harvard University, United States |
Session | MLSP-4: Machine Learning for Classification Applications 1 |
Location | Gather.Town |
Session Time | Tuesday, 08 June, 14:00 - 14:45 |
Presentation Time | Tuesday, 08 June, 14:00 - 14:45 |
Presentation | Poster |
Topic | Machine Learning for Signal Processing: [MLR-PRCL] Pattern recognition and classification |
Abstract |
Deep networks trained with the widely used cross-entropy (CE) loss have achieved significant progress in classification accuracy. However, the CE loss essentially ignores the risk of misclassification, which is usually measured by the distance between the prediction and the label in a semantic hierarchical tree. In this paper, we propose to incorporate risk-aware inter-class correlation into a discrete optimal transport (DOT) training framework by configuring its ground distance matrix. The ground distance matrix can be pre-defined following a prior on hierarchical semantic risk. Specifically, we define the tree-induced error (TIE) on a hierarchical semantic tree and extend it to an increasing function of TIE from the optimization perspective. The semantic similarity at each level of the tree is integrated via information gain. We achieve promising results on several large-scale image classification tasks with a semantic tree structure in a plug-and-play manner.
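The core construction described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy four-class hierarchy and class names are invented for the example, TIE is taken as the raw edge count between two leaves (the paper further applies an increasing function of TIE weighted by per-level information gain, which this sketch omits), and the DOT loss uses the fact that optimal transport against a one-hot label reduces to the expected ground distance to the true class.

```python
import numpy as np

# Hypothetical toy hierarchy: each leaf class is given by its root-to-leaf path.
paths = {
    "cat":   ["root", "animal", "cat"],
    "dog":   ["root", "animal", "dog"],
    "car":   ["root", "vehicle", "car"],
    "truck": ["root", "vehicle", "truck"],
}

def tie(p, q):
    """Tree-induced error: number of tree edges between two leaves."""
    common = 0
    for a, b in zip(p, q):
        if a != b:
            break
        common += 1
    return (len(p) - common) + (len(q) - common)

# Ground distance matrix D, pre-defined from the semantic tree.
classes = list(paths)
D = np.array([[tie(paths[a], paths[b]) for b in classes]
              for a in classes], dtype=float)

def dot_loss(probs, y, D):
    """DOT loss against a one-hot label y.

    When one marginal is a Dirac at class y, the discrete OT cost
    collapses to the expected ground distance from the predicted
    distribution to y: sum_k probs[k] * D[k, y].
    """
    return float(probs @ D[:, y])

# Prediction over [cat, dog, car, truck]: mass placed on the sibling
# class "dog" is penalized less than mass on the distant class "truck".
probs = np.array([0.7, 0.2, 0.05, 0.05])
```

Unlike CE, which only looks at the probability assigned to the true class, this loss charges every unit of misplaced probability mass by how far its class sits from the label in the tree, so semantically close mistakes are cheaper than distant ones.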