Paper ID | IVMSP-9.3
Paper Title | KAN: KNOWLEDGE-AUGMENTED NETWORKS FOR FEW-SHOT LEARNING
Authors | Zeyang Zhu, Xin Lin, East China Normal University, China
Session | IVMSP-9: Zero and Few Shot Learning
Location | Gather.Town
Session Time | Wednesday, 09 June, 13:00 - 13:45
Presentation Time | Wednesday, 09 June, 13:00 - 13:45
Presentation | Poster
Topic | Image, Video, and Multidimensional Signal Processing: [IVTEC] Image & Video Processing Techniques
Abstract | The few-shot learning task aims to build a model that can quickly learn new concepts from only a few examples. Current approaches that learn new categories from a few images, or even a single image, rely solely on the visual modality. However, it is difficult to learn representative features of new categories from only a few images, because some categories are visually similar. Moreover, due to variations in viewpoint and luminosity, and because individuals of the same species can appear markedly different from one another, models are unable to learn exact class representations. Therefore, considering that semantic information can enhance understanding when visual information is limited, we propose Knowledge-Augmented Networks (KAN), which combine visual features with semantic information extracted from a knowledge graph to represent the features of each class. We demonstrate the effectiveness of our method on standard few-shot learning tasks and further observe that, with the augmented semantic information from the knowledge graph, KAN learns more disentangled representations. Experiments show that our model outperforms state-of-the-art methods.
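The abstract describes fusing visual features with knowledge-graph-derived class semantics. The sketch below is a minimal PyTorch illustration of that general idea, assuming a prototype-style few-shot episode, pre-extracted visual features, and pre-trained knowledge-graph class embeddings; the module name, the learned gating fusion, and the distance-based classifier are illustrative assumptions, not the authors' KAN architecture.

```python
import torch
import torch.nn as nn


class FusedPrototypeClassifier(nn.Module):
    """Illustrative fusion of visual class prototypes with knowledge-graph
    (KG) class embeddings for few-shot classification.
    NOTE: hypothetical sketch, not the authors' KAN implementation."""

    def __init__(self, visual_dim: int, semantic_dim: int):
        super().__init__()
        # Project KG class embeddings into the visual feature space.
        self.sem_proj = nn.Linear(semantic_dim, visual_dim)
        # Scalar gate per class, mixing visual and semantic information.
        self.gate = nn.Sequential(nn.Linear(visual_dim * 2, 1), nn.Sigmoid())

    def forward(self, support_feats, support_labels, query_feats, kg_embeds, n_way):
        # support_feats: [N*K, D_v], support_labels: [N*K] with values in {0..n_way-1}
        # query_feats:   [Q, D_v],   kg_embeds: [n_way, D_s]
        protos = torch.stack(
            [support_feats[support_labels == c].mean(dim=0) for c in range(n_way)]
        )                                                # [n_way, D_v] visual prototypes
        sem = self.sem_proj(kg_embeds)                   # [n_way, D_v] projected KG embeddings
        g = self.gate(torch.cat([protos, sem], dim=-1))  # [n_way, 1] mixing weights
        class_reps = g * protos + (1 - g) * sem          # fused class representations
        # Negative squared Euclidean distance to each class representation as logits.
        logits = -torch.cdist(query_feats, class_reps) ** 2
        return logits                                    # [Q, n_way]


# Example 5-way 1-shot episode with random features (dimensions are assumptions).
if __name__ == "__main__":
    model = FusedPrototypeClassifier(visual_dim=64, semantic_dim=32)
    support = torch.randn(5, 64)                 # one support image per class
    labels = torch.arange(5)
    queries = torch.randn(15, 64)
    kg = torch.randn(5, 32)                      # KG embedding per class
    print(model(support, labels, queries, kg, n_way=5).shape)  # torch.Size([15, 5])
```

In a training loop of this kind, the logits would be passed to a cross-entropy loss over episodes so that the projection and gate learn how much to trust the semantic branch when the visual prototypes are built from very few images.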