Paper ID | IVMSP-8.5
Paper Title | ADA-SISE: ADAPTIVE SEMANTIC INPUT SAMPLING FOR EFFICIENT EXPLANATION OF CONVOLUTIONAL NEURAL NETWORKS
Authors | Mahesh Sudhakar, Sam Sattarzadeh, Konstantinos N. Plataniotis, University of Toronto, Canada; Jongseong Jang, Yeonjeong Jeong, Hyunwoo Kim, LG AI Research, South Korea
Session | IVMSP-8: Machine Learning for Image Processing II
Location | Gather.Town
Session Time | Wednesday, 09 June, 13:00 - 13:45
Presentation Time | Wednesday, 09 June, 13:00 - 13:45
Presentation | Poster
Topic | Image, Video, and Multidimensional Signal Processing: [IVTEC] Image & Video Processing Techniques
Abstract |
Explainable AI (XAI) is an active research area that aims to interpret a neural network's decisions and to ensure transparency and trust in task-specific learned models. Recently, perturbation-based model analysis has produced better interpretations, but backpropagation techniques still prevail because of their computational efficiency. In this work, we combine both approaches into a hybrid visual explanation algorithm and propose an efficient interpretation method for convolutional neural networks. Our method adaptively selects the most critical feature maps, those that contribute most towards a prediction, and uses them to probe the model. Experimental results show that the proposed method reduces execution time while maintaining competitive interpretability, without compromising the quality of the generated explanations.
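
The abstract describes adaptively selecting the most activated feature maps and using them to probe the model. The sketch below is a minimal illustrative approximation, not the authors' Ada-SISE implementation: it assumes a stand-in PyTorch CNN (`TinyCNN`), scores the last convolutional block's feature maps by their mean activation, keeps only those above a simple mean-based threshold (an assumed adaptive rule), and averages their confidence-weighted perturbation masks into a saliency map.

```python
# Illustrative sketch only (not the authors' Ada-SISE code): keep the most-activated
# feature maps, upsample and normalize them into perturbation masks, and weight each
# mask by the model's confidence for the target class on the masked input.
import torch
import torch.nn.functional as F


class TinyCNN(torch.nn.Module):
    """Stand-in CNN so the sketch runs end to end without pretrained weights."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.features = torch.nn.Sequential(
            torch.nn.Conv2d(3, 16, 3, padding=1), torch.nn.ReLU(),
            torch.nn.MaxPool2d(2),
            torch.nn.Conv2d(16, 32, 3, padding=1), torch.nn.ReLU(),
        )
        self.classifier = torch.nn.Linear(32, num_classes)

    def forward(self, x):
        feats = self.features(x)            # (B, 32, H/2, W/2) feature maps
        pooled = feats.mean(dim=(2, 3))     # global average pooling
        return self.classifier(pooled)


def adaptive_saliency(model, image, target_class):
    """Hedged sketch: adaptive feature-map selection + perturbation scoring."""
    model.eval()
    with torch.no_grad():
        feats = model.features(image)                   # (1, C, h, w) activations
        scores = feats.mean(dim=(2, 3)).squeeze(0)      # per-map mean activation
        keep = scores > scores.mean()                   # assumed adaptive threshold
        selected = feats[0, keep]                       # only the most active maps

        saliency = torch.zeros(image.shape[-2:])
        for fmap in selected:
            mask = F.interpolate(fmap[None, None], size=image.shape[-2:],
                                 mode="bilinear", align_corners=False)[0, 0]
            mask = (mask - mask.min()) / (mask.max() - mask.min() + 1e-8)
            masked = image * mask                        # perturb input with the mask
            prob = F.softmax(model(masked), dim=1)[0, target_class]
            saliency += prob * mask                      # confidence-weighted mask
    return saliency / (saliency.max() + 1e-8)


if __name__ == "__main__":
    model = TinyCNN()
    image = torch.rand(1, 3, 64, 64)                     # dummy input image
    heatmap = adaptive_saliency(model, image, target_class=3)
    print(heatmap.shape)                                 # torch.Size([64, 64])
```

Discarding weakly activated feature maps before the perturbation stage is what keeps the number of forward passes small, which is the efficiency gain the abstract claims; the specific threshold and weighting used here are illustrative assumptions.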