Paper ID | IVMSP-14.1
Paper Title | NMF-SAE: AN INTERPRETABLE SPARSE AUTOENCODER FOR HYPERSPECTRAL UNMIXING
Authors | Fengchao Xiong, Nanjing University of Science and Technology, China; Jun Zhou, Griffith University, Australia; Minchao Ye, China Jiliang University, China; Jianfeng Lu, Nanjing University of Science and Technology, China; Yuntao Qian, College of Computer Science, China
Session | IVMSP-14: Hyperspectral Imaging
Location | Gather.Town
Session Time | Wednesday, 09 June, 15:30 - 16:15
Presentation Time | Wednesday, 09 June, 15:30 - 16:15
Presentation | Poster
Topic | Image, Video, and Multidimensional Signal Processing: [IVTEC] Image & Video Processing Techniques
Abstract |
Hyperspectral unmixing is an important tool for learning the material constitution and distribution of a scene. Model-based unmixing methods depend on well-designed iterative optimization algorithms, which are usually time-consuming. Learning-based methods perform unmixing in a data-driven manner but, due to their lack of physical interpretability, rely heavily on the quality and quantity of the training samples. In this paper, we combine the advantages of both model-based and learning-based methods and propose a nonnegative matrix factorization (NMF) inspired sparse autoencoder (NMF-SAE) for hyperspectral unmixing. NMF-SAE consists of an encoder and a decoder, both of which are constructed by unrolling the iterative optimization rules of $L_1$ sparsity-constrained NMF for the linear spectral mixture model. All parameters in our method are obtained by end-to-end training in a data-driven manner. Our network is not only physically interpretable and flexible but also achieves higher learning capacity with fewer parameters. Experimental results on both synthetic and real-world data demonstrate that our method produces desirable unmixing results when compared against several alternative approaches.
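As a point of reference for the unrolling idea described in the abstract, a standard $L_1$ sparsity-constrained NMF formulation and its multiplicative abundance update can be written as below; the notation ($\mathbf{X}$ observed spectra, $\mathbf{E}$ endmembers, $\mathbf{A}$ abundances, $\lambda$ sparsity weight) and the specific update rule are illustrative assumptions, not necessarily the exact rules unrolled in the paper.

$$
\min_{\mathbf{A} \ge 0} \; \tfrac{1}{2}\,\|\mathbf{X} - \mathbf{E}\mathbf{A}\|_F^2 + \lambda \|\mathbf{A}\|_1,
\qquad
\mathbf{A}^{(k+1)} = \mathbf{A}^{(k)} \odot \frac{\mathbf{E}^{\top}\mathbf{X}}{\mathbf{E}^{\top}\mathbf{E}\,\mathbf{A}^{(k)} + \lambda}
$$

Under this reading, each unrolled update $k$ corresponds to one encoder layer that refines the abundance estimate, quantities such as $\mathbf{E}$ and $\lambda$ become trainable parameters learned end to end, and the decoder reconstructs the spectra through the linear mixture $\mathbf{E}\mathbf{A}$.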