2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information

Technical Program

Paper Detail

Paper ID: MLSP-20.6
Paper Title: Learning a Sparse Generative Non-Parametric Supervised Autoencoder
Authors: Michel Barlaud, University Cote d'azur, France; Frederic Guyard, Orange Labs, France
Session: MLSP-20: Attention and Autoencoder Networks
Location: Gather.Town
Session Time: Wednesday, 09 June, 15:30 - 16:15
Presentation Time: Wednesday, 09 June, 15:30 - 16:15
Presentation: Poster
Topic: Machine Learning for Signal Processing: [MLR-DEEP] Deep learning techniques
Abstract: This paper concerns supervised generative non-parametric autoencoders. Classical methods are based on variational autoencoders (VAEs), which encourage the latent space to fit a prior distribution, such as a Gaussian. However, they tend to draw stronger assumptions from the data, often leading to higher asymptotic bias when the model is wrong. In this paper, we relax the parametric distribution assumption in the latent space and propose to learn a non-parametric data distribution of the clusters in the latent space. The network encourages the latent space to fit a distribution learned from the labels instead of a parametric prior assumption. We have built a network architecture that incorporates the labels into the autoencoder latent space, and we define a global criterion combining classification and reconstruction losses. In addition, we propose an $\ell_{1,1}$ regularization, which has the advantage of sparsifying the network and improving the clustering. Finally, we propose a tailored algorithm to minimize the criterion under this constraint. We demonstrate the effectiveness of our method on the popular image dataset MNIST and on two biological datasets.
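The abstract describes a global criterion combining classification and reconstruction losses with an $\ell_{1,1}$ sparsity penalty on the network weights. The sketch below is a minimal NumPy illustration of such a criterion, not the authors' implementation: the loss choices (mean squared error for reconstruction, cross-entropy for classification) and the weights `lam` and `mu` are assumptions for illustration only.

```python
import numpy as np

def l11_norm(W):
    # l_{1,1} norm of a weight matrix: sum of absolute values of all
    # entries, which promotes sparsity in the network weights
    return np.abs(W).sum()

def combined_criterion(x, x_hat, y_onehot, y_pred, W, lam=1.0, mu=0.01):
    # Reconstruction term: mean squared error between input and decoder output
    recon = np.mean((x - x_hat) ** 2)
    # Classification term: cross-entropy between labels and predicted
    # class probabilities (assumed softmax outputs)
    eps = 1e-12
    ce = -np.mean(np.sum(y_onehot * np.log(y_pred + eps), axis=1))
    # Global criterion: reconstruction + classification + l_{1,1} sparsity
    return recon + lam * ce + mu * l11_norm(W)

# Toy usage: perfect reconstruction and classification, so only the
# sparsity penalty contributes (mu * |W|_{1,1} = 0.01 * 6 = 0.06)
x = np.zeros((1, 4))
y = np.array([[1.0, 0.0]])
W = np.array([[1.0, -2.0], [0.0, 3.0]])
loss = combined_criterion(x, x, y, y, W)
```

In the paper the sparsity term is handled as a constraint with a tailored minimization algorithm rather than as a simple additive penalty; the additive form above is only the simplest way to show how the three terms interact.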