2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information

Technical Program

Paper Detail

Paper ID: IVMSP-23.3
Paper Title: RoutingGAN: Routing Age Progression and Regression with Disentangled Learning
Authors: Zhizhong Huang, Junping Zhang, Hongming Shan, Fudan University, China
Session: IVMSP-23: Applications 1
Location: Gather.Town
Session Time: Thursday, 10 June, 15:30 - 16:15
Presentation Time: Thursday, 10 June, 15:30 - 16:15
Presentation: Poster
Topic: Image, Video, and Multidimensional Signal Processing: [IVARS] Image & Video Analysis, Synthesis, and Retrieval
Abstract: Although impressive results have been achieved for age progression and regression, two major issues remain in methods based on generative adversarial networks (GANs): 1) methods based on conditional GANs (cGANs) can learn various effects between any two age groups in a single model, but are insufficient to characterize some specific patterns because the convolution filters are completely shared; and 2) GAN-based methods that use several models to learn effects independently can capture specific patterns, but are cumbersome and require age labels in advance. To address these deficiencies and obtain the best of both worlds, this paper introduces a dropout-like GAN-based method (RoutingGAN) that routes different effects in a high-level semantic feature space. Specifically, we first disentangle the age-invariant features from the input face, and then gradually add the effects to these features through residual routers that assign convolution filters to different age groups by dropping out the outputs of the others. As a result, the proposed RoutingGAN can simultaneously learn various effects in a single model, with convolution filters partially shared to learn specific effects. Experimental results on two benchmark datasets demonstrate superior performance over existing methods both qualitatively and quantitatively.
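The dropout-like routing idea in the abstract — keeping only the filter outputs assigned to the target age group, zeroing the rest, and adding the result to the age-invariant features as a residual — can be sketched as follows. This is a simplified NumPy illustration under our own assumptions (the function name `route_residual` and the channel-wise group assignment are hypothetical, not from the paper):

```python
import numpy as np

def route_residual(features, filter_outputs, group_assignment, age_group):
    """Dropout-like residual routing (illustrative sketch).

    features:         age-invariant features, shape (N, C, H, W)
    filter_outputs:   outputs of all convolution filters, shape (N, C, H, W)
    group_assignment: array of length C mapping each channel to an age group
    age_group:        the target age group to route toward
    """
    # Binary mask: 1 for channels assigned to the target group, 0 otherwise.
    mask = (group_assignment == age_group).astype(features.dtype)
    # "Drop out" the outputs of filters belonging to other age groups.
    routed = filter_outputs * mask[None, :, None, None]
    # Residual connection: add the selected age effects to the features.
    return features + routed

# Toy usage: 4 channels, channels 0-1 assigned to group 0, channels 2-3 to group 1.
feats = np.zeros((1, 4, 2, 2))
outs = np.ones((1, 4, 2, 2))
assignment = np.array([0, 0, 1, 1])
result = route_residual(feats, outs, assignment, age_group=1)
```

In this sketch only the channels assigned to the requested group contribute to the residual, so filters are shared across the network while each group's specific effect stays isolated.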