Paper ID | MMSP-3.5
Paper Title | DrawGAN: Text to Image Synthesis with Drawing Generative Adversarial Networks
Authors | Zhiqiang Zhang, Jinjia Zhou, Hosei University, Japan; Wenxin Yu, Ning Jiang, Southwest University of Science and Technology, China
Session | MMSP-3: Multimedia Synthesis and Enhancement
Location | Gather.Town
Session Time | Wednesday, 09 June, 14:00 - 14:45
Presentation Time | Wednesday, 09 June, 14:00 - 14:45
Presentation | Poster
Topic | Multimedia Signal Processing: Signal Processing for Multimedia Applications
Abstract |
In this paper, we propose a novel drawing generative adversarial network (DrawGAN) for text-to-image synthesis. The model divides image synthesis into three stages that imitate the process of drawing: the first stage synthesizes a simple contour image from the text description, the second stage generates a foreground image with detailed information, and the third stage synthesizes the final result. Through this step-by-step synthesis, proceeding from simple to complex and from easy to difficult, the model draws the corresponding result at each stage and ultimately achieves higher-quality image synthesis. Our method is validated on the Caltech-UCSD Birds 200 (CUB) dataset and the Microsoft Common Objects in Context (MS COCO) dataset. The experimental results demonstrate the effectiveness and superiority of our method: in both subjective and objective evaluation, its results surpass those of existing state-of-the-art methods.
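The abstract describes a coarse-to-fine pipeline but no implementation details. Below is a minimal PyTorch sketch of the three-stage idea (contour, detailed foreground, final image), written only to illustrate the staged conditioning on a text embedding. All class names, channel widths, and resolutions here (Stage, DrawPipeline, text_dim=128, 64x64 output) are hypothetical assumptions, not the authors' DrawGAN architecture.

```python
# Hypothetical sketch of a three-stage, drawing-style text-to-image
# generator as described in the abstract: stage 1 draws a coarse contour
# image from the text, stage 2 adds foreground detail, stage 3 produces
# the final result. Names and sizes are illustrative, not the paper's code.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class Stage(nn.Module):
    """One drawing stage: refines the previous image, conditioned on text."""

    def __init__(self, text_dim=128, img_ch=3, hidden=64):
        super().__init__()
        self.encode = conv_block(img_ch + text_dim, hidden)
        self.refine = conv_block(hidden, hidden)
        self.to_img = nn.Sequential(nn.Conv2d(hidden, img_ch, 3, padding=1), nn.Tanh())

    def forward(self, prev_img, text_emb):
        # Broadcast the sentence embedding over the spatial grid and
        # concatenate it with the previous stage's image.
        b, _, h, w = prev_img.shape
        t = text_emb.view(b, -1, 1, 1).expand(b, text_emb.size(1), h, w)
        feat = self.refine(self.encode(torch.cat([prev_img, t], dim=1)))
        return self.to_img(feat)


class DrawPipeline(nn.Module):
    """Chains three stages: contour -> detailed foreground -> final image."""

    def __init__(self, noise_dim=100, text_dim=128, base_res=64):
        super().__init__()
        self.base_res = base_res
        # Map noise + text to an initial coarse canvas for stage 1.
        self.fc = nn.Linear(noise_dim + text_dim, 3 * base_res * base_res)
        self.contour = Stage(text_dim)
        self.foreground = Stage(text_dim)
        self.final = Stage(text_dim)

    def forward(self, z, text_emb):
        x = torch.tanh(self.fc(torch.cat([z, text_emb], dim=1)))
        x = x.view(-1, 3, self.base_res, self.base_res)
        c = self.contour(x, text_emb)      # stage 1: simple contour
        f = self.foreground(c, text_emb)   # stage 2: foreground detail
        return self.final(f, text_emb)     # stage 3: final result


if __name__ == "__main__":
    g = DrawPipeline()
    z = torch.randn(2, 100)   # noise vectors
    t = torch.randn(2, 128)   # placeholder sentence embeddings
    print(g(z, t).shape)      # torch.Size([2, 3, 64, 64])
```

In a full GAN setup each stage's output would also feed a stage-specific discriminator and adversarial loss; that training machinery is omitted here since the abstract does not specify it.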