Next Steps for Image Synthesis using Semantic Segmentation
Image synthesis conditioned on a desired semantic layout can support many tasks in self-driving, since it allows existing challenging datasets to be augmented with realistic-looking images of scenes for which we lack sufficient data. Our goal is to improve the quality of images generated by a conditional Generative Adversarial Network (cGAN). We focus on the class of problems where images are generated from semantic inputs, such as scene segmentation masks or human body poses. To this end, we change the architecture of the discriminator so that it better guides the generator. The improvements we present are generic and simple enough that any cGAN architecture can benefit from them. Our experiments show the benefits of our framework on different tasks and datasets. This paper describes the preliminary findings of our study on the discriminator structure.
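For context, the sketch below shows a generic baseline conditional discriminator of the kind the abstract refers to: a pix2pix-style PatchGAN that is conditioned on the semantic segmentation mask by channel concatenation. It is a minimal illustration under assumed settings (PyTorch, illustrative channel counts such as 35 mask channels and 256×512 crops), not the discriminator modification proposed in the paper.

```python
import torch
import torch.nn as nn


class ConditionalPatchDiscriminator(nn.Module):
    """Baseline cGAN discriminator: scores image patches as real/fake,
    conditioned on the semantic segmentation mask via channel concatenation.
    (Illustrative baseline only; not the paper's proposed architecture.)"""

    def __init__(self, image_channels=3, mask_channels=35, base_width=64):
        super().__init__()
        in_channels = image_channels + mask_channels  # condition by concat
        widths = [base_width, base_width * 2, base_width * 4]
        layers = []
        prev = in_channels
        for w in widths:
            layers += [
                nn.Conv2d(prev, w, kernel_size=4, stride=2, padding=1),
                nn.LeakyReLU(0.2, inplace=True),
            ]
            prev = w
        # Final 1-channel map: one real/fake logit per receptive-field patch.
        layers += [nn.Conv2d(prev, 1, kernel_size=4, stride=1, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, image, mask):
        # image: (B, 3, H, W) real photo or generator output
        # mask:  (B, C, H, W) one-hot semantic segmentation (assumed C=35)
        return self.net(torch.cat([image, mask], dim=1))


if __name__ == "__main__":
    disc = ConditionalPatchDiscriminator()
    image = torch.randn(2, 3, 256, 512)   # assumed Cityscapes-sized crop
    mask = torch.randn(2, 35, 256, 512)   # assumed one-hot class maps
    print(disc(image, mask).shape)        # per-patch real/fake logits
```

In this conditioning scheme the discriminator sees the image and its segmentation mask jointly, so it can penalize images that are realistic but inconsistent with the requested layout; architectural changes to this component are what the abstract refers to as better guiding the generator.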