Research Article

Dual conditional GAN based on external attention for semantic image synthesis

Article: 2259120 | Received 03 Jun 2023, Accepted 10 Sep 2023, Published online: 04 Oct 2023
 

Abstract

Although existing semantic image synthesis methods based on generative adversarial networks (GANs) have achieved great success, the quality of the generated images is still unsatisfactory. This is mainly due to two reasons. First, the information in the semantic layout is sparse. Second, a single constraint cannot effectively control the positional relationships between objects in the generated image. To address these problems, we propose a dual-conditional GAN based on external attention for semantic image synthesis (DCSIS). In DCSIS, an adaptive normalization method uses the one-hot encoded semantic layout to generate the first latent space, and external attention uses the RGB-encoded semantic layout to generate the second latent space. The two latent spaces control the shapes of objects and the positional relationships between objects in the generated image. A graph attention (GAT) module is added to the generator to strengthen the relationships between different categories in the generated image, and a graph convolutional segmentation network (GSeg) is designed to learn information for each category. Experiments on several challenging datasets demonstrate the advantages of our method over existing approaches in terms of both visual quality and representative evaluation criteria.
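The abstract describes two conditioning paths: an adaptive normalization driven by the one-hot layout and an external-attention block driven by the RGB-encoded layout. The sketch below is not the authors' implementation; it is a minimal illustration assuming a SPADE-style adaptive normalization and the standard external-attention formulation with two small linear memory units. All module names, channel sizes, and the class count are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveNorm(nn.Module):
    """SPADE-style normalization: gamma/beta are predicted from the one-hot layout (assumption)."""
    def __init__(self, channels, num_classes, hidden=128):
        super().__init__()
        self.norm = nn.BatchNorm2d(channels, affine=False)
        self.shared = nn.Sequential(nn.Conv2d(num_classes, hidden, 3, padding=1), nn.ReLU())
        self.gamma = nn.Conv2d(hidden, channels, 3, padding=1)
        self.beta = nn.Conv2d(hidden, channels, 3, padding=1)

    def forward(self, x, onehot_layout):
        # Resize the layout to the feature resolution, then modulate the normalized features.
        seg = F.interpolate(onehot_layout, size=x.shape[2:], mode="nearest")
        h = self.shared(seg)
        return self.norm(x) * (1 + self.gamma(h)) + self.beta(h)


class ExternalAttention(nn.Module):
    """External attention: two linear memory units shared across all spatial positions."""
    def __init__(self, channels, mem_size=64):
        super().__init__()
        self.mk = nn.Linear(channels, mem_size, bias=False)
        self.mv = nn.Linear(mem_size, channels, bias=False)

    def forward(self, x):
        b, c, h, w = x.shape
        feat = x.flatten(2).transpose(1, 2)                     # (B, HW, C)
        attn = F.softmax(self.mk(feat), dim=1)                  # normalize over positions
        attn = attn / (attn.sum(dim=2, keepdim=True) + 1e-6)    # double normalization
        out = self.mv(attn).transpose(1, 2).reshape(b, c, h, w)
        return x + out                                          # residual connection


# Toy usage: condition a generator feature map on both layout encodings.
x = torch.randn(1, 256, 32, 32)            # generator features
onehot = torch.randn(1, 35, 32, 32)        # one-hot layout, 35 classes (illustrative)
rgb_feat = torch.randn(1, 256, 32, 32)     # RGB layout already embedded to 256 channels (assumption)
x = AdaptiveNorm(256, 35)(x, onehot)
x = x + ExternalAttention(256)(rgb_feat)
```

How the two outputs are actually fused, and how the GAT module and the GSeg segmentation network attach to the generator, is specified in the full paper rather than the abstract, so the simple addition above is only a placeholder for that fusion step.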

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

The work described in this article was supported by the Hubei Province University Student Innovation and Entrepreneurship Training and the Hubei University of Technology Graduate Research Innovation Project [grant number 4306.22019], and by the National Natural Science Foundation of China [grant number 61300127].