Indexed by:
Abstract:
Text-to-Image (T2I) synthesis refers to the computational task of translating a natural-language description, given as keywords or sentences, into an image with the same semantic meaning. T2I has achieved great success in several areas, especially in generating vivid, realistic visual and photographic images. In recent years, advances in deep learning have brought new methods to the unsupervised learning field, providing models that can generate visually natural images with suitably trained neural networks. However, T2I still faces challenges: current models struggle to generate high-resolution images containing multiple objects, and the results of many approaches can be difficult to reproduce. Since most present methods are based on GANs (Generative Adversarial Networks), this paper focuses on GAN-based approaches and systematically reviews the development of T2I built on them. Because datasets play a supporting role in task development, several current T2I-related datasets are also discussed to further explain T2I's progress. A brief discussion of future work and challenges is presented at the end of the paper. © 2022 IEEE.
Keyword:
Reprint Author's Address:
Email:
Source :
Year: 2022
Page: 843-847
Language: English
Cited Count:
SCOPUS Cited Count: 1
ESI Highly Cited Papers on the List: 0
WanFang Cited Count:
Chinese Cited Count:
30 Days PV: 4
Affiliated Colleges: