
Author:

Li, J. | Sun, T. | Yang, Z. | Yuan, Z.

Indexed by:

EI Scopus

Abstract:

Text-to-Image (T2I) synthesis refers to the computational task of translating a natural-language description, given as keywords or sentences, into images whose semantic content matches the text. T2I has achieved notable success in several areas, especially in generating vivid, realistic, and photographic images. In recent years, advances in deep learning have brought new methods to the unsupervised learning area, providing models that generate visually natural images with suitably trained neural networks. However, T2I still faces challenges: current models struggle to generate high-resolution images containing multiple objects, and the results of many approaches are difficult to reproduce. Since most present methods are based on GANs (Generative Adversarial Networks), this paper focuses on GAN-based approaches and systematically surveys the development of GAN-based T2I. Because datasets also play a supporting role in the task's development, several current T2I-related datasets are discussed to further explain T2I's progress. A brief discussion of future work and remaining challenges concludes the paper. © 2022 IEEE.
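The GAN-based T2I pipeline the abstract surveys can be sketched in a few lines: a generator maps a text embedding plus random noise to an image, and a discriminator scores whether an image matches the text. The sketch below is a minimal illustration with random weights standing in for trained parameters; all layer sizes and function names are illustrative assumptions, not the architecture of any specific paper.

```python
import numpy as np

rng = np.random.default_rng(0)

TEXT_DIM, NOISE_DIM, IMG_PIXELS = 128, 100, 64 * 64 * 3

# Random weights stand in for trained parameters (single linear layer each,
# purely for illustration -- real models use deep convolutional networks).
W_g = rng.standard_normal((TEXT_DIM + NOISE_DIM, IMG_PIXELS)) * 0.01
W_d = rng.standard_normal((IMG_PIXELS + TEXT_DIM, 1)) * 0.01

def generator(text_emb, noise):
    """Map a text embedding plus noise to a flat 'image' in [-1, 1]."""
    h = np.concatenate([text_emb, noise])
    return np.tanh(h @ W_g)

def discriminator(image, text_emb):
    """Score whether an image matches the text (sigmoid output in (0, 1))."""
    h = np.concatenate([image, text_emb])
    return 1.0 / (1.0 + np.exp(-(h @ W_d)))

text_emb = rng.standard_normal(TEXT_DIM)   # e.g. from a sentence encoder
noise = rng.standard_normal(NOISE_DIM)

fake = generator(text_emb, noise)          # conditioned sample
score = discriminator(fake, text_emb)      # match score for the (image, text) pair
print(fake.shape, float(score))
```

Training would alternate gradient steps so the discriminator separates real matching pairs from fakes (and mismatched pairs), while the generator learns to fool it; that adversarial loop is what the surveyed methods build on.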

Keyword:

Text-to-Image; Neural Network; Synthesis; GAN

Author Community:

  • [ 1 ] [Li J.]Xi'an University of Architecture and Technology, Xi'an, China
  • [ 2 ] [Sun T.]Beijing University of Technology, Beijing, China
  • [ 3 ] [Yang Z.]The Ohio State University, Ohio, United States
  • [ 4 ] [Yuan Z.]Beijing University of Technology, Beijing, China

Year: 2022

Page: 843-847

Language: English

SCOPUS Cited Count: 1

ESI Highly Cited Papers on the List: 0
