Abstract:
Given a corrupted image, image inpainting aims to complete the missing content and output a plausible result. Completing the missing region usually relies on borrowing information from the known area alone, which is unguided and often produces unsatisfactory results. In practice, however, other information is often available for a corrupted image, such as a text description. Therefore, we introduce the use of text information to guide image inpainting. To fulfill this task, we propose an inpainting model named TGNet (Text-Guided Inpainting Network). We provide a text-image gated feature fusion module that deeply fuses text and image features. A mask attention module is proposed to enhance the consistency between the known and repaired regions. Extensive quantitative and qualitative experiments on three public datasets with captions demonstrate the effectiveness of our method. © 2022 IEEE.
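The abstract names two components, a text-image gated feature fusion module and a mask attention module, but does not describe their internals. The following is a minimal, hypothetical PyTorch sketch of how a gated text-image fusion of this general kind is commonly built; the class name, tensor shapes, and gating layout are illustrative assumptions, not TGNet's actual design.

```python
# Hypothetical sketch of a gated text-image fusion; NOT the paper's
# actual TGNet module (its internals are not given in the abstract).
import torch
import torch.nn as nn

class GatedTextImageFusion(nn.Module):
    def __init__(self, img_channels: int, text_dim: int):
        super().__init__()
        # Project the sentence embedding to the image channel width.
        self.text_proj = nn.Linear(text_dim, img_channels)
        # A gate computed from the concatenated features decides,
        # per channel and per pixel, how much text to inject.
        self.gate = nn.Sequential(
            nn.Conv2d(img_channels * 2, img_channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, img_feat: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        # img_feat: (B, C, H, W); text_emb: (B, D) sentence embedding.
        b, c, h, w = img_feat.shape
        txt = self.text_proj(text_emb).view(b, c, 1, 1).expand(b, c, h, w)
        g = self.gate(torch.cat([img_feat, txt], dim=1))
        # Gated blend of broadcast text features and image features.
        return g * txt + (1.0 - g) * img_feat

# Usage sketch with dummy tensors.
fusion = GatedTextImageFusion(img_channels=64, text_dim=256)
out = fusion(torch.randn(2, 64, 32, 32), torch.randn(2, 256))
print(out.shape)  # torch.Size([2, 64, 32, 32])
```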
Year: 2022
Page: 1804-1807
Language: English
SCOPUS Cited Count: 3
ESI Highly Cited Papers on the List: 0