Abstract:
Referring image segmentation identifies object masks from images under the guidance of input natural language expressions. Many remarkable cross-modal decoders have been devoted to this task, but these models face two key challenges. First, they usually fail to extract fine-grained boundary and gradient information from images. Second, they usually fail to explore the language associations among image pixels. In this work, a Multi-scale Gradient-balanced Central Difference Convolution (MG-CDC) and a Graph convolutional network-based Language and Image Fusion (GLIF) for the cross-modal encoder, together called Graph-RefSeg, are designed. Specifically, in the shallow layers of the encoder, MG-CDC captures comprehensive fine-grained image features, enhancing the perception of target boundaries and providing effective guidance for deeper encoding layers. In each encoder layer, GLIF performs cross-modal fusion, exploring the correlation between each pixel and its corresponding language vectors via a graph neural network. Since the encoder achieves robust cross-modal alignment and context mining, a lightweight decoder suffices for segmentation prediction. Extensive experiments show that the proposed Graph-RefSeg outperforms state-of-the-art methods on three public datasets. Code and models will be made publicly available at https://github.com/ZYQ111/Graph_refseg. © 2024 The Authors. IET Image Processing published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology.
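For readers unfamiliar with the building block the abstract names, below is a minimal PyTorch sketch of a plain central difference convolution (in the spirit of Yu et al.'s CDC), which blends a vanilla convolution with a convolution over centre-relative pixel differences. The class name, the theta balance parameter, and the shapes are illustrative assumptions; the paper's multi-scale, gradient-balanced variant (MG-CDC) and the GLIF graph fusion are not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CentralDifferenceConv2d(nn.Module):
    """Plain central difference convolution (sketch, not the paper's MG-CDC).

    Output = vanilla_conv(x) - theta * difference_term(x), where the
    difference term convolves (x(p_n) - x(p_0)), i.e. each neighbour
    minus the patch centre, sharpening boundary/gradient responses.
    """
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1, theta=0.7):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                              padding=padding, bias=False)
        self.theta = theta  # assumed balance between vanilla and difference terms

    def forward(self, x):
        out = self.conv(x)  # vanilla convolution term
        if self.theta == 0.0:
            return out
        # Convolving the centre-relative differences with the kernel is
        # equivalent to subtracting a 1x1 conv whose weight is the
        # spatial sum of the full kernel.
        kernel_diff = self.conv.weight.sum(dim=(2, 3), keepdim=True)
        out_diff = F.conv2d(x, kernel_diff, stride=self.conv.stride, padding=0)
        return out - self.theta * out_diff

# Usage: a 64-channel feature map passed through the sketch layer.
x = torch.randn(2, 64, 56, 56)
layer = CentralDifferenceConv2d(64, 128)
print(layer(x).shape)  # torch.Size([2, 128, 56, 56])
```

With theta = 0 the layer reduces to an ordinary convolution; larger theta weights the local-gradient response more heavily, which is the intuition behind using such layers in shallow encoder stages for boundary perception.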
Source: IET Image Processing
ISSN: 1751-9659
Year: 2024
Issue: 4
Volume: 18
Page: 1083-1095
Impact Factor: 2.300 (JCR@2022)
Cited Count:
WoS CC Cited Count: 0
SCOPUS Cited Count: 1
ESI Highly Cited Papers on the List: 0