Abstract:
The rapid development of social networks has brought great convenience to people's lives and has accumulated large amounts of cross-media big data, such as text, images, and videos. Cross-media search lets users quickly query this data and obtain helpful content from social networks. However, cross-media data in social networks suffer from semantic gaps and sparsity, which pose challenges for cross-media search. To alleviate these problems, we propose a cross-media search method based on complementary attention and generative adversarial networks (CAGS). To obtain high-quality feature representations, we build a complementary attention mechanism that captures both the focused and unfocused features of images, realizing a consistent association of cross-media data in social networks. By designing a cross-media adversarial learning process, we obtain a common semantic representation of cross-media data, further alleviating the semantic gap and sparsity issues. Finally, we perform a similarity calculation to realize an accurate cross-media search. We construct four search tasks on two standard cross-media data sets to verify the search performance of the proposed CAGS.
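The abstract's final step, similarity calculation over a learned common semantic space, can be illustrated with a minimal sketch. This is not the authors' implementation: the embeddings, item names, and the choice of cosine similarity are assumptions for illustration only; in CAGS the vectors would come from the trained adversarial network.

```python
# Illustrative sketch only: once a cross-media model maps text and images
# into a common semantic space, search reduces to ranking gallery items
# by similarity to the query. All vectors below are made-up examples.

from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(query_vec, gallery):
    """Rank (item_id, vector) pairs by descending similarity to the query."""
    scored = [(item_id, cosine_similarity(query_vec, vec))
              for item_id, vec in gallery]
    return sorted(scored, key=lambda t: t[1], reverse=True)

# Hypothetical common-space embeddings: a text query against image items.
text_query = [0.9, 0.1, 0.2]
images = [("img_a", [0.8, 0.2, 0.1]),
          ("img_b", [0.1, 0.9, 0.3]),
          ("img_c", [0.7, 0.1, 0.4])]

ranking = search(text_query, images)
print(ranking[0][0])  # id of the most similar image
```

The same routine works in either direction (image query against text items), since both modalities live in the one shared space.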
Source: INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS
ISSN: 0884-8173
Year: 2021
Issue: 8
Volume: 37
Page: 4393-4416
Impact Factor: 7.000 (JCR@2022)
ESI Discipline: ENGINEERING
ESI HC Threshold: 87
JCR Journal Grade:1
Cited Count:
WoS CC Cited Count: 23
SCOPUS Cited Count: 23
ESI Highly Cited Papers on the List: 0