Abstract:
Recommender systems play a crucial role in providing personalized services but face significant challenges from data sparsity and long-tail bias. Researchers have sought to address these issues with self-supervised contrastive learning, which primarily relies on self-supervised signals to enhance embedding quality. Despite the performance improvement, such task-independent contrastive learning contributes little to the recommendation task itself. To adapt contrastive learning to the task, we propose a preference contrastive learning (PCL) model that contrasts the preferences of user-item pairs to model users' interests, instead of performing self-supervised user-user/item-item discrimination. This supervised contrastive scheme works on a single view of the interaction graph and no longer requires additional data augmentation or multi-view contrasting. Results on public datasets show that the proposed PCL outperforms state-of-the-art models, demonstrating that preference contrast is better suited than self-supervised contrast for personalized recommendation.
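The record does not include the authors' implementation, but the core idea of contrasting user-item preference pairs in a single view can be illustrated with a minimal sketch. The function name preference_contrastive_loss, the temperature value, and the in-batch negative sampling below are assumptions made for illustration, not details taken from the paper.

```python
# Minimal sketch (not the authors' code): a supervised, single-view
# preference-contrastive loss. Each observed user-item interaction is a
# positive pair; the other items in the batch act as negatives, so no
# graph augmentation or second view is needed.
import torch
import torch.nn.functional as F

def preference_contrastive_loss(user_emb, item_emb, temperature=0.2):
    """user_emb, item_emb: [batch, dim] embeddings of interacting user-item pairs."""
    u = F.normalize(user_emb, dim=-1)
    v = F.normalize(item_emb, dim=-1)
    # Similarity of every user in the batch to every item in the batch.
    logits = u @ v.t() / temperature
    # The i-th item is the positive for the i-th user; all others are negatives.
    labels = torch.arange(u.size(0), device=u.device)
    return F.cross_entropy(logits, labels)

# Hypothetical usage with randomly initialized embeddings:
users = torch.randn(256, 64)
items = torch.randn(256, 64)
loss = preference_contrastive_loss(users, items)
```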
Source:
PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT IX
ISSN: 0302-9743
Year: 2024
Volume: 14433
Page: 356-367
Cited Count:
WoS CC Cited Count: 1
SCOPUS Cited Count: 1
ESI Highly Cited Papers on the List: 0