Abstract:
Counterfactual reasoning has recently achieved impressive performance in explainable recommendation. However, existing counterfactual explanation methods ignore the realism of explanations and consider only their sparsity and proximity. Moreover, the huge counterfactual space makes the search process time-consuming. In this study, we propose Prototype-Guided Counterfactual Explanations (PGCE), a novel counterfactual explainable recommendation framework that overcomes the above issues. At its core, PGCE leverages a variational auto-encoder generative model to constrain feature modifications so that the generated counterfactual instances are consistent with the distribution of real data. Meanwhile, we construct a contrastive prototype for each user in a low-dimensional latent space, which guides the search toward the optimal candidate instance space and thus speeds up the search process. For evaluation, we compare our method with several state-of-the-art model-intrinsic methods, as well as the latest counterfactual reasoning-based method, on three real-world datasets. Extensive experiments show that our model not only efficiently generates realistic counterfactual explanations but also achieves state-of-the-art performance on other popular explainability evaluation metrics. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
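The abstract only sketches the architecture. Below is a minimal, hypothetical illustration (not the authors' released code) of how a prototype-guided counterfactual search in a VAE latent space could look: the decoder keeps candidates close to the data distribution, while a proximity term and a pull toward the user's contrastive prototype shape the search. All names (SimpleVAE, counterfactual_search, the weights lambda_prox / lambda_proto) and the exact objective form are assumptions for illustration, not the paper's formulation.

```python
# Sketch of prototype-guided counterfactual search in a VAE latent space.
# Assumptions: the recommender returns a scalar score to be pushed down,
# and the prototype is a fixed latent vector for the user.
import torch
import torch.nn as nn

class SimpleVAE(nn.Module):
    """Tiny VAE over user-item feature vectors (illustrative only)."""
    def __init__(self, dim_in, dim_z):
        super().__init__()
        self.enc = nn.Linear(dim_in, 2 * dim_z)  # outputs mean and log-variance
        self.dec = nn.Linear(dim_z, dim_in)

    def encode(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        return mu, logvar

    def decode(self, z):
        return self.dec(z)

def counterfactual_search(vae, recommender, x, prototype,
                          steps=200, lr=0.05,
                          lambda_prox=1.0, lambda_proto=0.5):
    """Search the latent space for a counterfactual x' that lowers the
    recommendation score while staying close to x and moving toward the
    user's contrastive prototype (assumed objective form)."""
    mu, _ = vae.encode(x)
    z = mu.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        x_cf = vae.decode(z)                       # decoded candidate stays near the data manifold
        flip_loss = recommender(x_cf)              # drive the recommendation score down
        prox_loss = (x_cf - x).abs().sum()         # sparsity / proximity of the feature edit
        proto_loss = (z - prototype).pow(2).sum()  # pull the latent code toward the prototype
        loss = flip_loss + lambda_prox * prox_loss + lambda_proto * proto_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return vae.decode(z).detach()

# Toy usage with stand-in components (all hypothetical).
vae = SimpleVAE(dim_in=20, dim_z=8)
recommender = lambda x: torch.sigmoid(x.sum())  # stand-in scoring model
x = torch.randn(20)
prototype = torch.zeros(8)                      # stand-in contrastive prototype
x_cf = counterfactual_search(vae, recommender, x, prototype)
```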
ISSN: 0302-9743
Year: 2023
Volume: 14174 LNAI
Page: 652-668
Language: English
WoS CC Cited Count: 0
ESI Highly Cited Papers on the List: 0