Abstract:
The long-tailed distribution of monitoring data poses challenges for deep learning-based fault diagnosis (FD). Recent efforts utilizing supervised contrastive learning (SCL) and reweighted losses have made progress, but have overlooked two key issues: 1) prevailing random undersampling introduces sample influence bias and leads to suboptimal model learning; and 2) focusing only on improving average FD accuracy compromises fundamental fault judgement (FJ), heightening missed-detection and false-alarm risks that are unacceptable in real-world deployment. To fill these research gaps, this paper proposes a granularity knowledge-sharing SCL (GKSSCL) framework for long-tailed FD, encompassing GKS supervised contrasting and GKS classification stages. In the former, normal data are clustered into multiple fine-grained subclasses that are similar in size to the fault categories for balanced contrasting. Moreover, a mixed-granularity contrastive loss facilitates knowledge sharing across granularities. In the latter, the FJ and FD tasks are trained concurrently through a knowledge graph-based adaptive sharing strategy. Experiments on two fault datasets show that GKSSCL can effectively harness all normal data, eliminate sample influence bias, and enhance FD precision without sacrificing FJ reliability.
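The following is a minimal sketch of one plausible reading of the GKS supervised-contrasting stage described above, not the authors' released code: it assumes a k-means split of the normal class into fine-grained subclasses and a SupCon-style loss applied at two label granularities. Function names (split_normal_into_subclasses, mixed_granularity_loss), the cluster count, and the alpha weighting are illustrative assumptions.

import numpy as np
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

def split_normal_into_subclasses(features, labels, normal_label=0, n_subclasses=5):
    """Re-label normal samples with k-means subcluster ids so each subclass is
    roughly the size of a fault class (hypothetical preprocessing step)."""
    labels = labels.copy()
    normal_idx = np.where(labels == normal_label)[0]
    sub_ids = KMeans(n_clusters=n_subclasses, n_init=10).fit_predict(features[normal_idx])
    # Offset subclass ids so they do not collide with existing fault labels.
    labels[normal_idx] = labels.max() + 1 + sub_ids
    return labels

def supcon_loss(embeddings, labels, temperature=0.1):
    """Standard supervised contrastive loss over one batch of embeddings."""
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / temperature
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    pos_mask.fill_diagonal_(0)                        # exclude self-pairs from positives
    logits_mask = torch.ones_like(pos_mask).fill_diagonal_(0)
    exp_sim = torch.exp(sim) * logits_mask            # exclude self-pairs from the denominator
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True) + 1e-12)
    pos_per_anchor = pos_mask.sum(dim=1).clamp(min=1)
    return -(pos_mask * log_prob).sum(dim=1).div(pos_per_anchor).mean()

def mixed_granularity_loss(embeddings, coarse_labels, fine_labels, alpha=0.5):
    """Hypothetical mixed-granularity objective: combine the coarse (normal vs. fault types)
    and fine (normal subclasses) contrastive losses to share knowledge across granularities."""
    return alpha * supcon_loss(embeddings, coarse_labels) + \
           (1 - alpha) * supcon_loss(embeddings, fine_labels)

In this reading, the coarse labels keep all normal subclasses tied to the single normal class, while the fine labels give the contrastive loss balanced, similarly sized groups; the knowledge graph-based adaptive sharing between FJ and FD in the classification stage is not sketched here.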
Source:
KNOWLEDGE-BASED SYSTEMS
ISSN: 0950-7051
Year: 2024
Volume: 301
Impact Factor: 8.800 (JCR@2022)
Cited Count:
WoS CC Cited Count: 2
SCOPUS Cited Count: 2
ESI Highly Cited Papers on the List: 0