Abstract:
Probabilistic linear discriminant analysis (PLDA) is a highly effective feature extraction approach that has been applied extensively and successfully in supervised learning tasks. It measures model errors with the squared L2-norm, which implicitly assumes Gaussian noise. However, the noise in real-life applications may not follow a Gaussian distribution; in particular, the squared L2-norm can severely exaggerate the influence of data outliers. To address this issue, this article proposes a robust PLDA model under a Laplacian noise assumption, called L1-PLDA. The learning procedure expresses the Laplacian density as a superposition of infinitely many Gaussian distributions by introducing a new latent variable, and then adopts the variational expectation-maximization (EM) algorithm to learn the model parameters. The most significant advantage of the new model is that the introduced latent variable can be used to detect data outliers. Experiments on several public databases show the superiority of the proposed L1-PLDA model in terms of both classification and outlier detection.
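The "superposition of an infinite number of Gaussian distributions" mentioned in the abstract corresponds to the standard scale-mixture representation of the Laplace distribution; a minimal sketch of that identity is given below. The scale b and the latent variance tau are illustrative symbols, not notation taken from the paper, and the paper's exact parameterization may differ.

% Laplace density written as a continuous mixture of zero-mean Gaussians,
% where the latent variance \tau follows an exponential distribution.
\[
\frac{1}{2b}\exp\!\left(-\frac{|x|}{b}\right)
  = \int_{0}^{\infty}
      \underbrace{\frac{1}{\sqrt{2\pi\tau}}\exp\!\left(-\frac{x^{2}}{2\tau}\right)}_{\mathcal{N}(x\,\mid\,0,\,\tau)}
      \;\underbrace{\frac{1}{2b^{2}}\exp\!\left(-\frac{\tau}{2b^{2}}\right)}_{\mathrm{Exp}\left(\tau\,\mid\,1/(2b^{2})\right)}
      \, d\tau .
\]

Under such a representation, each data point carries its own latent variance, and a variational EM procedure of the kind described in the abstract would infer a posterior over these variances; points assigned a large latent variance contribute less to the fit and can be flagged as outliers, which is consistent with the outlier-detection use of the latent variable stated above.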
Source:
IEEE TRANSACTIONS ON CYBERNETICS
ISSN: 2168-2267
Year: 2022
Issue: 3
Volume: 52
Page: 1616-1627
Impact Factor: 11.8 (JCR@2022)
ESI Discipline: COMPUTER SCIENCE
ESI HC Threshold: 46
JCR Journal Grade:1
CAS Journal Grade:1
Cited Count:
WoS CC Cited Count: 9
SCOPUS Cited Count: 8
ESI Highly Cited Papers on the List: 0