
Author:

Ji, Q. | Sun, Y. | Gao, J. | Hu, Y. | Yin, B.

Indexed by:

EI Scopus SCIE

Abstract:

In deep clustering frameworks, autoencoder (AE)- and variational-AE-based clustering approaches are the most popular and competitive, since they encourage the model to learn suitable representations while avoiding degenerate solutions. However, for the clustering task, the decoder that reconstructs the original input is usually useless once training is finished, and the encoder-decoder architecture limits the depth of the encoder, severely reducing its learning capacity. In this article, we propose a decoder-free variational deep embedding for unsupervised clustering (DFVC). It is well known that minimizing the reconstruction error amounts to maximizing a lower bound on the mutual information (MI) between the input and its representation, which provides a theoretical justification for discarding the bloated decoder. Inspired by contrastive self-supervised learning, we directly compute or estimate the MI of the continuous variables. Specifically, we investigate unsupervised representation learning by simultaneously considering MI estimation for continuous representations and MI computation for categorical representations. By introducing data augmentation, we incorporate the original input, the augmented input, and their high-level representations into the MI estimation framework to learn more discriminative representations. Instead of adversarially matching a simple standard normal distribution, we use end-to-end learning to constrain the latent space to be cluster-friendly by applying a Gaussian mixture distribution as the prior. Extensive experiments on challenging data sets show that our model outperforms a wide range of state-of-the-art clustering approaches. © 2012 IEEE.
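The contrastive MI estimation the abstract refers to is commonly realized with an InfoNCE-style lower bound between two augmented views of the same input. The following is a minimal NumPy sketch of that bound, not the authors' actual DFVC implementation; the function name, temperature value, and toy data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def info_nce_bound(z1, z2, temperature=0.5):
    """InfoNCE lower bound on the MI between paired representations.

    Rows of z1 and z2 are positive pairs (e.g., representations of an
    input and its augmentation); all other rows act as negatives.
    """
    # Cosine-similarity logits between every pair in the batch.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature                      # shape (N, N)
    # Row-wise log-softmax; the diagonal holds the positive pairs.
    logits -= logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    n = z1.shape[0]
    # I(z1; z2) >= log N + mean log p(positive | row)
    return np.log(n) + log_prob[np.arange(n), np.arange(n)].mean()

# Toy check: correlated views yield a larger bound than independent noise.
x = rng.normal(size=(128, 16))
corr = info_nce_bound(x, x + 0.1 * rng.normal(size=x.shape))
indep = info_nce_bound(x, rng.normal(size=(128, 16)))
assert corr > indep
```

Because the bound is capped at log N (the batch size), larger batches are needed to certify larger MI values, which is one reason contrastive objectives are typically trained with large batches.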

Keyword:

Augmented mutual information (MI); deep clustering; variational embedding; self-supervised learning (SSL)

Author Community:

  • [ 1 ] [Ji, Q.]Beijing University of Technology, Faculty of Information Technology, Beijing, 100124, China
  • [ 2 ] [Sun, Y.]Beijing University of Technology, Faculty of Information Technology, Beijing, 100124, China
  • [ 3 ] [Gao, J.]The University of Sydney Business School, The University of Sydney, Discipline of Business Analytics, Sydney, NSW 2006, Australia
  • [ 4 ] [Hu, Y.]Beijing University of Technology, Faculty of Information Technology, Beijing, 100124, China
  • [ 5 ] [Yin, B.]Beijing University of Technology, Faculty of Information Technology, Beijing, 100124, China

Reprint Author's Address:

  • [Sun, Y.]Beijing University of Technology, China; [Yin, B.]Beijing University of Technology, China

Source :

IEEE Transactions on Neural Networks and Learning Systems

ISSN: 2162-237X

Year: 2022

Issue: 10

Volume: 33

Page: 5681-5693

Impact Factor: 10.4 (JCR@2022)

ESI Discipline: COMPUTER SCIENCE;

ESI HC Threshold:46

JCR Journal Grade:1

CAS Journal Grade:1

Cited Count:

WoS CC Cited Count: 0

SCOPUS Cited Count: 19

ESI Highly Cited Papers on the List: 0

WanFang Cited Count:

Chinese Cited Count:

30 Days PV: 13

Affiliated Colleges:
