Prob-CLR: A probabilistic approach to learn discriminative representation

Authors:

Highlights:

Abstract

One of the core objectives of unsupervised representation learning is to learn discriminative representations without human supervision. Generally speaking, discriminative representations should be easy to assign to the desired clusters because of their small intra-class distances and large inter-class distances. However, the representations learned by most previous works cannot be clustered effectively and must be fine-tuned for clustering. These fine-tuning processes usually rely on prior knowledge of the dataset, which may not be easy to acquire in real-world scenarios. Moreover, the fine-tuning network needs to be retrained whenever the prior knowledge changes, which limits its adaptability. This paper proposes a probability-based contrastive learning method (Prob-CLR) that aims to learn discriminative representations without prior knowledge of the given dataset. Prob-CLR works by encoding input data into multivariate Gaussian distributions and imposing contrastive learning from two different probability perspectives, namely distribution-wise and sample-wise regulations. The learned representations are discriminative and can be assigned to the desired clusters by K-means. Extensive experimental results show that Prob-CLR outperforms state-of-the-art methods on five benchmarks. In addition, Prob-CLR performs stably in multi-grained clustering.
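The abstract describes encoding inputs as multivariate Gaussians and contrasting them at both the distribution and sample level. Below is a minimal NumPy sketch of that general idea, not the paper's exact losses: the encoder, the KL-based distribution-wise term, and the InfoNCE sample-wise term are all illustrative assumptions, with a toy linear encoder and random data standing in for real augmented views.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_mu, W_logvar):
    # Toy linear "encoder" mapping inputs to diagonal-Gaussian parameters
    # (a stand-in for the paper's network; illustrative only).
    return x @ W_mu, x @ W_logvar

def kl_diag_gaussians(mu1, logvar1, mu2, logvar2):
    # KL( N(mu1, var1) || N(mu2, var2) ) for diagonal Gaussians, per sample.
    var1, var2 = np.exp(logvar1), np.exp(logvar2)
    return 0.5 * np.sum(
        logvar2 - logvar1 + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0, axis=1
    )

def info_nce(z1, z2, temperature=0.5):
    # Sample-wise contrastive (InfoNCE) loss between two views' samples.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature             # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # positives on the diagonal

# Two augmented "views" of a toy batch (random data, for illustration).
N, D, K = 8, 16, 4
x1, x2 = rng.normal(size=(N, D)), rng.normal(size=(N, D))
W_mu = rng.normal(size=(D, K)) * 0.1
W_logvar = rng.normal(size=(D, K)) * 0.01

mu1, lv1 = encode(x1, W_mu, W_logvar)
mu2, lv2 = encode(x2, W_mu, W_logvar)

# Distribution-wise regulation: pull the two views' Gaussians together.
dist_loss = kl_diag_gaussians(mu1, lv1, mu2, lv2).mean()

# Sample-wise regulation: reparameterized samples fed into InfoNCE.
z1 = mu1 + np.exp(0.5 * lv1) * rng.normal(size=mu1.shape)
z2 = mu2 + np.exp(0.5 * lv2) * rng.normal(size=mu2.shape)
samp_loss = info_nce(z1, z2)

total_loss = dist_loss + samp_loss
```

In a real training loop both terms would be minimized by gradient descent over the encoder's weights; the learned means could then be clustered directly with K-means, as the abstract suggests.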

Keywords: Representation learning, Unsupervised learning, Clustering

Article history: Received 2 April 2021, Revised 13 June 2021, Accepted 20 July 2021, Available online 22 July 2021, Version of Record 29 July 2021.

DOI: https://doi.org/10.1016/j.knosys.2021.107329