Hyperspherically regularized networks for self-supervision

Authors:

Highlights:

• Unlike its contrastive counterparts, Bootstrap Your Own Latent does not distribute its feature representations uniformly.

• Explicit uniformity loss terms introduce batch dependencies that are computationally undesirable (see the sketch after this list).

• Minimizing the hyperspherical energy between network neurons improves representation uniformity and separability.

• Regularization methods play a key role in the distribution of feature representations in latent space.


Keywords: Self-supervised learning, Representation learning, Representation separability, Image classification

Article history: Received 27 March 2022, Accepted 25 May 2022, Available online 30 May 2022, Version of Record 1 July 2022.

DOI: https://doi.org/10.1016/j.imavis.2022.104494