A Review on SimCLR (A Simple Framework for Contrastive Learning of Visual Representations)
Author: Nikhil Narayan
Submitted on: 5 Dec 2020
Abstract – This paper reviews the working principle of the SimCLR algorithm, developed by research scientists at Google Brain. The SimCLR framework combines a carefully chosen composition of data augmentations with a learnable nonlinear transformation (a projection head) between the representation and the contrastive loss, substantially improving the quality of the learned representations while remaining simpler than other contrastive learning approaches such as MoCo (Momentum Contrast) and PIRL (Pretext-Invariant Representation Learning).
Keywords – self-supervised learning, contrastive learning, contrastive loss, pretext task/auxiliary task, downstream tasks, NT-Xent loss
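To make the NT-Xent loss named in the keywords concrete, here is a minimal NumPy sketch. It assumes (for illustration only) that embeddings arrive as a 2N-row matrix in which rows 2k and 2k+1 are the two augmented views of example k; the function name and pairing convention are not from the paper.

```python
import numpy as np

def nt_xent_loss(z, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss.

    z: array of shape (2N, d), where rows 2k and 2k+1 are the two
    augmented views of example k (an illustrative pairing convention).
    """
    # L2-normalize embeddings so dot products become cosine similarities
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = (z @ z.T) / temperature
    # Exclude each sample's similarity with itself from the softmax
    np.fill_diagonal(sim, -np.inf)
    n = z.shape[0]
    # Index of each row's positive partner: 0<->1, 2<->3, ...
    pos = np.arange(n) ^ 1
    # Log-softmax over all other samples in the batch
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    # Negative log-likelihood of the positive pair, averaged over the batch
    return -log_prob[np.arange(n), pos].mean()
```

With two identical pairs at temperature 1.0, each positive gets probability e/(e+2), so the loss is log(e+2) - 1 ≈ 0.5514; lowering the temperature sharpens the softmax and changes the scale of the loss.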