SimCLR
MLP Dimension on ResNet Module
Hi there, in models/resnet_simclr.py we have the following:
dim_mlp = self.backbone.fc.in_features
self.backbone.fc = nn.Sequential(nn.Linear(dim_mlp, dim_mlp), nn.ReLU(), self.backbone.fc)
Shouldn't the Linear be nn.Linear(dim_mlp, 128)?
That way we would get the 128-dimensional hidden vector used in the paper.
Hello. As I recall, out_dim is the dimension of the output embedding produced by the projection head for contrastive learning, while dim_mlp refers to the output feature dimension of ResNet-18 or ResNet-50. The projection head is added after the backbone, and since the backbone's original fc layer already maps dim_mlp to out_dim, the final output does end up at 128 dimensions.