self-supervised-pretraining
chexpert config file (BYOL)
Hi, thanks for the great work!
I have some questions about training BYOL on chexpert dataset.
- Why is the number of iterations multiplied by 2 in the config file? I ask because the code introduces another factor of 2 through `update_interval`, which is 2 for BYOL. For example, to train for 50k iterations, the total number of training iterations becomes 50k x 2 x 2 (`update_interval`) = 200k.
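To make sure I'm reading the code correctly, here is the arithmetic I mean as a small sketch (the variable names are mine, not from the repo):

```python
# Illustrative only -- my understanding of how the factors stack up.
target_iters = 50_000      # iterations I actually want to train for
config_multiplier = 2      # the extra x2 I see in the config file
update_interval = 2        # BYOL's update_interval in the code
total_iters = target_iters * config_multiplier * update_interval
print(total_iters)  # 200000
```

Is this 200k total the intended behavior, or should the config value already account for `update_interval`?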
- What is the default number of GPUs for training BYOL? Is it 4, with a total batch size of 128 (`imgs_per_gpu=32`)?
- Why is `lr` set to `4.8/16`? Is this related to the number of GPUs?
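My guess is that this follows the linear scaling rule (lr proportional to batch size), with 4.8 being the learning rate for some large reference batch; a sketch of that guess, where the reference batch of 4096 and the effective batch of 256 are my assumptions, not values I found in the repo:

```python
# Illustrative only -- my guess at where 4.8/16 comes from, assuming
# the linear scaling rule: lr = base_lr * batch_size / base_batch_size.
base_lr = 4.8            # lr for a reference batch of 4096 (assumption)
base_batch = 4096
effective_batch = 256    # e.g. 128 imgs x update_interval of 2? (assumption)
scaled_lr = base_lr * effective_batch / base_batch
print(scaled_lr)  # 0.3, i.e. exactly 4.8 / 16
```

Is this the right reading, or is the divisor 16 tied to something else (e.g. the GPU count)?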