BYOL-PyTorch
Will apex opt_level affect performance?
I noticed that you set opt_level='O0', which runs pure FP32 training instead of mixed-precision training. What would happen with opt_level='O1' or a higher opt_level?
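For reference, here is a minimal sketch of how the apex opt_level is selected at initialization. This assumes apex is installed and a CUDA device is available; the tiny model and optimizer are placeholders for illustration, not the ones used in this repo:

```python
import torch
from apex import amp

model = torch.nn.Linear(512, 512).cuda()          # placeholder network
optimizer = torch.optim.SGD(model.parameters(), lr=0.03)

# 'O0' = pure FP32, 'O1' = patched mixed precision (apex's recommended
# default), 'O2' = "almost FP16" mixed precision, 'O3' = pure FP16.
model, optimizer = amp.initialize(model, optimizer, opt_level='O1')

x = torch.randn(32, 512, device='cuda')
loss = model(x).pow(2).mean()                     # placeholder loss

# Under O1/O2 the backward pass must go through amp's loss scaler.
with amp.scale_loss(loss, optimizer) as scaled_loss:
    scaled_loss.backward()
optimizer.step()
```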
I have tried opt_level='O1' and 'O2'; they gave very close results but did not show much of a speedup in training time.
My guess is that this is due to a communication bottleneck; one rough way to check that is sketched below.
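A hypothetical way to test the communication-bottleneck guess in a DDP setup: time one training step with the gradient all-reduce and one without, using DDP's no_sync() context manager. If the gap is large, communication dominates the step, which would mask any FP16 compute speedup from O1/O2. The model and loss here are placeholders:

```python
import time
from contextlib import nullcontext

import torch

def step_time(ddp_model, batch, sync_grads=True):
    # no_sync() skips the gradient all-reduce for this backward pass.
    ctx = nullcontext() if sync_grads else ddp_model.no_sync()
    torch.cuda.synchronize()
    start = time.time()
    with ctx:
        loss = ddp_model(batch).pow(2).mean()   # placeholder loss
        loss.backward()
    torch.cuda.synchronize()                    # wait for GPU work to finish
    ddp_model.zero_grad(set_to_none=True)
    return time.time() - start
```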
Thx.