BYOL-PyTorch
PyTorch implementation of "Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning" with DDP and Apex AMP
Hi. Thanks for sharing your code. I trained on the ImageNet data based on your implementation. The trained model's performance is around 44.3% under KNN evaluation. However, your checkpoint gives around...
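For readers unfamiliar with the KNN evaluation mentioned above: self-supervised checkpoints are often scored by embedding the training set into a memory bank and classifying each validation embedding by a majority vote over its k nearest neighbors. A minimal pure-Python sketch (not the repository's evaluation code; function names are illustrative):

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def knn_predict(query, bank, labels, k=3):
    """Classify `query` by majority vote over the k memory-bank
    embeddings with the highest cosine similarity."""
    order = sorted(range(len(bank)),
                   key=lambda i: cosine(query, bank[i]),
                   reverse=True)
    votes = [labels[i] for i in order[:k]]
    return Counter(votes).most_common(1)[0][0]
```

In practice the memory bank holds L2-normalized backbone features for the whole training set, and the reported percentage is top-1 accuracy of this vote over the validation set.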
Hi, thanks for your excellent work! I'm trying to reproduce BYOL using albumentations transforms. However, I noticed that in [https://github.com/yaox12/BYOL-PyTorch/blob/master/data/byol_transform_a.py#L43](https://github.com/yaox12/BYOL-PyTorch/blob/master/data/byol_transform_a.py#L43) you set the `sigma_limit` to `(0.1, 0.2)`, instead of `(0.1,...
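For context on the question above: the BYOL paper samples the Gaussian-blur sigma uniformly from [0.1, 2.0], and (if I read albumentations correctly) `GaussianBlur`'s `sigma_limit` is the range that uniform draw comes from, so `(0.1, 0.2)` would blur far more weakly than the paper's augmentation. A stdlib sketch of the sampling (the constant name is illustrative):

```python
import random

# Sigma range used in the BYOL paper's blur augmentation.
PAPER_SIGMA_LIMIT = (0.1, 2.0)

def sample_sigma(sigma_limit=PAPER_SIGMA_LIMIT):
    """Draw a blur sigma uniformly from sigma_limit, the way
    albumentations' GaussianBlur interprets its sigma_limit arg."""
    lo, hi = sigma_limit
    return random.uniform(lo, hi)
```

With `sigma_limit=(0.1, 0.2)` every draw stays below 0.2, i.e. nearly no blur, which is presumably the discrepancy the issue is pointing out.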
I notice that you set `opt_level='O0'`, which is FP32 training rather than mixed-precision training. What would happen when using `opt_level='O1'` or a higher `opt_level`?
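One reason `'O1'` and above behave differently: Apex's mixed-precision modes keep some tensors in FP16, whose smallest subnormal is about 6e-8, so very small gradients underflow to zero unless the loss is scaled up before backprop (which is why AMP applies dynamic loss scaling). The effect can be demonstrated with stdlib half-precision round-tripping (illustrative only, not Apex code):

```python
import struct

def to_fp16(x):
    """Round-trip a Python float through IEEE-754 half precision,
    mimicking what storage in an FP16 tensor does to a gradient."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

tiny_grad = 1e-8
to_fp16(tiny_grad)          # flushes to 0.0: below FP16's subnormal range
to_fp16(tiny_grad * 65536)  # nonzero: loss scaling rescues the gradient
```

So with `'O1'` training usually matches FP32 accuracy thanks to loss scaling, but whether BYOL specifically stays stable under it is exactly the empirical question this issue raises.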
I noticed the paper "Momentum2 Teacher: Momentum Teacher with Momentum Statistics for Self-Supervised Learning". It is interesting work: its results show that training with all BN does not collapse. I doubt...
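Both BYOL and the momentum-teacher line of work rely on the same mechanism: the target (teacher) network is not trained by gradients but tracks the online network via an exponential moving average of its parameters. A minimal sketch of that update (parameters flattened to lists for illustration):

```python
def ema_update(target_params, online_params, m=0.996):
    """BYOL-style target-network update: each target parameter
    becomes m * target + (1 - m) * online. m=0.996 is the paper's
    base momentum; Momentum2 Teacher additionally applies a
    momentum-style update to the BN statistics themselves."""
    return [m * t + (1 - m) * o
            for t, o in zip(target_params, online_params)]
```

With m close to 1 the teacher changes slowly, which is part of why the online network has a stable regression target and representations do not collapse.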
Would you release the ImageNet Linear Classification code or configuration?
Hi, thanks for the implementation. Could you provide the (approximate) training time needed to produce the results in the table in your README?
Does the loss curve in the BYOL method always decrease, or does it decrease first and then increase? As I understand it, initially the target_network and...
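One thing that helps when reasoning about this curve: BYOL's regression loss is 2 − 2·cos(p, z) between the online prediction p and the target projection z, so it is bounded in [0, 4], and because the target is an EMA of the online network the target keeps moving, which makes short-term fluctuations normal even when the overall trend is down. A sketch of the per-pair loss (pure Python, not the repo's implementation):

```python
import math

def byol_loss(p, z):
    """BYOL regression loss for one (prediction, target-projection)
    pair: 2 - 2 * cosine_similarity(p, z). Equals 0 when the vectors
    are aligned and 4 when they point in opposite directions."""
    dot = sum(a * b for a, b in zip(p, z))
    norm_p = math.sqrt(sum(a * a for a in p))
    norm_z = math.sqrt(sum(b * b for b in z))
    return 2.0 - 2.0 * dot / (norm_p * norm_z)
```

The symmetrized training loss is this quantity averaged over both view orderings; its bounded range means "always decreasing" and "slightly bouncing" curves can both be healthy runs.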