
[Re] A Simple Framework for Contrastive Learning of Visual Representations

Open ADevillers opened this issue 2 years ago • 29 comments

Original article: T. Chen, S. Kornblith, M. Norouzi, and G. Hinton. “A simple framework for contrastive learning of visual representations.” In: International Conference on Machine Learning. PMLR, 2020, pp. 1597–1607

PDF URL: https://github.com/ADevillers/SimCLR/blob/main/report.pdf Metadata URL: https://github.com/ADevillers/SimCLR/blob/main/report.metadata.tex Code URL: https://github.com/ADevillers/SimCLR/tree/main

Scientific domain: Representation Learning Programming language: Python Suggested editor: @rougier

ADevillers avatar Nov 09 '23 16:11 ADevillers

Thanks for your submission and sorry for the delay. We'll assign an editor soon.

rougier avatar Nov 22 '23 07:11 rougier

@gdetor @benoit-girard @koustuvsinha Can any of you edit this submission?

rougier avatar Nov 22 '23 07:11 rougier

I can do it!

benoit-girard avatar Nov 22 '23 14:11 benoit-girard

Good news: @charlypg has agreed to review this paper and its companion!

benoit-girard avatar Dec 04 '23 09:12 benoit-girard

@pps121 would you like to review this paper? And possibly (or alternatively) its companion paper https://github.com/ReScience/submissions/issues/77 ? Let me know!

benoit-girard avatar Dec 04 '23 10:12 benoit-girard

Hello everybody. I am going to review SimCLR and then BYOL. I have a lot to do for my own research over the next two weeks, but I think I can deliver my review before the 25th. Is that okay for you? It will also depend on the required computational resources.

charlypg avatar Dec 05 '23 18:12 charlypg

@bsciolla would you like to review this paper? And possibly (or alternatively) its companion paper https://github.com/ReScience/submissions/issues/77 ? Let me know!

benoit-girard avatar Dec 21 '23 10:12 benoit-girard

@cJarvers would you like to review this paper? And possibly (or alternatively) its companion paper https://github.com/ReScience/submissions/issues/77 ? Let me know!

benoit-girard avatar Jan 19 '24 10:01 benoit-girard

@schmidDan would you like to review this paper? And possibly (or alternatively) its companion paper https://github.com/ReScience/submissions/issues/77 ? Let me know!

benoit-girard avatar Jan 19 '24 11:01 benoit-girard

@charlypg do you have an idea when you could be able to deliver your review?

benoit-girard avatar Jan 19 '24 11:01 benoit-girard

@mo-arvan would you like to review this paper? And possibly (or alternatively) its companion paper https://github.com/ReScience/submissions/issues/77 ? Let me know!

benoit-girard avatar Jan 19 '24 11:01 benoit-girard

@pena-rodrigo would you like to review this paper? And possibly (or alternatively) its companion paper https://github.com/ReScience/submissions/issues/77 ? Let me know!

benoit-girard avatar Jan 19 '24 11:01 benoit-girard

@bagustris would you like to review this paper? And possibly (or alternatively) its companion paper https://github.com/ReScience/submissions/issues/77 ? Let me know!

benoit-girard avatar Jan 19 '24 11:01 benoit-girard

@birdortyedi would you like to review this paper? And possibly (or alternatively) its companion paper https://github.com/ReScience/submissions/issues/77 ? Let me know!

benoit-girard avatar Jan 19 '24 11:01 benoit-girard

@MiWeiss would you like to review this paper? And possibly (or alternatively) its companion paper https://github.com/ReScience/submissions/issues/77 ? Let me know!

benoit-girard avatar Jan 19 '24 11:01 benoit-girard

> @MiWeiss would you like to review this paper? And possibly (or alternatively) its companion paper #77? Let me know!

Hi @benoit-girard. Unfortunately, I am currently not available - and I am afraid I also would not have quite the compute needed to run the code of this paper ;-)

MiWeiss avatar Jan 19 '24 20:01 MiWeiss

Hello everybody. I am really sorry for the delay. First of all, thank you for this work, which should benefit the community: reproduction in machine learning is always complicated, as tips and tricks are not always spelled out in the articles themselves.
Here are two lists, one for the good aspects and one for the problems I encountered.

Good:

  • Implementation tips and tricks are spelled out in the article
  • The code is clean and clear, which benefits the community
  • I could reproduce the CIFAR top-1 accuracy results
  • (PS: How do I reproduce top-5?)

Problems:

  • Config for a single GPU

    • Please provide a minimal config (CUDA version + conda env + requirements). I had some problems because I did not have the right CUDA/cuDNN version. Even though it takes only 5-10 minutes to fix, I think it is important.
    • The "six" module is not in the requirements
  • For evaluation I had the following problem: "Error tracker : world_size missing argument for tracker". So I set it to 1. What does world_size mean, and how should it be set?

  • With world_size=1 I could reproduce the evaluation results for CIFAR but not for ImageNet on Jean Zay. In the logs I obtain 59% top-1 accuracy instead of the 70% reported in the article, and I also get a warning: "WARNING: A reduction issue may have occurred (abs(50016.0 - 1563.0*1) >= 1)."

charlypg avatar Jan 31 '24 15:01 charlypg

Dear Reviewer (@charlypg),

Thank you very much for your insightful feedback.

I will do my best to provide the minimal configuration required to run the code on a single (non-Jean Zay) GPU machine as soon as possible. However, I would like to highlight a challenge: I currently do not have access to a machine with these specifications. My resources are limited to Jean Zay and a CPU-only laptop, which may complicate developing and testing the configuration (hopefully this will not be the case for long).

Regarding the "Error tracker: world_size missing argument for tracker" issue, this was my mistake (and it is now fixed). The error was indeed a typo on my part, introduced by recent code updates related to the warning mentioned right after in your review.

As for the warning "A reduction issue may have occurred (abs(50016.0 - 1563.0*1) >= 1)": this problem is attributable to an unresolved issue in PyTorch's distributed operations that can lead to inconsistent reductions and hence erroneous results (for further details, please refer to: https://discuss.pytorch.org/t/distributed-all-reduce-returns-strange-results/89248). Unfortunately, if this warning is triggered, it indicates that the results of the current epoch (often the final one) are unreliable. The recommended approach in this case is to restart the experiment from the previous checkpoint.
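For intuition, the consistency check behind this kind of warning can be sketched as follows. This is an illustrative reconstruction, not the repository's actual tracker code: here world_size is the number of distributed processes (GPUs) participating in the job, and the check verifies that an all-reduced (summed) per-process counter matches the locally observed count times world_size.

```python
# Hypothetical sketch of the consistency check behind the warning above.
# All names are illustrative, not taken from the actual repository.

def reduction_looks_consistent(reduced_count: float,
                               local_count: float,
                               world_size: int) -> bool:
    """After summing a per-process counter across `world_size` processes,
    the global value should equal local_count * world_size; a larger gap
    hints that a reduction was lost or duplicated."""
    return abs(reduced_count - local_count * world_size) < 1

# The numbers from the review fail the check with world_size=1 ...
assert not reduction_looks_consistent(50016.0, 1563.0, 1)
# ... but would pass with world_size=32, since 1563.0 * 32 == 50016.0.
assert reduction_looks_consistent(50016.0, 1563.0, 32)
```

On a single-GPU run, world_size is simply 1; in a multi-GPU Slurm job it equals the total number of processes launched.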

Regarding the top-5 accuracy metric, it should be automatically calculated and available through TensorBoard. Could you please clarify if you encountered any difficulties in accessing these results?

Best regards, Alexandre DEVILLERS

ADevillers avatar Feb 02 '24 14:02 ADevillers

Dear @ADevillers ,

Thank you for your response. I will try the evaluation on other checkpoints. By the way, what do "even" and "odd" mean with regard to checkpoints?

Thank you in advance, Charly PECQUEUX--GUÉZÉNEC

charlypg avatar Feb 02 '24 20:02 charlypg

Dear @charlypg,

To clarify this part of the checkpointing strategy, this involves alternating saves between "odd" and "even" checkpoints at the end of each respective epoch. This trick ensures that if a run fails during an odd-numbered epoch, we have the state from the preceding epoch in the "even" checkpoint, and vice versa.
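A minimal sketch of this alternating scheme (the file names here are illustrative, not the repository's actual naming):

```python
# Alternate between two checkpoint slots so that a crash while writing
# one slot never corrupts the other, older one. Names are illustrative.

def checkpoint_name(epoch: int) -> str:
    slot = "even" if epoch % 2 == 0 else "odd"
    return f"checkpoint_{slot}.pt"

# Epoch 10 overwrites the "even" slot, leaving epoch 9's "odd" file intact:
assert checkpoint_name(10) == "checkpoint_even.pt"
assert checkpoint_name(11) == "checkpoint_odd.pt"
```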

Please feel free to reach out if you have any further questions.

Best regards, Alexandre

ADevillers avatar Feb 06 '24 08:02 ADevillers

@charlypg : thanks a lot for the review.

benoit-girard avatar Feb 07 '24 14:02 benoit-girard

> @MiWeiss would you like to review this paper? And possibly (or alternatively) its companion paper #77? Let me know!

> Hi @benoit-girard. Unfortunately, I am currently not available - and I am afraid I also would not have quite the compute needed to run the code of this paper ;-)

Thanks a lot for your answer.

benoit-girard avatar Feb 07 '24 14:02 benoit-girard

@ReScience/reviewers I am looking for a reviewer with expertise in machine learning to review this submission and possibly (or alternatively) its companion paper https://github.com/ReScience/submissions/issues/77

benoit-girard avatar Feb 07 '24 15:02 benoit-girard

Dear @ADevillers ,

Thank you for your answer.

I have a question about the training. Once the job corresponding to "run_simclr_imagenet.slurm" has successfully ended, I only obtain one checkpoint, of the form "expe_[job_id]_[epoch_number].pt". If I understand your paper correctly (the "Jobs too long and checkpoints" paragraph), do you submit the same Slurm script multiple times to reach 800 epochs? If so, is the checkpoint from which you resume training the only thing you modify in the script?

Best regards, Charly PECQUEUX--GUÉZÉNEC

charlypg avatar Feb 07 '24 15:02 charlypg

Dear @charlypg ,

Yes, the script itself remains unchanged; the only thing that varies is the checkpoint used. For the first execution, no checkpoint is provided. Afterwards, I use the last checkpoint from the preceding job. This checkpoint contains all the pertinent state, including the current epoch, scheduler, optimizer, and model, allowing training to resume from where it was interrupted. Note that you should not modify the other hyperparameters while doing so, as this may lead to unexpected behavior.
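As a dependency-free sketch of this resume logic (the dict keys are assumptions, not necessarily those used in the actual repository, where torch.save/torch.load would handle the I/O):

```python
# Illustrative sketch of checkpoint-based resuming; key names are
# assumptions, not the repository's actual ones.

def make_checkpoint(epoch, model_state, optimizer_state, scheduler_state):
    """Bundle everything needed to resume training into one dict; the
    real code would persist it with something like torch.save(state, path)."""
    return {
        "epoch": epoch,
        "model": model_state,
        "optimizer": optimizer_state,
        "scheduler": scheduler_state,
    }

def resume_epoch(checkpoint):
    """Training resumes at the epoch following the one that was saved."""
    return checkpoint["epoch"] + 1

# A job interrupted after epoch 99 resumes at epoch 100:
ckpt = make_checkpoint(99, {}, {}, {})
assert resume_epoch(ckpt) == 100
```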

Best regards, Alexandre

ADevillers avatar Feb 14 '24 09:02 ADevillers

Dear @ADevillers ,

I am sorry for my late response.

I could reproduce the top-1 results on Jean Zay, so the reproduction seems convincing to me.

However, I cannot find the top-5 results. I saw there is a "runs" folder, but most of my evaluation results have not been stored in it.

Best regards, Charly PECQUEUX--GUÉZÉNEC

charlypg avatar Apr 03 '24 13:04 charlypg

Dear @charlypg,

Your runs should normally be stored in the "runs" folder in a format readable by TensorBoard, containing all the curves (including top-5 accuracy).

Note that, when starting from a checkpoint, the data are appended to the file corresponding to the checkpoint's run. Therefore, a run on ImageNet, even if it requires 6 to 7 restarts from checkpoints, will produce only one file (which will contain everything).

To find out where the issue could be, can you please answer the following questions:

  1. Is your "runs" folder empty?
  2. Have you been able to open tensorboard with the "runs" folder?
  3. If so, do you see any runs/curves?
  4. Are you able to find in the runs list the ones starting with the same ID as the first job of your run?
  5. If so, is there any curve you are able to see for these runs?

Best, Alexandre DEVILLERS

ADevillers avatar Apr 20 '24 12:04 ADevillers

@benoit-girard Gentle reminder

rougier avatar May 27 '24 12:05 rougier

@benoit-girard Any update on the second review?

rougier avatar Jul 11 '24 05:07 rougier