Hi, thanks for the great repo. In the line below, I'm wondering why we need `.detach()` — isn't `targets_u` already computed inside a `torch.no_grad()` context? https://github.com/YU1ut/MixMatch-pytorch/blob/cc7ef42cffe61288d06eec1428268b384674009a/train.py#L235
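For context, here is a minimal standalone sketch (not the repo's actual code; the model and names are made up) showing why the question arises: a tensor produced inside `torch.no_grad()` already has `requires_grad=False`, so a subsequent `.detach()` is a no-op for autograd.

```python
import torch

# Hypothetical stand-in for the model in train.py.
model = torch.nn.Linear(4, 3)

with torch.no_grad():
    logits = model(torch.randn(2, 4))
    targets_u = torch.softmax(logits, dim=1)

# Already detached from the graph: no_grad() disables gradient tracking.
print(targets_u.requires_grad)           # False even without .detach()
print(targets_u.detach().requires_grad)  # .detach() changes nothing here
```

The extra `.detach()` is therefore redundant in this spot, though it is sometimes kept as defensive documentation of intent.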
Hi, I'm wondering: in your implementation of pseudo-labeling, why did you use a non-zero loss for unlabeled samples whose maximum predicted probability is **below** the threshold?
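For reference, a minimal NumPy sketch of the usual FixMatch-style confidence masking that the question expects, where below-threshold samples contribute exactly zero loss (function and variable names are my own, not the repo's):

```python
import numpy as np

def masked_unsup_loss(probs_weak, probs_strong, threshold=0.95):
    """Cross-entropy on pseudo-labels, masked by prediction confidence.

    probs_weak:   softmax outputs on weakly augmented views (pseudo-label source)
    probs_strong: softmax outputs on strongly augmented views (prediction)
    """
    max_prob = probs_weak.max(axis=1)          # confidence per sample
    pseudo_label = probs_weak.argmax(axis=1)   # hard pseudo-label
    mask = max_prob >= threshold               # True only if confident
    ce = -np.log(probs_strong[np.arange(len(probs_weak)), pseudo_label])
    return (ce * mask).mean()                  # unconfident samples zeroed out

probs = np.array([[0.98, 0.01, 0.01],   # confident -> contributes loss
                  [0.40, 0.35, 0.25]])  # below threshold -> masked to zero
loss = masked_unsup_loss(probs, probs)
```

Here the second sample's loss term is multiplied by 0, so only confident samples affect the gradient — which is why a non-zero loss below the threshold is surprising.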
Hi, thank you for this amazing repo. In your implementation of FlexMatch, you update `selected_label` in this line: `selected_label[x_ulb_idx[select == 1]] = pseudo_lb[select == 1]` https://github.com/TorchSSL/TorchSSL/blob/f26e1d42967cec7f7c8a00c2e7ff9219d8ab7c92/models/flexmatch/flexmatch.py#L181 where the indicator...
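To make the indexing in that line concrete, here is a toy NumPy sketch (hypothetical values, not TorchSSL's actual state): `selected_label` is a dataset-wide buffer of the last confidently assigned pseudo-label per unlabeled example (-1 meaning none yet), and `select` masks which batch samples passed the confidence threshold.

```python
import numpy as np

selected_label = np.full(6, -1)   # one slot per unlabeled example in the dataset
x_ulb_idx = np.array([4, 0, 2])   # dataset indices of this batch's samples
pseudo_lb = np.array([1, 3, 0])   # pseudo-labels predicted for the batch
select = np.array([1, 0, 1])      # 1 = confidence passed the threshold

# Only confident samples write their pseudo-label back; x_ulb_idx maps
# batch positions to dataset-wide slots in selected_label.
selected_label[x_ulb_idx[select == 1]] = pseudo_lb[select == 1]
print(selected_label)   # [-1 -1  0 -1  1 -1]
```

So batch positions 0 and 2 (dataset slots 4 and 2) are updated, while the unconfident sample at slot 0 keeps its previous value.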
Dear Author, thanks for the great work. For these benchmarking results, did you run any hyperparameter searches? What is the size of the validation set? Looking forward to hearing from...
Dear Authors, thanks for providing this great repo! You mentioned in your FlexMatch paper that a batch-norm controller is introduced in the codebase to prevent performance crashes for some...
Dear Author, thank you for this exciting work! I have a clarification question regarding the experimental setting: did you use any validation data to tune the hyperparameters for CoMatch? How...
Dear Authors, your work is exciting! I am trying out your code with the example you provide, `python Train_CoMatch.py --n-labeled 40 --seed 1`. I am running on one A100 GPU....
Hi, I would like to ask a clarification question regarding the experimental setting: did you tune the hyperparameters for FixMatch or for the other compared baselines?
Hi, can you provide the commands to reproduce the results in the paper (like the `runs/` folder you provided in the MixMatch repo)? Thanks a lot, and thanks for the great...
Hi, it would be great if you could provide the code for Table 11 in your paper "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence". I found the ablation interesting and would...