DivideMix
Code for paper: DivideMix: Learning with Noisy Labels as Semi-supervised Learning
`prior = torch.ones(args.num_class)/args.num_class` `prior = prior.cuda()` `pred_mean = torch.softmax(logits, dim=1).mean(0)` `penalty = torch.sum(prior*torch.log(prior/pred_mean))` Since entropy is `-sum(p*log(p))`, why not `penalty = torch.sum(pred_mean*torch.log(prior/pred_mean))` instead?
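A small NumPy stand-in for the PyTorch snippet above may clarify the direction of the divergence (the class count and logits below are made up for illustration): the repo's penalty is `KL(prior || pred_mean)`, which is non-negative and minimized when the average prediction matches the uniform prior, whereas the proposed alternative equals `-KL(pred_mean || prior)`, so minimizing it would push the average prediction *away* from the prior.

```python
import numpy as np

num_class = 4
prior = np.ones(num_class) / num_class  # uniform class prior

# Hypothetical logits for a batch of two samples
logits = np.array([[2.0, 0.5, 0.1, -1.0],
                   [1.5, 1.0, 0.2, -0.5]])
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # softmax
pred_mean = probs.mean(axis=0)  # average predicted distribution

# Repo version: KL(prior || pred_mean) >= 0, zero iff pred_mean == prior,
# so minimizing it regularizes the average prediction toward uniform.
penalty = np.sum(prior * np.log(prior / pred_mean))

# Proposed alternative: sum(pred_mean * log(prior / pred_mean))
# = -KL(pred_mean || prior) <= 0; minimizing it would *increase* the
# divergence from the prior, the opposite of the intended regularization.
alt = np.sum(pred_mean * np.log(prior / pred_mean))

print(penalty, alt)
```

So the two expressions are not interchangeable: only the first acts as a penalty in the intended direction.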
Hi Li, I get poor performance on both CIFAR-10 and CIFAR-100 when the noise ratio is higher than 80%, i.e. the 80% and 90% columns in Table 1. All goes well with...
Hi, I haven't been able to find which hyper-parameters you use to train on CIFAR-100 with 40% asymmetric noise. Can you please tell me? Thank you! P.S: Awesome work!
Dear author, thank you very much for your excellent code. My recent work also tries to distinguish noisy labels from correct labels. I'm curious where you output the loss...
Hi, I am impressed by and interested in your outstanding work. However, I have a question about the batch size from experimenting with the code. When I increase the batch size...
On CIFAR-10 with a 0.2 noise rate, I can only get about 91.5% accuracy using the code, and I can't find the reason.
Why change (samples, channels, height, width) to (samples, height, width, channels) for CIFAR-10 and CIFAR-100 (`transpose((0, 2, 3, 1))`)? I wonder whether the last dimension is (samples, channels, height, width) or...
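A minimal sketch of the layout change the question refers to (the batch below is a dummy array, not real CIFAR data): the raw CIFAR binaries unpack as (samples, channels, height, width), i.e. NCHW, and `transpose((0, 2, 3, 1))` reorders them to (samples, height, width, channels), the NHWC layout that PIL-based image transforms expect for uint8 images.

```python
import numpy as np

# Dummy batch in the raw CIFAR layout: (samples, channels, height, width)
batch_nchw = np.zeros((4, 3, 32, 32), dtype=np.uint8)

# Reorder axes: keep samples first, move channels to the last dimension
batch_nhwc = batch_nchw.transpose(0, 2, 3, 1)

print(batch_nchw.shape, batch_nhwc.shape)  # (4, 3, 32, 32) (4, 32, 32, 3)
```

After the per-image transforms, `torchvision.transforms.ToTensor()` converts each HWC image back to the CHW tensor layout that PyTorch models consume.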
I couldn't reach the CIFAR-10N accuracy reported on the dataset's official website; my results are much worse. Any recommendations for hyperparameters?