
MNIST test_loader not being used as intended (wrong)!

Open · syomantak opened this issue 5 years ago • 3 comments

In the MNIST code, around line 57 (I think), the following line is wrong in my opinion:

for data, target in test_loader:

It should be replaced by:

for idx, (data, target) in test_loader:

The original code gives an error (int has no attribute .to(device)) because data ends up wrongly assigned an integer (the idx value). I have verified this fix works. The training code is correct, but surprisingly there is a mistake in the test code.
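For context, the error message in question is what you get when the integer index from enumerate() lands in data. The snippet below is a hypothetical, standalone reproduction (it is not the actual examples/mnist/main.py code; the dummy dataset and names are illustrative only):

import torch
from torch.utils.data import DataLoader, TensorDataset

device = torch.device('cpu')

# Dummy stand-in for the MNIST test loader.
loader = DataLoader(TensorDataset(torch.randn(4, 1, 28, 28),
                                  torch.randint(0, 10, (4,))),
                    batch_size=2)

try:
    # enumerate() yields (index, batch); unpacking that pair as (data, target)
    # puts the integer index into data and the whole batch into target.
    for data, target in enumerate(loader):
        data = data.to(device)
except AttributeError as err:
    print(err)  # prints: 'int' object has no attribute 'to'

Unpacking the index separately, e.g. for idx, (data, target) in enumerate(loader):, puts the integer in idx and the tensors in data and target, which avoids this error.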

syomantak · Apr 19 '20 06:04

@CyanideBoy would you like to make a PR to fix this?

msaroufim · Mar 09 '22 23:03

@msaroufim Sorry, I haven't used PyTorch in a while, forgot a lot of stuff 😅

syomantak · Mar 10 '22 18:03

@msaroufim I believe this issue has already been resolved.

I've examined the code, and the difference in the presence of batch_idx between the train and test functions is that the train function uses batch_idx to print a training log every log_interval batches. The test function has no per-batch log output, so batch_idx is unnecessary there. Furthermore, the test function currently runs without any issues.

if batch_idx % args.log_interval == 0:
    print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
        epoch, batch_idx * len(data), len(train_loader.dataset),
        100. * batch_idx / len(train_loader), loss.item()))
    if args.dry_run:
        break

I don't think the above logging block is needed in the test function, so there seems to be no reason to add batch_idx there (see the sketch of the two loop headers below).
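For comparison, here is a minimal standalone sketch of the two loop headers. It uses a dummy dataset as a stand-in for the example's MNIST loaders, so the data and names here are illustrative only:

import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy data standing in for the MNIST loaders used in the example.
dataset = TensorDataset(torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,)))
train_loader = DataLoader(dataset, batch_size=4)
test_loader = DataLoader(dataset, batch_size=4)

# train(): enumerate() supplies batch_idx for the periodic logging shown above.
for batch_idx, (data, target) in enumerate(train_loader):
    pass

# test(): there is no per-batch logging, so plain iteration over the loader is enough.
for data, target in test_loader:
    pass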

I would appreciate it if you could consider closing this issue.

osmin625 · Sep 07 '23 05:09