deeplearning-models

cnn-vgg16.ipynb got abnormal results

Open · nameongithub opened this issue 1 year ago · 1 comment

Hello Sebastian, First of all, I would like to express my gratitude for your great work and knowledge sharing!

I just ran cnn-vgg16.ipynb on Google Colab without any modification (except the CUDA device ordinal). The results I got were completely abnormal and differed from the ones you provide: the cost never decreased and the accuracy never improved. Could you please take a look?

Thank you so much, again!

Below is my training log; the notebook I ran on Google Colab is here.

Epoch: 001/010 | Batch 0000/0391 | Cost: 2.3682
Epoch: 001/010 | Batch 0050/0391 | Cost: 2.2857
Epoch: 001/010 | Batch 0100/0391 | Cost: 2.3016
Epoch: 001/010 | Batch 0150/0391 | Cost: 2.3024
Epoch: 001/010 | Batch 0200/0391 | Cost: 2.3069
Epoch: 001/010 | Batch 0250/0391 | Cost: 2.3022
Epoch: 001/010 | Batch 0300/0391 | Cost: 2.3035
Epoch: 001/010 | Batch 0350/0391 | Cost: 2.3035
Epoch: 001/010 | Train: 10.000% |  Loss: 2.303
Time elapsed: 0.63 min
Epoch: 002/010 | Batch 0000/0391 | Cost: 2.3032
Epoch: 002/010 | Batch 0050/0391 | Cost: 2.3020
Epoch: 002/010 | Batch 0100/0391 | Cost: 2.3012
Epoch: 002/010 | Batch 0150/0391 | Cost: 2.3041
Epoch: 002/010 | Batch 0200/0391 | Cost: 2.3035
Epoch: 002/010 | Batch 0250/0391 | Cost: 2.3009
Epoch: 002/010 | Batch 0300/0391 | Cost: 2.3026
Epoch: 002/010 | Batch 0350/0391 | Cost: 2.3005
Epoch: 002/010 | Train: 10.000% |  Loss: 2.303
Time elapsed: 1.25 min
Epoch: 003/010 | Batch 0000/0391 | Cost: 2.3008
Epoch: 003/010 | Batch 0050/0391 | Cost: 2.3013
Epoch: 003/010 | Batch 0100/0391 | Cost: 2.3013
Epoch: 003/010 | Batch 0150/0391 | Cost: 2.3018
Epoch: 003/010 | Batch 0200/0391 | Cost: 2.3027
Epoch: 003/010 | Batch 0250/0391 | Cost: 2.3029
Epoch: 003/010 | Batch 0300/0391 | Cost: 2.3028
Epoch: 003/010 | Batch 0350/0391 | Cost: 2.3036
Epoch: 003/010 | Train: 10.000% |  Loss: 2.303
Time elapsed: 1.88 min
Epoch: 004/010 | Batch 0000/0391 | Cost: 2.3025
Epoch: 004/010 | Batch 0050/0391 | Cost: 2.3021
Epoch: 004/010 | Batch 0100/0391 | Cost: 2.3015
Epoch: 004/010 | Batch 0150/0391 | Cost: 2.3024
Epoch: 004/010 | Batch 0200/0391 | Cost: 2.3027
Epoch: 004/010 | Batch 0250/0391 | Cost: 2.3014
Epoch: 004/010 | Batch 0300/0391 | Cost: 2.3030
Epoch: 004/010 | Batch 0350/0391 | Cost: 2.3026
Epoch: 004/010 | Train: 10.000% |  Loss: 2.303
Time elapsed: 2.50 min
Epoch: 005/010 | Batch 0000/0391 | Cost: 2.3014
Epoch: 005/010 | Batch 0050/0391 | Cost: 2.3027
Epoch: 005/010 | Batch 0100/0391 | Cost: 2.3023
Epoch: 005/010 | Batch 0150/0391 | Cost: 2.3017
Epoch: 005/010 | Batch 0200/0391 | Cost: 2.3007
Epoch: 005/010 | Batch 0250/0391 | Cost: 2.3018
Epoch: 005/010 | Batch 0300/0391 | Cost: 2.3029
Epoch: 005/010 | Batch 0350/0391 | Cost: 2.3028
Epoch: 005/010 | Train: 10.000% |  Loss: 2.303
Time elapsed: 3.13 min
Epoch: 006/010 | Batch 0000/0391 | Cost: 2.3018
Epoch: 006/010 | Batch 0050/0391 | Cost: 2.3009
Epoch: 006/010 | Batch 0100/0391 | Cost: 2.3020
Epoch: 006/010 | Batch 0150/0391 | Cost: 2.3030
Epoch: 006/010 | Batch 0200/0391 | Cost: 2.3025
Epoch: 006/010 | Batch 0250/0391 | Cost: 2.3005
Epoch: 006/010 | Batch 0300/0391 | Cost: 2.3033
Epoch: 006/010 | Batch 0350/0391 | Cost: 2.3028
Epoch: 006/010 | Train: 10.000% |  Loss: 2.303
Time elapsed: 3.75 min
Epoch: 007/010 | Batch 0000/0391 | Cost: 2.3024
Epoch: 007/010 | Batch 0050/0391 | Cost: 2.3027
Epoch: 007/010 | Batch 0100/0391 | Cost: 2.3032
Epoch: 007/010 | Batch 0150/0391 | Cost: 2.3044
Epoch: 007/010 | Batch 0200/0391 | Cost: 2.3026
Epoch: 007/010 | Batch 0250/0391 | Cost: 2.3030
Epoch: 007/010 | Batch 0300/0391 | Cost: 2.3026
Epoch: 007/010 | Batch 0350/0391 | Cost: 2.3024
Epoch: 007/010 | Train: 10.000% |  Loss: 2.303
Time elapsed: 4.37 min
Epoch: 008/010 | Batch 0000/0391 | Cost: 2.3025
Epoch: 008/010 | Batch 0050/0391 | Cost: 2.3033
Epoch: 008/010 | Batch 0100/0391 | Cost: 2.3034
Epoch: 008/010 | Batch 0150/0391 | Cost: 2.3021
Epoch: 008/010 | Batch 0200/0391 | Cost: 2.3034
Epoch: 008/010 | Batch 0250/0391 | Cost: 2.3034
Epoch: 008/010 | Batch 0300/0391 | Cost: 2.3027
Epoch: 008/010 | Batch 0350/0391 | Cost: 2.3030
Epoch: 008/010 | Train: 10.000% |  Loss: 2.303
Time elapsed: 5.00 min
Epoch: 009/010 | Batch 0000/0391 | Cost: 2.3031
Epoch: 009/010 | Batch 0050/0391 | Cost: 2.3029
Epoch: 009/010 | Batch 0100/0391 | Cost: 2.3033
Epoch: 009/010 | Batch 0150/0391 | Cost: 2.3035
Epoch: 009/010 | Batch 0200/0391 | Cost: 2.3019
Epoch: 009/010 | Batch 0250/0391 | Cost: 2.3027
Epoch: 009/010 | Batch 0300/0391 | Cost: 2.3037
Epoch: 009/010 | Batch 0350/0391 | Cost: 2.3027
Epoch: 009/010 | Train: 10.000% |  Loss: 2.303
Time elapsed: 5.62 min
Epoch: 010/010 | Batch 0000/0391 | Cost: 2.3030
Epoch: 010/010 | Batch 0050/0391 | Cost: 2.3023
Epoch: 010/010 | Batch 0100/0391 | Cost: 2.3031
Epoch: 010/010 | Batch 0150/0391 | Cost: 2.3023
Epoch: 010/010 | Batch 0200/0391 | Cost: 2.3029
Epoch: 010/010 | Batch 0250/0391 | Cost: 2.3022
Epoch: 010/010 | Batch 0300/0391 | Cost: 2.3023
Epoch: 010/010 | Batch 0350/0391 | Cost: 2.3029
Epoch: 010/010 | Train: 10.000% |  Loss: 2.303
Time elapsed: 6.25 min
Total Training Time: 6.25 min
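
For reference, the plateaued cost of about 2.303 is exactly ln(10), i.e. the cross-entropy of a uniform prediction over CIFAR-10's 10 classes, and the 10.000% train accuracy is chance level, so the network is effectively not learning at all. A quick sanity check:

import math

# Cross-entropy of a uniform guess over the 10 CIFAR-10 classes:
# -log(1/10) = log(10), which matches the plateaued cost in the log above.
print(math.log(10))   # 2.302585092994046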

nameongithub · Nov 12 '24 07:11

I also ran into this, but it went away after I lowered the learning rate to 0.0001. My guess is that the CIFAR-10 images are so small that, after many pooling operations, too much feature information is lost, which leads to something like vanishing gradients, but I cannot guarantee that this is the real cause. Adjusting the learning rate should be worth trying, though; please give it a shot, and I hope it helps!
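
A minimal sketch of the change, assuming the notebook builds its optimizer with torch.optim.Adam and a learning_rate variable (check the actual cell in the notebook and keep everything else unchanged):

import torch

# Sketch only, not the notebook's exact code: the one change suggested above
# is lowering the learning rate from 0.001 to 0.0001.
learning_rate = 0.0001   # was 0.001

model = torch.nn.Linear(3 * 32 * 32, 10)   # stand-in for the VGG16 model defined in the notebook
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

If the notebook constructs a different optimizer (e.g. SGD), the same lr argument is the thing to lower.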

Fantasyawsd · May 24 '25 17:05