rethinkingCAM
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
Hi, my name is Yuen.
Thank you for sharing this wonderful work.
So far I have followed the steps and trained the VGG16 model.
However, when I ran the "eval" command, it produced the following error:
python eval.py --config_path=configs/vgg16_tap.yml --tag=vgg16_tap --checkpoint_dir=data
[2022-03-27 15:36:24,223 - data\eval_vgg16_tap] GPU is not available.
[2022-03-27 15:36:24,268 - data\eval_vgg16_tap] VGG network structure: [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512, 'M', 512, 512, 512, 1024]
[2022-03-27 15:36:24,452 - data\eval_vgg16_tap] Checkpointer is built.
[2022-03-27 15:36:24,453 - data\eval_vgg16_tap] Loading checkpoint from data\checkpoint_191_1150014.pth
Traceback (most recent call last):
File "eval.py", line 31, in <module>
engine.evaluate()
File "D:\xxxx\rethinkingCAM\src\engine.py", line 175, in evaluate
epoch, num_step, **self.eval_config)
File "D:\xxxx\rethinkingCAM\src\engine.py", line 208, in _eval_one_epoch
top1_cls, top5_cls = metrics.topk_accuracy(predictions, labels, topk=(1,5))
File "D:\xxxx\rethinkingCAM\src\utils\metrics.py", line 22, in topk_accuracy
correct_k = correct[:k].view(-1).float().sum(0, keepdim=False)
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
Could you let me know how to resolve it?
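From the error message, my guess is that replacing .view(-1) with .reshape(-1) (or adding .contiguous() before the view) at line 22 of src/utils/metrics.py would avoid the error, since the transposed prediction tensor is no longer contiguous. Below is a rough sketch of what I mean, assuming topk_accuracy follows the usual top-k accuracy pattern; the surrounding lines and shapes are my guesses, not the actual code:

```python
import torch

def topk_accuracy(output, target, topk=(1,)):
    """Sketch of the usual top-k accuracy pattern, with .reshape(-1)
    in place of .view(-1) as the error message suggests."""
    maxk = max(topk)
    batch_size = target.size(0)

    _, pred = output.topk(maxk, dim=1, largest=True, sorted=True)
    pred = pred.t()  # transpose -> tensor is no longer contiguous
    correct = pred.eq(target.view(1, -1).expand_as(pred))

    res = []
    for k in topk:
        # .reshape(-1) copies when the sliced tensor is non-contiguous,
        # so it avoids the "view size is not compatible" RuntimeError
        correct_k = correct[:k].reshape(-1).float().sum(0, keepdim=False)
        res.append(correct_k.mul_(100.0 / batch_size))
    return res

# example usage with random data
predictions = torch.randn(8, 200)
labels = torch.randint(0, 200, (8,))
top1_cls, top5_cls = topk_accuracy(predictions, labels, topk=(1, 5))
```

I am not sure whether this change is safe for the rest of the evaluation code, so please correct me if the intended fix is different.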
I am using PyTorch 1.11.
Thank you
Best regards, Yuen