
I have an error loading image files in data/coco/images/val2017

hsw2012s opened this issue 7 years ago • 4 comments

I am using Python 3.6.

In the step "Valid on COCO val2017 using pretrained models", I ran

sudo python3 pose_estimation/valid.py --cfg experiments/coco/resnet50/256x192_d256x3_adam_lr1e-3.yaml --flip-test --model-file models/pytorch/pose_coco/pose_resnet_50_256x192.pth.tar

and the following error occurs:

=> creating output/coco/pose_resnet_50/256x192_d256x3_adam_lr1e-3
=> creating log/coco/pose_resnet_50/256x192_d256x3_adam_lr1e-3_2019-01-14-10-45
Namespace(cfg='experiments/coco/resnet50/256x192_d256x3_adam_lr1e-3.yaml', coco_bbox_file=None, flip_test=True, frequent=100, gpus=None, model_file='models/pytorch/pose_coco/pose_resnet_50_256x192.pth.tar', post_process=False, shift_heatmap=False, use_detect_bbox=False, workers=None)
{'CUDNN': {'BENCHMARK': True, 'DETERMINISTIC': False, 'ENABLED': True}, 'DATASET': {'DATASET': 'coco', 'DATA_FORMAT': 'jpg', 'FLIP': True, 'HYBRID_JOINTS_TYPE': '', 'ROOT': 'data/coco/', 'ROT_FACTOR': 40, 'SCALE_FACTOR': 0.3, 'SELECT_DATA': False, 'TEST_SET': 'val2017', 'TRAIN_SET': 'train2017'}, 'DATA_DIR': '', 'DEBUG': {'DEBUG': True, 'SAVE_BATCH_IMAGES_GT': True, 'SAVE_BATCH_IMAGES_PRED': True, 'SAVE_HEATMAPS_GT': True, 'SAVE_HEATMAPS_PRED': True}, 'GPUS': '0', 'LOG_DIR': 'log', 'LOSS': {'USE_TARGET_WEIGHT': True}, 'MODEL': {'EXTRA': {'DECONV_WITH_BIAS': False, 'FINAL_CONV_KERNEL': 1, 'HEATMAP_SIZE': array([48, 64]), 'NUM_DECONV_FILTERS': [256, 256, 256], 'NUM_DECONV_KERNELS': [4, 4, 4], 'NUM_DECONV_LAYERS': 3, 'NUM_LAYERS': 50, 'SIGMA': 2, 'TARGET_TYPE': 'gaussian'}, 'IMAGE_SIZE': array([192, 256]), 'INIT_WEIGHTS': True, 'NAME': 'pose_resnet', 'NUM_JOINTS': 17, 'PRETRAINED': 'models/pytorch/imagenet/resnet50-19c8e357.pth', 'STYLE': 'pytorch'}, 'OUTPUT_DIR': 'output', 'PRINT_FREQ': 100, 'TEST': {'BATCH_SIZE': 32, 'BBOX_THRE': 1.0, 'COCO_BBOX_FILE': 'data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json', 'FLIP_TEST': True, 'IMAGE_THRE': 0.0, 'IN_VIS_THRE': 0.2, 'MODEL_FILE': 'models/pytorch/pose_coco/pose_resnet_50_256x192.pth.tar', 'NMS_THRE': 1.0, 'OKS_THRE': 0.9, 'POST_PROCESS': True, 'SHIFT_HEATMAP': True, 'USE_GT_BBOX': True}, 'TRAIN': {'BATCH_SIZE': 32, 'BEGIN_EPOCH': 0, 'CHECKPOINT': '', 'END_EPOCH': 140, 'GAMMA1': 0.99, 'GAMMA2': 0.0, 'LR': 0.001, 'LR_FACTOR': 0.1, 'LR_STEP': [90, 120], 'MOMENTUM': 0.9, 'NESTEROV': False, 'OPTIMIZER': 'adam', 'RESUME': False, 'SHUFFLE': True, 'WD': 0.0001}, 'WORKERS': 4}
=> loading model from models/pytorch/pose_coco/pose_resnet_50_256x192.pth.tar
/home/elysium/.local/lib/python3.6/site-packages/torch/nn/_reduction.py:49: UserWarning: size_average and reduce args will be deprecated, please use reduction='mean' instead.
  warnings.warn(warning.format(ret))
loading annotations into memory...
Done (t=0.13s)
creating index...
index created!
=> classes: ['__background__', 'person']
=> num_images: 5000
=> load 6352 samples
=> fail to read data/coco/images/val2017/000000397133.jpg
=> fail to read data/coco/images/val2017/000000476258.jpg
=> fail to read data/coco/images/val2017/000000329323.jpg
=> fail to read data/coco/images/val2017/000000355257.jpg
=> fail to read data/coco/images/val2017/000000559842.jpg
=> fail to read data/coco/images/val2017/000000512836.jpg
=> fail to read data/coco/images/val2017/000000524456.jpg
=> fail to read data/coco/images/val2017/000000329219.jpg
=> fail to read data/coco/images/val2017/000000414170.jpg
Traceback (most recent call last):
  File "pose_estimation/valid.py", line 165, in <module>
    main()
  File "pose_estimation/valid.py", line 161, in main
    final_output_dir, tb_log_dir)
  File "/home/elysium/Documents/human-pose-estimation.pytorch-master/pose_estimation/../lib/core/function.py", line 108, in validate
    for i, (input, target, target_weight, meta) in enumerate(val_loader):
  File "/home/elysium/.local/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 637, in __next__
    return self._process_next_batch(batch)
  File "/home/elysium/.local/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 658, in _process_next_batch
    raise batch.exc_type(batch.exc_msg)
ValueError: Traceback (most recent call last):
  File "/home/elysium/.local/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 138, in _worker_loop
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "/home/elysium/.local/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 138, in <listcomp>
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "/home/elysium/Documents/human-pose-estimation.pytorch-master/pose_estimation/../lib/dataset/JointsDataset.py", line 80, in __getitem__
    raise ValueError('Fail to read {}'.format(image_file))
ValueError: Fail to read data/coco/images/val2017/000000397133.jpg
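
For context, the exception is raised inside the dataset's image-loading step: JointsDataset.__getitem__ reads each file with cv2.imread and raises ValueError whenever OpenCV returns None. A simplified sketch of that check (a paraphrase for illustration, not the exact repository code) looks like this:

import cv2

def load_image(image_file):
    # cv2.imread does not raise on failure; it silently returns None when the path
    # does not resolve (it is relative to the current working directory), the file
    # is truncated or corrupted, or OpenCV cannot decode it.
    data_numpy = cv2.imread(image_file, cv2.IMREAD_COLOR)
    if data_numpy is None:
        # This branch produces the "Fail to read ..." error shown in the traceback above.
        raise ValueError('Fail to read {}'.format(image_file))
    return data_numpy

Because DATASET.ROOT is the relative path data/coco/, the image paths only resolve when the script is launched from the repository root.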

The image file "000000397133.jpg" does exist at data/coco/images/val2017/000000397133.jpg, so why does this error occur?

My directory looks like this:

screenshot from 2019-01-14 11-02-50
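
A quick way to narrow down whether the path or the file itself is the problem is a small diagnostic like the one below (a hypothetical script, not part of the repository; run it from the same directory you launch valid.py from, since DATASET.ROOT is a relative path):

import os
import cv2

# One of the files reported as unreadable in the log above.
path = 'data/coco/images/val2017/000000397133.jpg'

print('cwd:', os.getcwd())              # must be the repository root for the relative path to resolve
print('exists:', os.path.exists(path))  # False -> wrong working directory, broken symlink, or name mismatch
if os.path.exists(path):
    print('size:', os.path.getsize(path), 'bytes')

img = cv2.imread(path, cv2.IMREAD_COLOR)
print('decoded:', img is not None)      # False despite the file existing -> truncated or corrupted download
if img is not None:
    print('shape:', img.shape)

If exists is True but decoded is False, the file is most likely a partial or corrupted download; if exists is False, the working directory or the extraction layout is the issue.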

hsw2012s avatar Jan 14 '19 01:01 hsw2012s

Why do you run it with sudo?
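
The point behind the question: sudo python3 may pick up a different interpreter, site-packages, and environment than the shell you tested in, so a path or OpenCV install that works for your user may not be visible to root. A quick comparison (a hypothetical check script, not part of the repository) makes any difference obvious when run once normally and once under sudo:

import os
import sys

# Run as "python3 check_env.py" and again as "sudo python3 check_env.py" and compare the output.
print('executable:', sys.executable)
print('cwd:', os.getcwd())
print('user:', os.environ.get('USER'), '| home:', os.environ.get('HOME'))
try:
    import cv2
    print('cv2:', cv2.__version__, 'from', cv2.__file__)
except ImportError as exc:
    print('cv2 not importable here:', exc)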

leoxiaobin avatar Jan 16 '19 14:01 leoxiaobin

Same problem, any solution?

Number of Layers
Conv2d : 293 layers
BatchNorm2d : 292 layers
ReLU : 261 layers
Bottleneck : 4 layers
BasicBlock : 104 layers
Upsample : 28 layers
HighResolutionModule : 8 layers
loading annotations into memory...
Done (t=8.37s)
creating index...
index created!
=> classes: ['__background__', 'person']
=> num_images: 118287
=> load 149813 samples
coco
loading annotations into memory...
Done (t=0.27s)
creating index...
index created!
=> classes: ['__background__', 'person']
=> num_images: 5000
=> load 6352 samples
/usr/local/lib/python3.7/dist-packages/torch/optim/lr_scheduler.py:134: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  "https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)
=> fail to read /content/drive/MyDrive/deep-high-resolution-net/data/coco/images/train2017/000000414881.jpg
Traceback (most recent call last):
  File "/content/drive/MyDrive/deep-high-resolution-net/tools/train.py", line 224, in <module>
    main()
  File "/content/drive/MyDrive/deep-high-resolution-net/tools/train.py", line 188, in main
    final_output_dir, tb_log_dir, writer_dict)
  File "/content/drive/MyDrive/deep-high-resolution-net/tools/../lib/core/function.py", line 38, in train
    for i, (input, target, target_weight, meta) in enumerate(train_loader):
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 521, in __next__
    data = self._next_data()
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
    return self._process_data(data)
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
    data.reraise()
  File "/usr/local/lib/python3.7/dist-packages/torch/_utils.py", line 434, in reraise
    raise exception
ValueError: Caught ValueError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
    data = fetcher.fetch(index)
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/content/drive/MyDrive/deep-high-resolution-net/tools/../lib/dataset/JointsDataset.py", line 135, in __getitem__
    raise ValueError('Fail to read {}'.format(image_file))
ValueError: Fail to read /content/drive/MyDrive/deep-high-resolution-net/data/coco/images/train2017/000000414881.jpg
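
In this log the dataset sits on a mounted Google Drive (/content/drive/MyDrive/...), where incomplete archive extraction or a flaky mount is a common reason for unreadable images. A small sanity check (hypothetical, run in the same Colab session) can confirm whether the failing file and the overall image count look right:

import glob
import os
import cv2

root = '/content/drive/MyDrive/deep-high-resolution-net/data/coco/images/train2017'
bad = os.path.join(root, '000000414881.jpg')  # the file from the traceback above

print('exists:', os.path.exists(bad))
print('decodable:', cv2.imread(bad, cv2.IMREAD_COLOR) is not None)

# COCO train2017 contains 118287 images (matching "num_images: 118287" in the log);
# a smaller count usually means the zip was only partially extracted onto Drive.
print('jpg count:', len(glob.glob(os.path.join(root, '*.jpg'))))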

eddie000000 avatar Jan 17 '22 18:01 eddie000000


Have you solved the problem yet? I am running into the same problem now.

sun-tao avatar Apr 01 '22 12:04 sun-tao
