Can anyone who has successfully run this project please share your versions? Thanks.
python 3.6, pytorch 1.6, cuda 10.2:

```
conda install pytorch==1.6 torchvision torchaudio cudatoolkit=10.2 -c pytorch
```

detectron2 0.2.1:

```
python -m pip install detectron2==0.2.1 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu102/torch1.6/index.html
```

apex: note that gcc > 5 is required.
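As the pip command above shows, the detectron2 wheel index URL encodes both the CUDA and the torch version. A small helper of my own (not part of either project) that derives the `-f` URL for a given version pair, following the layout seen above:

```python
def detectron2_wheel_index(torch_version: str, cuda_version: str) -> str:
    """Build the `-f` index URL used by `pip install detectron2`.

    Follows the dl.fbaipublicfiles.com wheel layout from the command above,
    e.g. ("1.6", "10.2") -> .../wheels/cu102/torch1.6/index.html
    """
    cu = "cu" + cuda_version.replace(".", "")          # "10.2" -> "cu102"
    torch_mm = ".".join(torch_version.split(".")[:2])  # "1.6.0" -> "1.6"
    return (
        "https://dl.fbaipublicfiles.com/detectron2/wheels/"
        f"{cu}/torch{torch_mm}/index.html"
    )
```

For example, `detectron2_wheel_index("1.6", "10.2")` reproduces the URL used in the install command above.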
Thanks!
Does this configuration (MODEL.QUERY.QUERY_INFER=True) work in your environment? When I use it to run visdrone_inference.py, an error is raised in the sparse convolution:
```
[08/01 17:20:06 fvcore.common.checkpoint]: [Checkpointer] Loading from /data3/ysl/quarydet_code/QueryDet-PyTorch/runs/exp1/model_0034999.pth ...
[08/01 17:20:08 d2.data.common]: Serializing 98 elements to byte tensors and concatenating them all ...
[08/01 17:20:08 d2.data.common]: Serialized dataset takes 0.02 MiB
[08/01 17:20:12 d2.evaluation.evaluator]: Start inference on 98 images
SPCONV_DEBUG_SAVE_PATH not found, you can specify SPCONV_DEBUG_SAVE_PATH as debug data save path to save debug data which can be attached in a issue.
[Exception|implicit_gemm]feat=torch.Size([148, 256]),w=torch.Size([3, 3, 256, 256]),pair=torch.Size([9, 148]),act=148,issubm=True,istrain=True
Traceback (most recent call last):
  File "/home/ysl/anaconda3/envs/quarydet/lib/python3.8/contextlib.py", line 131, in __exit__
    self.gen.throw(type, value, traceback)
  File "/home/ysl/anaconda3/envs/quarydet/lib/python3.8/site-packages/detectron2/evaluation/evaluator.py", line 195, in inference_context
    yield
  File "/home/ysl/anaconda3/envs/quarydet/lib/python3.8/site-packages/detectron2/evaluation/evaluator.py", line 141, in inference_on_dataset
    outputs = model(inputs)
  File "/home/ysl/anaconda3/envs/quarydet/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/data3/ysl/quarydet_code/QueryDet-PyTorch/models/querydet/detector.py", line 166, in forward
    return self.test(batched_inputs)
  File "/data3/ysl/quarydet_code/QueryDet-PyTorch/models/querydet/detector.py", line 209, in test
    results, total_time = self.test_forward(images)  # normal test
  File "/data3/ysl/quarydet_code/QueryDet-PyTorch/models/querydet/detector.py", line 254, in test_forward
    det_cls_query, det_bbox_query, query_anchors = self.qInfer.run_qinfer(params, features_key, features_value, anchors_value)
  File "/data3/ysl/quarydet_code/QueryDet-PyTorch/models/querydet/qinfer.py", line 166, in run_qinfer
    cls_result = self._run_spconvs(x, self.cls_spconv).view(-1, self.anchor_num * self.num_classes)[inds]
  File "/data3/ysl/quarydet_code/QueryDet-PyTorch/models/querydet/qinfer.py", line 131, in _run_spconvs
    y = filters(x)
  File "/home/ysl/anaconda3/envs/quarydet/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/ysl/anaconda3/envs/quarydet/lib/python3.8/site-packages/spconv/pytorch/modules.py", line 137, in forward
    input = module(input)
  File "/home/ysl/anaconda3/envs/quarydet/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/ysl/anaconda3/envs/quarydet/lib/python3.8/site-packages/spconv/pytorch/conv.py", line 441, in forward
    out_features = Fsp.implicit_gemm(
  File "/home/ysl/anaconda3/envs/quarydet/lib/python3.8/site-packages/torch/cuda/amp/autocast_mode.py", line 209, in decorate_fwd
    return fwd(*args, **kwargs)
  File "/home/ysl/anaconda3/envs/quarydet/lib/python3.8/site-packages/spconv/pytorch/functional.py", line 200, in forward
    raise e
  File "/home/ysl/anaconda3/envs/quarydet/lib/python3.8/site-packages/spconv/pytorch/functional.py", line 185, in forward
    out, mask_out, mask_width = ops.implicit_gemm(features, filters,
  File "/home/ysl/anaconda3/envs/quarydet/lib/python3.8/site-packages/spconv/pytorch/ops.py", line 1103, in implicit_gemm
    tune_res, _ = CONV.tune_and_cache(
  File "/home/ysl/anaconda3/envs/quarydet/lib/python3.8/site-packages/spconv/algo.py", line 660, in tune_and_cache
    ConvMainUnitTest.implicit_gemm2(params)
ValueError: /tmp/pip-build-env-xxm94833/overlay/lib/python3.8/site-packages/cumm/include/tensorview/check.h(32) shape_ten[i] == shape[i] assert faild. error shape [9, 148] expect [768, -1]
python-BaseException
```
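Reading the final ValueError: cumm's shape assertion compares the runtime tensor shape against an expected shape dimension by dimension, with -1 acting as a wildcard, so the pair tensor of shape [9, 148] fails against the expected [768, -1] on the first dimension. A minimal Python sketch of that kind of check (my reading of the error message, not cumm's actual implementation):

```python
def shapes_match(actual, expected):
    """Dimension-by-dimension shape check; -1 in `expected` matches any size.

    Mirrors the style of the assert in cumm's tensorview/check.h as suggested
    by the error text above -- illustrative only.
    """
    if len(actual) != len(expected):
        return False
    return all(e == -1 or a == e for a, e in zip(actual, expected))

# The failing case from the traceback: first dim 9 != 768.
shapes_match([9, 148], [768, -1])    # -> False
shapes_match([768, 148], [768, -1])  # -> True
```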
https://github.com/ChenhongyiYang/QueryDet-PyTorch/issues/42#issuecomment-1200774736
Can you share some details about apex? When I try to install it, I run into many errors.
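Many apex build failures come down to the compiler, since the note above says gcc > 5 is needed to build its C++/CUDA extensions. A small sketch of my own (not from the thread) that checks the local gcc major version before attempting the build:

```python
import re
import subprocess


def parse_major(version_output):
    """Extract the leading major number from `gcc -dumpversion` output."""
    m = re.match(r"\s*(\d+)", version_output)
    return int(m.group(1)) if m else None


def gcc_ok_for_apex(min_major: int = 6) -> bool:
    """True if the local gcc major version is > 5 (i.e. >= 6)."""
    try:
        out = subprocess.run(
            ["gcc", "-dumpversion"],
            capture_output=True, text=True, check=True,
        ).stdout
    except (OSError, subprocess.CalledProcessError):
        return False  # gcc missing or broken
    major = parse_major(out)
    return major is not None and major >= min_major
```

Running `gcc_ok_for_apex()` before the apex build makes the "gcc > 5" requirement fail fast with a clear answer instead of a long compile error.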
Hi, we have recently updated the whole repository to support newer versions of PyTorch, Detectron2, and spconv. You can now set up your environment by running the sample setup script in the README.