
Onnxruntime fails to run fasterrcnn_resnet50_fpn test: "Cannot split using values in 'split' attribute"

Open ghost opened this issue 6 years ago • 3 comments

🐛 Bug

I am trying to run the new test script test_onnx.py added to the vision/test folder (below is a Google Colab example with all details to reproduce the error). More specifically, I am trying to export fasterrcnn_resnet50_fpn to ONNX, but it is not working.

To Reproduce

Steps to reproduce the behavior:

  • I am sharing a Google Colab notebook that reproduces the same issue: https://gist.github.com/moured/a18c07f847e63eb4811710e4d702ecdf

Error message:

Fail                                      Traceback (most recent call last)
in
----> 1 a.test_faster_rcnn()

/media/mmrg/vision/test/test_onnx.py in test_faster_rcnn(self)
    342             output_names=["outputs"],
    343             dynamic_axes={"images_tensors": [0, 1, 2, 3], "outputs": [0, 1, 2, 3]},
--> 344             tolerate_small_mismatch=True)
    345
    346         # Verify that paste_mask_in_image beahves the same in tracing.

/media/mmrg/vision/test/test_onnx.py in run_model(self, model, inputs_list, tolerate_small_mismatch, do_constant_folding, dynamic_axes, output_names, input_names)
     49         if isinstance(test_ouputs, torch.Tensor):
     50             test_ouputs = (test_ouputs,)
---> 51         self.ort_validate(onnx_io, test_inputs, test_ouputs, tolerate_small_mismatch)
     52
     53     def ort_validate(self, onnx_io, inputs, outputs, tolerate_small_mismatch=False):

/media/mmrg/vision/test/test_onnx.py in ort_validate(self, onnx_io, inputs, outputs, tolerate_small_mismatch)
     68         # compute onnxruntime output prediction
     69         ort_inputs = dict((ort_session.get_inputs()[i].name, inpt) for i, inpt in enumerate(inputs))
---> 70         ort_outs = ort_session.run(None, ort_inputs)
     71         for i in range(0, len(outputs)):
     72             try:

/media/mmrg/company-env3.7/lib/python3.7/site-packages/onnxruntime/capi/session.py in run(self, output_names, input_feed, run_options)
    140             output_names = [output.name for output in self._outputs_meta]
    141         try:
--> 142             return self._sess.run(output_names, input_feed, run_options)
    143         except C.EPFail as err:
    144             if self._enable_fallback:

Fail: [ONNXRuntimeError] : 1 : FAIL : Non-zero status code returned while running Split node. Name:'' Status Message: Cannot split using values in 'split' attribute. Axis=1 Input shape={1,14328} NumOutputs=5 Num entries in 'split' (must equal number of outputs) was 5 Sum of sizes in 'split' (must equal size of selected axis) was 17910

Expected behavior

ONNX Runtime runs inference successfully.

Environment

  • Latest OnnxRuntime
  • Latest Torchvision
  • Latest Pytorch
  • Ubuntu 18.04 (64-bit)
  • Python 3.7 (also tried 3.6, same error)
  • CPU / GPU

ghost avatar Apr 24 '20 13:04 ghost

It seems there has been an issue with the conversion to ONNX: the sizes in the 'split' attribute sum to 17910, which does not match the input's axis size of 14328, hence the runtime's complaint. Can you please check with the model conversion project and raise a bug there?
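The invariant the runtime is enforcing can be illustrated with a small numpy sketch. The helper name `checked_split` is hypothetical and this is not onnxruntime's actual implementation; the numbers come from the error message above.

```python
import numpy as np

def checked_split(x, sizes, axis):
    """Split x along `axis`, enforcing the ONNX Split invariant that the
    entries in `sizes` must sum to the size of the selected axis."""
    if sum(sizes) != x.shape[axis]:
        raise ValueError(
            f"Cannot split: sum of 'split' sizes {sum(sizes)} != "
            f"axis size {x.shape[axis]}")
    # np.split takes cut indices, so convert sizes to cumulative offsets.
    return np.split(x, np.cumsum(sizes)[:-1], axis=axis)

x = np.zeros((1, 14328))

# The exporter baked in split sizes summing to 17910, as in the bug:
try:
    checked_split(x, [3582] * 5, axis=1)
except ValueError as e:
    print(e)  # Cannot split: sum of 'split' sizes 17910 != axis size 14328

# Sizes that sum to the axis size split cleanly:
parts = checked_split(x, [3582] * 4, axis=1)
print(len(parts), parts[0].shape)  # 4 (1, 3582)
```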

hariharans29 avatar May 04 '20 21:05 hariharans29

This issue has been automatically marked as stale due to inactivity and will be closed in 7 days if no further activity occurs. If further support is needed, please provide an update and/or more details.

stale[bot] avatar Jul 03 '20 22:07 stale[bot]


I met the same problem. Have you solved it? How? Thanks.

pustar avatar Sep 16 '22 02:09 pustar

fasterrcnn_resnet50_fpn cannot do dynamic inference along the batch dimension when exported to ONNX, but it can along the height and width dimensions. If you want multi-batch inference, you need to fix the batch size when exporting the model and then run inference with that static batch size. I hope this helps you.
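A minimal sketch of that workaround, assuming the export call from test_onnx.py: keep only the height and width axes in `dynamic_axes` so the batch dimension is baked in at export time. The axis names here are illustrative.

```python
# Keep only H (axis 2) and W (axis 3) dynamic; omitting axis 0 fixes the
# batch size at whatever the example input used during export.
dynamic_axes = {
    "images_tensors": {2: "height", 3: "width"},
    "outputs": {0: "num_detections"},
}

# The export call would then look roughly like (sketch, not run here):
# torch.onnx.export(model, (images,), "frcnn_static_batch.onnx",
#                   input_names=["images_tensors"], output_names=["outputs"],
#                   dynamic_axes=dynamic_axes, opset_version=11)
print(dynamic_axes)
```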

twoapples1 avatar Feb 24 '23 02:02 twoapples1