Results 2 comments of Fazankabir

@ogencoglu were you able to solve the issue? I am facing the same problem with quantization, and I also see higher inference time with ONNX, as shown [here](https://github.com/NVlabs/SegFormer/issues/149) any help...

I am able to do quantization with:

```python
from onnxruntime.quantization import quantize_dynamic, QuantType

model_fp32 = 'model_Segformer.onnx'
model_quant = "model_dynamic_quant.onnx"
quantized_model = quantize_dynamic(model_fp32, model_quant, weight_type=QuantType.QUInt8)
```

using **QuantType.QUInt8** instead of **QuantType.QInt8**, but while doing the inference it is...
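For context, the practical difference between the two weight types is the target integer range: `QUInt8` maps weights onto [0, 255] with a zero point, while `QInt8` maps onto [-128, 127]. A minimal NumPy sketch of standard affine quantization (an illustration only, not the exact internals of onnxruntime):

```python
import numpy as np

def affine_quantize(w, qmin, qmax):
    # Map the float range [w.min(), w.max()] onto the integer range [qmin, qmax].
    scale = (w.max() - w.min()) / (qmax - qmin)
    zero_point = round(qmin - w.min() / scale)
    q = np.clip(np.round(w / scale) + zero_point, qmin, qmax)
    return q.astype(np.int32), scale, zero_point

w = np.array([-0.5, 0.0, 0.25, 1.0], dtype=np.float32)

# QuantType.QUInt8 -> unsigned 8-bit range [0, 255]
q_u8, s_u8, zp_u8 = affine_quantize(w, 0, 255)     # q_u8 -> [0, 85, 127, 255]

# QuantType.QInt8 -> signed 8-bit range [-128, 127]
q_s8, s_s8, zp_s8 = affine_quantize(w, -128, 127)  # q_s8 -> [-128, -43, -1, 127]

# Dequantize to check the round-trip error stays within one quantization step.
deq_u8 = (q_u8 - zp_u8) * s_u8
```

Both ranges carry the same 256 levels, so switching to `QUInt8` changes the zero point rather than the precision; which one works usually depends on which operators the runtime has quantized kernels for.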