yunyaoXYY
> @jiangjiajun onnx:
>
> ```python
> from onnxruntime.quantization import quantize_dynamic, QuantType
>
> model_fp32 = '/Users/tulpar/Project/devPaddleDetection/sliced_visdrone.onnx'
> model_quant = '/Users/tulpar/Project/devPaddleDetection/88quant_sliced_visdrone.onnx'
> # quantized_model = quantize_dynamic(model_fp32, model_quant)
> _quantized_model...
> ```
> @jiangjiajun
>
> The model is: [https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_80e_sliced_visdrone_640_025.pdparams](https://github.com/hailo-ai/hailo_model_zoo)
>
> We are competing for a large object-detection project that uses exactly the model above, but we need...
> @yunyaoXYY I need an invitation to join Slack

Hi, please try this: https://join.slack.com/t/fastdeployworkspace/shared_invite/zt-1hm4rrdqs-RZEm6_EAanuwEVZ8EJsG~g
> I re-quantized the model with your method, but inference still fails with an error.
>
> ```
> /home/aistudio/work/FastDeploy/examples/vision/detection/paddledetection/quantize/python
> [INFO] fastdeploy/vision/common/processors/transform.cc(45)::FuseNormalizeCast Normalize and Cast are fused to Normalize in preprocessing pipeline.
> [INFO] fastdeploy/vision/common/processors/transform.cc(93)::FuseNormalizeHWC2CHW Normalize and HWC2CHW are fused to NormalizeAndPermute in preprocessing pipeline.
> [INFO] fastdeploy/vision/common/processors/transform.cc(159)::FuseNormalizeColorConvert BGR2RGB...
> ```
> 1. The model I quantized is ppyoloe_crn_s_300e_coco.
> 2. FastDeploy is fastdeploy-python==0.0.0 -f https://www.paddlepaddle.org.cn/whl/fastdeploy_nightly_build.html

Was this PPYOLOE model exported from PaddleDetection? How did you export it? If convenient, please share the export steps and I will try to reproduce it tomorrow.
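For reference, the usual PaddleDetection export flow for this model looks roughly like the following; the config path and weights URL are assumptions based on the repo's standard naming for ppyoloe_crn_s_300e_coco and may need adjusting for your PaddleDetection version.

```shell
# Assumed standard export flow inside a PaddleDetection checkout:
cd PaddleDetection
python tools/export_model.py \
    -c configs/ppyoloe/ppyoloe_crn_s_300e_coco.yml \
    -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_s_300e_coco.pdparams \
    --output_dir=output_inference
```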
> @yunyaoXYY, have you tried a different calibration algorithm or tuning the QAT params? There is also a sample sensitivity-analysis script worth trying: https://github.com/NVIDIA/NeMo/blob/main/examples/asr/quantization/speech_to_text_quant_infer.py#L71
>
> Thanks!

Hi, This...
> ```
> python infer.py --det_model ./ch_PP-OCRv3_det_infer --cls_model ./ch_ppocr_mobile_v2.0_cls_infer --rec_model ./ch_PP-OCRv3_rec_infer --rec_label_file ppocr_keys_v1.txt --image 22.jpg --device gpu --backend trt
> [INFO] fastdeploy/backends/tensorrt/trt_backend.cc(466)::BuildTrtEngine Start to building TensorRT Engine...
> [ERROR] fastdeploy/backends/tensorrt/trt_backend.cc(238)::log 4: [graphShapeAnalyzer.cpp::analyzeShapes::1294] Error Code...
> ```
> I installed directly via pip rather than building from source; could that be the problem?

Hi, please try disabling TRT for just the det model while keeping TRT enabled for the cls and rec models. I will look into this issue later.
> softmax_0.tmp_0

1. That op sits right at the end of the cls model, which is strange. Did you download all the models from the links we provide?
2. Have you tried what I suggested above: disable TRT for the det model and check whether the other two models still work?
3. Your previous log also looks odd: the rec model's TRT cache was loaded twice, yet the op reported as missing belongs to the cls model.
4. Could you leave a contact so we can help resolve your issue?
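The suggested workaround (TRT off for det, TRT on for cls and rec) can be written as a per-model backend configuration. The sketch below uses the model directories from the command in this thread and API names from the fastdeploy-python 0.7.x bindings; treat it as an untested configuration sketch, not a verified fix.

```python
import fastdeploy as fd

# det: fall back to the Paddle Inference backend (TRT disabled)
det_option = fd.RuntimeOption()
det_option.use_gpu(0)
det_option.use_paddle_backend()
det_model = fd.vision.ocr.DBDetector(
    "ch_PP-OCRv3_det_infer/inference.pdmodel",
    "ch_PP-OCRv3_det_infer/inference.pdiparams",
    runtime_option=det_option)

# cls and rec: keep the TensorRT backend
cls_option = fd.RuntimeOption()
cls_option.use_gpu(0)
cls_option.use_trt_backend()
cls_model = fd.vision.ocr.Classifier(
    "ch_ppocr_mobile_v2.0_cls_infer/inference.pdmodel",
    "ch_ppocr_mobile_v2.0_cls_infer/inference.pdiparams",
    runtime_option=cls_option)

rec_option = fd.RuntimeOption()
rec_option.use_gpu(0)
rec_option.use_trt_backend()
rec_model = fd.vision.ocr.Recognizer(
    "ch_PP-OCRv3_rec_infer/inference.pdmodel",
    "ch_PP-OCRv3_rec_infer/inference.pdiparams",
    "ppocr_keys_v1.txt",
    runtime_option=rec_option)

# Assemble the pipeline; each stage keeps its own runtime option.
ppocr = fd.vision.ocr.PPOCRv3(det_model=det_model,
                              cls_model=cls_model,
                              rec_model=rec_model)
```

Splitting backends per model isolates whether the det model's dynamic shapes are what trips the TensorRT shape analyzer.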
> I hit the same problem. Environment: RTX 3090, CUDA 11.2, fastdeploy-gpu-python 0.7.0.
>
> ```
> python3 infer.py --det_model ch_PP-OCRv3_det_infer --cls_model ch_ppocr_mobile_v2.0_cls_infer --rec_model ch_PP-OCRv3_rec_infer --rec_label_file ppocr_keys_v1.txt --image 12.jpg --device gpu --backend trt
> [INFO] fastdeploy/backends/tensorrt/trt_backend.cc(479)::BuildTrtEngine Start to building TensorRT Engine...
> [ERROR] fastdeploy/backends/tensorrt/trt_backend.cc(238)::log...
> ```