Zheng Li

18 comments by Zheng Li

> > What opset was the model converted to ONNX with?
>
> The TensorRT version I'm using is [7.1.3.4](https://github.com/linghu8812/tensorrt_inference/blob/master/INSTALL.md#tensort-7134)

```
----------------------------------------------------------------
Input filename:   G:\AI\PretrainedModel\InsightFace_Models\mnet.25\mnet.25.onnx
ONNX IR version:  0.0.7
Opset version:    13
Producer name:
Producer version:
Domain:
Model version:    0
Doc string:
----------------------------------------------------------------
```

Is this the information you were asking about?
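For reference, a minimal sketch of reading these fields programmatically with the `onnx` Python package (the model path is the local one from the dump above):

```python
import onnx

# Load the converted model (local path from the dump above).
model = onnx.load(r"G:\AI\PretrainedModel\InsightFace_Models\mnet.25\mnet.25.onnx")

# The opset should match the "Opset version: 13" line in the dump.
print("ONNX IR version:", model.ir_version)
for opset in model.opset_import:
    print("Opset version:", opset.version, "domain:", opset.domain or "ai.onnx")
```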

Also, may I ask whether the following warning affects the inference accuracy of the ONNX model? It says TensorRT casts the 64-bit weight values down to 32-bit:

```
TensorRT_WARNING: onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
```
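As far as I know, the downcast only loses information if some INT64 value actually falls outside the INT32 range; such tensors are usually shape or index constants rather than learned weights. A rough way to check, sketched with the `onnx` package (the model path is an assumption):

```python
import numpy as np
import onnx
from onnx import numpy_helper

model = onnx.load("mnet.25.onnx")  # assumed path to the converted model

i32 = np.iinfo(np.int32)
for init in model.graph.initializer:
    if init.data_type == onnx.TensorProto.INT64:
        values = numpy_helper.to_array(init)
        # The cast is lossless unless a value overflows the INT32 range.
        if values.size and (values.min() < i32.min or values.max() > i32.max):
            print("Initializer", init.name, "does not fit in INT32")
```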

> Have you tested the converted ONNX model? I wanted to run ONNX inference in between to compare results, but got an error:
>
> ```
> File "onnx_checker.py", line 50, in <module>
>     ort_session = ort.InferenceSession(onnx_file)
> File "/home/hyl/.local/bin/.virtualenvs/mxnet_copy/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 280, in __init__
>     self._create_inference_session(providers, provider_options)
> File "/home/hyl/.local/bin/.virtualenvs/mxnet_copy/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", ...
> ```

> > Have you tested the converted ONNX model? I wanted to run ONNX inference in between to compare results, but got an error:
> >
> > ```
> > File "onnx_checker.py", line 50, in <module>
> >     ort_session = ort.InferenceSession(onnx_file)
> > File "/home/hyl/.local/bin/.virtualenvs/mxnet_copy/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line ...
> > ```
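For comparison, a minimal load-and-run sketch around the failing `InferenceSession` constructor (hypothetical model file; CPU provider and a dummy input assumed):

```python
import numpy as np
import onnxruntime as ort

# The traceback above is raised inside this constructor, typically when the
# installed onnxruntime cannot load an operator/opset used by the model.
ort_session = ort.InferenceSession("mnet.25.onnx", providers=["CPUExecutionProvider"])

inp = ort_session.get_inputs()[0]
# Substitute 1 for any dynamic dimension when building a dummy input.
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
dummy = np.random.rand(*shape).astype(np.float32)
outputs = ort_session.run(None, {inp.name: dummy})
print([o.shape for o in outputs])
```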

Great, thank you very much for sharing this.

> hi,
> this conversion is not the source of the buggy detection. It's rather that you need to preprocess the inputs in a specific way.
> Suppose `input_data` is...
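The quoted reply is cut off before the actual preprocessing steps, but for these MXNet-converted detection models it generally means color-channel reordering, NCHW layout, and mean/scale normalization. A hedged sketch of that pattern; the exact mean and scale values are assumptions, not taken from the truncated reply:

```python
import cv2
import numpy as np

def preprocess(image_path):
    """Build `input_data` for the ONNX model (assumed pipeline)."""
    img = cv2.imread(image_path)
    # Assumed: the model expects RGB, while cv2 loads BGR.
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB).astype(np.float32)
    # Assumed normalization; a common InsightFace default, may differ here.
    img = (img - 127.5) / 128.0
    # HWC -> NCHW with a leading batch dimension.
    input_data = np.transpose(img, (2, 0, 1))[np.newaxis, ...]
    return input_data
```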

20180402-114759 is for SVM-based embedding comparison, and 20170512-110547 is for Euclidean/cosine-distance embedding comparison, is that right?
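For the distance-based comparison, a small sketch of the two metrics on a pair of embeddings (the 512-dimensional size is an assumption for illustration):

```python
import numpy as np

def euclidean_distance(a, b):
    return np.linalg.norm(a - b)

def cosine_distance(a, b):
    # 1 - cosine similarity; smaller means more similar faces.
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical embeddings for two face crops.
emb1, emb2 = np.random.rand(512), np.random.rand(512)
print(euclidean_distance(emb1, emb2), cosine_distance(emb1, emb2))
```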

To add: if I run both the server and ffmpeg on the same machine, none of the problems above occur.

Does anyone understand this issue?

> It's the LAN, dear, or maybe the LAN firewall. Take a good look at those.

It is a LAN, but it's just a few machines connected through one switch. It's a very simple setup with no firewall at all, and all the other network communication software works fine on it.