Jeff Hoang
Hello @rajat-008. Could you please tell me where to get the OpenPose model? I also want to run infer.py.
Hi @peteryuX. I have followed your code many times and it is beautiful work. I am not familiar with TF 2 and have the same question about how to do...
Thank you for your advice.
Hi @sky186 @flazerain, have you fixed the problem above? I am also trying to convert the ArcFace LResNet100E-IR [mxnet](https://github.com/deepinsight/insightface/wiki/Model-Zoo) model to ONNX using [convert_onnx.py](https://github.com/deepinsight/insightface/tree/master/deploy). It seems I get an error with PReLU...
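In case it helps: a common cause of PReLU errors in MXNet-to-ONNX exports is the slope tensor having shape `(C,)`, which does not broadcast against NCHW inputs in some runtimes. Below is a minimal sketch of patching the exported graph with the `onnx` package; the file names are placeholders, and this assumes that shape mismatch is the failure mode here, which is not confirmed for this model.

```python
# Hypothetical sketch: reshape PRelu slope initializers so they broadcast
# over NCHW inputs, a common fix for MXNet -> ONNX ArcFace exports.
import onnx
from onnx import numpy_helper

model = onnx.load("arcface_lresnet100e_ir.onnx")  # placeholder path

# Names of every PRelu slope input in the graph.
prelu_slopes = {node.input[1] for node in model.graph.node
                if node.op_type == "PRelu"}

for init in model.graph.initializer:
    if init.name in prelu_slopes:
        slope = numpy_helper.to_array(init)
        if slope.ndim == 1:
            # (C,) -> (1, C, 1, 1) so it broadcasts over N, H, W.
            fixed = slope.reshape(1, -1, 1, 1)
            init.CopyFrom(numpy_helper.from_array(fixed, init.name))

onnx.checker.check_model(model)
onnx.save(model, "arcface_fixed.onnx")
```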
Hello @gan3sh500, can you share your onnx2tensorrt conversion?
@QuantumLiu, why does the TensorRT RetinaFace require TensorRT > 7.1, and where can I find it?
This is the full log:
```
&&&& RUNNING TensorRT.trtexec [TensorRT v8500] # /usr/src/tensorrt/bin/trtexec --onnx=qat_models/trained_qat/pgie/1/qat.onnx --int8 --fp16 --workspace=1024000 --minShapes=images:4x3x416x416 --optShapes=images:4x3x416x416 --maxShapes=images:4x3x416x416
[12/04/2023-09:06:56] [W] --workspace flag has been deprecated by --memPoolSize flag.
...
```
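As the warning in the log says, `--workspace` is deprecated in recent TensorRT releases in favor of `--memPoolSize`. A sketch of the equivalent invocation (assuming TensorRT 8.4+ flag syntax; the pool size is in MiB and simply mirrors the value from the log above):

```
/usr/src/tensorrt/bin/trtexec --onnx=qat_models/trained_qat/pgie/1/qat.onnx \
    --int8 --fp16 \
    --memPoolSize=workspace:1024000 \
    --minShapes=images:4x3x416x416 \
    --optShapes=images:4x3x416x416 \
    --maxShapes=images:4x3x416x416
```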
I changed the type of aligned_norm from uint8 to float32 and it works. I am using the inference_model_993_quant.tflite model. But why is it so slow? The embedding time is around 0.40273499488830566 s. My CPU is...
I found that invoke() takes up most of the time. Do you have any solution?
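Not a confirmed fix, but one common mitigation is to give the interpreter more CPU threads and to measure `invoke()` in isolation. A minimal sketch, assuming TensorFlow 2.x; the thread count of 4 is an arbitrary assumption to tune for your CPU, and the dummy input just takes the dtype the model itself reports:

```python
# Sketch: time invoke() alone and run the TFLite interpreter with
# multiple CPU threads, a common speedup for quantized models.
import time
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(
    model_path="inference_model_993_quant.tflite",
    num_threads=4,  # assumption: tune to your number of cores
)
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Dummy input with the shape and dtype the model expects.
x = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], x)

start = time.perf_counter()
interpreter.invoke()
print(f"invoke() took {time.perf_counter() - start:.4f}s")
embedding = interpreter.get_tensor(out["index"])
```

Note that `num_threads` only helps when the model's ops are multithreaded, so a fully quantized model on an older CPU can still be slow even with this change.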