
How should I configure GPU inference using the exe installation package?

Open ZJDATY opened this issue 2 years ago • 5 comments

After selecting the model, it reports this error.

My local environment is CUDA 11.7 + cuDNN 8.6 + onnxruntime-gpu 1.14.0. All of the above have been added to the environment variables, and CUDA_PATH is also configured correctly. I have also placed the DLL files for onnxruntime-gpu 1.14.0 in the running directory. The YOLOv8n model did not report any errors, but Task Manager shows that inference is not using the GPU. @vietanhdev May I ask what I should do?
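One way to narrow this down is to ask onnxruntime which execution providers it can actually see: if CUDAExecutionProvider is missing from the list, the CUDA/cuDNN DLLs were not loaded at all. A minimal sketch (the helper name and messages here are made up for illustration; only `onnxruntime.get_available_providers()` is a real API):

```python
def diagnose_providers(available):
    """Interpret the list returned by onnxruntime.get_available_providers()."""
    if "CUDAExecutionProvider" in available:
        return "CUDA provider is available; next, check the session's providers"
    if available == ["CPUExecutionProvider"]:
        return ("CPU-only: either the plain onnxruntime package is installed, "
                "or the CUDA/cuDNN DLLs failed to load")
    return "unexpected provider list: " + ", ".join(available)

# With onnxruntime-gpu installed, you would run:
#   import onnxruntime as ort
#   print(ort.get_available_providers())
#   print(diagnose_providers(ort.get_available_providers()))
```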

ZJDATY avatar May 25 '23 01:05 ZJDATY

Please check the versions of CUDA and CUDNN:

  • CUDA must be 11.6.
  • CUDNN: 8.2.4 (Linux), 8.5.0.96 (Windows). More information: https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html.

vietanhdev avatar May 25 '23 15:05 vietanhdev
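As a quick sanity check, the requirement above can be encoded in a small helper. This is only a sketch: the version strings are taken from the comment, and the function name is hypothetical.

```python
# Requirements quoted from the comment above, for onnxruntime-gpu 1.14
# on Windows: CUDA 11.6, cuDNN 8.5.0.96.
REQUIRED_CUDA = "11.6"
REQUIRED_CUDNN_WINDOWS = "8.5.0.96"

def versions_match(cuda_version: str, cudnn_version: str) -> bool:
    """Return True if the installed versions satisfy the requirement.

    Any 11.6.x CUDA release counts; cuDNN must match exactly.
    """
    cuda_major_minor = ".".join(cuda_version.split(".")[:2])
    return (cuda_major_minor == REQUIRED_CUDA
            and cudnn_version == REQUIRED_CUDNN_WINDOWS)

print(versions_match("11.6.2", "8.5.0.96"))  # True  (the setup tried below)
print(versions_match("11.7", "8.6"))         # False (the original setup)
```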

@vietanhdev I have set the environment variable paths as required, using CUDA 11.6.2 and cuDNN 8.5.0.96, and copied all the cuDNN folders under the CUDA/v11.6/ path. But the software still reports the same error.

ZJDATY avatar May 31 '23 02:05 ZJDATY

I placed all the DLL files of the correct versions in the exe path, but the same error was still reported.


ZJDATY avatar May 31 '23 02:05 ZJDATY

I face the same problem.

I used CUDA 11.8 and cuDNN 8.9.3 at first. It detected a segment successfully, then the program crashed immediately.
When I reopened the program and tried to load the model, it showed the error.

I found this issue ticket and installed the correct versions of CUDA and cuDNN, but still no luck.
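On Windows with Python 3.8+, DLL resolution can also fail silently if the CUDA bin directory is not on the DLL search path; registering it explicitly before importing onnxruntime sometimes helps. A sketch, assuming the default CUDA install location (the path and helper name are assumptions; `os.add_dll_directory` is a real Windows-only API):

```python
import os

# Assumed default install location for CUDA 11.6 on Windows.
CUDA_BIN = r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin"

def register_dll_dir(path: str) -> bool:
    """Add `path` to the Windows DLL search path if possible.

    os.add_dll_directory exists only on Windows (Python 3.8+),
    so this is a no-op elsewhere or when the path does not exist.
    """
    if hasattr(os, "add_dll_directory") and os.path.isdir(path):
        os.add_dll_directory(path)
        return True
    return False

# Call this before `import onnxruntime`:
# register_dll_dir(CUDA_BIN)
```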

CryMasK avatar Jul 23 '23 04:07 CryMasK

@ZJDATY I have the same problem. Did you solve it?

Loongle avatar Feb 03 '24 03:02 Loongle