ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled.
Total VRAM 12288 MB, total RAM 65304 MB
pytorch version: 2.2.0+cu121
xformers version: 0.0.24
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3060 : cudaMallocAsync
Using xformers cross attention
ASTERR config loaded successfully
Warn!: xFormers is available (Attention)
Warn!: Traceback (most recent call last):
File "D:\ComfyUI_Build\ComfyUI\nodes.py", line 1906, in load_custom_node
module_spec.loader.exec_module(module)
File "
Warn!: Cannot import D:\ComfyUI_Build\ComfyUI\custom_nodes\ComfyUI-3D-Pack module for custom nodes: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)
Check your custom_nodes directory and explicitly set providers to ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] when initializing the InferenceSession. The issue should go away; just blanket-set everything.
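A minimal sketch of the suggested fix. The `pick_providers` helper below is an illustration, not code from ComfyUI-3D-Pack; in a real session you would feed it `onnxruntime.get_available_providers()` and pass the result to `InferenceSession`, as the comments indicate.

```python
def pick_providers(preferred, available):
    """Keep only the preferred providers this ORT build actually exposes,
    falling back to CPU when none of them are available.
    (Illustrative helper, not part of the plugin.)"""
    chosen = [p for p in preferred if p in available]
    return chosen or ['CPUExecutionProvider']

# Example: a hypothetical build where only the CPU provider is present.
print(pick_providers(
    ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'],
    ['CPUExecutionProvider'],
))  # → ['CPUExecutionProvider']

# With onnxruntime installed, the real call would look like:
#   import onnxruntime as ort
#   providers = pick_providers([...], ort.get_available_providers())
#   session = ort.InferenceSession("model.onnx", providers=providers)  # "model.onnx" is a placeholder
```

Since ORT 1.9, omitting the `providers` argument raises exactly the ValueError above, so setting it explicitly is required, not optional.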
I edited D:\ComfyUI_Build\ComfyUI\custom_nodes\ComfyUI-3D-Pack\Gen_3D_Modules\Unique3D\scripts\utils.py to set providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'].
But the errors came again:

Total VRAM 12288 MB, total RAM 65304 MB
pytorch version: 2.2.0+cu121
xformers version: 0.0.24
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3060 : cudaMallocAsync
Using xformers cross attention
ASTERR config loaded successfully
Warn!: xFormers is available (Attention)
2024-07-18 10:24:59.4045468 [E:onnxruntime:Default, provider_bridge_ort.cc:1351 onnxruntime::TryGetProviderInfo_CUDA] D:\a_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1131 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "D:\ComfyUI_Build\python_embeded\Lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"
EP Error D:\a_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:636 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported. when using ['CUDAExecutionProvider', 'CPUExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
2024-07-18 10:24:59.4817444 [E:onnxruntime:Default, provider_bridge_ort.cc:1351 onnxruntime::TryGetProviderInfo_CUDA] D:\a_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1131 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "D:\ComfyUI_Build\python_embeded\Lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"
It seems onnxruntime 1.15 is incompatible with CUDA 12.1 and cuDNN 8.9.7? I have CUDA 12.1 and cuDNN 8.9.7 installed on my computer.
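LoadLibrary error 126 usually means the CUDA runtime DLLs that `onnxruntime_providers_cuda.dll` links against can't be found or don't match. One quick check is comparing what `nvcc --version` reports against the toolkit the installed onnxruntime-gpu build expects. A rough diagnostic sketch (the function names are invented for illustration, and the 11.8 figure comes from how this thread resolves, not from an official matrix):

```python
import re
import subprocess

def parse_cuda_release(nvcc_output):
    """Extract the toolkit version (e.g. '11.8') from `nvcc --version` text."""
    m = re.search(r"release (\d+\.\d+)", nvcc_output)
    return m.group(1) if m else None

def installed_cuda_version():
    """Best-effort probe; returns None when nvcc is not on PATH."""
    try:
        out = subprocess.run(["nvcc", "--version"],
                             capture_output=True, text=True).stdout
    except FileNotFoundError:
        return None
    return parse_cuda_release(out)

# Sample nvcc output for an 11.8 toolkit (the version this thread ends up installing):
sample = "Cuda compilation tools, release 11.8, V11.8.89"
print(parse_cuda_release(sample))  # → 11.8
```

If the probe reports a 12.x toolkit while onnxruntime-gpu 1.15.x is installed, that mismatch would be consistent with the DLL load failure in the log above.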
Check your custom_nodes directory, explicitly set providers to ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] when initializing the InferenceSession. The issue should go away, just blanket set everything.
Oh yeah, you are right. I edited the code:

providersCustom = ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
session = new_session(providers=providersCustom)
And because I installed the ZHO-ZHO YOLOWorld plugin, its inference-gpu dependency needs onnxruntime-gpu 1.15.1, and if I upgraded onnxruntime-gpu, other site-packages would need to change version too. So I finally installed CUDA 11.8 and cuDNN v8.9.0 on my PC, and the problem was solved.
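The pairing that worked here can be captured as a tiny sanity check. The table below has exactly one entry, taken from this thread's outcome rather than any official ORT support matrix, and `check_combo` is a name invented for illustration:

```python
# One known-good pairing, as reported in this thread (not an official ORT support matrix).
KNOWN_GOOD = {"1.15.1": ("11.8", "8.9")}

def check_combo(ort_version, cuda_version, cudnn_version):
    """True/False for ORT versions in the table; None when the version is unknown."""
    expected = KNOWN_GOOD.get(ort_version)
    if expected is None:
        return None  # can't judge an ORT version we have no data for
    exp_cuda, exp_cudnn = expected
    return (cuda_version.startswith(exp_cuda)
            and cudnn_version.startswith(exp_cudnn))

print(check_combo("1.15.1", "11.8", "8.9.0"))  # → True  (the combination that fixed it)
print(check_combo("1.15.1", "12.1", "8.9.7"))  # → False (the combination that failed)
```

Pinning versions this way is blunt but matches the resolution above: keep onnxruntime-gpu at 1.15.1 for inference-gpu, and install the CUDA/cuDNN pair that build was compiled against.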
Thanks!
That's great news! Glad it worked out.