InferenceHelper
iMX8 Plus NPU delegate
Environment (Hardware)
- Hardware: iMX8 Plus SoC with NPU.
- Software: Yocto, Qt, CMake
Information
I have a Qt6 application (CMake-based) that includes InferenceHelper with TensorFlow Lite support. This Qt project is part of a Yocto build that generates a Linux image for the iMX8 Plus platform.
The project compiles without problems, and InferenceHelper runs with the INFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_XNNPACK setting.
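For reference, the helper is created roughly like this in my application. The enum value, Initialize() signature, and kRetOk constant follow my reading of the InferenceHelper headers and may differ slightly between versions, so treat this as a sketch:

```cpp
#include <memory>
#include <string>
#include <vector>

#include "inference_helper.h"

// Current setup: CPU inference through the XNNPACK delegate.
// Names are taken from my reading of the InferenceHelper headers.
std::unique_ptr<InferenceHelper> MakeXnnpackHelper(
    const std::string& model_path,
    std::vector<InputTensorInfo>& inputs,
    std::vector<OutputTensorInfo>& outputs) {
  std::unique_ptr<InferenceHelper> helper(
      InferenceHelper::Create(InferenceHelper::kTensorflowLiteXnnpack));
  if (!helper ||
      helper->Initialize(model_path, inputs, outputs) != InferenceHelper::kRetOk) {
    return nullptr;  // creation or initialization failed
  }
  return helper;
}
```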
external_delegate_path
In Yocto, tensorflow-lite and tensorflow-lite-vx-delegate are integrated for the iMX8 Plus. If I run the following command from the installed examples:
USE_GPU_INFERENCE=0 ./label_image -m mobilenet_v1_1.0_224_quant.tflite -i grace_hopper.bmp -l labels.txt --external_delegate_path=/usr/lib/libvx_delegate.so
TensorFlow Lite then uses the NPU hardware acceleration. The important parts are the following two settings (a C++ sketch of the same flow follows the list):
- USE_GPU_INFERENCE=0
- --external_delegate_path=/usr/lib/libvx_delegate.so
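As far as I understand the label_image example, --external_delegate_path goes through TensorFlow Lite's generic external-delegate API, and USE_GPU_INFERENCE is an environment variable that the VX delegate evaluates when it is loaded. A minimal standalone sketch of that flow in TensorFlow Lite C++ (model name and delegate path taken from the command above, error handling abbreviated):

```cpp
#include <cstdlib>
#include <memory>

#include "tensorflow/lite/delegates/external/external_delegate.h"
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
  // Equivalent of USE_GPU_INFERENCE=0: the VX delegate reads this environment
  // variable when it loads, so set it before creating the delegate.
  setenv("USE_GPU_INFERENCE", "0", /*overwrite=*/1);

  auto model = tflite::FlatBufferModel::BuildFromFile(
      "mobilenet_v1_1.0_224_quant.tflite");
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);

  // Equivalent of --external_delegate_path=/usr/lib/libvx_delegate.so:
  // load the VX delegate through TFLite's external-delegate API.
  TfLiteExternalDelegateOptions options =
      TfLiteExternalDelegateOptionsDefault("/usr/lib/libvx_delegate.so");
  TfLiteDelegate* delegate = TfLiteExternalDelegateCreate(&options);
  if (interpreter->ModifyGraphWithDelegate(delegate) != kTfLiteOk) {
    // Delegation failed; the interpreter falls back to the CPU kernels.
  }

  interpreter->AllocateTensors();
  // ... fill input tensors, interpreter->Invoke(), read output tensors ...

  // The delegate must outlive the interpreter; delete it last.
  interpreter.reset();
  TfLiteExternalDelegateDelete(delegate);
  return 0;
}
```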
Question
Is it possible to include USE_GPU_INFERENCE and --external_delegate_path=/usr/lib/libvx_delegate.so in the XNNPACK settings? Or do I need to create a completely custom delegate integration?
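To make the intent concrete, the snippet below shows the usage I am hoping for. Note that kTensorflowLiteExternalDelegate and SetDelegatePath() are purely hypothetical and do not exist in InferenceHelper today:

```cpp
// Hypothetical API, for illustration only: neither kTensorflowLiteExternalDelegate
// nor SetDelegatePath() exists in InferenceHelper at the moment.
setenv("USE_GPU_INFERENCE", "0", 1);  // steer the VX delegate to the NPU
std::unique_ptr<InferenceHelper> helper(
    InferenceHelper::Create(InferenceHelper::kTensorflowLiteExternalDelegate));
helper->SetDelegatePath("/usr/lib/libvx_delegate.so");  // hypothetical setter
// ...then Initialize()/Process() as with the XNNPACK backend.
```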