Error running EfficientDet-Lite D0 model on GPU
Hi, I have trained an object detection model on my custom data using the EfficientDet-Lite D0 architecture. I created a TFLite model from a checkpoint using the export example. The model trains fine according to TensorBoard, and I can run inference on my PC without issues.

I want to run this model on an NPU, so I quantized it to uint8, but when I try to run inference on the platform I get the following error:

RuntimeError: Attempting to use a delegate that only supports static-sized tensors with a graph that has dynamic-sized tensors.

From what I have checked so far, this error means there is a tensor with a dynamic shape somewhere in the graph. I examined the model with Netron but couldn't find any dynamic operations. Do you have any clue what this dynamic part might be? Thanks.
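For reference, here is a minimal sketch of how the graph can be inspected for dynamic shapes with the TFLite Python interpreter (the path `model_quant.tflite` is just a placeholder for the exported uint8 model). A dimension of -1 in a tensor's `shape_signature` marks it as dynamic, which is easy to miss in Netron:

```python
import tensorflow as tf

# Load the quantized model with the plain CPU interpreter (no delegate),
# so the graph can be inspected even though the NPU delegate rejects it.
# "model_quant.tflite" is a placeholder for the exported uint8 model.
interpreter = tf.lite.Interpreter(model_path="model_quant.tflite")

# A -1 anywhere in shape_signature means that dimension is dynamic.
for detail in interpreter.get_tensor_details():
    signature = detail.get("shape_signature", detail["shape"])
    if -1 in signature:
        print(detail["index"], detail["name"], list(signature))
```

Running the same check on the float model before quantization might also help narrow down whether the dynamic tensor comes from the export itself or from the quantization step.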
I wanted to get an NPU for testing but wasn't able to. I'll try it out if I get one in the future.