IsiRad
Hi, if I want to only resize (not crop) the images, should I just remove the crop parameter from the YAML file? If I do not want to resize or...
When I try to compile the quantized MobileNetV2 ONNX model (using the code from the custom-model-onnx Jupyter notebook) in the EdgeAI Cloud, why does the kernel die? **Code for Compiling ONNX...
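For context, a minimal sketch of the compilation call I am using, following the onnxruntime + TIDLCompilationProvider pattern from the notebook (the paths and option values below are placeholders, not my exact settings):

```python
import onnxruntime as rt

# Placeholder paths -- substitute the actual model and output locations.
model_path = "MobileNetV2_checkpoint_quantized_2_best.onnx"
artifacts_folder = "./tidl_artifacts"

# Compile options in the style of the edgeai-tidl-tools examples.
# The exact keys/values here are illustrative, not my full configuration.
compile_options = {
    "tidl_tools_path": "/path/to/tidl_tools",  # placeholder
    "artifacts_folder": artifacts_folder,
    "tensor_bits": 8,
    "accuracy_level": 1,
    "advanced_options:calibration_frames": 3,
    "advanced_options:calibration_iterations": 3,
    "debug_level": 0,
}

so = rt.SessionOptions()
# TIDLCompilationProvider generates the offload artifacts; layers it
# cannot handle fall back to CPUExecutionProvider.
sess = rt.InferenceSession(
    model_path,
    providers=["TIDLCompilationProvider", "CPUExecutionProvider"],
    provider_options=[compile_options, {}],
    sess_options=so,
)
# Running the calibration inputs through this session triggers compilation.
```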
I set the compile options to the values listed above, and the kernel still dies when I try to compile the quantized MobV2 model. I also tried compiling the trained...
I can compile and run the quantized MobV2 model on ARM without TIDL Offload. However, when I try to compile the same model with TIDL Offload, the kernel dies.
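To be concrete, "on ARM without TIDL Offload" means a session with only the default CPU provider, which works; creating the session with the TIDL compilation provider is the step where the kernel dies (sketch, same placeholder paths as above):

```python
import onnxruntime as rt

# ARM-only (no TIDL offload): this session runs the quantized model fine.
cpu_sess = rt.InferenceSession(
    "MobileNetV2_checkpoint_quantized_2_best.onnx",  # placeholder path
    providers=["CPUExecutionProvider"],
)

# With TIDL offload: constructing this session is where the kernel dies.
# tidl_sess = rt.InferenceSession(
#     "MobileNetV2_checkpoint_quantized_2_best.onnx",
#     providers=["TIDLCompilationProvider", "CPUExecutionProvider"],
#     provider_options=[compile_options, {}],  # compile_options as above
# )
```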
When I try to attach the ONNX model to this message, I get an error that says "We don't support that file type". Could I send you an email with...
[MobileNetV2_checkpoint_quantized_2_best.zip](https://github.com/TexasInstruments/edgeai-torchvision/files/7995251/MobileNetV2_checkpoint_quantized_2_best.zip) Ok, I attached a zip file with the quantized MobV2 ONNX model to this message.
Yes, the original floating point model is attached to this message. [MobileNetV2_checkpoint_23_best.zip](https://github.com/TexasInstruments/edgeai-torchvision/files/7995299/MobileNetV2_checkpoint_23_best.zip)
I also tried replacing the fully connected layer (1x11520) with a 2D convolution layer (1x1280x3x3), and this version of the model likewise runs only on ARM without TIDL Offload....
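For reference, the replacement I mean is along these lines (a PyTorch sketch; it assumes the classifier head sees a 1280x3x3 feature map, so the linear layer has 11520 = 1280*3*3 inputs, and the class count here is a placeholder):

```python
import torch
import torch.nn as nn

num_classes = 11  # placeholder -- my actual class count

# Original head: flatten the 1280x3x3 feature map and apply a linear layer,
# giving a weight of shape (num_classes, 11520).
fc_head = nn.Sequential(nn.Flatten(), nn.Linear(1280 * 3 * 3, num_classes))

# Replacement head: a 3x3 convolution over the same 1280x3x3 feature map,
# giving a weight of shape (num_classes, 1280, 3, 3) -- the same parameters
# reshaped, so the outputs are numerically equivalent.
conv_head = nn.Conv2d(1280, num_classes, kernel_size=3)
conv_head.weight.data = fc_head[1].weight.data.view(num_classes, 1280, 3, 3)
conv_head.bias.data = fc_head[1].bias.data

x = torch.randn(1, 1280, 3, 3)
print(torch.allclose(fc_head(x), conv_head(x).flatten(1), atol=1e-5))  # True
```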
After using shape inference, I can compile and run both versions of the quantized MobV2 ONNX model with TIDL Offload.
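For anyone hitting the same issue: by shape inference I mean ONNX's built-in pass, which annotates the intermediate tensors with the shapes the TIDL import step appears to need (sketch; the file names are placeholders):

```python
import onnx
from onnx import shape_inference

# Load the exported model and run ONNX's shape-inference pass, which
# fills in the missing intermediate tensor shapes.
model = onnx.load("MobileNetV2_checkpoint_quantized_2_best.onnx")  # placeholder
inferred = shape_inference.infer_shapes(model)
onnx.checker.check_model(inferred)
onnx.save(inferred, "MobileNetV2_checkpoint_quantized_2_best_shapes.onnx")
```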
I'm trying to run the quantized MobV2 model with emulated TIDL offload locally on my computer, but I'm getting an error that libvx_tidl_rt.so cannot be opened even though that...
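To narrow this down, I tried loading the library directly with ctypes, which usually surfaces the real loader error (often a missing dependent .so rather than the file itself); this is just a diagnostic sketch with a placeholder path:

```python
import ctypes
import os

# Placeholder -- wherever tidl_tools unpacked the runtime library.
lib_path = "/path/to/tidl_tools/libvx_tidl_rt.so"

print("exists:", os.path.exists(lib_path))
try:
    # RTLD_GLOBAL makes the symbols visible to later dlopen() calls,
    # e.g. when the TIDL execution provider loads the runtime itself.
    ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL)
    print("loaded OK")
except OSError as err:
    # The OSError message shows the underlying dlopen() failure,
    # which often names a missing dependency rather than this file.
    print("dlopen failed:", err)
```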