libonnx
A lightweight, portable, pure C99 ONNX inference engine for embedded devices, with hardware acceleration support.
Hi, I'm trying to load `yolov5n.onnx` like this:

```c
#include "onnx.h"

int main(void)
{
	struct onnx_context_t *sess = onnx_context_alloc_from_file("yolov5n.onnx", NULL, 0);
	if (!sess)
		return 1;
	onnx_context_dump(sess, 1);
	onnx_context_free(sess);
	return 0;
}
```

but nothing was...
```
IR Version: v6
Producer: pytorch 1.11.0
Domain:
Imports: ai.onnx v11

Conv_0: Conv-11 (ai.onnx)
	Inputs:
		input.1: float32[1 x 3 x 352 x 352] = [...]
		onnx::Conv_760: float32[24 x 3 x 3...
```
I downloaded the Tiny YOLO v2 model from https://github.com/onnx/models/tree/main/vision/object_detection_segmentation/tiny-yolov2 and, when running inference on it, I got this output from Valgrind:

```
==178736== Invalid read of size 1
==178736==    at 0x162DF9: shash (onnxconf.h:146)
==178736==...
```
Hi, can I build and use libonnx on a GPU?
This is really a question; I don't think there is a bug here, just something I'm not understanding. I'm looking at the code for MaxPool and how it handles dilations....
https://github.com/Maratyszcza/NNPACK
https://github.com/microsoft/onnxruntime/blob/main/onnxruntime/core/flatbuffers/schema/ort.fbs
It'd be nice to have Python bindings, since `onnxruntime` by Micro$oft has telemetry, so it is a bit unethical to depend on it. Fortunately there can be a thin...
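A thin binding could be done with `ctypes` and no build step at all. The sketch below loads the shared library and declares the three calls used in the C snippet earlier in this thread; the argument types are assumptions from those snippets, and the library path (`libonnx.so`) is hypothetical since the project does not ship an installed shared object by default:

```python
import ctypes

def load_libonnx(path="libonnx.so"):
    """Load libonnx via ctypes and declare the signatures we use.

    Function names come from the C examples in this thread; the
    argument/return types are assumptions, not a verified ABI.
    Raises OSError if the shared library cannot be found.
    """
    lib = ctypes.CDLL(path)
    # struct onnx_context_t * is treated as an opaque pointer.
    lib.onnx_context_alloc_from_file.restype = ctypes.c_void_p
    lib.onnx_context_alloc_from_file.argtypes = [
        ctypes.c_char_p,  # filename
        ctypes.c_void_p,  # resolver (NULL for built-in ops)
        ctypes.c_int,     # resolver count
    ]
    lib.onnx_context_dump.restype = None
    lib.onnx_context_dump.argtypes = [ctypes.c_void_p, ctypes.c_int]
    lib.onnx_context_free.restype = None
    lib.onnx_context_free.argtypes = [ctypes.c_void_p]
    return lib

if __name__ == "__main__":
    lib = load_libonnx()
    ctx = lib.onnx_context_alloc_from_file(b"yolov5n.onnx", None, 0)
    if ctx:
        lib.onnx_context_dump(ctx, 1)
        lib.onnx_context_free(ctx)
```

Wrapping the raw pointer in a small Python class with `__del__` calling `onnx_context_free` would make it harder to leak contexts, but the flat functions above are enough to prototype against.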