tnghieu
@codscino I see that you're following pytorch_lite and this package, and I'm wondering whether you came to a solution for running inference faster. I'm also trying to run inference on...
> I have not understood your example, but here is a working repo with yolov8 https://github.com/codscino/yolo_tflite

I'm new to ML. How do I get the output detections from the interpreter...
> Here I am. I just published my work as open source here: https://github.com/ferraridamiano/yolo_flutter
>
> I really wanted to improve it but I do not have enough time to put...
I was able to implement this with native Dart isolates. It is indeed faster, in the sense that work is delegated across these isolates by a factor of N...
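A minimal sketch of the approach described above, using only `dart:isolate` (`Isolate.run`, Dart 2.19+). Here `runInference` is a hypothetical stand-in for the actual per-image model call, which is not shown in the thread:

```dart
import 'dart:isolate';

// Hypothetical stand-in for the real per-image model call.
int runInference(List<int> imageBytes) => imageBytes.length;

Future<List<int>> inferInParallel(List<List<int>> images, int n) async {
  // Split the images into n roughly equal chunks, one per isolate.
  final chunks = List.generate(n, (i) {
    final start = (images.length * i) ~/ n;
    final end = (images.length * (i + 1)) ~/ n;
    return images.sublist(start, end);
  });
  // Isolate.run spawns a short-lived isolate per chunk and returns its result.
  final results = await Future.wait(chunks.map(
    (chunk) => Isolate.run(() => chunk.map(runInference).toList()),
  ));
  // Flatten the per-isolate result lists back into one list, in order.
  return results.expand((r) => r).toList();
}
```

Note that each isolate has its own heap, so whatever model object `runInference` uses must be loaded inside the isolate (or be sendable to it), not shared from the main isolate.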
Still troubleshooting with isolates. I notice that pytorch_lite uses the Computer library to spawn 2 workers through ImageUtilsIsolate (by default). Therefore, if I have the main UI thread (1), 2...
Can you expand on that? Could I not spawn an isolate, load the ModelObjectDetection class into it, and then have separate instances of the model to run predictions with? ...
My intention was to split the inference work among the isolates. For a given set of 1000 images, give isolate 1: 333, isolate 2: 333, isolate 3: 334 to...
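The split described above (1000 images → 333/333/334 across 3 isolates) can be computed with integer division so that any remainder is absorbed without a separate special case; this is a sketch, not code from the package:

```dart
// Chunk sizes for distributing `total` items across `workers` isolates.
// Consecutive boundary differences sum to `total` and differ by at most 1.
List<int> chunkSizes(int total, int workers) {
  return List.generate(workers,
      (i) => (total * (i + 1)) ~/ workers - (total * i) ~/ workers);
}

void main() {
  print(chunkSizes(1000, 3)); // → [333, 333, 334]
}
```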
I see this function in the package: getImagePredictionListObjectDetection(List imageAsBytesList). Is this intended to be used for a list of images?