Running a custom trained Yolov3-tiny model
Hello there,
Thank you for this tutorial, it was really well written and I had this up and running in no time. I have a custom-made and functional Yolov3-tiny model which I want to try with this application, but I cannot seem to get it working.
I pointed the inference.py file to the new files, but when I restarted the app it ignored them and just ran the included model. I then tried renaming my own files to match the ones that came included, but the app seems to be getting its files from somewhere else and not using the ones in the folder at all?
I am very new to Docker, IoT Edge and basically everything in this tutorial, so it's probably just me not understanding how the backend of this all works. Can I get some pointers on how to make my own model work?
Thank you!
Hi,
I've been looking at the same. Steps appear to be:
- Put your `.cfg` and `.weights` files in the `modules/YoloModule/app/yolo` folder.
- Update `YoloInference.py` to point to the right `.cfg` and `.weights` files (see the sketch after this list).
- Rebuild the IoT Edge `YoloModule` with Docker; there is a very nice way to do this in VS Code using Docker and the IoT extension.
- Push the image to a container registry like Docker Hub or Azure Container Registry.
- Update `deployment.template.json` and `.env` to reference your custom Yolo image.
- Create the deployment manifest and deploy it to the Edge device.
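
For the `YoloInference.py` step, here is a minimal sketch of what the change might look like, assuming the module loads the model with OpenCV's DNN API (the actual loading code in `YoloInference.py` may differ). The file names `custom-yolov3-tiny.cfg`, `custom-yolov3-tiny.weights`, and `custom.names` are placeholders for your own files:

```python
# Minimal sketch (not the exact YoloModule code) of loading a custom
# YOLOv3-tiny model with OpenCV's DNN API. The paths below are placeholders
# for your own files copied into modules/YoloModule/app/yolo.
import cv2

CONFIG_PATH = "yolo/custom-yolov3-tiny.cfg"       # your custom .cfg
WEIGHTS_PATH = "yolo/custom-yolov3-tiny.weights"  # your custom .weights
LABELS_PATH = "yolo/custom.names"                 # class labels, one per line

# Read the class labels for your custom model.
with open(LABELS_PATH) as f:
    labels = [line.strip() for line in f if line.strip()]

# Load the Darknet .cfg/.weights pair.
net = cv2.dnn.readNetFromDarknet(CONFIG_PATH, WEIGHTS_PATH)

# The unconnected output layer names are what net.forward() needs later.
layer_names = net.getLayerNames()
output_layers = [layer_names[i - 1] for i in net.getUnconnectedOutLayers().flatten()]
```

One thing worth stressing: these files and paths get baked into the Docker image when the `YoloModule` is rebuilt, so editing or renaming files on the host after deployment has no effect on the running container. That is why renaming your files in the folder was ignored; you need to rebuild the image, push it to the registry, and redeploy for the change to show up.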