YOLOv8 model metadata on export
From a discussion in the Ultralytics Discord, a user was having issues with a model exported after SparseML training. I cooked up a quick process to add the metadata to the output ONNX model and thought it would be worthwhile to add here.
Of course, feel free to make changes as needed to comply with code standards/designs. I tried to keep the changes as minimal as possible, but suspect there could be other preferences. Let me know if there are any questions or changes needed 🚀
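For reference, a minimal sketch of what that process looks like, using the standard `onnx` Python package to write metadata into the exported model. The file path and metadata values below are illustrative only, not the exact keys or changes from this PR; adjust them to match what Ultralytics writes on export for your model.

```python
import onnx

# Hypothetical path to the ONNX model produced by the SparseML export
model_path = "yolov8n-sparse.onnx"
model = onnx.load(model_path)

# Example metadata of the kind Ultralytics normally embeds on export
# (class names, stride, task, image size) -- values here are placeholders
metadata = {
    "names": {0: "person", 1: "bicycle"},
    "stride": 32,
    "task": "detect",
    "imgsz": [640, 640],
}

# ONNX stores model metadata as string key/value pairs in metadata_props
for key, value in metadata.items():
    entry = model.metadata_props.add()
    entry.key = key
    entry.value = str(value)

onnx.save(model, model_path)
```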
Hi, there seems to be a problem using SparseML with Ultralytics; apparently there are problems in the config. Do you have quantization working with SparseML? If so, can you please tell me the versions of onnx, ultralytics, sparseml, and deepsparse you used? I'm desperate.
@KozlovKY sorry I missed your message. I can't remember what versions of Ultralytics or SparseML I used when I opened this PR. I was able to successfully output a model following the SparseML docs guide with the changes added in this PR. Maybe you could try incorporating the changes proposed here into your local install to see if that works.
@Burhan-Q thx for your response, I have tried all available versions of Ultralytics sparseml DeepSparse, the problem seems to lie in the sparseml config itself, there is a problem with quantization, after export to onnx model layers are duplicated several times and quantization itself does not occur issue #2276, I have not been able to solve this mystery yet
@KozlovKY I can't recall the device or versions I used, but I shared my test results with a user in our Discord server. Once you join the server and select a role, you can jump to the content using this link.
Per the main README announcement, SparseML is being deprecated as of June 2, 2025. Closing the PR as work has been suspended; thank you for the input and support!