OMI Format Compatibility
Opening up this thread to raise the conversation on compatibility with the Open Model Initiative (OMI) standardized key format. We'd engaged with Marc Sun a while back on this topic, but wanted to centralize the discussion with respect to diffusers here.
@Nerogar has consolidated the OMI formats for models in this repo: https://huggingface.co/Nerogar/omi_lora/tree/main
Before OneTrainer begins defaulting to the OMI format, we wanted to evaluate whether there was interest in getting support included in diffusers. I believe that Kohya will be introducing a converter for these, rather than setting it as the default.
Thanks for opening the thread, @hipsterusername. We will include support for the OMI format but won't make it the default for now; that's the same approach as Kohya.
Also, some of us (including me) are following the Discord, so we can stay up to date with it.
@hipsterusername thanks for sharing about the OMI initiative!
In the repo you have shared above, I don't see configuration files for the different formats. Where do you put the configuration attributes so that loading happens correctly?
I see some additional keys in the safetensors metadata:
modelspec.implementation=>https://github.com/huggingface/diffusers
modelspec.title=>HiDream I1 Full LoRA
ot_revision=>46f1ed84
ot_branch=>hidream
modelspec.architecture=>hidream-i1/lora
modelspec.sai_model_spec=>1.0.0
modelspec.hash_sha256=>0x3a2a0930c7488fde37ab87e87069d8a4940feb9acc5523ee0cfb91b152f6b69a
modelspec.date=>2025-05-01
Do you have a spec of what you expect there to be?
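For reference, those keys live in the file's JSON header, which can be read with the stdlib alone. The byte layout below follows the safetensors format: an 8-byte little-endian header length, then that many bytes of JSON, with free-form string metadata under the `__metadata__` key. The example builds a minimal metadata-only blob in memory; real files also carry tensor descriptors and data.

```python
import io
import json
import struct

def read_safetensors_metadata(stream):
    """Return the __metadata__ dict from a safetensors byte stream."""
    # First 8 bytes: little-endian u64 length of the JSON header.
    (header_len,) = struct.unpack("<Q", stream.read(8))
    header = json.loads(stream.read(header_len))
    return header.get("__metadata__", {})

# Minimal in-memory example carrying only metadata (no tensors).
meta = {
    "modelspec.sai_model_spec": "1.0.0",
    "modelspec.architecture": "hidream-i1/lora",
}
header_bytes = json.dumps({"__metadata__": meta}).encode("utf-8")
blob = struct.pack("<Q", len(header_bytes)) + header_bytes

print(read_safetensors_metadata(io.BytesIO(blob)))
```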
In the repo you have shared above, I don't see configuration files for the different formats. Where do you put the configuration attributes so that loading happens correctly?
I don't quite understand this question. The basic idea behind the format is that the key names are fully derived from the original model files. You don't need configuration files; if you know the model keys, you know exactly what the LoRA keys will look like.
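To illustrate the idea, a toy derivation might look like this. The `lora_down`/`lora_up`/`alpha` suffixes are placeholders, not the normative OMI spelling; the point is only that each LoRA key is a pure function of the base-model key, with no side-channel config required.

```python
def derive_lora_keys(model_key,
                     suffixes=("lora_down.weight", "lora_up.weight", "alpha")):
    """Derive LoRA key names from a base-model weight key.

    The suffix names are illustrative stand-ins for whatever the
    spec defines; the derivation itself needs no configuration.
    """
    stem = model_key.removesuffix(".weight")
    return [f"{stem}.{suffix}" for suffix in suffixes]

print(derive_lora_keys("double_blocks.0.img_attn.qkv.weight"))
```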
The metadata in those files is defined here: https://github.com/Stability-AI/ModelSpec. And you can find an (incomplete) list of architecture definitions in this file: https://github.com/Open-Model-Initiative/OMI-Model-Standards/blob/main/src/omi_model_standards/model_spec/architecture.py. The idea would be to list all relevant models in there, but so far I've only added the ones supported by OneTrainer.
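As a rough sketch, assembling that metadata could look like the following. The field values mirror the example keys above; the content-hash rule here (hashing a payload byte string) is a simplification rather than whatever the ModelSpec document actually mandates.

```python
import hashlib
from datetime import date

def build_modelspec_metadata(title, architecture, implementation, payload):
    """Assemble a ModelSpec-style metadata dict.

    Keys match those seen in the OMI example files; the hash input
    is an illustrative stand-in for the spec's real hashing rules.
    """
    return {
        "modelspec.sai_model_spec": "1.0.0",
        "modelspec.architecture": architecture,
        "modelspec.implementation": implementation,
        "modelspec.title": title,
        "modelspec.hash_sha256": "0x" + hashlib.sha256(payload).hexdigest(),
        "modelspec.date": date.today().isoformat(),
    }

md = build_modelspec_metadata(
    "HiDream I1 Full LoRA",
    "hidream-i1/lora",
    "https://github.com/huggingface/diffusers",
    b"tensor bytes go here",
)
print(md["modelspec.architecture"])
```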
You can ignore these two. They are OneTrainer specific keys and not relevant to the format:
ot_revision=>46f1ed84
ot_branch=>hidream
This repo also contains key conversion functions that can convert between the diffusers key layout and the OMI key layout: https://github.com/Open-Model-Initiative. Again, for now they only support models that are already supported by OneTrainer. But the goal would be to use this repository as the central place for everything related to the OMI format, including spec definitions and conversion functions.
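Conceptually, such conversion functions are just key-renaming maps applied in both directions. A toy round-trip sketch follows; the prefix table is invented for the demo, and the real mapping tables live in the repository mentioned above.

```python
# Hypothetical prefix mapping between a diffusers-style layout and an
# OMI-style layout. The entries are illustrative, not the real tables.
DIFFUSERS_TO_OMI = {
    "transformer.": "diffusion_model.",
}

def diffusers_to_omi(key):
    """Rename a diffusers-layout key into the (assumed) OMI layout."""
    for src, dst in DIFFUSERS_TO_OMI.items():
        if key.startswith(src):
            return dst + key[len(src):]
    return key

def omi_to_diffusers(key):
    """Inverse renaming, so conversion is lossless in both directions."""
    for src, dst in DIFFUSERS_TO_OMI.items():
        if key.startswith(dst):
            return src + key[len(dst):]
    return key

key = "transformer.blocks.0.attn.to_q.lora_down.weight"
print(diffusers_to_omi(key))
```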
I believe that Kohya will be introducing a converter for these, rather than setting it as the default.
Yes, I plan to introduce a bidirectional converter. I think it will be more practical than modifying all the existing scripts in my repositories.
In the future, once a standard is established across the ecosystem (not just training but inference as well), I expect to make it the default in my repositories as well.