Single-file and GGUF support for Cosmos-Predict2
As a Mac user I'm usually restricted in the memory-reduction options available to me, so I'd like to ask for GGUF support for the new Cosmos models. I have enough RAM for the 2B version, but would like to be able to use GGUF versions of the 14B model.
Unfortunately, the current transformer code does not support loading from a single file, so the following snippet fails to load the transformer, and I expect additional work would be needed on top of that to get GGUF running.
```python
import torch
from diffusers import CosmosTransformer3DModel, GGUFQuantizationConfig

gguf_path = "/Volumes/SSD2TB/AI/caches/models/cosmos-predict2-14b-text2image-Q5_K_M.gguf"
transformer = CosmosTransformer3DModel.from_single_file(
    gguf_path,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
    # config=model_id
)
```
GGUF files are downloadable from https://huggingface.co/calcuis/cosmos-predict2-ggu
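As an aside, it can be useful to sanity-check that a downloaded file really is a GGUF container before handing it to a loader. A minimal sketch of reading the fixed-size GGUF header (per the GGUF spec: 4-byte `GGUF` magic, little-endian uint32 version, uint64 tensor count, uint64 metadata KV count); the demo file written here is synthetic, not a real model:

```python
import os
import struct
import tempfile

def read_gguf_header(path):
    """Read the fixed-size GGUF header and return its fields as a dict."""
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file (magic={magic!r})")
        # Little-endian: uint32 version, uint64 tensor_count, uint64 metadata_kv_count
        version, tensor_count, kv_count = struct.unpack("<IQQ", f.read(20))
    return {"version": version, "tensor_count": tensor_count, "metadata_kv_count": kv_count}

# Demo with a tiny synthetic header (version 3, 2 tensors, 5 metadata keys).
with tempfile.NamedTemporaryFile(delete=False, suffix=".gguf") as tmp:
    tmp.write(b"GGUF" + struct.pack("<IQQ", 3, 2, 5))
    demo_path = tmp.name

print(read_gguf_header(demo_path))  # {'version': 3, 'tensor_count': 2, 'metadata_kv_count': 5}
os.remove(demo_path)
```

This only inspects the header; it does not validate the tensor data, but it catches truncated or mislabeled downloads cheaply.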
Thanks for the request, @Vargol! Supporting single-file loading in Cosmos has been on my mind. I'll try to have something ready this week.
I think you can refer to ComfyUI's implementation; it is very compatible with GGUF.