ml-stable-diffusion
Can't specify `--image xx` (image2image)
Does the current ml-stable-diffusion support image2image? I get the following error when trying to use it. Any help or comments are appreciated.
- ml-stable-diffusion version: the latest
- Prompt entered:

```
swift run StableDiffusionSample --resource-path /Users/myUserName/ml-stable-diffusion/checkPoints/coreml-Deliberate/split-einsum/deliberate_v2_split-einsum --step-count 50 --compute-units cpuAndNeuralEngine --disable-safety --output-path ~/Downloads "A test description" --image-count 5 --image /Users/myUserName/Downloads/test\.png
```

- Output with error:
```
Build complete! (0.08s)
Loading resources and creating pipeline
(Note: This can take a while the first time using these resources)
Sampling ...
StableDiffusion/Encoder.swift:96: Fatal error: Unexpectedly found nil while unwrapping an Optional value
[1]    20995 trace trap  swift run StableDiffusionSample --resource-path --step-count 50 5
```
Hey @ToddCool, could you please confirm that you have VAEEncoder.mlmodelc in your `--resource-path`? If not, you will need to pass `--convert-vae-encoder` during the PyTorch-to-Core-ML conversion step to generate it.
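For reference, a sketch of what that conversion invocation could look like. This is an assumption-laden example, not a verbatim command from this thread: `<model-version>` and the output directory are placeholders, and you should check the repo's README for the exact set of arguments your checkout expects.

```shell
# Sketch of the torch2coreml conversion step; image2image needs the VAE
# encoder, so --convert-vae-encoder is included alongside the usual flags.
# <model-version> and ./coreml-output are placeholders, not values from the thread.
python -m python_coreml_stable_diffusion.torch2coreml \
  --model-version <model-version> \
  --convert-unet --convert-text-encoder --convert-vae-decoder \
  --convert-vae-encoder \
  --attention-implementation SPLIT_EINSUM \
  -o ./coreml-output
```

After the conversion finishes, the output directory should contain VAEEncoder.mlmodelc, and the `--image` option should no longer trap.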