
Inference with SDXL on Python

Open datovar4 opened this issue 2 years ago • 6 comments

I've tried running inference with the SDXL model, but it gives me an error: NSLocalizedDescription = "Error Computing NN outputs". I don't have any issues with SD 1.4 or SD 2.1. I noticed that for Swift inference you have to pass an --xl flag. Is there something I should add to pipeline.py to get inference to work for SDXL?

datovar4 avatar Aug 20 '23 16:08 datovar4

Hello! The Python pipeline is a proof of concept and does not include the changes necessary to support SDXL variants. The error you mentioned can have various causes (see FAQ Q2), but it most frequently happens when the process runs out of memory, and the SDXL pipeline consumes significantly more memory than non-XL SD models. What is your Mac chip generation and RAM size?
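
If you want to rule out memory pressure programmatically before loading the pipeline, here is a minimal sketch. Note that psutil is not part of this repo, and the 10 GB threshold is only a rough assumption, not a measured requirement:

import psutil

# Check free RAM before loading the SDXL pipeline; the 10 GB threshold
# below is a rough assumption, not a measured requirement.
available_gb = psutil.virtual_memory().available / 1024**3
print(f"Available RAM: {available_gb:.1f} GB")
if available_gb < 10:
    print("Low free memory: 'Error Computing NN outputs' is often an OOM symptom.")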

atiorh avatar Aug 20 '23 21:08 atiorh

Thanks for getting back to me so quickly! I have an M1 Pro with 32 GB of RAM

datovar4 avatar Aug 20 '23 21:08 datovar4

Those specs are fine; SDXL runs on my M1 with 16 GB of RAM. The issue might be that other concurrent processes are taking too much RAM. To test that hypothesis, I would use Activity Monitor to quit the top memory-consuming processes, then retry SDXL (with the Swift pipeline, using --image-count 1 and --reduce-memory).
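
For example, an invocation along those lines might look like this (the resource path is a placeholder; check the CLI help for the exact flags in your version):

$ swift run StableDiffusionSample "ufo glowing 8k" --resource-path <path-to-compiled-models> --xl --image-count 1 --reduce-memory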

atiorh avatar Aug 20 '23 21:08 atiorh

I'm trying to integrate the images I create into a Python pipeline, which is why I've been trying to get inference running with Python rather than Swift. I'll see if I can work a Swift call in there. Do you think you will release SDXL inference with Python anytime soon?
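
One stopgap for that is shelling out to the Swift CLI from Python. A minimal sketch, with placeholder paths and the flags mentioned above (assumptions, not a tested recipe):

import subprocess

# Call the Swift CLI from Python; both paths below are placeholders.
subprocess.run(
    [
        "swift", "run", "StableDiffusionSample",
        "ufo glowing 8k",
        "--resource-path", "/path/to/compiled/models",
        "--xl",
        "--image-count", "1",
        "--reduce-memory",
        "--output-path", "/path/to/output",
    ],
    check=True,  # raise CalledProcessError if generation fails
)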

datovar4 avatar Aug 20 '23 21:08 datovar4

I just got this running. I downloaded the files from this directory:

https://huggingface.co/apple/coreml-stable-diffusion-mixed-bit-palettization/tree/main/coreml-stable-diffusion-xl-base_mbp_4_50_palettized/compiled
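
To script that download step, one option is huggingface_hub's snapshot_download; the allow_patterns filter below assumes the repo layout shown in the URL above:

from huggingface_hub import snapshot_download

# Download only the compiled SDXL variant from the Hugging Face repo.
local_dir = snapshot_download(
    repo_id="apple/coreml-stable-diffusion-mixed-bit-palettization",
    allow_patterns="coreml-stable-diffusion-xl-base_mbp_4_50_palettized/compiled/*",
)
print(local_dir)  # snapshot root; the compiled models sit in the subfolder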

Once I did that, my command looked like this:

$ python3 -m python_coreml_stable_diffusion.pipeline --prompt "ufo glowing 8k" --model-version stabilityai/stable-diffusion-xl-base-1.0 -i coreml-stable-diffusion-mixed-bit-palettization_original_compiled/compiled/ -o . --compute-unit CPU_AND_GPU

The only way I could get the XL pipeline to actually run was by including --compute-unit CPU_AND_GPU. This was mentioned in a tweet.

burningion avatar Oct 06 '23 16:10 burningion

Just following up here, if you're trying to get this working in Python:

from diffusers import StableDiffusionXLPipeline
from python_coreml_stable_diffusion.pipeline import get_coreml_pipe

prompt = "ufo glowing 8k"
negative_prompt = ""

# Load the reference PyTorch pipeline first; get_coreml_pipe reuses its
# tokenizer, scheduler, and config when building the Core ML pipeline.
pytorch_pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    use_auth_token=False,
)

# Wrap the compiled Core ML models in a drop-in pipeline.
coreml_pipe = get_coreml_pipe(
    pytorch_pipe=pytorch_pipe,
    mlpackages_dir="./coreml-stable-diffusion-mixed-bit-palettization_original_compiled/compiled/",
    model_version="stabilityai/stable-diffusion-xl-base-1.0",
    compute_unit="CPU_AND_GPU",
    scheduler_override=None,
    controlnet_models=None,
    force_zeros_for_empty_prompt=False,
    sources=None,
)

# No ControlNet conditioning for a plain text-to-image run.
controlnet_cond = None

image = coreml_pipe(
    prompt=prompt,
    height=coreml_pipe.height,  # use the resolution the models were converted for
    width=coreml_pipe.width,
    num_inference_steps=60,
    guidance_scale=7.5,
    controlnet_cond=controlnet_cond,
    negative_prompt=negative_prompt,
)

img = image["images"][0]
img.save("out.jpg")


img will be a Pillow image, saved as "out.jpg" in the current directory.

Inference for a single image takes around 4 minutes on my M1 Pro with 64 GB, after the .mlmodelc files have compiled for the first time. In my case, compilation took about 20 minutes for the UNet alone.

burningion avatar Oct 06 '23 17:10 burningion