
Some models show a performance degradation of approximately 20x on the iPhone 14 Pro Max compared to an M1 Pro MacBook, while standard Stable Diffusion 2.1 shows only ~1.5x degradation on iPhone.

Open jo32 opened this issue 2 years ago • 0 comments

I have tested the following models on my iPhone 14 Pro Max:

1. coreml-stable-diffusion-2-1-base: https://huggingface.co/pcuenq/coreml-stable-diffusion-2-1-base https://huggingface.co/pcuenq/coreml-stable-diffusion-2-1-base/blob/main/coreml-stable-diffusion-2-1-base_split_einsum_compiled.zip

Took ~15s on the MacBook (M1 Pro); took ~20s on the iPhone 14 Pro Max.

2. coreml-8528-diffusion: https://huggingface.co/coreml/coreml-8528-diffusion https://huggingface.co/coreml/coreml-8528-diffusion/blob/main/split_einsum/8528-diffusion_split-einsum_compiled.zip

Took ~15s on the MacBook (M1 Pro); took ~5min on the iPhone 14 Pro Max,

and its memory usage is a lot higher than the first model's.

Here is my configuration:


```swift
import CoreML
import StableDiffusion

#if targetEnvironment(macCatalyst)
let runningOnMac = true
#else
let runningOnMac = false
#endif

// Use GPU on Mac, Neural Engine on iPhone; reduce memory only on device.
let configuration = MLModelConfiguration()
configuration.computeUnits = runningOnMac ? .cpuAndGPU : .cpuAndNeuralEngine
let pipeline = try StableDiffusionPipeline(resourcesAt: url,
                                           configuration: configuration,
                                           disableSafety: false,
                                           reduceMemory: !runningOnMac)

var config = StableDiffusionPipeline.Configuration(
    prompt: "string1"
)
config.negativePrompt = "string2"
config.stepCount = numInferenceSteps        // 15
config.seed = UInt32(seed)                  // 32
config.guidanceScale = Float(guidanceScale) // 7.5
config.disableSafety = disableSafety        // true
config.schedulerType = .dpmSolverMultistepScheduler
```
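Since the timings above depend on where measurement starts and stops, it may help to show the measured call explicitly. A minimal timing sketch, assuming the `pipeline` and `config` values built above and the `generateImages(configuration:progressHandler:)` API of the ml-stable-diffusion Swift package:

```swift
import Foundation

// Measure wall-clock time for a single generation run.
// `pipeline` and `config` are the values from the configuration snippet above.
let start = Date()
let images = try pipeline.generateImages(configuration: config,
                                         progressHandler: { _ in true }) // return true to keep going
let elapsed = Date().timeIntervalSince(start)
print("Generated \(images.count) image(s) in \(String(format: "%.1f", elapsed))s")
```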

jo32 · Feb 11 '23 07:02