Avoiding Model Interference When Using set_options in a Multi-User Environment
Hello,
I would like to address an issue related to the usage of set_options in a multi-user environment. As my API is being used by multiple individuals simultaneously, I have noticed a specific scenario that requires attention.
Let me provide an example:
User A opens the webpage and selects the SD1.5 model for image generation. Meanwhile, User B utilizes the API interface to set the options and chooses the SD2.0 model.
In this situation, even though User A sees that they have selected the SD1.5 model on the webpage, the actual inference is performed using the SD2.0 model due to User B's subsequent selection.
My question is: How can we ensure that using set_options does not impact the model already chosen on the webpage?
Thank you for your assistance.
Best regards,
I think this issue is related to how stable-diffusion-webui queues up user requests: https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/10205#issuecomment-1540212387
USER A: load the model `A`, please
SDUI: Okay (loading)
USER B: load the model `B`, please
SDUI: Okay (task A is pending, adding B to queue)
SDUI to USER A: model A is loaded
USER A: generate the image (user thinks that model A is ready, but in reality there's a pending task to switch to model `B`)
SDUI: Okay (task B is pending, adding generate_A to queue)
SDUI to USER B: model B is loaded
USER B: generate the image
SDUI: Okay (task generate_A is pending, adding generate_B to queue)
SDUI to USER A: Generated an image using model B
SDUI to USER B: Generated an image using model B
That results in both generate_A and generate_B using model B.
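For reference, the race can be reproduced with two API clients. Below is a minimal sketch, assuming a local instance started with `--api`; `/sdapi/v1/options` and `/sdapi/v1/txt2img` are the standard endpoints, while the base URL and checkpoint filenames are placeholders you would need to adjust.

```python
import threading
import requests

BASE = "http://127.0.0.1:7860"  # local webui started with --api (assumption)

def select_model_and_generate(checkpoint, prompt):
    # Queue a global model switch -- this is what set_options does.
    requests.post(f"{BASE}/sdapi/v1/options",
                  json={"sd_model_checkpoint": checkpoint})
    # Queue a generation; by the time it runs, another user's options call
    # may already have switched the loaded checkpoint.
    r = requests.post(f"{BASE}/sdapi/v1/txt2img",
                      json={"prompt": prompt, "steps": 20})
    r.raise_for_status()
    # The "info" field of the response reports which checkpoint actually ran.
    print(checkpoint, "->", r.json()["info"])

# "sd15.safetensors" / "sd20.safetensors" are hypothetical filenames.
a = threading.Thread(target=select_model_and_generate,
                     args=("sd15.safetensors", "a cat"))
b = threading.Thread(target=select_model_and_generate,
                     args=("sd20.safetensors", "a dog"))
a.start(); b.start()
a.join(); b.join()
```

Whichever options call the queue processes last determines the checkpoint that both generations actually run on.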
You can speed up model switching by caching multiple models at once, but I think it will require a lot of RAM: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#caching-models
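If you enable that, the cache size can also be set through the options endpoint. Here is a minimal sketch, assuming the setting key is `sd_checkpoint_cache` (the "Checkpoints to cache in RAM" option described on that wiki page):

```python
import requests

BASE = "http://127.0.0.1:7860"  # local webui started with --api (assumption)

# Keep up to 2 checkpoints cached in RAM so queued model switches are faster.
# Assumption: the setting key is sd_checkpoint_cache, matching the
# "Checkpoints to cache in RAM" option from the wiki page above.
requests.post(f"{BASE}/sdapi/v1/options", json={"sd_checkpoint_cache": 2})
```

Note that caching only makes the queued model switches faster; requests from different users still go through the same single queue.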
Hi zhixideyu, did you find any solution for this? Thanks
Also:
- Interrupting the process or skipping the currently generated image interrupts/skips the currently running process, even if it belongs to another user (see the sketch below).
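A minimal sketch of that behaviour, assuming a local instance started with `--api`: the interrupt and skip endpoints act on whatever job is currently running, not on the caller's own job.

```python
import requests

BASE = "http://127.0.0.1:7860"  # local webui started with --api (assumption)

# Interrupt whatever generation is currently running on the shared instance,
# even if it was started by another user.
requests.post(f"{BASE}/sdapi/v1/interrupt")

# Same global effect for skipping the image currently being generated.
requests.post(f"{BASE}/sdapi/v1/skip")
```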
Hey, we found a pretty easy solution that we are happy with. Using the xyz plot script you can choose the model and other parameters like clip skip per request, and if every user does it, it works like a charm! Pro tip: turn off the generate legend option.
I am also in the process of making an a1111 extension that just loads the required model, but I got stuck a little: so far I have only made a script, and the extension is breaking for some reason.