Is it possible to share cached models between two or more ComfyUI instances?
I have two GPUs and run two ComfyUI instances on different ports, each bound to a different GPU. The two instances often run the same workflow, and most of the models they use are the same, yet each instance loads its own copy, so they take twice the RAM. That is a very high RAM cost. I want to share these identical models among the running instances so they use less RAM. Is it possible? Thank you for your reply!
That could be possible, but you would have to change the source code to use shared memory or an in-memory store such as Redis or Memcached. I don't think it's a good idea to do that.
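To make the shared-memory option concrete, here is a rough sketch (not ComfyUI code; the block name `comfy_weights` and the tensor shape are made up) of how two independent processes could share one CPU copy of a weight tensor through the standard library's `multiprocessing.shared_memory`:

```python
import numpy as np
import torch
from multiprocessing import shared_memory

SHAPE, DTYPE = (1024, 1024), np.float32  # stand-in for real model weights

# Process A: create the block and copy the weights into shared RAM once.
weights = torch.randn(SHAPE)
shm = shared_memory.SharedMemory(create=True, name="comfy_weights",
                                 size=weights.numel() * weights.element_size())
np.ndarray(SHAPE, dtype=DTYPE, buffer=shm.buf)[:] = weights.numpy()

# Process B: attach to the same block by name and wrap the bytes, zero-copy.
shm_b = shared_memory.SharedMemory(name="comfy_weights")
view = np.ndarray(SHAPE, dtype=DTYPE, buffer=shm_b.buf)
weights_b = torch.from_numpy(view)  # no second copy in system RAM

# Cleanup once both sides are done: shm_b.close(); shm.close(); shm.unlink()
```

You would still have to rewrite ComfyUI's model loading to route every tensor of every checkpoint through something like this, which is why I don't recommend it.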
Hope this helps :)
If you want to do that, you should run multiple server instances within a single ComfyUI process instead of running multiple ComfyUI processes; that way all the servers share one address space, so a loaded model only exists once in RAM.
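A minimal sketch of that single-process idea in plain PyTorch (this is not ComfyUI's actual server code; `TinyModel` stands in for a real checkpoint):

```python
import threading
import torch
import torch.nn as nn

# Stand-in for a real checkpoint; any nn.Module behaves the same way.
class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(1024, 1024)

    def forward(self, x):
        return self.net(x)

# Built once on the CPU: this is the only copy in system RAM.
cpu_state = TinyModel().state_dict()

def worker(device: str) -> None:
    # Each GPU worker instantiates its own module and fills it from the
    # shared CPU state_dict; VRAM is per-GPU, but system RAM is not doubled.
    model = TinyModel().to(device)
    model.load_state_dict(cpu_state)
    with torch.no_grad():
        y = model(torch.randn(8, 1024, device=device))
    print(device, tuple(y.shape))

# Assumes a machine with two CUDA devices, matching the question's setup.
threads = [threading.Thread(target=worker, args=(f"cuda:{i}",))
           for i in range(min(2, torch.cuda.device_count()))]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Each GPU still needs its own VRAM copy, of course, but the checkpoint lives in system RAM only once, and every server in the process can reuse the same cache.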