Jairo Correa
Same here, there is a memory leak, probably introduced around the 14th-16th; older commits don't have that issue. The memory increases right after the batch generation starts, keeps the...
I found the problem: it is gradio 3.5. The leak starts at commit 4ed99d599640bb86bc793aa3cbed31c6d0bd6957; downgrading gradio back to 3.4.1 solves the leak. I don't know what other...
> I have this problem when I use --medvram (RAM fills up and then swaps until the system crashes), but not when I don't. lowvram and medvram offload the model...
@leandrodreamer yes, it may be a mix of settings. You can try reverting commit 4ed99d599640bb86bc793aa3cbed31c6d0bd6957 to test whether your problem is the same one I identified or something else.
> Still had the same problem, nothing changed after the latest git pull. Decided to reinstall from scratch, and lo and behold, no more memory leaks. Sadly it didn't work for...
Changing to an inpainting model calls `load_model()` and creates a new model, but the previous model is not removed from memory; even calling `gc.collect()` is not removing...
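For illustration, this is a minimal sketch of the kind of cleanup I would expect to free the old checkpoint before a new one is built. The `shared`/`load_model` names and the unload steps here are assumptions for the example, not the actual webui code:

```python
import gc
import torch

def reload_checkpoint(shared, load_model):
    """Hypothetical cleanup before loading a new checkpoint.

    Assumes `shared.sd_model` holds the current model and that nothing
    else keeps a reference to it; if other modules do, gc.collect()
    will not release the memory (which is what I am seeing).
    """
    old_model = shared.sd_model
    shared.sd_model = None           # drop the global reference
    if old_model is not None:
        old_model.to("cpu")          # make sure no CUDA tensors linger
        del old_model
    gc.collect()                     # reclaim host RAM held by the old weights
    torch.cuda.empty_cache()         # release cached VRAM blocks
    shared.sd_model = load_model()   # build the replacement model
    return shared.sd_model
```

If other modules still hold a reference to the old model, this alone will not free it, which matches what I observe.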
@random-thoughtss I tried those lines to delete the sd_model but it didn't work, so it must be something else. About the inpainting logic, `LatentInpaintDiffusion` just defines the properties `masked_image_key`...
@random-thoughtss it is the RAM, not the VRAM; the RAM increases every time `load_model()` is called, and because the leak is the same size as the model I thought it...
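To show what I mean by the RAM growth, this is roughly how it can be measured (a sketch: `psutil` reports the process RSS, and `load_model` stands in for the webui's loader):

```python
import gc
import os
import psutil

def rss_mb() -> float:
    """Resident set size of this process in MiB."""
    return psutil.Process(os.getpid()).memory_info().rss / (1024 ** 2)

def measure_leak(load_model, iterations: int = 3) -> None:
    """Call load_model() repeatedly and print RAM usage after each call.

    If every iteration adds roughly one model's worth of RAM even after
    gc.collect(), the previous model is still referenced somewhere.
    """
    print(f"baseline: {rss_mb():.0f} MiB")
    for i in range(iterations):
        load_model()
        gc.collect()
        print(f"after load #{i + 1}: {rss_mb():.0f} MiB")
```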
Just to report the progress I made: it is indeed a reference problem. Some places keep a reference to the model, which prevents the garbage collector from freeing the...
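The way to hunt for those lingering references is roughly this (a sketch; the object passed in stands for the model that is not being freed):

```python
import gc

def find_referrers(obj, limit: int = 10) -> None:
    """Print objects that still hold a reference to `obj`.

    Frames, module dicts, and closures that show up here are the places
    that have to drop their reference before gc.collect() can free the
    model.
    """
    referrers = gc.get_referrers(obj)
    print(f"{len(referrers)} referrers found")
    for ref in referrers[:limit]:
        print(type(ref), repr(ref)[:120])

# Hypothetical usage: find_referrers(shared.sd_model)
```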