Miguel Ortiz


Hi, I faced the same problem. It happens because the model is not releasing GPU memory after inference; I had to call `torch.cuda.empty_cache()` and `gc.collect()` before the return...
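A minimal sketch of this cleanup pattern, wrapping an arbitrary inference call (the `run_inference` helper and `model_fn` argument are hypothetical names, not from ControlNet's code; the torch import is guarded so the sketch also runs on CPU-only machines):

```python
import gc


def run_inference(model_fn, *args):
    """Run inference, then release cached GPU memory before returning (sketch)."""
    result = model_fn(*args)
    # Free CUDA caching-allocator blocks if torch with CUDA is present;
    # empty_cache() only releases memory that is no longer referenced,
    # so collect dangling Python references first.
    gc.collect()
    try:
        import torch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()
    except ImportError:
        pass  # no torch installed; nothing GPU-side to free
    return result
```

Note that `empty_cache()` returns cached blocks to the driver but cannot free tensors that are still referenced, which is why dropping references (and running `gc.collect()`) has to happen first.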

@faiza333 you can find that function in any Gradio file of the repository, e.g. [gradio_scribble2image.py](https://github.com/lllyasviel/ControlNet/blob/d3284fcd0972c510635a4f5abe2eeb71dc0de524/gradio_scribble2image.py#L23)