T2I-Adapter
CUDA out of memory.
Seems that 8GB is not supported yet:
RuntimeError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 0; 8.00 GiB total capacity; 5.40 GiB already allocated; 0 bytes free; 6.53 GiB reserved in total by PyTorch)
Have you found a fix for this? I'm getting the same issue.
@imperator-maximus @successor1 The original study was trained on 4× 32GB Tesla V100 GPUs.
pipe.enable_model_cpu_offload() works well on a 10GB CUDA GPU.
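A minimal sketch of how model CPU offload can be enabled with the diffusers T2I-Adapter pipeline; the model IDs, the canny conditioning image path, and fp16 settings are assumptions, so adjust them to your setup. Offloading moves sub-models to the GPU only when they are needed, which should lower peak VRAM compared to keeping the whole pipeline resident:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionAdapterPipeline, T2IAdapter

# Load the adapter and pipeline in half precision to reduce VRAM usage
# (model IDs are examples; swap in the checkpoints you actually use).
adapter = T2IAdapter.from_pretrained(
    "TencentARC/t2iadapter_canny_sd15v2", torch_dtype=torch.float16
)
pipe = StableDiffusionAdapterPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", adapter=adapter, torch_dtype=torch.float16
)

# Keep sub-models on the CPU and move each one to the GPU only while it runs,
# instead of holding the whole pipeline in VRAM (requires `accelerate`).
pipe.enable_model_cpu_offload()

# Hypothetical conditioning image: replace with your own canny edge map.
control_image = Image.open("canny_edges.png").convert("RGB").resize((512, 512))

image = pipe(
    "a photo of a living room, best quality",
    image=control_image,
    num_inference_steps=30,
).images[0]
image.save("result.png")
```

Whether this fits in 8GB will depend on resolution and the other memory savers you combine it with (e.g. pipe.enable_attention_slicing()), but it avoids keeping every sub-model allocated at once.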