torch.cuda.OutOfMemoryError: CUDA out of memory.
I keep getting this error even though I've turned my settings all the way down.
Here are my settings:
What's your GPU?
NVIDIA GeForce RTX 3050 Laptop GPU
You need a GPU with >= 8 GB of VRAM.
Are all of your samples below 10 seconds? A dataset that is too big (more than 1 h) on a GPU with <6 GB of VRAM can cause an OOM error. First trim the samples and try again. If that doesn't work, use a smaller dataset, around 10-30 min.
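If it helps, here is a rough sketch of the kind of trimming I mean. It just cuts every clip down to 10 seconds with torchaudio; the folder names are placeholders, so adjust them to your dataset layout.

```python
# Rough trimming sketch (not part of this repo): cut every WAV to at most 10 s.
# Assumes torchaudio is installed; "dataset_raw" / "dataset_trimmed" are placeholder folders.
import os
import torchaudio

MAX_SECONDS = 10
SRC_DIR = "dataset_raw"
DST_DIR = "dataset_trimmed"

os.makedirs(DST_DIR, exist_ok=True)
for name in os.listdir(SRC_DIR):
    if not name.lower().endswith(".wav"):
        continue
    waveform, sample_rate = torchaudio.load(os.path.join(SRC_DIR, name))
    # Keep only the first 10 seconds of each clip; long clips are the usual OOM culprit.
    trimmed = waveform[:, : MAX_SECONDS * sample_rate]
    torchaudio.save(os.path.join(DST_DIR, name), trimmed, sample_rate)
```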
I have exactly the same problem, same GPU. I tried reducing the batch size to 1 and using audio samples under 10 s, but it didn't work.
Same problem. I tried running on two different PCs, one with an Asus NVIDIA GeForce RTX 2080 (6 GB) and one with a Colorful NVIDIA GeForce RTX 3060 (4 GB). Both have the same problem.
In my case, doing everything on Ubuntu 22.04 resolved all OOM errors. Unfortunately, Windows apparently has some problems allocating VRAM. You should be able to start training with batch=1 on 4 GB of VRAM easily. I train without any problems on a GTX 1660 Super 6 GB with batch=3.
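If you are stuck on Windows, one generic PyTorch-level thing to try (not specific to this repo) is the allocator setting mentioned in the OOM message, plus a quick check of how much VRAM is actually free before training starts:

```python
# Generic PyTorch workaround sketch, not specific to this project.
# PYTORCH_CUDA_ALLOC_CONF must be set before the first CUDA allocation.
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch

# Report how much VRAM is actually free before training starts.
free_bytes, total_bytes = torch.cuda.mem_get_info()
print(f"Free VRAM: {free_bytes / 1024**3:.2f} GiB of {total_bytes / 1024**3:.2f} GiB")
```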
Yes, I also think the problem is Windows-related. I'll try Fedora tomorrow and report back.
If it's a Windows-related issue, has any workaround been found for this? I'm also facing the same problem, though v1 runs perfectly fine on my system.
I'm having the same issue. I have a GeForce RTX 3050 with 4 GB. I can run Stable Diffusion and Deforum locally, so I would really love to be able to run this.
https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/wiki/FAQ-(Frequently-Asked-Questions)#q8cuda-errorcuda-out-of-memory
Did you find a solution?
I also had this problem before on my 4 GB graphics card.
I set the following options and the problem was solved. Of course, you can increase the values; I set the minimum just for testing. My graphics card is an "NVIDIA GeForce GTX 1650 with Max-Q Design".
rmvpe
Save frequency = 1
Batch size per GPU = 1
Cache all training sets to GPU = no
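If you want to double-check that these settings actually fit on a 4 GB card, a generic PyTorch check (not part of this project's code) can report the peak VRAM used once a few training steps have run:

```python
# Generic PyTorch check, not part of this project's code: report peak VRAM
# after some training has run, to see how much headroom batch size = 1 leaves.
import torch

torch.cuda.reset_peak_memory_stats()
# ... run a few training steps here ...
peak_gib = torch.cuda.max_memory_allocated() / 1024**3
print(f"Peak VRAM used so far: {peak_gib:.2f} GiB")
```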