CUDA out of memory
Your team has done an excellent job. When I run training with four NVIDIA RTX 2080 GPUs and the batch_size set to the minimum of 4, it always fails with 'CUDA out of memory'. Are there any model parameters I can reduce to solve this problem? Thank you very much.
During training, you can reduce the parameters here and here to save memory.
During testing, how can I save memory? Thank you.
Could you tell me whether reducing the LIMIT parameter will have any effect on the model? Will it reduce performance? Thank you.
The performance decrease should be limited as long as you do not reduce it drastically. For inference, you can refer to here.
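For inference specifically, a large share of the memory usually goes to autograd bookkeeping and to running the whole input at once. A generic PyTorch sketch (the `nn.Linear` model and all sizes are placeholders, not the project's code):

```python
import torch
from torch import nn

# Placeholder model standing in for the project's own network.
model = nn.Linear(16, 2).eval()
inputs = torch.randn(64, 16)          # dummy inputs

outputs = []
with torch.inference_mode():          # no gradient buffers are allocated
    for chunk in inputs.split(8):     # small chunks keep peak activation memory low
        outputs.append(model(chunk))
result = torch.cat(outputs)
print(result.shape)                   # torch.Size([64, 2])
```

Wrapping the forward pass in `torch.inference_mode()` (or `torch.no_grad()`) and splitting the input into chunks trades a little speed for a much lower peak memory footprint.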
I reduced both LIMIT values from 30 to 10, but I still get CUDA out of memory on an RTX 2080 Ti (12 GB).