Лэюань
I'm sure there is no special operation in the batch, because when I switched to torch.istft, everything is fine. I'm using a single GeForce RTX 3080 with 10015 MB of memory in total.
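For reference, a minimal sketch of the kind of swap described here (not the reporter's actual code): compute a spectrogram with torch.stft and invert it with torch.istft. The n_fft/hop_length values are placeholders; use whatever the model actually uses.

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
n_fft, hop_length = 1024, 256          # placeholder values
window = torch.hann_window(n_fft, device=device)

wav = torch.randn(4, 16000, device=device)               # (batch, samples)
spec = torch.stft(wav, n_fft, hop_length=hop_length,
                  window=window, return_complex=True)    # (batch, freq, frames)
recon = torch.istft(spec, n_fft, hop_length=hop_length,
                    window=window, length=wav.shape[-1])
print(recon.shape)                                        # torch.Size([4, 16000])
```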
Thank you very much. I can no longer reproduce this problem: I checked my losses and commented some of them out, and the speed looks normal.
I still haven't solved this problem. Today I found that when GPU-Util reaches 100%, loss.backward() gets stuck; otherwise it runs normally. The problem happens periodically.
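One way to narrow this down (a sketch only, with a hypothetical stand-in network rather than the real model) is to time forward and backward separately with torch.cuda.synchronize() while watching nvidia-smi in another terminal, to confirm whether backward() is really the step that stalls when GPU-Util sits at 100%:

```python
import time
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Hypothetical stand-in network and batch; replace with the real model/data.
model = nn.Sequential(
    nn.Conv1d(1, 32, kernel_size=1024, stride=256),
    nn.ReLU(),
    nn.Conv1d(32, 1, kernel_size=1),
).to(device)
batch = torch.randn(8, 1, 16000, device=device)

def timed(fn):
    # CUDA kernels run asynchronously, so synchronize before and after timing.
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    t0 = time.time()
    out = fn()
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return out, time.time() - t0

# Watch `nvidia-smi` (e.g. `watch -n 0.5 nvidia-smi`) in another terminal
# while this loop runs, to see GPU-Util when backward() slows down.
for step in range(20):
    loss, fwd_t = timed(lambda: model(batch).pow(2).mean())
    _, bwd_t = timed(loss.backward)
    model.zero_grad(set_to_none=True)
    print(f"step {step}: forward {fwd_t:.3f}s  backward {bwd_t:.3f}s")
```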
I'm using a GeForce RTX 2080, and with the same code everything is fine.
> It seems to be a GPU bottleneck to me. Can you check, when you use torch.istft, what the GPU utility is? Does it reach 100%?
>
> If torch.istft...
I tested an RTX 3090 and it faces the same issue. @KinWaiCheuk, do you have any plans or suggestions for how to solve this?
> Currently this problem happens on the RTX 3090 and RTX 3080, but not on the RTX 2080? I am not sure if there is something in the RTX 3000 series that...
> I did a quick search, and I found that other people are facing the same problem using PyTorch on an RTX 3090. Since nnAudio is implemented using convolutional layers...
> I am afraid that there is nothing we can do at the moment. We might need to wait for NVIDIA to release a new version of CUDA or cuDNN. I...
> @ABC0408 Just curious if you could post a Colab, to see what the performance differences are there.

I tried to use a small model and random batch inputs to...
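For a Colab-style comparison, a harness along these lines could work (a sketch only: it times torch.istft on random inputs; the nnAudio-based inversion would be passed to the same bench() function, with the exact call depending on the nnAudio version in use):

```python
import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
n_fft, hop_length, batch, frames = 1024, 256, 8, 400    # placeholder sizes
window = torch.hann_window(n_fft, device=device)
spec = torch.randn(batch, n_fft // 2 + 1, frames,
                   dtype=torch.cfloat, device=device)   # random batch input

def bench(fn, reps=100):
    # Warm up, then report the mean time per call in milliseconds.
    for _ in range(5):
        fn()
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    t0 = time.time()
    for _ in range(reps):
        fn()
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return (time.time() - t0) / reps * 1000

istft_ms = bench(lambda: torch.istft(spec, n_fft, hop_length=hop_length, window=window))
print(f"torch.istft: {istft_ms:.2f} ms per call")
# Pass the nnAudio-based inversion to bench() the same way for comparison.
```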