Pan Xie
@italosalgado14 > I have the same problem running the code with CUDA 11. This happens because of an incompatibility with cuDNN 8, which in turn we are forced to...
@lzj9072 I ran into the same situation and made some changes to the source code. Everything works after testing, and it can be adapted to different modeling units. This is my...
Hi @iamhankai, I am confused by this line `x_j, _ = torch.max(x_j - x_i, -1, keepdim=True)` in the MRConv code. `x_j=[b, c, n, k]` holds the nodes selected by computing the...
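For context, a minimal sketch of what that `torch.max` line computes, using numpy as a stand-in for torch and hypothetical small shapes (the `[b, c, n, k]` layout is from the comment; the concrete sizes are assumptions):

```python
import numpy as np

# Hypothetical tiny example: batch b=1, channels c=2, nodes n=3, neighbors k=4.
rng = np.random.default_rng(0)
x_i = rng.standard_normal((1, 2, 3, 1))  # each node's own feature, broadcast over k
x_j = rng.standard_normal((1, 2, 3, 4))  # features of the k selected neighbor nodes

# torch.max(x_j - x_i, -1, keepdim=True) reduces over the last (neighbor) axis:
# per channel and node, it keeps the largest neighbor-minus-center difference.
diff = x_j - x_i
x_max = np.max(diff, axis=-1, keepdims=True)

print(x_max.shape)  # the k axis collapses to 1: (1, 2, 3, 1)
```

So the line aggregates each node's neighborhood into a single "max relative" feature per channel, which is the max-relative aggregation MRConv is named after.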
@iamhankai Thanks, I understand!
@alznn Maybe you can try `tf.nn.moments(inputs, axes)`. I use it and get no error. Note that after computing the mean and variance, you still need to use...
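A minimal sketch of what `tf.nn.moments(inputs, axes)` returns and the normalization step the comment alludes to, with numpy standing in for TensorFlow (the sample values, the batch-axis choice, and the epsilon are assumptions):

```python
import numpy as np

# tf.nn.moments(inputs, axes) returns the mean and the *biased* variance of
# `inputs` computed over the given axes; numpy's defaults match that.
inputs = np.array([[1.0, 2.0],
                   [3.0, 4.0]])
axes = 0  # reduce over the batch axis, as in batch normalization

mean = inputs.mean(axis=axes)      # per-feature mean
variance = inputs.var(axis=axes)   # per-feature biased variance

# The follow-up step the comment hints at: use the moments to normalize,
# with a small epsilon for numerical stability (value assumed here).
eps = 1e-3
normalized = (inputs - mean) / np.sqrt(variance + eps)

print(mean)      # [2. 3.]
print(variance)  # [1. 1.]
```

The moments alone do nothing; they only become useful once fed into a normalization such as `tf.nn.batch_normalization`.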
I found out where the problem is: the results of ctcdecoder differ significantly between GPU and CPU. I moved the ctc_loss calculation to the CPU and got the desired...
Moreover, when tested on an A100, the GPU memory usage of linfusion is 4675 MB while that of dreamshape-v8 is 4345 MB. That doesn't seem reasonable either?