Ziang Long

Results: 6 comments of Ziang Long

I suggest using the following line instead: `contex = V.cumsum(dim=-2) / torch.arange(1, L_V + 1).view(1, 1, L_V, 1).to(V)`
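A minimal sketch of what this line does, with assumed shapes and tensor names (not the repository's code): the global mean context is replaced by a causal cumulative mean, so position i only aggregates `V[:i+1]`.

```python
import torch

# Sketch: replace a global-mean context with a causal cumulative mean.
B, H, L_V, D = 2, 4, 8, 16          # hypothetical batch, heads, length, depth
V = torch.randn(B, H, L_V, D)

# Running sum over the length dimension, divided by the number of rows summed.
# The arange starts at 1 so the first position is not divided by zero.
contex = V.cumsum(dim=-2) / torch.arange(1, L_V + 1).view(1, 1, L_V, 1).to(V)

# Sanity check: each position equals the mean of V up to and including it.
assert torch.allclose(contex[:, :, 3], V[:, :, :4].mean(dim=-2), atol=1e-6)
```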

> > > Hi, thanks for your attention to our work. You can also refer to issues #140, #88, and #23 for more information. Thanks. It is good to...

I took a closer look at the code and found that the LayerNorm layers that follow have learnable parameters. I wonder if you are aware of the effect of using `cumsum`...
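A short sketch of the concern raised here, with assumed names and shapes rather than the repository's code: the cumulative-mean context is followed by a LayerNorm whose affine parameters are trained, so any per-feature scale or shift introduced by the cumsum can be partly re-absorbed by those parameters.

```python
import torch
import torch.nn as nn

d_model = 16
# elementwise_affine=True means gamma/beta are learnable, which is the point
# being questioned above.
norm = nn.LayerNorm(d_model, elementwise_affine=True)

x = torch.randn(2, 8, d_model)   # hypothetical (batch, length, d_model)
context = x.cumsum(dim=1) / torch.arange(1, x.size(1) + 1).view(1, -1, 1).to(x)
out = norm(context)              # learnable gamma/beta follow the cumsum
```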

> Hello, this code, like the original, needs about 120 epochs of training. The learning rate decays exponentially, and the loss also drops quickly at first and then more slowly. My own experimental numbers are below, for reference only. 1 Epoch Loss 3.0518 2 Epoch Loss 2.8765 3 Epoch Loss 2.8402 4 Epoch Loss 2.8101 5 Epoch Loss 2.8025 6 Epoch Loss 2.8103 7 Epoch...
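For illustration, a minimal PyTorch sketch of the schedule described in that quote: an exponentially decaying learning rate over roughly 120 epochs. The model, initial learning rate, and decay factor here are assumptions, not values from the repository.

```python
import torch

model = torch.nn.Linear(10, 10)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.96)

for epoch in range(120):
    # ... one epoch of training would go here ...
    optimizer.step()      # placeholder step so the scheduler order is valid
    scheduler.step()      # lr_t = lr_0 * gamma ** epoch
```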

> After data processing, is this what it should look like: data shape: (3200000000,), features shape: (180000000,)? Running from there, the code throws an error: > > Traceback (most recent call last): File "/Users/bytedance/Desktop/mywork/lpcnet/main.py", line 33, in dataloader = get_data(pcm_file, feature_file) File "/Users/bytedance/Desktop/mywork/lpcnet/utils.py", line 30, in get_data **features...

> When checking the code, I found that the Mdense layer has no softmax, which is a little different from the official code. And when testing, bad samples are generated. Have you ever met...
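A hypothetical illustration of the issue quoted above, written in PyTorch rather than the official Keras code: the final dense layer outputs unnormalized logits over the 256 mu-law sample values, so a softmax is needed to turn them into a distribution before sampling the next excitation value.

```python
import torch
import torch.nn.functional as F

logits = torch.randn(1, 256)                      # assumed output of an Mdense-like layer
probs = F.softmax(logits, dim=-1)                 # normalize to a probability distribution
sample = torch.multinomial(probs, num_samples=1)  # draw the next sample index

# Sampling from raw logits, or treating them as probabilities, would be the
# kind of mismatch that produces the bad samples described above.
```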