Tiance Wang

18 comments by Tiance Wang

@boshs It looks the same as https://github.com/k2-fsa/icefall/pull/1039. Updating your k2 version may solve it.
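For reference, a quick way to check which k2 version is currently installed before upgrading — a minimal sketch that only assumes k2 was installed as a pip/wheel distribution registered under the name `k2`:

```python
# Minimal sketch: print the installed k2 version so it can be compared
# against what the icefall recipe expects. Assumes a pip/wheel install
# whose distribution name is "k2".
from importlib.metadata import version, PackageNotFoundError

try:
    print("k2 version:", version("k2"))
except PackageNotFoundError:
    print("k2 is not installed in this environment")
```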

Hello, I'm not sure if I should open a new issue for this, but are the pretrained models trained with the default hyperparameters? And do all the pretrained models match the accuracies...

> > Hello, not sure if I should open a new issue for this, but are the pretrained models trained with default hyperparameters? And do all the pretrained models match...

Hi, have you got any results with phone-based models? I previously tried this on LibriSpeech and the result was worse than with BPE. For the pruned transducer I only got 4-5 WER...

Thanks! But your result seems very close. I'll try your recipe on LibriSpeech sometime.

> > Please remove all files/folders whose name contains k2 inside the directory
> > /root/miniconda3/lib/python3.8/site-packages
> > and then re-run `python3 setup.py install`.
> >
> > Sorry, forgot to say thanks...
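A hedged sketch of the cleanup step quoted above — it assumes nothing beyond the standard library's `site` module; review the printed paths before letting it delete anything:

```python
# Hedged sketch of the quoted cleanup: find everything under site-packages
# whose name contains "k2" and remove it before reinstalling from source.
# Deleting is destructive -- inspect the printed paths first.
import shutil
import site
from pathlib import Path

for sp_dir in site.getsitepackages():  # e.g. .../lib/python3.8/site-packages
    for entry in Path(sp_dir).glob("*k2*"):
        print("removing", entry)
        if entry.is_dir():
            shutil.rmtree(entry)
        else:
            entry.unlink()
```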

These numbers look pretty good. Would love to see some streaming model results too!

> @yaozengwei the results you got with the 20M parameter model are better than those with the 70M model according to the results posted https://github.com/k2-fsa/icefall/blob/master/egs/librispeech/ASR/RESULTS.md
>
> Isn't that unexpected?...

| Model      | Clean WER | Other WER | Size (M) |
|------------|-----------|-----------|----------|
| RNN-T      | 5.9       | 15.71     | 30       |
| Conformer  | 5.7       | 14.24     | 29       |
| ContextNet | 6.02      | 14.42     | 28       |
| ConvRNN-T  | 5.11      | 13.82     | 29       |

The WER shown in the paper seems a lot...

> I think there are some problems in your training codes. Your hyps are empty. You can print the ids of the output. Look at what they are. I suspect...
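A minimal sketch of the debugging step suggested in the quote — print the raw token ids that decoding produces and check for empty hypotheses before converting them to text. The `hyps` variable and the `bpe.model` path are hypothetical placeholders, not taken from the original thread:

```python
# Hedged sketch: inspect raw decoder output ids before mapping them to text.
# `hyps` and "bpe.model" are hypothetical placeholders for illustration.
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="bpe.model")

hyps = [[52, 8, 143, 9], []]  # example output; the second hypothesis is empty
for i, ids in enumerate(hyps):
    print(f"utt {i}: ids={ids} text={sp.decode(ids)!r}")
    if not ids:
        print(f"utt {i}: empty hypothesis -- check the training setup")
```

If every hypothesis comes back empty, the ids themselves usually tell you why (e.g. the model emits only blanks), which is easier to act on than an empty transcript.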