qzfnihao

Search results: 6 issues by qzfnihao

I tried to reproduce streaming_convnets on the LibriSpeech data on a 4-GPU machine. I found it hard to train on all of the Libri-Light data, so I only used 1k hours...

inference
am training

Hi, as we know, MWER training can reduce WER by a further 8% on CTC and seq2seq models. Is there any plan to implement it?

enhancement
question

I tried to reproduce the LibriSpeech results using train_am_tds_ctc.cfg: --runname=am_tds_ctc_librispeech --rundir=/root/wav2letter.debug/recipes/models/sota/2019/librispeech/ --archdir=/root/wav2letter.debug/recipes/models/sota/2019/ --arch=am_arch/am_tds_ctc.arch --tokensdir=/root/wav2letter.debug/recipes/models/sota/2019/model_data/am --tokens=librispeech-train-all-unigram-10000.tokens --lexicon=/root/wav2letter.debug/recipes/models/sota/2019/model_data/am/librispeech-train+dev-unigram-10000-nbest10.lexicon --train=/root/librispeech/lists/train-clean-100.lst,/root/librispeech/lists/train-clean-360.lst,/root/librispeech/lists/train-other-500.lst --valid=dev-clean:/root/librispeech/lists/dev-clean.lst,dev-other:/root/librispeech/lists/dev-other.lst --batchsize=16 --lr=0.3 --momentum=0.5 --maxgradnorm=1 --onorm=target --sqnorm=true --mfsc=true --nthread=10 --criterion=ctc --wordseparator=_...

am training
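The flags above are normally collected in a flags file and passed to the wav2letter++ Train binary. A minimal sketch of the invocation, assuming the binary was built under a local build/ directory (the paths here are hypothetical):

```shell
# Hypothetical paths: adjust to where wav2letter++ was built and
# where the recipe's flags file lives.
build/Train train \
  --flagsfile=recipes/models/sota/2019/librispeech/train_am_tds_ctc.cfg
```

Flags given on the command line override those in the flags file, which is convenient for changing --lr or --batchsize between runs without editing the .cfg.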

While debugging the CTC model's decoding in recipes/streaming_convnets, I found something confusing: the lexicon decoder uses an ordinary beam search algorithm, not prefix beam search for CTC....

question
inference
decoder
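For reference on the distinction the issue raises: unlike plain beam search over frame-level paths, CTC prefix beam search merges all paths that collapse to the same label sequence, tracking blank-ending and non-blank-ending probabilities per prefix. A minimal sketch (not the wav2letter lexicon decoder; a generic illustration with made-up probabilities):

```python
from collections import defaultdict

def ctc_prefix_beam_search(probs, beam_size=4, blank=0):
    """Minimal CTC prefix beam search over a T x V probability matrix.

    Each prefix carries two scores: p_b (paths ending in blank) and
    p_nb (paths ending in a non-blank). Paths collapsing to the same
    label sequence are merged, which plain path-level beam search misses.
    Returns the most probable collapsed prefix as a tuple of symbol ids.
    """
    beam = {(): (1.0, 0.0)}  # empty prefix: all mass on "ends in blank"
    for t in range(len(probs)):
        next_beam = defaultdict(lambda: (0.0, 0.0))
        for prefix, (p_b, p_nb) in beam.items():
            for s, p in enumerate(probs[t]):
                if p == 0.0:
                    continue
                if s == blank:
                    # Blank extends the same collapsed prefix.
                    nb_b, nb_nb = next_beam[prefix]
                    next_beam[prefix] = (nb_b + (p_b + p_nb) * p, nb_nb)
                else:
                    new_prefix = prefix + (s,)
                    nb_b, nb_nb = next_beam[new_prefix]
                    if prefix and s == prefix[-1]:
                        # Repeated symbol only grows the prefix after a blank...
                        next_beam[new_prefix] = (nb_b, nb_nb + p_b * p)
                        # ...otherwise it collapses into the same prefix.
                        sb, snb = next_beam[prefix]
                        next_beam[prefix] = (sb, snb + p_nb * p)
                    else:
                        next_beam[new_prefix] = (nb_b, nb_nb + (p_b + p_nb) * p)
        # Prune to the beam_size most probable prefixes.
        beam = dict(sorted(next_beam.items(),
                           key=lambda kv: kv[1][0] + kv[1][1],
                           reverse=True)[:beam_size])
    return max(beam.items(), key=lambda kv: kv[1][0] + kv[1][1])[0]

# Two frames, vocab {0: blank, 1: 'a', 2: 'b'}. Per-frame argmax picks
# blank twice (empty output), but the merged mass of all paths collapsing
# to 'a' (0.4025) beats the empty string (0.16).
probs = [[0.4, 0.35, 0.25], [0.4, 0.35, 0.25]]
print(ctc_prefix_beam_search(probs, beam_size=8))  # -> (1,)
```

The example shows why prefix search matters for CTC: the best single path ("blank, blank") loses to a label sequence whose probability is spread over several paths.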

I trained a model with TA using a transformer on AISHELL-1, with encoder left window 15 and right window 15, and decoder left window 15 and right window 2. I got better accuracy on the training data....

In streamin_transformer.py, prefix_recognize looks like a frame-synchronous decoding algorithm that merges chunk decoding and trigger decoding. I tried to find papers on chunk transformers and trigger attention, but found none. Can you...