Boris Ginsburg
**Call for contribution:** "Can you build a supervised speaker diarization model, similar to [https://ai.googleblog.com/2018/11/accurate-online-speaker-diarization.html], but convnet-based?" _You can build it for float32 only, and we will help to add support..._
Can you attach the complete logs for mixed precision, please?
Looks like a bug in the automatic loss scaling that we use in mixed precision. Can you retry transfer learning with mixed precision with one additional parameter: `"loss_scaling": 1000.0,  # "loss_scaling": 100.0`, and...
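For context, a minimal sketch of where such a parameter would go, assuming the usual OpenSeq2Seq convention of a Python config file with a `base_params` dict; the keys other than `loss_scaling` are illustrative placeholders, not taken from the thread:

```python
# Hypothetical excerpt from an OpenSeq2Seq config file (a Python dict).
# "loss_scaling" is the parameter suggested in the comment above;
# "dtype": "mixed" is the assumed switch that enables mixed precision.
base_params = {
    "dtype": "mixed",         # train in mixed precision (fp16 compute, fp32 master weights)
    "loss_scaling": 1000.0,   # fixed loss scale suggested above
    # "loss_scaling": 100.0,  # smaller value to try if 1000.0 still overflows
}
```

A fixed scale like this replaces the automatic back-off scaling, which is useful for isolating whether the scaler itself is the source of the NaNs.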
First of all, if you use **Horovod**, please set **"num_gpus": 1,** in the config file. Next:

> "The program shows "No enough steps for benchmarking," and it stops."

Do you...
What dataset do you use?
Do you mean to compute the validation loss for K last epochs and save only the best of them? Or save the checkpoints for all last K epochs?
To address the memory problem, you can use this parameter:

`'num_checkpoints': int,  # maximum number of last checkpoints to keep`

We also support another useful flag, which takes the best...
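A sketch of how that parameter would be used, again assuming the standard OpenSeq2Seq `base_params` config dict; the `logdir` path and the value 5 are illustrative:

```python
# Hypothetical config excerpt: bound disk usage by keeping only the
# N most recent checkpoints. "num_checkpoints" is the parameter
# named in the comment above.
base_params = {
    "logdir": "experiments/asr",  # placeholder checkpoint directory
    "num_checkpoints": 5,         # keep at most the 5 most recent checkpoints
}
```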
You can set the flag `'use_trt': True,` in the config. See more flags here: [model.py](https://github.com/NVIDIA/OpenSeq2Seq/blob/master/open_seq2seq/models/model.py)
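A minimal sketch of setting that flag, assuming it lives in the same `base_params` config dict as the other OpenSeq2Seq parameters; only `use_trt` comes from the comment above:

```python
# Hypothetical inference-time config excerpt enabling TensorRT via the
# 'use_trt' flag mentioned above; see model.py in the repository for
# the full list of supported flags.
base_params = {
    "use_trt": True,  # route inference through TensorRT
}
```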
You should set the flag in the config file, not in model.py.

On Thu, Jul 11, 2019 at 3:08 PM ajaysg wrote:

> @borisgin thank you very much. is it enough...
The current implementation is targeted at offline ASR.