Geonmin Kim
Thanks for your interest in AAS. Check the README page: "The pre-trained enhancement model (E) trained on Librispeech + DEMAND is available here. (acoustic:adversarial = 1:100000, #hidden = 500, #layer...
You can add `--mode test --load_path PATH_TO_PRETRAINED_MODEL` to the training script. For example: `python main.py --mode test --trainer AAS --DB_name chime --rnn_size 500 --rnn_layers 4 --ASR_path ../AM_training/models/librispeech_final.pth.tar --load_path /data/kenkim/AAS_enhancement/model.pth.tar` If...
1. To train/test FSEGAN, you can use `--mode test --trainer FSEGAN --load_path PATH_TO_PRETRAINED_MODEL` to test an FSEGAN model. I found that main.py does not link to trainer_FSEGAN.py, so I just added...
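For reference, the missing link is just a name-to-module mapping for the `--trainer` flag; a minimal sketch of the idea (the trainer/module names here are my assumption for illustration, not the repository's actual layout):

```python
import argparse

def get_trainer(name):
    # Map each --trainer value to the module that implements it.
    # "trainer_FSEGAN" is the entry that was missing from main.py.
    trainers = {
        "AAS": "trainer_AAS",
        "FSEGAN": "trainer_FSEGAN",
    }
    if name not in trainers:
        raise ValueError("unknown trainer: %s" % name)
    return trainers[name]

parser = argparse.ArgumentParser()
parser.add_argument("--mode", choices=["train", "test"], default="train")
parser.add_argument("--trainer", default="AAS")
parser.add_argument("--load_path", default=None)

# Parse an example command line (passed explicitly so the sketch is
# self-contained) and resolve the trainer module to dispatch to.
args = parser.parse_args(["--mode", "test", "--trainer", "FSEGAN"])
trainer_module = get_trainer(args.trainer)
```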
Sorry for the late upload. Check the main page :)
Thanks for the clarification. I have some follow-up questions. Does `example_dataset/test.oraclewordns` imply "oracle keywords"? Do the "longest sub-sequences" used for training the automatic keyword extractor imply "oracle keywords"?
No, I still don't understand the reason.
I got the same error. Do you have any modifications or progress on this issue?
@okhat Could you provide example arguments when using `utility/triples.py`? I can see arguments `--ranking, --output, --positives, --depth, --permissive, --biased, --seed` and it would be nice to understand if you provide...
Oops, the paper already addresses Question 1: "However, we do not claim that this is a new method to quantitatively evaluate generative models yet. The constant scaling factor that depends...
I extracted the linear spectrogram magnitude and did Griffin-Lim with librosa, and it sounds good. This scheme is also used in the recent TTS paper Tacotron (https://arxiv.org/abs/1703.10135). However, my concern is that...
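For reference, the Griffin-Lim step looks roughly like this. This is a plain scipy/NumPy sketch of the same idea librosa implements (iteratively re-estimating phase from a magnitude spectrogram); the STFT parameters below are illustrative, not the settings I actually used:

```python
import numpy as np
from scipy.signal import stft, istft

def griffin_lim(mag, n_iter=32, fs=16000, nperseg=512, noverlap=384):
    """Recover a waveform from a linear magnitude spectrogram by
    alternating time-domain resynthesis and phase re-estimation."""
    rng = np.random.default_rng(0)
    # Start from random phase.
    phase = np.exp(2j * np.pi * rng.random(mag.shape))
    for _ in range(n_iter):
        # Back to the time domain with the current phase estimate...
        _, x = istft(mag * phase, fs=fs, nperseg=nperseg, noverlap=noverlap)
        # ...then keep only the phase of the re-analysed signal.
        _, _, spec = stft(x, fs=fs, nperseg=nperseg, noverlap=noverlap)
        # Guard against a one-frame mismatch from the round trip.
        spec = spec[:, :mag.shape[1]]
        if spec.shape[1] < mag.shape[1]:
            spec = np.pad(spec, ((0, 0), (0, mag.shape[1] - spec.shape[1])))
        phase = np.exp(1j * np.angle(spec))
    _, y = istft(mag * phase, fs=fs, nperseg=nperseg, noverlap=noverlap)
    return y

# Demo: analyse a 1 s, 440 Hz tone and resynthesise it from magnitude only.
t = np.arange(16000) / 16000.0
tone = np.sin(2 * np.pi * 440.0 * t)
_, _, Z = stft(tone, fs=16000, nperseg=512, noverlap=384)
wav = griffin_lim(np.abs(Z))
```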