Sebastian Veile
This only happens if you have more than one GPU. If you only use one GPU, try with gpus = 0. Then it will download the model and...
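As a rough sketch of why "0" is the right value on a single-GPU machine: flags of this kind are usually a comma-separated list of device ids, where -1 commonly means "CPU only" and 0 selects the first (and only) GPU. The parser below is my own illustration, not PreSumm's actual code:

```python
# Illustrative parser for a visible-gpus style flag (my assumption,
# not PreSumm's implementation): "-1" -> CPU, "0" -> first GPU.
def parse_visible_gpus(flag: str):
    ids = [int(x) for x in flag.split(",")]
    if ids == [-1]:
        return [], "cpu"      # no visible GPUs, fall back to CPU
    return ids, "cuda"        # run on the listed GPU ids

print(parse_visible_gpus("0"))      # ([0], 'cuda')
print(parse_visible_gpus("-1"))     # ([], 'cpu')
```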
Are you using a different BERT model? I got an error like this when I used a smaller BERT model to train on a dataset that had been preprocessed using a...
I believe your issue might be that you preprocessed the data using bert_multilingual, but you are trying to train on bert-base-uncased. It all depends on whether you...
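To see why that mismatch crashes, here is a toy illustration (not PreSumm code, and the example token id is made up): a multilingual vocabulary produces token ids far beyond the embedding table of a smaller English-only model, which surfaces as an index-out-of-range error at training time.

```python
# Toy illustration of a tokenizer/model vocabulary mismatch.
# Vocab sizes below are the commonly reported ones for these checkpoints.
multilingual_vocab_size = 119547  # bert-base-multilingual-cased
english_vocab_size = 30522        # bert-base-uncased

# Hypothetical ids produced by the multilingual tokenizer at preprocessing:
token_ids = [101, 48770, 102]

# Any id >= the English model's vocab size cannot be looked up in its
# embedding table, so training fails:
out_of_range = [t for t in token_ids if t >= english_vocab_size]
print(out_of_range)  # [48770]
```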
You should set the bert_data path as follows: "../bert_data/cnndm"
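The reason the path looks like that: as far as I recall, PreSumm treats the bert_data path as a file prefix rather than a directory, and the loader picks up shard files next to that prefix. The exact shard naming below is my assumption based on the CNN/DM files (e.g. cnndm.train.0.bert.pt):

```python
# Assumed shard naming: <prefix>.<split>.<n>.bert.pt, so the path must be
# a prefix like "../bert_data/cnndm", not a directory like "../bert_data/".
bert_data_path = "../bert_data/cnndm"

def shard_pattern(prefix, split):
    # Glob pattern the data loader would use under this assumption.
    return f"{prefix}.{split}.[0-9]*.bert.pt"

print(shard_pattern(bert_data_path, "train"))
# ../bert_data/cnndm.train.[0-9]*.bert.pt
```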
Late reply, but here is the solution: -visible_gpus 0
Check this pull request for a Japanese model - https://github.com/nlpyang/PreSumm/pull/118
The model path is where each checkpoint is saved. Currently you save a checkpoint every 2000 steps.
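As a quick sketch of what that schedule means (the total step count here is made up for illustration):

```python
# With save_checkpoint_steps = 2000, a checkpoint file lands in the
# model path at steps 2000, 4000, 6000, ...
save_checkpoint_steps = 2000
train_steps = 10000  # hypothetical total

checkpoint_steps = [s for s in range(1, train_steps + 1)
                    if s % save_checkpoint_steps == 0]
print(checkpoint_steps)  # [2000, 4000, 6000, 8000, 10000]
```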
This is not an error. It is just informing you that it is downloading the model to "/tmp/tmpq67imxb7"
What was your training command, and how many GPUs did you train on?
The author explains here https://github.com/nlpyang/PreSumm/issues/44 that you need to adjust accum_count when training on only one GPU.
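My reading of that advice: the effective batch size is roughly batch_size × accum_count × number of GPUs, so dropping from several GPUs to one means scaling accum_count up to compensate. All concrete numbers below are hypothetical:

```python
# Keep the effective batch constant when moving from 4 GPUs to 1
# by scaling gradient accumulation (numbers are illustrative only).
batch_size = 3000        # PreSumm counts batch size in tokens
accum_count = 5          # value tuned for a 4-GPU run
num_gpus = 4

effective_batch = batch_size * accum_count * num_gpus

# On a single GPU, raise accum_count to preserve the effective batch:
accum_count_single_gpu = effective_batch // (batch_size * 1)
print(accum_count_single_gpu)  # 20
```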