Comments by Geert Heyman (2 results)
You can have a look at the preprocessing script for the WMT16 data [here](https://github.com/google/seq2seq/blob/master/bin/data/wmt16_en_de.sh). The script creates character, word, and BPE vocabularies; a sketch of how such vocabularies can be built follows below.
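For a rough picture of what that script produces, here is a minimal shell sketch of building the three vocabulary types. It is not the actual script (which also handles downloading, tokenization, and both language sides); the file names (`train.tok.en`, `vocab.*`) and the merge count are placeholders:

```bash
# Word vocabulary: unique whitespace-separated tokens, most frequent first.
tr -s ' ' '\n' < train.tok.en | sort | uniq -c | sort -rn \
  | awk '{print $2}' > vocab.word.en

# Character vocabulary: one character per line, duplicates removed.
fold -w1 < train.tok.en | grep -v '^ *$' | sort -u > vocab.char.en

# BPE vocabulary: learn 32k merge operations with subword-nmt,
# apply them to the corpus, and collect the resulting subword units.
subword-nmt learn-bpe -s 32000 < train.tok.en > bpe.codes.32000
subword-nmt apply-bpe -c bpe.codes.32000 < train.tok.en \
  | tr ' ' '\n' | sort -u > vocab.bpe.32000.en
```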
My guess is that you are running the nmt_medium/nmt_large models with the same _output_dir_ you used for training the nmt_small model. Therefore, seq2seq is trying to initialize your nmt_medium/nmt_large model from the nmt_small checkpoint it finds there...
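If that is the case, pointing training at a fresh directory should avoid restoring the incompatible checkpoint. A minimal sketch, assuming the standard `bin.train` invocation from the seq2seq docs; the config paths and output directory here are placeholders:

```bash
# Train nmt_medium into its own directory instead of reusing the
# nmt_small one, so no old checkpoint with mismatched variable
# shapes is restored at startup.
python -m bin.train \
  --config_paths="./example_configs/nmt_medium.yml,./example_configs/train_seq2seq.yml" \
  --output_dir="/tmp/nmt_medium"
```

Alternatively, deleting the old checkpoint files from the existing _output_dir_ before retraining should have the same effect.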