PreSumm
Advice on prediction and replacing the BERT model
Hi, thanks for your good work. I managed to run preprocessing and training on Colab. Now I want to do two things:
- predict without gold (generate a summary for text that is not in the training, validation, or test sets)
- replace the BERT model with one for a different language
Can you provide some general advice on these two tasks? It appears that translation, validation, and testing all run against a gold set. Thanks again.
Check this pull request for a Japanese model - https://github.com/nlpyang/PreSumm/pull/118
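Beyond that PR, swapping the encoder mostly comes down to pointing the pytorch_transformers `from_pretrained` calls at a different checkpoint. Here is a minimal sketch, assuming the `BertModel`/`BertTokenizer` usage in `models/model_builder.py` and `src/prepro/data_builder.py`, with `bert-base-multilingual-cased` as an illustrative checkpoint choice:

```python
# Sketch only: PreSumm builds its encoder with pytorch_transformers'
# from_pretrained; the checkpoint name is the main thing to change.
from pytorch_transformers import BertModel, BertTokenizer

# Illustrative choice: any BERT checkpoint with a compatible architecture,
# e.g. a multilingual or language-specific model.
MODEL_NAME = 'bert-base-multilingual-cased'

model = BertModel.from_pretrained(MODEL_NAME)
tokenizer = BertTokenizer.from_pretrained(MODEL_NAME)

# The matching tokenizer must also be used at preprocessing time, otherwise
# the token ids in the training shards will not match the new vocabulary.
print(tokenizer.encode('Texto de ejemplo en otro idioma.'))
```

Keep in mind that the special-token handling in the preprocessing scripts ([CLS]/[SEP] insertion) also has to agree with the new tokenizer's vocabulary.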
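For gold-free prediction, newer checkouts of the repo expose a raw-text mode (`-mode test_text` with `-text_src`); check `python train.py -h` in your version to confirm these flags exist. A sketch of driving it from Python, with a hypothetical checkpoint path:

```python
# Sketch only: run the summarizer on raw text with no gold targets.
# Assumes train.py in your checkout supports -mode test_text / -text_src;
# the checkpoint and result paths below are hypothetical.
import subprocess

docs = [
    'First document to summarize, one document per line.',
    'Second document to summarize.',
]

# test_text reads one source document per line from a plain-text file.
with open('raw_src.txt', 'w', encoding='utf-8') as f:
    f.write('\n'.join(docs) + '\n')

subprocess.run([
    'python', 'src/train.py',
    '-task', 'abs',
    '-mode', 'test_text',
    '-text_src', 'raw_src.txt',
    '-test_from', 'models/model_step_148000.pt',  # hypothetical checkpoint
    '-result_path', 'results/new_docs',
    '-visible_gpus', '0',
], check=True)
```

If your checkout predates `test_text`, one workaround is to run the normal test pipeline with placeholder gold summaries and simply ignore the reported ROUGE scores.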