Using pre-trained embeddings
I'm trying to use seq2seq for a summarization task. In more detail, I have 60k pairs of abstracts and titles, and I'm using modified code from the NMT tutorial. I want to improve my results with word2vec embeddings. How can I use pre-trained embeddings?
Some samples from training (the first line is the predicted title and the second is the reference):
a model features for audio sounds recordings signals SEQUENCE_END
autoregressive acoustical modelling of free field cough sound SEQUENCE_END

printer classification using the evaluation biomimetic pattern recognition SEQUENCE_END
cancer classification using the extended biomimetic pattern recognition SEQUENCE_END

a of the polytonic term historical indian texts SEQUENCE_END hmms SEQUENCE_END
recognition of greek polytonic on historical degraded texts using hmms SEQUENCE_END

assessing the intended enthusiasm of singing voice using spectral spectrum SEQUENCE_END
assessing the intended enthusiasm of singing voice using energy variance SEQUENCE_END
I'm looking for the same thing! @rottik, did you find out how to do that?
Good issue, I'm running into the same problem. I also want to use pre-trained embeddings; can somebody help?
Given https://github.com/google/seq2seq/issues/111, does that mean it isn't supported yet?
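I don't know of an official option for this in the library, but a common workaround is to build an initializer matrix from the word2vec vectors over your model's vocabulary and use it as the initial value of the embedding variable instead of a random init. A minimal sketch below, assuming TF1-style code like the NMT tutorial; the file path, the `vocab` dict, and the variable name `W` are my assumptions, not part of the seq2seq API:

```python
import numpy as np
import tensorflow as tf
from gensim.models import KeyedVectors

EMBED_DIM = 300  # must match the dimensionality of your word2vec vectors

# Load the pre-trained vectors (path is hypothetical; point it at your own file).
w2v = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

def build_embedding_matrix(vocab, w2v, dim=EMBED_DIM):
    """vocab maps each token in the model's vocab file to its integer id.

    Tokens found in word2vec get their pre-trained vector; the rest
    (including SEQUENCE_END etc.) keep a small random initialization.
    """
    matrix = np.random.uniform(-0.1, 0.1, (len(vocab), dim)).astype(np.float32)
    hits = 0
    for word, idx in vocab.items():
        if word in w2v:
            matrix[idx] = w2v[word]
            hits += 1
    print("initialized %d/%d tokens from word2vec" % (hits, len(vocab)))
    return matrix

# Then, wherever the model creates its embedding variable, pass the matrix
# as the initial value (variable name and scope depend on the model code):
# embedding_matrix = build_embedding_matrix(vocab, w2v)
# embedding = tf.get_variable(
#     "W", initializer=tf.constant(embedding_matrix), trainable=True)
```

Whether to keep `trainable=True` (fine-tune the embeddings) or freeze them is a design choice; with only 60k pairs, fine-tuning from the word2vec initialization is often reported to work better than training embeddings from scratch, but I'd verify on a validation set.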