
Using pre-trained embeddings

rottik opened this issue 8 years ago · 3 comments

I'm trying to use seq2seq for a summarization task. In more detail, I have 60k pairs of abstracts and titles, and I'm using modified code from the NMT tutorial. I want to improve my results with word2vec embeddings. How can I use a pre-trained embedding?

Some samples from training (the first line is the predicted title and the second is the reference):

a model features for audio sounds recordings signals SEQUENCE_END
autoregressive acoustical modelling of free field cough sound SEQUENCE_END

printer classification using the evaluation biomimetic pattern recognition SEQUENCE_END
cancer classification using the extended biomimetic pattern recognition SEQUENCE_END

a of the polytonic term historical indian texts SEQUENCE_END hmms SEQUENCE_END
recognition of greek polytonic on historical degraded texts using hmms SEQUENCE_END

assessing the intended enthusiasm of singing voice using spectral spectrum SEQUENCE_END
assessing the intended enthusiasm of singing voice using energy variance SEQUENCE_END
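As for the pre-trained embedding question above: this is not an official answer from the seq2seq maintainers, but a common workaround with NMT-style TensorFlow code is to build an embedding matrix from the word2vec vectors and use it to initialize the model's embedding variable. A minimal sketch, assuming a gensim-readable word2vec file and a vocabulary list that matches the model's vocabulary (the helper names, file paths, and dimensions below are illustrative):

```python
# Sketch only: build an embedding matrix from pre-trained word2vec vectors.
import numpy as np
from gensim.models import KeyedVectors

def build_embedding_matrix(vocab, w2v_path, embed_dim):
    """Return a [vocab_size, embed_dim] float32 matrix; OOV words get small random vectors."""
    w2v = KeyedVectors.load_word2vec_format(w2v_path, binary=True)
    matrix = np.random.uniform(-0.1, 0.1, (len(vocab), embed_dim)).astype(np.float32)
    for i, word in enumerate(vocab):
        if word in w2v:
            matrix[i] = w2v[word]
    return matrix

# Hypothetical wiring into the encoder/decoder embedding (TF 1.x style, as in the NMT tutorial):
# import tensorflow as tf
# vocab = load_vocab("vocab.txt")           # illustrative helper, not part of the repo
# pretrained = build_embedding_matrix(vocab, "word2vec.bin", 300)
# embedding = tf.get_variable(
#     "embedding",
#     shape=pretrained.shape,
#     initializer=tf.constant_initializer(pretrained),
#     trainable=True)                        # False keeps the pre-trained vectors frozen
```

Whether to set `trainable` to `False` (freeze the vectors) or `True` (fine-tune them) is a design choice; fine-tuning is often preferred when, as here, the training set is reasonably large.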

rottik · May 12 '17 13:05

I'm looking for the same thing! @rottik, did you find out how to do that?

micheletufano · Dec 13 '17 23:12

Good issue. I'm running into the same problem; I also want to use pre-trained embeddings. Can somebody help?

stevenkwong · Mar 23 '18 07:03

https://github.com/google/seq2seq/issues/111 Does that mean this isn't supported yet?

stevenkwong · Mar 23 '18 07:03