Ganesh
I think the final sentence embedding is computed by averaging the word vectors. After training the model, you can do this on your own if you get access to...
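For illustration, here is a minimal sketch of that averaging step, assuming you can get hold of the trained embedding matrix and a word-to-index map (the names below are placeholders, not the repo's actual attributes):

```python
import numpy as np

def sentence_embedding(sentence, embedding_matrix, word2idx):
    """Average the word vectors of the in-vocabulary tokens of `sentence`."""
    tokens = sentence.lower().split()
    vectors = [embedding_matrix[word2idx[t]] for t in tokens if t in word2idx]
    if not vectors:
        # No known words: fall back to a zero vector of the embedding size.
        return np.zeros(embedding_matrix.shape[1])
    return np.mean(vectors, axis=0)
```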
Yes, it's a bug. Change that portion to this: `loss = self.sess.run([self.loss], feed_dict={self.input: x, self.time: time, self.target: target, self.context: context})` so that we just fetch the loss without doing...
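For context, a hedged sketch of why that change matters in TF1-style code: fetching only the loss runs just the forward pass, while fetching the train op as well applies the gradient update (the `self.train_op` name is an assumption about the model class, not its actual attribute):

```python
# Evaluation: fetch only the loss -> no weights are updated.
eval_loss = self.sess.run(
    self.loss,
    feed_dict={self.input: x, self.time: time,
               self.target: target, self.context: context})

# Training: also fetch the optimizer op, which applies the gradient step.
_, train_loss = self.sess.run(
    [self.train_op, self.loss],
    feed_dict={self.input: x, self.time: time,
               self.target: target, self.context: context})
```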
@vanzytay As I've mentioned in the README, I suspect the difference in results is due to the unreported hyper-parameters. But now that the original author has released his code,...
Having just glanced through the author's code, I see some significant differences: a) it uses the Stanford Tokenizer to tokenize the sentence, and b) it replaces the aspect term in the original sentence...
http://ir.hit.edu.cn/~dytang/paper/aspect_memnet/src.zip
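Not the author's actual preprocessing, but a rough sketch of what steps a) and b) above could look like; a naive whitespace split stands in for the Stanford Tokenizer, and `$T$` is an assumed placeholder for the replaced aspect term:

```python
def preprocess(sentence, aspect_term, placeholder="$T$"):
    # a) tokenize the sentence (the released code uses the Stanford Tokenizer)
    tokens = sentence.split()
    aspect_tokens = aspect_term.split()
    # b) replace the aspect term with a single placeholder token
    out, i = [], 0
    while i < len(tokens):
        if tokens[i:i + len(aspect_tokens)] == aspect_tokens:
            out.append(placeholder)
            i += len(aspect_tokens)
        else:
            out.append(tokens[i])
            i += 1
    return out

# preprocess("the battery life is great", "battery life")
# -> ['the', '$T$', 'is', 'great']
```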
Inspecting deeper, I find that the second axis of the output logits is of size 40k. This means the output can contain indices between 0 and 39999, which will be a...
Further inspection shows that the default values of both the char vocab size and the lang vocab size are 40k. Is that expected? https://github.com/davidjurgens/equilid/blob/master/equilid/equilid.py#L81
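To make the concern concrete, a small sketch of the sanity check I'm doing (the array here is a stand-in for the model's decoder output, not the actual equilid API):

```python
import numpy as np

char_vocab_size = 40000  # default per the linked equilid.py line
lang_vocab_size = 40000  # default per the linked equilid.py line

logits = np.random.randn(8, char_vocab_size)  # stand-in: [steps, vocab]
predicted_ids = logits.argmax(axis=-1)        # indices fall in [0, 39999]

# With a 40k-wide second axis, every predicted index can range up to 39999.
assert logits.shape[1] == char_vocab_size
print(predicted_ids.min(), predicted_ids.max())
```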