Gabriele Pergola
Ok, thank you! I'm still looking for other approaches.
Hi! Sorry to bother you; I saw that you have recently used this library. I am training the sentiment-specific embeddings, and at the end of each epoch I get a message like...
Thank you!! I have just found the comment you referred to: https://github.com/attardi/deepnl/issues/32
@Sandeep42 @hungthanhpham94 I wonder whether there is an error due to what PyTorch expects. In the function `train_data()`, it is written: ``` for i in xrange(max_sents): _s, state_word, _ =...
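One thing worth noting about the quoted snippet: `xrange` is a Python 2 built-in that was removed in Python 3, so the loop above raises a `NameError` on modern interpreters regardless of what PyTorch expects. A minimal sketch of a compatibility fix (the `max_sents` value here is just a placeholder, not from the repository):

```python
# xrange() exists only in Python 2; alias it to range() on Python 3 so the
# original loop body can stay unchanged on both interpreter versions.
try:
    xrange  # Python 2: built-in is defined
except NameError:
    xrange = range  # Python 3: range() is already lazy, like old xrange()

max_sents = 3  # placeholder value for illustration only
for i in xrange(max_sents):
    # the original loop unpacks something like: _s, state_word, _ = ...
    pass
```

Alternatively, simply replacing `xrange` with `range` throughout works if Python 2 support is not needed.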
You have to prepend "std::" to each function the errors complain about; for instance, "isnan" becomes "std::isnan". This solved it for me! g++ (Ubuntu 5.4.0-6ubuntu1~16.04.2) 5.4.0 20160609 See also: https://cgit.freedesktop.org/beignet/commit/?id=14bd8855dddcf683df8138c1062bc65b05d46f94
Oh, thank you for confirming this! I've already modified the regular expression, but unfortunately they are not only abbreviations but also "mistakes". Thank you anyway!
Ok, brilliant! So, I don't need to make any changes. I will try and let you know. Thank you for your prompt reply! :)
By the way, it worked! Thank you! :)
So I only need to divide it by the document length, great! Thank you for your prompt reply!
I'm writing in this "discussion" because I think my question is "on topic" :) If I want to use alternative word embeddings (e.g. word2vec), should I also generate...