Suggestions and questions on training the model
- The default batch size is 16, which is the root cause of training taking so long. When I changed the batch size to 512, convergence was much quicker.
- I have adjusted your code to train with Keras, printing the loss and accuracy for each iteration (a sketch of this kind of loop follows below). When the loss is below 0.04 and the accuracy is above 0.99, I want to generate the weight file. How do I control when that weight file gets generated?
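For illustration, a minimal sketch of that kind of per-iteration loop with the larger batch size (the model, data names, and compile settings here are assumptions, not the actual adjusted code):

# Assumed: `model` is a Keras model and (x_train, y_train) is the training set.
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
batch_size = 512  # larger batches mean fewer weight updates per epoch
for i in range(0, len(x_train), batch_size):
    batch_x = x_train[i:i + batch_size]
    batch_y = y_train[i:i + batch_size]
    # Returns [loss, acc] because the model was compiled with metrics=['accuracy'].
    loss, acc = model.train_on_batch(batch_x, batch_y)
    print("iteration %d: loss=%.4f acc=%.4f" % (i // batch_size, loss, acc))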
-
Thanks for letting me know. I have tried different batch sizes, though I'm not sure how large I went; the maximum batch size depends on the GPU's memory size. Nevertheless, it's worth knowing.
-
I think you can get the loss value, which is stored as cost in my original code:
cost = self.model.train_on_batch(batch[0], batch[1])
https://github.com/kwonmha/Improving-RNN-recommendation-model/blob/f63ba48ef45fc621d9ea613863950fce7488ef18/neural_networks/rnn_base.py#L217
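To clarify the return value: with no metrics specified at compile time, train_on_batch returns a single scalar loss, which is what cost holds above. If the model were instead compiled with metrics=['accuracy'] (an assumption about a modified compile step, not how my code compiles it), the same call would return the accuracy as well:

cost = self.model.train_on_batch(batch[0], batch[1])       # scalar loss only (original)
loss, acc = self.model.train_on_batch(batch[0], batch[1])  # [loss, accuracy] when compiled with metrics=['accuracy']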
I'm not sure how you obtain accuracy, but if you have both loss and accuracy after each iteration, it should be easy to save the weight file by adding a simple condition:
if loss < 0.04 and acc > 0.99:
    self.model.save_weights('model_weights.h5')  # hypothetical filename; substitute the repo's own save routine if preferred
Related code is here: https://github.com/kwonmha/Improving-RNN-recommendation-model/blob/f63ba48ef45fc621d9ea613863950fce7488ef18/neural_networks/rnn_base.py#L260
Modifying the code in that block should work; a sketch of what that could look like follows.
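A hedged sketch of the condition inside a training loop (the loop structure here is an assumption, not a copy of rnn_base.py; save_weights is the standard Keras call and may differ from the repo's own save routine):

iterations = 0
while iterations < max_iter:  # max_iter: assumed loop bound
    batch = next(batch_generator)  # assumed generator yielding (inputs, targets)
    loss, acc = self.model.train_on_batch(batch[0], batch[1])
    iterations += 1
    if loss < 0.04 and acc > 0.99:
        self.model.save_weights('early_stop_weights.h5')  # hypothetical filename
        break  # stop once both quality targets are met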
- Yep, it depends on GPU performance.
- After digging into the source code, I found there are two command-line parameters that force saving a weight file during the iterations: --progress and --min-iter. I set both parameters to 500, and the weight file was generated quickly.
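For reference, a guess at how such flags typically gate saving, inferred only from their names and the observed behavior, not from the repo's actual code:

# Assumed relationship: save every `progress` iterations once at least
# `min_iter` iterations have run.
progress = 500
min_iter = 500
if iterations % progress == 0 and iterations >= min_iter:
    self.model.save_weights('weights_iter_%d.h5' % iterations)  # hypothetical name pattern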