nan loss
I am using your loss module in my SiamRPN++ implementation. It was working fine for the first few epochs, but after that the loss becomes NaN. Do you know why this happens? Also, can your implementation work with a batch size higher than 1?
```
step=224,loss=0.21591535210609436,cls_loss=0.22106705605983734,reg_loss=0.04982298985123634,lr=0.0010000000474974513,time=0.13663458824157715
step=225,loss=0.23805415630340576,cls_loss=0.18039944767951965,reg_loss=0.018808992579579353,lr=0.0010000000474974513,time=0.14361572265625
step=226,loss=0.22827929258346558,cls_loss=0.22993800044059753,reg_loss=0.027202464640140533,lr=0.0010000000474974513,time=0.14561033248901367
step=227,loss=0.18403905630111694,cls_loss=0.20121710002422333,reg_loss=0.019027791917324066,lr=0.0010000000474974513,time=0.1216745376586914
step=228,loss=nan,cls_loss=nan,reg_loss=nan,lr=0.0010000000474974513,time=0.17054390907287598
step=229,loss=nan,cls_loss=nan,reg_loss=nan,lr=0.0010000000474974513,time=0.12932848930358887
step=230,loss=nan,cls_loss=nan,reg_loss=nan,lr=0.0010000000474974513,time=0.12520909309387207
step=231,loss=nan,cls_loss=nan,reg_loss=nan,lr=0.0010000000474974513,time=0.13763189315795898
```
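In case it helps narrow this down, here is a minimal sketch of how I could fail fast at the first step that produces NaN/Inf instead of letting it propagate. The names `cls_loss`, `reg_loss`, and `reg_weight` are only placeholders standing for the two loss terms logged above, not your actual API:

```python
import tensorflow as tf

def checked_total_loss(cls_loss, reg_loss, reg_weight=1.0):
    """Sum the two loss terms, raising an error at the first NaN/Inf.

    Placeholder names for illustration only; the weighting is an assumption.
    """
    cls_loss = tf.debugging.check_numerics(cls_loss, "cls_loss is NaN/Inf")
    reg_loss = tf.debugging.check_numerics(reg_loss, "reg_loss is NaN/Inf")
    return cls_loss + reg_weight * reg_loss
```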
You might have made a mistake in image_reader_cuda.py, line 78: index_d can go out of range.
```python
index_d = [
    tf.cond(tf.greater(index_t - interval, node_min),
            lambda: index_t - interval,
            lambda: index_t + interval),
    tf.cond(tf.less(index_t + interval, node_max),
            lambda: index_t + interval,
            lambda: index_t - interval),
]
```
For example, if the length of the input array is 120, index_t = 60 (random between 0 and 120), and interval = 98 (random between 30 and 100), then both conditions fall through to the other branch and index_d = [158, -38], which is out of range on both sides.
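One possible guard, just a sketch: clamp both candidate indices into the valid range instead of swapping between them. This assumes node_min/node_max bound the valid index range and that clamping is an acceptable policy when both candidates fall outside it; the function name is hypothetical:

```python
import tensorflow as tf

def neighbour_indices(index_t, interval, node_min, node_max):
    # Clamp both candidates into [node_min, node_max] so that cases like
    # index_t=60, interval=98 (candidates -38 and 158 for a 120-element
    # array) cannot produce out-of-range indices.
    lower = tf.clip_by_value(index_t - interval, node_min, node_max)
    upper = tf.clip_by_value(index_t + interval, node_min, node_max)
    return [lower, upper]
```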