mx-maskrcnn
Why RPNLogLoss=nan and RPN1Loss=nan?
I'm using one GPU. After running bash scripts/train_alternate.sh, training starts and prints normal metrics, but at Batch[680] it shows RPNLogLoss=nan and RPN1Loss=nan. Why?
You can try lowering the learning rate.
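For reference, here is a minimal sketch of what lowering the learning rate (and adding gradient clipping, which also helps against NaN losses) can look like with an MXNet Module trainer. The values and parameter names below are illustrative assumptions, not the repo's actual defaults or config keys.

```python
# Sketch only: illustrative optimizer settings for an MXNet Module-based
# trainer such as mx-maskrcnn. Values are assumptions, not repo defaults.
import mxnet as mx

base_lr = 0.001  # e.g. reduce from a higher value if losses blow up to NaN
lr_scheduler = mx.lr_scheduler.FactorScheduler(step=50000, factor=0.1)

optimizer_params = {
    'learning_rate': base_lr,
    'momentum': 0.9,
    'wd': 0.0005,
    'clip_gradient': 5.0,        # clip large gradients that can cause NaN
    'lr_scheduler': lr_scheduler,
}

# mod is an mx.mod.Module wrapping the RPN / Mask R-CNN symbol:
# mod.fit(train_data, optimizer='sgd',
#         optimizer_params=optimizer_params, num_epoch=...)
```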
That solved the problem, thanks. But every time I train, after iterating for a while, my computer restarts. Why?
Why does the computer restart while training? @tkuanlun350