Hoa Vu
@Dipendra77 Your code works just fine for me, except that I use 1 channel and a sample width of 2.
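For reference, a minimal sketch of those audio parameters using Python's standard `wave` module (the file name and frame rate are illustrative, not from the original code): mono audio with a 2-byte sample width, i.e. 16-bit PCM.

```python
# Write a short mono, 16-bit PCM WAV file with the parameters
# mentioned above: 1 channel, sample width 2 bytes.
import wave

with wave.open("out.wav", "wb") as wf:
    wf.setnchannels(1)      # mono: 1 channel
    wf.setsampwidth(2)      # 2 bytes per sample (16-bit PCM)
    wf.setframerate(16000)  # frame rate is an assumption here
    wf.writeframes(b"\x00\x00" * 16000)  # one second of silence
```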
As stated in the link referenced by TimZaman, adding -DNDEBUG to the compiler flags solved the issue. For more details, please follow the link.
I updated my previous comment with the error logs from when I built with CUDA 7.0.
Creating the model inside the process's run function will solve this issue.
We're seeing this on GPU as well: memory keeps increasing after each inference.