Xin Lin
 Has anyone come across the same problem? For a batch size of 1, the conversion is fine. Does that mean nanodet does not support dynamic-batch inference with TensorRT?
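As context for the question above: a TensorRT engine only accepts a variable batch size if the ONNX model was exported with the batch axis marked dynamic and an optimization profile (min/opt/max shapes) is supplied at build time, e.g. via `trtexec`'s `--minShapes`/`--optShapes`/`--maxShapes` flags. A minimal sketch of assembling such a build command — the input tensor name `data`, the `3x320x320` shape, and the file names are assumptions for illustration, not nanodet's actual export settings:

```python
import shlex

def build_trtexec_cmd(onnx_path, input_name, min_bs, opt_bs, max_bs,
                      shape="3x320x320"):
    """Assemble a trtexec command requesting a dynamic-batch engine.

    The min/opt/max shapes define the optimization profile; without it,
    TensorRT builds a fixed-shape engine and batch sizes other than the
    export-time one fail at conversion or inference.
    """
    return [
        "trtexec",
        f"--onnx={onnx_path}",
        # Batch dimension ranges over [min_bs, max_bs], tuned for opt_bs.
        f"--minShapes={input_name}:{min_bs}x{shape}",
        f"--optShapes={input_name}:{opt_bs}x{shape}",
        f"--maxShapes={input_name}:{max_bs}x{shape}",
        "--saveEngine=nanodet_dynamic.engine",
    ]

cmd = build_trtexec_cmd("nanodet.onnx", "data", 1, 8, 16)
print(shlex.join(cmd))
```

If the ONNX export hard-coded batch size 1 (no dynamic axis), the profile flags alone will not help; the model must be re-exported with the batch dimension dynamic first.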
I trained my model based on the instructions and got really great results, but I found that the model still uses about 1.8 GB of GPU memory to make predictions....
I am a bit confused about the final result of running main.py for inference. Is it just a set of feature embeddings for the input images?
I am a bit confused by the word 'offline'. Does that mean I do not have to log into the NGC registry when using this replicator to run the pre-trained...