Kishore Chandra Sahoo
@jaybdub Hello John, thanks for your response. Here is the code to load the pretrained model:

```
device = torch.device('cuda' if cuda else 'cpu')
model = Model(opt)
model = torch.nn.DataParallel(model).to(device)
...
```
Also, how can we provide a dynamic input size while converting the model?
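Not a definitive answer, but one common route is exporting to ONNX with `torch.onnx.export` and `dynamic_axes`. The sketch below assumes a single NCHW image input and a single output tensor; the input/output names, shapes, and file name are placeholders, and `model`/`device` come from the snippet above:

```
import torch

# Placeholder example input; batch, height and width are marked dynamic below.
dummy_input = torch.randn(1, 3, 768, 768, device=device)

torch.onnx.export(
    model.module if hasattr(model, "module") else model,  # unwrap DataParallel before export
    dummy_input,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={
        "input": {0: "batch", 2: "height", 3: "width"},
        "output": {0: "batch"},
    },
    opset_version=11,
)
```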
How do we convert this CRAFT torch model (craft_mlt_25k.pth) to TensorRT format with a dynamic input size? Any guides/steps would be helpful.
How do we convert this CRAFT torch model (craft_mlt_25k.pth) to TensorRT or ONNX format?
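In case it helps: once an ONNX file has been exported (as sketched above), a dynamic-shape TensorRT engine can be built from it with an optimization profile. This is only a sketch, not a verified recipe for CRAFT; the input name "input", the shape ranges, and the file names are assumptions, and the exact builder API differs slightly between TensorRT versions:

```
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)

# Parse the exported ONNX graph.
with open("craft.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("Failed to parse ONNX file")

config = builder.create_builder_config()
config.max_workspace_size = 1 << 30  # 1 GiB

# Optimization profile describing min/opt/max input shapes (values assumed).
profile = builder.create_optimization_profile()
profile.set_shape("input", (1, 3, 256, 256), (1, 3, 768, 768), (1, 3, 1280, 1280))
config.add_optimization_profile(profile)

engine = builder.build_engine(network, config)
with open("craft.engine", "wb") as f:
    f.write(engine.serialize())
```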
How can we convert the TRBA .pth model to ONNX or TensorRT format?
I have the same query: how do we feed an image as a NumPy array to the model for prediction? And if we use a DataLoader, can we pass multiple images...
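Not sure about the exact preprocessing in the repo, but as a rough sketch (the normalization values, image size, and the assumption that the model takes a single image tensor are all placeholders), a NumPy image can be turned into the NCHW tensor the model expects, and several images of the same size can be stacked into one batch:

```
import numpy as np
import torch

def to_tensor(image: np.ndarray) -> torch.Tensor:
    """Convert an HxWxC uint8 image to a normalized 1xCxHxW float tensor."""
    x = torch.from_numpy(image).float() / 255.0   # HWC in [0, 1]
    x = x.permute(2, 0, 1)                        # CHW
    x = (x - 0.5) / 0.5                           # placeholder normalization
    return x.unsqueeze(0)                         # 1xCxHxW

# Single image (stand-in array; use a real decoded image in practice).
image = np.zeros((768, 768, 3), dtype=np.uint8)
with torch.no_grad():
    pred = model(to_tensor(image).to(device))

# Multiple images of the same size can be concatenated into one batch.
batch = torch.cat([to_tensor(img) for img in [image, image]], dim=0).to(device)
with torch.no_grad():
    preds = model(batch)
```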
I am getting the error below:

```
model.load_state_dict(torch.load(trained_model, map_location=device))
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1604, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for Model: Missing...
```
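This "missing keys" error usually means the checkpoint was saved from a DataParallel-wrapped model (keys prefixed with "module.") but is being loaded into a plain model, or the other way round. A sketch of both workarounds, not specific to this repo:

```
import torch
from collections import OrderedDict

state_dict = torch.load(trained_model, map_location=device)

# Option 1: wrap the model in DataParallel first so the "module." prefix matches.
model = torch.nn.DataParallel(Model(opt)).to(device)
model.load_state_dict(state_dict)

# Option 2: strip the "module." prefix and load into the bare model.
cleaned = OrderedDict((k.replace("module.", "", 1), v) for k, v in state_dict.items())
bare_model = Model(opt).to(device)
bare_model.load_state_dict(cleaned)
```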
I am trying to convert the model to ONNX or TRT and am facing issues during the conversion. Has anyone tried this and succeeded?
I was looking into the same thing. I would like to convert the model to TensorRT, but I am stuck on the `input`. How do I provide the input for the conversion?
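If you are converting with torch2trt (which this thread appears to be about), the "input" is just an example tensor whose shape and dtype match what the model will see at inference time; the converter traces the model with it. A sketch, with the shape and the single-tensor output as assumptions:

```
import torch
from torch2trt import torch2trt

model = model.eval().cuda()

# Example input used for tracing; shape assumed here, adjust to your model.
x = torch.ones((1, 3, 768, 768)).cuda()

model_trt = torch2trt(model, [x], fp16_mode=True)

# Sanity check (assumes the model returns a single tensor).
y = model(x)
y_trt = model_trt(x)
print(torch.max(torch.abs(y - y_trt)))
```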
I am also facing the same issue with a different model (https://github.com/clovaai/deep-text-recognition-benchmark) while converting to TRT.