✨[Feature] Torch.jit.trace_module support in Torch-TensorRT
Is your feature request related to a problem? Please describe.
More information here: https://pytorch.org/docs/stable/generated/torch.jit.trace_module.html#torch.jit.trace_module
Compile specific methods on a module (specified in the inputs dict), i.e. convert a constructed ScriptModule that has forward, forward_encoder, and forward_decoder methods.
Describe the solution you'd like
I'd like Torch-TensorRT to support the same workflow as TorchScript, i.e. to be able to compile models that were traced like this:
encoder_forward_input = torch.rand(1, 3, 2016, 2016).to("cuda")
decoder_forward_input = torch.rand(128, 64, 3, 3).to("cuda")
inputs = {
    "forward": encoder_forward_input,
    "forward_encoder": encoder_forward_input,
    "forward_decoder": decoder_forward_input,
}
traced_model_cuda = torch.jit.trace_module(self.netG_A, inputs)
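For context, here is a minimal self-contained sketch of the multi-method tracing pattern above, with a toy module standing in for netG_A and small CPU tensors so it runs without a GPU (the module and shapes are illustrative, not from the real model):

```python
import torch
import torch.nn as nn

class ToyEncoderDecoder(nn.Module):
    """Toy stand-in for a model with separate encoder/decoder entry points."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.dec = nn.Conv2d(8, 3, kernel_size=3, padding=1)

    def forward(self, x):
        return self.forward_decoder(self.forward_encoder(x))

    def forward_encoder(self, x):
        return self.enc(x)

    def forward_decoder(self, z):
        return self.dec(z)

model = ToyEncoderDecoder().eval()
encoder_input = torch.rand(1, 3, 32, 32)
decoder_input = torch.rand(1, 8, 32, 32)

# trace_module takes one example input per method that should be traced.
inputs = {
    "forward": encoder_input,
    "forward_encoder": encoder_input,
    "forward_decoder": decoder_input,
}
traced = torch.jit.trace_module(model, inputs)

# All three methods exist on the resulting ScriptModule.
print(traced.forward_encoder(encoder_input).shape)  # torch.Size([1, 8, 32, 32])
```

The feature request is for Torch-TensorRT to accept such a ScriptModule and compile the non-forward methods as well.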
Describe alternatives you've considered
I've tried TensorRTCompileSpec, but it fails with: Method 'forward_encoder' is not defined.
spec = {
    "forward_encoder": trtorch.TensorRTCompileSpec({
        "inputs": [trtorch.Input(
            min_shape=[1, 3, 224, 224],
            opt_shape=[1, 3, 1512, 2016],
            max_shape=[1, 3, 2016, 2016],
            dtype=torch.float,
        )],
        # For a static size: trtorch.Input(shape=[1, 3, 224, 224])
        "enabled_precisions": {torch.float},  # Run with FP32
        "workspace_size": 1 << 20,
        "truncate_long_and_double": True,
    })
}
trt_ts_module = trtorch.compile(self.netG_A, spec)
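For what it's worth, the trtorch docs pair TensorRTCompileSpec with PyTorch's to_backend API rather than trtorch.compile. A hedged sketch of that path is below (the guard and fallback are mine so the snippet runs on machines without trtorch or a GPU; whether methods other than forward can appear as keys is exactly the open question in this issue):

```python
import importlib.util
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """Minimal placeholder module; not the netG_A model from this issue."""
    def forward(self, x):
        return torch.relu(x)

scripted = torch.jit.script(TinyNet().eval())

# As documented for trtorch 0.x, TensorRTCompileSpec is consumed by the
# to_backend API, keyed by method name.
if importlib.util.find_spec("trtorch") is not None and torch.cuda.is_available():
    import trtorch
    spec = {
        "forward": trtorch.TensorRTCompileSpec({
            "inputs": [trtorch.Input([8])],
            "enabled_precisions": {torch.float},
        })
    }
    trt_module = torch._C._jit_to_backend("tensorrt", scripted, spec)
else:
    # Fall back to plain TorchScript where TensorRT is unavailable.
    trt_module = scripted
```

This path still only covers forward here; the request above is for the same mechanism to accept forward_encoder and forward_decoder keys.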
Additional context
Might be related to #798 and #621.
This issue has not seen activity for 90 days. Remove the stale label or comment, or this will be closed in 10 days.
@p1x31 thanks for the feature request! Are there any public models which require this which you're interested in?
Closing as dup of #621