TensorRT conversion of UNet model
I am trying to convert both the HD and DC trained Vton UNet models to TensorRT to explore possible performance improvements. I was able to successfully convert them to ONNX first, but when I tried to verify the outputs on CPU, RAM usage went as high as 50 GB.
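For context, the export itself was straightforward; here is a minimal sketch of what it looks like, where the loader, input names, and input shapes are hypothetical placeholders rather than the exact ones I used:

```python
import torch

# Hypothetical model loader and input shapes -- placeholders only,
# not the exact Vton UNet signature.
unet = load_vton_unet().eval()                  # assumed helper, not a real API
latents = torch.randn(2, 4, 128, 96)            # example latent input
timestep = torch.tensor([981])
encoder_hidden_states = torch.randn(2, 77, 768)

torch.onnx.export(
    unet,
    (latents, timestep, encoder_hidden_states),
    "vton_unet.onnx",
    input_names=["latents", "timestep", "encoder_hidden_states"],
    output_names=["noise_pred"],
    opset_version=17,
)
```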
The tensor outputs from PyTorch inference and ONNX inference seem more or less the same, with the following differences measured by np.testing.assert_allclose():
Mismatched elements: 28172 / 98304 (28.7%)
Max absolute difference: 0.001953
Max relative difference: 38.72
x: array([[[[ 3.0884e-01, 1.2903e-01, -3.4229e-01, ..., 5.1221e-01,
-2.5537e-01, 5.0244e-01],
[-3.2568e-01, 5.4248e-01, 6.1426e-01, ..., -6.7383e-02,...
y: array([[[[ 3.0884e-01, 1.2842e-01, -3.4204e-01, ..., 5.1270e-01,
-2.5610e-01, 5.0244e-01],
[-3.2544e-01, 5.4297e-01, 6.1426e-01, ..., -6.6467e-02,...
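For reference, the check producing the numbers above was along these lines, reusing the hypothetical names from the export sketch; the tolerances are a judgment call:

```python
import numpy as np
import onnxruntime as ort
import torch

# Reusing the hypothetical unet / inputs from the export sketch above.
sess = ort.InferenceSession("vton_unet.onnx", providers=["CPUExecutionProvider"])

with torch.no_grad():
    torch_out = unet(latents, timestep, encoder_hidden_states)

ort_out = sess.run(None, {
    "latents": latents.numpy(),
    "timestep": timestep.numpy(),
    "encoder_hidden_states": encoder_hidden_states.numpy(),
})[0]

# Default tolerances are strict; absolute differences around 2e-3,
# as above, are common after FP32 ONNX export.
np.testing.assert_allclose(torch_out.numpy(), ort_out, rtol=1e-3, atol=1e-3)
```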
With this high memory usage, I cannot convert it to TensorRT; the main culprit is one tensor exceeding TensorRT's tensor size limit. I would assume this tensor is related to spatial_attn_inputs.
Sharing the TensorRT conversion error logs here.
[05/29/2024-18:18:28] [E] Error[4]: [graphShapeAnalyzer.cpp::processCheck::587] Error Code 4: Internal Error ((Unnamed Layer* 123) [Matrix Multiply]_output: tensor volume exceeds (2^31)-1, dimensions are [2,8,24576,24576])
[05/29/2024-18:18:28] [E] [TRT] parsers/onnx/ModelImporter.cpp:773: While parsing node number 77 [MatMul -> "/down_blocks.0/attentions.0/transformer_blocks.0/attn1/MatMul_output_0"]:
[05/29/2024-18:18:28] [E] [TRT] parsers/onnx/ModelImporter.cpp:774: --- Begin node ---
[05/29/2024-18:18:28] [E] [TRT] parsers/onnx/ModelImporter.cpp:775: input: "/down_blocks.0/attentions.0/transformer_blocks.0/attn1/Mul_output_0"
input: "/down_blocks.0/attentions.0/transformer_blocks.0/attn1/Mul_1_output_0"
output: "/down_blocks.0/attentions.0/transformer_blocks.0/attn1/MatMul_output_0"
name: "/down_blocks.0/attentions.0/transformer_blocks.0/attn1/MatMul"
op_type: "MatMul"
[05/29/2024-18:18:28] [E] [TRT] parsers/onnx/ModelImporter.cpp:776: --- End node ---
[05/29/2024-18:18:28] [E] [TRT] parsers/onnx/ModelImporter.cpp:778: ERROR: parsers/onnx/ModelImporter.cpp:180 In function parseGraph:
[6] Invalid Node - /down_blocks.0/attentions.0/transformer_blocks.0/attn1/MatMul
[graphShapeAnalyzer.cpp::processCheck::587] Error Code 4: Internal Error ((Unnamed Layer* 123) [Matrix Multiply]_output: tensor volume exceeds (2^31)-1, dimensions are [2,8,24576,24576])
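The arithmetic in the error is easy to verify: the attention score tensor [2, 8, 24576, 24576] is roughly 4.5× over TensorRT's per-tensor element limit:

```python
import math

dims = (2, 8, 24576, 24576)           # from the error log
volume = math.prod(dims)              # 9,663,676,416 elements
limit = 2**31 - 1                     # 2,147,483,647 (TensorRT's per-tensor cap)
print(volume, limit, volume / limit)  # 9663676416 2147483647 ~4.5
```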
Do you think it would be possible to convert this to TensorRT, or are there any graph-level optimizations we can do to make it possible? PyTorch 2.0's inference does seem superior for now.
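For what it's worth, part of why PyTorch 2.0 holds up here is that torch.nn.functional.scaled_dot_product_attention never materializes the [2, 8, 24576, 24576] score matrix that the exported graph does. A minimal sketch of the difference, where the head dimension of 40 is an assumption (320 channels / 8 heads for the first down block):

```python
import torch
import torch.nn.functional as F

# Shapes follow the error log; head_dim=40 is an assumption.
q = torch.randn(2, 8, 24576, 40, device="cuda", dtype=torch.float16)
k = torch.randn(2, 8, 24576, 40, device="cuda", dtype=torch.float16)
v = torch.randn(2, 8, 24576, 40, device="cuda", dtype=torch.float16)

# Naive attention, as exported to ONNX: the intermediate score tensor
# is [2, 8, 24576, 24576] -- the one TensorRT rejects (it would be
# ~18 GB in fp16 anyway):
#   scores = (q @ k.transpose(-2, -1)) * q.shape[-1] ** -0.5
#   out = scores.softmax(dim=-1) @ v

# Fused attention: same result, but the score matrix is never
# fully allocated.
out = F.scaled_dot_product_attention(q, k, v)
```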
Hi @littleGiant-28, were you able to do it?
No, I arrived at the conclusion that it's not possible to convert this model to TensorRT, as it goes beyond the limits set by NVIDIA's TensorRT, specifically the tensor volume limit mentioned in the error.