[Converter] Add support for group_norm
Could we get support for aten::group_norm?
Original PyTorch API: https://pytorch.org/docs/stable/generated/torch.nn.GroupNorm.html
@peri044 do we need a plugin for group norm?
Yes, we need a plugin. The group norm plugin ships with the TensorRT package (as part of libnvinfer_plugin.so). The source code is available here: https://github.com/NVIDIA/TensorRT/tree/main/plugin/groupNormalizationPlugin.
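Note that the shipped plugins have to be registered with the global plugin registry before they can be looked up. A minimal sketch, assuming gLogger is whatever nvinfer1::ILogger instance you already have:

#include <NvInferPlugin.h>

// Registers all plugins from libnvinfer_plugin.so (including
// GroupNormalizationPlugin) under the given namespace ("" is the default).
initLibNvInferPlugins(&gLogger, "");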
An implementation would look like this: we implement a converter in Torch-TensorRT for group norm which calls the GN plugin in TensorRT as follows:
auto creator = getPluginRegistry()->getPluginCreator("GroupNormalizationPlugin", "1", "torch_tensorrt");
auto group_norm_plugin = creator->createPlugin(name, &fc); // fc is the collection of parameters passed to the plugin.
An example for reference: https://github.com/NVIDIA/Torch-TensorRT/blob/master/core/conversion/converters/impl/interpolate.cpp#L56
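For concreteness, here is a minimal, untested sketch of what the converter body could look like. The helper name add_group_norm_plugin is hypothetical; the field names "eps" and "num_groups" are taken from the plugin's OSS source and should be verified against the plugin version you link against, and the namespace argument to getPluginCreator depends on where the plugin was registered (the TensorRT-shipped build registers under the empty namespace):

#include <NvInfer.h>
#include <vector>

// Hypothetical helper: wraps the input ITensor in a GroupNormalizationPlugin layer.
nvinfer1::ILayer* add_group_norm_plugin(nvinfer1::INetworkDefinition* net,
                                        nvinfer1::ITensor* input,
                                        int num_groups, float eps,
                                        const char* name) {
  auto creator = getPluginRegistry()->getPluginCreator("GroupNormalizationPlugin", "1", "");

  // Pack the plugin parameters into a PluginFieldCollection.
  std::vector<nvinfer1::PluginField> fields;
  fields.emplace_back("eps", &eps, nvinfer1::PluginFieldType::kFLOAT32, 1);
  fields.emplace_back("num_groups", &num_groups, nvinfer1::PluginFieldType::kINT32, 1);
  nvinfer1::PluginFieldCollection fc;
  fc.nbFields = static_cast<int>(fields.size());
  fc.fields = fields.data();

  // Instantiate the plugin and insert it into the network.
  auto plugin = creator->createPlugin(name, &fc);
  return net->addPluginV2(&input, 1, *plugin);
}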
Any updates on this issue? It would be helpful for my use case too.
Hi all, I have a first version of the group_norm layer converter using the GN plugin in TensorRT.
I added a group_norm.cpp file that wraps the correct PyTorch signature. I also added code to ignore the batch size check, which leads to an unsupported operator being used for the check...
Now the compilation/conversion from a PyTorch model is OK, but running inference with the model (typically by modifying the examples/network.py file) leads to a cuDNN error:

.....
WARNING: [Torch-TensorRT] - Group norm layer is an experimental development features and used the group_norm plugin from TensorRT plugins library
WARNING: [Torch-TensorRT] - Create group norm plugin from TensorRT plugin registry...
WARNING: [Torch-TensorRT] - Get the creator for group norm
WARNING: [Torch-TensorRT] - Create plugin
WARNING: [Torch-TensorRT] - Add plugins to the context
Warm up ...
ERROR: [Torch-TensorRT] - 2: [pluginV2DynamicExtRunner.cpp::execute::115] Error Code 2: Internal Error (Assertion status == kSTATUS_SUCCESS failed. )
ERROR: [Torch-TensorRT] - 1: [context.cpp::setStream::121] Error Code 1: Cudnn (CUDNN_STATUS_MAPPING_ERROR)
ERROR: [Torch-TensorRT] - 1: [context.cpp::setStream::121] Error Code 1: Cudnn (CUDNN_STATUS_MAPPING_ERROR)
ERROR: [Torch-TensorRT] - 1: [context.cpp::setStream::121] Error Code 1: Cudnn (CUDNN_STATUS_MAPPING_ERROR)
...
Traceback (most recent call last):
File "network.py", line 108, in
Here is my new version of the network:
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvGelu(torch.nn.Module):
    def __init__(self):
        super(ConvGelu, self).__init__()
        self.conv = nn.Conv2d(3, 32, 3, 1)
        self.gelu = nn.GELU()

    def forward(self, x):
        x = self.conv(x)
        x = F.group_norm(x, num_groups=32)
        x = self.gelu(x)
        return x
I'm not sure if there is a problem with the cuDNN call from the GroupNorm plugin layer, which is mostly based on the cuDNN batch norm function.
Any feedback?
Cheers,
David
Hi David, I'm also trying to implement a converter. Can you tell me how you managed to ignore the batch size check?
I patched group_norm from PyTorch to remove the batch size check, but it seems that with NGC 21.12 there is no need to do that.
I see. I'll do that as well for now then. How do you manage to pass the weight and bias arguments to the GroupNormPlugin? I can't find a way to convert the Tensors to ITensors.
The GroupNormPlugin provided by TensorRT doesn't support fine-tuned weights and biases. I work around this missing feature with an addScaleNd TensorRT layer that I plug in after the call to the GroupNormalizationPlugin.
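A minimal sketch of that workaround, assuming NCHW layout and per-channel affine parameters (gn_out, gamma, beta and num_channels are hypothetical names, not the actual patch):

// Apply y = (x * scale + shift) ^ power per channel after the plugin output.
nvinfer1::Weights scale{nvinfer1::DataType::kFLOAT, gamma, num_channels}; // gamma: float[num_channels]
nvinfer1::Weights shift{nvinfer1::DataType::kFLOAT, beta, num_channels};  // beta:  float[num_channels]
nvinfer1::Weights power{nvinfer1::DataType::kFLOAT, nullptr, 0};          // empty => power of 1

// gn_out is the output ITensor of the GroupNormalizationPlugin layer;
// the channel axis is 1 for NCHW input.
auto affine = net->addScaleNd(*gn_out, nvinfer1::ScaleMode::kCHANNEL, shift, scale, power, 1);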
This issue has not seen activity for 90 days. Remove stale label or comment, or this will be closed in 10 days.
Has the development of the plugin been completed? Can you open a pull request or share the code? Thanks!
Any updates on this? Would also be useful for my model.
This issue has not seen activity for 90 days. Remove stale label or comment, or this will be closed in 10 days.