feiyuvl
Will onnx-mlir support writing operator kernels as naive CUDA kernels?
Is there any plan to support cuDNN/cuBLAS calls for convolution and dot (matmul) computation?
I wrote a simple test to get the Triton code of `WeightOnlyInt8Linear`. The test code is as follows:

```
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightOnlyInt8Linear(torch.nn.Module):
    ...
```
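Since the snippet above is cut off in the preview, here is a minimal sketch of what such a test might look like. The body of `WeightOnlyInt8Linear`, the tensor shapes, and the float16 dtype are assumptions modeled on common int8 weight-only linear layers, not the original code; the Triton output can be inspected by running with `TORCH_LOGS="output_code"`.

```
import torch
import torch.nn as nn
import torch.nn.functional as F


class WeightOnlyInt8Linear(nn.Module):
    """Assumed sketch: int8 weights dequantized on the fly with per-channel scales."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        # int8 weight plus a per-output-channel scale buffer
        self.register_buffer("weight", torch.zeros(out_features, in_features, dtype=torch.int8))
        self.register_buffer("scales", torch.ones(out_features, dtype=torch.float16))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Dequantize to the activation dtype, then apply the scale after the matmul
        return F.linear(x, self.weight.to(dtype=x.dtype)) * self.scales


if __name__ == "__main__":
    torch.manual_seed(0)
    model = WeightOnlyInt8Linear(256, 512).cuda()
    x = torch.randn(8, 256, dtype=torch.float16, device="cuda")
    # Compile with Inductor; run with TORCH_LOGS="output_code" to print
    # the generated Triton kernels.
    compiled = torch.compile(model)
    out = compiled(x)
    print(out.shape)
```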
After reading the code in `compiler/lib/Dialect/mhlo/DynamicShapeRegister/Convolution.cpp`, I find there is no `ReifyReturnTypeShapes` function for the convolution op. Does the convolution op not support symbolic relation computation between input and output shapes, or we...
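For reference, the input/output shape relation that such a `ReifyReturnTypeShapes` hook would have to encode for convolution is the standard one below. This is a plain-Python sketch of the formula only, not ByteIR code; the function and parameter names are illustrative.

```
def conv_output_dim(in_size: int, kernel_size: int, stride: int = 1,
                    pad_lo: int = 0, pad_hi: int = 0, dilation: int = 1) -> int:
    """Spatial output size of a convolution along one dimension."""
    effective_kernel = dilation * (kernel_size - 1) + 1
    return (in_size + pad_lo + pad_hi - effective_kernel) // stride + 1


# Example: 224x224 input, 7x7 kernel, stride 2, padding 3 -> 112 per spatial dim
assert conv_output_dim(224, 7, stride=2, pad_lo=3, pad_hi=3) == 112
```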