Siavash Nazari
Hi folks! My team and I are looking into adding compiler support for block floating point (BFP) in Torch-MLIR. Wondering what you think about extending Torch-MLIR's support for these...
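For readers unfamiliar with the format, here is a minimal sketch of block floating point quantization, where each block of values shares a single exponent. This is purely illustrative: `bfp_quantize` and its parameters are hypothetical names, not an existing Torch-MLIR or qtorch API.

```python
import torch

def bfp_quantize(x: torch.Tensor, mantissa_bits: int = 8, block_size: int = 16) -> torch.Tensor:
    """Illustrative BFP: every block of `block_size` values shares one
    exponent, taken from the block's largest magnitude, and each value's
    mantissa is rounded to `mantissa_bits` bits. Hypothetical helper,
    not an existing Torch-MLIR or qtorch function."""
    flat = x.flatten()
    pad = (-flat.numel()) % block_size           # zero-pad to a whole number of blocks
    flat = torch.nn.functional.pad(flat, (0, pad))
    blocks = flat.view(-1, block_size)
    max_mag = blocks.abs().amax(dim=1, keepdim=True).clamp(min=1e-38)
    exponent = torch.floor(torch.log2(max_mag))  # shared per-block exponent
    scale = torch.pow(2.0, exponent - (mantissa_bits - 1))
    quantized = torch.round(blocks / scale) * scale  # round mantissas to the shared scale
    return quantized.flatten()[: x.numel()].view_as(x)
```

The shared exponent is what makes BFP attractive for hardware: within a block, arithmetic reduces to integer mantissa operations plus a single exponent adjustment.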
I am trying to compile a portion of a PyTorch Self-Attention module down to the TOSA backend and am hitting an error legalizing the `torch.constant.int` op in the TOSA conversion pass....
- This example script fails in the torch_mlir.compile() API (a hedged repro sketch follows this list)
- It compiles fine with no block_quantize in SimpleModel.forward()
- Having block_quantize only on the input tensors also compiles fine
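Since the original script isn't shown, here is a minimal sketch of a repro under the conditions above. The body of SimpleModel, its dimensions, and the block_quantize arguments are assumptions for illustration; torch_mlir.compile() and qtorch's block_quantize are the APIs named in the post.

```python
import torch
import torch_mlir
from qtorch.quant import block_quantize  # qtorch's block floating point quantizer

class SimpleModel(torch.nn.Module):
    """Stand-in for the model in the post; the layer and sizes are guesses."""
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(16, 16)

    def forward(self, x):
        y = self.linear(x)
        # Quantizing an intermediate tensor here is what reportedly trips
        # the `torch.constant.int` legalization in the TOSA conversion pass.
        return block_quantize(y, wl=8, dim=-1, rounding="nearest")

model = SimpleModel().eval()
example_input = torch.randn(1, 16)
module = torch_mlir.compile(model, example_input, output_type="tosa")
```

Per the bullets above, applying block_quantize only to the input tensors before compilation, rather than inside forward(), reportedly avoids the error.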
Building qtorch using pip fails due to a missing package: wheel. Add it to the requirements so that `pip install .` builds qtorch.
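As a workaround until the requirements are updated, installing wheel explicitly first avoids the failure (assuming a standard `pip install .` flow):

```
pip install wheel   # build dependency missing from qtorch's requirements
pip install .
```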