
Changing __repr__ in torchao to show quantized Linear

Open · MekkCyber opened this pull request 1 year ago · 1 comment

What does this PR do?

When a model is quantized with TorchAO and then loaded, its Linear layers are expected to have a different representation from the standard one. This pull request (PR) modifies the representation of these Linear layers to match the format used in TorchAO's implementation: https://github.com/pytorch/ao/blob/main/torchao/quantization/quant_api.py

Before:

Linear(in_features=4096, out_features=4096, bias=False)

After:

Linear(in_features=4096, out_features=4096, weight=AffineQuantizedTensor(shape=torch.Size([4096, 4096]), block_size=(1, 128), device=cuda:0, layout_type=TensorCoreTiledLayoutType(inner_k_tiles=8), layout_tensor_dtype=torch.int32, quant_min=0, quant_max=15))
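For context, `nn.Module.__repr__` builds each module's line from its `extra_repr()` output, so the change amounts to giving quantized Linear modules an `extra_repr` that includes the weight tensor's own repr. The sketch below is only illustrative of that idea (the function and helper names are hypothetical, and the exact formatting of torchao's helper in `quant_api.py` may differ):

```python
import types
from torch import nn

def _quantized_linear_extra_repr(self: nn.Linear) -> str:
    # Show the (possibly quantized) weight tensor itself instead of the
    # stock "bias=..." summary, so the quantization layout is visible.
    return (
        f"in_features={self.in_features}, "
        f"out_features={self.out_features}, "
        f"weight={self.weight}"
    )

def show_quantized_weight_repr(model: nn.Module) -> None:
    # Hypothetical helper: rebind extra_repr on every Linear so that
    # print(model) surfaces the quantized weight's details.
    for module in model.modules():
        if isinstance(module, nn.Linear):
            module.extra_repr = types.MethodType(_quantized_linear_extra_repr, module)
```

With something like `show_quantized_weight_repr(model)` applied to a torchao-quantized model, `print(model)` would list each Linear with its `AffineQuantizedTensor` weight, roughly as in the "After" example above.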

Who can review?

cc @SunMarc

MekkCyber · Oct 16, 2024

cc @SunMarc for review! Thank you!

MekkCyber · Oct 18, 2024

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

We can merge in the meantime 🤗

ArthurZucker · Nov 5, 2024