Transformers 4.46.1 compat
@HandH1998 Is there a plan to bring the llama/qwen2.5 modeling code up to date with the latest 4.46.1? Upon testing, I find the modeling code is out of sync and QQQ will only run with transformers pinned to 4.38.
It is a pity that I have no time to support this. But I think you can try to do it yourself, as it is not that complex.
@HandH1998 Understood. Second question: will the vllm qqq kernel be maintained by you or someone associated with QQQ, or is that kernel also left to the open source community?
The vllm qqq kernel is now maintained by the vllm team. The open source community can also modify it for their own use; they only need to retain the copyright statement and cite our paper.
@HandH1998 I will be doing some testing next week. If QQQ quantization quality is stable and inference is good, I will ask my team to integrate QQQ into GPTQModel via QuantizeConfig.format=QQQ. Full citation will be added, including in any files we cherry-pick over.
That is great! If you have any questions, feel free to chat with me.
@HandH1998 WIP: https://github.com/ModelCloud/GPTQModel/pull/1402 Please check the PR. I need your help creating a torch qqq kernel.
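To make the request concrete, here is a minimal sketch of what a pure-torch reference path could look like: dequantize per-group symmetric 4-bit weights, then do an ordinary fp matmul. The function names, the int8 storage of the 4-bit values, and the per-group symmetric layout are my assumptions for illustration, not QQQ's actual packing format; the real kernel would need to match the QQQ checkpoint layout.

```python
import torch


def dequant_w4(w_q: torch.Tensor, scales: torch.Tensor, group_size: int = 128) -> torch.Tensor:
    """Dequantize per-group symmetric 4-bit weights (stored as int8 in [-8, 7]).

    w_q:    (out_features, in_features) int8 holding 4-bit values
    scales: (out_features, in_features // group_size) float per-group scales
    """
    out_f, in_f = w_q.shape
    # group the input dimension, apply one scale per group, then flatten back
    w = w_q.float().view(out_f, in_f // group_size, group_size)
    return (w * scales.unsqueeze(-1)).view(out_f, in_f)


def qqq_ref_linear(x: torch.Tensor, w_q: torch.Tensor, scales: torch.Tensor,
                   group_size: int = 128) -> torch.Tensor:
    """Reference (unfused) linear: dequantize weights, then x @ W^T."""
    return x @ dequant_w4(w_q, scales, group_size).t()
```

This is obviously far slower than the fused CUDA kernel, but a reference implementation like this is useful as a correctness baseline for the torch kernel and for platforms without CUDA.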
I have added transformers 4.45.0 support, which may help. See #35.