Sawradip Saha

Results: 9 issues by Sawradip Saha

Is it possible to fine-tune this in a limited hardware environment, like a single 3090? Any thoughts on a LoRA implementation?
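
A minimal sketch of what a LoRA setup for a 7B model on a single 24 GB GPU could look like, assuming the Hugging Face `transformers` + `peft` + `bitsandbytes` stack; the checkpoint path and hyperparameters below are illustrative assumptions, not the project's actual configuration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "./llama-hf/llama-7b"  # local HF-format checkpoint (assumed path)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    load_in_8bit=True,        # 8-bit base weights keep the 7B model within 24 GB
    device_map="auto",
    torch_dtype=torch.float16,
)

lora_config = LoraConfig(
    r=8,                      # low-rank adapter dimension (illustrative)
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # attention projections only
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter matrices are trained
```

With the adapters attached, training can proceed with the usual `Trainer` or `accelerate` loop; gradient checkpointing and a small per-device batch size help keep memory within the 24 GB budget of a 3090.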

When I try to run the quantization pipeline at 16-bit precision,

```
CUDA_VISIBLE_DEVICES=0 python llama.py ./llama-hf/llama-7b c4 --wbits 16 --true-sequential --act-order --save llama7b-16bit.pt
```

it raises an error that quantizers are...

If a user is not authenticated and tries to run `composio add `, they will now first be redirected to log in, and after logging in, the tool will be added.
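
A rough, self-contained sketch of that "log in first, then add" flow; the names below (`SESSION_FILE`, `login`, `add_tool`) are hypothetical stand-ins for illustration and are not the actual composio internals.

```python
import os

SESSION_FILE = os.path.expanduser("~/.example_cli/session")  # hypothetical session marker

def is_authenticated() -> bool:
    """Treat the presence of a session file as a logged-in user."""
    return os.path.exists(SESSION_FILE)

def login() -> None:
    """Stand-in for the browser-based login; just records a session here."""
    os.makedirs(os.path.dirname(SESSION_FILE), exist_ok=True)
    open(SESSION_FILE, "w").close()

def add_tool(name: str) -> None:
    """Stand-in for the actual tool-adding logic."""
    print(f"Added tool: {name}")

def add_command(name: str) -> None:
    """`add <tool>`: redirect to login first if needed, then add the tool."""
    if not is_authenticated():
        login()
    add_tool(name)

if __name__ == "__main__":
    add_command("example-tool")
```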

* Refactored & cleaned up