Kaixin Li
## Motivation `sh` defaults to `dash` on at least some Linux distributions, and `dash` does not support syntax like `${@:3}` used in tools/dist_train.sh. Switching to `bash` makes it work...
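A minimal repro of the issue (assuming a hypothetical `demo.sh` standing in for tools/dist_train.sh): `${@:3}` slices the positional parameters from the third onward, which is a bashism that `dash` rejects with a "Bad substitution" error.

```shell
# demo.sh stands in for tools/dist_train.sh; it forwards args 3..N.
cat > demo.sh <<'EOF'
echo "${@:3}"
EOF

# Running it via dash (the default `sh` on e.g. Debian/Ubuntu) fails:
#   sh demo.sh a b c d   ->  demo.sh: 1: Bad substitution
# Running it via bash works:
bash demo.sh a b c d     # prints: c d
```

Invoking the script as `bash tools/dist_train.sh ...` (or changing its shebang to `#!/usr/bin/env bash`) avoids the problem.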
I am trying to finetune a Llama model. The finetuning process runs smoothly, but the resulting `adapter_model.bin` is only 443 bytes. Does anybody know why this is happening?
Thank you for creating this great repo! Following your work, we took a step further and built a dataset of around 100k instruction-tuning examples for **code editing**. Please feel free...
Hi, I am trying to compare models using ScreenSpot. What were the prompts you used for QwenVL, Fuyu, and CogAgent?