Zhiqiang Hu
Thanks for your reply!
> For everyone's convenience, I've uploaded **llama models converted with the latest transformer git head** here:
>
> **7B** - https://huggingface.co/yahma/llama-7b-hf
> **13B** - https://huggingface.co/yahma/llama-13b-hf

Hi @gururise , is it possible...
Use `export XDG_CACHE_HOME="path/to/folder"`, or put it in your `~/.bashrc` file.
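A minimal sketch of both options (the `/data/cache` path is just an example, substitute your own folder):

```shell
# One-off: redirect the cache for the current shell session only.
export XDG_CACHE_HOME="/data/cache"

# Persistent: append the same line to ~/.bashrc so new shells pick it up.
# echo 'export XDG_CACHE_HOME="/data/cache"' >> ~/.bashrc

# Verify the variable is set.
echo "$XDG_CACHE_HOME"
```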
> Hi @younesbelkada , I have tried to install `accelerate` from source, but I got another error:
>
> ```
> NotImplementedError: Cannot copy out of meta tensor; no data!
> ```
>
> Do you...
Hi, commonsense_15k is sampled from commonsense_170k for debugging. The results reported in the paper are based on commonsense_170k.
Hi @AaronZLT , I recommend using math_10k, math_7, or math_14k for fine-tuning. To reproduce the results in the README, you need to use math_10k. math_50k is an...
Hi, according to the error message, one possible cause is that fine-tuning of the model crashed. Can you check the training loss while you are fine-tuning the model? If...
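One quick way to sanity-check a run is to scan the logged losses for NaN/inf values, which usually indicate a crashed or diverged fine-tuning run. A minimal sketch, assuming log entries in the shape the Hugging Face Trainer writes to `trainer_state.json` (the `loss_looks_healthy` helper and the sample entries below are illustrative, not from the repo):

```python
import math

def loss_looks_healthy(log_history):
    """Return False if no losses were logged or any logged loss is NaN/inf."""
    losses = [entry["loss"] for entry in log_history if "loss" in entry]
    if not losses:
        return False
    return all(math.isfinite(loss) for loss in losses)

# Sample log entries: one healthy run, one that diverged to NaN.
healthy = [{"step": 10, "loss": 1.8}, {"step": 20, "loss": 1.2}]
crashed = [{"step": 10, "loss": 1.8}, {"step": 20, "loss": float("nan")}]

print(loss_looks_healthy(healthy))  # True
print(loss_looks_healthy(crashed))  # False
```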
Hi, according to the issue https://github.com/tloen/alpaca-lora/issues/408, it seems to be a CUDA issue. I can't reproduce the error on my side, but I found a workaround by commenting...
Hi, which GPUs are you using to fine-tune llama2? I used to have this issue with V100s, but it works well with 3090s and A100s.
Hi, the packages are shown below:

```
_libgcc_mutex   0.1     main
_openmp_mutex   5.1     1_gnu
accelerate      0.21.0  pypi_0  pypi
aiofiles        23.1.0  pypi_0  pypi
aiohttp         3.8.4   pypi_0  pypi
aiosignal       1.3.1   pypi_0  pypi
altair...
```