Unable to combine Vicuna's delta weights with the original weights
Has anyone else run into this problem?

I followed PrepareVicuna.md and downloaded llama-13b-hf with
git clone https://huggingface.co/decapoda-research/llama-13b-hf.
But the problem happened at the last step. Could this be the cause? (I ran it on Ubuntu.)

Hello! Did these files download successfully? If you cannot download them via git lfs, you can also download them manually.
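If git-lfs is available, one way to make sure the actual weight shards (and not just the small LFS pointer files) landed on disk is to pull them explicitly after cloning; this is a generic git-lfs workflow, not a step from PrepareVicuna.md:

```shell
# A plain `git clone` without git-lfs only fetches pointer files,
# so install LFS support and pull the real shards explicitly.
git lfs install
git clone https://huggingface.co/decapoda-research/llama-13b-hf
cd llama-13b-hf
git lfs pull
# The sharded checkpoint files should now be multiple GB each,
# not a few hundred bytes.
ls -lh *.bin
```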
Thanks for answering! I then downloaded the files with the following command, and the download no longer shows an error:
git-lfs clone https://huggingface.co/decapoda-research/llama-13b-hf
But the weights still fail to merge.
Could it be that the HF-converted 13B weights this project expects are not the same version as the ones in https://huggingface.co/decapoda-research/llama-13b-hf?
What errors did you receive when merging the weights?
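For context, the final step of PrepareVicuna.md merges the delta into the base weights with FastChat's apply_delta. A typical invocation looks like the sketch below; the paths are placeholders, and the exact flag spellings vary across FastChat releases (newer versions use `--base-model-path`, `--target-model-path`, `--delta-path`):

```shell
# Paths are placeholders; adjust to wherever you cloned the weights.
# This step loads both the base and the delta model, which is where
# machines with limited RAM tend to fail.
python -m fastchat.model.apply_delta \
    --base ./llama-13b-hf \
    --target ./vicuna-13b \
    --delta lmsys/vicuna-13b-delta-v1.1
```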
I have the same problem.
Windows 11 with WSL2, 64 GB of RAM.

WSL sometimes disconnects, with errors like: [Process exited with code 11 (0x0000000b)] [Process exited with code 4294967295 (0xffffffff)]
Most likely you don't have enough RAM (the conversion succeeds with 128 GB).
I only have around 64 GB of RAM; does that mean I can't run this on my machine? Is there any possible solution?
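As a rough sanity check on the RAM requirement (this is a back-of-the-envelope estimate, assuming the merge holds both the fp16 base and fp16 delta weights in memory at once; the real peak depends on the merge script):

```shell
# A 13B-parameter model in fp16 is ~13e9 * 2 bytes ≈ 26 GB;
# holding base + delta simultaneously doubles that to ≈ 52 GB,
# which is why 64 GB of RAM is borderline and 128 GB works.
need_gb=$((2 * 13 * 2))
have_gb=$(free -g | awk '/^Mem:/ {print $2}')
echo "need roughly ${need_gb} GB, have ${have_gb} GB physical RAM"
```

If physical RAM falls short of the estimate, swap (virtual memory) can make up the difference at the cost of speed.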
Thank you so much!!!!! I solved this problem by expanding the virtual memory.
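For anyone else hitting those exit codes under WSL2: expanding virtual memory there means raising the WSL2 memory/swap limits in `%UserProfile%\.wslconfig` on the Windows side, then running `wsl --shutdown` so the settings take effect. The values below are examples, not requirements:

```ini
# %UserProfile%\.wslconfig (Windows side); example values only
[wsl2]
memory=60GB
swap=100GB
# Optional: put the swap file on a drive with enough free space
swapfile=D:\\temp\\wsl-swap.vhdx
```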