
KeyError: 'model.embed_tokens.weight' when converting .safetensors to ggml

Open Jake36921 opened this issue 2 years ago • 1 comment

```
(base) PS E:\Games\llama.cpp> python3 convert.py OPT-13B-Erebus-4bit-128g.safetensors --outtype q4_1 --outfile 4ggml.bin
Loading model file OPT-13B-Erebus-4bit-128g.safetensors
Loading vocab file tokenizer.model
Traceback (most recent call last):
  File "E:\Games\llama.cpp\convert.py", line 1147, in <module>
    main()
  File "E:\Games\llama.cpp\convert.py", line 1137, in main
    model = do_necessary_conversions(model)
  File "E:\Games\llama.cpp\convert.py", line 983, in do_necessary_conversions
    model = convert_transformers_to_orig(model)
  File "E:\Games\llama.cpp\convert.py", line 588, in convert_transformers_to_orig
    out["tok_embeddings.weight"] = model["model.embed_tokens.weight"]
KeyError: 'model.embed_tokens.weight'
(base) PS E:\Games\llama.cpp>
```

Model is from here: https://huggingface.co/notstoic/OPT-13B-Erebus-4bit-128g

Jake36921 avatar Apr 15 '23 15:04 Jake36921

I don't think OPT 13B is currently supported.

jon-chuang avatar Apr 15 '23 16:04 jon-chuang

This issue was closed because it has been inactive for 14 days since being marked as stale.

github-actions[bot] avatar Apr 09 '24 01:04 github-actions[bot]