KeyError: 'model.embed_tokens.weight' when converting .safetensors to ggml
(base) PS E:\Games\llama.cpp> python3 convert.py OPT-13B-Erebus-4bit-128g.safetensors --outtype q4_1 --outfile 4ggml.bin
Loading model file OPT-13B-Erebus-4bit-128g.safetensors
Loading vocab file tokenizer.model
Traceback (most recent call last):
File "E:\Games\llama.cpp\convert.py", line 1147, in
Model is from here: https://huggingface.co/notstoic/OPT-13B-Erebus-4bit-128g
I don't think OPT-13B is currently supported: convert.py targets the LLaMA architecture, and the KeyError shows it is looking for the LLaMA embedding tensor `model.embed_tokens.weight`, which OPT checkpoints don't contain (OPT uses a different tensor layout).
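For anyone hitting the same KeyError with a different model: a quick way to see which tensor names a checkpoint actually contains is to read just the safetensors header, without loading any weights. This is a minimal sketch assuming the standard safetensors layout (an 8-byte little-endian header length followed by a JSON header mapping tensor names to dtype/shape/offsets); the file path in the comment is the one from this issue, and the OPT key shown is illustrative of Hugging Face OPT naming.

```python
import json
import struct

def list_safetensors_tensors(path):
    """Return the tensor names stored in a .safetensors file.

    Reads only the header: 8 bytes of little-endian header length,
    then a JSON object whose keys (apart from the optional
    "__metadata__" entry) are the tensor names.
    """
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    return [name for name in header if name != "__metadata__"]

# Hypothetical usage with the checkpoint from this issue:
#   names = list_safetensors_tensors("OPT-13B-Erebus-4bit-128g.safetensors")
#   "model.embed_tokens.weight" in names
# For an OPT checkpoint this is False, since its embedding key looks like
# "model.decoder.embed_tokens.weight" instead of the LLaMA-style name
# convert.py expects.
```

If the LLaMA key is missing from the list, the model is not a LLaMA-architecture checkpoint and convert.py will raise this KeyError regardless of the output type flags.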
This issue was closed because it has been inactive for 14 days since being marked as stale.