llama.cpp
Why are the rope_freqs and attn_rot_embd modules skipped when serializing to GGUF?
While converting llama-2-7b-hf and vicuna-13b, I noticed that the convert script skips the rope_freqs and attn_rot_embd modules in each layer.
These modules are also listed in MODEL_TENSOR_SKIP.
What are these tensors, and why is it safe to skip them during conversion?
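For context, the filtering behavior I observed can be sketched roughly like this. This is an illustrative mock-up, not the actual convert script code: the pattern names mirror the skipped modules above, but `should_skip`, `filter_tensors`, and the example tensor names are my own inventions for demonstration.

```python
# Hypothetical sketch of skip-list filtering during GGUF conversion.
# NOT the real convert.py logic; names and matching rules are assumed.

# Patterns standing in for the entries of MODEL_TENSOR_SKIP.
SKIP_PATTERNS = ("rope_freqs", "attn_rot_embd")

def should_skip(tensor_name: str) -> bool:
    """Return True if this tensor should be omitted from the GGUF file."""
    return any(pat in tensor_name for pat in SKIP_PATTERNS)

def filter_tensors(tensor_names):
    """Keep only the tensors that will actually be serialized."""
    return [name for name in tensor_names if not should_skip(name)]

names = [
    "blk.0.attn_q.weight",
    "blk.0.attn_rot_embd",   # matches a skip pattern
    "rope_freqs.weight",     # matches a skip pattern
]
print(filter_tensors(names))  # only "blk.0.attn_q.weight" survives
```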