
llama_get_logits_ith: invalid logits id -1, reason: no logits

Open ba0gu0 opened this issue 1 year ago • 3 comments

llama_get_logits_ith: invalid logits id -1 error when embedding=True

Expected Behavior

When using llama-cpp-python with a Qwen2 model, chat completion should work normally regardless of whether the embedding parameter is enabled.

Current Behavior

The model works fine when embedding=False, but throws an error llama_get_logits_ith: invalid logits id -1, reason: no logits when embedding=True.

Working Code Example

from llama_cpp import Llama

# This works fine
llm = Llama(
    model_path="./models/qwen2-0_5b-instruct-q8_0.gguf", 
    chat_format="chatml", 
    verbose=False
)

messages = [
    {"role": "system", "content": "Summarize this text for me: You are an assistant who creates short stories."},
    {"role": "user", "content": "Long ago, in a peaceful village, a little girl named Leah loved watching the stars at night..."}
]

response = llm.create_chat_completion(messages=messages)

'''
{'id': 'chatcmpl-17ca45ef-d13b-425a-96be-7631e3b9a7f4',
 'object': 'chat.completion',
 'created': 1730125699,
 'model': './models/qwen2-0_5b-instruct-q8_0.gguf',
 'choices': [{'index': 0,
   'message': {'role': 'assistant',
    'content': 'This text is a short story about a little girl named Leah who loves watching the stars at night. One day, she noticed a particularly bright star that seemed to wink at her, and she made a wish to become friends with the star. This star spirit helped Leah take her on a magical adventure among the stars, and she visited countless constellations and stardust rivers.'},
   'logprobs': None,
   'finish_reason': 'stop'}],
 'usage': {'prompt_tokens': 145, 'completion_tokens': 76, 'total_tokens': 221}
}
'''

# Works successfully

Error Reproduction

from llama_cpp import Llama

# This causes an error
llm = Llama(
    model_path="./models/qwen2-0_5b-instruct-q8_0.gguf", 
    chat_format="chatml", 
    verbose=False, 
    embedding=True  # Only difference is enabling embedding
)

messages = [
    {"role": "system", "content": "Summarize this text for me: You are an assistant who creates short stories."},
    {"role": "user", "content": "Long ago, in a peaceful village, a little girl named Leah loved watching the stars at night..."}
]

llm.create_chat_completion(messages=messages)
# Error: llama_get_logits_ith: invalid logits id -1, reason: no logits

embeddings = llm.create_embedding("Hello, world!")
# This works normally

'''
{'object': 'list',
 'data': [{'object': 'embedding',
   'embedding': [[0.9160200953483582,
     5.090432167053223,
     1.487088680267334, ......
'''

Environment Info

  • Python version: 3.10
  • llama-cpp-python version: latest
  • Model: Qwen2-0.5B-Chat (GGUF format)

Steps to Reproduce

  1. Install llama-cpp-python
  2. Download Qwen2-0.5B-Chat GGUF model
  3. Run the error reproduction code above with embedding=True

Additional Context

The error only occurs when:

  1. The embedding parameter is set to True
  2. Using the chat completion functionality

The model works fine for chat completion when embedding=False, suggesting this might be related to how the embedding functionality is implemented for this specific model.
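Until the underlying cause is fixed, one possible workaround (a minimal sketch based only on the calls shown above; it costs extra memory because the model is loaded twice) is to keep separate instances for chat and for embeddings:

from llama_cpp import Llama

MODEL_PATH = "./models/qwen2-0_5b-instruct-q8_0.gguf"

# Instance used only for chat completions: leave embedding at its default (False)
chat_llm = Llama(model_path=MODEL_PATH, chat_format="chatml", verbose=False)

# Separate instance used only for embeddings
embed_llm = Llama(model_path=MODEL_PATH, embedding=True, verbose=False)

response = chat_llm.create_chat_completion(
    messages=[{"role": "user", "content": "Tell me a short story."}]
)
vector = embed_llm.create_embedding("Hello, world!")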

ba0gu0 avatar Oct 28 '24 14:10 ba0gu0

Confirming the same issue (llama_get_logits_ith: invalid logits id -1, reason: no logits) when using https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B-GGUF. Setting embedding=False works; my default configuration uses True.

Environment Info

  • Python version: 3.9.16
  • llama-cpp-python version: 0.3.1
  • Model: Hermes-3-Llama-3.1-8B (GGUF format)

jayendren avatar Nov 03 '24 10:11 jayendren

I was getting this same error with a Qwen2.5-14b finetune and spent a few hours searching for the answer. It became obvious to me that this was a regression in the llama.cpp codebase, and it may have been addressed upstream recently. I'm not sure whether llama-cpp-python has picked up the upstream patches yet, but this may be fixed in a future release.

https://github.com/ggerganov/llama.cpp/issues/8076#issuecomment-2185147824

For now, I've resorted to using a dedicated embedding model with SentenceTransformer, but ideally I'd love to use the same model for both embeddings and generations to save on memory.
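For reference, a minimal sketch of that fallback (the model name below is just an example; substitute whatever embedding model you use):

from sentence_transformers import SentenceTransformer

# Dedicated embedding model, separate from the GGUF model used for generation
embedder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
vectors = embedder.encode(["Hello, world!", "Another sentence."])
print(vectors.shape)  # (2, embedding_dim)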

aimerib avatar Dec 03 '24 18:12 aimerib

I also had this issue. I resolved it with a bit of a hack: I have llama.cpp produce logits regardless of whether embeddings is set to true or false. I'm not sure what the intention is behind turning logit production off when embedding is enabled in the context, especially since the context is permanent for a model, so changing the setting would require creating a new model.

The two changes I made were:

  1. In llama-context.cpp::decode, change the t_logits* pointer so that it gets the logits regardless of whether the embeddings parameter is set to true:

auto * t_logits = res->get_logits();

  2. In llama-context::output_reserve, set

bool has_logits = true;

rather than the opposite of the embeddings setting.

For completeness, you should also update llama-context.cpp::encode to set the t_logits* pointer accordingly.

Let me know if you have any thoughts on this approach.
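For anyone trying this, a quick smoke test (assuming llama-cpp-python has been rebuilt against the patched llama.cpp; the model path is just the one from the original report) would be to check that a single embedding-enabled instance now serves both calls:

from llama_cpp import Llama

llm = Llama(
    model_path="./models/qwen2-0_5b-instruct-q8_0.gguf",
    chat_format="chatml",
    embedding=True,
    verbose=False,
)

# Previously this raised "llama_get_logits_ith: invalid logits id -1, reason: no logits"
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello."}]
)
print(response["choices"][0]["message"]["content"])

# Embeddings should still work from the same instance
llm.create_embedding("Hello, world!")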

RamMidsummer avatar Jul 07 '25 13:07 RamMidsummer