
Misc. bug: Deepseek R1 incompatible with grammars / structured output

noahhaon opened this issue 1 year ago • 4 comments

Name and Version

Using llama.cpp at b4516 and ollama at 7bb35

Operating systems

No response

Which llama.cpp modules do you know to be affected?

No response

Command line


Problem description & steps to reproduce

Apologies if this is misfiled as a bug; I'm not sure whether it should be an enhancement instead. But grammars are a commonly used feature of llama.cpp and the Deepseek R1 model is very popular, so I wanted to raise it and perhaps think about solutions.

Reasoning-based models are trained to emit <think> ... </think> tokens to add their own chain of thought to the context. These tokens are suppressed by design when sampling with a JSON grammar, which causes model performance to suffer significantly.

As suppressing the output of the tokens within the <think> ... </think> tags is currently being discussed in #11325, I wanted to mention that using a grammar to effectively suppress these tokens causes the model to perform badly.

Looking at https://github.com/ggerganov/llama.cpp/blob/master/common/sampling.cpp#L235, the third parameter to common_sampler_accept appears to be a boolean that enables constraining the sampler by the grammar. Since the </think> end token is declared, at least in Deepseek R1's tokens.json, perhaps it would be a generic-enough solution to disable the grammar until this token is reached, whenever it is a declared token in the loaded model. Or perhaps a user-provided parameter like start_grammar_after_token? The token is declared as follows, and a rough sketch of how it could be looked up is included after the excerpt:

        {
            "id": 128799,
            "content": "</think>",
            "single_word": false,
            "lstrip": false,
            "rstrip": false,
            "normalized": true,
            "special": false
        },
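
For illustration, here is a minimal sketch of the "is it a declared token in the loaded model" check. It assumes the common_token_to_piece helper from common/common.h and llama_n_vocab from llama.h (exact helper names vary between llama.cpp versions), and find_end_think_token is just a placeholder name; this is untested:

    #include "llama.h"
    #include "common.h"

    // Scan the vocabulary for a dedicated "</think>" token.
    // Returns -1 if the loaded model does not declare one.
    static llama_token find_end_think_token(const llama_model * model, llama_context * ctx) {
        const int n_vocab = llama_n_vocab(model);
        for (llama_token id = 0; id < n_vocab; ++id) {
            // Render the token as text, including special/control tokens.
            if (common_token_to_piece(ctx, id, /*special=*/true) == "</think>") {
                return id;
            }
        }
        return -1;
    }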

Thanks for all your great work! <3

First Bad Commit

No response

Relevant log output


noahhaon · Jan 21 '25 20:01

+1

KanishkNavale · Jan 22 '25 13:01

You should be able to do this using the grammar itself, eg:

# Modified root rule with think section first
root        ::= "<think>" think-content "</think>" ws json-value

# Think content rules (any characters except </think>)
think-content ::= ( normal-char | safe-lt )*
normal-char    ::= [^<]
safe-lt        ::= "<" ( [^/] | "/" [^t] | "/t" [^h] | "/th" [^i] | "/thi" [^n] | "/thin" [^k] | "/think" [^>] )

# Original JSON rules below (only changed 'value' -> 'json-value' for clarity)
json-value  ::= object | array | string | number | ("true" | "false" | "null") ws

object      ::= "{" ws (
            string ":" ws json-value
    ("," ws string ":" ws json-value)*
  )? "}" ws

array       ::= "[" ws (
            json-value
    ("," ws json-value)*
  )? "]" ws

string      ::= "\"" (
    [^"\\\x7F\x00-\x1F] |
    "\\" (["\\bfnrt] | "u" [0-9a-fA-F]{4}) # escapes
  )* "\"" ws

number      ::= ("-"? ([0-9] | [1-9][0-9]{0,15})) ("." [0-9]+)? ([eE][-+]?[0-9]+)? ws

ws          ::= ( " " | "\n" [ \t]{0,20} )*

Not tested so might need some corrections, but something like this should work I think.
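
For anyone who wants to try it: a grammar like this should be usable by saving it to a .gbnf file and passing it with --grammar-file to llama-cli or llama-server, or by sending it per request in the "grammar" field of the server's /completion endpoint. Again, I haven't verified this specific grammar against R1, so treat it as a starting point.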

jukofyork · Jan 24 '25 08:01

@jukofyork Thanks! I wondered if this could be solved with grammar. Unfortunately, I suspect most use cases for grammars go through json_schema_to_grammar, so the wrapping would need to be applied there in the various clients that do this.
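
As a rough illustration of that wrapping (untested; the rule and function names are just placeholders, and a real implementation should parse the grammar rather than string-match), a client could rename the schema-derived root rule and prepend a think-prefixed root similar to the one above:

    #include <string>

    // Naive string-based wrapper around a GBNF grammar produced by json_schema_to_grammar.
    // Assumes the generated grammar names its entry rule "root".
    static std::string wrap_grammar_with_think(const std::string & schema_grammar) {
        std::string wrapped = schema_grammar;
        const size_t pos = wrapped.find("root ::=");
        if (pos != std::string::npos) {
            wrapped.replace(pos, 4, "json-root"); // rename "root" -> "json-root"
        }
        return
            "root ::= \"<think>\" think-content \"</think>\" json-root\n"
            "think-content ::= ( [^<] | \"<\" [^/] )*\n" // crude: disallows "</" inside the think block
            + wrapped;
    }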

Also, the <think> ... </think> tags and their content need to be suppressed in the string output to the client to support automated usage, but I'm not sure whether that is a client problem or a llama.cpp problem. In any case, this is being discussed in #11325.

I did a quick spike on ollama to support toggling the grammar on and off based on a grammar_start_token parameter, which solves both of these issues, but I'm not completely happy with the implementation: it requires patches to both ollama and the vendored llama.cpp to support the configuration and toggling.

The patches to the vendored llama.cpp are minimal, however; they add an extra parameter to common_sampler_sample so the grammar can be toggled when calling from ollama:

-llama_token common_sampler_sample(struct common_sampler * gsmpl, struct llama_context * ctx, int idx, bool grammar_first) {
+llama_token common_sampler_sample(struct common_sampler * gsmpl, struct llama_context * ctx, int idx, bool grammar_first, bool skip_grammar) {
     gsmpl->set_logits(ctx, idx);

     auto & grmr  = gsmpl->grmr;
@@ -308,7 +308,7 @@ llama_token common_sampler_sample(struct common_sampler * gsmpl, struct llama_co

     const llama_token id = cur_p.data[cur_p.selected].id;

-    if (grammar_first) {
+    if (grammar_first || skip_grammar) {
         return id;
     }
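
For context, here is a rough sketch of the caller-side toggling that goes with this patch (simplified and hypothetical; end_think_token would come from a vocab lookup like the one sketched earlier, and smpl/ctx are the usual sampler and context handles):

    // Keep the grammar disabled while the model is "thinking", then enable it
    // once the </think> token has been sampled.
    bool think_done = (end_think_token < 0); // no </think> token declared -> grammar on from the start

    // Inside the generation loop:
    const llama_token id = common_sampler_sample(smpl, ctx, /*idx=*/-1,
                                                 /*grammar_first=*/false,
                                                 /*skip_grammar=*/!think_done);
    common_sampler_accept(smpl, id, /*accept_grammar=*/think_done);

    if (!think_done && id == end_think_token) {
        think_done = true; // from here on, constrain sampling with the JSON grammar
    }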

noahhaon · Jan 24 '25 08:01

> @jukofyork Thanks! I wondered if this could be solved with grammar. Unfortunately, I suspect most use cases for grammars go through json_schema_to_grammar, so the wrapping would need to be applied there in the various clients that do this.
>
> Also, the <think> ... </think> tags and their content need to be suppressed in the string output to the client to support automated usage, but I'm not sure whether that is a client problem or a llama.cpp problem. In any case, this is being discussed in #11325.

Yeah, the grammar only solves one specific problem, sadly. It still might be useful for enforcing English only, as I've seen R1 start using Chinese a couple of times and then (luckily) revert back to English (maybe "thinking" in Chinese helps it, but you definitely wouldn't want the final answer in Chinese!).

jukofyork · Jan 24 '25 08:01

This issue was closed because it has been inactive for 14 days since being marked as stale.

github-actions[bot] · Mar 11 '25 01:03