
`tool-call`: fix Qwen 2.5 Coder support, add micro benchmarks, support trigger patterns for lazy grammars

Open · ochafik opened this issue on Feb 22, 2025

TL;DR: fixes tool calling of Qwen 2.5 Coder 0.5B/1.5B/3B/7B/32B... at any temperature

Instructions to build this branch:
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git remote add ochafik https://github.com/ochafik/llama.cpp
git fetch ochafik
git checkout ochafik/tool-bench-prod
cmake -B build -DLLAMA_CURL=1
cmake --build build -t llama-server --parallel --config Release
alias llama-server=./build/bin/llama-server
llama-server --jinja -fa -c 0 -hf unsloth/Qwen2.5-Coder-7B-Instruct-128K-GGUF
  • Added support for regex grammar triggers, and now respect the flag saying a trigger should only match at the start of the output (it was already declared but not implemented; honouring it avoids spurious triggering when triggers are defined as wide catch-alls).
    • In llama.h, deprecated llama_sampler_init_grammar_lazy (which used to take tokens or words) in favour of llama_sampler_init_grammar_lazy_patterns (which takes tokens or full-string regex patterns with a group that marks where the grammar is triggered from); see the sketch after this list.
  • Dramatically improved the tool call success rate of Qwen 2.5 Coder (Hermes 2 format) with more triggers that match what the model tends to output (esp. at higher temperatures) and looser, regex-based triggers.
    • 32B model can power Cline decently w/ this PR: https://github.com/cline/cline/pull/1946
  • Added scripts/tool_bench.py to evaluate tool call compliance probability of llama-server & ollama on different models, at different temperatures
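
A minimal sketch of the new API in use (hypothetical call site; the signature is assumed to mirror the deprecated llama_sampler_init_grammar_lazy, swapping literal trigger words for full-string regex patterns whose first capture group marks where the grammar starts matching):

// hypothetical usage sketch, not code from this PR;
// assumes vocab and grammar_str are already set up
const char * trigger_patterns[] = {
    // group 1 marks the trigger position; everything before it is ignored by the grammar
    "[\\s\\S]*?(<tool_call>)[\\s\\S]*",
};
struct llama_sampler * smpl = llama_sampler_init_grammar_lazy_patterns(
    vocab, grammar_str, "root",
    trigger_patterns, /* num_trigger_patterns */ 1,
    /* trigger_tokens */ NULL, /* num_trigger_tokens */ 0);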

The following heatmap shows the compliance ratio on two super basic tool call tests (the hello world & weather tests from examples/server/tests/unit/test_tool_call.py, now shared with the bench tool). Three pairs of columns: llama-server with this PR, baseline llama-server (master), and ollama.

[heatmap image]

[heatmap image: qwenc1.5b]

ARGS=( --n 30 --llama-baseline="$(which llama-server)" --temp -1 --temp 0 --temp 0.5 --temp 0.75 --temp 1 --temp 1.5 --temp 2 --temp 5 )

./scripts/tool_bench.py run ${ARGS[@]} --model "Qwen 2.5 Coder 7B Q4_K_M"             --output ../qwenc7b.jsonl   --hf unsloth/Qwen2.5-Coder-7B-Instruct-128K-GGUF:Q4_K_M   --ollama qwen2.5-coder:7b-instruct-q4_K_M
./scripts/tool_bench.py run ${ARGS[@]} --model "Qwen 2.5 Coder 1.5B Q4_K_M"           --output ../qwenc1.5b.jsonl --hf unsloth/Qwen2.5-Coder-1.5B-Instruct-128K-GGUF:Q4_K_M --ollama qwen2.5-coder:1.5b-instruct-q4_K_M
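
The resulting JSONL files can then be rendered into heatmaps like the one above with the script's plot subcommand (interface assumed):

./scripts/tool_bench.py plot ../qwenc7b.jsonl ../qwenc1.5b.jsonl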

See the gist with results for many more models.

Notes about results:

  • the failures of llama-server at temp = 2 are model humour / stylistic choices ("Sure! You can use the following Python code..." instead of a tool call)
  • ollama seems to only recognize the tool call format of the template, but models like Qwen 2.5 Coder 7B are quite... creative in their tool call outputs, esp. at higher temperatures.
  • ollama's default temperature seems to be 0.6 (hence the temp = None row roughly matches the results of the lower rows)
  • The tests may need further tweaking to accept arguably “correct” answers. The framing of the hello world test is questionable; sometimes models just explain how they would write the code.
  • The benchmark tool also supports running test_calc_results, which evaluates how well a model follows up on tool results. This seems to have more varied failure modes, so it's not evaluated by default.

TODO:

  • [x] Run & share more bench results (esp. other Qwen Coder variants!)
  • [x] Stabilize tests / ci
  • [ ] Analyze bench times

ochafik avatar Feb 22 '25 22:02 ochafik

Was Qwen 2.5 Coder even trained for tool use? 🤯

GuuD avatar Feb 22 '25 22:02 GuuD

Was Qwen 2.5 Coder even trained for tool use? 🤯

@GuuD I guess all models must be to some extent, these days. Their technical report only mentions in passing the fact that BigCodeBench is "primarily aimed at evaluating the ability of tool-use and complex instruction following" and their results on that benchmark look quite decent. But given the variety of outputs the model wraps tool calls in, I doubt they stuck to the syntax used in their jinja template.

ochafik avatar Feb 23 '25 00:02 ochafik

I see you also opened a PR for Cline to actually utilize this. Is there any chance you could do the same for Roo Code? I have been using both Cline and Roo Code with Qwen2.5-Coder-32b with moderate success, so any improvements to that workflow are more than welcome!

Mushoz avatar Mar 02 '25 12:03 Mushoz

I see you also opened a PR for Cline to actually utilize this. Is there any chance you could do the same for Roo Code? I have been using both Cline and Roo Code with Qwen2.5-Coder-32b with moderate success, so any improvements to that workflow are more than welcome!

Hey @Mushoz, I'll check Roo Code out, but probably won't be before a few weeks. Also, my PR on Cline was somewhat promptly rejected 😓 (https://github.com/cline/cline/pull/1946). Maybe I should have opened with a screencast of the very decent results I've been getting 🫣.

I've started looking at llama-vscode instead (cc/ @ggerganov @igardev ). As a follow-up to this PR, I'm trying to get streamed tool calls to work on both ends: it requires some partial JSON decoding on llama-server's side (nearly done), and streamed / partial JSON decoding on the IDE side (fully done), to get the same kind of streamed diffs as Cline has.

Hey @ngxson, would you have time to take a look at this one?

ochafik avatar Mar 02 '25 21:03 ochafik

@ochafik Roo Code is a fork of Cline that is rapidly growing in popularity. Since it's a fork, I am hopeful it's relatively "easy" to port your PR to Roo Code instead.

Mushoz avatar Mar 02 '25 21:03 Mushoz

@ochafik I saw the PR in Cline, but I don't have much to add as I am neither familiar with Cline nor with the tools/MCP support yet. It seems to me the functionality is powerful, but I need to dedicate some time to understand the details and how it works in order to provide any meaningful feedback. In any case, I think adding this kind of functionality to llama-vscode would be welcome as I believe @igardev (the maintainer of the extension) is interested in such high-level/agentic workflows.

ggerganov avatar Mar 03 '25 12:03 ggerganov

My backlog is quite full today; I'll do a review tomorrow.

ngxson avatar Mar 03 '25 18:03 ngxson

In general this LGTM, although there are some parts I don't understand 100%.

Btw, since the complexity of chat.cpp has been growing recently, I think it would be beneficial to have less dependency on the json type. The problem is that the json type is known to be bad for performance, and it also makes static analysis of the code impossible.

Thanks @ngxson !

I wonder if we could use some of the static json tricks used by alibaba/yalantinglibs (nifty macros that make data structures statically reflectable, for faster & safer serialization / deserialization). But if performance is the main concern I'd start with some benchmarks once the tool / mcp things stabilize (incl. streaming); there are larger bits I'd like to optimize / cache (grammar parsing, schema generation...).

ochafik avatar Mar 05 '25 02:03 ochafik

For me the main concerns with json.hpp are the static analysis difficulties (as @ngxson mentioned) and also the increased compilation time. I don't have a good feeling for how much json affects the performance of applications, but it's a second-tier concern on my list compared to the former.

ggerganov avatar Mar 05 '25 07:03 ggerganov

I wonder if we could use some of the static json tricks used by alibaba/yalantinglibs (nifty macros that make data structures statically reflectable, for faster & safer serialization / deserialization)

IMO this doesn't seem to be beneficial for us, as it implies even more dependency on json (which we currently want to avoid).

Also, I doubt it is even faster. The benchmark shown is for struct_pack and not struct_json; I don't think we can go much further when parsing/serializing json.

Looking at their example:

// requires <string> and yalantinglibs' struct_json headers
struct person {
  std::string name;
  int age;
};
YLT_REFL(person, name, age);

person p{"abc", 123};
std::string str;
struct_json::to_json(p, str);    // serialize p into str
person p1;
struct_json::from_json(p1, str); // parse str back into p1

Already, this is quite bad for us since sometimes we want to have optional fields or fields with multiple possible types. For example, "prompt" can be a string or an array of tokens in our case, which breaks the "statically reflectable" property you mentioned.

Also, having explicit to_json / from_json seems safer since you can explicitly validate the input data.
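
For what it's worth, a minimal sketch of an explicit, validating from_json for the "prompt" case mentioned above (hypothetical completion_request / llama_tokens types, not the actual server code):

#include <nlohmann/json.hpp>

#include <cstdint>
#include <stdexcept>
#include <string>
#include <variant>
#include <vector>

using json = nlohmann::json;
using llama_tokens = std::vector<int32_t>; // hypothetical token list type

struct completion_request {
    std::variant<std::string, llama_tokens> prompt;
};

// explicit deserialization: validate the shape of "prompt" at the boundary
static void from_json(const json & j, completion_request & req) {
    const json & p = j.at("prompt");
    if (p.is_string()) {
        req.prompt = p.get<std::string>();
    } else if (p.is_array()) {
        req.prompt = p.get<llama_tokens>();
    } else {
        throw std::invalid_argument("\"prompt\" must be a string or an array of tokens");
    }
}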

But if performance is the main concern I'd start with some benchmarks once the tool / mcp things stabilize (incl. streaming); there are larger bits I'd like to optimize / cache (grammar parsing, schema generation...).

The performance loss is not about serializing / deserializing the json; it comes from the fact that you need to get values via .at("...") and set them via json_obj["..."] = .... This is bad because these set/get operations rely on [un]ordered_map lookups, which have complexity O(log n).

For example, in these 2 versions, json is clearly slower:

json data = {
  {"name", "abc"},
  {"age", 123}
};

// each .at("...") does a keyed lookup into the json object
std::string salutation = "Hello, " + data.at("name").get<std::string>(); // O(log n) each time you use "at"

// versus struct
struct person {
  std::string name;
  int age;
};
person p{"abc", 123};

std::string salutation = "Hello, " + p.name; // direct member access, always O(1)

ngxson avatar Mar 05 '25 11:03 ngxson

IMO this doesn't seem to be beneficial for us, as it implies even more dependency on json (which we currently want to avoid). Also, I doubt it is even faster. The benchmark shown is for struct_pack and not struct_json; I don't think we can go much further when parsing/serializing json.

@ngxson I didn't test this yet tbh, but even without static json metaprogramming tricks (which on paper could unlock some speed) there's definitely room for faster parsing and serialisation, cf. @jart's https://github.com/jart/json.cpp (also a serious option to lower compilation times btw, with a maybe slightly more verbose interface but also maybe fewer syntax gotchas).

Already, this is quite bad for us since sometimes we want to have optional fields or fields with multiple possible types. For example, "prompt" can be a string or an array of tokens in our case, which breaks the "statically reflectable" property you mentioned.

Yeah, we'd need to evaluate the feasibility of supporting std::variant and std::optional to make this work (potentially a cool weekend exploration - deciding on variant branches, if possible, could be interesting - it might require extra metadata declarations, plus runtime backtracking and/or a static disambiguation strategy, assuming we can do enough constexpr magic).

I think the main value in adding a bit of static typing would be to reduce the surface for misuse (and also, it would move the json dependency entirely into a single generic serialization compilation unit).

is bad because these set/get operations rely on [un]ordered_map lookups, which have complexity O(log n)

~~Actually it's a hashmap~~ (Edit: my bad, looks like ordered_json uses a vector so it's all O(n) (ouch!), and the default nlohmann::json - which I don't think we use - uses std::map, so indeed O(log n))

ochafik avatar Mar 05 '25 11:03 ochafik

there's definitely room for faster parsing and serialisation

As I said, there are not many places in the code where we parse / serialize json, so just a reminder that it may not be beneficial to optimize this part.

Actually it's a hashmap so average O(1), worst case O(n) (with n very small anyway).

OK, maybe I confused it with std::map, which uses a red-black tree.

But anyway, that O(1) is still slower than accessing a struct member. Don't forget that with an std::unordered_map<std::string, ...> you also need to calculate the hash of the string, which depends on the string length. By contrast, accessing a class member can be done in just one pointer dereference, and can even be optimized at compile time.

I think the main value in adding a bit of static typing would be to reduce the surface for misuse. (and also, would move the json dependency entirely in a single generic serialization compilation unit)

I'm not sure if I understand this correctly; aren't structs already used for static typing?

ngxson avatar Mar 05 '25 12:03 ngxson

static json metaprogramming

Maybe a bit off-topic, but IMO doing this with C macros should not be a big problem. The bigger problem is that we try not to use too many macros (or nested macros), as they are impossible to debug with something like gdb. When possible, use templates instead.

ngxson avatar Mar 05 '25 12:03 ngxson

OK, maybe I confused it with std::map, which uses a red-black tree.

@ngxson cf. my update above: the default nlohmann::json (which we mostly don't use) does use an std::map indeed, but nlohmann::ordered_json uses... a vector 😅 (probably not a huge issue given they're all smallish anyway)

I think the main value in adding a bit of static typing would be to reduce the surface for misuse (and also, it would move the json dependency entirely into a single generic serialization compilation unit).

I'm not sure if I understand this correctly; aren't structs already used for static typing?

Yes, but not in places like the to_json_oaicompat* functions, where I've been caught many times by an extraneous pair of curly braces that completely changes the meaning (e.g. turning a json object into a json array), etc.
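
For the record, a minimal illustration of that gotcha with nlohmann's braced initialization (illustrative, not taken from the actual to_json_oaicompat* code):

json msg  = { {"role", "assistant"}, {"content", "hi"} };
// -> {"content":"hi","role":"assistant"}   : an object

json msgs = { { {"role", "assistant"}, {"content", "hi"} } };
// -> [{"content":"hi","role":"assistant"}] : one extra brace pair turns it into an array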

static json metaprogramming

Maybe a bit off-topic, but IMO doing this with C macros should not be a big problem. The bigger problem is that we try not to use too many macros (or nested macros), as they are impossible to debug with something like gdb. When possible, use templates instead.

Absolutely! Those variadic macros give me bad chills haha, and I'm not sure I like the idea of maintaining variadic templates either, but I'm keeping an open mind ;-)

ochafik avatar Mar 05 '25 12:03 ochafik

Yes, but not in places like the to_json_oaicompat* functions, where I've been caught many times by an extraneous pair of curly braces that completely changes the meaning (e.g. turning a json object into a json array), etc.

I see. Yeah, in fact the main purpose of to_json_oaicompat* is to convert the struct data into an OAI-compatible schema. The problem is that the original schema is in OpenAPI format, so I'm not sure how we can translate it into C++. For now, we rely mostly on the server pytest scripts to check that.

ngxson avatar Mar 05 '25 12:03 ngxson

I ran into some snags and listed them here: https://github.com/ggml-org/llama.cpp/issues/12279. One case almost passed. If anyone can take a look, I'd love to get something stable working, as I'm demonstrating this in about 10 days.

codefromthecrypt avatar Mar 09 '25 06:03 codefromthecrypt