feat: add streaming tool use
This PR upgrades the chatml-function-calling chat handler with support for streaming tool use and fixes #1883, #1869, and #1756, among other improvements.
Changes:
- General:
  a. ✨ If no system message is supplied, add an empty system message to hold the tool metadata.
  b. ✨ Add function descriptions to the system message so that tool use is better informed (fixes #1869).
  c. ✨ Replace `print` statements relating to JSON grammars with `RuntimeWarning` warnings.
  d. ✅ Add tests with fairly broad coverage of the different scenarios.
- Case "Tool choice by user":
  a. ✨ Add support for more than one function call by making this a special case of "Automatic tool choice" with a single tool (subsumes #1503).
- Case "Automatic tool choice -> respond with a message":
  a. ✨ Use the user-defined `stop` and `max_tokens`.
  b. 🐛 Replace incorrect use of the follow-up grammar with the user-defined grammar.
- Case "Automatic tool choice -> one or more function calls":
  a. ✨ Add support for streaming the function calls (fixes #1883).
  b. ✨ Make tool calling more robust by giving the LLM an explicit way to terminate the tool calls by wrapping them in a `<function_calls></function_calls>` block.
  c. 🐛 Add the missing ":" stop token used to determine whether to continue with another tool call; its absence prevented parallel function calling (fixes #1756).
  d. ✨ Set `temperature=0` when determining whether to continue with another tool call, similar to the initial decision on whether to call a tool.
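To illustrate what the streaming support enables on the client side, here is a minimal sketch of reassembling streamed tool-call deltas into complete tool calls. It assumes the OpenAI-compatible streaming format (`delta.tool_calls` entries carrying an `index`, an optional function name, and incremental argument fragments); the simulated chunks are hand-written illustrations, not actual handler output.

```python
# Sketch: merge streamed tool-call deltas into complete tool calls.
# The chunk dicts below are illustrative examples of the OpenAI-compatible
# streaming shape, not captured output from the chatml-function-calling handler.

def assemble_tool_calls(chunks):
    """Accumulate tool-call deltas, keyed by their `index` field."""
    calls = {}
    for chunk in chunks:
        delta = chunk["choices"][0]["delta"]
        for tc in delta.get("tool_calls") or []:
            call = calls.setdefault(tc["index"], {"name": "", "arguments": ""})
            fn = tc.get("function", {})
            if fn.get("name"):
                call["name"] = fn["name"]  # name arrives once, in the first delta
            call["arguments"] += fn.get("arguments", "")  # fragments concatenate
    return [calls[i] for i in sorted(calls)]

# Two parallel calls, with the first call's arguments split across deltas.
simulated_chunks = [
    {"choices": [{"delta": {"tool_calls": [
        {"index": 0, "function": {"name": "get_weather", "arguments": ""}}]}}]},
    {"choices": [{"delta": {"tool_calls": [
        {"index": 0, "function": {"arguments": '{"city": '}}]}}]},
    {"choices": [{"delta": {"tool_calls": [
        {"index": 0, "function": {"arguments": '"Paris"}'}}]}}]},
    {"choices": [{"delta": {"tool_calls": [
        {"index": 1, "function": {"name": "get_time", "arguments": "{}"}}]}}]},
]

print(assemble_tool_calls(simulated_chunks))
# → [{'name': 'get_weather', 'arguments': '{"city": "Paris"}'},
#    {'name': 'get_time', 'arguments': '{}'}]
```

The `index` field is what makes parallel function calling (item c above) workable for consumers: fragments from interleaved calls can be routed to the right accumulator.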
@abetlen The tests all pass, but the macOS ones were terminated after a timeout. I think this is due to a lack of CPU and/or memory resources, because the tests run fine on my macOS machine.
I would love to see this merged! Actually, there are quite a lot of good pull requests here that I would like to see merged, but this one is top priority!
Update: I rebased on the latest main and included a few tiny improvements to further improve tool calling robustness.
Update: I rebased on the latest main and conditionally skipped the added tests on macOS when not enough resources are available to run them.
This worked well for me. Would you mind rebasing onto the latest commit to allow tool streaming with Qwen models? Thanks for your work!
Would love to see this merged - is there anything holding it up?
@abetlen I rebased the PR on the latest upstream main and added a small commit to fix the returned logprobs format.