[TRTLLM-6842][feat] Support Response API for general purpose
Summary by CodeRabbit
New Features
- Added support for reasoning parsers, including DeepSeek-R1 and Qwen3, in the Responses API
- Enhanced the OpenAI-compatible protocol with expanded response event types and text format configuration

Improvements
- Improved the responses streaming workflow with flexible multi-path processing
- Enhanced response history management and state tracking
- Added optional tokenization control in chat template application
Description
- Add non-Harmony model support for the Responses API, covering the same feature set as the Harmony model (gpt_oss).
- Add tests for non-Harmony models.
- Modify code to conform to the coding style.
- Add more comments.
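As a quick illustration, the new path can be exercised with any OpenAI-compatible client; the endpoint URL, port, and served model name below are assumptions for the example, not part of this PR:

```python
# Non-streaming Responses API call against a locally served non-Harmony
# model (one of the models used in the new tests). Endpoint is illustrative.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="dummy")
resp = client.responses.create(
    model="Qwen3/Qwen3-0.6B",
    input="What is 2 + 2?",
)
print(resp.output_text)
```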
Test Coverage
PR Checklist
Please review the following before submitting your PR:
- PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
- PR follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
- Test cases are provided for new code paths (see test instructions).
- Any new dependencies have been scanned for license and vulnerabilities.
- CODEOWNERS updated if ownership changes.
- Documentation updated as needed.
- Update tava architecture diagram if there is a significant design change in PR.
- The reviewers assigned automatically/manually are appropriate for the PR.
- [x] Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...
Provide a user friendly way for developers to interact with a Jenkins server.
Run /bot [-h|--help] to print this help message.
See details below for each supported subcommand.
run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]
Launch build/test pipelines. All previously running jobs will be killed.
--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.
--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.
--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only [pytorch, cpp, tensorrt, triton] are supported. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.
--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.
For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
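Example invocation combining the flags documented above (the stage name is the documented sample):
/bot run --disable-fail-fast --stage-list "A10-PyTorch-1"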
kill
Kill all running builds associated with the pull request.
skip
skip --comment COMMENT
Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.
reuse-pipeline
Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.
/bot run
Walkthrough
This PR refactors the Responses API to support configurable reasoning and tool parsers, replacing direct Harmony adapter dependency with a flexible use_harmony flag. Changes span chat template tokenization control, reasoning parser whitespace handling, protocol/response format updates, server orchestration of streaming paths, and comprehensive responses utility refactoring with Harmony/non-Harmony processing pipelines.
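The dispatch this walkthrough describes can be pictured with a small stand-in; the helper names below echo the walkthrough, but the bodies are illustrative sketches, not the real tensorrt_llm/serve implementations:

```python
# Self-contained sketch of the use_harmony dispatch described above.
from typing import Any


def request_preprocess(request: dict, use_harmony: bool) -> dict[str, Any]:
    if use_harmony:
        # Harmony path (gpt_oss): build Harmony-format input messages.
        return {"harmony_messages": _create_input_messages(request)}
    # Non-Harmony path: render the chat template and tokenize directly.
    return {"prompt_token_ids": _create_input_tokens(request)}


def _create_input_messages(request: dict) -> list[dict]:
    return [{"role": "user", "content": request["input"]}]


def _create_input_tokens(request: dict) -> list[int]:
    # Stand-in for apply_chat_template(..., enable_tokenize=True).
    return [ord(ch) for ch in request["input"]]


print(request_preprocess({"input": "hi"}, use_harmony=False))
# -> {'prompt_token_ids': [104, 105]}
```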
Changes
| Cohort / File(s) | Summary |
|---|---|
| Chat Template Configuration `tensorrt_llm/inputs/utils.py` | Added an `enable_tokenize: bool = False` parameter to `apply_chat_template` to make tokenization behavior configurable during template application. |
| Reasoning Parser Updates `tensorrt_llm/llmapi/reasoning_parser.py` | Modified `DeepSeekR1Parser.parse` to strip leading/trailing whitespace from both `reasoning_content` and `content` segments after partitioning (see the sketch after this table). |
| Protocol and Response Formats `tensorrt_llm/serve/openai_protocol.py` | Added imports for `Response*Event` types and related constructs; introduced a `_response_format_text_config_to_guided_decoding_params` helper; added a `StreamingResponsesResponse` type alias; changed `ResponsesResponse.max_output_tokens` to `Optional[int]`; refactored `ResponsesRequest.to_sampling_params` to remove the `default_max_tokens` parameter and integrate guided decoding via the new helper. |
| Server Responses Orchestration `tensorrt_llm/serve/openai_server.py` | Replaced the direct Harmony adapter dependency with a flexible `use_harmony` flag; refactored `create_stream_response` and the preprocessing/response-creation paths to accept `use_harmony`, `reasoning_parser`, and `tool_parser` parameters; removed the `harmony_adapter` argument from downstream utilities. |
| Responses Utility Implementation `tensorrt_llm/serve/responses_utils.py` | Large refactor introducing Harmony-enabled multi-path processing: added new public classes (`ResponsesStreamingStateTracker`, `ResponsesStreamingEventsHelper`); renamed many public helpers to private underscored variants (`get_system_message` → `_get_system_message`, etc.); introduced Harmony/non-Harmony conditional branching; expanded conversation history store logic with richer message handling, capacity management, and LRU-like eviction; added internal pipelines for input preprocessing (`_create_input_messages`, `_create_input_tokens`), output postprocessing (`_create_output_content`), and streaming event sequencing; refactored `process_streaming_events` and `create_response` to support both modes. |
| Test Fixtures and Multi-Model Support `tests/unittest/llmapi/apps/_test_openai_responses.py` | Replaced the fixed model fixture with a parameterized fixture supporting three model paths ("gpt_oss/gpt-oss-20b", "DeepSeek-R1-Distill-Qwen-1.5B", "Qwen3/Qwen3-0.6B"); added conditional `reasoning_parser` and `tool_parser` argument construction based on model prefix; updated tool-calling tests with reasoning-extraction checks and DeepSeek-R1 early skips. |
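A minimal sketch of the partition-and-strip behavior described for `DeepSeekR1Parser.parse`, assuming the `</think>` delimiter that DeepSeek-R1 emits; the function name and return shape here are illustrative:

```python
# Split model output into (reasoning, answer) and strip both segments,
# mirroring the whitespace handling described in the table above.
def parse_reasoning(text: str) -> tuple[str, str]:
    reasoning, sep, content = text.partition("</think>")
    if not sep:
        # No closing tag yet: treat everything as reasoning content.
        return reasoning.strip(), ""
    return reasoning.strip(), content.strip()


print(parse_reasoning("Let me think...\n</think>\nThe answer is 4."))
# -> ('Let me think...', 'The answer is 4.')
```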
Sequence Diagram(s)
```mermaid
sequenceDiagram
participant Client
participant OpenAIServer as openai_server.py
participant ResponseUtils as responses_utils.py
participant HarmonyAdapter
participant TokenRenderer
rect rgb(200, 240, 255)
Note over OpenAIServer,ResponseUtils: use_harmony = true (Harmony path)
Client->>OpenAIServer: POST /responses
OpenAIServer->>ResponseUtils: request_preprocess<br/>(use_harmony=true)
ResponseUtils->>HarmonyAdapter: Initialize & prepare
OpenAIServer->>ResponseUtils: process_streaming_events<br/>(use_harmony=true, reasoning_parser, tool_parser)
ResponseUtils->>ResponseUtils: _construct_harmony_messages()
ResponseUtils->>HarmonyAdapter: Process & emit harmony events
ResponseUtils->>ResponseUtils: _apply_reasoning_parser()
ResponseUtils->>ResponseUtils: _apply_tool_parser()
ResponseUtils->>Client: Stream response.created, response.in_progress, ..., response.completed
end
rect rgb(240, 200, 255)
Note over OpenAIServer,ResponseUtils: use_harmony = false (Non-Harmony path)
Client->>OpenAIServer: POST /responses
OpenAIServer->>ResponseUtils: request_preprocess<br/>(use_harmony=false)
ResponseUtils->>ResponseUtils: _create_input_tokens()
OpenAIServer->>ResponseUtils: process_streaming_events<br/>(use_harmony=false, reasoning_parser, tool_parser)
ResponseUtils->>TokenRenderer: Render messages
ResponseUtils->>ResponseUtils: _apply_reasoning_parser()
ResponseUtils->>ResponseUtils: _apply_tool_parser()
ResponseUtils->>ResponseUtils: _create_output_content()
ResponseUtils->>Client: Stream response.created, response.in_progress, delta events, response.completed
end
```
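For orientation, the event sequence in the non-Harmony lane above can be consumed with the stock OpenAI SDK; the endpoint and model name below are assumptions for the example:

```python
# Client-side view of the streamed events shown above, using the OpenAI
# Python SDK. The base_url/port and model name are illustrative.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="dummy")
with client.responses.stream(model="Qwen3/Qwen3-0.6B",
                             input="Tell me a joke.") as stream:
    for event in stream:
        # Print only the incremental text deltas; other event types
        # (response.created, response.completed, ...) are skipped here.
        if event.type == "response.output_text.delta":
            print(event.delta, end="", flush=True)
print()
```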
Estimated code review effort
4 (Complex) | ~45 minutes
Key areas requiring attention:
- responses_utils.py: Extensive refactoring with substantial new logic for Harmony/non-Harmony branching, streaming state management, and conversation history handling (a history-store sketch follows this list); verify control flow correctness and state consistency across both paths
- openai_server.py: Verify that removal of direct Harmony adapter initialization does not break edge cases; confirm parameter threading through all downstream calls
- openai_protocol.py: Review changes to ResponsesRequest.to_sampling_params for correctness of the max_tokens calculation and guided decoding parameter propagation
- responses_utils.py public API changes: Numerous function renamings to private variants; verify all call sites are updated and the public API surface is intentional
- test parameterization: Ensure model-specific reasoning_parser and tool_parser flags are correctly wired and DeepSeek-R1 test skips do not mask actual failures
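A hedged sketch of the "LRU-like eviction" described for the conversation history store; the class and method names here are illustrative, not the actual responses_utils.py API. `collections.OrderedDict` supplies the move-to-end/pop-oldest semantics:

```python
from collections import OrderedDict


class HistoryStore:
    def __init__(self, capacity: int = 128):
        self.capacity = capacity
        self._store: OrderedDict[str, list] = OrderedDict()

    def append(self, response_id: str, messages: list) -> None:
        self._store.setdefault(response_id, []).extend(messages)
        self._store.move_to_end(response_id)   # mark as most recently used
        while len(self._store) > self.capacity:
            self._store.popitem(last=False)    # evict the least recent entry

    def get(self, response_id: str) -> list:
        if response_id in self._store:
            self._store.move_to_end(response_id)   # reads refresh recency
        return self._store.get(response_id, [])
```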
Possibly related PRs
- NVIDIA/TensorRT-LLM#7341: Modifies the same Responses API code paths (openai_protocol.py, openai_server.py, responses_utils.py) with overlapping changes to response request/response handling, streaming orchestration, and Harmony/non-Harmony plumbing.
Suggested reviewers
- LinPoly
Pre-merge checks and finishing touches
Passed checks (3 passed)
| Check name | Status | Explanation |
|---|---|---|
| Title check | Passed | The title clearly summarizes the main change: adding general-purpose (non-Harmony model) support for the Responses API, which aligns with the substantial changes across multiple files enabling non-Harmony model paths. |
| Description check | Passed | The description adequately explains the PR objectives, including non-Harmony model support, test coverage, code style conformance, and additional comments. The PR checklist is completed and matches template requirements. |
| Docstring Coverage | Passed | No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check. |
PR_Github #25550 [ run ] triggered by Bot. Commit: 1378f31
PR_Github #25550 [ run ] completed with state SUCCESS. Commit: 1378f31
/LLM/main/L0_MergeRequest_PR pipeline #19349 completed with status: 'FAILURE'
/bot run
PR_Github #25667 [ run ] triggered by Bot. Commit: 6db1ac4
PR_Github #25667 [ run ] completed with state SUCCESS. Commit: 6db1ac4
/LLM/main/L0_MergeRequest_PR pipeline #19452 completed with status: 'FAILURE'
/bot run
PR_Github #25716 [ run ] triggered by Bot. Commit: fd21c8a
PR_Github #25716 [ run ] completed with state SUCCESS. Commit: fd21c8a
/LLM/main/L0_MergeRequest_PR pipeline #19498 completed with status: 'FAILURE'
/bot run
PR_Github #26092 [ run ] triggered by Bot. Commit: c6dd6f8
PR_Github #26092 [ run ] completed with state SUCCESS. Commit: c6dd6f8
/LLM/main/L0_MergeRequest_PR pipeline #19812 completed with status: 'FAILURE'
/bot run
PR_Github #26351 [ run ] triggered by Bot. Commit: 73cb11a
PR_Github #26351 [ run ] completed with state SUCCESS. Commit: 73cb11a
/LLM/main/L0_MergeRequest_PR pipeline #20012 completed with status: 'FAILURE'
/bot run
PR_Github #26375 [ run ] triggered by Bot. Commit: 73cb11a
PR_Github #26375 [ run ] completed with state SUCCESS. Commit: 73cb11a
/LLM/main/L0_MergeRequest_PR pipeline #20034 completed with status: 'FAILURE'
/bot run
PR_Github #26525 [ run ] triggered by Bot. Commit: f74ded0
PR_Github #26525 [ run ] completed with state FAILURE. Commit: f74ded0
/LLM/main/L0_MergeRequest_PR pipeline #20169 completed with status: 'FAILURE'
/bot run
PR_Github #26534 [ run ] triggered by Bot. Commit: f74ded0
PR_Github #26534 [ run ] completed with state FAILURE. Commit: f74ded0
/LLM/main/L0_MergeRequest_PR pipeline #20175 completed with status: 'FAILURE'
/bot run
PR_Github #26554 [ run ] triggered by Bot. Commit: f74ded0
PR_Github #26554 [ run ] completed with state FAILURE. Commit: f74ded0
/LLM/main/L0_MergeRequest_PR pipeline #20193 completed with status: 'FAILURE'
/bot run
PR_Github #26573 [ run ] triggered by Bot. Commit: 4d2462d