[None][chore] Weekly mass integration of release/1.1

Open · mikeiovine opened this issue 2 months ago • 50 comments

Description

PRs explicitly excluded in this round:

  • https://github.com/NVIDIA/TensorRT-LLM/pull/8621: DLFW upgrade, already applied in main branch.
  • https://github.com/NVIDIA/TensorRT-LLM/pull/8877: Another DLFW related upgrade.
  • https://github.com/NVIDIA/TensorRT-LLM/pull/8860: CI change intended for release branch only.
  • https://github.com/NVIDIA/TensorRT-LLM/pull/8891: Bug already fixed in a refactor merged to main a few weeks ago.
  • https://github.com/NVIDIA/TensorRT-LLM/pull/9324: Dropped at the author's request.
  • https://github.com/NVIDIA/TensorRT-LLM/pull/8888: Test fails on main.
  • https://github.com/NVIDIA/TensorRT-LLM/pull/8835: Test deleted on main.
  • As usual, all PRs that only add test waivers are also excluded.

Test Coverage

N/A

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update the tava architecture diagram if there is a significant design change in the PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • [x] Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md and the scripts/test_to_stage_mapping.py helper.
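For example, commenting `/bot run --disable-fail-fast --stage-list "A10-PyTorch-1"` (an illustrative combination of the flags above, reusing the stage name from the help text) runs only that stage with fail-fast disabled; as noted above, stage-list runs do not update the GitHub check status.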

kill

kill

Kills all running builds associated with the pull request.

skip

skip --comment COMMENT

Skips testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since skipping validation without care can break the top of tree.
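For example: `/bot skip --comment "Docs-only change, no code paths affected"` (the comment text here is illustrative; any justification string satisfies the required flag).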

reuse-pipeline

reuse-pipeline

Reuses a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since reusing a pipeline without care and validation can break the top of tree.

Summary by CodeRabbit

  • New Features

    • Added persistent KV cache connector for cross-instance cache reuse
    • Expanded multimodal model support (Phi-4, Mistral-Small-3.1, Qwen2.5 VL)
    • Added FP8 quantized model deployment guidance
  • Bug Fixes

    • Improved memory management with configurable GPU memory limits for KV cache
    • Optimized stop token evaluation with early-exit for single-token stops
    • Enhanced CUDA memory handling during graph capture
  • Documentation

    • Updated hyperlinks and performance documentation references
    • Expanded multimodal model feature support matrix
    • Added quick-start examples for FP8-quantized models
  • Tests

    • Extended test timeouts for complex multi-GPU scenarios
    • Added new test coverage for Phi-4 multimodal fused vision configurations

โœ๏ธ Tip: You can customize this high-level summary in your review settings.

mikeiovine · Nov 20 '25 21:11

๐Ÿ“ Walkthrough

Walkthrough

This PR encompasses kernel optimization tuning (FMHA v2, Cutlass heuristics), KV cache memory management improvements with a new persistent connector, PyTorch 2.9+ Dynamo compatibility fixes, memory profiling infrastructure, sampler enhancements, documentation updates, model support matrix expansions, and broad test coverage adjustments.

Changes

| Cohort / File(s) | Change Summary |
|---|---|
| **Kernel Optimizations**<br>cpp/kernels/fmha_v2/setup.py, cpp/tensorrt_llm/kernels/cutlass_kernels/cutlass_heuristic.cpp, cpp/tensorrt_llm/kernels/fmhaDispatcher.cpp | FMHA v2 adds Gemma3 VL head_size 72 support; the Cutlass heuristic reorders FP8 GROUPED_GEMM tile configs for SM89/120+; the FMHA dispatcher excludes head sizes 72 and 80 from the TRTLLM-GEN path. |
| **Memory Management & Profiling**<br>cpp/tensorrt_llm/common/opUtils.cpp | Introduces per-thread observer map lifecycle management with a destructor, adds memory profiling utilities (MemoryInfo, getMemoryInfo, logMemoryUsage), augments handle creation with memory logging and error context, and replaces raw new with smart pointers. |
| **KV Cache Resource Management**<br>tensorrt_llm/_torch/pyexecutor/resource_manager.py, tensorrt_llm/_torch/pyexecutor/py_executor_creator.py, tensorrt_llm/_torch/auto_deploy/shim/ad_executor.py | Adds an enforce_memory_limit parameter to KVCacheManager and calculate_max_num_blocks, integrates garbage collection, and threads memory enforcement through the block calculation logic (see the sketch after this table). |
| **Persistent KV Cache Connector**<br>examples/llm-api/llm_kv_cache_connector.py | Implements PersistentKvCacheConnectorWorker and PersistentKvCacheConnectorLeader with a metadata holder for cross-instance KV cache reuse via disk persistence; replaces the placeholder with a functional connector demo. |
| **Sampler Improvements**<br>tensorrt_llm/_torch/pyexecutor/sampler.py | Adds a new_token parameter to stop-token criteria, optimizes the single-token stop-word path with an early exit, and refactors multi-token handling. |
| **IPC & MPI Infrastructure**<br>tensorrt_llm/llmapi/mpi_session.py, tensorrt_llm/commands/serve.py | Adds find_free_ipc_addr() and split_mpi_env() functions; replaces TCP-based port discovery with an IPC address for the disaggregated leader launcher. |
| **PyTorch 2.9+ Compatibility**<br>examples/models/contrib/dit/vae_decoder_trt.py, examples/models/core/qwenvl/vit_onnx_trt.py, tensorrt_llm/tools/multimodal_builder.py | Adds dynamo=False to torch.onnx.export calls, with comments explaining the PyTorch >= 2.9.0 Dynamo opset_version=17 incompatibility. |
| **Documentation Updates**<br>README.md, docs/source/blogs/*, docs/source/features/disagg-serving.md, docs/source/overview.md, examples/models/core/multimodal/README.md, examples/sample_weight_stripping/README.md | Updates hyperlink references: TensorRT-LLM overview paths, performance docs versioning, Dynamo backends URL, NeVA toolkit links, and developer guide inclusion in the index. |
| **Model Support & Feature Matrix**<br>docs/source/models/supported-models.md, docs/source/legacy/reference/multimodal-feature-support-matrix.md | Updates feature flags (KV Cache Reuse, Chunked Prefill) for LlavaNext, Nemotron, Phi-4-multimodal, and Qwen2 VL variants; renames and consolidates multimodal entries. |
| **Configuration & Scripts**<br>examples/llm-api/extra-llm-api-config.yml, examples/llm-api/llm_mgmn_llm_distributed.sh | Adds a YAML config with cuda_graph_config and moe_config; adds --max_batch_size 256 to the llm-api-launch invocation. |
| **Quick-Start & API Docs**<br>docs/source/quick-start-guide.md | Adds FP8 model deployment guidance and an example trtllm-serve command for FP8-quantized models. |
| **Disaggregated Test Updates**<br>tests/integration/defs/disaggregated/test_disaggregated.py, tests/integration/defs/disaggregated/test_disaggregated_single_gpu.py | Removes the skip_warmup parameter from run_disaggregated_benchmark; adds free_gpu_memory_fraction=0.25 to KvCacheConfig in single-GPU tests. |
| **Accuracy & Model Tests**<br>tests/integration/defs/accuracy/test_llm_api_pytorch.py, tests/integration/defs/accuracy/references/mmmu.yaml, tests/integration/defs/accuracy/test_disaggregated_serving.py | Adds free_gpu_memory_fraction to FP8 tests, introduces test_nvfp4_multi_gpus_sm120, adds a Phi-4-multimodal fused vision LoRA test class, increases the Qwen3 timeout to 3600s, and adds a Phi-4-multimodal MMMU reference accuracy. |
| **End-to-End Tests**<br>tests/integration/defs/test_e2e.py | Reduces parameterization (removes match_ratio/modality), bypasses keyword validation (0.0 match_ratio), adds --kv_cache_fraction flags, adds early-exit paths for flaky models, and reorganizes multimodal variants. |
| **Test Infrastructure & Lists**<br>tests/integration/test_lists/*, tests/integration/test_lists/qa/*, tests/integration/test_lists/test-db/*, tests/integration/test_lists/waives.txt | Updates timeouts, adds/removes test entries, removes SKIP markers, adjusts parameterization (removes the "-0.6-" suffix), and adds Phi-4 fused vision LoRA and SM120 tests. |
| **Unit Tests**<br>tests/unittest/_torch/modules/test_fused_moe.py, tests/unittest/_torch/sampler/test_trtllm_sampler.py, tests/unittest/llmapi/apps/openai_server.py | Increases HIDDEN_SIZE to 4096 and refactors device binding in the MoE test; adds sampler factory functions with TRTLLMSampler/TorchSampler wrappers and stop-token tests; increases the RemoteOpenAIServer timeout from 600s to 7200s. |
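As a reading aid for the KV cache resource management row above, here is a minimal sketch of how an enforce_memory_limit flag can be threaded from a cache manager into block-count calculation. All signatures and names below are illustrative assumptions, not the actual TensorRT-LLM API.

```python
# Illustrative sketch only; names/signatures do not match TensorRT-LLM exactly.

def calculate_max_num_blocks(requested_blocks: int,
                             bytes_per_block: int,
                             free_mem_bytes: int,
                             free_gpu_memory_fraction: float,
                             enforce_memory_limit: bool = False) -> int:
    """Return how many KV cache blocks to allocate."""
    budget_blocks = int(free_mem_bytes * free_gpu_memory_fraction) // bytes_per_block
    if enforce_memory_limit:
        # Hard cap: never allocate past the memory budget, even if the
        # model or config asks for more blocks.
        return min(requested_blocks, budget_blocks)
    return requested_blocks


class KVCacheManager:
    """The flag is accepted at the manager level and simply forwarded down."""

    def __init__(self, requested_blocks: int, bytes_per_block: int,
                 free_mem_bytes: int, free_gpu_memory_fraction: float,
                 enforce_memory_limit: bool = False):
        self.max_num_blocks = calculate_max_num_blocks(
            requested_blocks, bytes_per_block, free_mem_bytes,
            free_gpu_memory_fraction,
            enforce_memory_limit=enforce_memory_limit)
```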

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant App as Application
    participant Leader as PersistentKvCacheConnectorLeader
    participant Worker as PersistentKvCacheConnectorWorker
    participant Disk as Disk Storage
    participant GPU as GPU Memory

    App->>Leader: Request KV cache connector
    Leader->>Leader: Compute block hashes

    rect rgb(220, 240, 255)
    Note over Leader: Generation 1
    App->>Worker: Register KV cache tensor
    Worker->>GPU: Hold tensor reference
    App->>Leader: New blocks to load
    Leader->>Disk: Query cached blocks
    Disk-->>Leader: Block data
    Leader->>Worker: Load blocks command
    Worker->>GPU: Load from disk → GPU
    App->>Leader: Blocks to save
    Leader->>Worker: Save blocks command
    Worker->>Disk: Write blocks to disk
    end

    rect rgb(240, 255, 220)
    Note over Leader: Generation 2 (cross-instance)
    App->>Worker: Register KV cache tensor
    App->>Leader: Load same prompt blocks
    Leader->>Disk: Query cached blocks
    Disk-->>Leader: Block data
    Leader->>Worker: Load blocks command
    Worker->>GPU: Load from disk → GPU
    Worker-->>App: Cache hit - fast reuse
    end
```
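The disk round-trip in the diagram above can be pictured with a small sketch. The two class names come from the example file, but every method name, the hash helper, and the cache directory are illustrative assumptions rather than the real llm_kv_cache_connector.py implementation.

```python
import hashlib
import os

import torch

CACHE_DIR = "/tmp/kv_cache_blocks"  # illustrative location


def block_hash(token_ids: list[int]) -> str:
    """Content hash identifying a block across instances (leader side)."""
    return hashlib.sha256(str(token_ids).encode()).hexdigest()


class PersistentKvCacheConnectorWorker:
    """Illustrative worker: copies KV blocks between GPU memory and disk."""

    def __init__(self) -> None:
        self.kv_tensor = None  # set when the executor registers its pool
        os.makedirs(CACHE_DIR, exist_ok=True)

    def register_kv_cache(self, kv_tensor: torch.Tensor) -> None:
        # Hold a reference to the cache pool; blocks are indexed along dim 0.
        self.kv_tensor = kv_tensor

    def save_block(self, block_id: int, content_hash: str) -> None:
        # GPU -> disk: persist the block under its content hash.
        torch.save(self.kv_tensor[block_id].cpu(),
                   os.path.join(CACHE_DIR, f"{content_hash}.pt"))

    def load_block(self, block_id: int, content_hash: str) -> bool:
        # Disk -> GPU: copy a previously saved block back into the pool.
        path = os.path.join(CACHE_DIR, f"{content_hash}.pt")
        if not os.path.exists(path):
            return False  # cache miss
        self.kv_tensor[block_id].copy_(torch.load(path))
        return True  # cache hit: reusable across process instances
```

Because blocks are keyed by a content hash rather than a process-local ID, a second instance serving the same prompt can recover the blocks from disk, which is the cross-instance reuse the diagram's second generation shows.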
```mermaid
sequenceDiagram
    participant User as PyTorch Code
    participant Export as torch.onnx.export
    participant Dynamo as PyTorch Dynamo (≥2.9.0)
    participant ONNX as ONNX Exporter

    User->>Export: Call with dynamo=False, opset_version=17
    Export->>Dynamo: Dynamo disabled (default skip)
    Export->>ONNX: Use standard exporter
    ONNX-->>User: ✓ Successful export

    rect rgb(255, 240, 220)
    Note over User,ONNX: Previous behavior (issue)
    User->>Export: Call with opset_version=17 (no dynamo arg)
    Export->>Dynamo: Dynamo enabled (default in 2.9+)
    Dynamo-->>Export: ✗ Opset 17 incompatibility
    end
```
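The second diagram corresponds to a one-line change at each affected torch.onnx.export call site. A minimal, self-contained sketch of the fix (the toy model and output filename are illustrative):

```python
import torch


class TinyModel(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x)


model = TinyModel().eval()
dummy = torch.randn(1, 8)

# On PyTorch >= 2.9.0 the Dynamo-based exporter is the default and does not
# work with opset_version=17; dynamo=False falls back to the legacy exporter,
# which is what this PR does for the affected export scripts.
torch.onnx.export(model, (dummy,), "tiny.onnx",
                  opset_version=17, dynamo=False)
```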

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Areas requiring extra attention:

  • Memory management in opUtils.cpp: Pointer-based observer map lifecycle, destructor cleanup, and error propagation with memory context require careful review of initialization, access patterns, and teardown correctness.
  • KV cache parameter threading: The enforce_memory_limit parameter propagation across multiple layers (KVCacheManager โ†’ calculate_max_num_blocks โ†’ block allocation logic) needs verification for consistency and correct memory-limit enforcement semantics.
  • Sampler optimization in sampler.py: The new fast path for single-token stop words requires validation that the early-exit logic correctly handles edge cases and doesn't skip multi-token stop-word setup (see the sketch after this list).
  • KV cache connector implementation: New persistent connector classes (PersistentKvCacheConnectorWorker/Leader) introduce cache serialization and cross-instance reuse logic that requires validation of correctness, file management, and block hashing.
  • Test parameterization reduction in test_e2e.py: Substantial simplification of multimodal test coverage and introduction of 0.0 match_ratio bypass warrants verification that smoke-test behavior is intentional and doesn't hide regressions.
  • Kernel tuning decisions: Head-size exclusions in FMHA dispatcher and tile reordering in Cutlass heuristic require domain knowledge to validate correctness and performance intent.
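For the sampler bullet above, a minimal sketch of a single-token fast path; the function and parameter names are illustrative assumptions, not the actual tensorrt_llm/_torch/pyexecutor/sampler.py code.

```python
def should_stop(new_token: int,
                generated: list[int],
                stop_words: list[list[int]]) -> bool:
    """Return True if generation should stop after emitting new_token.

    Assumes `generated` already includes new_token at its end.
    """
    for stop in stop_words:
        if len(stop) == 1:
            # Fast path: a single-token stop word only needs to be compared
            # against the newly generated token; no sequence scan required.
            if new_token == stop[0]:
                return True
        elif len(generated) >= len(stop) and generated[-len(stop):] == stop:
            # Slow path: multi-token stop words must match a suffix of the
            # full generated sequence.
            return True
    return False
```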

Possibly related PRs

  • NVIDIA/TensorRT-LLM#6655: Modifies the fmha_v2 kernel setup and FMHA attention code to adjust head-size handling and kernel enumeration, overlapping at the cpp/kernels/fmha_v2/setup.py level.

Suggested reviewers

  • hchings
  • byshiue
  • niukuo
  • liji-nv
  • symphonylyh
  • yuxianq
  • govind-ramnarayan

Pre-merge checks and finishing touches

โŒ Failed checks (1 warning, 1 inconclusive)
Check name Status Explanation Resolution
Docstring Coverage โš ๏ธ Warning Docstring coverage is 24.14% which is insufficient. The required threshold is 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
Description check โ“ Inconclusive PR description lacks clear explanation of what changes are being integrated and why, mentioning only excluded PRs and CI bot help documentation. Add a concise summary of the main changes being integrated in this mass integration PR, explaining the purpose and impact of the integrated features.
โœ… Passed checks (1 passed)
Check name Status Explanation
Title check โœ… Passed The title '[None][chore] Weekly mass integration of release/1.1' clearly summarizes the main change: a mass integration of changes from the 1.1 release branch into the main branch. It is specific, concise, and directly reflects the pull request's purpose.
โœจ Finishing touches
  • [ ] ๐Ÿ“ Generate docstrings
๐Ÿงช Generate unit tests (beta)
  • [ ] Create PR with unit tests
  • [ ] Post copyable unit tests in a comment

[!TIP]

๐Ÿ“ Customizable high-level summaries are now available in beta!

You can now customize how CodeRabbit generates the high-level summary in your pull requests โ€” including its content, structure, tone, and formatting.

  • Provide your own instructions using the high_level_summary_instructions setting.
  • Format the summary however you like (bullet lists, tables, multi-section layouts, contributor stats, etc.).
  • Use high_level_summary_in_walkthrough to move the summary from the description to the walkthrough section.

Example instruction:

"Divide the high-level summary into five sections:

  1. ๐Ÿ“ Description โ€” Summarize the main change in 50โ€“60 words, explaining what was done.
  2. ๐Ÿ““ References โ€” List relevant issues, discussions, documentation, or related PRs.
  3. ๐Ÿ“ฆ Dependencies & Requirements โ€” Mention any new/updated dependencies, environment variable changes, or configuration updates.
  4. ๐Ÿ“Š Contributor Summary โ€” Include a Markdown table showing contributions: | Contributor | Lines Added | Lines Removed | Files Changed |
  5. โœ”๏ธ Additional Notes โ€” Add any extra reviewer context. Keep each section concise (under 200 words) and use bullet or numbered lists for clarity."

Note: This feature is currently in beta for Pro-tier users, and pricing will be announced later.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

โค๏ธ Share

Comment @coderabbitai help to get the list of available commands and usage tips.

coderabbitai[bot] · Nov 20 '25 21:11

/bot run --disable-fail-fast

mikeiovine · Nov 20 '25 21:11


PR_Github #25249 [ run ] triggered by Bot. Commit: 4e908f2

tensorrt-cicd · Nov 20 '25 21:11

PR_Github #25249 [ run ] completed with state ABORTED. Commit: 4e908f2 LLM/main/L0_MergeRequest_PR #19098 (Blue Ocean) completed with status: ABORTED

tensorrt-cicd · Nov 20 '25 23:11

Hi Michael, please also exclude this one: https://github.com/NVIDIA/TensorRT-LLM/pull/9324, since a standalone cherry-pick PR has been created to try to resolve the CI issue: https://github.com/NVIDIA/TensorRT-LLM/pull/9346

Thanks!

JunyiXu-nv · Nov 21 '25 04:11

/bot run --disable-fail-fast

mikeiovine · Nov 21 '25 16:11

PR_Github #25372 [ run ] triggered by Bot. Commit: 0160972

tensorrt-cicd · Nov 21 '25 16:11

/bot run --disable-fail-fast

mikeiovine · Nov 21 '25 17:11

PR_Github #25377 [ run ] triggered by Bot. Commit: b899348

tensorrt-cicd · Nov 21 '25 17:11

PR_Github #25372 [ run ] completed with state ABORTED. Commit: 0160972 LLM/main/L0_MergeRequest_PR #19190 (Blue Ocean) completed with status: ABORTED

tensorrt-cicd · Nov 21 '25 17:11

/bot run --disable-fail-fast

mikeiovine · Nov 21 '25 18:11

PR_Github #25387 [ run ] triggered by Bot. Commit: 144f3d5

tensorrt-cicd · Nov 21 '25 18:11

PR_Github #25377 [ run ] completed with state ABORTED. Commit: b899348 LLM/main/L0_MergeRequest_PR #19195 (Blue Ocean) completed with status: ABORTED

tensorrt-cicd · Nov 21 '25 18:11

/bot run --disable-fail-fast

mikeiovine · Nov 21 '25 21:11

PR_Github #25397 [ run ] triggered by Bot. Commit: f2729b4

tensorrt-cicd · Nov 21 '25 21:11

PR_Github #25387 [ run ] completed with state ABORTED. Commit: 144f3d5 LLM/main/L0_MergeRequest_PR #19204 (Blue Ocean) completed with status: ABORTED

tensorrt-cicd · Nov 21 '25 21:11

PR_Github #25397 [ run ] completed with state SUCCESS. Commit: f2729b4 /LLM/main/L0_MergeRequest_PR pipeline #19215 completed with status: 'FAILURE'

tensorrt-cicd · Nov 22 '25 05:11

/bot run --disable-fail-fast

mikeiovine · Nov 22 '25 20:11

PR_Github #25431 [ run ] triggered by Bot. Commit: f2729b4

tensorrt-cicd · Nov 22 '25 20:11

PR_Github #25431 [ run ] completed with state SUCCESS. Commit: f2729b4 /LLM/main/L0_MergeRequest_PR pipeline #19245 completed with status: 'FAILURE'

tensorrt-cicd · Nov 22 '25 22:11

/bot run --disable-fail-fast

mikeiovine · Nov 23 '25 00:11

PR_Github #25434 [ run ] triggered by Bot. Commit: 3ac3904

tensorrt-cicd · Nov 23 '25 01:11

PR_Github #25434 [ run ] completed with state SUCCESS. Commit: 3ac3904 /LLM/main/L0_MergeRequest_PR pipeline #19248 completed with status: 'FAILURE'

tensorrt-cicd · Nov 23 '25 07:11

/bot run --disable-fail-fast

mikeiovine · Nov 23 '25 15:11

PR_Github #25454 [ run ] triggered by Bot. Commit: 3ac3904

tensorrt-cicd · Nov 23 '25 15:11

PR_Github #25454 [ run ] completed with state SUCCESS. Commit: 3ac3904 /LLM/main/L0_MergeRequest_PR pipeline #19268 completed with status: 'FAILURE'

tensorrt-cicd · Nov 23 '25 17:11

/bot run --disable-fail-fast

mikeiovine · Nov 23 '25 18:11

PR_Github #25461 [ run ] triggered by Bot. Commit: 92510a2

tensorrt-cicd · Nov 23 '25 18:11

/bot run --disable-fail-fast

mikeiovine · Nov 23 '25 19:11