
feat: add support for configuring worker pod priority class

Open NERLOE opened this issue 2 months ago • 0 comments

Is your feature request related to a problem? Please describe.

Description

When running Trigger.dev on GKE Autopilot (or resource-constrained Kubernetes clusters), task run pods with default priority (0) can be preempted by higher-priority system pods during cluster scale-up operations, causing runs to fail with SIGTERM errors.

Current Behavior

  • Task run pods are created without a priorityClassName, so they run at the default priority of 0
  • During Autopilot scale-up (60-120s), system pods may need resources immediately
  • System pods (priority 2,000,000,000+, e.g. system-node-critical) preempt the priority-0 task-run pods
  • Preempted runs fail with error code TASK_PROCESS_SIGTERM

Describe the solution you'd like to see

Desired Behavior

  • Allow configuration of a priority class for task run pods
  • Prevent preemption by system pods during scale-up
  • Maintain cluster stability while protecting task execution
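
Concretely, the end state would be a task-run pod whose spec carries the configured class (a sketch using standard core Pod API fields; the pod name and container image below are illustrative, not Trigger.dev's actual defaults):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: task-run-example        # illustrative name
spec:
  # Resolved by the scheduler to the PriorityClass value (e.g. 100000),
  # lifting the pod above priority 0 so it is no longer the first
  # preemption candidate during scale-up.
  priorityClassName: trigger-task-runs
  containers:
    - name: task-run
      image: example/task:latest  # illustrative image
```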

Proposed Solution

Add a new environment variable `KUBERNETES_WORKER_PRIORITY_CLASS_NAME` that lets operators configure a priority class for task run pods.

Example configuration:

```yaml
supervisor:
  extraEnvVars:
    - name: KUBERNETES_WORKER_PRIORITY_CLASS_NAME
      value: "trigger-task-runs"

extraManifests:
  - apiVersion: scheduling.k8s.io/v1
    kind: PriorityClass
    metadata:
      name: trigger-task-runs
    value: 100000
    globalDefault: false
```
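
The supervisor-side change could look roughly like the sketch below (hypothetical; the function and type names are illustrative, not Trigger.dev's actual internals): read the env var and, only when it is set, stamp `priorityClassName` onto the pod spec it builds for task runs.

```typescript
// Minimal shape of the pod spec fields relevant here (illustrative type,
// standing in for the Kubernetes client's V1PodSpec).
interface TaskRunPodSpec {
  containers: { name: string; image: string }[];
  priorityClassName?: string;
}

function buildTaskRunPodSpec(
  env: Record<string, string | undefined>
): TaskRunPodSpec {
  const spec: TaskRunPodSpec = {
    // Illustrative container; the real supervisor fills this in from the run.
    containers: [{ name: "task-run", image: "example/task:latest" }],
  };

  // Opt-in behavior: when the variable is unset, the field is omitted
  // entirely, preserving today's behavior (effective priority 0).
  const priorityClassName = env["KUBERNETES_WORKER_PRIORITY_CLASS_NAME"];
  if (priorityClassName) {
    spec.priorityClassName = priorityClassName;
  }

  return spec;
}
```

Making the field strictly opt-in means existing deployments see no change unless an operator sets the variable and ships the matching PriorityClass manifest.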

Describe alternate solutions

No response

Additional information

No response

NERLOE · Oct 24 '25 11:10