
How to apply dynamic labels to workflow job pods

Open rikeshpatel14 opened this issue 2 months ago • 5 comments


We are running ARC v0.13.0 on Kubernetes with containerMode: Kubernetes. How can we apply the key-value pairs below as labels on the workflow job pod dynamically? FYI: the variables can be accessed in the workflow job as $<variable-name>, although they don't appear when we exec into the workflow job pod and run env.

  • GITHUB_REPOSITORY
  • GITHUB_WORKFLOW
  • GITHUB_JOB
  • RUNNER_NAME
  • ACTIONS_RUNNER_POD_NAME

We would like to apply these labels so we can group resource utilization metrics in Grafana for a specific Actions workflow. We could run a RunnerScaleSet instance per repository, but that is not feasible when the GitHub org has a few hundred repos.
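For illustration, the end state we are after is a workflow job pod labelled roughly like this (the label keys, and the sanitizing of values such as GITHUB_REPOSITORY, are only an example of what we would like to see, not something ARC produces today):

apiVersion: v1
kind: Pod
metadata:
  name: my-runners-workflow
  labels:
    github-repository: rikesh_arc-poc   # GITHUB_REPOSITORY, with "/" replaced to pass label validation
    github-workflow: CI                 # GITHUB_WORKFLOW
    github-job: build                   # GITHUB_JOB
    runner-name: my-runners             # RUNNER_NAME
spec:
  # ... job container as created by runner-container-hooks ...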

What we have tried

  1. Applying dynamic labels through runner-container-hooks. This only seems to work for static labels; Kubernetes rejects the pod when the template contains dynamic placeholders. Please refer below:
  • configmap
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-job-hook-extension-cm
  namespace: arc-poc
data:
  template.yaml: |
    metadata:
      labels:
        github-workflow: "{{ .WorkflowName }}"
    spec:
      containers:
      - name: "$job"
        resources:
          requests:
            cpu: 1000m
            memory: 1Gi
          limits:
            cpu: 2000m
            memory: 2Gi
  • The workflow job execution then fails with: Error: failed to create job pod: Pod "my-runners-workflow" is invalid: metadata.labels: Invalid value: "{{ .WorkflowName }}": a valid label must be an empty string or consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyValue', or 'my_value', or '12345', regex used for validation is '(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?')

Additional context

Possible solutions

  1. Update runner-container-hooks as per suggestions - though we are not sure this is feasible, since we don't see the above variables as environment variables when running kubectl exec -it <pod> -n <namespace> -- env against the runner and workflow job pods.
  2. Use a Kubernetes admission controller as per suggestions - though we are not sure this is feasible either, since these variables do not appear as labels or annotations on the runner or job pods.

rikeshpatel14 · Nov 21 '25

Hello! Thank you for filing an issue.

The maintainers will triage your issue shortly.

In the meantime, please take a look at the troubleshooting guide for bug reports.

If this is a feature request, please review our contribution guidelines.

github-actions[bot] · Nov 21 '25

Per the data flow below, this is why the variables don't show up when running env against PID 1 inside the runner pod:

  • GitHub assigns a job to the Runner Scale Set.
  • The Runner Listener (inside the Runner Pod) receives the job details and sets environment variables (like GITHUB_WORKFLOW, GITHUB_REPOSITORY) in its own process.
  • The Runner Listener executes the runner-container-hooks (specifically prepare-job) to create the job pod.
  • The Hook Script runs as a child process, inheriting the environment variables from the Runner Listener.

The variables are indeed set on the listener's child process (see below), so the suggested possible solutions should work:

runner@my-runners:~$ cat /proc/1/environ | tr '\0' '\n' | egrep -i "GITHUB_JOB=|RUNNER_NAME=|ACTIONS_RUNNER_POD_NAME=|GITHUB_WORKFLOW=|GITHUB_REPOSITORY="
ACTIONS_RUNNER_POD_NAME=my-runners
runner@my-runners:~$

runner@my-runners:~$ cat /proc/90/environ | tr '\0' '\n' | egrep -i "GITHUB_JOB=|RUNNER_NAME=|ACTIONS_RUNNER_POD_NAME=|GITHUB_WORKFLOW=|GITHUB_REPOSITORY="
ACTIONS_RUNNER_POD_NAME=my-runners
GITHUB_JOB=build
GITHUB_REPOSITORY=rikesh/arc-poc
GITHUB_WORKFLOW=CI
RUNNER_NAME=my-runners
runner@my-runners:~$
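For completeness, a quick way to dump these variables for every runner process without guessing PIDs could be something like the sketch below (assuming pgrep/procps is present in the runner image):

# Sketch only, not verified in the ARC runner image: walk the runner processes
# (Runner.Listener / Runner.Worker) and print their GitHub-related environment.
for pid in $(pgrep -f 'Runner\.'); do
  echo "== PID $pid =="
  tr '\0' '\n' < "/proc/$pid/environ" | grep -E '^(GITHUB_|RUNNER_|ACTIONS_RUNNER_)'
done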

rikeshpatel14 · Nov 21 '25

We solve this with a pre-run script, invoked by the Actions runner before runner-container-hooks, that builds up our pod template. The script looks like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: runner-hooks
data:
  pre-run.sh: |
    #!/bin/sh
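    # Render job metadata from this runner's environment into the pod template
    # that runner-container-hooks reads when creating the workflow job pod.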

    sed -e 's|$GITHUB_JOB|'"$GITHUB_JOB"'|g' \
        -e 's|$GITHUB_SERVER_URL|'"$GITHUB_SERVER_URL"'|g' \
        -e 's|$GITHUB_REPOSITORY|'"$GITHUB_REPOSITORY"'|g' \
        -e 's|$GITHUB_PROJECT_ID|'"$GITHUB_RUN_ID"'|g' \
        -e 's|$GITHUB_RUNNER_NAME|'"$RUNNER_NAME"'|g' \
        -e 's|$GITHUB_RUNNER_OS|'"$RUNNER_OS"'|g' \
        -e 's|$GITHUB_PIPELINE_NAME|'"$GITHUB_WORKFLOW"'|g' \
        -e 's|$GITHUB_PIPELINE_URL|'"$GITHUB_SERVER_URL/$GITHUB_REPOSITORY/actions/runs/$GITHUB_RUN_ID"'|g' \
        -e 's|$GITHUB_REPO_URL|'"$GITHUB_SERVER_URL/$GITHUB_REPOSITORY"'|g' \
        -e 's|$GITHUB_REPOSITORY_OWNER|'"$GITHUB_REPOSITORY_OWNER"'|g' \
        -e 's|$GITHUB_REF_NAME|'"$GITHUB_REF_NAME"'|g' \
        -e 's|$GITHUB_REF_TYPE|'"$GITHUB_REF_TYPE"'|g' \
        -e 's|$GITHUB_SHA|'"$GITHUB_SHA"'|g' \
        "$ACTIONS_RUNNER_CONTAINER_HOOK_TEMPLATE_SRC" > "$ACTIONS_RUNNER_CONTAINER_HOOK_TEMPLATE"

and it substitutes into the "template" pod spec, which looks like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: runner-template
data:
  template: |
    metadata:
      annotations:
        karpenter.sh/do-not-disrupt: "true"
        ad.datadoghq.com/tags: |
          {
            "ci.job.name": "$GITHUB_JOB",
            "ci.project.id": "$GITHUB_PROJECT_ID",
            "ci.runner.name": "$GITHUB_RUNNER_NAME",
            "ci.runner.os": "$GITHUB_RUNNER_OS",
            "ci.pipeline.name": "$GITHUB_PIPELINE_NAME",
            "ci.pipeline.url": "$GITHUB_PIPELINE_URL",
            "ci.repo.url": "$GITHUB_REPO_URL",
            "ci.repo.owner": "$GITHUB_REPOSITORY_OWNER",
            "ci.repo.name": "$GITHUB_REPOSITORY",
            "ci.provider.name": "github",
            "git.ref": "$GITHUB_REF_NAME",
            "git.ref.type": "$GITHUB_REF_TYPE",
            "git.commit.sha": "$GITHUB_SHA",
            "resource_name": "$GITHUB_JOB",
            "runner": "$GITHUB_RUNNER_NAME"
          }
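Using the example values from the environment output earlier in this thread (and trimming most of the tags for brevity), the rendered file written to $ACTIONS_RUNNER_CONTAINER_HOOK_TEMPLATE comes out roughly like:

metadata:
  annotations:
    karpenter.sh/do-not-disrupt: "true"
    ad.datadoghq.com/tags: |
      {
        "ci.job.name": "build",
        "ci.pipeline.name": "CI",
        "ci.repo.name": "rikesh/arc-poc",
        "ci.runner.name": "my-runners",
        "ci.provider.name": "github"
      }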

valenvb-ag · Dec 04 '25

@valenvb-ag - Thank you for sharing the information. Assuming the runner-template ConfigMap needs to be mounted in the runner pod template, can you please share the environment variable specific to the pre-run script, similar to ACTIONS_RUNNER_CONTAINER_HOOK_TEMPLATE... (or a complete runner pod template would be even better :) )

rikeshpatel14 · Dec 05 '25

Here's what our container & volume runner template spec looks like - we just provide an emptyDir that the script dumps the config into, using the same var that container hooks looks for to read the pod template.

template:
  spec:
    containers:
      - name: runner
        image: ghcr.io/actions/actions-runner:latest
        command: ["/home/runner/run.sh"]
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop: ["ALL"]
        env:
          - name: ACTIONS_RUNNER_CONTAINER_HOOK_TEMPLATE_SRC
            value: /home/runner/pod-template-src/template
          - name: ACTIONS_RUNNER_CONTAINER_HOOK_TEMPLATE
            value: /home/runner/pod-template/template
          - name: ACTIONS_RUNNER_HOOK_JOB_STARTED
            value: /home/runner/hooks/pre-run.sh
        volumeMounts:
          - name: job-pod-template
            mountPath: /home/runner/pod-template
          - name: pod-template-source
            mountPath: /home/runner/pod-template-src
          - name: runner-hooks
            mountPath: /home/runner/hooks
    volumes:
      - name: job-pod-template
        emptyDir: {}
      - name: pod-template-source
        configMap:
          name: runner-template
      - name: runner-hooks
        configMap:
          name: runner-hooks
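
For anyone wiring this up from scratch: the template block above sits alongside the usual gha-runner-scale-set chart values. A rough sketch of the surrounding values.yaml, with placeholder URL, secret, and storage class names, might look like:

githubConfigUrl: https://github.com/my-org        # placeholder
githubConfigSecret: my-github-app-secret          # placeholder
containerMode:
  type: "kubernetes"
  kubernetesModeWorkVolumeClaim:
    accessModes: ["ReadWriteOnce"]
    storageClassName: "standard"                  # placeholder
    resources:
      requests:
        storage: 1Gi
template:
  spec:
    # ... the containers and volumes shown above ...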

valenvb-ag · Dec 05 '25