
Sometimes, a job needs to wait 30/60 minutes before getting a runner

Open • julien-michaud opened this issue 1 year ago • 60 comments

Checks

  • [x] I've already read https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/troubleshooting-actions-runner-controller-errors and I'm sure my issue is not covered in the troubleshooting guide.
  • [x] I am using charts that are officially provided

Controller Version

0.10.1

Deployment Method

Helm

Checks

  • [x] This isn't a question or user support case (For Q&A and community support, go to Discussions).
  • [x] I've read the Changelog before submitting this issue and I'm sure it's not due to any recently-introduced backward-incompatible changes

To Reproduce

1. Start workflows
2. The first two jobs will get a runner very quickly
3. The third one will sometimes stay pending for 30/40 minutes before getting a runner

Describe the bug

Let's say that I have a workflow with 3 jobs running in parallel.

Sometimes, jobs 1 and 2 will get a runner right away, but the third one will have to wait 30 minutes to an hour before getting a runner.

Describe the expected behavior

All the jobs should start right away.

Note that I have two runner scale sets with the same runnerScaleSetName. I don't know if it's bad practice or not, but it's working fine 🤷‍♂

I did that to ease the upgrade process: when a new chart is available, I update the gha-runner-scale-sets one by one to avoid service interruptions, roughly as sketched below.
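For illustration, a rough sketch of that two-release setup (the release names, namespace, and runnerScaleSetName below are placeholders, not our actual values):

helm upgrade --install runners-blue \
  oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set \
  --namespace arc-runners -f values.yaml --set runnerScaleSetName=company-runners

helm upgrade --install runners-green \
  oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set \
  --namespace arc-runners -f values.yaml --set runnerScaleSetName=company-runners

# When a new chart version ships: upgrade runners-blue first, wait for its listener
# to become healthy, then upgrade runners-green, so jobs always have at least one
# healthy scale set to land on.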

Thanks

Additional Context

gha-runner-scale-set-controller:
  enabled: true
  flags:
    logLevel: "warn"
  podLabels:
    finops.company.net/cloud_provider: gcp
    finops.company.net/cost_center: compute
    finops.company.net/product: tools
    finops.company.net/service: actions-runner-controller
    finops.company.net/region: europe-west1
  replicaCount: 3
  podAnnotations:
    ad.datadoghq.com/manager.checks: |
      {
        "openmetrics": {
          "instances": [
            {
              "openmetrics_endpoint": "http://%%host%%:8080/metrics",
              "histogram_buckets_as_distributions": true,
              "namespace": "actions-runner-system",
              "metrics": [".*"]
            }
          ]
        }
      }
  metrics:
    controllerManagerAddr: ":8080"
    listenerAddr: ":8080"
    listenerEndpoint: "/metrics"

gha-runner-scale-set:
  enabled: true
  githubConfigUrl: https://github.com/company
  githubConfigSecret:
    github_token: <path:secret/github_token/actions_runner_controller#token>

  maxRunners: 100
  minRunners: 1

  containerMode:
    type: "dind"  ## type can be set to dind or kubernetes

  listenerTemplate:
    metadata:
      labels:
        finops.company.net/cloud_provider: gcp
        finops.company.net/cost_center: compute
        finops.company.net/product: tools
        finops.company.net/service: actions-runner-controller
        finops.company.net/region: europe-west1
      annotations:
        ad.datadoghq.com/listener.checks: |
          {
            "openmetrics": {
              "instances": [
                {
                  "openmetrics_endpoint": "http://%%host%%:8080/metrics",
                  "histogram_buckets_as_distributions": true,
                  "namespace": "actions-runner-system",
                  "max_returned_metrics": 6000,
                  "metrics": [".*"],
                  "exclude_metrics": [
                    "gha_job_startup_duration_seconds",
                    "gha_job_execution_duration_seconds"
                  ],
                  "exclude_labels": [
                    "enterprise",
                    "event_name",
                    "job_name",
                    "job_result",
                    "job_workflow_ref",
                    "organization",
                    "repository",
                    "runner_name"
                  ]
                }
              ]
            }
          }
    spec:
      containers:
      - name: listener
        securityContext:
          runAsUser: 1000
  template:
    metadata:
      labels:
        finops.company.net/cloud_provider: gcp
        finops.company.net/cost_center: compute
        finops.company.net/product: tools
        finops.company.net/service: actions-runner-controller
        finops.company.net/region: europe-west1
    spec:
      restartPolicy: OnFailure
      imagePullSecrets:
        - name: company-prod-registry
      containers:
        - name: runner
          image: eu.gcr.io/company-production/devex/gha-runners:v1.0.0-snapshot5
          command: ["/home/runner/run.sh"]

  controllerServiceAccount:
    namespace: actions-runner-system
    name: actions-runner-controller-gha-rs-controller

Controller Logs

https://gist.github.com/julien-michaud/dce55b9320fb236b622cbb00919277ce

Runner Pod Logs

/

julien-michaud avatar Feb 28 '25 14:02 julien-michaud

We are seeing the same issue, and we have a similar setup. We are unsure whether having two runner scale sets (for upgrade ease) is actually causing problems.

avadhanij avatar Apr 03 '25 18:04 avadhanij

We are on 0.8.2 and seem to be encountering a similar issue. We recently upgraded Karpenter to 1.3.3, and that's when we began seeing this issue, but it may have existed before that.

emmahsax avatar Apr 04 '25 19:04 emmahsax

I’m observing similar behavior, even when not running in a high availability setup (single cluster on Azure). Unfortunately, the logs offer no insight, and the latency is unpredictable.

marcusisnard avatar Apr 14 '25 15:04 marcusisnard

Our organization is experiencing job queue delays exceeding 12 hours, severely impacting production workloads. No error logs are observed on our side. What steps can we take to troubleshoot this issue? @nikola-jokic

marcusisnard avatar Apr 22 '25 11:04 marcusisnard

Hey everyone,

Could you please submit these logs without obfuscation through support? We cannot investigate without knowing which workflow runs are stuck. If you have failed runners, they count toward the total number of runners; that is how we avoid creating an unbounded number of runners if something goes wrong with the cluster. But if the delay is caused on the back-end side, please submit the stuck workflow run and the unobfuscated logs so we can troubleshoot it. Thanks!

nikola-jokic avatar Apr 22 '25 11:04 nikola-jokic

Hey everyone,

We found the root cause of the issue, and it should be fixed now. Please let us know if you are still experiencing this issue. I will leave this issue open for now for visibility. Thank you all for reporting it!

nikola-jokic avatar Apr 22 '25 12:04 nikola-jokic

Hey everyone,

We found the root cause of the issue, and it should be fixed now. Please let us know if you are still experiencing this issue. I will leave this issue open for now for visibility. Thank you all for reporting it!

Do we need to uninstall and re-deploy ARC?

marcusisnard avatar Apr 22 '25 12:04 marcusisnard

No, the issue was on the back-end side, so it should start working properly without touching the ARC installation.

nikola-jokic avatar Apr 22 '25 12:04 nikola-jokic


We are still seeing this issue: lots of jobs are still pending, and we do not have a cap on the maximum number of runners. Please let me know how I can send the appropriate logs and the Helm chart values used for our deployment.

marcusisnard avatar Apr 22 '25 14:04 marcusisnard

Do you have failed ephemeral runners? If you don't have failed ones, please send the listener log, the controller log and workflow URLs of the pending jobs. You can submit them in the support issue if you don't want to share them publicly. If you do have failed runners, please remove all failed ephemeral runner instances, which would free up the slots to scale up.
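A minimal sketch of that cleanup (assuming the EphemeralRunner resources report Failed in .status.phase; the arc-runners namespace is a placeholder to adapt to your install):

# list the failed ephemeral runners
kubectl get ephemeralrunners -n arc-runners \
  -o jsonpath='{range .items[?(@.status.phase=="Failed")]}{.metadata.name}{"\n"}{end}'

# delete them; the ephemeral runner set re-creates fresh runners in the freed slots
kubectl get ephemeralrunners -n arc-runners \
  -o jsonpath='{range .items[?(@.status.phase=="Failed")]}{.metadata.name}{"\n"}{end}' \
  | xargs -r kubectl delete ephemeralrunners -n arc-runners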

nikola-jokic avatar Apr 22 '25 14:04 nikola-jokic

@marcusisnard unfortunately I can offer no help, but I wanted to ask where you view that particular UI. It looks like a GitHub view showing the scale sets directly in the UI. I have no such view, but it would be nice to see it.

niodice avatar Apr 22 '25 16:04 niodice

Hey @niodice, the UI doesn't show that. The failed runners are scoped to the cluster, and are a mechanism we use to guard against a bad state. So you would have to inspect these runners inside the cluster, not in the UI (if I understood you correctly).

nikola-jokic avatar Apr 25 '25 12:04 nikola-jokic

@nikola-jokic we just saw this again. Where can I open a support case to share logs in a private venue?

niodice avatar Apr 30 '25 17:04 niodice

Hey @niodice,

Very sorry for the slow response. Can you please reach out to our support? Please share the workflow run, the controller log, and the listener log.

nikola-jokic avatar May 07 '25 13:05 nikola-jokic

Thanks @nikola-jokic , submitted under https://support.github.com/ticket/personal/0/3381328

niodice avatar May 07 '25 19:05 niodice

We are still seeing this. Self-hosted k8s ARC. Runner version 2.323.0.

Everything runs fine until we see the number of "Failed ephemeral runners" in the AutoscalingRunnerSet climb to match the total number of allowed runners. Then everything stops: no runners show up in the web console, and all listeners stop at "listener-app.listener","message":"Getting next message","lastMessageID":0.


We are automating a cron job that checks the number of "Failed ephemeral runners" and, if it goes up, deletes and recreates the AutoscalingRunnerSet. The scale set then appears to re-register, runners appear in the web console, and builds all start again.
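The check is roughly like this (just a sketch; the failedEphemeralRunners status field, namespace, and scale set name are assumptions to adapt to your install, and we rely on our GitOps controller to re-create the deleted resource):

NS=arc-runners            # placeholder namespace
NAME=my-runner-set        # placeholder AutoscalingRunnerSet name
FAILED=$(kubectl get autoscalingrunnerset "$NAME" -n "$NS" \
  -o jsonpath='{.status.failedEphemeralRunners}')
if [ "${FAILED:-0}" -gt 0 ]; then
  # delete the scale set; Flux re-applies the manifest and it re-registers cleanly
  kubectl delete autoscalingrunnerset "$NAME" -n "$NS"
fi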

hlascelles avatar May 12 '25 10:05 hlascelles

Just a small hint @hlascelles, you can delete these ephemeral runner resources, and the ephemeral runner set will re-create them with the fresh config. Might be easier to manage than updating the autoscaling runner set

nikola-jokic avatar May 12 '25 11:05 nikola-jokic

@nikola-jokic how does one do that? Just reinstall the chart? Because there are no failed pods or related errors visible.

uladzislauhramovich avatar May 13 '25 14:05 uladzislauhramovich

That's also happening in our cluster; periodically I have to delete the ephemeralrunnerset, and it's also not scaling as it should. @hlascelles is your cron restarting only the runner set, or also the controller?

forgondolin avatar May 14 '25 14:05 forgondolin

@nikola-jokic Same question as @uladzislauhramovich ... How do you delete the ephemeral runners? Are they k8s resources? There is nothing in the GitHub web console, no runners there.

@forgondolin We are deleting the autoscalingrunnersets (which are then autorecreated by flux).

hlascelles avatar May 14 '25 18:05 hlascelles

Idk if it helps, but: kubectl delete ephemeralrunnersets -n <namespace> --all (or the runnerscaleset, for checking the runner admin).

forgondolin avatar May 14 '25 19:05 forgondolin

For us, I had to reinstall it to make it work; I've had to do this a couple of times lately!

OneideLuizSchneider avatar May 15 '25 09:05 OneideLuizSchneider

We are running into the exact same issue as described in this issue.

One thing we notice is that, while the GitHub UI reports the correct number of active runners, the ephemeralrunnerset k8s resource doesn't.

We are running the latest version of the chart, CRDs, and Docker agent.

Tombar avatar May 15 '25 20:05 Tombar

@hlascelles' suggestion currently works for us. It's not ideal, but the cronjob made things better; I'm still experimenting with different intervals.

forgondolin avatar May 16 '25 17:05 forgondolin

Since I upgraded it to 0.11.0, I'm not facing this issue anymore!

OneideLuizSchneider avatar May 16 '25 17:05 OneideLuizSchneider

@OneideLuizSchneider I've tried 0.11.0, but it keeps spinning up and killing the autoscaler like crazy in a loop. Any tips on that? ty!

forgondolin avatar May 16 '25 18:05 forgondolin

@OneideLuizSchneider I've tried 0.11.0, but it keeps spinning up and killing the autoscaler like crazy in a loop. Any tips on that? ty!

@forgondolin Well, I did upgrade the k8s autoscaler as well. Another point: we have many runners (for example dev, staging, prod, etc.); not sure if that has something to do with it. Maybe create some runners for testing, like testing-build, and see if it still happens. It seems that if you use one a lot it starts to happen (again, not sure).

OneideLuizSchneider avatar May 16 '25 19:05 OneideLuizSchneider

@OneideLuizSchneider gonna give that a try in the next sprint. Thanks a lot

forgondolin avatar May 23 '25 18:05 forgondolin

Well, I did upgrade the k8s autoscaler as well. Another point: we have many runners (for example dev, staging, prod, etc.); not sure if that has something to do with it. Maybe create some runners for testing, like testing-build, and see if it still happens. It seems that if you use one a lot it starts to happen (again, not sure).

Same, it only happens with the most used label of our runners. The others are fine.

uladzislauhramovich avatar May 26 '25 09:05 uladzislauhramovich

We face the same problem. Version: 0.11.0

mmack avatar May 27 '25 06:05 mmack