Add a new metric to indicate the current queue length.
Should we add a metric named controller_runtime_queue_length to indicate the length of the queue in the controller? Sometimes Reconcile has not been invoked, and I want to know whether the queue is empty.
You should use workqueue_depth, which reports the current depth of the workqueue. Here are some of the workqueue metrics: https://github.com/kubernetes-sigs/controller-runtime/blob/master/pkg/metrics/workqueue.go#L41
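If you want to check that value from inside the operator process rather than by scraping the /metrics endpoint, a minimal sketch could look like the one below. It assumes the code runs in a process whose controllers have already been started (so their queues, and therefore the workqueue metrics, exist); the dumpQueueDepth helper is just an illustrative name, not part of controller-runtime.

```go
package main

import (
	"fmt"

	"sigs.k8s.io/controller-runtime/pkg/metrics"
)

// dumpQueueDepth prints every workqueue_depth series currently registered on
// controller-runtime's global metrics registry. It only produces output when
// it runs inside a process whose controllers have already been started, since
// the workqueue metrics are created together with the controllers' queues.
func dumpQueueDepth() error {
	families, err := metrics.Registry.Gather()
	if err != nil {
		return err
	}
	for _, mf := range families {
		if mf.GetName() != "workqueue_depth" {
			continue
		}
		for _, m := range mf.GetMetric() {
			// The labels identify the controller's queue (e.g. a "name" label).
			for _, lp := range m.GetLabel() {
				fmt.Printf("%s=%q ", lp.GetName(), lp.GetValue())
			}
			fmt.Printf("depth=%v\n", m.GetGauge().GetValue())
		}
	}
	return nil
}

func main() {
	if err := dumpQueueDepth(); err != nil {
		fmt.Println("gather failed:", err)
	}
}
```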
Will there be any conflict between https://github.com/kubernetes-sigs/controller-runtime/blob/7399a3a595bf254add9d0c96c49af462e1aac193/pkg/metrics/workqueue.go#L99 and https://github.com/kubernetes/component-base/blob/03d57670a9cda43def5d9c960823d6d4558e99ff/metrics/prometheus/workqueue/metrics.go#L101?
Both repositories try to set the workqueue metrics provider, but only the earliest call takes effect. If the component-base library is initialized first, the workqueue_depth metric from component-base is used and the one in controller-runtime does not work. As a result, the metrics exposed by default in controller-runtime cannot show the workqueue_depth value.
Is there any hint or recommendation for handling that?
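To make the first-call-wins behavior concrete, here is a minimal, self-contained sketch. The providers below are dummies written for illustration, not the real ones from either library; the relevant point is that client-go ignores every workqueue.SetProvider call after the first, so only the first provider ever gets to create metrics.

```go
package main

import (
	"fmt"

	"k8s.io/client-go/util/workqueue"
)

// noopMetric satisfies all of the workqueue metric interfaces.
type noopMetric struct{}

func (noopMetric) Inc()            {}
func (noopMetric) Dec()            {}
func (noopMetric) Set(float64)     {}
func (noopMetric) Observe(float64) {}

// loggingProvider is a stand-in for the providers registered by
// component-base and controller-runtime; it only reports which owner
// is asked to create metrics.
type loggingProvider struct{ owner string }

func (p loggingProvider) NewDepthMetric(name string) workqueue.GaugeMetric {
	fmt.Printf("%s: NewDepthMetric(%q)\n", p.owner, name)
	return noopMetric{}
}
func (p loggingProvider) NewAddsMetric(name string) workqueue.CounterMetric { return noopMetric{} }
func (p loggingProvider) NewLatencyMetric(name string) workqueue.HistogramMetric {
	return noopMetric{}
}
func (p loggingProvider) NewWorkDurationMetric(name string) workqueue.HistogramMetric {
	return noopMetric{}
}
func (p loggingProvider) NewUnfinishedWorkSecondsMetric(name string) workqueue.SettableGaugeMetric {
	return noopMetric{}
}
func (p loggingProvider) NewLongestRunningProcessorSecondsMetric(name string) workqueue.SettableGaugeMetric {
	return noopMetric{}
}
func (p loggingProvider) NewRetriesMetric(name string) workqueue.CounterMetric { return noopMetric{} }

func main() {
	// Only the first SetProvider call wins; client-go silently ignores later calls.
	workqueue.SetProvider(loggingProvider{owner: "first provider"})
	workqueue.SetProvider(loggingProvider{owner: "second provider"})

	// Creating a named queue asks the winning provider for its metrics,
	// so only "first provider" lines are printed.
	q := workqueue.NewNamedRateLimitingQueue(workqueue.DefaultControllerRateLimiter(), "example")
	q.ShutDown()
}
```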
@Somefive k/component-base is synced from k/k/staging/src/k8s.io/component-base and is mostly intended for the core components of Kubernetes, such as KCM and kube-scheduler, which do not import controller-runtime.
On the other hand, most custom operators based on controller-runtime probably don't need to rely on component-base. But if both are imported by a project, you will find that they both register via workqueue.SetProvider. So why do you need component-base?
The component-base library might not be a direct dependency. However, other libraries such as k8s.io/apiextensions-apiserver, sigs.k8s.io/controller-runtime, github.com/coreos/prometheus-operator, and many others depend on it. If code in these libraries calls component-base functions, the initialization code in component-base runs and may call workqueue.SetProvider before controller-runtime does, which prevents controller-runtime from registering its own workqueue_depth metric later.
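For anyone hitting the same thing, a rough way to check at runtime which provider actually won is to gather both registries and see where the workqueue_depth series show up. This is only a sketch, assuming your version of component-base exposes legacyregistry.DefaultGatherer and that at least one named workqueue has already been created in the process:

```go
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
	"k8s.io/component-base/metrics/legacyregistry"
	ctrlmetrics "sigs.k8s.io/controller-runtime/pkg/metrics"
)

// hasFamily reports whether the gatherer currently exposes any series
// for the given metric family name.
func hasFamily(g prometheus.Gatherer, name string) bool {
	families, err := g.Gather()
	if err != nil {
		return false
	}
	for _, mf := range families {
		if mf.GetName() == name && len(mf.GetMetric()) > 0 {
			return true
		}
	}
	return false
}

func main() {
	// Whichever registry actually carries workqueue_depth series tells you
	// which provider won the SetProvider race in this binary.
	fmt.Println("controller-runtime registry has workqueue_depth:",
		hasFamily(ctrlmetrics.Registry, "workqueue_depth"))
	fmt.Println("component-base legacy registry has workqueue_depth:",
		hasFamily(legacyregistry.DefaultGatherer, "workqueue_depth"))
}
```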
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.