Ability to add custom labels to resources
Feature request
We have a service mesh in place in our cluster which relies on the ability to label namespaces and workloads (pods) to configure it.
To get Tekton running on the mesh, we need to be able to label:
- The namespace created by tekton-operator (`targetNamespace`)
- The various controllers created in that namespace (tekton-dashboard, tekton-operator-proxy-webhook, tekton-pipelines-controller, tekton-pipelines-webhook, tekton-triggers-controller, tekton-triggers-core-interceptors, tekton-triggers-webhook)
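As an illustration, enabling sidecar injection usually means applying a label like the following to the target namespace (this assumes an Istio-style mesh; the label key varies by mesh, and the namespace name is shown for illustration only):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: tekton-pipelines
  labels:
    # Istio's injection convention; other meshes use different keys
    istio-injection: enabled
```

Today this label is overwritten whenever the operator reconciles the namespace, which is what motivates this request.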
Use cases
There are many:
- Service mesh sidecar injection, as mentioned above
- Reporting on workloads running in the cluster
- OPA rule compliance
- Some types of affinity/anti-affinity
- Kubernetes NetworkPolicy
This is an interesting idea. The challenge is that the operator resets a deployment to its initial state if it detects that the deployment has changed on the cluster. So if we support this, we will also have to make sure that the operator tolerates certain changes (e.g. adding additional labels).
I'd suggest adding `labels:` fields to the operator's CRD (perhaps even per pod/resource created) so that users can define the labels there.
Having to define them as a second step after the resources are created could cause problems in some scenarios (e.g. GitOps).
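A sketch of what such a field might look like on the `TektonConfig` resource (the `labels` field below is hypothetical; it does not exist in the current CRD and is shown only to illustrate the proposal):

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  targetNamespace: tekton-pipelines
  # Hypothetical field: labels the operator would propagate to the
  # target namespace and to the workloads it creates there.
  labels:
    mesh.example.com/inject: "true"
    team: platform
```

Because the labels would live in the CR itself, a GitOps tool could manage them declaratively, and the operator's reconcile loop would preserve rather than strip them.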
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale with a justification.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close with a justification.
If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.
/lifecycle stale
Send feedback to tektoncd/plumbing.
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten with a justification.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close with a justification.
If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen with a justification.
Mark the issue as fresh with /remove-lifecycle rotten with a justification.
If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.
/close
@tekton-robot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen with a justification. Mark the issue as fresh with /remove-lifecycle rotten with a justification. If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.
/close
Send feedback to tektoncd/plumbing.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.