Support incrementing Generation on fake client Update and Patch
When calling Update or Patch on the fake client, the metadata.Generation field is not incremented. This makes it significantly harder to test logic that depends on this field (e.g. knowing whether conditions are up to date by checking that the generation matches the observed generation).
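For illustration, here is a minimal sketch of the behavior (assuming the `fake.NewClientBuilder` API from recent controller-runtime versions; the Deployment is just an example object):

```go
package fake_test

import (
	"context"
	"testing"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"

	"sigs.k8s.io/controller-runtime/pkg/client/fake"
)

// Sketch: a spec update through the fake client leaves metadata.Generation untouched,
// so a reconciler that compares Generation to Status.ObservedGeneration cannot be tested.
func TestFakeClientDoesNotBumpGeneration(t *testing.T) {
	dep := &appsv1.Deployment{ObjectMeta: metav1.ObjectMeta{Name: "demo", Namespace: "default"}}
	c := fake.NewClientBuilder().WithObjects(dep).Build()

	got := &appsv1.Deployment{}
	key := types.NamespacedName{Name: "demo", Namespace: "default"}
	if err := c.Get(context.TODO(), key, got); err != nil {
		t.Fatal(err)
	}
	replicas := int32(3)
	got.Spec.Replicas = &replicas
	if err := c.Update(context.TODO(), got); err != nil {
		t.Fatal(err)
	}
	if err := c.Get(context.TODO(), key, got); err != nil {
		t.Fatal(err)
	}
	// On a real API server the generation would now be higher; with the fake client it is unchanged.
	t.Logf("generation after spec update: %d", got.Generation)
}
```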
I've taken a look at the code, and it seems this is a known limitation as documented in #1031.
WARNING: ⚠️ Current Limitations / Known Issues with the fake Client ⚠️
- This client does not have a way to inject specific errors to test handled vs. unhandled errors.
- There is some support for sub resources which can cause issues with tests if you're trying to update
e.g. metadata and status in the same reconcile.
- No OpenAPI validation is performed when creating or updating objects.
- ObjectMeta's `Generation` and `ResourceVersion` don't behave properly; Patch or Update
operations that rely on these fields will fail or give false positives.
I would like to contribute a fix and wanted to better understand the complexities behind implementing it. I also came across #553, which comments on this.
// * The assumption that the Generation is incremented only on writing to the spec does not hold for all APIs.
// E.g For Deployment objects the Generation is also incremented on writes to the metadata.annotations field.
// For object types other than CustomResources be sure to verify which fields will trigger a Generation increment when they are written to.
It seems that the logic deciding whether the generation should be incremented is not straightforward. @jpatel-pivotal, @hasbro17, if either of you (as the referenced CL authors) has any insights, I would be grateful if you could share them. Meanwhile, I'll try to look up answers myself.
My current intention is to add a conditional increment of the metadata.Generation field in versionedTracker.Update. This should cover both Update and Patch operations since Patch also calls tracker.Update under the hood if I read the code correctly.
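As a rough sketch of the conditional increment I have in mind (this is not the actual versionedTracker code; the helper name and the spec-only comparison are my assumptions):

```go
package fakegen

import (
	"k8s.io/apimachinery/pkg/api/equality"
	"k8s.io/apimachinery/pkg/runtime"

	"sigs.k8s.io/controller-runtime/pkg/client"
)

// bumpGenerationOnSpecChange is a hypothetical helper sketching the conditional
// increment: bump metadata.generation only when the incoming object's spec differs
// from the stored copy, which is the rule kube-apiserver applies to custom resources.
func bumpGenerationOnSpecChange(stored, updated client.Object) error {
	storedU, err := runtime.DefaultUnstructuredConverter.ToUnstructured(stored)
	if err != nil {
		return err
	}
	updatedU, err := runtime.DefaultUnstructuredConverter.ToUnstructured(updated)
	if err != nil {
		return err
	}
	// Only spec changes count; metadata and status writes keep the old generation.
	if !equality.Semantic.DeepEqual(storedU["spec"], updatedU["spec"]) {
		updated.SetGeneration(stored.GetGeneration() + 1)
	} else {
		updated.SetGeneration(stored.GetGeneration())
	}
	return nil
}
```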
@danaradg Note that not all resource types have the generation field. Actually, most built-in Kubernetes types do not get their generation set by kube-apiserver, while custom types all have it.
@FillZpp Is there any documentation or code reference as to which non-custom resources have a Generation field, and how is it treated?
I'm afraid there is no documentation for this. You may have to read the code.
generation is set by the PrepareForCreate function when an object is created and increased by the PrepareForUpdate function when it is modified (including via update and patch). These are interfaces with different implementations for different resource types, in which you can find whether a type should have its generation set or not.
Thanks for the pointers. I'll go do my research :)
I've taken a look at the code, and the logic for incrementing the generation is indeed distributed. Each resource has its own create and update strategy structs, which are invoked polymorphically through BeforeCreate and BeforeUpdate.
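For example, the Deployment update strategy does roughly the following (a paraphrase using the external appsv1 types, not the actual code in pkg/registry/apps/deployment/strategy.go), which also matches the annotation behavior mentioned in the quoted comment above:

```go
package genexample

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/api/equality"
	"k8s.io/apimachinery/pkg/runtime"
)

// prepareDeploymentForUpdate paraphrases deploymentStrategy.PrepareForUpdate using
// the external appsv1 types; it only illustrates where the generation bump happens.
func prepareDeploymentForUpdate(_ context.Context, obj, old runtime.Object) {
	newDeployment := obj.(*appsv1.Deployment)
	oldDeployment := old.(*appsv1.Deployment)
	// Status is never written through the main resource endpoint.
	newDeployment.Status = oldDeployment.Status

	// For Deployments, both spec and annotation changes bump the generation,
	// which is exactly the per-resource variation warned about in the quoted comment.
	if !equality.Semantic.DeepEqual(newDeployment.Spec, oldDeployment.Spec) ||
		!equality.Semantic.DeepEqual(newDeployment.Annotations, oldDeployment.Annotations) {
		newDeployment.Generation = oldDeployment.Generation + 1
	}
}
```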
There are several ways to continue:
- Close the issue. It's too hard to track and keep the behavior in this repo in sync with the kubernetes repo.
- Implement the generation increment only for custom resource objects. I'm not yet sure how I can tell whether a resource is custom, but assuming it's not too difficult, it may be a good enough middle ground.
- Try to somehow depend on kubernetes code or manually match it. This is probably too hard.
I think one of the first two options is probably the way to go, but would appreciate maintainer opinions on this.
As (possibly) useful documentation for others and for myself, I created the table below based on commit 820247a3aec7d7d1e65cd5ad34b0f07adb16e907 in the kubernetes repo. The meaning of the columns:
- Create: Generation is set to 1 on create
- Update: Generation is incremented on update
- Update Fields: which fields trigger the increment
| Resource | Create | Update | Update Fields |
|---|---|---|---|
| mutatingwebhookconfiguration | V | V | Webhooks |
| validatingwebhookconfiguration | V | V | Webhooks |
| storageversion | X | X | |
| controllerrevision | X | X | |
| daemonset | V | V | Spec |
| deployment | V | V | Spec, Annotations |
| replicaset | V | V | Spec |
| statefulset | V | V | Spec |
| horizontalpodautoscaler | X | X | |
| cronjob | V | V | Spec |
| job | V | V | Spec |
| certificatesigningrequest | X | X | |
| lease | X | X | |
| configmap | X | X | |
| endpoint | X | X | |
| event | X | X | |
| limitrange | X | X | |
| namespace | X | X | |
| node | X | X | |
| persistentvolume | X | X | |
| persistentvolumeclaim | X | X | |
| pod | X | X | |
| podtemplate | V | V | Template |
| replicationcontroller | V | V | Spec |
| resourcequota | X | X | |
| secret | X | X | |
| service | X | X | |
| serviceaccount | X | X | |
| endpointslice | X | X | |
| flowschema | V | V | Spec |
| prioritylevelconfiguration | V | V | Spec |
| clustercidrconfig | X | X | |
| ingress | V | V | Spec |
| ingressclass | V | V | Spec |
| networkpolicy | V | V | Spec |
| runtimeclass | X | X | |
| poddisruptionbudget | V | V | Spec |
| podsecuritypolicy | X | X | |
| clusterrole | X | X | |
| clusterrolebinding | X | X | |
| role | X | X | |
| rolebinding | X | X | |
| priorityclass | V | X | |
| csidriver | X | V | Spec.TokenRequests, Spec.RequiresRepublish |
| csinode | X | X | |
| csistoragecapacity | X | X | |
| storageclass | X | X | |
| volumeattachment | X | X | |
| customresource | V | V | Everything except Metadata |
| customresourcedefinition | V | V | Spec |
| apiservice | X | X | |
| fischer | X | X | |
| flunder | X | X | |
@FillZpp pinging here. I want to reach an agreement before implementing a fix (or closing this issue). Can you weigh in or tag relevant people?
I'm not yet sure how I can tell whether a resource is custom
We can find out whether a resource is built-in or custom by checking whether the original client-go scheme recognizes it. You can see how we enable protobuf for built-in resources.
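Something along these lines should work (a sketch; the helper name is made up):

```go
package fakegen

import (
	"k8s.io/apimachinery/pkg/runtime/schema"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
)

// isCustomResource is a hypothetical helper: anything the stock client-go scheme
// does not recognize is treated as a custom resource, mirroring the trick used to
// decide when protobuf can be enabled for built-in types.
func isCustomResource(gvk schema.GroupVersionKind) bool {
	return !clientgoscheme.Scheme.Recognizes(gvk)
}
```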
There are several ways to continue:
- Close the issue. It's too hard to track and keep the behavior in this repo in sync with the kubernetes repo.
- Implement the generation increment only for custom resource objects. I'm not yet sure how I can tell whether a resource is custom, but assuming it's not too difficult, it may be a good enough middle ground.
- Try to somehow depend on kubernetes code or manually match it. This is probably too hard.
I think one of the first two options is probably the way to go, but would appreciate maintainer opinions on this.
I prefer the first two options, too. Any suggestions? @alvaroaleman
Yeah, I agree. Doing this for CRDs (identified by not being recognized by the client-go scheme) seems fine. For everything else, it seems like a lot of work and might fall out of sync with what upstream does.
Sounds good. It might take me some time to get the patch ready, but I'll get on it.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".