
Support incrementing Generation on fake client Update and Patch

danaradg opened this issue 3 years ago · 11 comments

When calling Update and Patch using the fake client, the metadata.Generation field is not incremented. This makes testing logic that depends on this field significantly harder (e.g. knowing whether conditions are up to date by checking that the generation matches the observed generation).
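For example, a minimal sketch of the symptom (the object and field choices are just for illustration):

```go
package main

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client/fake"
)

func main() {
	// The builder defaults to the client-go scheme, which registers Deployment.
	c := fake.NewClientBuilder().Build()

	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "demo", Namespace: "default"},
	}
	_ = c.Create(context.TODO(), dep) // errors ignored for brevity

	// A real apiserver sets generation to 1 on create and increments it on
	// this spec change; the fake client currently leaves it untouched.
	dep.Spec.Paused = true
	_ = c.Update(context.TODO(), dep)

	fmt.Println(dep.Generation) // prints 0 with the fake client
}
```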

I've taken a look at the code, and it seems this is a known limitation as documented in #1031.

pkg/client/fake/doc.go:

WARNING: ⚠️ Current Limitations / Known Issues with the fake Client ⚠️
- This client does not have a way to inject specific errors to test handled vs. unhandled errors.
- There is some support for sub resources which can cause issues with tests if you're trying to update
  e.g. metadata and status in the same reconcile.
- No OpenAPI validation is performed when creating or updating objects.
- ObjectMeta's `Generation` and `ResourceVersion` don't behave properly; Patch or Update
  operations that rely on these fields will fail or give false positives.

I would like to contribute a fix for it and wanted to better understand the complexities behind implementing one. I also came across #553, which touches on this.

pkg/predicate/predicate.go:

// * The assumption that the Generation is incremented only on writing to the spec does not hold for all APIs.
// E.g For Deployment objects the Generation is also incremented on writes to the metadata.annotations field.
// For object types other than CustomResources be sure to verify which fields will trigger a Generation increment when they are written to.

It seems that the logic that decides whether the generation should be incremented is not straightforward. @jpatel-pivotal, @hasbro17, as the authors of the referenced changes, I would be grateful for any insights you can share. Meanwhile I'll try to find answers myself.

My current intention is to add a conditional increment of the metadata.Generation field in versionedTracker.Update. This should cover both Update and Patch operations since Patch also calls tracker.Update under the hood if I read the code correctly.
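In rough pseudocode, something like the sketch below (hypothetical; this is not the actual versionedTracker code, and deciding when `shouldBump` is true is exactly the open question above):

```go
import (
	"k8s.io/apimachinery/pkg/api/meta"
	"k8s.io/apimachinery/pkg/runtime"
)

// maybeBumpGeneration is a hypothetical helper showing where the increment
// would hook in: versionedTracker.Update would call it on every write.
func maybeBumpGeneration(obj runtime.Object, shouldBump bool) error {
	accessor, err := meta.Accessor(obj)
	if err != nil {
		return err
	}
	if shouldBump {
		accessor.SetGeneration(accessor.GetGeneration() + 1)
	}
	return nil
}
```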

danaradg avatar Apr 06 '22 13:04 danaradg

@danaradg Note that not all resource types have the generation field. In fact, kube-apiserver does not set generation for most of the built-in K8s types, while custom types always have it.

FillZpp avatar Apr 11 '22 04:04 FillZpp

@FillZpp Is there any documentation or code reference as to which non-custom resources have a Generation field, and how it is treated?

danaradg avatar Apr 11 '22 05:04 danaradg

I'm afraid there is no documentation for this. You may have to read the code. generation is set by the PrepareForCreate function when an object is created and incremented by the PrepareForUpdate function when it is modified (by either update or patch). These are interface methods with a different implementation for each resource type, where you can find out whether a given type maintains a generation.
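For illustration, the Deployment update strategy looks roughly like this (paraphrased from pkg/registry/apps/deployment/strategy.go in kubernetes/kubernetes; the exact code varies between versions):

```go
import (
	"context"

	apiequality "k8s.io/apimachinery/pkg/api/equality"
	"k8s.io/apimachinery/pkg/runtime"
	apps "k8s.io/kubernetes/pkg/apis/apps"
)

func (deploymentStrategy) PrepareForUpdate(ctx context.Context, obj, old runtime.Object) {
	newDeployment := obj.(*apps.Deployment)
	oldDeployment := old.(*apps.Deployment)
	// Status may not be updated through the main resource.
	newDeployment.Status = oldDeployment.Status

	// A write to the spec or to annotations increments the generation;
	// everything else leaves it alone.
	if !apiequality.Semantic.DeepEqual(newDeployment.Spec, oldDeployment.Spec) ||
		!apiequality.Semantic.DeepEqual(newDeployment.Annotations, oldDeployment.Annotations) {
		newDeployment.Generation = oldDeployment.Generation + 1
	}
}
```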

FillZpp avatar Apr 11 '22 06:04 FillZpp

Thanks for the pointers. I'll go do my research :)

danaradg avatar Apr 11 '22 06:04 danaradg

I've taken a look at the code, and the logic for incrementing the generation is indeed distributed. Each resource has its own create and update structs, which polymorphically implement BeforeCreate and BeforeUpdate.

There are several ways to continue:

  • Close the issue. It's too hard to keep the behavior in this repo in sync with the kubernetes repo.
  • Implement the generation increment only for custom resource objects. I'm not yet sure how I can tell whether a resource is custom, but assuming it's not too difficult, this may be a good enough middle ground.
  • Somehow depend on the kubernetes code, or match it manually. This is probably too hard.

I think one of the first two options is probably the way to go, but would appreciate maintainer opinions on this.

As (possibly) useful documentation for others and for myself, I created the table below based on commit 820247a3aec7d7d1e65cd5ad34b0f07adb16e907 in the kubernetes repo. The meaning of the columns:

  • Create - Generation is set to 1 on create
  • Update - Generation is incremented on update
  • Update Fields - Which fields trigger the increment
| Resource | Create | Update | Update Fields |
| --- | :---: | :---: | --- |
| mutatingwebhookconfiguration | V | V | Webhooks |
| validatingwebhookconfiguration | V | V | Webhooks |
| storageversion | X | X | |
| controllerrevision | X | X | |
| daemonset | V | V | Spec |
| deployment | V | V | Spec, Annotations |
| replicaset | V | V | Spec |
| statefulset | V | V | Spec |
| horizontalpodautoscaler | X | X | |
| cronjob | V | V | Spec |
| job | V | V | Spec |
| certificatesigningrequest | X | X | |
| lease | X | X | |
| configmap | X | X | |
| endpoint | X | X | |
| event | X | X | |
| limitrange | X | X | |
| namespace | X | X | |
| node | X | X | |
| persistentvolume | X | X | |
| persistentvolumeclaim | X | X | |
| pod | X | X | |
| podtemplate | V | V | Template |
| replicationcontroller | V | V | Spec |
| resourcequota | X | X | |
| secret | X | X | |
| service | X | X | |
| serviceaccount | X | X | |
| endpointslice | X | X | |
| flowschema | V | V | Spec |
| prioritylevelconfiguration | V | V | Spec |
| clustercidrconfig | X | X | |
| ingress | V | V | Spec |
| ingressclass | V | V | Spec |
| networkpolicy | V | V | Spec |
| runtimeclass | X | X | |
| poddisruptionbudget | V | V | Spec |
| podsecuritypolicy | X | X | |
| clusterrole | X | X | |
| clusterrolebinding | X | X | |
| role | X | X | |
| rolebinding | X | X | |
| priorityclass | V | X | |
| csidriver | X | V | Spec.TokenRequests, Spec.RequiresRepublish |
| csinode | X | X | |
| csistoragecapacity | X | X | |
| storageclass | X | X | |
| volumeattachment | X | X | |
| customresource | V | V | Everything except Metadata |
| customresourcedefinition | V | V | Spec |
| apiservice | X | X | |
| fischer | X | X | |
| flunder | X | X | |

danaradg avatar Apr 18 '22 13:04 danaradg

@FillZpp pinging here. I want to reach an agreement before implementing a fix (or closing this issue). Can you weigh in or tag relevant people?

danaradg avatar May 02 '22 15:05 danaradg

I'm not yet sure how I can tell whether a resource is custom

We can find out whether a resource is built-in or custom by checking whether the original client-go scheme recognizes it. You can see how we enable protobuf for built-in resources.
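A minimal sketch of that check, assuming we key off the GroupVersionKind (the helper name `isBuiltIn` is hypothetical):

```go
import (
	"k8s.io/apimachinery/pkg/runtime/schema"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
)

// isBuiltIn is a hypothetical helper: the stock client-go scheme only
// registers the built-in API types, so any kind it does not recognize can
// be treated as a custom resource.
func isBuiltIn(gvk schema.GroupVersionKind) bool {
	return clientgoscheme.Scheme.Recognizes(gvk)
}
```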

There are several ways to continue:

  • Close the issue. It's too hard to keep the behavior in this repo in sync with the kubernetes repo.
  • Implement the generation increment only for custom resource objects. I'm not yet sure how I can tell whether a resource is custom, but assuming it's not too difficult, this may be a good enough middle ground.
  • Somehow depend on the kubernetes code, or match it manually. This is probably too hard.

I think one of the first two options is probably the way to go, but would appreciate maintainer opinions on this.

I prefer the first two options, too. Any suggestions? @alvaroaleman

FillZpp avatar May 05 '22 07:05 FillZpp

Yeah, I agree. Doing this for CRDs (identified by not being recognized by the client-go scheme) seems fine. For everything else it seems like a lot of work, and it might fall out of sync with what upstream does.

alvaroaleman avatar May 05 '22 13:05 alvaroaleman

Sounds good. It might take me some time to get the patch ready, but I'll get on it.

danaradg avatar May 09 '22 06:05 danaradg

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Aug 07 '22 06:08 k8s-triage-robot

/remove-lifecycle stale

danaradg avatar Aug 07 '22 06:08 danaradg

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Nov 05 '22 06:11 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Dec 05 '22 07:12 k8s-triage-robot

/remove-lifecycle rotten

danaradg avatar Dec 05 '22 08:12 danaradg

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Mar 05 '23 09:03 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Apr 04 '23 09:04 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar May 04 '23 10:05 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar May 04 '23 10:05 k8s-ci-robot