Content type for PATCH subresource requests is incorrect
Describe the bug: Patching the status subresource of a CRD always fails with the Java client.
Client Version
20.0.0
Kubernetes Version
1.29.1
Java Version Java 17
To Reproduce
var fooApi = new GenericKubernetesApi<>(Foo.class, FooList.class, "foo.bar", "v1", "foos", apiClient);
fooApi.updateStatus(foo, Foo::getStatus).throwsApiException().getObject();
The above code always fails with the following error:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"the body of the request was in an unknown format - accepted media types include: application/json-patch+json, application/merge-patch+json, application/apply-patch+yaml","reason":"UnsupportedMediaType","code":415}
Expected behavior: This call is expected to succeed.
Server:
- OS: macOS
- Kind cluster
Additional context: The equivalent call from kubectl succeeds:
kubectl patch foos.foo.bar foo1 --subresource=status --type=json -p '[{"op":"replace","path":"/status","value":{"availableReplicas":20}}]'
In kubernetes/api/openapi.yaml, operationId: patchNamespacedCustomObject has the content type defined as application/json, which seems to be incorrect.
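To confirm what media type the client actually puts on the wire, HTTP request/response logging can be enabled on the ApiClient (a minimal sketch):

// Log full requests and responses, including the Content-Type header of the PATCH.
apiClient.setDebugging(true);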
Yeah, there are lots of errors in the upstream YAML. We try to fix them when we find them.
Also, see the patch example here:
https://github.com/kubernetes-client/java/blob/master/examples/examples-release-latest/src/main/java/io/kubernetes/client/examples/PatchExample.java
Thanks @brendandburns. This approach worked. For posterity, here is the working code snippet for the problem described above:
var customObjectsApi = new CustomObjectsApi(apiClient);
// Wrap the generated status-patch call in PatchUtils so the request goes out
// with the JSON Patch content type instead of application/json.
PatchUtils.patch(
    Foo.class,
    () ->
        customObjectsApi.patchNamespacedCustomObjectStatus(
            API_GROUP, "v1", foo.getMetadata().getNamespace(), "foos",
            foo.getMetadata().getName(),
            new V1Patch(String.format(JSON_PATH_TEMPLATE, newPodsSize))
        ).buildCall(null),
    V1Patch.PATCH_FORMAT_JSON_PATCH,
    customObjectsApi.getApiClient()
);
The GenericKubernetesApi path still needs to be fixed.
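For reference, the API_GROUP and JSON_PATH_TEMPLATE constants are not shown in the snippet above; illustrative values consistent with the kubectl example would look something like this (assumptions, not the original code):

// Illustrative values only; the original constants were not shared in this issue.
static final String API_GROUP = "foo.bar";
// JSON Patch template that replaces the whole status, mirroring the kubectl example.
static final String JSON_PATH_TEMPLATE =
    "[{\"op\":\"replace\",\"path\":\"/status\",\"value\":{\"availableReplicas\":%d}}]";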
Using PatchUtils is a workaround at best. This bug also makes GenericKubernetesApi's patch and updateStatus unusable.
The root cause is the wrong (static) choice of MIME type for the request body.
This is a duplicate of https://github.com/kubernetes-client/java/issues/3106
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
I'm bumping this bug as I'm hitting the same issue.
@brendandburns, would you be open to a PR to make GenericKubernetesApi.updateStatus() and patch() use the PatchUtils internally?
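For discussion, a rough sketch of what that could look like. This is not the actual GenericKubernetesApi internals; the helper name, the choice of a JSON merge patch, and the PatchUtils wrapping are assumptions about one possible shape of the fix:

import io.kubernetes.client.common.KubernetesObject;
import io.kubernetes.client.custom.V1Patch;
import io.kubernetes.client.openapi.ApiException;
import io.kubernetes.client.openapi.apis.CustomObjectsApi;
import io.kubernetes.client.util.PatchUtils;
import java.util.Collections;

// Hypothetical helper: send the status as a JSON merge patch through PatchUtils
// so the Content-Type header matches a format the API server accepts for CRDs.
static <T extends KubernetesObject> T patchStatusViaPatchUtils(
    CustomObjectsApi api, Class<T> apiType,
    String group, String version, String plural,
    T object, Object status) throws ApiException {
  // Build a body of the form {"status": {...}} with the client's own JSON config.
  String body =
      api.getApiClient().getJSON().serialize(Collections.singletonMap("status", status));
  return PatchUtils.patch(
      apiType,
      () ->
          api.patchNamespacedCustomObjectStatus(
                  group, version,
                  object.getMetadata().getNamespace(), plural,
                  object.getMetadata().getName(),
                  new V1Patch(body))
              .buildCall(null),
      V1Patch.PATCH_FORMAT_JSON_MERGE_PATCH,
      api.getApiClient());
}

The actual fix would presumably live inside updateStatus() and patch() themselves, but getting the Content-Type header to match the patch format is the essential part.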
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten