Unable to patch custom object with CustomObjectsApi.patchNamespacedCustomObject
Describe the bug

Hello,
I'm trying to patch a custom resource following this official example: https://github.com/kubernetes-client/java/blob/master/kubernetes/docs/CustomObjectsApi.md#patchNamespacedCustomObject
Object result = apiInstance.patchNamespacedCustomObject(group, version, namespace, plural, name, body).execute();
No matter what I specify in the body param (a JSON Patch string, a YAML patch, or a Map of CRD attributes), the result is always an HTTP 415 error response from the server:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"the body of the request was in an unknown format - accepted media types include: application/json-patch+json, application/merge-patch+json, application/apply-patch+yaml","reason":"UnsupportedMediaType","code":415}
Java client Version
20.0.0
Kubernetes Version
1.28
Java Version
Corretto 21.0.2
To Reproduce

Steps to reproduce the behavior:
CustomObjectsApi apiInstance = new CustomObjectsApi(defaultClient);
...
String body = "[\n" +
    "  {\n" +
    "    \"op\": \"add\",\n" +
    "    \"path\": \"/metadata/annotations/new-annotation\",\n" +
    "    \"value\": \"value of the new annotation\"\n" +
    "  }\n" +
    "]";
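One thing worth noting when building JSON Patch paths by hand: if the annotation key contains a `/` (common for domain-prefixed keys like `example.com/my-annotation`), it must be escaped per RFC 6901 before being embedded in the path. A minimal sketch (the class and method names here are illustrative, not part of the client library):

```java
public class PointerEscape {
    // RFC 6901 JSON Pointer escaping: "~" becomes "~0" and "/" becomes "~1".
    // "~" must be escaped first so the "~" introduced by "~1" is not re-escaped.
    static String escapeToken(String token) {
        return token.replace("~", "~0").replace("/", "~1");
    }

    public static void main(String[] args) {
        // A domain-prefixed annotation key must be escaped inside the patch path:
        System.out.println("/metadata/annotations/" + escapeToken("example.com/new-annotation"));
    }
}
```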
Object result = apiInstance.patchNamespacedCustomObject(group, version, namespace, plural, name, body).execute();
Results in:
2024-02-22T14:58:56.876+11:00 INFO 81582 --- [nio-8080-exec-1] okhttp3.OkHttpClient : Content-Length: 215
2024-02-22T14:58:56.876+11:00 INFO 81582 --- [nio-8080-exec-1] okhttp3.OkHttpClient : Accept: application/json
2024-02-22T14:58:56.876+11:00 INFO 81582 --- [nio-8080-exec-1] okhttp3.OkHttpClient : Content-Type: application/json
2024-02-22T14:58:56.876+11:00 INFO 81582 --- [nio-8080-exec-1] okhttp3.OkHttpClient : User-Agent: Kubernetes Java Client/20.0.0-SNAPSHOT
2024-02-22T14:58:56.876+11:00 INFO 81582 --- [nio-8080-exec-1] okhttp3.OkHttpClient :
2024-02-22T14:58:56.876+11:00 INFO 81582 --- [nio-8080-exec-1] okhttp3.OkHttpClient : "[\n {\n \"op\": \"add\",\n \"path\": \"/metadata/annotations/new-annotation\",\n \"value\": \"value of the new annotation\"\n }\n ]"
2024-02-22T14:58:56.876+11:00 INFO 81582 --- [nio-8080-exec-1] okhttp3.OkHttpClient : --> END PATCH (215-byte body)
2024-02-22T14:58:56.881+11:00 INFO 81582 --- [nio-8080-exec-1] okhttp3.OkHttpClient : audit-id: 9c309609-0e88-4ee1-9399-c438249fe8d2
2024-02-22T14:58:56.881+11:00 INFO 81582 --- [nio-8080-exec-1] okhttp3.OkHttpClient : cache-control: no-cache, private
2024-02-22T14:58:56.881+11:00 INFO 81582 --- [nio-8080-exec-1] okhttp3.OkHttpClient : content-type: application/json
2024-02-22T14:58:56.881+11:00 INFO 81582 --- [nio-8080-exec-1] okhttp3.OkHttpClient : x-kubernetes-pf-flowschema-uid: 891cc6ff-2e12-450f-b9c8-94f26b2adda8
2024-02-22T14:58:56.881+11:00 INFO 81582 --- [nio-8080-exec-1] okhttp3.OkHttpClient : x-kubernetes-pf-prioritylevel-uid: dbc7143a-c8fc-4f36-a0b9-451153992497
2024-02-22T14:58:56.881+11:00 INFO 81582 --- [nio-8080-exec-1] okhttp3.OkHttpClient : content-length: 293
2024-02-22T14:58:56.881+11:00 INFO 81582 --- [nio-8080-exec-1] okhttp3.OkHttpClient : date: Thu, 22 Feb 2024 03:58:56 GMT
2024-02-22T14:58:56.881+11:00 INFO 81582 --- [nio-8080-exec-1] okhttp3.OkHttpClient :
2024-02-22T14:58:56.882+11:00 INFO 81582 --- [nio-8080-exec-1] okhttp3.OkHttpClient : {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"the body of the request was in an unknown format - accepted media types include: application/json-patch+json, application/merge-patch+json, application/apply-patch+yaml","reason":"UnsupportedMediaType","code":415}
2024-02-22T14:58:56.882+11:00 INFO 81582 --- [nio-8080-exec-1] okhttp3.OkHttpClient : <-- END HTTP (293-byte body)
2024-02-22T14:58:56.884+11:00 ERROR 81582 --- [nio-8080-exec-1] c.a.a.homa.k8s.api.http.ExceptionHelper : ApiException: Message:
HTTP response code: 415
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"the body of the request was in an unknown format - accepted media types include: application/json-patch+json, application/merge-patch+json, application/apply-patch+yaml","reason":"UnsupportedMediaType","code":415}
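The request log above shows the likely cause: the generated call serialized the patch string as a plain JSON value and sent `Content-Type: application/json`, while the API server dispatches a PATCH purely by its patch media type and rejects everything else. A toy stand-in for that content-negotiation step (a hypothetical simplification, not the real apiserver code):

```java
import java.util.Set;

public class PatchMediaTypes {
    // The three patch media types the server's 415 message lists as accepted
    // for custom resources (CRDs do not support strategic merge patch).
    static final Set<String> ACCEPTED = Set.of(
            "application/json-patch+json",
            "application/merge-patch+json",
            "application/apply-patch+yaml");

    // Anything else, including the plain application/json the client sent
    // in the log above, is rejected with 415 Unsupported Media Type.
    static int statusFor(String contentType) {
        return ACCEPTED.contains(contentType) ? 200 : 415;
    }

    public static void main(String[] args) {
        System.out.println(statusFor("application/json"));
    }
}
```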
Expected behavior

The expectation is that the above code would modify the custom resource.
Using PatchUtils doesn't seem to be an alternative for custom resources, since the class to be patched (which would be Object for a custom resource) must be supplied:
public static <ApiType> ApiType patch(Class<ApiType> apiTypeClass, PatchCallFunc callFunc, String patchFormat, ApiClient apiClient) throws ApiException {
https://github.com/kubernetes-client/java/blob/master/examples/examples-release-18/src/main/java/io/kubernetes/client/examples/PatchExample.java
I've also tried to adapt the PatchUtils examples, but CustomObjectsApi.patchNamespacedCustomObjectCall appears to be private:
Object patch =
    PatchUtils.patch(
        Object.class,
        () ->
            customObjectsApi.patchNamespacedCustomObjectCall(
                group, version, namespace, plural, name,
                new V1Patch(applyYamlStr),
                null,
                null,
                null,
                null,
                true,
                null),
        V1Patch.PATCH_FORMAT_APPLY_YAML,
        customObjectsApi.getApiClient());
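For context on why the PatchUtils route works where the plain call fails: the patch-format string passed to PatchUtils.patch is used as the request's Content-Type. A self-contained sketch of those format strings (the values mirror the constants on io.kubernetes.client.custom.V1Patch as of the versions discussed here; they are inlined so the snippet stands alone):

```java
public class V1PatchFormats {
    // These mirror V1Patch.PATCH_FORMAT_* in the Java client. PatchUtils sends
    // the chosen one as the request Content-Type, which is exactly the header
    // the failing patchNamespacedCustomObject call above never sets.
    static final String PATCH_FORMAT_JSON_PATCH = "application/json-patch+json";
    static final String PATCH_FORMAT_JSON_MERGE_PATCH = "application/merge-patch+json";
    static final String PATCH_FORMAT_APPLY_YAML = "application/apply-patch+yaml";

    public static void main(String[] args) {
        // Each format string is one of the media types the server's 415 lists as accepted.
        System.out.println(PATCH_FORMAT_JSON_PATCH);
    }
}
```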
Thanks for reporting this. Before we release a fix, please use either 19.x or 20.0.0-legacy to work around the issue.
This bug is also present in v20-legacy. Using v19 helps, though.
@yue9944882 any fix for this? I am using patchNamespacedCronJob and I get the same error. Because of that, I had to downgrade to 19.0.0 and manage all third-party upgrades (needed due to security issues) manually.
OK ... I changed everything to use PatchUtils, and everything seems to be fine now!
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale