415 Unsupported Media Type Error when Patching Node
Hi, I tried to use the patchNode function (https://kubernetes-client.github.io/javascript/classes/corev1api.corev1api-1.html#patchnode) to patch a node, but I keep getting a 415 Unsupported Media Type error. I have tried the suggested fix of setting the Content-Type header to 'application/json-patch+json' or 'application/merge-patch+json', but neither works.
Could someone help me by sharing a working example of patchNode? Thank you very much!
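For what it's worth, here is a minimal sketch of a JSON Patch call against a node, assuming the pre-1.0 positional signature. The client object is injected so the call shape is visible without a cluster; with the real library you would build it via `new k8s.KubeConfig()`, `kc.loadFromDefault()`, and `kc.makeApiClient(k8s.CoreV1Api)`. The number of `undefined` placeholders before the options argument depends on the client release, which turns out to be the crux of this issue:

```javascript
// Content type for an RFC 6902 JSON Patch body.
const JSON_PATCH = 'application/json-patch+json';

// Adds (or replaces) a label on a node via JSON Patch.
// `coreApi` is expected to expose patchNode(name, body, ...optional, options).
function patchNodeLabel(coreApi, nodeName, key, value) {
  // RFC 6901: '~' and '/' inside a key must be escaped as '~0' and '~1'.
  const escapedKey = key.replace(/~/g, '~0').replace(/\//g, '~1');
  const body = [{ op: 'add', path: `/metadata/labels/${escapedKey}`, value }];
  const options = { headers: { 'Content-Type': JSON_PATCH } };
  // The options object must land in the LAST positional slot; the number of
  // `undefined` placeholders before it varies between client releases, so a
  // misplaced options object silently fails to set the header (hence the 415).
  return coreApi.patchNode(nodeName, body, undefined, undefined, undefined, undefined, options);
}
```

If the header genuinely reaches the server and you still get a 415, double-check that the body shape matches the content type (a JSON Patch body is an array of operations, while a merge patch is a partial object).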
The example here: https://github.com/kubernetes-client/javascript/blob/master/examples/patch-example.js should work, it is for a Pod but it should be easy to adapt to nodes instead.
Let us know if that doesn't work.
I am seeing the same issue when patching deployments. Version 0.16.3 works, but on the latest release (0.18.1) I cannot override the Content-Type header. I have tried following the example above and many others.
Right now my only workaround is to pin 0.16.3 and override the header like this:
// appsApi: an AppsV1Api client created elsewhere via kc.makeApiClient(k8s.AppsV1Api)
async function annotateDeployment(deploymentName, namespace = 'default') {
  const headers = { 'content-type': 'application/strategic-merge-patch+json' };
  const body = { metadata: { annotations: { foo: 'bar' } } };
  // In 0.16.x the options object is the eighth positional argument.
  const res = await appsApi.patchNamespacedDeployment(deploymentName, namespace, body, undefined, undefined, undefined, undefined, { headers });
  return res;
}
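For context on why a strategic merge patch is the right fit here: for a plain map field such as `metadata.annotations`, it behaves like a JSON merge patch, touching only the keys the patch lists and leaving everything else alone. A rough pure-JS sketch of that merge behaviour (strategic merge patch adds Kubernetes-specific list-merge rules on top, which this sketch does not model):

```javascript
// Merge-patch semantics for a map field: keys in the patch are added or
// overwritten, all other existing keys are left untouched.
function mergeAnnotations(existing, patch) {
  return { ...existing, ...patch };
}

const current = { team: 'infra', foo: 'old' };
const patched = mergeAnnotations(current, { foo: 'bar' });
// patched is { team: 'infra', foo: 'bar' } - 'team' survives, 'foo' is replaced
```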
Thanks to this comment https://github.com/kubernetes-client/javascript/issues/19#issuecomment-582886605 for pointing out a working version to pin.
@Inshaal93 You can use 0.18.1. patchNamespacedDeployment takes nine parameters, so the options parameter should be the last one. For example:
const headers = { 'Content-type': k8s.PatchUtils.PATCH_FORMAT_STRATEGIC_MERGE_PATCH };
await appsApi.patchNamespacedDeployment(deploymentName, namespace, body, undefined, undefined, undefined, undefined, undefined, { headers });
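For reference, the `PatchUtils.PATCH_FORMAT_*` constants resolve to the MIME types the API server dispatches on. A plain-object sketch of that mapping (values taken from the Kubernetes API conventions; the constant names are those used in the snippet above):

```javascript
// MIME types the Kubernetes API server accepts for PATCH requests.
// These mirror the k8s.PatchUtils.PATCH_FORMAT_* constants in
// @kubernetes/client-node.
const PATCH_FORMATS = {
  jsonPatch: 'application/json-patch+json',                      // RFC 6902 operation list
  mergePatch: 'application/merge-patch+json',                    // RFC 7386 partial object
  strategicMergePatch: 'application/strategic-merge-patch+json', // Kubernetes-specific merge
  applyYaml: 'application/apply-patch+yaml',                     // server-side apply
};

// A 415 Unsupported Media Type means the Content-Type header did not reach
// the server as one of these values, typically because the options object
// was passed in the wrong positional slot.
```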
This worked, thank you @noomz!
Does that mean the documentation for the endpoint is incorrect? It clearly states there are only eight parameters for patchNamespacedDeployment, and the example referenced above shows the same.
@Inshaal93 I guess the docs haven't been updated ;(
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
For those following this issue: I just tried the 1.0.0-rc4 release, and it looks like this will be fixed there.
There are also no more undefined arguments to pass to the function:
await appsAPI.patchNamespacedDeployment({
name,
namespace,
body: jsonPatch,
});
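A fuller sketch of the 1.0-style call, assuming the object-argument signature shown above. The client is injected here so the call shape can be exercised without a cluster; with the real library you would create it via `kc.makeApiClient(k8s.AppsV1Api)`. The patch body and annotation names are illustrative:

```javascript
// JSON Patch body adding an annotation (the key and value are examples).
const jsonPatch = [
  { op: 'add', path: '/metadata/annotations/foo', value: 'bar' },
];

// In the 1.0 client the request is a single named-parameter object, so
// there are no positional `undefined` placeholders to count and no
// Content-Type header to thread through manually.
async function annotateDeployment(appsApi, name, namespace = 'default') {
  return appsApi.patchNamespacedDeployment({
    name,
    namespace,
    body: jsonPatch,
  });
}
```

Because the parameters are named, adding or reordering optional parameters in a future release can no longer silently shift the options object out of position, which is what made the 0.17/0.18 header override so fragile.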
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.