Can't patch a Role using the Kubernetes Python client
What happened (please include outputs or screenshots): I'm using client version 28.1.0 and tried to patch a Role, but I get the following error:
...rce rules must supply at least one api group","field":"rules[1].apiGroups"},{"reason":"FieldValueRequired","message":"Required value: resource rules must supply at least one api group","field":"rules[2].apiGroups"},{"reason":"FieldValueRequired","message":"Required value: resource rules must supply at least one api group","field":"rules[3].apiGroups"},{"reason":"FieldValueRequired","message":"Required value: resource rules must supply at least one api group","field":"rules[4].apiGroups"},{"reason":"FieldValueRequired","message":"Required value: resource rules must supply at least one api group","field":"rules[5].apiGroups"},{"reason":"FieldValueRequired","message":"Required value: resource rules must supply at least one api group","field":"rules[0].apiGroups"},{"reason":"FieldValueRequired","message":"Required value: resource rules must supply at least one api group","field":"rules[1].apiGroups"},{"reason":"FieldValueRequired","message":"Required value: resource rules must supply at least one api group","field":"rules[2].apiGroups"},{"reason":"FieldValueRequired","message":"Required value: resource rules must supply at least one api group","field":"rules[3].apiGroups"},{"reason":"FieldValueRequired","message":"Required value: resource rules must supply at least one api group","field":"rules[4].apiGroups"},{"reason":"FieldValueRequired","message":"Required value: resource rules must supply at least one api group","field":"rules[5].apiGroups"}]},"code":422}
The role that is passed looks as follows:
{'api_version': 'rbac.authorization.k8s.io/v1', 'kind': 'Role', 'metadata': {'annotations': None, 'creation_timestamp': datetime.datetime(2023, 11, 13, 16, 3, 40, tzinfo=tzlocal()), 'deletion_grace_period_seconds': None, 'deletion_timestamp': None, 'finalizers': None, 'generate_name': None, 'generation': None, 'labels': {'app': 'cmdb-robot-cluster', 'app.kubernetes.io/component': 'admin', 'app.kubernetes.io/instance': 'cmdb-robot-cluster', 'app.kubernetes.io/managed-by': 'Helm', 'app.kubernetes.io/name': 'cmdb-robot-cluster', 'app.kubernetes.io/version': 'cmdb-5.5-2_mariadb-10.6.12', 'cmdb-dbtype': 'mariadb', 'csf-component': 'cmdb', 'csf-subcomponent': 'admin', 'helm.sh/chart': 'cmdb-8.5.3', 'heritage': 'Helm', 'release': 'cmdb-robot-cluster'}, 'managed_fields': [{'api_version': 'rbac.authorization.k8s.io/v1', 'fields_type': 'FieldsV1', 'fields_v1': {'f:metadata': {'f:labels': {'.': {}, 'f:app': {}, 'f:app.kubernetes.io/component': {}, 'f:app.kubernetes.io/instance': {}, 'f:app.kubernetes.io/managed-by': {}, 'f:app.kubernetes.io/name': {}, 'f:app.kubernetes.io/version': {}, 'f:cmdb-dbtype': {}, 'f:csf-component': {}, 'f:csf-subcomponent': {}, 'f:helm.sh/chart': {}, 'f:heritage': {}, 'f:release': {}}}}, 'manager': 'Go-http-client', 'operation': 'Update', 'subresource': None, 'time': datetime.datetime(2023, 11, 13, 17, 0, 24, tzinfo=tzlocal())}, {'api_version': 'rbac.authorization.k8s.io/v1', 'fields_type': 'FieldsV1', 'fields_v1': {'f:rules': {}}, 'manager': 'kubectl-edit', 'operation': 'Update', 'subresource': None, 'time': datetime.datetime(2023, 11, 13, 23, 41, 9, tzinfo=tzlocal())}], 'name': 'cmdb-robot-cluster', 'namespace': 'nc3577-admin-ns', 'owner_references': None, 'resource_version': '1091532703', 'self_link': None, 'uid': '953bc222-0363-45df-9c7b-54e11673dfce'}, 'rules': [{'api_groups': [''], 'non_resource_ur_ls': None, 'resource_names': None, 'resources': ['configmaps', 'pods', 'persistentvolumeclaims', 'secrets'], 'verbs': ['create', 'get', 'list', 'patch', 'update', 'delete']}, {'api_groups': ['apps'], 'non_resource_ur_ls': None, 'resource_names': None, 'resources': ['statefulsets', 'deployments'], 'verbs': ['get', 'list']}, {'api_groups': ['apps'], 'non_resource_ur_ls': None, 'resource_names': None, 'resources': ['statefulsets/status', 'deployments/status'], 'verbs': ['get']}, {'api_groups': [''], 'non_resource_ur_ls': None, 'resource_names': None, 'resources': ['pods/exec'], 'verbs': ['create', 'get']}, {'api_groups': [''], 'non_resource_ur_ls': None, 'resource_names': None, 'resources': ['pods/status'], 'verbs': ['get']}, {'api_groups': ['networking.istio.io'], 'resources': ['destinationrules'], 'verbs': ['create', 'get', 'list', 'delete']}]}
What you expected to happen: The Role to be patched successfully.
How to reproduce it (as minimally and precisely as possible): I found the following fix for the Java client and wonder if the cause here is similar:
https://github.com/xtf-cz/xtf/commit/bec05244575a91f8dd2776a3b33a0626889e88fe
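In Python, something along these lines should hit the same 422 (a reconstruction based on the Role dump above, not the original script): `V1Role.to_dict()` emits snake_case keys such as `api_groups`, which the API server does not recognize.

```python
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

# Read the existing Role and append the new networking.istio.io rule
role = rbac.read_namespaced_role(name="cmdb-robot-cluster", namespace="nc3577-admin-ns")
role.rules.append(client.V1PolicyRule(
    api_groups=["networking.istio.io"],
    resources=["destinationrules"],
    verbs=["create", "get", "list", "delete"],
))

# Passing the dict form of the model sends snake_case keys (api_groups),
# which the API server rejects with the 422 shown above.
rbac.patch_namespaced_role(
    name="cmdb-robot-cluster",
    namespace="nc3577-admin-ns",
    body=role.to_dict(),
)
```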
Anything else we need to know?:
Environment:
- Kubernetes version (`kubectl version`): 28.1.0
- OS (e.g., MacOS 10.13.6): Linux
- Python version (`python --version`): 3.6.8
- Python client version (`pip list | grep kubernetes`): 28.1.0
One observation: the Role returned by the get call has keys such as api_groups, whereas the API expects apiGroups when patching. The newly added rule is fine; it is the existing keys (api_groups) that cause the error, so this looks like a bug. Is there any kind of workaround?
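For reference, a workaround sketch (untested against this cluster, and assuming `role` is the model object returned by `read_namespaced_role()` as in the reproduction above): either pass the model object itself, or serialize it with the client's own serializer so the camelCase field names are sent.

```python
# Option 1: pass the model object; the client serializes it to the
# camelCase field names (apiGroups) that the API server expects.
rbac.patch_namespaced_role(
    name="cmdb-robot-cluster",
    namespace="nc3577-admin-ns",
    body=role,
)

# Option 2: if a plain dict is needed, convert the model with the API
# client's serializer instead of to_dict(), which keeps snake_case keys.
body = client.ApiClient().sanitize_for_serialization(role)
rbac.patch_namespaced_role(
    name="cmdb-robot-cluster",
    namespace="nc3577-admin-ns",
    body=body,
)
```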
@amgads Could you provide more information:
- What you tried to achieve with the patch.
- The code you used to perform the patch.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
@amgads Could you provide more information:
- What you tried to achieve with the patch.
- The code you used to perform the patch.
I am in a similar situation.
- With the patch I want to update the container image in my deployment.
- Here is the code I used to perform the patch:
```python
from kubernetes import client, config

# kubeconfig here is a dict holding the cluster credentials
config.load_kube_config_from_dict(config_dict=kubeconfig)
v1_api = client.CoreV1Api()  # api_client
apps_v1_api = client.AppsV1Api()

image_path = "xxxxxxxxxx.dkr.ecr.ap-southeast-1.amazonaws.com/ta/api:123"
# Strategic-merge patch that only updates the container image
update_body = {"spec": {"template": {"spec": {"containers": [{"name": "staging-app", "image": image_path}]}}}}
update_response = apps_v1_api.patch_namespaced_deployment(name="staging-app", namespace="staging", body=update_body)
```
This gives me a 403 permission error, even though I have set permissions in both the EKS aws-auth ConfigMap and the IAM role. Any ideas?
I got this fixed by granting the resource under apiGroups in the Kubernetes Role in the required namespace. For the time being I have used it like this:
```yaml
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["get", "watch", "list", "update", "patch"]
```
I will remove the "*" and use specific values, but this works for me; a narrower version is sketched below.
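For reference, a narrower rule scoped to what the deployment patch above actually needs might look like this (resource and verb list assumed from the snippet, adjust to your setup):

```yaml
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "update", "patch"]
```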
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.