unable to switch cluster and create resource
My pipeline runs on cluster1. In one of the jobs I need to switch to cluster2, so I assume the role, parse the credentials, change the context, and try to create an object.
Creating the object with kubectl via subprocess works fine, but creating it through the Python API fails with a Forbidden error.
config.in_cluster_config() — this does not throw an error, but object creation then fails with a 403.
config.load_kube_config() — this fails with:
ERROR - exec: process returned 252. usage: aws [options]
How is the kubectl command able to create the object, but not the API?
2025-02-11 08:22:22,136 - ERROR - Exception when creating Namespace: (403)
Reason: Forbidden
HTTP response headers: HTTPHeaderDict({'Audit-Id': '111', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'X-Content-Type-Options': 'nosniff', 'X-Kubernetes-Pf-Flowschema-Uid': '111', 'X-Kubernetes-Pf-Prioritylevel-Uid': '11f', 'Date': 'Tue, 11 Feb 2025 08:22:22 GMT', 'Content-Length': '273'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"namespaces is forbidden: User \"system:anonymous\" cannot create resource \"namespaces\" in API group \"\" at the cluster scope","reason":"Forbidden","details":{"kind":"namespaces"},"code":403}
Steps done to switch the cluster
import json
import os
import subprocess

# Assume role
result = subprocess.run(['aws', 'sts', 'assume-role', '--role-arn', role_arn, '--role-session-name', 'test'], capture_output=True, text=True, check=True)
credentials = json.loads(result.stdout)['Credentials']
# Export credentials
os.environ['AWS_ACCESS_KEY_ID'] = credentials['AccessKeyId']
os.environ['AWS_SECRET_ACCESS_KEY'] = credentials['SecretAccessKey']
os.environ['AWS_SESSION_TOKEN'] = credentials['SessionToken']
# Update kubeconfig
subprocess.run(['aws', 'eks', '--region', region, 'update-kubeconfig', '--name', cluster_name], check=True)
config.in_cluster_config() / config.load_kube_config()
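One thing worth ruling out (a sketch, not a confirmed fix): an EKS kubeconfig authenticates through an exec plugin (`aws eks get-token`), which the Python client runs as a subprocess inheriting the current environment. Passing the assumed-role credentials and region to child processes explicitly, instead of mutating os.environ piecemeal, makes that dependency visible; the helper name `env_with_credentials` is hypothetical.

```python
# Hypothetical helper: build an explicit environment that carries the
# assumed-role credentials, rather than mutating os.environ in place.
import os
import subprocess
import sys

def env_with_credentials(credentials, region):
    env = dict(os.environ)
    env.update({
        "AWS_ACCESS_KEY_ID": credentials["AccessKeyId"],
        "AWS_SECRET_ACCESS_KEY": credentials["SecretAccessKey"],
        "AWS_SESSION_TOKEN": credentials["SessionToken"],
        # The exec plugin invokes the aws CLI; a missing region is one way
        # to get a "usage: aws [options]" error back from it.
        "AWS_DEFAULT_REGION": region,
    })
    return env

# Demo with dummy credentials: a child process sees the injected values.
env = env_with_credentials(
    {"AccessKeyId": "AKIA-dummy", "SecretAccessKey": "dummy", "SessionToken": "dummy"},
    "us-east-1",
)
out = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['AWS_DEFAULT_REGION'])"],
    env=env, capture_output=True, text=True,
)
print(out.stdout.strip())  # us-east-1
```

The same `env` dict can be passed as `env=env` to the `aws eks update-kubeconfig` call above, so every subprocess sees identical credentials.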
Could you check if https://github.com/kubernetes-client/python/blob/master/examples/pick_kube_config_context.py helps? I think you need to load the right config.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.