create_namespaced_binding() method shows target.name: Required value
When calling the API's create_namespaced_binding() method like so:
config.load_kube_config()
v1 = client.CoreV1Api()
v1.create_namespaced_binding(namespace, body)
The following error is thrown:
Exception when calling CoreV1Api->create_namespaced_binding: (500)
Reason: Internal Server Error
HTTP response headers: HTTPHeaderDict({'Content-Type': 'application/json', 'Date': 'Wed, 06 Jun 2018 20:55:04 GMT', 'Content-Length': '120'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"target.name: Required value","code":500}
Also, when I use body = client.V1Binding(), the following error is thrown:
File "/usr/lib/python2.7/site-packages/kubernetes/client/models/v1_binding.py", line 64, in __init__
self.target = target
File "/usr/lib/python2.7/site-packages/kubernetes/client/models/v1_binding.py", line 156, in target
raise ValueError("Invalid value for `target`, must not be `None`")
ValueError: Invalid value for `target`, must not be `None`
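Since client 6.0 the generated V1Binding model validates target at construction time, so the only way to instantiate it is with a populated target; a minimal sketch (the node and pod names here are placeholders, not from this report):

target = client.V1ObjectReference(kind='Node', api_version='v1', name='some-node')  # placeholder node name
meta = client.V1ObjectMeta(name='some-pod')  # placeholder pod name
body = client.V1Binding(target=target, metadata=meta)  # no ValueError: target is not None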
Environment is:
Both Python 2.7 and Python 3.6
Metadata-Version: 2.1
Name: kubernetes
Version: 6.0.0
Full code for the custom scheduler
import random

from kubernetes import client, config, watch
from kubernetes.client.rest import ApiException

config.load_kube_config()
v1 = client.CoreV1Api()
scheduler_name = 'custom-scheduler-test'

def nodes_available():
    ready_nodes = []
    for n in v1.list_node().items:
        for status in n.status.conditions:
            if status.status == 'True' and status.type == 'Ready':
                ready_nodes.append(n.metadata.name)
    return ready_nodes

def scheduler(name, node, namespace='default'):
    body = client.V1ConfigMap()
    # or this # body = client.V1Binding()
    target = client.V1ObjectReference()
    target.kind = 'Node'
    target.api_version = 'v1'
    target.name = node
    meta = client.V1ObjectMeta()
    meta.name = name
    body.target = target
    body.metadata = meta
    return v1.create_namespaced_binding(namespace, body)

def main():
    w = watch.Watch()
    for event in w.stream(v1.list_namespaced_pod, 'default'):
        if (event['object'].status.phase == 'Pending'
                and event['object'].spec.scheduler_name == scheduler_name):
            try:
                res = scheduler(event['object'].metadata.name, random.choice(nodes_available()))
            except ApiException as e:
                print("Exception when calling CoreV1Api->create_namespaced_binding: %s\n" % e)

if __name__ == '__main__':
    main()
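To exercise a scheduler like this, a pod has to request it via spec.schedulerName; a minimal sketch using the same client (the pod and image names are illustrative, not from this issue):

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name='test-pod'),
    spec=client.V1PodSpec(
        scheduler_name='custom-scheduler-test',  # must match scheduler_name above
        containers=[client.V1Container(name='web', image='nginx')]))
v1.create_namespaced_pod('default', pod)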
Any luck with this?
Also having this problem. The strange thing is the pod runs on the node at the same time I get the error message 🤷‍♂️
Check out kubernetes-client/gen#52. As in all the situations where a None status is reported, the call is actually performed.
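In practice that means the ValueError can be caught and the result verified out of band; a sketch under that assumption, reusing the names from the scheduler code above:

try:
    v1.create_namespaced_binding(namespace, body)
except ValueError:
    pass  # the POST itself has usually succeeded by this point
pod = v1.read_namespaced_pod(name, namespace)
print(pod.spec.node_name)  # set once the binding has gone through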
Any workarounds? My pods remain in the Pending state, and the scheduler prints "target.name: Required value" when I run it in the terminal. Can anyone provide a link to a working custom Python scheduler example? I was following this article.
I'm also having the same issue 😞
I was able to make it work by staying with client version 2.0. According to the documentation, they changed the function name and signature to create_namespaced_binding(body, namespace). However, in the actual function the order of the parameters is not the same as in the documentation, which produces the above error and confuses everyone. So I decided to use the old Python client 2.0 with K8s 1.7. Please note that the function name and signature are different in 2.0: create_namespaced_binding_binding(name, namespace, body). Here is my modified code for the scheduler function:
def scheduler(name, node, namespace=NAMESPACE):
    body = client.V1Binding()
    target = client.V1ObjectReference()
    target.kind = 'Node'
    target.api_version = 'v1'
    target.name = node
    meta = client.V1ObjectMeta()
    meta.name = name
    body.target = target
    body.metadata = meta
    try:
        # Method changed in client v6.0
        # return v1.create_namespaced_binding(body, namespace)
        # For v2.0
        res = v1.create_namespaced_binding_binding(name, namespace, body)
        if res:
            # print('POD ' + name + ' scheduled and placed on ' + node)
            return True
    except Exception as a:
        print("Exception when calling CoreV1Api->create_namespaced_binding: %s\n" % a)
    return False
Hope it helps. Thanks to Jibin Parayil Thomas for reaching out to me.
Thanks @abushoeb
I got the same exception as the original poster, but I noticed that the binding actually was successful. The pod gets the node assigned, but the exception is thrown anyway. I noticed that all the examples of custom schedulers call an empty constructor, V1Binding(), but the API now shows that target is mandatory. However, even after adding target in, it still throws the exception but continues to bind properly. Here is the code I'm using:
target = client.V1ObjectReference()
target.kind = "Node"
target.api_version = "v1"
target.name = node
meta = client.V1ObjectMeta()
meta.name = podname
body = client.V1Binding(target=target, metadata=meta)
return self.v1.create_namespaced_binding(namespace=ns, body=body)
@cliffburdick what version of K8s and the Python client are you using?
k8s 1.12.1 and client 8.0
I'm running something similar to @cliffburdick on v1.12.2 using client v8.0, and it seems to be working apart from the ValueError getting thrown.
It looks to me like the ValueError gets raised before the value is set.
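The generated models assign through validating property setters, so the check fires before anything is stored; a reconstructed sketch of the pattern (not the exact source of any particular release):

class V1Binding(object):
    """Reconstructed sketch of the generated model (swagger-codegen style), not the exact source."""

    def __init__(self, target=None, metadata=None):
        self._target = None
        self.metadata = metadata
        self.target = target  # the setter below runs before any value has been stored

    @property
    def target(self):
        return self._target

    @target.setter
    def target(self, target):
        if target is None:
            # The API server answers the binding POST with a Status object, which has no
            # `target` field, so deserializing that response always trips this check.
            raise ValueError("Invalid value for `target`, must not be `None`")
        self._target = target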
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
/remove-lifecycle stale
Any update on this? I am still facing this issue with version 9.0.0 of the client.
I too get an error thrown using 8.0.x, but my pods still get scheduled. I use something similar to what @abushoeb suggested:
def schedule(name, node, namespace='default'):
    target = client.V1ObjectReference(kind='Node', api_version='v1', name=node)
    meta = client.V1ObjectMeta(name=name)
    body = client.V1Binding(target=target, metadata=meta)
    try:
        client.CoreV1Api().create_namespaced_binding(namespace=namespace, body=body)
    except ValueError:
        pass  # print something, or just pass
@torgeirl, thanks for confirming that the issue is still observed. The above looked a bit hackish, and hence I wanted to be sure that there is no better alternative before moving ahead with what is being suggested.
Any update on this? I am also facing this issue with v9.0.0 of the client.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
Any update? This issue has been open for over a year.
To confirm, I'm still seeing this in v10.0.1. The pod does go on and get scheduled, though.
    return v1.create_namespaced_binding(namespace, body)
File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/apis/core_v1_api.py", line 5425, in create_namespaced_binding
    (data) = self.create_namespaced_binding_with_http_info(namespace, body, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/apis/core_v1_api.py", line 5516, in create_namespaced_binding_with_http_info
    collection_formats=collection_formats)
File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/api_client.py", line 334, in call_api
    _return_http_data_only, collection_formats, _preload_content, _request_timeout)
File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/api_client.py", line 176, in __call_api
    return_data = self.deserialize(response_data, response_type)
File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/api_client.py", line 249, in deserialize
    return self.__deserialize(data, response_type)
File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/api_client.py", line 289, in __deserialize
    return self.__deserialize_model(data, klass)
File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/api_client.py", line 635, in __deserialize_model
    instance = klass(**kwargs)
File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/models/v1_binding.py", line 64, in __init__
    self.target = target
File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/models/v1_binding.py", line 156, in target
    raise ValueError("Invalid value for `target`, must not be `None`")
ValueError: Invalid value for `target`, must not be `None`
It's been a year and a half, and the issue is still here, many client versions later :(
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten
/remove-lifecycle rotten
I've noticed another side effect of this issue: you can't tell whether a pod was actually bound to a node. We've seen a couple of times where we bind the pod, this error happens, and the pod won't actually start up; it's stuck in Pending. Re-running the exact same bind command makes it work.
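One defensive option is to treat the exception as inconclusive and check the pod itself; a hypothetical helper (the v1 client and names follow the earlier snippets):

def pod_is_bound(name, node, namespace='default'):
    # Trust the pod's spec rather than the client exception
    return v1.read_namespaced_pod(name, namespace).spec.node_name == node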
The same behavior has long been documented, and the fix is on the server side. I think it's coming in Kubernetes v1.17.
This is a bug; the root cause is that the Python client fails to deserialize the returned data, which is why we see the binding succeed even with this exception. The workaround below skips the step of deserializing the returned data:
v1.create_namespaced_binding(namespace, body, _preload_content=False)
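A slightly fuller sketch of that workaround (the response handling here is an assumption, not part of the original comment):

# _preload_content=False makes the client return the raw urllib3 response
# instead of trying (and failing) to deserialize it into a V1Binding model.
resp = v1.create_namespaced_binding(namespace, body, _preload_content=False)
print(resp.status)  # e.g. 201 when the binding was created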
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen Does anyone know if this issue was fixed? I still hit the problem in the latest version.
@cofiiwu: just tested* and it doesn't seem to be fixed in client version 11.0. :disappointed:
(* I only had a Kubernetes 1.17 cluster at hand; a solid confirmation should probably come from testing with <=1.15, which are the officially supported Kubernetes versions for client v11.0.)
/reopen