
create_namespaced_binding() method shows target.name: Required value

abushoeb opened this issue • 79 comments

When calling the API create_namespaced_binding() method like so:

config.load_kube_config()
v1 = client.CoreV1Api()
v1.create_namespaced_binding(namespace, body)

The following error is thrown:

Exception when calling CoreV1Api->create_namespaced_binding: (500)
Reason: Internal Server Error
HTTP response headers: HTTPHeaderDict({'Content-Type': 'application/json', 'Date': 'Wed, 06 Jun 2018 20:55:04 GMT', 'Content-Length': '120'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"target.name: Required value","code":500} 

Also when I use body = client.V1Binding() following error is thrown:

 File "/usr/lib/python2.7/site-packages/kubernetes/client/models/v1_binding.py", line 64, in __init__
    self.target = target
  File "/usr/lib/python2.7/site-packages/kubernetes/client/models/v1_binding.py", line 156, in target
    raise ValueError("Invalid value for `target`, must not be `None`")
ValueError: Invalid value for `target`, must not be `None`
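A minimal pure-Python sketch of why the bare constructor raises (`V1BindingSketch` is a hypothetical stand-in, not the real generated model): the generated model's `target` property setter rejects `None`, and the constructor runs that setter unconditionally, so `V1Binding()` with no arguments can never succeed.

```python
class V1BindingSketch:
    """Hypothetical stand-in for the generated V1Binding model."""

    def __init__(self, target=None, metadata=None):
        self._target = None
        self.target = target      # the setter runs even when target is None
        self.metadata = metadata

    @property
    def target(self):
        return self._target

    @target.setter
    def target(self, target):
        if target is None:
            raise ValueError("Invalid value for `target`, must not be `None`")
        self._target = target

try:
    V1BindingSketch()             # mirrors calling client.V1Binding() bare
except ValueError as e:
    print(e)                      # Invalid value for `target`, must not be `None`

V1BindingSketch(target="node-1")  # succeeds once a target is supplied
```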

Environment is:

Both Python 2.7 and Python 3.6  
Metadata-Version: 2.1
Name: kubernetes
Version: 6.0.0

Full code for the custom scheduler

import random

from kubernetes import client, config, watch
from kubernetes.client.rest import ApiException
config.load_kube_config()
v1 = client.CoreV1Api()

scheduler_name = 'custom-scheduler-test'

def nodes_available():
    ready_nodes = []
    for n in v1.list_node().items:
        for status in n.status.conditions:
            if status.status == 'True' and status.type == 'Ready':
                ready_nodes.append(n.metadata.name)
    return ready_nodes

def scheduler(name, node, namespace='default'):
    body = client.V1ConfigMap()
    # or this # body = client.V1Binding() 
    target = client.V1ObjectReference()
    target.kind = 'Node'
    target.api_version = 'v1'
    target.name = node
    meta = client.V1ObjectMeta()
    meta.name = name
    body.target = target
    body.metadata = meta
    return v1.create_namespaced_binding(namespace, body)

def main():
    w = watch.Watch()
    for event in w.stream(v1.list_namespaced_pod, 'default'):
        if (event['object'].status.phase == 'Pending'
                and event['object'].spec.scheduler_name == scheduler_name):
            try:
                res = scheduler(event['object'].metadata.name, random.choice(nodes_available()))
            except ApiException as e:
                print ("Exception when calling CoreV1Api->create_namespaced_binding: %s\n" % e)

if __name__ == '__main__':
    main()

abushoeb avatar Jun 08 '18 16:06 abushoeb

Any luck with this?

jibinpt avatar Nov 20 '18 08:11 jibinpt

Also having this problem. The strange thing is the pod runs on the node at the same time I get the error message 🤷‍♂️

tomyan avatar Nov 20 '18 10:11 tomyan

Check out kubernetes-client/gen#52. As in all the other situations where a `None` status is reported, the call is actually performed.

micw523 avatar Nov 20 '18 16:11 micw523

Any workarounds? My pods remain in the Pending state, and the scheduler prints "target.name: Required value" when I run it in the terminal. Can anyone provide a link to a working custom Python scheduler example? I was following this article

jibinpt avatar Nov 20 '18 16:11 jibinpt

I'm also having the same issue 😞

agassner avatar Nov 20 '18 18:11 agassner

I was able to make it work by staying with client version 2.0. According to the documentation, they changed the function name and signature to create_namespaced_binding(body, namespace). However, in the actual function the order of the parameters does not match the documentation, which produces the above error and confuses everyone. So I decided to use the old Python client 2.0 with Kubernetes 1.7. Note that the function name and signature are different in 2.0: create_namespaced_binding_binding(name, namespace, body). Here is my modified scheduler function:

def scheduler(name, node, namespace=NAMESPACE):
    body = client.V1Binding()

    target = client.V1ObjectReference()
    target.kind = 'Node'
    target.api_version = 'v1'
    target.name = node

    meta = client.V1ObjectMeta()
    meta.name = name

    body.target = target
    body.metadata = meta

    try:
        # Method changed in client v6.0:
        # return v1.create_namespaced_binding(body, namespace)
        # For v2.0:
        res = v1.create_namespaced_binding_binding(name, namespace, body)
        if res:
            # print 'POD '+name+' scheduled and placed on '+node
            return True

    except Exception as a:
        print ("Exception when calling CoreV1Api->create_namespaced_binding: %s\n" % a)
        return False

Hope it helps. Thanks to Jibin Parayil Thomas for reaching out to me.

abushoeb avatar Nov 21 '18 04:11 abushoeb

Thanks @abushoeb

jibinpt avatar Nov 21 '18 21:11 jibinpt

I got the same exception as the original poster, but I noticed that the binding actually was successful. The pod gets the node assigned, but the exception is thrown anyway. I noticed that all the custom-scheduler examples call the V1Binding() constructor with no arguments, but the API now marks target as mandatory. However, even after adding target, it still throws the exception, yet continues to bind properly. Here is the code I'm using:

            target = client.V1ObjectReference()
            target.kind = "Node"
            target.api_version = "v1"
            target.name = node

            meta = client.V1ObjectMeta()
            meta.name = podname
            body = client.V1Binding(target=target, metadata=meta)

            return self.v1.create_namespaced_binding(namespace=ns, body=body)

cliffburdick avatar Jan 17 '19 22:01 cliffburdick

@cliffburdick which versions of K8s and the Python client are you using?

abushoeb avatar Jan 18 '19 01:01 abushoeb

@cliffburdick which versions of K8s and the Python client are you using?

k8s 1.12.1 and client 8.0

cliffburdick avatar Jan 18 '19 03:01 cliffburdick

I'm running something similar to @cliffburdick on v1.12.2 using client v8.0, and it seems to be working apart from the ValueError getting thrown.

It looks to me like the ValueError gets raised before the value is set.

torgeirl avatar Jan 26 '19 17:01 torgeirl

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot avatar Apr 29 '19 18:04 fejta-bot

/remove-lifecycle stale

torgeirl avatar Apr 30 '19 07:04 torgeirl

Any update on this? I am still facing this issue with version 9.0.0 of the client

spurti-chopra avatar May 20 '19 10:05 spurti-chopra

Any update on this? I am still facing this issue with version 9.0.0 of the client

I too get an error thrown using 8.0.x, but my pods still get scheduled. I'm using something similar to what @abushoeb suggested:

def schedule(name, node, namespace='default'):
    target = client.V1ObjectReference(kind='Node', api_version='v1', name=node)
    meta = client.V1ObjectMeta(name=name)
    body = client.V1Binding(target=target, metadata=meta)
    try:
        client.CoreV1Api().create_namespaced_binding(namespace=namespace, body=body)
    except ValueError:
        pass  # the exception is expected; the binding has already succeeded

torgeirl avatar May 20 '19 11:05 torgeirl

@torgeirl, thanks for confirming that the issue is still observed. The above looks a bit hackish, so I wanted to be sure there is no better alternative before moving ahead with what is being suggested.

spurti-chopra avatar May 20 '19 15:05 spurti-chopra

Any update on this? I am also facing this issue with v9.0.0 of the client.

hirenvadalia avatar Jun 05 '19 04:06 hirenvadalia

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot avatar Sep 03 '19 05:09 fejta-bot

Any update? The issue has been posted over one year.

cofiiwu avatar Sep 15 '19 18:09 cofiiwu

To confirm, I'm still seeing this in v10.0.1. The pod does go on and get scheduled, though.

return v1.create_namespaced_binding(namespace, body)
  File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/apis/core_v1_api.py", line 5425, in create_namespaced_binding
    (data) = self.create_namespaced_binding_with_http_info(namespace, body, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/apis/core_v1_api.py", line 5516, in create_namespaced_binding_with_http_info
    collection_formats=collection_formats)
  File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/api_client.py", line 334, in call_api
    _return_http_data_only, collection_formats, _preload_content, _request_timeout)
  File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/api_client.py", line 176, in __call_api
    return_data = self.deserialize(response_data, response_type)
  File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/api_client.py", line 249, in deserialize
    return self.__deserialize(data, response_type)
  File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/api_client.py", line 289, in __deserialize
    return self.__deserialize_model(data, klass)
  File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/api_client.py", line 635, in __deserialize_model
    instance = klass(**kwargs)
  File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/models/v1_binding.py", line 64, in __init__
    self.target = target
  File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/models/v1_binding.py", line 156, in target
    raise ValueError("Invalid value for `target`, must not be `None`")
ValueError: Invalid value for `target`, must not be `None`
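The traceback above shows where the failure happens: the API server answers the (successful) binding with a `Status` object, whose JSON has no `target` field, yet the client still tries to build a `V1Binding` from it. A rough plain-Python sketch of that deserialization path (`FakeV1Binding` and `deserialize_model` are illustrative stand-ins, not the real client code):

```python
import json

class FakeV1Binding:
    """Hypothetical stand-in for the generated V1Binding model."""
    attribute_map = {"target": "target", "metadata": "metadata"}

    def __init__(self, target=None, metadata=None):
        if target is None:
            raise ValueError("Invalid value for `target`, must not be `None`")
        self.target = target
        self.metadata = metadata

def deserialize_model(data, klass):
    """Rough analogue of ApiClient.__deserialize_model."""
    kwargs = {attr: data.get(json_key)
              for attr, json_key in klass.attribute_map.items()}
    return klass(**kwargs)  # raises when `target` is absent from the payload

# The server's reply to a successful bind is a Status, not a Binding:
status_payload = json.loads(
    '{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success"}'
)
try:
    deserialize_model(status_payload, FakeV1Binding)
except ValueError as e:
    print(e)  # raised after the bind has already succeeded server-side
```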

Urvik08 avatar Oct 09 '19 22:10 Urvik08

It's been a year and a half and the issue is still here, many client versions later :(

damaca avatar Oct 20 '19 13:10 damaca

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

fejta-bot avatar Nov 19 '19 13:11 fejta-bot

/remove-lifecycle rotten

I've noticed another side effect of this issue: you can't tell whether a pod was actually bound to a node. We've seen a couple of times where we bind the pod, this error happens, and the pod won't actually start up. It's stuck in Pending. Re-running the exact same bind command makes it work.

cliffburdick avatar Nov 22 '19 15:11 cliffburdick

The same behavior has long been documented, and the fix is on the server side. I think it's coming in Kubernetes v1.17.

micw523 avatar Nov 22 '19 18:11 micw523

This is a bug; the root cause is that the Python client fails to deserialize the returned data, which is why we see the binding succeed even with this exception. The workaround below skips the step of deserializing the returned data:

v1.create_namespaced_binding(namespace, body, _preload_content=False)
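Fleshed out, a sketch of that workaround might look like the following (`bind_pod` and `parse_bind_status` are illustrative names; it assumes the kubernetes package is installed and `v1` is a configured `CoreV1Api` instance). With `_preload_content=False` the client returns the raw urllib3 response, so we parse the `Status` payload ourselves instead of letting the client mis-deserialize it as a `V1Binding`:

```python
import json

def bind_pod(v1, pod_name, node_name, namespace='default'):
    """Bind a pending pod to a node, skipping response deserialization."""
    from kubernetes import client  # assumes the kubernetes package is installed
    target = client.V1ObjectReference(kind='Node', api_version='v1', name=node_name)
    meta = client.V1ObjectMeta(name=pod_name)
    body = client.V1Binding(target=target, metadata=meta)
    # _preload_content=False makes the client return the raw urllib3 response
    # instead of trying (and failing) to deserialize the Status payload.
    resp = v1.create_namespaced_binding(namespace, body, _preload_content=False)
    return parse_bind_status(resp.data)

def parse_bind_status(raw):
    """The API server answers a binding request with a v1 Status object."""
    status = json.loads(raw)
    return status.get('status') == 'Success'
```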

zhcf avatar Nov 26 '19 07:11 zhcf

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close

fejta-bot avatar Dec 26 '19 08:12 fejta-bot

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Dec 26 '19 08:12 k8s-ci-robot

/reopen Does anyone know if this was fixed? I still hit the problem in the latest version.

cofiiwu avatar Mar 17 '20 16:03 cofiiwu

@cofiiwu: just tested* and it doesn't seem to be fixed in client version 11.0. :disappointed:

(* I only had a Kubernetes 1.17 cluster at hand; a solid confirmation should probably come from testing with <=1.15, which are the officially supported Kubernetes versions for client v11.0.)

torgeirl avatar Mar 17 '20 18:03 torgeirl

/reopen

torgeirl avatar Mar 26 '20 11:03 torgeirl