
exec command is parsing output if output looks like JSON

Open tkinz27 opened this issue 6 years ago • 21 comments

What happened (please include outputs or screenshots):

I'm working on a scale test where the test orchestrator execs into each pod and runs a curl command to verify the pod is in the expected state. The output of the curl is JSON.

The problem seems to be that the stream client parses the output into a Python dictionary when the output is pure JSON, but returns it to the caller as a stringified dictionary. Since I'm expecting JSON, json.loads fails on the result.

To verify I ran

In [5]: cmd=["echo", '{"example":"json", "with":null, "and":true}']

In [6]: kstream.stream(api_instance.connect_get_namespaced_pod_exec,
   ...:                   name=pod,
   ...:                   namespace=ns,
   ...:                   container=container,
   ...:                   command=cmd,
   ...:                   stderr=True,
   ...:                   stdin=False,
   ...:                   tty=False,
   ...:                   stdout=True)
Out[6]: "{'example': 'json', 'with': None, 'and': True}"

In [7]: cmd=["echo", 'but not actually json {"example":"json", "with":null, "and":true}']

In [8]: kstream.stream(api_instance.connect_get_namespaced_pod_exec,
   ...:                   name=pod,
   ...:                   namespace=ns,
   ...:                   container=container,
   ...:                   command=cmd,
   ...:                   stderr=True,
   ...:                   stdin=False,
   ...:                   tty=False,
   ...:                   stdout=True)
Out[8]: 'but not actually json {"example":"json", "with":null, "and":true}\n'

What you expected to happen:

I really did not expect the response to be different depending on the output of the command being run.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

We were able to work around the issue by passing _preload_content=False to the stream call and handling waiting for the response and reading stdout/stderr ourselves.
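For reference, a minimal sketch of that workaround. The helper name exec_json is hypothetical (not part of the client library), and the api/name/namespace/container arguments are placeholders for whatever your cluster uses:

```python
import json

def exec_json(api, name, namespace, container, command, timeout=10):
    """Exec `command` in a pod and return its stdout parsed as JSON.

    `api` is a CoreV1Api instance. Hypothetical helper, not part of
    the kubernetes client library.
    """
    from kubernetes.stream import stream  # deferred so the sketch reads standalone

    # _preload_content=False returns the raw WSClient instead of a
    # pre-deserialized string, so stdout reaches us untouched.
    ws = stream(api.connect_get_namespaced_pod_exec,
                name=name, namespace=namespace, container=container,
                command=command,
                stderr=True, stdin=False, stdout=True, tty=False,
                _preload_content=False)
    ws.run_forever(timeout=timeout)  # wait for the command to complete
    out = ws.read_stdout()
    ws.close()
    return json.loads(out)  # parse the raw output ourselves
```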

I wasn't sure where the rogue JSON parsing happens; maybe in the WSResponse class?

Environment:

  • Kubernetes version (kubectl version):
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.6", GitCommit:"7015f71e75f670eb9e7ebd4b5749639d42e20079", GitTreeState:"clean", BuildDate:"2019-11-19T15:41:24Z", GoVersion:"go1.12.13", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.8-eks-b8860f", GitCommit:"b8860f6c40640897e52c143f1b9f011a503d6e46", GitTreeState:"clean", BuildDate:"2019-11-25T00:55:38Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
  • OS (e.g., MacOS 10.13.6): Ubuntu 19.10
  • Python version (python --version)
python 3.7.5
  • Python client version (pip list | grep kubernetes)
kubernetes==10.0.1

tkinz27 avatar Dec 13 '19 21:12 tkinz27

cc @jiahuif

roycaihw avatar Dec 17 '19 18:12 roycaihw

The json parsing is done by https://github.com/kubernetes-client/python/blob/ccd3ce4fc27ed202864f9b3f971744a5ebf6ad7e/kubernetes/client/api_client.py#L242

Essentially what it does is

import json

data = '{"example":"json", "with":null, "and":true}'
data = json.loads(data)  # deserialized into a Python dict
data = str(data)         # then stringified using Python repr syntax

and the result is the string representation of the dict, "{'example': 'json', 'with': None, 'and': True}", which is no longer valid JSON.
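The round trip above can be demonstrated locally with nothing but the standard library, including the failure the reporter hit when feeding the result back through json.loads:

```python
import json

# The server sends valid JSON on the exec stream...
payload = '{"example": "json", "with": null, "and": true}'

# ...but the client deserializes it and stringifies the resulting dict,
# so the caller receives Python repr syntax instead of JSON.
returned = str(json.loads(payload))
print(returned)  # {'example': 'json', 'with': None, 'and': True}

# Re-parsing that string fails: single quotes, None, and True are not JSON.
try:
    json.loads(returned)
except json.JSONDecodeError as err:
    print("json.loads fails:", err)
```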

This looks like a bug in the upstream code generator. The generated client should not try to parse the response payload as JSON when the declared response type is string.

@tkinz27 could you file an issue in https://github.com/OpenAPITools/openapi-generator?

      "get": {
        "consumes": [
          "*/*"
        ],
        "description": "connect GET requests to exec of Pod",
        "operationId": "connectCoreV1GetNamespacedPodExec",
        "produces": [
          "*/*"
        ],
        "responses": {
          "200": {
            "description": "OK",
            "schema": {
              "type": "string"
            }
          },
          "401": {
            "description": "Unauthorized"
          }
        },
        "schemes": [
          "https"
        ],

roycaihw avatar Dec 17 '19 20:12 roycaihw

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot avatar Mar 16 '20 21:03 fejta-bot

/remove-lifecycle stale

mitar avatar Mar 16 '20 22:03 mitar

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot avatar Jun 14 '20 23:06 fejta-bot

/remove-lifecycle stale

mitar avatar Jun 14 '20 23:06 mitar

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot avatar Sep 13 '20 00:09 fejta-bot

/remove-lifecycle stale

mitar avatar Sep 13 '20 00:09 mitar

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot avatar Dec 12 '20 01:12 fejta-bot

/remove-lifecycle stale

mitar avatar Dec 13 '20 07:12 mitar

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

fejta-bot avatar Mar 13 '21 08:03 fejta-bot

/remove-lifecycle stale

snstanton avatar Mar 15 '21 13:03 snstanton

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

fejta-bot avatar Jun 13 '21 14:06 fejta-bot

/remove-lifecycle stale

snstanton avatar Jun 14 '21 17:06 snstanton

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Sep 12 '21 18:09 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Oct 12 '21 18:10 k8s-triage-robot

/remove-lifecycle rotten

mitar avatar Oct 18 '21 23:10 mitar

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jan 16 '22 23:01 k8s-triage-robot

/remove-lifecycle stale

mitar avatar Jan 16 '22 23:01 mitar

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Apr 17 '22 00:04 k8s-triage-robot