ValueError when using list_event_for_all_namespaces
What happened (please include outputs or screenshots):
Attempting to use client.EventsV1beta1Api().list_event_for_all_namespaces() raises a ValueError:
...
File "C:\Users\lameg\miniforge3\lib\site-packages\kubernetes\client\models\v1beta1_event.py", line 290, in event_time
raise ValueError("Invalid value for event_time, must not be None")
ValueError: Invalid value for event_time, must not be None
What you expected to happen: Get a list of events
How to reproduce it (as minimally and precisely as possible): Run the following code snippet targeting a minikube cluster:
from kubernetes import config, client
config.load_kube_config()
api = client.EventsV1beta1Api()
print(api.list_event_for_all_namespaces())
Anything else we need to know?: A workaround was provided at https://stackoverflow.com/a/72591958/401041 .
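For reference, a monkey-patch in the same spirit looks like the sketch below. This is only an illustration; the linked answer may differ in details, and the generated model internals can change between client versions.

from kubernetes.client.models.v1beta1_event import V1beta1Event

# Hypothetical workaround sketch: replace the generated event_time setter
# with one that skips the "must not be None" client-side validation.
# Apply this before making any API calls that deserialize events.
def _set_event_time(self, event_time):
    self._event_time = event_time

V1beta1Event.event_time = V1beta1Event.event_time.setter(_set_event_time)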
Environment:
- Kubernetes version (kubectl version):
  Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.0", GitCommit:"4ce5a8954017644c5420bae81d72b09b735c21f0", GitTreeState:"clean", BuildDate:"2022-05-03T13:46:05Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"windows/amd64"}
  Kustomize Version: v4.5.4
  Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.3", GitCommit:"816c97ab8cff8a1c72eccca1026f7820e93e0d25", GitTreeState:"clean", BuildDate:"2022-01-25T21:19:12Z", GoVersion:"go1.17.6", Compiler:"gc", Platform:"linux/amd64"}
- OS: Windows 10
- Python version (python --version): Python 3.9.6
- Python client version (pip list | grep kubernetes): kubernetes 23.6.0
To add some information to this issue: the behavior in the description is triggered by events like the following, in which eventTime is null:
{
  "apiVersion": "v1",
  "count": 33623,
  "eventTime": null,
  "firstTimestamp": "2022-05-28T19:48:56Z",
  "involvedObject": {
    "apiVersion": "cdi.kubevirt.io/v1beta1",
    "kind": "CDI",
    "name": "cdi-kubevirt-hyperconverged",
    "resourceVersion": "563035442",
    "uid": "d2d2434c-c1f6-4f27-98cb-1a69106dd9c1"
  },
  "kind": "Event",
  "lastTimestamp": "2022-06-13T16:54:23Z",
  "message": "Successfully ensured SecurityContextConstraint exists",
  "metadata": {
    "creationTimestamp": "2022-05-28T19:48:56Z",
    "name": "cdi-kubevirt-hyperconverged.16f35ca1771fe01e",
    "namespace": "default",
    "resourceVersion": "604296130",
    "uid": "3b21aeb3-2100-4559-8c2f-2064f5af831d"
  },
  "reason": "CreateResourceSuccess",
  "reportingComponent": "",
  "reportingInstance": "",
  "source": {
    "component": "operator-controller"
  },
  "type": "Normal"
}
I'm not affiliated with @joaompinto, but I see around 1000 events that have null eventTime on our cluster, so presumably this isn't uncommon.
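If you only need the raw event data, one way to sidestep the client-side validation entirely is to skip model deserialization altogether. Here is a sketch (not an official fix; it relies on the _preload_content keyword accepted by the generated API methods):

import json
from kubernetes import config, client

config.load_kube_config()
api = client.EventsV1beta1Api()

# Ask for the raw HTTP response instead of a deserialized V1beta1EventList,
# so the event_time "must not be None" check never runs.
resp = api.list_event_for_all_namespaces(_preload_content=False)
for item in json.loads(resp.data)["items"]:
    print(item["metadata"]["name"], item.get("eventTime"))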
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
...because this seems like a real bug.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle stale /remove-lifecycle rotten
This is a genuine bug; it just hit me too. I'm looking for alternatives now. I need to list events, not watch them, which is what the watcher script does.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
@dennislabajo: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@larsks @joaompinto @Aiky30 can any of you help re-open this? Otherwise I can just create a new one.
/reopen
@joaompinto: Reopened this issue.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I'm hitting this bug after creating an event with EventsV1Event. The only way I was able to create the event was with a UTC timezone, as suggested in https://github.com/kubernetes-client/python/issues/730:
events_api.create_namespaced_event(
namespace=namespace,
body=EventsV1Event(
kind="Event",
type="Normal",
event_time=datetime.now(timezone.utc),
...
However, that datetime does not seem to be correctly serialized and stored even with that workaround. When I query the events I get this traceback:
events = events_api.list_namespaced_event(namespace=namespace)
../.venv/lib/python3.9/site-packages/kubernetes/client/api/events_v1_api.py:807: in list_namespaced_event
return self.list_namespaced_event_with_http_info(namespace, **kwargs) # noqa: E501
../.venv/lib/python3.9/site-packages/kubernetes/client/api/events_v1_api.py:922: in list_namespaced_event_with_http_info
return self.api_client.call_api(
../.venv/lib/python3.9/site-packages/kubernetes/client/api_client.py:348: in call_api
return self.__call_api(resource_path, method,
../.venv/lib/python3.9/site-packages/kubernetes/client/api_client.py:192: in __call_api
return_data = self.deserialize(response_data, response_type)
../.venv/lib/python3.9/site-packages/kubernetes/client/api_client.py:264: in deserialize
return self.__deserialize(data, response_type)
../.venv/lib/python3.9/site-packages/kubernetes/client/api_client.py:303: in __deserialize
return self.__deserialize_model(data, klass)
../.venv/lib/python3.9/site-packages/kubernetes/client/api_client.py:639: in __deserialize_model
kwargs[attr] = self.__deserialize(value, attr_type)
../.venv/lib/python3.9/site-packages/kubernetes/client/api_client.py:280: in __deserialize
return [self.__deserialize(sub_data, sub_kls)
../.venv/lib/python3.9/site-packages/kubernetes/client/api_client.py:280: in <listcomp>
return [self.__deserialize(sub_data, sub_kls)
../.venv/lib/python3.9/site-packages/kubernetes/client/api_client.py:303: in __deserialize
return self.__deserialize_model(data, klass)
../.venv/lib/python3.9/site-packages/kubernetes/client/api_client.py:641: in __deserialize_model
instance = klass(**kwargs)
../.venv/lib/python3.9/site-packages/kubernetes/client/models/events_v1_event.py:112: in __init__
self.event_time = event_time
@event_time.setter
def event_time(self, event_time):
"""Sets the event_time of this EventsV1Event.
eventTime is the time when this Event was first observed. It is required. # noqa: E501
:param event_time: The event_time of this EventsV1Event. # noqa: E501
:type: datetime
"""
if self.local_vars_configuration.client_side_validation and event_time is None: # noqa: E501
> raise ValueError("Invalid value for `event_time`, must not be `None`") # noqa: E501
E ValueError: Invalid value for `event_time`, must not be `None`
../.venv/lib/python3.9/site-packages/kubernetes/client/models/events_v1_event.py:291: ValueError
I'm using kubernetes 25.3.0 on Python 3.9.
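As a stopgap I'm listing events without model deserialization, which avoids the validation error. A sketch, reusing the events_api and namespace variables from the snippet above:

import json

# Bypass the generated models so the null eventTime returned by the server
# does not trip the client-side "must not be None" check.
resp = events_api.list_namespaced_event(namespace=namespace, _preload_content=False)
events = json.loads(resp.data)["items"]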
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.