[BUG] Requests to resources before server is ready
What happened:
Logs are full of warnings like:

```
W0903 12:20:21.602833 1864859 patch_genericapiserver.go:123] Request to "/apis/route.openshift.io/v1" (source IP 192.168.178.106:34822, user agent "Go-http-client/2.0") before server is ready, possibly a sign for a broken load balancer setup.
```
What you expected to happen:
No such warnings.
How to reproduce it (as minimally and precisely as possible):
- build the latest MicroShift from source and run it
Anything else we need to know?:
Environment:
- MicroShift version (use `microshift version`): 0.4.7-0.microshift-2021-07-07-002815
- Hardware configuration:
- OS (e.g. `cat /etc/os-release`): Fedora 33
- Kernel (e.g. `uname -a`):
- Others:
Relevant Logs
/assign mangelajo
Problem persists after rebase to OKD 4.8.
From @oglok's analysis in #422:
After a bit of investigation, we have applied the following patch:
https://github.com/redhat-et/microshift/blob/main/scripts/rebase_patches/0004_b301080e0639_UPSTREAM_carry_create-termination-events.patch
which should be used by the root (kube) apiserver, while the openshift-apiserver and oauth-apiserver should be using this patch:
openshift/kubernetes-apiserver@888e3d5
However, go.mod has a replace line pointing k8s.io/apiserver to openshift/kubernetes-apiserver, so all three servers pull in the same patched code. We need to patch or try to split the dependencies (I think).
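To illustrate the dependency problem described above, here is a hedged sketch of the kind of replace directive involved (the module path is from the comment above; the version string is a hypothetical placeholder, not the one actually pinned):

```
// go.mod (fragment, illustrative only)
replace k8s.io/apiserver => github.com/openshift/kubernetes-apiserver vX.Y.Z // hypothetical version
```

Because a replace directive applies module-wide, every consumer of k8s.io/apiserver in the build (kube-apiserver, openshift-apiserver, oauth-apiserver) resolves to the same fork, which is why the two different patches cannot currently be applied selectively.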
I am seeing this too. Is this "expected"? Does it cause any issues? Or is it just annoying?
We believe it's a result of the way we merge OpenShift bits without applying any non-OpenShift patches. We haven't observed issues from this. Of course, we want to fix this annoyance and so will revisit this after the rebase to 4.11 (which may already resolve the issue).
This should be resolved by #798