test: passed test suite contains failed test cases
/kind bug
What steps did you take and what happened:
example: https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/directory/pull-kubernetes-e2e-capz-windows-dockershim/1417250279195152384
The job above has two failed test cases but it's marked as passed:
Test started yesterday at 3:31 PM passed after 1h24m54s. (more info)
What did you expect to happen:
Anything else you would like to add:
Environment:
- cluster-api-provider-azure version:
- Kubernetes version (use kubectl version):
- OS (e.g. from /etc/os-release):
This appears to happen for Linux as well: https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/directory/pull-kubernetes-e2e-capz-conformance/1416217627528794112
/priority important-soon
/help
@CecileRobertMichon: This request has been marked as needing help from a contributor.
Please ensure the request meets the requirements listed here.
If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.
In response to this:
/priority important-soon
/help
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Looks like this is an issue in CAPI test framework, CAPA has the same problem: https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/periodic-cluster-api-provider-aws-e2e-conformance-with-k8s-ci-artifacts/1422914264003252224
Transferred to CAPI repo as this is an issue in the CAPI test framework and affects CAPI as well https://testgrid.k8s.io/sig-cluster-lifecycle-cluster-api#capi-e2e-main-1-18-1-19.
I think ginkgo does not return a non-zero exit code, and that's why RunContainer does not fail (which is probably good, because with our current code we would otherwise not gather the test results and wouldn't see them in Prow). IMHO, one solution could be to read the test results after https://github.com/kubernetes-sigs/cluster-api/blob/e0ced24598da02d8d64d93da71acbf86fd6a7e32/test/framework/kubetest/run.go#L193 and, if we find failed tests, return an error.
cc @sbueringer @randomvariable
/help
@CecileRobertMichon: This request has been marked as needing help from a contributor.
Please ensure the request meets the requirements listed here.
If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.
In response to this:
/help
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
From a chat on Slack: it looks like we're going to have to parse the JUnit XML and figure out whether something failed, which is pretty terrible.
Philosophically, the current behaviour was intended in the initial implementation, given that getting any sort of result from conformance was a big success in itself. Taking the next step and making a failed conformance test fail the suite is reasonable.
FYI, Prow uses GoogleCloudPlatform/testgrid to parse junit reports for the junit lens: https://github.com/kubernetes/test-infra/blob/master/prow/spyglass/lenses/junit/lens.go#L162 https://github.com/GoogleCloudPlatform/testgrid/blob/master/metadata/junit/junit.go#L198-L204
No idea if it's a good idea to use it.
If it does what we want, I don't see why not.
Just thinking that we might not want to have a dependency on testgrid, and I'm not sure how stable it is as a library (current version is 0.0.91). But since the worst case is copying ~200 lines of code, it shouldn't hurt to have a dependency on it.
We have the third_party folder, into which you could copy code from upstream repositories if needed.
/milestone v1.0
/priority important-soon
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/lifecycle frozen
/triage accepted
Worth investigating whether this is still the case.
Definitely still the case.
This can be reproduced ~ like this:
- Run an upgrade test locally with a breakpoint before we run kubetest
- Break the cluster (e.g. change the image in the CoreDNS Deployment to an invalid name)
- Wait 10m (or add system-pods-startup-timeout: 1m to conformance.yaml)
- Step through the remaining code. I would expect us to copy the junit reports around, but based on the code we don't read and parse the results to fail the e2e test if there are any failures in the junit reports
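The timeout tweak from the third step would look roughly like this in conformance.yaml (illustrative excerpt; this assumes the file passes flags through to the kubetest/e2e suite as flat keys):

```yaml
# conformance.yaml (illustrative excerpt): flags passed to the e2e suite.
# Lowering the system-pods startup timeout makes the deliberately broken
# CoreDNS Deployment fail fast instead of after the default 10m.
system-pods-startup-timeout: 1m
```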
cc @knabben (Just in case you're looking for more work in a bit :), I can help you get the local e2e test up & running)
This issue is labeled with priority/important-soon but has not been updated in over 90 days, and should be re-triaged.
Important-soon issues must be staffed and worked on either currently, or very soon, ideally in time for the next release.
You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Deprioritize it with /priority important-longterm or /priority backlog
- Close this issue with /close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
/triage accepted
@sbueringer Can we close this now that https://github.com/kubernetes-sigs/cluster-api-provider-azure/issues/2265 is solved?
Not sure about CAPZ; I think they had other problems. But I think our problem is not solved.
Should still be reproducible via: https://github.com/kubernetes-sigs/cluster-api/issues/5123#issuecomment-1370032836
We didn't make any changes in core CAPI
This issue is labeled with priority/important-soon but has not been updated in over 90 days, and should be re-triaged.
Important-soon issues must be staffed and worked on either currently, or very soon, ideally in time for the next release.
You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Deprioritize it with /priority important-longterm or /priority backlog
- Close this issue with /close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
/assign @chrischdi
To re-assess whether we handle the results from the conformance test properly
I tried to reproduce it again and @sbueringer already had the correct idea here:
Could it be that the ContainerInspect is looking into the newly restarted container instead of the "run" which just failed?
xref: fix in CAPI: https://github.com/kubernetes-sigs/cluster-api/pull/7946
source: https://github.com/kubernetes-sigs/cluster-api-provider-azure/issues/2265
So before merging #7946 there was a race condition between reading the exit code and the restart triggered by docker, which cleared the exit code.
So if ginkgo fails, it should now always return its non-zero exit code and thereby let the test fail.
One improvement which could maybe be done: still run the framework.GatherJUnitReports function to show the potentially failed tests in Prow instead of hiding away the junit report in the error case.
I will create a PR for this.
/close
Appendix:
I was able to reproduce the issue by using the instructions from here plus:
- Removing the line at: https://github.com/kubernetes-sigs/cluster-api/blob/main/test/framework/kubetest/run.go#L204
- Having a breakpoint at https://github.com/kubernetes-sigs/cluster-api/blob/main/test/infrastructure/container/docker.go#L536
- and let the debugger continue as soon as docker restarted the container
Note: this means that every usage of RunContainer which does not set RestartPolicy: dockercontainer.RestartPolicyDisabled may not be able to rely on the exit code check.
@chrischdi: Closing this issue.
In response to this:
I tried to reproduce it again and @sbueringer already had the correct idea here:
Could it be that the ContainerInspect is looking into the newly restarted container instead of the "run" which just failed?
xref: fix in CAPI: https://github.com/kubernetes-sigs/cluster-api/pull/7946
source: https://github.com/kubernetes-sigs/cluster-api-provider-azure/issues/2265
So before merging #7946 there was a race condition between reading the exit code and the restart triggered by docker, which cleared the exit code.
So if ginkgo fails, it should now always return its non-zero exit code and thereby let the test fail.
One improvement which could maybe be done: still run the framework.GatherJUnitReports function to show the potentially failed tests in Prow instead of hiding away the junit report in the error case. I will create a PR for this.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@chrischdi Just that I got it right. Current behavior:
- If any conformance test fails, the test in Prow will fail as well
- If any conformance test fails, we won't get the junit report in Prow (by which we would see what exactly failed)

Is that correct?
@chrischdi Just that I got it right. Current behavior:
- If any conformance test fails, the test in Prow will fail as well
Exactly. The case described in this issue, where the test "passed", was due to the race condition caused by docker restarting the container.
- If any conformance test fails, we won't get the junit report in Prow (by which we would see what exactly failed)

Is that correct?
We would get them in prow, but hidden in artifacts as xml file / not visualised by prow. But this will be fixed via #10493
We would get them in prow, but hidden in artifacts as xml file / not visualised by prow. But this will be fixed via https://github.com/kubernetes-sigs/cluster-api/pull/10493
Not sure I got this part. I think we only have one xml file in artifacts, and that is the one that Prow uses to visualize: https://gcsweb.k8s.io/gcs/kubernetes-jenkins/logs/periodic-cluster-api-e2e-conformance-main/1780353942337622016/artifacts/
Are there more? (that are also uploaded to artifacts)
Ah, we move the files (in GatherJUnitReports)? So if the move doesn't happen they are still here? https://gcsweb.k8s.io/gcs/kubernetes-jenkins/logs/periodic-cluster-api-e2e-conformance-main/1780353942337622016/artifacts/kubetest/k8s-conformance-zcv9fy/
Yeah, to make them visible to prow we have to move them up.