Running tests with workers causes an incorrect failure/pass count
What are you trying to achieve?
To run a series of test features and get an accurate result: either a correctly reported failure count of zero or more, or 100% success across all 78 tests.
What do you get instead?
In the output below, one test failed but the summary says all tests passed (and although there are 78 tests in total, the count says 77).
[5] ✔ [testname]
[2] ✔ [testname]
[8] ✖ [testname] 264128ms
[8] [featurename] --
[8] ✔ [testname]
[5] ✔ [testname]
[2] ✔ [testname]
[2] ✔ [testname]
[5] ✔ [testname]
[5] ✔ [testname]
[2] ✔ [testname]
[5] ✔ [testname]
[2] ✔ [testname]
[5] ✔ [testname]
[5] ✔ [testname]
[5] ✔ [testname]
[5] ✔ [testname]
[5] ✔ [testname]
[5] ✔ [testname]
OK | 77 passed // 1106.326s
/tmp/build/3e139e9d
+ '[' 0 -ne 0 ']'
In other runs, with no pipeline changes, it would report all 78 tests as passed yet show a failure count of -1:
FAIL | 78 passed, -1 failed // 1144.407s
Codecept is being run with:
codeceptjs run-workers --suites 8 --grep @full-suite
These asynchronous tests had been passing reliably for some time. They are all run within a CodeceptJS container.
Details
- CodeceptJS version: v2.4.1
- NodeJS Version: v12.10.0
- Operating System: Not sure
- puppeteer || webdriverio || protractor || testcafe version (if related): Not sure
- Configuration file:
exports.config = {
tests: './tests/*.js',
timeout: 10000,
output: './output',
helpers: {
Puppeteer: {
chrome: { args: ['--no-sandbox'] },
url: 'http://localhost:8000',
show: false,
restart: false,
keepBrowserState: true,
windowSize: '1200x800',
waitForNavigation: 'networkidle0',
waitForTimeout: 30000,
},
DomHelpers: { require: './helpers/DomHelpers.js' },
},
include: {
I: './steps/steps_file.js',
[more stuff]
},
plugins: {
retryFailedStep: {
enabled: true,
retries: 5,
minTimeout: 500,
},
[customplugin]
},
bootstrap: false,
// bail will fail hard after one test failure
mocha: { bail: false },
name: '[name]',
};
Hopefully the info I've provided is sufficient -- please let me know if I can provide anything else!
@Brydom I noticed the same thing when trying to use run-workers with our tests -> https://circleci.com/gh/Codeception/CodeceptJS/1874?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link
@koushikmohan1996 I wonder if this PR has anything to do with this? https://github.com/Codeception/CodeceptJS/pull/2141/files#diff-5280701c260a64e6784b06c1e7c60f63R221
Sorry if incorrect, not too familiar with the project. Just digging around in PRs :)
Can you check whether the tests run correctly when using the run command? It will help to narrow down the issue. I will also check if this is causing the issue.
Yes, tests run fine with run.
Thank you, I'll look into it.
@Brydom Thanks for raising the issue. And yeah, you are correct.
This issue occurs when two tests share the same Scenario title. I will think of a way to fix it. Meanwhile, you can verify this by giving the two tests distinct Scenario titles and running those tests alone.
var assert = require('assert');
var tries = 0;

Feature('Retry in workers');

// Both scenarios share the exact same title -- this is what triggers the
// miscount in run-workers.
Scenario('Flaky scenario', { retries: 2 }, () => {
  setTimeout(() => { tries++; }, 200);
  assert.equal(tries, 1);
});

Scenario('Flaky scenario', { retries: 2 }, () => {
  setTimeout(() => { tries++; }, 200);
  assert.equal(tries, 2);
});
This failed for me.
var assert = require('assert');
var tries = 0;

Feature('Retry in workers');

Scenario('Flaky scenario', { retries: 2 }, () => {
  setTimeout(() => { tries++; }, 200);
  assert.equal(tries, 1);
});

// The same test, but with a distinct title ('Flaky scenario 1') -- here the
// counts are reported correctly.
Scenario('Flaky scenario 1', { retries: 2 }, () => {
  setTimeout(() => { tries++; }, 200);
  assert.equal(tries, 2);
});
But this passed correctly
An example of how to reproduce this is in #2790.
Hello,
Is there any update on this subject?
I'm facing the same issue when I'm using data-driven tests with run-workers.
When I use run, if some of my tests fail, the exit code is correct and the job fails as expected. But when I use run-workers, if tests fail I get the message OK | 0 passed and my job doesn't fail at all.
I have activated --debug mode, and the title of each test appears to be unique.
So how can I fix my tests?
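For reference, a data-driven test of the shape described might look like this (a minimal sketch using the CodeceptJS 2.x callback signature; the feature name and data are hypothetical):

var assert = require('assert');

Feature('Data-driven totals');

// Hypothetical data table; Data() appends each row to the generated test
// title (e.g. 'adds totals | {"a":1,"b":2,"sum":3}'), so titles should be
// unique even though the Scenario name is reused.
const rows = new DataTable(['a', 'b', 'sum']);
rows.add([1, 2, 3]);
rows.add([2, 2, 4]);

Data(rows).Scenario('adds totals', (I, current) => {
  assert.equal(current.a + current.b, current.sum);
});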
I'm facing a similar issue. When a test fails, I see OK | 0 passed. Could you please let me know if there is a workaround for this?
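One blunt, untested workaround while the count is broken could be to derive the CI result from the run's text output rather than its exit code. For example, a small Node wrapper script (the command and failure patterns below are assumptions; adjust them to your suite):

// run-suite.js -- wrap run-workers and compute our own exit code from the
// output text, since the summary and exit code can be wrong per this issue.
const { spawn } = require('child_process');

const child = spawn('npx', ['codeceptjs', 'run-workers', '--suites', '8', '--grep', '@full-suite'], {
  stdio: ['inherit', 'pipe', 'inherit'],
});

let output = '';
child.stdout.on('data', (chunk) => {
  output += chunk;
  process.stdout.write(chunk); // still show live output in CI logs
});

child.on('close', (code) => {
  // Treat any failed-step marker or FAIL summary as a failure, regardless
  // of the exit code CodeceptJS reports.
  const failed = output.includes('✖') || output.includes('FAIL |');
  process.exit(failed ? 1 : code);
});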
When I run my test cases under a single tag with workers, they all execute and pass. But when I try to run 2 tags with 2 workers, two browsers open, yet only one browser performs any actions while the other sits idle.
Can anyone help me with this?
Any updates on this issue?
I'm having a similar issue: when running tests with run-workers and some validation in a Before hook fails, the test is marked as failed, but the final result is OK.
I have the following:
- Feature1
- F1_Test1
- F1_Test2
- F1_Test3
- Feature2
- F2_Test1
- F2_Test2
And when running, I have the following result:
[01] ✖ Feature1 in 0ms
[02] ✖ Feature1 in 0ms
[03] ✖ Feature1 in 0ms
[01] ✖ Feature2 in 0ms
[02] ✖ Feature2 in 0ms
OK | 0 passed
It shows the feature name, not the test name, repeated once for each test in the feature. All tests failed, yet the result is OK.
This is a huge problem: the pipeline reports the tests as passed, and we only noticed they were actually failing when we had to debug some tests.
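For context, a minimal sketch of the shape described (the feature and test names mirror the list above; the Before validation simply throws to mimic the failing setup):

Feature('Feature1');

// A Before hook whose validation throws. With run-workers, each scenario is
// then reported as failed (✖ Feature1 in 0ms), yet the summary still says
// OK | 0 passed.
Before((I) => {
  throw new Error('precondition validation failed');
});

Scenario('F1_Test1', (I) => {
  // never reached
});

Scenario('F1_Test2', (I) => {
  // never reached
});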