Skipping a test in Pytest with the skip decorator is not reflected in CTest execution
I marked a test as skipped using the skip decorator available in Pytest (see https://docs.pytest.org/en/stable/how-to/skipping.html#skipping-test-functions):
```python
# test_example.py
import pytest

# Test currently broken
@pytest.mark.skip(reason="Bug: Failing due to config changes")
def test_some_random_feature():
    # ...
    pass
```
When running the above example with CTest:

```shell
ctest
```
I observe the test being reported as passed by CTest:

```
Start 1: test_example.test_some_random_feature
1/1 Test #1: test_example.test_some_random_feature ...................... Passed 0.11 sec
```
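As I understand it, the root cause is that pytest exits with status 0 when tests are skipped, and CTest treats a zero exit status as a pass unless a test property tells it otherwise. A minimal, self-contained sketch of the same behavior using the stdlib `unittest` runner (pytest's exit code is also 0 in this situation):

```python
import os
import subprocess
import sys
import tempfile
import textwrap

# Sketch (stdlib only, since pytest may not be installed): a test runner
# exits with status 0 even when its only test is skipped. pytest behaves
# the same way, which is why CTest reports "Passed" for a skipped test.
test_src = textwrap.dedent("""
    import unittest

    class Example(unittest.TestCase):
        @unittest.skip("Bug: Failing due to config changes")
        def test_some_random_feature(self):
            pass

    if __name__ == "__main__":
        unittest.main()
""")

with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(test_src)
    path = f.name

result = subprocess.run([sys.executable, path], capture_output=True, text=True)
os.unlink(path)
print("exit code:", result.returncode)  # 0 despite the skip
```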
Adding the following property in `pytest_discover_tests` seems to handle it:

```cmake
pytest_discover_tests(
    test_example
    DEPENDS test_example.py
    PROPERTIES
        SKIP_REGULAR_EXPRESSION "SKIPP;skipp"
)
```
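For reference, `SKIP_REGULAR_EXPRESSION` takes a semicolon-separated list of regular expressions, and CTest marks the test as skipped if any of them matches the test's output. A small Python sketch of that matching logic against pytest's terminal summary line:

```python
import re

# CTest's SKIP_REGULAR_EXPRESSION property takes a semicolon-separated list
# of regular expressions; the test is marked as skipped if ANY of them
# matches the test's output. This mimics that matching logic.
patterns = "SKIPP;skipp".split(";")
summary = "============================== 1 skipped in 0.01s =============================="

matched = [p for p in patterns if re.search(p, summary)]
print(matched)  # → ['skipp'], since "skipp" matches inside the word "skipped"
```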
Output:

```
Start 1: test_example.test_some_random_feature
1/1 Test #1: test_example.test_some_random_feature ......................***Skipped 0.12 sec
```
A documentation or unit-test update would help with tracking this behavior.
That's a tricky one. Pytest still collects the test but skips it during execution, so it appears as an individual CTest test that technically "passes," even though it was actually skipped.
If you run `ctest -VV`, you'll likely see something like this:

```
1: ../test_example.py s
1:
1: ============================== 1 skipped in 0.01s ==============================
1/1 Test #1: test_example.test_some_random_feature ......... Passed 0.22 sec
```
By adding `SKIP_REGULAR_EXPRESSION "SKIPP;skipp"`, as you did, CTest detects the "1 skipped in 0.01s" summary line in the output and actively marks the test as skipped at the CTest level. Without the property, the `pytest.mark.skip` logic is still honored by pytest; the skip is simply not surfaced to CTest.
I'm a bit hesitant to add this as a feature—or even document it—since this method could break if there's another test that shouldn’t be skipped but happens to contain keywords like "skipped" in its name, such as test_something_is_skipped.py. This could lead to unintended behavior where a test gets mistakenly filtered out at the CTest level.
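One possible mitigation (a sketch, not something endorsed here) is to anchor the expression to pytest's summary line rather than the bare keyword, so a test file name containing "skipped" is less likely to trigger a false positive:

```cmake
pytest_discover_tests(
    test_example
    DEPENDS test_example.py
    PROPERTIES
        # Match pytest's summary line, e.g. "1 skipped in 0.01s", instead of
        # the bare word "skipped"; a file name such as
        # test_something_is_skipped.py then no longer triggers a skip.
        SKIP_REGULAR_EXPRESSION "[0-9]+ skipped"
)
```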
I observed this behavior too, but I could not figure out the best way to expose the test's actual state to CTest. Otherwise, a passed result in CI from CTest for these pytest tests gives the false impression that the test has been fixed and is working. Tracking skipped tests can indicate whether a test is being actively fixed on the bug tracker, or whether the feature has been deprecated and the test warrants removal.
Hopefully, I or somebody else can add comments here in the discussion to come up with a better solution.
For now, I will probably have to set strict guidelines against using the keyword "skipped" in any part of the code or test names.