
I want to flag that test_b failed, not skip it

Open q2qiang opened this issue 1 year ago • 2 comments

With pytest-dependency, when the dependency (test_a) passes but the dependent test (test_b) itself fails, how do I get test_b reported as a failure instead of a skip?

from pytest_dependency import depends

def test_a():
    assert 1 == 1

def test_b(request):
    depends(request, "test_a")
    assert 2 != 2

test_1.py::test_a PASSED
test_1.py::test_b SKIPPED (test_b depends on t)

q2qiang avatar Mar 05 '25 07:03 q2qiang

test_1.py::test_b actually fails; it should be flagged as a failure, not skipped.

q2qiang avatar Mar 05 '25 07:03 q2qiang

I think it's because test_a doesn't have the dependency marker applied. Either set the automark_dependency option (in your pytest ini file) to register all test cases as possible dependencies, or switch to the marker-based style so you can mark test_a explicitly.

import pytest

def test_a():
    assert True

@pytest.mark.dependency(depends=['test_a'])
def test_b():
    assert False

Run with no additional pytest arguments: test_a passes, but test_b is skipped because of its missing dependency. This doesn't work because test_b can't find the dependency test_a: since we neither enabled automark_dependency to register all test cases as possible dependencies nor explicitly marked test_a, its completion status isn't stored.

import pytest

@pytest.mark.dependency()
def test_a():
    assert True

@pytest.mark.dependency(depends=['test_a'])
def test_b():
    assert False

Run with no additional pytest arguments: test_a passes and test_b fails, rather than being skipped.

import pytest

def test_a():
    assert True

@pytest.mark.dependency(depends=['test_a'])
def test_b():
    assert False

Run with automark_dependency enabled as described in the documentation: test_a passes and test_b fails.
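For that last variant, automark_dependency is enabled in the pytest configuration file rather than on the command line. A minimal pytest.ini would look something like this (assuming a standard project layout; setup.cfg or tox.ini sections work too):

    [pytest]
    automark_dependency = true

With this in place, every test is registered as a possible dependency even without an explicit @pytest.mark.dependency() marker, so test_b can find test_a's recorded outcome and fails on its own assertion instead of being skipped.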

tplunkett-trimac avatar Apr 15 '25 22:04 tplunkett-trimac