Subtests not itemized in JUnit XML
Testing with tip-of-main to try out the new subtests integration. Simple subtest example:

```python
def test_subtests(subtests):
    """Simple subtest example"""
    for i in range(2):
        with subtests.test(msg="is_even", i=i):
            assert i % 2 == 0, f"{i} is odd"
```
I expect each subtest to appear as a separate `<testcase>` entry, but instead everything is collapsed into a single one:

```xml
<testcase classname="samples.test_sample" name="test_subtests" time="0.006">
  <failure message="AssertionError: 1 is odd assert (1 % 2) == 0">subtests = <_pytest.subtests.Subtests object at 0x103b1d580>

    def test_subtests(subtests):
        """Simple subtest example"""
        for i in range(2):
            with subtests.test(msg="is_even", i=i):
>               assert i % 2 == 0, f"{i} is odd"
E               AssertionError: 1 is odd
E               assert (1 % 2) == 0

samples/test_sample.py:87: AssertionError</failure>
  <system-out>--------------------------------- Captured Log ---------------------------------
</system-out>
  <failure message="contains 1 failed subtest">contains 1 failed subtest</failure>
  <system-out>--------------------------------- Captured Log ---------------------------------
</system-out>
</testcase>
```
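For comparison, the itemized output I would hope for looks something like the following. This is a hypothetical sketch, not an agreed-upon format: the `[is_even (i=N)]` naming and the timings are made up, just to illustrate one `<testcase>` per subtest plus an entry for the parent test.

```xml
<!-- Hypothetical expected output: one entry per subtest, plus the parent -->
<testcase classname="samples.test_sample" name="test_subtests [is_even (i=0)]" time="0.003"/>
<testcase classname="samples.test_sample" name="test_subtests [is_even (i=1)]" time="0.003">
  <failure message="AssertionError: 1 is odd">AssertionError: 1 is odd</failure>
</testcase>
<testcase classname="samples.test_sample" name="test_subtests" time="0.006">
  <failure message="contains 1 failed subtest">contains 1 failed subtest</failure>
</testcase>
```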
- [x] a detailed description of the bug or problem you are having
- [ ] output of `pip list` from the virtual environment you are using
- [x] pytest and operating system versions
- [x] minimal example if possible
I can see that in junitxml there is some caching by nodeid:

```python
def node_reporter(self, report: TestReport | str) -> _NodeReporter:
    nodeid: str | TestReport = getattr(report, "nodeid", report)
    # Local hack to handle xdist report order.
    workernode = getattr(report, "node", None)

    key = nodeid, workernode

    if key in self.node_reporters:
        # TODO: breaks for --dist=each
        return self.node_reporters[key]
```

A `SubtestReport` will have the same nodeid as the parent. So when we call `_opentestcase` for each subtest, they all use the same cached `_NodeReporter`.
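The collision can be illustrated with a minimal, self-contained model of the keying behaviour (this is not pytest's actual code; the class and method names only mirror the snippet above):

```python
# Minimal model of junitxml's reporter cache (not pytest internals).
# Keying only by (nodeid, workernode) means every subtest report, which
# shares the parent test's nodeid, resolves to the same cached reporter
# and therefore lands in the same <testcase>.

class _NodeReporter:
    def __init__(self, key):
        self.key = key


class ReporterCache:
    def __init__(self):
        self.node_reporters = {}

    def node_reporter(self, nodeid, workernode=None):
        key = nodeid, workernode
        if key in self.node_reporters:
            return self.node_reporters[key]
        reporter = _NodeReporter(key)
        self.node_reporters[key] = reporter
        return reporter


cache = ReporterCache()
# Two subtest reports from the same test carry an identical nodeid,
# so both calls return the one cached reporter.
a = cache.node_reporter("samples/test_sample.py::test_subtests")
b = cache.node_reporter("samples/test_sample.py::test_subtests")
print(a is b)  # True
```

One way out would be to include something subtest-specific (e.g. the subtest message) in the cache key, so each subtest gets its own reporter.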
Hi @codambro,
Indeed, junitxml currently does not report subtests.
I wonder how we would report the subtests considering the schema? https://github.com/jenkinsci/xunit-plugin/blob/master/src/main/resources/org/jenkinsci/plugins/xunit/types/model/xsd/junit-10.xsd
https://github.com/pytest-dev/pytest/blob/76910ca669594849278a30bf6a943b4cd465fa05/src/_pytest/junitxml.py#L2-L9
Since this should be a drop-in replacement for unittest, I reviewed the unittest-xml-reporting plugin to see how it handles subtests in its JUnit XML:

```xml
<testcase classname="samples.test_sample_unittest.SampleTestSuite" name="test_unittest_subtests (number=0)" time="0.000" timestamp="0001-01-01T00:00:00" file="samples/test_sample_unittest.py" line="59">
  <!--Simple test with subtests-->
  <failure type="AssertionError" message="0 is not true : 0 should be odd"><![CDATA[Traceback (most recent call last):
  File "/Users/dambrosi/Documents/GitHub/hpc-shs-qa/samples/test_sample_unittest.py", line 64, in test_unittest_subtests
    self.assertTrue(number % 2, f"{number} should be odd")
AssertionError: 0 is not true : 0 should be odd
]]></failure>
</testcase>
```
I don't love their handling. They only list subtests if there is a failed one. So either:
- If 1 or more subtests failed, ONLY failed subtests are reported in the XML. The top-level test is NOT reported.
- If all subtests passed, NO subtests are reported in the XML, only the top-level test is reported as passing.
The xmlrunner README does call out that this is considered "limited support of subtests". So IMHO this is a chance for pytest to improve, since there are open issues against xmlrunner showing community desire for better handling: report ALL subtests AND the parent test, pass or fail. That way, reports are consistent between runs.
So if I understand correctly, it reports failed subtests as normal testcase failures? I guess that's the best you can do that still fits the schema.
Currently, yes. I'd advocate reporting ALL subtests as normal `testcase` entries, pass or fail.
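As a rough sketch of that proposal (the `[msg]` naming convention and the `make_testcases` helper are mine, not an agreed format): each subtest becomes its own `<testcase>`, pass or fail, and the parent test is emitted as well, carrying the "contains N failed subtests" summary:

```python
# Sketch: emit every subtest plus the parent test as its own <testcase>.
# Purely illustrative; not pytest's actual junitxml implementation.
import xml.etree.ElementTree as ET


def make_testcases(parent_name, classname, subtest_results):
    """subtest_results: list of (msg, failure_message_or_None) tuples."""
    cases = []
    failed = 0
    for msg, failure in subtest_results:
        # One <testcase> per subtest, distinguished by its message.
        tc = ET.Element("testcase", classname=classname,
                        name=f"{parent_name} [{msg}]")
        if failure is not None:
            failed += 1
            ET.SubElement(tc, "failure", message=failure).text = failure
        cases.append(tc)
    # The parent test is always reported too, summarizing subtest failures.
    parent = ET.Element("testcase", classname=classname, name=parent_name)
    if failed:
        summary = f"contains {failed} failed subtest{'s' if failed != 1 else ''}"
        ET.SubElement(parent, "failure", message=summary).text = summary
    cases.append(parent)
    return cases


cases = make_testcases(
    "test_subtests", "samples.test_sample",
    [("is_even (i=0)", None), ("is_even (i=1)", "AssertionError: 1 is odd")],
)
for tc in cases:
    print(ET.tostring(tc, encoding="unicode"))
```

This stays within the schema (every entry is a plain `testcase`), keeps the report shape identical between passing and failing runs, and still lets CI dashboards group subtests under the parent by name.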