
Subtests not itemized in JUnit XML

Open codambro opened this issue 2 months ago • 5 comments

Testing with tip-of-main to try out new subtests integration. Simple subtest example

def test_subtests(subtests):
    """Simple subtest example"""
    for i in range(2):
        with subtests.test(msg="is_even", i=i):
            assert i % 2 == 0, f"{i} is odd"

I expect each subtest to appear as its own <testcase> element in the generated JUnit XML. Instead, only the top-level test is reported:

<testcase classname="samples.test_sample" name="test_subtests" time="0.006">
<failure message="AssertionError: 1 is odd&#10;assert (1 % 2) == 0">subtests = &lt;_pytest.subtests.Subtests object at 0x103b1d580&gt;

    def test_subtests(subtests):
        """Simple subtest example"""
        for i in range(2):
            with subtests.test(msg="is_even", i=i):
&gt;               assert i % 2 == 0, f"{i} is odd"
E               AssertionError: 1 is odd
E               assert (1 % 2) == 0

samples/test_sample.py:87: AssertionError</failure><system-out>--------------------------------- Captured Log ---------------------------------

</system-out>
<failure message="contains 1 failed subtest">contains 1 failed subtest</failure>
<system-out>--------------------------------- Captured Log ---------------------------------

</system-out>
</testcase>
  • [x] a detailed description of the bug or problem you are having
  • [ ] output of pip list from the virtual environment you are using
  • [x] pytest and operating system versions
  • [x] minimal example if possible

codambro avatar Nov 04 '25 18:11 codambro

I can see that in junitxml there is some caching of reporters by nodeid:

    def node_reporter(self, report: TestReport | str) -> _NodeReporter:
        nodeid: str | TestReport = getattr(report, "nodeid", report)
        # Local hack to handle xdist report order.
        workernode = getattr(report, "node", None)

        key = nodeid, workernode

        if key in self.node_reporters:
            # TODO: breaks for --dist=each
            return self.node_reporters[key]

A SubtestReport will have the same nodeid as its parent test, so when "_opentestcase" is called for each subtest, they all receive the same cached "_NodeReporter".
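The collision can be demonstrated in isolation. Below is a minimal, self-contained simulation of that caching behavior; the names (ReporterCache, node_reporter, include_subtest_msg) are illustrative stand-ins for pytest's internals, not its actual code, and the "fix" of widening the key to include the subtest message is just a sketch of one possible direction:

```python
# Illustrative simulation of the junitxml reporter cache.
# Names are hypothetical; this is NOT pytest's actual implementation.

class NodeReporter:
    """Stand-in for _pytest.junitxml._NodeReporter (one <testcase>)."""
    def __init__(self, key):
        self.key = key

class ReporterCache:
    """Stand-in for the nodeid-keyed cache in junitxml.node_reporter."""
    def __init__(self, include_subtest_msg=False):
        self.node_reporters = {}
        self.include_subtest_msg = include_subtest_msg

    def node_reporter(self, nodeid, subtest_msg=None):
        # Current behavior keys only on nodeid (plus xdist workernode,
        # omitted here); a subtest-aware cache could also key on the
        # subtest message so each subtest gets its own reporter.
        key = (nodeid, subtest_msg) if self.include_subtest_msg else (nodeid,)
        if key in self.node_reporters:
            return self.node_reporters[key]
        reporter = NodeReporter(key)
        self.node_reporters[key] = reporter
        return reporter

# Current behavior: subtest reports collapse onto the parent's reporter.
current = ReporterCache()
a = current.node_reporter("test_subtests", "is_even (i=0)")
b = current.node_reporter("test_subtests", "is_even (i=1)")
assert a is b  # one shared <testcase> for all subtests

# Widened key: one reporter (and thus one <testcase>) per subtest.
fixed = ReporterCache(include_subtest_msg=True)
a = fixed.node_reporter("test_subtests", "is_even (i=0)")
b = fixed.node_reporter("test_subtests", "is_even (i=1)")
assert a is not b  # separate <testcase> entries
```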

codambro avatar Nov 04 '25 18:11 codambro

Hi @codambro,

Indeed, junitxml currently does not report subtests.

I wonder how we would report the subtests given the schema? https://github.com/jenkinsci/xunit-plugin/blob/master/src/main/resources/org/jenkinsci/plugins/xunit/types/model/xsd/junit-10.xsd

https://github.com/pytest-dev/pytest/blob/76910ca669594849278a30bf6a943b4cd465fa05/src/_pytest/junitxml.py#L2-L9

nicoddemus avatar Nov 04 '25 18:11 nicoddemus

Since this should be a drop-in replacement for unittest, I reviewed the unittest-xml-reporting plugin to see how it handles subtests in its JUnit XML:

	<testcase classname="samples.test_sample_unittest.SampleTestSuite" name="test_unittest_subtests (number=0)" time="0.000" timestamp="0001-01-01T00:00:00" file="samples/test_sample_unittest.py" line="59">
		<!--Simple test with subtests-->
		<failure type="AssertionError" message="0 is not true : 0 should be odd"><![CDATA[Traceback (most recent call last):
  File "/Users/dambrosi/Documents/GitHub/hpc-shs-qa/samples/test_sample_unittest.py", line 64, in test_unittest_subtests
    self.assertTrue(number % 2, f"{number} should be odd")
AssertionError: 0 is not true : 0 should be odd
]]></failure>
	</testcase>

I don't love their handling: subtests are only listed when at least one fails. So either:

  • If 1 or more subtests failed, ONLY failed subtests are reported in the XML. The top-level test is NOT reported.
  • If all subtests passed, NO subtests are reported in the XML, only the top-level test is reported as passing.

The xmlrunner README does call out that this is considered "limited support of subtests". So IMHO this is a chance for pytest to improve, since open issues against xmlrunner show community desire for better support. By reporting ALL subtests AND the parent test, pass or fail, reports stay consistent between runs.
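For concreteness, itemized output for the example above might look like the fragment below. This is purely a sketch of one possible shape, not an agreed format; the name suffix and time values are made up for illustration:

```xml
<!-- Hypothetical itemized output: parent plus one <testcase> per subtest -->
<testcase classname="samples.test_sample" name="test_subtests" time="0.006"/>
<testcase classname="samples.test_sample" name="test_subtests [is_even] (i=0)" time="0.001"/>
<testcase classname="samples.test_sample" name="test_subtests [is_even] (i=1)" time="0.001">
  <failure message="AssertionError: 1 is odd">...</failure>
</testcase>
```

This shape stays within the schema (each subtest is just a normal testcase) while making both passing and failing subtests visible.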

codambro avatar Nov 10 '25 14:11 codambro

So if I understand correctly, it reports failed subtests as normal testcase failures? I guess that's the best you can do that still fits the schema.

nicoddemus avatar Nov 10 '25 23:11 nicoddemus

Currently, yes. I'd advocate reporting ALL subtests as normal "testcase" entries, pass or fail.

codambro avatar Dec 02 '25 14:12 codambro