
testmon validation tool (running all tests but big warning in case testmon would miss changes)

Open notestaff opened this issue 8 years ago • 4 comments

@tarpas Can you add an option where testmon determines which tests it would skip, but still runs all tests, and if any test that testmon would have skipped fails, prints a big alarm message? Then I could integrate testmon into our test suite and try it for a while without the risk of missing any test failures due to a testmon bug. This would help convince my team that we can trust testmon's implementation.

It'd also be nice to print how much time testmon would have saved had it been allowed to actually skip the tests it determined are skippable.
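The validation mode proposed above could be sketched roughly as follows. This is a minimal stand-in, not testmon's API: `all_tests` maps test ids to callables returning pass/fail, and `would_skip` is the set of ids testmon predicted it could deselect. All names are hypothetical.

```python
import time

def validate_testmon(all_tests, would_skip):
    """Run every test anyway; alarm on failures testmon would have
    hidden, and report the time that skipping would have saved."""
    missed, saved = [], 0.0
    for test_id, test in all_tests.items():
        start = time.perf_counter()
        passed = test()
        elapsed = time.perf_counter() - start
        if test_id in would_skip:
            saved += elapsed            # testmon would not have run this test
            if not passed:
                missed.append(test_id)  # failure a real skip would have masked
    if missed:
        print("!!! ALARM: testmon would have missed failures in:", missed)
    print(f"testmon would have saved roughly {saved:.2f}s")
    return missed, saved
```

Run over a range of commits, a harness like this would accumulate both the trust metric (any missed failures?) and the payoff metric (time saved).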

notestaff avatar Feb 21 '18 20:02 notestaff

The default behavior of testmon considers all test outcomes equal (passed, failed, skipped), so your proposal is a little confusing to me.

What I would like to have is a tool which runs all tests and warns in case a test outcome changed between two runs, but testmon predicted that it would not change (and therefore would have deselected it). A little automation on top of that could take a repository, run this process on a range of commits, and report inconsistencies. In the real world, most of the inconsistencies would be dependencies between tests and changes which testmon doesn't detect yet (build environment/scripts, requirements.txt changes, etc.). To confirm or rule out a dependency between tests, some smart process could also be programmed...

@notestaff Do you use --testmon in a local development environment? Or were you only considering it for CI?

I'm working on some innovations in the local development environment (GUI), so I'll not have time to work on this proposal for a couple of months, sorry.

tarpas avatar Feb 22 '18 06:02 tarpas

I want to use it for CI. But that also requires implementing the saving of coverage: saving per-test coverage, then merging the saved coverage of skipped tests after mapping the corresponding line numbers in changed files using something like https://github.com/AndersDJohnson/diff-map/blob/master/README.md; and also reliable compatibility with xdist...
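The line-number mapping step that a diff-map-style library would handle can be sketched with stdlib `difflib`. This maps 1-based line numbers in the old version of a file to their position in the new version, which is what re-anchoring saved per-test coverage after an edit needs; changed or deleted lines map to `None`. A minimal illustration, not testmon's actual mechanism.

```python
import difflib

def map_line_numbers(old_lines, new_lines):
    """Return {old_lineno: new_lineno or None} across an edit."""
    matcher = difflib.SequenceMatcher(a=old_lines, b=new_lines, autojunk=False)
    mapping = {}
    for tag, a1, a2, b1, b2 in matcher.get_opcodes():
        if tag == "equal":
            # unchanged block: lines shift by a constant offset
            for offset in range(a2 - a1):
                mapping[a1 + offset + 1] = b1 + offset + 1
        elif tag in ("replace", "delete"):
            # these old lines no longer exist as-is
            for old in range(a1 + 1, a2 + 1):
                mapping[old] = None
    return mapping
```

For example, inserting a line after line 1 shifts every later line's coverage record down by one, which the mapping captures.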

notestaff avatar Feb 22 '18 11:02 notestaff

What’s your motivation? Earlier reporting of failures? Or lower demand for CI servers capacity?

tarpas avatar Feb 22 '18 12:02 tarpas

Lower demand for CI servers. It's non-trivial to run our test suite locally, so I run most tests by pushing to the CI servers. But I guess I can change CI settings on a branch to use testmon, then change them back before merging.

"automation on top of that could take a repository and run this process on a range of commits and report inconsistencies" would be great. Even better if you could take from github/travis the history of test results that have already been run, and use that. You could report whether any test suite failures would have been missed, and how much time would have been saved with testmon.

Separately, I'm trying to repurpose testmon to avoid recomputation during production use. We have a repository with a bunch of scripts for bioinformatics pipelines. I've added automated logging so that when a script is run, we log its command line and the hashes of all inputs. If we also gather coverage while running the script, we can then detect when a script is being re-run on the same inputs with the same implementation, even if other parts of the repo have changed, and reuse cached results.
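The caching idea above could look roughly like this: fingerprint the command line, the input files, and the source files the script actually touched (e.g. the set gathered via coverage), and reuse the stored result when the fingerprint repeats. Everything here (`CACHE_DIR`, the function names) is illustrative, not part of testmon.

```python
import hashlib
import json
import pickle
from pathlib import Path

CACHE_DIR = Path(".result_cache")  # hypothetical cache location

def fingerprint(cmdline, input_paths, impl_files):
    """Hash the invocation plus the contents of inputs and implementation."""
    h = hashlib.sha256()
    h.update(json.dumps(cmdline).encode())
    for path in sorted(map(str, list(input_paths) + list(impl_files))):
        h.update(path.encode())
        h.update(Path(path).read_bytes())
    return h.hexdigest()

def cached_run(cmdline, input_paths, impl_files, run):
    """Call run() only for a new fingerprint; otherwise reuse the result."""
    CACHE_DIR.mkdir(exist_ok=True)
    key = CACHE_DIR / fingerprint(cmdline, input_paths, impl_files)
    if key.exists():
        return pickle.loads(key.read_bytes())
    result = run()
    key.write_bytes(pickle.dumps(result))
    return result
```

Because the fingerprint covers only the files the script actually executed, unrelated edits elsewhere in the repo don't invalidate the cache, which is the point of borrowing testmon's coverage-based dependency tracking.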

notestaff avatar Feb 23 '18 16:02 notestaff