feat(toolchain): Add coveragepy configuration attribute
- Why this change is being made: the PR makes it possible to configure Python code coverage through a .coveragerc file or any other config file accepted by coverage.py.
- Before and after behavior: before this PR it was not possible to configure code coverage, since the configuration was hardcoded within the ruleset. After this PR, users can pass the label of a configuration file.
- Reference issue: closes https://github.com/bazelbuild/rules_python/issues/1434
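For illustration, a minimal sketch of how this could look from the user side, assuming the configuration is exposed as a label attribute on py_runtime; the attribute name `coveragerc` below is hypothetical, not the final API:

```starlark
# BUILD.bazel -- sketch only; `coveragerc` is a hypothetical attribute name,
# not necessarily what this PR ends up calling it.
load("@rules_python//python:defs.bzl", "py_runtime")

py_runtime(
    name = "py3_runtime",
    interpreter_path = "/usr/bin/python3",
    python_version = "PY3",
    # Hypothetical: label of a coverage.py configuration file.
    coveragerc = ".coveragerc",
)
```

Any config file format coverage.py accepts (.coveragerc, setup.cfg, tox.ini, pyproject.toml) could be pointed to this way.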
@rickeylev @aignas PTAL, and please advise how this can be properly tested.
I know there are analysis tests for the Python rules and toolchains, but I am not sure how easy it would be to implement one for this.
Ideally we would like to move in a direction that allows different test runners to be used in the py_test rule and lets them override the coverage generation. This is not something I know how to achieve right now, but a way for a test-runner middleware to override the coverage (or other things) would definitely be more maintainable long term than adding more things to the py_runtime/toolchains.
Notes on a similar feature for cc_test are here; maybe we could make use of them: https://github.com/trybka/scraps/blob/master/cc_test.md
This looks like a really great approach. Does it require changing the py_test rule to accept something similar to the CcTestRunnerInfo?
Yeah, most likely, but I haven't done a deep dive about it yet.
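For discussion, a rough sketch of what such a provider-based hand-off could look like, loosely modeled on the cc_test runner notes linked above. Every name here (PyTestRunnerInfo, py_test_runner, and their fields) is made up for illustration; nothing like it exists in rules_python today:

```starlark
# Sketch of a hypothetical test-runner provider; none of this exists yet.
PyTestRunnerInfo = provider(
    doc = "Describes a test runner that py_test could delegate to.",
    fields = {
        "executable": "File: runner binary to invoke instead of the default bootstrap.",
        "coverage_rc": "File or None: coverage.py configuration the runner should honor.",
        "env": "dict[str, str]: extra environment variables for the test action.",
    },
)

def _py_test_runner_impl(ctx):
    return [PyTestRunnerInfo(
        executable = ctx.executable.runner,
        coverage_rc = ctx.file.coverage_rc,
        env = ctx.attr.env,
    )]

py_test_runner = rule(
    implementation = _py_test_runner_impl,
    attrs = {
        "runner": attr.label(executable = True, cfg = "exec"),
        "coverage_rc": attr.label(allow_single_file = True),
        "env": attr.string_dict(),
    },
)
```

py_test would then look for this provider (in deps, or on a dedicated attribute) and let it replace the default coverage wiring.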
There are two separate ideas here: passing an rc file, and overriding coverage generation.
Just thinking out loud; some quick thoughts.
On the rule side, the config file has to be passed in to something, somewhere. Naively, something in the test rule has a reference to the config as a File (though I suppose not necessarily a File; it could be something more complex, like the cc_test runner provider with a callback, etc.). Where does that reference come from?
- directly via an attribute?
- directly via some provider in deps?
- indirectly through toolchains?
Once it has that file reference, it can then pass it somewhere, somehow, so the next step can happen. A sketch of the toolchain option is below.
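As a concrete shape for the toolchain option, a minimal sketch of a rule implementation pulling such a file off the resolved Python toolchain; the `coveragerc` field on the runtime is an assumption, and `coverage_config_probe` is just a throwaway name:

```starlark
# coverage_plumbing.bzl -- sketch of the "indirectly through toolchains" option.
_PY_TOOLCHAIN_TYPE = "@rules_python//python:toolchain_type"

def _impl(ctx):
    runtime = ctx.toolchains[_PY_TOOLCHAIN_TYPE].py3_runtime
    coveragerc = getattr(runtime, "coveragerc", None)  # hypothetical field on py_runtime
    files = [coveragerc] if coveragerc else []

    # A real implementation would hand this File to the bootstrap (env var,
    # template substitution, ...); here we only surface it in runfiles.
    return [DefaultInfo(runfiles = ctx.runfiles(files = files))]

coverage_config_probe = rule(
    implementation = _impl,
    toolchains = [_PY_TOOLCHAIN_TYPE],
)
```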
Next step: the runtime side.
So, some way that py_test's stage2 bootstrap can be hooked into? Fundamentally, what this needs to do is change some of the calls that stage2 bootstrap makes to the coverage APIs. Another template variable in stage2 bootstrap? A special module that gets imported with a hook function? Something I keep coming back to: if dependencies had some way to more directly hook into the startup/bootstrap process, stuff like this would be easier.
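For the "another template variable" option, a minimal sketch of what the substitution could look like; the helper name, the template argument, and the "%coverage_rc%" key are invented for illustration and are not the names rules_python's real bootstrap templates use:

```starlark
# Sketch only: substitute a coverage rc path into a stage2 bootstrap template.
def _expand_stage2_bootstrap(ctx, template, coveragerc):
    out = ctx.actions.declare_file(ctx.label.name + "_stage2_bootstrap.py")
    ctx.actions.expand_template(
        template = template,
        output = out,
        substitutions = {
            # Empty string means "no config file"; the bootstrap would pass a
            # non-empty value through to coverage.py.
            "%coverage_rc%": coveragerc.short_path if coveragerc else "",
        },
    )
    return out
```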
Let me try to recapitulate. @aignas: your suggestion would make it possible to use different runners (pytest, unittest, green, coverage-py) to run tests, since we can configure a corresponding executable, e.g. https://github.com/bazelbuild/rules_python/compare/main...ewianda:rules_python:feat-test-toolchain-poc?expand=1
@rickeylev: your suggestion would just make it possible to provide a configuration file to coverage-py, e.g. https://github.com/bazelbuild/rules_python/compare/main...ewianda:rules_python:feat-configure-coverage-rc-env-variable?expand=1
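From the user side, that env-variable flavor might look roughly like this; it assumes the bootstrap is changed so that coverage.py can pick up COVERAGE_RCFILE (which coverage.py honors when no explicit config is given), and the target names are placeholders:

```starlark
# BUILD.bazel -- sketch of the env-variable flavor; today the config is
# hardcoded in the bootstrap, which is what this PR is about changing.
load("@rules_python//python:defs.bzl", "py_test")

py_test(
    name = "unit_tests",
    srcs = ["unit_tests.py"],
    data = [".coveragerc"],
    env = {
        # coverage.py reads COVERAGE_RCFILE when no explicit config is passed.
        "COVERAGE_RCFILE": "$(rootpath .coveragerc)",
    },
)
```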
The approach suggested in https://github.com/bazelbuild/rules_python/pull/2224#issuecomment-2357318988 is implemented in https://github.com/bazelbuild/rules_python/pull/2246.