
[On hold] TEST: Produce JSON logs of executed examples

Open david-cortes-intel opened this issue 1 year ago • 2 comments

Description

This PR adds an additional log file to the script run_examples.py that records the results of all example executions in JSON format, including details such as execution time and outcome (success / failure).

The JSON file will contain a list of all executed examples, with variant names (e.g. nodist, nostream) appended as suffixes to the example names, and will also include examples that were skipped.
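The structure described above could be produced by a helper along these lines. This is a minimal sketch, not the actual implementation in run_examples.py: the function name `run_with_log` and the field names (`example`, `outcome`, `seconds`) are assumptions for illustration, and the real log schema may differ.

```python
import json
import time


def run_with_log(examples, log_path="examples_log.json"):
    """Run each example, timing it, and write the results as a JSON list.

    ``examples`` is an iterable of (name, variant, callable) tuples; a
    ``None`` callable marks an example that was skipped. Field names here
    are hypothetical -- the real run_examples.py schema may differ.
    """
    results = []
    for name, variant, func in examples:
        # Variants (e.g. "nodist", "nostream") become suffixes on the name.
        entry = {"example": f"{name}_{variant}" if variant else name}
        if func is None:
            entry["outcome"] = "skipped"
            entry["seconds"] = 0.0
        else:
            start = time.perf_counter()
            try:
                func()
                entry["outcome"] = "success"
            except Exception:
                entry["outcome"] = "failure"
            entry["seconds"] = round(time.perf_counter() - start, 3)
        results.append(entry)
    with open(log_path, "w") as f:
        json.dump(results, f, indent=2)
    return results
```

Writing a single top-level JSON list (rather than one object per line) keeps the file trivially loadable with one `json.load` call for downstream reporting.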


Checklist to comply with before moving PR from draft:

PR completeness and readability

  • [x] I have reviewed my changes thoroughly before submitting this pull request.
  • [ ] I have commented my code, particularly in hard-to-understand areas.
  • [ ] I have updated the documentation to reflect the changes or created a separate PR with update and provided its number in the description, if necessary.
  • [x] Git commit message contains an appropriate signed-off-by string (see CONTRIBUTING.md for details).
  • [x] I have added the respective label(s) to the PR if I have permission to do so.
  • [x] I have resolved any merge conflicts that might occur with the base branch.

Testing

  • [x] I have run it locally and tested the changes extensively.
  • [ ] All CI jobs are green or I have provided justification why they aren't.
  • [ ] I have extended testing suite if new functionality was introduced in this PR.

Performance

  • [ ] I have measured performance for affected algorithms using scikit-learn_bench and provided at least summary table with measured data, if performance change is expected.
  • [ ] I have provided justification why performance has changed or why changes are not expected.
  • [ ] I have provided justification why quality metrics have changed or why changes are not expected.
  • [ ] I have extended benchmarking suite and provided corresponding scikit-learn_bench PR if new measurable functionality was introduced in this PR.

david-cortes-intel avatar Oct 07 '24 10:10 david-cortes-intel

@david-cortes-intel is it possible just to rewrite the runner as a simple parametrized pytest test? That way, more consistent statistics would be produced.

samir-nasibli avatar Oct 08 '24 09:10 samir-nasibli

Yes, it looks like these examples also get executed as part of unit tests.

If PR https://github.com/intel/scikit-learn-intelex/pull/2090 switches those runs to pytest, then it'd be better to get JSON logs from a pytest plugin instead, as they would be more detailed and follow the same format as the other tests.

So this PR might not be needed in the end. It wouldn't hurt to merge it, but it shouldn't be a priority.

david-cortes-intel avatar Oct 08 '24 09:10 david-cortes-intel