dvc-bench
Benchmarks for DVC
updates:
- [github.com/psf/black: 24.4.0 → 24.4.2](https://github.com/psf/black/compare/24.4.0...24.4.2)
Bumps [actions/configure-pages](https://github.com/actions/configure-pages) from 4 to 5.

Release notes (sourced from actions/configure-pages's releases), v5.0.0 changelog:
- Attempt to auto-detect configuration files with varying file extensions @JamesMGreene (#139)
- Convert errors into Actions-compatible logging...
data status messed up the table and plots:

```
benchmark 'test_data_status-data-changed': 4 tests

Name (time in s)    Min    Max    Mean    StdDev    Median    IQR    Outliers    OPS    Rounds    Iterations
...
```
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 3 to 4.

Release notes (sourced from actions/download-artifact's releases), v4.0.0: the release of upload-artifact@v4 and download-artifact@v4 are major changes to the backend architecture of Artifacts....

Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 3 to 4.

Release notes (sourced from actions/upload-artifact's releases), v4.0.0: the release of upload-artifact@v4 and download-artifact@v4 are major changes to the backend architecture of Artifacts....
The benchmarks are now all focused on data management. We need a use case focusing on experiments. This should have many revisions and exp refs and include:
* `dvc exp...
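As a rough sketch of how such a repo could be seeded with many experiment refs (assuming an existing `dvc.yaml` pipeline driven by a parameter; the parameter name `foo` is a placeholder):

```shell
# Hypothetical sketch: queue a batch of experiments to produce many exp refs.
# Assumes the repo already has a params-driven dvc.yaml pipeline.
for i in $(seq 1 100); do
    dvc exp run --queue -S "foo=$i"   # queue one experiment per param value
done
dvc exp run --run-all                 # execute everything in the queue
```

Each completed run leaves behind an experiment ref, so the benchmark repo ends up with the large ref count the use case calls for.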
Need a benchmark to test `repro` performance, including:
- Multi-stage pipelines
- Large deps and outputs, both in size and in number of files
- Multiple pipeline iterations
- With and...
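A minimal hypothetical `dvc.yaml` sketch of the kind of multi-stage pipeline such a benchmark could exercise (stage names, scripts, and paths are placeholders):

```yaml
# Hypothetical pipeline for a repro benchmark: two chained stages where
# data/raw is a large directory (many files) and outputs feed downstream.
stages:
  prepare:
    cmd: python prepare.py
    deps:
      - data/raw          # assumed large dep: big size, many files
    outs:
      - data/prepared
  train:
    cmd: python train.py
    deps:
      - data/prepared
    outs:
      - models/model.pkl
```

Running `dvc repro` repeatedly against a pipeline like this would cover the multi-stage and multi-iteration cases listed above.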
Add benchmarks for cloud versioning remotes
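A possible setup sketch for such a remote, assuming DVC's `version_aware` remote option and an S3 bucket with object versioning enabled (bucket and remote names are placeholders):

```shell
# Hypothetical config: a cloud-versioned (version-aware) remote.
dvc remote add -d versioned-remote s3://my-bucket/dvc-store
dvc remote modify versioned-remote version_aware true
```

With a remote configured this way, the push/pull benchmarks could be parameterized to run against both plain and version-aware remotes.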
E.g., a 1M-file dataset and a 10M-file dataset (maybe more as well?) would be great to have.
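A minimal sketch of how such datasets could be generated for benchmarking, assuming deterministic synthetic files are acceptable (the function name, bucketing scheme, and sizes are illustrative, not part of dvc-bench):

```python
from pathlib import Path


def generate_dataset(root: str, num_files: int, file_size: int = 1024,
                     files_per_dir: int = 1000) -> int:
    """Create `num_files` files of `file_size` bytes under `root`,
    bucketed into subdirectories of at most `files_per_dir` entries
    so no single directory becomes unwieldy."""
    root_path = Path(root)
    for i in range(num_files):
        subdir = root_path / f"dir_{i // files_per_dir:05d}"
        subdir.mkdir(parents=True, exist_ok=True)
        # Deterministic content, so a regenerated dataset hashes identically.
        (subdir / f"file_{i:08d}.bin").write_bytes(bytes([i % 256]) * file_size)
    return num_files
```

At `num_files=1_000_000` or `10_000_000` this is slow but embarrassingly parallel, so a real generator would likely shard the index range across processes.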