# CodSpeed - performance benchmarks
> [!NOTE]
> - [ ] Annotation needed for `benchmark(BenchmarkFixture)`
## Problem

It's difficult to catch performance degradations or improvements over time, in a PR, etc.

## Changes

Add performance benchmarks:

- TBD
- Set up CodSpeed
  - Configure on website, set secret, etc.
- py(deps[test]): Add `pytest-codspeed`
See also:
- https://pypi.org/project/pytest-codspeed/
- https://docs.codspeed.io/
- https://codspeed.io/
## Summary by Sourcery
Add performance benchmarks using Codspeed. Integrate Codspeed into the CI workflow to automatically run performance tests and report results.
CI:
- Integrate Codspeed into the CI workflow to trigger performance tests on pull requests and pushes, and on demand via manual dispatch.
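A minimal sketch of such a workflow job, assuming the documented `CodSpeedHQ/action` integration pattern (the actual `.github/workflows/tests.yml` in this PR may differ):

```yaml
# Hypothetical sketch - the real workflow in this PR may differ.
benchmarks:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - uses: actions/setup-python@v5
      with:
        python-version: "3.12"
    - run: pip install '.[test]'
    - name: Run benchmarks
      uses: CodSpeedHQ/action@v3
      with:
        token: ${{ secrets.CODSPEED_TOKEN }}
        run: pytest --codspeed
```

The `CODSPEED_TOKEN` secret corresponds to the "set secret" step from the Changes section above.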
Tests:
- Add `pytest-codspeed` to enable performance testing.
## Codecov Report

Attention: Patch coverage is 83.64780% with 26 lines in your changes missing coverage. Please review.
Project coverage is 64.09%. Comparing base (bc6e897) to head (4ac41d3).
| Files with missing lines | Patch % | Lines |
|---|---|---|
| src/libvcs/pytest_plugin.py | 80.30% | 16 Missing and 10 partials :warning: |
Additional details and impacted files
```diff
@@            Coverage Diff             @@
##           master     #471      +/-   ##
==========================================
+ Coverage   63.85%   64.09%   +0.24%
==========================================
  Files          40       40
  Lines        3591     3724     +133
  Branches      774      790      +16
==========================================
+ Hits         2293     2387      +94
- Misses        772      800      +28
- Partials      526      537      +11
```
:umbrella: View full report in Codecov by Sentry.
@sourcery-ai review
## Reviewer's Guide by Sourcery
This pull request introduces performance benchmarks using CodSpeed. It sets up CodSpeed configuration, adds the pytest-codspeed dependency, and integrates it into the testing workflow. A benchmark is added to the test_repo_git_obtain_initial_commit_repo test function.
Sequence diagram for benchmark execution flow
```mermaid
sequenceDiagram
    participant Dev as Developer
    participant CI as CI Pipeline
    participant CS as CodSpeed
    Dev->>CI: Push code changes
    activate CI
    CI->>CI: Run tests with pytest
    CI->>CI: Execute benchmarks
    CI->>CS: Send benchmark results
    activate CS
    CS->>CS: Analyze performance
    CS-->>Dev: Report performance changes
    deactivate CS
    deactivate CI
```
## File-Level Changes

| Change | Details | Files |
|---|---|---|
| Set up CodSpeed to collect performance benchmarks. | | `.github/workflows/tests.yml`<br>`pyproject.toml` |
| Added a performance benchmark to an existing test. | | `tests/sync/test_git.py` |
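The `pyproject.toml` side of the change can be sketched as follows (hypothetical fragment; the project's actual dependency-group layout may differ):

```toml
# Hypothetical fragment - the real pyproject.toml layout may differ.
[project.optional-dependencies]
test = [
  "pytest",
  "pytest-codspeed",
]
```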
## Tips and commands

### Interacting with Sourcery

- Trigger a new review: Comment `@sourcery-ai review` on the pull request.
- Continue discussions: Reply directly to Sourcery's review comments.
- Generate a GitHub issue from a review comment: Ask Sourcery to create an issue from a review comment by replying to it.
- Generate a pull request title: Write `@sourcery-ai` anywhere in the pull request title to generate a title at any time.
- Generate a pull request summary: Write `@sourcery-ai summary` anywhere in the pull request body to generate a PR summary at any time. You can also use this command to specify where the summary should be inserted.
### Customizing Your Experience
Access your dashboard to:
- Enable or disable review features such as the Sourcery-generated pull request summary, the reviewer's guide, and others.
- Change the review language.
- Add, remove or edit custom review instructions.
- Adjust other review settings.
### Getting Help
- Contact our support team for questions or feedback.
- Visit our documentation for detailed guides and information.
- Keep in touch with the Sourcery team by following us on X/Twitter, LinkedIn or GitHub.