
Creating benchmarks for piccolo ORM

Open aminalaee opened this issue 4 years ago • 7 comments

I'm usually not a fan of benchmarks, but I think it'd be a good idea to have some comparing piccolo with other sync/async ORMs. It would also help with cases like #143, comparing piccolo against itself under different configurations.

aminalaee avatar Jul 31 '21 11:07 aminalaee

@aminalaee I like the idea of having benchmarks to catch performance regressions made by changes to the Piccolo codebase.

I find them quite tricky to implement though, as there are no guarantees about the performance of the CI infrastructure, so one build might be slower than another without that having anything to do with the code being tested.

As for testing it against other frameworks, it's hard to know what to test. Piccolo is fastest in this situation:

# Sketch assuming a Starlette-style endpoint; `MyTable` is a Piccolo Table subclass.
from starlette.responses import Response

# `freeze` caches a lot of the work involved in generating the SQL:
QUERY = MyTable.select().output(as_json=True).freeze()

async def some_endpoint(request):
    # Letting Piccolo serialise the JSON means orjson is used if available,
    # which is super fast.
    data = await QUERY.run()
    return Response(data, media_type="application/json")

Other frameworks might not have comparable features, or might have their own performance optimisations we're not aware of.

What do you think?

dantownsend avatar Jul 31 '21 17:07 dantownsend

@dantownsend I think for comparing different frameworks, benchmarking basic INSERT/SELECT/UPDATE/DELETE without any special configuration is a good start. I agree that optimizing each framework would be complicated, and probably unfair to some of them.
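A framework-agnostic harness along these lines could time the basic CRUD operations of each ORM without any per-framework tuning. This is only a minimal sketch using the standard library; `time_query` and `coro_factory` are hypothetical names, and the Piccolo call mentioned in the docstring is just one example of what could be passed in:

```python
import asyncio
import time

async def time_query(coro_factory, iterations=100):
    """Await the coroutine `iterations` times and return total elapsed seconds.

    `coro_factory` is any zero-argument callable returning a coroutine,
    e.g. ``lambda: MyTable.select().run()`` for Piccolo, so the same
    harness works for every async ORM being compared.
    """
    start = time.perf_counter()
    for _ in range(iterations):
        await coro_factory()
    return time.perf_counter() - start
```

Each ORM's insert/select/update/delete coroutine would be wrapped the same way, so the only difference between runs is the library under test.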

For regression testing Piccolo, I think we can try pytest-benchmark, and if load on the GitHub-hosted runners skews the numbers, move to dedicated hardware. Increasing the number of queries and averaging the results across runs should also minimise that effect.
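The averaging idea can be sketched with the standard library alone; pytest-benchmark does something similar with calibration and richer statistics built in. `median_per_call` is a hypothetical helper name:

```python
import statistics
import timeit

def median_per_call(fn, rounds=5, calls_per_round=1000):
    """Time `fn` over several rounds and return the median per-call time.

    Running many calls per round and taking the median across rounds
    damps outliers caused by noisy shared CI machines.
    """
    totals = timeit.repeat(fn, repeat=rounds, number=calls_per_round)
    return statistics.median(totals) / calls_per_round
```

The median is used rather than the mean because a single slow round on a loaded CI box would otherwise dominate the result.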

aminalaee avatar Jul 31 '21 21:07 aminalaee

@aminalaee Yeah, that makes sense. Do you think the benchmarks should be part of this repo, or a separate repo?

dantownsend avatar Jul 31 '21 22:07 dantownsend

@dantownsend I think for comparing different frameworks we can have a separate repo, so anyone can see how it works and run the benchmarks locally. We could then show the results in the Piccolo docs. That would also keep Piccolo's commit history free of benchmarking changes.

And for Piccolo regression tests, I think pytest-benchmark can do a good job with a GitHub workflow, but as you said this is a bit tricky, so it needs more testing.

If you think we need both of them we can do them separately.

aminalaee avatar Aug 01 '21 05:08 aminalaee

Sounds like a good plan. It would be nice to have both, but having either of them would be useful.

dantownsend avatar Aug 03 '21 19:08 dantownsend

@dantownsend If we want the extra repository for the comparisons, please create the repo and I'll open an MR to get it started.

aminalaee avatar Aug 04 '21 06:08 aminalaee

Here's a repo in case you feel like doing some performance testing:

https://github.com/piccolo-orm/piccolo_performance

dantownsend avatar Aug 06 '21 14:08 dantownsend