feature-request: Extend Benchmarking
Does this project have specific performance goals that are maintained or monitored over time?
Something like what Deno does, where performance is tracked continuously so regressions show up early.
Recently I found this GitHub Actions-based tool that seems to be in the right vein, though it's tied to gh-pages.
I am definitely down to put together a PR for it if there is interest. I will need some input on what would be most beneficial to benchmark, though.
Hello, thanks for the suggestions.
I will try to connect https://github.com/Prozi/detect-collisions#benchmark (with some changes) with what you linked.
I updated the benchmark to be more deterministic and
used it inside the CircleCI build process: https://app.circleci.com/pipelines/github/Prozi/detect-collisions
example:
https://app.circleci.com/pipelines/github/Prozi/detect-collisions/156/workflows/61b5a6fd-9947-4443-9517-f4a28251cd05/jobs/130

I tried reading up on what you pasted but had no success on the first try - it seems quite complicated.
Have you tried using this tool yourself, and could you provide some help if needed?
I have, and it only works well with GitHub Pages.
The trick is that the build agent is only there to:
- Pull down the previous run
- Append the latest
- Truncate to the window of results you are interested in
- Commit & push the results back into the gh-pages branch
Looking at the stress script, I think the better route would be to define some boundaries/cases and then build out the suites to support them:
- Insert 100 bodies, non-overlapping
- Insert 100 bodies, overlapping
- Update 100 bodies, non-overlapping
- Update 100 bodies, overlapping
- Remove 100 bodies, non-overlapping
- Remove 100 bodies, overlapping

Repeat the above for each shape type, then mixed.
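
To make that concrete, here is a rough sketch of what one slice of such a suite could look like, using tinybench (proposed below) on top of the detect-collisions System. The grid spacing used to force the non-overlapping case is my own assumption.

```ts
import { Bench } from "tinybench";
import { System } from "detect-collisions";

// Spread circles on a grid wider than their diameter so nothing overlaps,
// or pile them onto the same spot so everything overlaps.
function insertCircles(system: System, count: number, overlapping: boolean) {
  for (let i = 0; i < count; i++) {
    const x = overlapping ? 0 : i * 25; // radius 10 => 25px apart never overlaps
    system.createCircle({ x, y: 0 }, 10);
  }
}

const bench = new Bench({ time: 500 });

bench
  .add("Insert 100 bodies, non-overlapping", () => {
    insertCircles(new System(), 100, false);
  })
  .add("Insert 100 bodies, overlapping", () => {
    insertCircles(new System(), 100, true);
  });

await bench.run();
console.table(bench.table());
```

The update/remove variants would follow the same pattern, with the bodies created once in a setup step so the measured function only performs the operation under test.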
- What are the expectations for the ray casts?
- What is the upper bound for the system and how many entities it should be supporting collisions with?
- Are you open to leveraging something like tinybench to help offload the processing/statistical side of this?
I think I have some time tomorrow and may be able to get you a PR to help set the direction for this, assuming my comments above align with the vision for the library. I have synced up my fork and will try to get you a PR later in the afternoon, time permitting.
> What are the expectations for the ray casts?
TBH, the goal was that they should just work, on all types of bodies, as can be seen in the tank demo: https://prozi.github.io/detect-collisions/demo/
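
For reference, the raycast call being exercised there looks roughly like this (a minimal sketch; the exact raycast signature and result shape should be double-checked against the current typings):

```ts
import { System } from "detect-collisions";

const system = new System();
system.createBox({ x: 50, y: -10 }, 20, 20); // spans x: 50..70, y: -10..10

// Cast a ray from the origin to the right; hit is null if nothing is struck.
const hit = system.raycast({ x: 0, y: 0 }, { x: 100, y: 0 });

if (hit) {
  console.log("hit", hit.body, "at", hit.point);
}
```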
> What is the upper bound for the system and how many entities it should be supporting collisions with?
Based on the benchmark, I would say roughly 1500 constantly moving bodies while keeping updates at 60 FPS.
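
A quick way to sanity-check that figure (a sketch only, not the actual stress script - the movement pattern and layout are made up):

```ts
import { System } from "detect-collisions";

const system = new System();
const bodies = Array.from({ length: 1500 }, (_, i) =>
  system.createCircle({ x: (i % 50) * 20, y: Math.floor(i / 50) * 20 }, 8)
);

// One simulated frame: jitter every body, then resolve all collisions.
function frame() {
  for (const body of bodies) {
    body.setPosition(body.pos.x + Math.random() - 0.5, body.pos.y + Math.random() - 0.5);
  }
  system.checkAll(() => {});
}

const start = performance.now();
for (let i = 0; i < 60; i++) frame();
const msPerFrame = (performance.now() - start) / 60;
console.log(`~${msPerFrame.toFixed(2)} ms/frame (below 16.67 ms keeps 60 FPS)`);
```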
> Are you open to leveraging something like tinybench to help offload the processing/statistical side of this?
yes
> I think I have some time tomorrow and may be able to get you a PR to help set the direction for this, assuming my comments above align with the vision for the library. I have synced up my fork and will try to get you a PR later in the afternoon, time permitting.
I would love a merge request with such changes
Alrighty!
I am going to take a stab at it tomorrow. I have started nosing around already, and I think my schedule this week is open enough to get this moving.
Now that we have the baseline in here, is there a segment you would like to focus our efforts on?
> Now that we have the baseline in here, is there a segment you would like to focus our efforts on?
hello
I think the speed of testing could be measured for:
- collisions between convex and non-convex polygons (we are already testing convex via circle, triangle, and box, I believe; we could reuse the non-convex polygon from the tests / tank demo, or use another one)
- static and non-static bodies: does adding X static bodies influence collision detection, and what about inserting another X? (creating a body with the isStatic flag option, or setting it later, makes the object static)
- zero vs. non-zero paddings: how does padding influence speed when using N bodies?
- updating bodies - a very important one in my view. The slowest part of the system is reinsertion into the rbush tree (the second slowest, I believe, is SAT collision checking, but I don't think we can go much faster there). The reinsertion happens when an object is updateBody'd and its new bbox has moved outside its bbox-with-padding.

Those are off the top of my head as things that could use a proper benchmark.
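
For the padding/static points above, here is a sketch of the knobs involved, built around the isStatic and padding body options and the updateBody call mentioned in this thread (treat the option placement as an assumption to verify against the typings):

```ts
import { System } from "detect-collisions";

const system = new System();

// Static body: it never moves, so it should never trigger a reinsertion.
system.createBox({ x: 0, y: 0 }, 200, 10, { isStatic: true });

// Padded body: its bbox in the rbush tree is grown by `padding`, so small
// moves stay inside the padded bbox and skip the costly reinsert.
const player = system.createCircle({ x: 50, y: 50 }, 10, { padding: 5 });

player.setPosition(52, 50);
system.updateBody(player); // still inside the padded bbox: no reinsertion

player.setPosition(90, 50);
system.updateBody(player); // moved outside the padded bbox: reinserted into rbush
```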
Also thinking of:
- moving from CircleCI to GitHub workflows altogether
- merging the stress-test benchmark (npm run benchmark) into the workflow (a separate workflow? a benchmark workflow?)
- running the tests in a separate job as a GitHub workflow
What are your opinions?
Maybe divide the benchmarks into long-running and fast-running, put them on different workflows/pipelines, and add the fast-running ones to a pre-commit hook?
The insertion and collision benchmarks are the fast ones, I guess, and the stress benchmark (which can be improved) is long-running because it has 10x 1000 ms scenarios. We can make them shorter, but even then it will be 10 x N ms, and FPS measurements over less than 1000 ms are not precise.
This library looks awesome. Appreciate the streamlined focus on doing one thing and doing it well.
Have you done any benchmark comparisons between this library and more general physics libraries like Rapier, Jolt, matter.js, Planck.js?
> This library looks awesome. Appreciate the streamlined focus on doing one thing and doing it well.
> Have you done any benchmark comparisons between this library and more general physics libraries like Rapier, Jolt, matter.js, Planck.js?
No, but basically, if they don't use WebAssembly, I doubt that something implementing THE SAME THING plus physics would be any faster than my library.
Also, I have more features than some of them, because of:
- padding
- offset
- rotation
- scale
- polygons
- raycasting
- both concave and convex polygons
@bfelbo thanks