Support benchmark parameters
If benchmarks can be parameterized, one can run the same code over inputs of different sizes, for example, and check how an algorithm scales with its input.
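To illustrate the idea, a parameterized benchmark could look roughly like the sketch below. This is only a rough sketch, not a proposal for the final API: the `nonius::param` accessor and the `--param` command-line flag are invented names for illustration; nothing like them exists in nonius yet. Only the `NONIUS_BENCHMARK`/`chronometer` usage follows the current interface.

```c++
// Rough sketch only -- the parameter API shown here is hypothetical.
#define NONIUS_RUNNER            // usual nonius setup: generate a main()
#include <nonius/nonius.h++>

#include <algorithm>
#include <cstdlib>
#include <vector>

NONIUS_BENCHMARK("sort N ints", [](nonius::chronometer meter) {
    // Hypothetical accessor: "size" would be supplied at run time,
    // e.g. `./benchmarks --param size=100000`, so the same benchmark
    // can be run over inputs of different sizes.
    auto n = nonius::param<std::size_t>("size", 1000); // invented for illustration

    std::vector<int> input(n);
    std::generate(input.begin(), input.end(), std::rand);

    meter.measure([&] {
        auto copy = input;
        std::sort(copy.begin(), copy.end());
        return copy;
    });
})
```

Running the suite once per parameter value would then give the data needed to see how the algorithm scales.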
Hi @rmartinho! Since this feature would be very useful for the work I am doing at the moment, I would like to start adding it myself. Do you have any specific concerns or opinions on the design of this feature that I should consider before starting the work?
Thanks!
Hello @arximboldi. I've recently been working on this, so if you can wait until the weekend perhaps, I can clean up and push some code I have, and then we can improve on it together if you want.
Thanks for your interest, btw!
@rmartinho yeah, I can wait until the weekend.
FYI, yesterday I made this commit sketching a little bit what I thought the API could look like: https://github.com/arximboldi/nonius/commit/fc3c6a6e8d507a56b2d6a2631b12f6276af91ae1
Sorry I got started so quickly. I took a look at your 36_parameters branch, and since it was empty and I kind of need this feature soon, I thought... "ok whatever, I'll just do it" hehe :-)