Collection runner parallelism should be optional
I have checked the following:
- [X] I've searched existing issues and found nothing related to my issue.
Describe the feature you want to add
The iteration count on the collection runner could be used for performance testing if it had an option to disable running the tests in parallel. For instance, I'd like to run a certain request 10 times so that I can figure out the average timing. Running the tests in parallel shows how it will perform under high load, but running them serially would be more useful for evaluating whether a specific change makes things better or worse.
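The kind of measurement described above can be sketched in plain Node (this is not Bruno's actual runner; `sendRequest` is a hypothetical stand-in for firing the request under test):

```javascript
// Run an async operation `n` times one-at-a-time and report the average
// duration in milliseconds. Because each iteration awaits the previous
// one, timings reflect single-request latency rather than load behavior.
async function averageDurationMs(sendRequest, n) {
  let total = 0;
  for (let i = 0; i < n; i++) {
    const start = process.hrtime.bigint();
    await sendRequest();                  // serial: next iteration waits
    const end = process.hrtime.bigint();
    total += Number(end - start) / 1e6;   // ns -> ms
  }
  return total / n;
}
```

Running this before and after a change and comparing the two averages is the workflow being asked for.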
Also, on the Runner page, it would be very helpful if the results table included per-request timing information. Currently you have to dig this out by drilling into each iteration and finding the request you're interested in.
And finally, some way to run multiple iterations of a single request would be very helpful, especially when I'm trying to tune an individual operation. For now, I can just move the request to a folder and run that “collection”, but a more official way would be better.
My use case is testing whether a query optimization is actually making things better or worse. I want to run an individual query 10 times, and gather the average time for the whole run before and after each tweak I make.
@lohxt1 Assigning this to you.
Please work on it when you find some time. I believe this should be a relatively straightforward implementation. Showing the time info would also be really helpful.
Instead of a flag to activate/deactivate parallelism, I would prefer a numeric input for the parallel request count. If I feed a CSV file with 200 rows into the runner, it seems to fire them all at once, but completely deactivating parallelism would not be efficient either. For example, I would like to configure a parallel request count of 10.
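The bounded-concurrency behavior being requested can be sketched as a small worker pool (an illustrative implementation, not Bruno's; `tasks` is an array of async functions standing in for runner iterations):

```javascript
// Run `tasks` with at most `limit` in flight at once, instead of
// all-at-once (current behavior) or fully serial.
async function runWithLimit(tasks, limit) {
  const results = new Array(tasks.length);
  let next = 0;
  async function worker() {
    while (next < tasks.length) {
      const i = next++;              // claim the next task index
      results[i] = await tasks[i]();
    }
  }
  // Start `limit` workers; each pulls the next task as soon as it finishes.
  await Promise.all(
    Array.from({ length: Math.min(limit, tasks.length) }, worker)
  );
  return results;
}
```

With `limit = 1` this degenerates to serial execution, so a single numeric setting would cover both this request and the original one.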
Hi @helloanoop @lohxt1, any updates on this? It would be great to have control over whether requests run in parallel.
Parallel runs at scale are a real problem when a collection has multiple dependent requests: if request one saves its output with `bru.setVar()` for use in a second request, there is a risk of the values being overwritten by any of the parallel iterations. Secondly, I've observed that when many parallel requests are executed from a CSV import, each row initiates a port for each series of requests; this blew the memory on my local machine up to 5 GB before the run terminated, only partially complete. Parallelism is great for performance testing, but in many scenarios serial submission is needed.
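The clobbering risk described above can be demonstrated with a plain object standing in for the runner's shared variable store (this is an illustration, not Bruno internals; in a real run the interleaving varies, whereas this toy version is deterministic):

```javascript
// Shared variable store, analogous to bru.setVar / bru.getVar.
const vars = {};
const delay = (ms) => new Promise((r) => setTimeout(r, ms));

// One "iteration": request 1 sets a token, request 2 reads it back later.
async function iteration(id) {
  vars.token = `token-${id}`; // request 1: save output for request 2
  await delay(10);            // request 2 runs some time later...
  return vars.token;          // ...and may read another iteration's value
}

async function demo() {
  // Parallel: every iteration overwrites `vars.token` before any of them
  // reads it back, so all three see the last writer's value.
  const parallel = await Promise.all([iteration(1), iteration(2), iteration(3)]);
  // Serial: each iteration completes before the next starts, so each
  // read sees its own iteration's value.
  const serial = [];
  for (const id of [1, 2, 3]) serial.push(await iteration(id));
  return { parallel, serial };
}
```

This is why serial execution (or per-iteration variable scoping) matters whenever one request's output feeds another.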
Support for serial execution will be available in the upcoming v1.35.0 release, which is set to go out tomorrow.
@sreelakshmi-bruno @lohxt1 Can you create a separate GitHub ticket to add support for specifying the parallelism count, as suggested by @ganomi?