KATT for load testing?!
after a failed rollercoaster of a search for a load-testing tool (ab, httperf, siege, tsung, loader.io, loadimpact, ..., with some others still to be investigated), I was thinking "how hard can it be" (tm) to turn KATT into a "simple" load-testing mode (rough invocation sketch after the list below):
- new params
  - number of total runs
  - number of workers (runs in parallel)
  - load-test timeout
- run 1 scenario, confirm that it passes
- start the load-test
- output statistics; for each transaction: latency (min, max, average, std dev)
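A rough sketch of what the invocation could look like; `katt_load` and the option names (`total_runs`, `workers`, `timeout`) are made up for illustration, only `katt:run/1` exists today:

```erlang
%% Hypothetical sketch only: katt_load and these option names do not exist (yet);
%% katt:run/1 is the existing entry point.
load_test() ->
    LoadOpts = [ {total_runs, 1000}   %% number of total runs
               , {workers, 20}        %% runs in parallel
               , {timeout, 60000}     %% load-test timeout, in ms
               ],
    %% 1. run the scenario once and confirm that it passes
    SmokeResult = katt:run("scenario.apib"),
    io:format("smoke run: ~p~n", [SmokeResult]),
    %% 2. start the load test (hypothetical API)
    {ok, Metrics} = katt_load:run("scenario.apib", [], LoadOpts),
    %% 3. aggregate per-transaction latency into min/max/average/std dev
    Metrics.
```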
pinging @sstrigler @dmitriid @isakb (either to laugh at me, or to bring in some constructive criticism :) )
😻
Maybe in some later iteration of this: how about adding weights and classes (tags) to certain operations? The more weight, the more likely they are to be called. Classes/tags could be used to group metrics. Both could be done via annotations (comments) in the blueprint.
Metrics could be collected by exometer or such, so that you can send them directly to Grafana and co.
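For what it's worth, a minimal sketch of the exometer side, assuming one histogram per metric; the module/function names are made up, and wiring a reporter (e.g. Graphite, with Grafana on top) is left out:

```erlang
-module(katt_load_metrics).
%% Sketch only: module/function names are assumptions; exometer:new/2,
%% exometer:update/2 and exometer:get_value/1 are the real exometer calls.
-export([init/0, record_latency/1, snapshot/0]).

-define(METRIC, [katt, transaction, latency]).

init() ->
    %% a histogram gives min/max/mean/percentiles out of the box
    ok = exometer:new(?METRIC, histogram).

record_latency(LatencyMs) when is_integer(LatencyMs), LatencyMs >= 0 ->
    ok = exometer:update(?METRIC, LatencyMs).

snapshot() ->
    %% e.g. {ok, [{n,_}, {mean,_}, {min,_}, {max,_}, {median,_}, ...]}
    exometer:get_value(?METRIC).
```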
+1
started work in the metrics branch https://github.com/for-GET/katt/compare/metrics
- [x] report start, end, latency per transaction
- [ ] concurrent workers
- [ ] worker constraints
NOTE: KISS for now (ever?). I briefly looked at hackney's metrics (which can interface with folsom, exometer, grapherl), mzbench and tsung. Time-wise, I can't afford a more in-depth analysis, and I'd need one in order to go down that path.
@sstrigler I don't think weights are possible, since KATT runs scenarios, not standalone requests. So the goal I have is to take 1 scenario and run it n times, in m parallel workers. Am I missing something?
Tagging requests, on the other hand, is of course possible. Even without a designed mechanism for it, one can still do it via a custom HTTP request header, given that the transport overhead is negligible, e.g. X-My-Custom-Tags: tag1, tag2, tag3. But tagging is maybe more useful in the context of metrics, although, as I'm doing it now, statistics are up to the consumer to produce based on the metrics.
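For illustration, a minimal consumer-side sketch, assuming the load test hands back a flat list of `{TransactionDescription, LatencyMs}` tuples (the exact shape produced by the metrics branch may differ):

```erlang
-module(katt_load_stats).
%% Sketch of consumer-side statistics: min/max/average/std dev per transaction,
%% computed from a flat list of {TransactionDescription, LatencyMs} tuples.
-export([per_transaction/1]).

per_transaction(Metrics) ->
    %% group latencies by transaction
    ByTransaction =
        lists:foldl(fun({Transaction, Latency}, Acc) ->
                        maps:update_with(Transaction,
                                         fun(Ls) -> [Latency | Ls] end,
                                         [Latency],
                                         Acc)
                    end, #{}, Metrics),
    maps:map(fun(_Transaction, Latencies) -> stats(Latencies) end, ByTransaction).

stats(Latencies) ->
    N = length(Latencies),
    Mean = lists:sum(Latencies) / N,
    Variance = lists:sum([(L - Mean) * (L - Mean) || L <- Latencies]) / N,
    #{ min     => lists:min(Latencies)
     , max     => lists:max(Latencies)
     , average => Mean
     , std_dev => math:sqrt(Variance)
     }.
```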
Ok, yeah, I see now that that was stupid in the context of individual requests. But one could weight individual scenarios, like tsung does: a test run executes different scenarios with a given percentage each, and they have to sum up to 100% of course.
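Roughly something like this, as a sketch (nothing of the sort exists in KATT yet; module/function names are made up):

```erlang
-module(katt_load_weights).
%% Sketch of tsung-style weighting: each run picks a scenario by percentage,
%% with the weights expected to sum up to 100.
-export([pick/1]).

%% e.g. pick([{"login.apib", 70}, {"search.apib", 20}, {"signup.apib", 10}])
pick(Scenarios) ->
    Total = lists:sum([W || {_, W} <- Scenarios]),  %% should be 100
    pick(Scenarios, rand:uniform(Total)).

pick([{Scenario, Weight} | _], Roll) when Roll =< Weight ->
    Scenario;
pick([{_, Weight} | Rest], Roll) ->
    pick(Rest, Roll - Weight).
```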