George Maltsev
Ok, thanks for the quick response. I'll try to find out where the difference is coming from and will be looking forward to the fix you mentioned.
Yeah, sure.

> if that's single series or multiple series

Multiple series.

> does any of the series have NaNs

Only in the very first data point. I'm tracking this in the...
I was able to pinpoint the function causing this; here's the query:

```
averageSeriesWithWildcards(basename.*.*.*.service.type.metric.*.time.p95, 6)
```

And it's very likely that in the previous example the culprit was the sumSeriesWithWildcards...
Here's another simplified query causing the same error:

```
asPercent(timeShift(groupByNode(node1.node2.node3.*.*.node4.*.node5.node6.node7.node8.node9.*.count, 11, 'sum'), '5min'), timeShift(groupByNode(node1.node2.node3.*.*.node4.*.node5.node6.node7.node8.node9.*.count, 11, 'sum'), '20min'))
```

The error:

```
2021-08-24T16:53:31.941Z ERROR access request failed {"data": {"handler":"render","carbonapi_uuid":"f78cf345-7108-4034-b413-90c2da07abaf","url":"/render","peer_ip":"","host":":8182","format":"json","use_cache":true,"targets":[""],"cache_timeout":60,"runtime":11.388484146,"http_code":500,"reason":"runtime error: index...
```
Also, querying carbonapi directly, without Grafana, doesn't make any difference.
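For reference, this is roughly how I hit the `/render` endpoint directly, bypassing Grafana. This is only a minimal sketch: the host/port are taken from the `host` field in the error log above, and the time range is an assumption.

```python
# Minimal sketch: reproduce the error by calling carbonapi's /render endpoint
# directly. Assumes carbonapi listens on localhost:8182 (as in the log above)
# and uses the averageSeriesWithWildcards target from my earlier comment.
import requests

params = {
    "target": "averageSeriesWithWildcards(basename.*.*.*.service.type.metric.*.time.p95, 6)",
    "from": "-1h",       # assumed time range
    "until": "now",
    "format": "json",
}

resp = requests.get("http://localhost:8182/render", params=params, timeout=30)
print(resp.status_code)   # 500 when the bug triggers
print(resp.text[:500])    # response body contains the "runtime error: index..." reason
```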
Maybe I don't understand what API carbonapi uses to fetch the data from VM, but the Prometheus API always returns trimmed timestamps; I checked that. If carbonapi relies...
Ok, I've tried querying VM directly using this API and passing in non-trimmed start and end parameters, and I see that the backend returns timestamps proportional to the values of...
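For context, this is roughly the kind of request I used for that check. It's only a minimal sketch: the host and port assume a default single-node VM instance, and the metric name and exact timestamps are hypothetical.

```python
# Minimal sketch: query VM's Prometheus-compatible query_range API with
# start/end values that are deliberately NOT aligned to the step, then print
# the timestamps that come back to see how the backend aligns them.
import requests

params = {
    "query": "some_metric_total",   # hypothetical metric, replace with a real one
    "start": "1629820391.123",      # non-trimmed (non-aligned) start
    "end": "1629823991.456",        # non-trimmed (non-aligned) end
    "step": "60",
}

resp = requests.get("http://localhost:8428/api/v1/query_range", params=params, timeout=30)
resp.raise_for_status()

for series in resp.json()["data"]["result"]:
    # Each point is a [timestamp, value] pair; inspect the timestamps to see
    # whether they are trimmed to the step or follow the raw start/end.
    print([point[0] for point in series["values"]][:5])
```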
Unfortunately, I had to terminate my setup because the debug mode quickly filled up the root disk, so I can't pull the logs anymore, sorry. I'm sure that it is...
In the end, I decided to give it another try with a single-node setup. The weirdest thing about this bug is that everything works fine on the single-node version....
And now I was able to reproduce it on the single-node version too. It happens less often, but it still occurs. Maybe the flags -selfScrapeInterval=60s and -search.disableCache play some role here because...