bobykus31

Results: 18 comments by bobykus31

If it is not possible to remove metrics from downsampled data, is it at least possible to recreate the downsampled data from the raw metrics once the unwanted metrics have been deleted?
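For what it's worth, a minimal sketch of what that recreation could look like, assuming the raw (0s resolution) blocks are still retained; `bucket.yml` and the block ID below are placeholders, not taken from this setup. The idea is to mark the stale downsampled blocks for deletion and let `thanos downsample` rebuild them from the raw blocks.

```
# Sketch only; <DOWNSAMPLED_BLOCK_ID> and bucket.yml are placeholders.
# 1. Mark the existing downsampled block(s) for deletion.
thanos tools bucket mark \
  --objstore.config-file=bucket.yml \
  --marker=deletion-mark.json \
  --id=<DOWNSAMPLED_BLOCK_ID> \
  --details="regenerate downsampling after series deletion"

# 2. Rebuild 5m/1h downsampled blocks from the remaining raw blocks.
thanos downsample \
  --objstore.config-file=bucket.yml \
  --data-dir=./downsample-tmp
```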

Is disabling `-query-scheduler.use-scheduler-ring` really a resolution?
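(If it is, the change itself would presumably just be the flag from this thread set to false on the affected components; sketch only, full command line omitted.)

```
-query-scheduler.use-scheduler-ring=false
```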

It looks like all rings are empty, while the memberlist config is:

```
memberlist:
  node_name: ""
  randomize_node_name: false
  stream_timeout: 1m0s
  retransmit_factor: 4
  pull_push_interval: 2m0s
  gossip_interval: 200ms
  gossip_nodes: 3
  gossip_to_dead_nodes_time: 2m30s
  dead_node_reclaim_time: 5m0s
  compression_enabled: true
  ...
```

ah, it does not work with `target=read`!

I wonder why it is always "**cannot populate chunk 8 from block**"... We see such errors in the compactor periodically when object storage load/latency is high, e.g. `Apr 02 05:41:54 thanos-compact1 thanos[1556]:...`

I wish I could, but I don't think I am allowed to, sorry. You mean the content, right?

I don't even know if the block itself can be of any help, but here is the output from `promtool tsdb analyze ./data 01HTDQCE6GV770YD8PSB6B9TMK`:

```
Block ID: 01HTDQCE6GV770YD8PSB6B9TMK
Duration: 1h59m59.998s
...
```
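In case it helps anyone reproduce this, a sketch of how the block can be pulled down locally before analyzing it; the bucket name is a placeholder, the tools are the ones already mentioned in this thread.

```
# Fetch the block from object storage (bucket name is a placeholder)
s3cmd get --recursive s3://<bucket>/01HTDQCE6GV770YD8PSB6B9TMK ./data/01HTDQCE6GV770YD8PSB6B9TMK/

# Analyze it with promtool
promtool tsdb analyze ./data 01HTDQCE6GV770YD8PSB6B9TMK
```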

So I turned on debug tracing for S3 with `trace.enable: true`. Here is what I can see:

```
grep 01HTF7EGX696B0A0K3X9JY7987 /srv/logs/thanos-compact1/*-20240402 | grep GET | grep -v mark.json
2024-04-02T14:03:17.147505152Z thanos-compact1...
```
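For reference, a minimal sketch of where that option lives in the Thanos S3 objstore configuration; bucket and endpoint are placeholders.

```
type: S3
config:
  bucket: "<bucket>"        # placeholder
  endpoint: "<s3-endpoint>" # placeholder
  trace:
    enable: true            # log every S3 request/response (very verbose)
```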

Yes, re-downloading the blocks seems to make the compaction go well. `groupKey=0@5322589354663189029 msg="compaction available and planned" plan="[01HTF7EGY6WQ5VSG6CER14WVJD (min time: 1712044800014, max time: 1712052000000) 01HTF7EHFG3BXV05EY2Z3E55SF (min time: 1712044800054, max time: 1712052000000)]"`
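(A sketch of one way to force that re-download: clear the compactor's local working directory so the blocks are fetched from object storage again on the next run. The unit name and data dir below are assumptions, not taken from this setup.)

```
# Assumed unit name and --data-dir; adjust to the actual deployment.
systemctl stop thanos-compact
rm -rf /var/thanos/compact/*
systemctl start thanos-compact
```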

Seems to be an S3 issue itself. I was able to reproduce it with s3cmd for some blocks. Maybe the _--consistency-delay_ setting can help (currently it is 2h).
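To make that concrete, a sketch of the reproduction and of where the flag sits on the compactor; the bucket name and paths are placeholders, and the chunk file name just follows the usual block layout.

```
# Try fetching a chunk file of an affected block directly (bucket is a placeholder)
s3cmd get s3://<bucket>/01HTF7EGX696B0A0K3X9JY7987/chunks/000001 /tmp/000001

# The compactor flag in question, currently at 2h (paths are placeholders)
thanos compact \
  --data-dir=/var/thanos/compact \
  --objstore.config-file=bucket.yml \
  --consistency-delay=2h
```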