wkshare

@wonderflow Oh, it's Jianbo! Was this issue ever resolved in the end? I recently ran into the same problem: after upgrading Grafana from 5.1.3 to 6.7.1, I also can't log in normally; the login reports success, but it still redirects back to the login page.

> This error is an expected log. When the limit is 1000, the context will automatically cancel after 1000 logs are queried.

Where is this behavior documented?
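For reference, if this thread is about Loki (an assumption on my part, not confirmed above), the 1000-entry cap described in the quote would typically come from the per-query entry limit in limits_config. A minimal sketch:

```
# Sketch only: assumes the quoted 1000 cap is Loki's
# max_entries_limit_per_query. Once a query has collected this many
# log lines, its context is canceled, which surfaces as the
# "expected" context-canceled log mentioned in the quote.
limits_config:
  max_entries_limit_per_query: 1000
```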

I changed replication_factor from 3 to 1 and the issue was fixed, but I don't know why.
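A plausible explanation (an assumption, not confirmed in this thread) is that the write path needs at least as many healthy ingesters in the ring as the replication factor, so replication_factor: 3 can never be satisfied on a single-node deployment. A minimal sketch of where the setting lives in a Loki/Tempo-style config:

```
ingester:
  lifecycler:
    ring:
      kvstore:
        store: memberlist
      # Writes must be acknowledged by this many ingesters; with only
      # one ingester in the ring, 3 can never be satisfied, while 1 works.
      replication_factor: 1
```

If that is indeed the cause, the alternative fix would be to run at least three ingesters instead of lowering the factor.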

I added the compactor ring configuration:

```
compactor:
  ring:
    kvstore:
      store: memberlist
    instance_interface_names:
      - ens160
  compaction:
    compaction_window: 1h  # blocks in this time window will be compacted together
    max_block_bytes: ...
```
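One thing worth double-checking with a memberlist-backed ring (a sketch under the assumption that this runs across more than one node; the hostnames below are placeholders): each member needs to be told how to join the gossip cluster, e.g.:

```
memberlist:
  # Hypothetical hostnames: point these at every Tempo instance
  # participating in the ring, on the memberlist gossip port.
  join_members:
    - tempo-node-1:7946
    - tempo-node-2:7946
  bind_port: 7946
```

Tempo also serves the compactor ring state at /compactor/ring on the HTTP port, which is an easy way to confirm the ring actually formed after this change.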

I enabled debug logging. The debug logs don't look quite right either:

```
Aug 11 09:26:41 ech-10-157-155-210 grafana_tempo[497223]: level=info ts=2022-08-11T01:26:41.073429597Z caller=compactor.go:64 msg="starting compaction cycle" tenantID=single-tenant offset=0
Aug 11 09:26:41 ech-10-157-155-210...
```

Check status:

```
GET /status/services
+----------------+---------+--------------+
| SERVICE NAME   | STATUS  | FAILURE CASE |
+----------------+---------+--------------+
| compactor      | Running |              |
| distributor    | Running |              |
| ingester...
```

Last logs from Nginx:

```
{
  "msec": "1660181015.580",
  "connection": "29168178",
  "connection_requests": "3",
  "pid": "1438335",
  "request_id": "4a8a5897b412863b98e26e7ec6e61050",
  "request_length": "507",
  "remote_addr": "10.157.155.210",
  "remote_user": "",
  "remote_port": "53590",
  "time_local": "11/Aug/2022:09:23:35 +0800",
  "time_iso8601": "2022-08-11T09:23:35+08:00",
  "request": ...
```