Sniper91
## Which problem is this PR solving?
Resolves #3342

## Short description of the changes
- Add a mutex between the closing and enqueuing channel operations.
**Describe the bug**
BoundedQueue.Produce may panic with a send on a closed channel after BoundedQueue.Stop is called. When q.stopped.Store(1) runs in BoundedQueue.Stop, some producers may still be executing the code below...
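A minimal sketch of the fix described above, using a simplified queue rather than the actual Jaeger BoundedQueue implementation. The names (`boundedQueue`, `Produce`, `Stop`) only mirror the description: the stopped flag, the send, and the close are all guarded by one mutex, so Produce can never send on a channel that Stop has already closed.

```go
package main

import (
	"fmt"
	"sync"
)

// boundedQueue is an illustrative stand-in for the queue described above;
// it is not the Jaeger implementation.
type boundedQueue struct {
	mu      sync.Mutex
	stopped bool
	items   chan int
}

func newBoundedQueue(capacity int) *boundedQueue {
	return &boundedQueue{items: make(chan int, capacity)}
}

// Produce enqueues an item. Checking stopped and sending while holding the
// mutex guarantees the channel cannot be closed between the check and the
// send, which is the "send on closed channel" panic described above.
func (q *boundedQueue) Produce(item int) bool {
	q.mu.Lock()
	defer q.mu.Unlock()
	if q.stopped {
		return false
	}
	select {
	case q.items <- item:
		return true
	default: // queue full, drop the item
		return false
	}
}

// Stop marks the queue as stopped and closes the channel under the same
// mutex, so no Produce call can sit between its stopped check and its send
// when close runs.
func (q *boundedQueue) Stop() {
	q.mu.Lock()
	defer q.mu.Unlock()
	if q.stopped {
		return
	}
	q.stopped = true
	close(q.items)
}

func main() {
	q := newBoundedQueue(8)
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			q.Produce(n) // safe even if Stop has already run
		}(i)
	}
	q.Stop()
	wg.Wait()
	fmt.Println("no panic: all sends were guarded by the mutex")
}
```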
Use the gin.Context.FullPath method instead.
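As a hedged illustration (the surrounding PR context is not shown here), `gin.Context.FullPath` returns the matched route template rather than the concrete request path, which is usually preferable for labels or logging because it has bounded cardinality. The route and handler below are made up for the example.

```go
package main

import "github.com/gin-gonic/gin"

func main() {
	r := gin.Default()

	// For a request to /user/42, c.FullPath() returns the matched route
	// template "/user/:id", while c.Request.URL.Path returns "/user/42".
	r.GET("/user/:id", func(c *gin.Context) {
		c.JSON(200, gin.H{
			"full_path": c.FullPath(),
			"raw_path":  c.Request.URL.Path,
		})
	})

	r.Run(":8080")
}
```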
@bwplotka please review the PR.

The ```sameSeriesChunks``` slice field is reused on every iteration of ```chunkSetToSeriesSet.Next```:
https://github.com/prometheus/prometheus/blob/6bdecf377cea8e856509914f35234e948c4fcb80/storage/series.go#L182-L187

This may cause ```chunkSetToSeriesSet.At``` to return identical sample data but different labels:
https://github.com/prometheus/prometheus/blob/6bdecf377cea8e856509914f35234e948c4fcb80/storage/series.go#L198-L200

```
func...
```
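A standalone sketch of the general pitfall, not the Prometheus code: when an iterator reuses one backing slice across `Next()` calls, every value previously returned by `At()` keeps aliasing that slice and silently changes on the next `Next()`.

```go
package main

import "fmt"

// iter reuses a single backing slice across Next calls, the same shape of
// bug described for sameSeriesChunks above.
type iter struct {
	batches [][]int
	buf     []int // reused buffer
	cur     int
}

func (it *iter) Next() bool {
	if it.cur >= len(it.batches) {
		return false
	}
	it.buf = it.buf[:0] // reuse the same backing array
	it.buf = append(it.buf, it.batches[it.cur]...)
	it.cur++
	return true
}

// At returns a value that aliases the reused buffer.
func (it *iter) At() []int { return it.buf }

func main() {
	it := &iter{batches: [][]int{{1, 1, 1}, {2, 2, 2}}, buf: make([]int, 0, 3)}

	var results [][]int
	for it.Next() {
		results = append(results, it.At()) // caller keeps an alias
	}
	// Both saved results now point at the same backing array and show the
	// data from the last batch: [[2 2 2] [2 2 2]].
	fmt.Println(results)
}
```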
## Proposal
I deploy a remote-write adapter service in front of remote storage, and every Prometheus instance writes its local data to the adapter. But the load among the different remote-write adapter pods is unbalanced, especially...
# reproduce procedure
1. run kafka exporter with the race detector
```
go run -race ./ --kafka.server localhost:9092
```
2. request kafka metrics frequently
```
curl http://localhost:9308
```
## output
```
...
```
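For illustration only, and not the actual kafka_exporter code: a deliberately racy `/metrics` handler of the kind `go run -race` flags when it is scraped concurrently. Mutating shared state inside the handler without a lock is the typical shape of such reports.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// lastOffsets is shared state mutated on every scrape without any locking,
// so two concurrent /metrics requests race on the map.
var lastOffsets = map[string]int64{}

func metrics(w http.ResponseWriter, r *http.Request) {
	lastOffsets["topic-a"]++ // unsynchronized write from multiple goroutines
	fmt.Fprintf(w, "demo_topic_offset{topic=\"topic-a\"} %d\n", lastOffsets["topic-a"])
}

func main() {
	http.HandleFunc("/metrics", metrics)
	log.Fatal(http.ListenAndServe(":9308", nil))
}
```

Running this with `go run -race .` and hitting `/metrics` from two concurrent curl loops makes the detector print a `WARNING: DATA RACE` report pointing at the unsynchronized map access.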
Closing the TCP socket after every probe may leave large numbers of connections sitting in the TIME_WAIT state.

Relevant Issue: #794

Set SO_LINGER=0 to avoid the problem above.

Reference: https://stackoverflow.com/questions/3757289/when-is-tcp-option-so-linger-0-required
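A minimal Go sketch of the workaround described above, assuming the probe dials a plain TCP target (the address below is a placeholder): `(*net.TCPConn).SetLinger(0)` sets SO_LINGER with a zero timeout, so Close resets the connection instead of leaving it in TIME_WAIT on the probing side.

```go
package main

import (
	"log"
	"net"
	"time"
)

// probe opens a TCP connection and closes it with SO_LINGER set to 0 so the
// kernel sends an RST instead of going through the normal FIN/TIME_WAIT
// teardown on our side.
func probe(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		return err
	}
	defer conn.Close()

	if tcp, ok := conn.(*net.TCPConn); ok {
		// SetLinger(0): Close discards unsent data and resets the
		// connection, so the socket does not linger in TIME_WAIT.
		if err := tcp.SetLinger(0); err != nil {
			return err
		}
	}

	// ... perform the actual probe on conn here ...
	return nil
}

func main() {
	if err := probe("example.com:80"); err != nil { // placeholder target
		log.Println("probe failed:", err)
	}
}
```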