
Update ring with new IP address when instance is lost, rejoins, but heartbeat is disabled

Open · CharlieTLe opened this pull request 1 year ago • 2 comments

What this PR does: Updates the ring with the new IP address when an instance is lost, rejoins, but heartbeat is disabled.

An instance can become lost, for example, when it exits without announcing that it is leaving the ring. When this happens and a new instance is created with a new IP address but the same name, the instance will rejoin the ring and reclaim the tokens it once had. However, it will not update the ring with the new address where it can be reached. This can cause problems for services that use the ring to locate its members.
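For illustration, here is a rough sketch of the intended behavior using simplified types; this is not the actual Cortex lifecycler code. The idea is that when an instance finds an existing entry under its own ID on startup, it reclaims the tokens and also overwrites the registered address, even though heartbeats are disabled and would otherwise never refresh it.

// Sketch only: simplified stand-ins for the ring's instance descriptor,
// not the real Cortex types or join path.
package main

import "fmt"

type InstanceDesc struct {
	Addr   string
	Tokens []uint32
	State  string
}

// rejoin reclaims an existing entry for instanceID and refreshes its address.
// Without the address update, readers of the ring keep dialing the old
// address, because with heartbeats disabled nothing else rewrites the entry.
func rejoin(ring map[string]InstanceDesc, instanceID, newAddr string) InstanceDesc {
	desc, ok := ring[instanceID]
	if !ok {
		desc = InstanceDesc{State: "ACTIVE"}
	}
	if desc.Addr != newAddr {
		desc.Addr = newAddr // record where the instance can be reached now
	}
	ring[instanceID] = desc
	return desc
}

func main() {
	ring := map[string]InstanceDesc{
		"ingester-0": {Addr: "1.1.1.1:9095", Tokens: []uint32{1, 2, 3}, State: "ACTIVE"},
	}
	updated := rejoin(ring, "ingester-0", "2.2.2.2:9095")
	fmt.Println(updated.Addr, updated.Tokens) // 2.2.2.2:9095 [1 2 3]
}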

Which issue(s) this PR fixes: Fixes #

Checklist

  • [x] Tests updated
  • [ ] Documentation added
  • [x] CHANGELOG.md updated - the order of entries should be [CHANGE], [FEATURE], [ENHANCEMENT], [BUGFIX]

CharlieTLe · Oct 16 '24 02:10

@CharlieTLe Is there a specific use case or component you are looking at for this feature? We can persist tokens to a file so that instances re-join the ring with the same tokens by reusing the token file (stored in a PVC).
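For context, a minimal sketch of the token-file approach described above, using hypothetical saveTokens/loadTokens helpers rather than the actual Cortex API: the instance writes its tokens to a file on the persistent volume before exiting and reads them back on startup so it can rejoin with the same tokens.

// Sketch of persisting ring tokens across restarts via a file on a PVC.
// saveTokens/loadTokens are hypothetical helpers, not the Cortex API.
package tokenfile

import (
	"encoding/json"
	"os"
)

// saveTokens writes the instance's tokens to path (e.g. a file on a PVC).
func saveTokens(path string, tokens []uint32) error {
	data, err := json.Marshal(tokens)
	if err != nil {
		return err
	}
	return os.WriteFile(path, data, 0o600)
}

// loadTokens reads previously persisted tokens so the instance can rejoin
// the ring with the same token ownership after a restart.
func loadTokens(path string) ([]uint32, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var tokens []uint32
	err = json.Unmarshal(data, &tokens)
	return tokens, err
}

Note that reusing tokens this way addresses token ownership, not the stale address this PR is about.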

yeya24 · Oct 16 '24 19:10

@yeya24

Yeah, it's to handle the case where an ingester changes its IP address but doesn't go through the typical lifecycle of leaving the ring while doing so. This leaves a discrepancy between the ring's description of the ingester and the ingester's actual address.

This specific issue isn't about reclaiming the tokens.

Here's what can happen:

  1. An ingester (id=ingester-0, addr=1.1.1.1) has heartbeat_period=0 and is in the memberlist ring with state=ACTIVE
  2. The node that the ingester is running on is abruptly terminated
  3. A new node is created for the ingester (id=ingester-0, addr=2.2.2.2) to run on
  4. The ingester (id=ingester-0, addr=2.2.2.2) joins the ring and reclaims its tokens
  5. The ring description still lists the ingester's old address (id=ingester-0, addr=1.1.1.1)

A symptom of the problem is the distributor logging a warning that it is unable to reach the ingester because it is still using the ingester's old address:

{
  "addr": "1.1.1.1:9095",
  "caller": "pool.go:184",
  "level": "warn",
  "msg": "removing ingester failing healthcheck",
  "reason": "rpc error: code = DeadlineExceeded desc = context deadline exceeded"
}
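For anyone tracing the symptom, the warning above corresponds to a gRPC health check failing against the stale address. Below is a simplified illustration of that failure mode using the grpc-go health client; it is not the distributor's actual pool code.

// Simplified illustration of a healthcheck against a stale address;
// not the distributor's actual pool.go logic.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	staleAddr := "1.1.1.1:9095" // address still recorded in the ring

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	conn, err := grpc.NewClient(staleAddr, grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		fmt.Println("client error:", err)
		return
	}
	defer conn.Close()

	// Nothing answers at the old address, so the check fails once the
	// context deadline expires, mirroring the DeadlineExceeded warning above;
	// the distributor then drops the ingester from its client pool.
	_, err = healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	fmt.Println("healthcheck error:", err)
}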

CharlieTLe · Oct 16 '24 20:10