
Clustering issues leading to all nodes being downed

Open fredfp opened this issue 2 years ago • 25 comments

I'm reopening here an issue that I originally reported under the akka repo.

We had a case where an issue on a single node led to the whole akka-cluster being taken down.

Here's a summary of what happened (a small sketch for watching this kind of sequence via cluster events follows the list):

  1. Healthy cluster made of 20-ish nodes, running on k8s
  2. Node A: encounters issues, triggers CoordinatedShutdown
  3. Node A: experiences high CPU usage, maybe a GC pause
  4. Node A: sees B as unreachable and broadcasts it (B is certainly reachable, but is detected as unreachable because of A's high CPU usage, GC pause, or similar issues)
  5. Cluster state: A Leaving, B seen unreachable by A, all the other nodes Up
  6. The leader cannot currently perform its duties (removing A) because of the reachability status (B seen unreachable by A)
  7. Node A: times out some coordinated shutdown phases. Hypothesis: they timed out because the leader could not remove A.
  8. Node A: finishes coordinated shutdown nonetheless.
  9. Hypothesis: Node A quarantined its associations to the other cluster nodes
  10. Nodes B, C, D, E: SBR took decision DownSelfQuarantinedByRemote and is downing [...] including myself
  11. Hypothesis: Nodes B, C, D, E quarantined their associations to other cluster nodes
  12. In a few steps, all remaining cluster nodes down themselves: SBR took decision DownSelfQuarantinedByRemote
  13. The whole cluster is down
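
For anyone trying to trace a sequence like this in their own cluster, the membership and reachability transitions above can be logged with a plain cluster-event subscription. A minimal sketch; the actor name and log messages are mine, only the subscription API is Pekko's:

```scala
import org.apache.pekko.actor.{ Actor, ActorLogging }
import org.apache.pekko.cluster.Cluster
import org.apache.pekko.cluster.ClusterEvent._

// Logs membership and reachability changes so that incidents like the one above
// can be reconstructed from the logs of every node.
class ClusterEventLogger extends Actor with ActorLogging {
  private val cluster = Cluster(context.system)

  override def preStart(): Unit =
    cluster.subscribe(self, initialStateMode = InitialStateAsEvents,
      classOf[MemberEvent], classOf[ReachabilityEvent])

  override def postStop(): Unit = cluster.unsubscribe(self)

  def receive: Receive = {
    case UnreachableMember(m) => log.warning("Member {} detected unreachable", m.address)
    case ReachableMember(m)   => log.info("Member {} reachable again", m.address)
    case MemberRemoved(m, previousStatus) =>
      log.info("Member {} removed, previous status {}", m.address, previousStatus)
    case event: MemberEvent   => log.info("Member event: {}", event)
  }
}
```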

Discussions, potential issues:

Considering the behaviour of CoordinatedShutdown (phases can time out and the shutdown still continues), shouldn't the leader ignore unreachabilities added by a Leaving node and be allowed to perform its duties? At step 6 above, the leader was blocked from removing A, yet A still continued its shutdown process. The catastrophic ending could have been prevented at this point.
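
For context, this is the CoordinatedShutdown behaviour referred to above: a phase that times out does not abort the shutdown as long as the phase's recover setting is on (the default), so a node can finish shutting down even when the leader never removes it. A minimal sketch; the object and task names are mine:

```scala
import scala.concurrent.Promise
import org.apache.pekko.Done
import org.apache.pekko.actor.{ ActorSystem, CoordinatedShutdown }

object PhaseTimeoutSketch extends App {
  val system = ActorSystem("sketch")

  // A task that never completes: the cluster-exiting phase times out, the timeout is
  // logged, and the remaining phases run anyway because the phase has recover = on.
  CoordinatedShutdown(system).addTask(CoordinatedShutdown.PhaseClusterExiting, "never-finishes") { () =>
    Promise[Done]().future
  }

  // The shutdown still runs to completion and terminates the ActorSystem.
  CoordinatedShutdown(system).run(CoordinatedShutdown.UnknownReason)
}
```

Per-phase timeouts can be tuned under pekko.coordinated-shutdown.phases.<phase>.timeout if needed.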

DownSelfQuarantinedByRemote: @patriknw's comment seems spot on. At step 9, nodes B, C, D, E should probably not take into account a Quarantined message coming from a node that is Leaving.

DownSelfQuarantinedByRemote: another case where Patrik's comment also seems to apply: Quarantined messages from nodes downing themselves because of DownSelfQuarantinedByRemote should probably not be taken into account either.

At steps 10 and 12, any cluster singletons running on the affected nodes wouldn't be gracefully shut down using the configured termination message. This is probably the right thing to do, but I'm adding this note here nonetheless.

fredfp avatar Aug 17 '23 13:08 fredfp

I have extra logs that may be useful:

Remote ActorSystem must be restarted to recover from this situation. Reason: Cluster member removed, previous status [Down]

fredfp avatar Aug 17 '23 13:08 fredfp

I also encountered the same problem, which caused my cluster to keep restarting. Is there a plan to fix it? When can a fix be expected?

zhenggexia avatar May 16 '24 07:05 zhenggexia

@fredfp Can you give us more info on this - https://github.com/akka/akka/issues/31095#issuecomment-1682261286

On the Apache Pekko side, we can read the Akka issues but not the Akka PRs (due to the Akka license not being compatible with Apache Pekko).

The issue appears to be with split brain scenarios from my reading of https://github.com/akka/akka/issues/31095 - specifically DownSelfQuarantinedByRemote events. Is it possible that we should just ignore DownSelfQuarantinedByRemote events when it comes to deciding to shut down the cluster?

pjfanning avatar May 16 '24 11:05 pjfanning

@pjfanning I think the issue can happen when a node shuts down during a partition.

Still, DownSelfQuarantinedByRemote events cannot be ignored. The root cause is that, in some harmless cases, nodes should not learn that they were quarantined by others.

Indeed, some quarantines are harmless (as indicated by the method argument: https://github.com/apache/pekko/blob/main/remote/src/main/scala/org/apache/pekko/remote/artery/Association.scala#L534). The issue is that such harmless quarantines should not be communicated to the other side, i.e., to the quarantined association. However, they currently always are: https://github.com/apache/pekko/blob/main/remote/src/main/scala/org/apache/pekko/remote/artery/InboundQuarantineCheck.scala#L47

fredfp avatar May 16 '24 17:05 fredfp

@pjfanning Is there a plan to fix this issue? When can a fix be expected?

zhenggexia avatar May 21 '24 08:05 zhenggexia

I also experienced the same issue, leading to continuous restarts of my cluster. Is there a scheduled resolution for this? When can we anticipate a fix?

CruelSummerday avatar May 21 '24 09:05 CruelSummerday

@pjfanning Can you suggest a way to fix this bug as soon as possible? Thank you very much.

ZDevouring avatar May 21 '24 10:05 ZDevouring

This bug should hit quite seldom; if it happens often, it most likely means something is not right with your cluster, and you should fix that first in all cases. In particular, make sure (a configuration sketch for the first two points follows the list):

  • there's always CPU available for the cluster management duties (this means GC pauses need to be short)
  • your own workloads don't run on Pekko's internal thread pool
  • rolling updates are slow enough that the cluster stays stable while they run.
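
To make the first two points concrete, a rough sketch (the dispatcher name, pool size, and the 5 s heartbeat pause are illustrative values, not recommendations):

```scala
import com.typesafe.config.ConfigFactory
import org.apache.pekko.actor.ActorSystem
import scala.concurrent.{ ExecutionContext, Future }

object IsolatedWorkloadSketch extends App {
  // A dedicated dispatcher keeps heavy or blocking application work off the default
  // dispatcher, which the cluster needs for heartbeats and gossip.
  val config = ConfigFactory.parseString(
    """
    app-workload-dispatcher {
      type = Dispatcher
      executor = "thread-pool-executor"
      thread-pool-executor.fixed-pool-size = 16
      throughput = 1
    }
    # Tolerate longer pauses before marking a node unreachable (tune with care).
    pekko.cluster.failure-detector.acceptable-heartbeat-pause = 5 s
    """).withFallback(ConfigFactory.load())

  val system = ActorSystem("sketch", config)
  implicit val appEc: ExecutionContext = system.dispatchers.lookup("app-workload-dispatcher")

  Future {
    // CPU-heavy or blocking work runs here, not on pekko's default dispatcher
  }
}
```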

fredfp avatar May 21 '24 10:05 fredfp

This bug should hit quite seldom; if it happens often, it most likely means something is not right with your cluster, and you should fix that first in all cases. In particular, make sure:

  • there's always CPU available for the cluster management duties
  • your own workloads don't run on Pekko's internal thread pool
  • rolling updates are slow enough that the cluster stays stable while they run.

The issue also appears in systems with heavy memory usage and long GC pauses. It is worth checking the GC strategy, GC settings, GC metrics, etc.

mmatloka avatar May 21 '24 10:05 mmatloka

How about using the classic transport for now? The issue seems to live in Artery only.

He-Pin avatar May 21 '24 10:05 He-Pin

How about using the classic transport for now? The issue seems to live in Artery only.

  1. Running Akka 2.8.5 earlier on k8s resulted in a single node restart leading to cluster down (high memory and CPU)
  2. The above issues did not occur when running Akka 2.8.5 on the k8s cluster
  3. The above issues did not occur when using Akka to access the Nacos registration cluster
  4. Running Pekko 1.0.2 on k8s resulted in a single node restart causing cluster down

zhenggexia avatar May 21 '24 12:05 zhenggexia

IIRC, Akka 2.8.x is under the BSL :) I don't have an environment to reproduce the problem; maybe you can work out a multi-jvm test for that? I'm still super busy at work :(

He-Pin avatar May 21 '24 13:05 He-Pin

Currently my k8s cluster runs 26 pods. When one of the pods restarts because of insufficient resources, it often brings the whole cluster down. We process a fairly large volume of data and resource usage is high. On other setups (for example, running in Docker and registered with Nacos), this problem hasn't appeared so far.

zhenggexia avatar May 22 '24 04:05 zhenggexia

Hello, has there been any progress on this issue? Is there a plan for when it will be fixed?😀

zhenggexia avatar Jul 31 '24 08:07 zhenggexia

For Kubernetes users, we would suggest using the Kubernetes Lease described here: https://pekko.apache.org/docs/pekko/current/split-brain-resolver.html#lease

Pekko Management 1.1.0-M1 has a second implementation of the lease: the legacy one is CRD-based, while the new one uses Kubernetes native leases. https://github.com/apache/pekko-management/pull/218
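
For reference, wiring the SBR to a Kubernetes lease looks roughly like this; the key names follow the linked docs, but verify them against your Pekko and Pekko Management versions:

```scala
import com.typesafe.config.ConfigFactory

object SbrLeaseConfigSketch {
  // Illustrative settings only, to be merged with the rest of your configuration.
  val sbrWithLease = ConfigFactory.parseString(
    """
    pekko.cluster.downing-provider-class = "org.apache.pekko.cluster.sbr.SplitBrainResolverProvider"
    pekko.cluster.split-brain-resolver {
      active-strategy = lease-majority
      lease-majority.lease-implementation = "pekko.coordination.lease.kubernetes"
    }
    """)
}
```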

pjfanning avatar Aug 08 '24 13:08 pjfanning

For Kubernetes users, we would suggest using the Kubernetes Lease described here: https://pekko.apache.org/docs/pekko/current/split-brain-resolver.html#lease

That's what we use already, and it didn't help in the current case. Do you expect it to resolve (or avoid) this issue? I think the lease helps the surviving partition confirm it can indeed stay up; it however doesn't help the nodes that down themselves, which is the observed behaviour described above.

Pekko Management 1.1.0-M1 has a second implementation of the lease: the legacy one is CRD-based, while the new one uses Kubernetes native leases. apache/pekko-management#218

Thank you for pointing it out, looking forward to it!

fredfp avatar Aug 08 '24 14:08 fredfp

@fredfp It's good to hear that using the Split Brain Resolver with a Kubernetes Lease stops all the nodes from downing themselves. When you lose some of the nodes, are you finding that you have to manually restart them or can Kubernetes handle automatically restarting them using liveness and/or readiness probes?

pjfanning avatar Aug 08 '24 14:08 pjfanning

@fredfp It's good to hear that using the Split Brain Resolver with a Kubernetes Lease stops all the nodes from downing themselves.

Sorry, let me be clearer: using the SBR with a Kubernetes Lease does not stop all the nodes from downing themselves.

When you lose some of the nodes, are you finding that you have to manually restart them or can Kubernetes handle automatically restarting them using liveness and/or readiness probes?

When a node downs itself, the java process (running inside the container) terminates. The container is then restarted by k8s as usual, the liveness/readiness probes do not play a part in that. Does that answer your question?

fredfp avatar Aug 08 '24 15:08 fredfp

I think the main issue is Association.quarantine where the harmless flag is not passed on here: https://github.com/apache/pekko/blob/726ddbfd43cf1e1f81254df2f5b715ace0a817cf/remote/src/main/scala/org/apache/pekko/remote/artery/Association.scala#L552

Since GracefulShutdownQuarantinedEvent only appears to be used for harmless=true quarantine events, we might be able to find the event subscribers and have them handle GracefulShutdownQuarantinedEvent in a different way from standard QuarantinedEvent instances. For example, https://github.com/apache/pekko/blob/726ddbfd43cf1e1f81254df2f5b715ace0a817cf/remote/src/main/scala/org/apache/pekko/remote/artery/InboundQuarantineCheck.scala#L31

I found 3 places where harmless=true quarantine events can be kicked off - but there could be more.

https://github.com/search?q=repo%3Aapache%2Fpekko%20%22harmless%20%3D%20true%22&type=code

pjfanning avatar Oct 16 '24 13:10 pjfanning

I tried yesterday to write a unit test that artificially causes a harmless quarantine and examines the results, but so far I haven't reproduced the cluster-shutdown issue. I think having a reproducible case is the real blocker on this issue.

pjfanning avatar Oct 17 '24 10:10 pjfanning

Here's my understanding:

  1. When initially marking an association as quarantined, the Quarantined control message is not sent to the remote node when harmless is true: https://github.com/apache/pekko/blob/8cb7d256dcc1498b79a9fff815146fb5b1f451f0/remote/src/main/scala/org/apache/pekko/remote/artery/Association.scala#L569-L572
  2. Now InboundQuarantineCheck comes into play (used in ArteryTransport.inboundSink and .inboundControlSink). It serves two purposes: a) dropping messages coming in through a quarantined association, and b) telling the remote node again that it is quarantined, using inboundContext.sendControl(association.remoteAddress, Quarantined(...)), in case it somehow didn't already get the message sent in 1.
  3. When a node learns it is quarantined as a result of 2.b above, it triggers the SBR to down itself via ThisActorSystemQuarantinedEvent, and this is what brings the whole cluster down.
  4. The problem is triggered by 2.b above, which sends the Quarantined control message even for harmless quarantines, although that case is carefully avoided in 1. InboundQuarantineCheck doesn't rely on the quarantined status being passed via an event; instead it is accessed directly via env.association and association.associationState.isQuarantined(). At this stage, however, we have lost whether the quarantine was harmless or not. This extra flag should be kept in the Association state so that it can be recovered in InboundQuarantineCheck (a rough sketch of this idea follows the list).
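
A rough sketch of the shape such a change could take; QuarantineRecord, AssociationStateModel and onInboundFromQuarantined are illustrative stand-ins, not the real Association/InboundQuarantineCheck code:

```scala
object HarmlessQuarantineSketch {
  // Track whether a quarantine was harmless alongside the quarantined UID.
  final case class QuarantineRecord(remoteUid: Long, harmless: Boolean)

  final class AssociationStateModel(quarantined: Map[Long, QuarantineRecord]) {
    def isQuarantined(uid: Long): Boolean = quarantined.contains(uid)
    def isHarmless(uid: Long): Boolean = quarantined.get(uid).exists(_.harmless)
  }

  // Step 2.b above: only notify the remote side when the quarantine was not harmless;
  // for harmless quarantines the inbound message is still dropped, just quietly.
  def onInboundFromQuarantined(
      state: AssociationStateModel,
      remoteUid: Long,
      sendQuarantinedControl: () => Unit): Unit =
    if (state.isQuarantined(remoteUid) && !state.isHarmless(remoteUid))
      sendQuarantinedControl()
}
```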

About reproducing: I'm not sure, because it's not clear to me how a node, when shutting down, can quarantine associations to others with harmless=true. However, if that can be done, I'd suggest:

  • start a cluster with 2 nodes, A and B.
  • shut down A such that it quarantines the association to B with harmless=true
  • send messages from B to A; this should trigger InboundQuarantineCheck on A to send Quarantined to B (and B downs itself as a result), bringing the whole cluster down.

fredfp avatar Nov 07 '24 11:11 fredfp

@fredfp I haven't had much time to look at reproducing the issue. I checked in my initial attempt; see https://github.com/apache/pekko/pull/1555

I found an existing test that did quarantining and added a new test. If you have time, would you be able to look at extending that test to cause the shutdown issue?

pjfanning avatar Nov 08 '24 12:11 pjfanning

It seems the problem is that harmless=true/false is not taken into account. I'm not using Cluster at work, so I need more time to work out the problem.

He-Pin avatar Dec 28 '24 19:12 He-Pin

@zhxiaogg Maintaining the cluster yourself is a good option; for example, we could also use something like VipServer internally. This kind of decentralized approach feels prone to problems, after all we regularly run network-partition drills internally.

He-Pin avatar Dec 28 '24 19:12 He-Pin

An experimental fix is in 1.2 snapshots - #1555

pjfanning avatar Jan 04 '25 18:01 pjfanning