Clustering issues leading to all nodes being downed
I'm reopening here an issue that I originally reported in the akka repo.
We had a case where an issue on a single node led to the whole Akka cluster being taken down.
Here's a summary of what happened:
- Healthy cluster made of 20ish nodes, running on k8s
- Node A: encounters issues, triggers CoordinatedShutdown
- Node A: experiences high CPU usage, maybe GC pause
- Node A: sees B as unreachable and broadcasts it (B is in fact reachable, but is detected as unreachable because of high CPU usage, GC pause, or similar issues)
- Cluster state: A Leaving, B seen unreachable by A, all the other nodes are Up
- Leader cannot currently perform its duties (removing A) because of the reachability status (B seen as unreachable by A)
- Node A: times out some coordinated shutdown phases. Hypothesis: they timed out because the leader could not remove A.
- Node A: finishes coordinated shutdown nonetheless.
- hypothesis - Node A: quarantined associations to other cluster nodes
- Nodes B, C, D, E: SBR took decision DownSelfQuarantinedByRemote and is downing [...] including myself
- hypothesis - Node B, C, D, E: quarantined associations to other cluster nodes
- in a few steps, all remaining cluster nodes down themselves: SBR took decision DownSelfQuarantinedByRemote
- the whole cluster is down
Discussions, potential issues:
Considering the behaviour of CoordinatedShutdown (phases can time out and shutdown continues), shouldn't the leader ignore unreachabilities added by a Leaving node and be allowed to perform its duties? At step 6 above, the Leader was blocked from removing A, but A still continued its shutdown process. The catastrophic ending could have been stopped here.
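To make the "phases can time out and shutdown continues" point concrete, here is a minimal sketch of the coordinated-shutdown phase settings involved. It assumes the standard `pekko.coordinated-shutdown` keys (`default-phase-timeout`, per-phase `timeout` / `recover`) carry over from Akka unchanged; the values are purely illustrative.

```scala
// Hedged sketch: standard coordinated-shutdown phase settings (default-phase-timeout,
// per-phase timeout/recover). Values are illustrative, not recommendations.
import com.typesafe.config.ConfigFactory
import org.apache.pekko.actor.ActorSystem

object CoordinatedShutdownTimeoutsExample {
  private val config = ConfigFactory.parseString(
    """
    pekko.coordinated-shutdown {
      # fallback timeout for phases that don't define their own
      default-phase-timeout = 5 s
      # give the leader more time to move the node Leaving -> Exiting -> removed
      phases.cluster-leave.timeout = 30 s
      # recover = on: a timed-out phase only logs a warning and shutdown continues,
      # which is the behaviour referred to above
      phases.cluster-leave.recover = on
    }
    """).withFallback(ConfigFactory.load())

  def main(args: Array[String]): Unit = {
    val system = ActorSystem("example", config)
    // CoordinatedShutdown picks these settings up automatically when the node leaves.
    system.terminate()
  }
}
```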
DownSelfQuarantinedByRemote: @patriknw 's comment seems spot on.
At step 9, nodes B, C, D, E should probably not take into account the Quarantined from a node that is Leaving.
DownSelfQuarantinedByRemote: another case where Patrik's comment also seems to apply, Quarantined from nodes downing themselves because of DownSelfQuarantinedByRemote should probably not be taken into account.
At steps 10 and 12. Any cluster singletons running on affected nodes wouldn't be gracefully shut down using the configured termination message. This is probably the right thing to do but I'm adding this note here nonetheless.
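For readers unfamiliar with the termination message mentioned above: it is the one passed to `ClusterSingletonManager.props`. A minimal sketch, where the singleton actor and names are placeholders of mine:

```scala
// Minimal sketch showing where the singleton termination message is configured;
// MySingleton and the manager name are placeholders, not taken from the issue above.
import org.apache.pekko.actor.{ Actor, ActorSystem, PoisonPill, Props }
import org.apache.pekko.cluster.singleton.{ ClusterSingletonManager, ClusterSingletonManagerSettings }

class MySingleton extends Actor {
  def receive: Receive = { case msg => sender() ! msg }
}

object SingletonSetup {
  def start(system: ActorSystem): Unit =
    system.actorOf(
      ClusterSingletonManager.props(
        singletonProps = Props[MySingleton](),
        // delivered to the singleton on a graceful hand-over; a node that downs
        // itself via SBR never gets to send it
        terminationMessage = PoisonPill,
        settings = ClusterSingletonManagerSettings(system)),
      name = "my-singleton-manager")
}
```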
I have extra logs that may be useful:
Remote ActorSystem must be restarted to recover from this situation. Reason: Cluster member removed, previous status [Down]
I also encountered the same problem, which caused my cluster to keep restarting. Is there a plan to fix it? When is a fix expected?
@fredfp Can you give us more info on this - https://github.com/akka/akka/issues/31095#issuecomment-1682261286
On the Apache Pekko side, we can read the Akka issues but not the Akka PRs (due to the Akka license not being compatible with Apache Pekko).
The issue appears to be with split brain scenarios from my reading of https://github.com/akka/akka/issues/31095 - specifically DownSelfQuarantinedByRemote events. Is it possible that we should just ignore DownSelfQuarantinedByRemote events when it comes to deciding to shut down the cluster?
@pjfanning I think the issue can happen when a node shuts down during a partition.
Still, DownSelfQuarantinedByRemote events cannot simply be ignored. The root cause is that, in some harmless cases, nodes learn they were quarantined by others when they should not.
Indeed, some quarantines are harmless (as indicated by the method argument: https://github.com/apache/pekko/blob/main/remote/src/main/scala/org/apache/pekko/remote/artery/Association.scala#L534). And the issue is that such harmless quarantines should not be communicated to the other side, i.e., across the quarantined association. However, they currently always are: https://github.com/apache/pekko/blob/main/remote/src/main/scala/org/apache/pekko/remote/artery/InboundQuarantineCheck.scala#L47
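To restate the above in code, here is a small self-contained model of the two paths as I read them; all types are simplified stand-ins of mine, not the actual Association / InboundQuarantineCheck internals:

```scala
// Simplified model of the behaviour described above; everything here is an
// illustrative stand-in, not the real Pekko Association/InboundQuarantineCheck code.
object HarmlessQuarantineModel {
  // stand-in for the Quarantined control message sent over the wire
  final case class Quarantined(from: String, to: String)

  // stand-in for the relevant part of the association state
  final case class AssociationState(isQuarantined: Boolean)

  // Path 1 (quarantining the association): the control message is skipped when
  // the quarantine is harmless ...
  def quarantine(state: AssociationState, harmless: Boolean, sendControl: Quarantined => Unit): AssociationState = {
    if (!harmless) sendControl(Quarantined("A", "B"))
    state.copy(isQuarantined = true) // ... but the state no longer records *why*
  }

  // Path 2 (the inbound check): it only sees isQuarantined, so it re-sends the
  // control message even for a harmless quarantine, and the remote node then
  // downs itself (DownSelfQuarantinedByRemote).
  def onInboundEnvelope(state: AssociationState, sendControl: Quarantined => Unit): Unit =
    if (state.isQuarantined) sendControl(Quarantined("A", "B"))
}
```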
@pjfanning Is there a plan to fix this issue? When is a fix expected?
I also experienced the same issue, leading to continuous restarts of my cluster. Is there a scheduled resolution for this? When can we anticipate a fix?
@pjfanning Can you suggest a way to fix this bug as soon as possible? Thank you very much.
This bug should hit quite seldom; if it happens often, it most likely means something is not right with your cluster and you should fix that first in all cases. Especially, make sure:
- there's always CPU available for the cluster management duties (this means GC pauses need to be short)
- not to use Pekko's internal thread pool for your own workloads (a dispatcher sketch follows this list)
- rolling updates are slow enough that the cluster stays stable while they run
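A minimal sketch of the dispatcher point, assuming you have blocking or CPU-heavy work to isolate; the dispatcher name and pool size are illustrative:

```scala
// Sketch of running your own (especially blocking) workload on a dedicated
// dispatcher instead of Pekko's default one; name and sizes are illustrative.
import com.typesafe.config.ConfigFactory
import org.apache.pekko.actor.ActorSystem
import scala.concurrent.{ ExecutionContext, Future }

object DedicatedDispatcherExample {
  private val config = ConfigFactory.parseString(
    """
    my-blocking-dispatcher {
      type = Dispatcher
      executor = "thread-pool-executor"
      thread-pool-executor.fixed-pool-size = 16
      throughput = 1
    }
    """).withFallback(ConfigFactory.load())

  def main(args: Array[String]): Unit = {
    val system = ActorSystem("example", config)
    // Heavy/blocking work goes here, so heartbeats, gossip and SBR keep getting
    // CPU time on the default dispatcher.
    implicit val blockingEc: ExecutionContext = system.dispatchers.lookup("my-blocking-dispatcher")
    Future {
      // blocking or CPU-heavy call
    }
  }
}
```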
The issue also appears in systems with heavy memory usage and long GC pauses. It is worth checking the GC strategy, GC settings, GC metrics, etc.
How about using the classic transport for now? It seems the issue lives in Artery only.
- Running Akka 2.8.5 earlier on k8s resulted in a single node restart leading to cluster down (high memory and CPU)
- The above issues did not occur when running Akka 2.8.5 on the k8s cluster
- The above issues did not occur when using Akka to access the Nacos registration cluster
- Running Pekko 1.0.2 on k8s resulted in a single node restart causing cluster down
IIRC, Akka 2.8.x requires the BSL :) I don't have an env to reproduce the problem; maybe you can work out a multi-jvm test for that? I'm still super busy at work :(
Currently my k8s cluster has 26 pods running. When one of the pods restarts because of insufficient resources, it often brings the whole cluster down. We process a fairly large amount of data and resource usage is high. On other clusters (for example, running in Docker and registering with Nacos), this problem has not appeared so far.
Hello, has there been any progress on this issue? Is there a plan for when it will be fixed?😀
For Kubernetes users, we would suggest using the Kubernetes Lease described here: https://pekko.apache.org/docs/pekko/current/split-brain-resolver.html#lease
Pekko Management 1.1.0-M1 has a 2nd implementation of the Lease - the legacy one is CRD based while the new one uses Kubernetes native leases. https://github.com/apache/pekko-management/pull/218
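For reference, enabling that looks roughly like the following. The keys follow the linked SBR docs; the downing-provider and lease-implementation values are my assumptions and should be checked against your pekko / pekko-management versions.

```scala
// Rough sketch of SBR with the lease-majority strategy backed by the Kubernetes
// lease; config keys follow the linked docs, and the provider/lease identifiers
// are assumptions to verify against the versions in use.
import com.typesafe.config.ConfigFactory
import org.apache.pekko.actor.ActorSystem

object SbrWithKubernetesLease {
  private val config = ConfigFactory.parseString(
    """
    pekko.cluster {
      downing-provider-class = "org.apache.pekko.cluster.sbr.SplitBrainResolverProvider"
      split-brain-resolver {
        active-strategy = lease-majority
        lease-majority {
          # provided by the pekko-management Kubernetes lease module
          lease-implementation = "pekko.coordination.lease.kubernetes"
        }
      }
    }
    """).withFallback(ConfigFactory.load())

  def start(): ActorSystem = ActorSystem("example", config)
}
```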
> For Kubernetes users, we would suggest using the Kubernetes Lease described here: https://pekko.apache.org/docs/pekko/current/split-brain-resolver.html#lease
That's what we use already and it didn't help in the current case. Do you expect it to resolve (or avoid) this issue? I think the lease helps the surviving partition confirm it can indeed stay up; it however doesn't help with the nodes downing themselves, which is the observed behaviour described above.
> Pekko Management 1.1.0-M1 has a 2nd implementation of the Lease - the legacy one is CRD based while the new one uses Kubernetes native leases. apache/pekko-management#218
Thank you for pointing it out, looking forward to it!
@fredfp It's good to hear that using the Split Brain Resolver with a Kubernetes Lease stops all the nodes from downing themselves. When you lose some of the nodes, are you finding that you have to manually restart them or can Kubernetes handle automatically restarting them using liveness and/or readiness probes?
> @fredfp It's good to hear that using the Split Brain Resolver with a Kubernetes Lease stops all the nodes from downing themselves.
Sorry, let me be clearer: using the SBR with a Kubernetes Lease does not stop all the nodes from downing themselves.
> When you lose some of the nodes, are you finding that you have to manually restart them or can Kubernetes handle automatically restarting them using liveness and/or readiness probes?
When a node downs itself, the Java process (running inside the container) terminates. The container is then restarted by k8s as usual; the liveness/readiness probes do not play a part in that. Does that answer your question?
I think the main issue is Association.quarantine where the harmless flag is not passed on here: https://github.com/apache/pekko/blob/726ddbfd43cf1e1f81254df2f5b715ace0a817cf/remote/src/main/scala/org/apache/pekko/remote/artery/Association.scala#L552
Since GracefulShutdownQuarantinedEvent only appears to be used for harmless=true quarantine events, we might be able to find the event subscribers and have them handle GracefulShutdownQuarantinedEvent in a different way to standard QuarantinedEvent instances. For example, https://github.com/apache/pekko/blob/726ddbfd43cf1e1f81254df2f5b715ace0a817cf/remote/src/main/scala/org/apache/pekko/remote/artery/InboundQuarantineCheck.scala#L31
I found 3 places where harmless=true quarantine events can be kicked off - but there could be more.
https://github.com/search?q=repo%3Aapache%2Fpekko%20%22harmless%20%3D%20true%22&type=code
I tried yesterday to write a unit test that artificially causes a harmless quarantine and examines the results, but so far I haven't reproduced the cluster shutdown. I think having a reproducible case is the real blocker on this issue.
Here's my understanding:
1. When initially marking an association as quarantined, the `Quarantined` control message is not sent to the remote when harmless is true: https://github.com/apache/pekko/blob/8cb7d256dcc1498b79a9fff815146fb5b1f451f0/remote/src/main/scala/org/apache/pekko/remote/artery/Association.scala#L569-L572
2. Now `InboundQuarantineCheck` comes into play (used in `ArteryTransport.inboundSink` and `.inboundControlSink`). It serves 2 purposes: a) drop messages incoming through a quarantined association and b) tell the remote node again that it is quarantined, using `inboundContext.sendControl(association.remoteAddress, Quarantined(...))`, in case it somehow didn't already get the message sent in 1.
3. When a node learns it is quarantined as a result of 2.b above, it will trigger the SBR to down itself via `ThisActorSystemQuarantinedEvent`, and this is what brings the whole cluster down.
4. The problem is triggered by 2.b above, which sends the `Quarantined` control message also for harmless quarantines, while this case is carefully avoided in 1. We see in `InboundQuarantineCheck` that it doesn't rely on the quarantined status being passed via an event; instead it is accessed directly via `env.association` and `association.associationState.isQuarantined()`. At this stage, however, we have lost whether the quarantine was harmless or not. This extra flag should be kept in the `Association` state so that it can be recovered in `InboundQuarantineCheck`.
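To make the last point concrete, here is a sketch of the suggested fix in the same simplified model used earlier in this thread; again these are stand-in types of mine, not the real Pekko internals:

```scala
// Sketch of the suggested fix in the same simplified model as above: remember
// whether the quarantine was harmless, so the inbound check can skip re-sending
// the Quarantined control message. Not the real Pekko types.
object HarmlessQuarantineFixSketch {
  final case class Quarantined(from: String, to: String)

  // the extra flag kept alongside the quarantined status
  final case class AssociationState(isQuarantined: Boolean, harmless: Boolean)

  def quarantine(state: AssociationState, harmless: Boolean, sendControl: Quarantined => Unit): AssociationState = {
    if (!harmless) sendControl(Quarantined("A", "B"))
    state.copy(isQuarantined = true, harmless = harmless) // keep the flag
  }

  def onInboundEnvelope(state: AssociationState, sendControl: Quarantined => Unit): Unit =
    // only re-announce the quarantine when it was not harmless
    if (state.isQuarantined && !state.harmless) sendControl(Quarantined("A", "B"))
}
```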
About reproducing, I'm not sure because it's not clear to me how a node, when shutting down, can quarantine associations to others with harmless=true. However, if that can be done I'd suggest:
- start a cluster with 2 nodes A, B.
- shut down A such that it quarantines the association to B with `harmless = true`
- send messages from B to A; this should trigger `InboundQuarantineCheck` in A to send `Quarantined` to B (and B shutting down as a result), leading the whole cluster to be down.
@fredfp I haven't had much time to look at reproducing the issue - I checked in my initial attempt - see https://github.com/apache/pekko/pull/1555
I found an existing test that did quarantining and added a new test. If you have time, would you be able to look at extending that test to cause the shutdown issue?
Seems the problem is that harmless=true/false is not taken into account. I'm not using Cluster at work, so I need more time to work out the problem.
@zhxiaogg Maintaining the cluster ourselves works fine; for example, internally we could also use something like VipServer. This kind of decentralized setup feels prone to problems, after all we often run network-partition drills internally.
An experimental fix is in 1.2 snapshots - #1555