Only download kubectl and provide volume if graylog version is < 4.2.0-0
I noticed that kubectl is always downloaded, but it is only used in the lifecycle hook when the Graylog version is < 4.2.0-0.
Which issue this PR fixes
Special notes for your reviewer
Checklist
- [x] DCO signed
- [x] Chart Version bumped
kubectl is used to set a label on the master node; the chart uses this label mechanism to choose the master node.
Ah okay, sorry, I overlooked this.
@KongZ do we really need this mechanism?
As stated in https://go2docs.graylog.org/5-0/downloading_and_installing_graylog/docker_installation.htm#KubernetesAutomaticMasterSelection we only need to set the POD_NAME env var.
The `/docker-entrypoint.sh` script contains a section like this:
```sh
# check if we are inside kubernetes, Graylog should be run as statefulset and $POD_NAME env var should be defined like this
# env:
#   - name: POD_NAME
#     valueFrom:
#       fieldRef:
#         fieldPath: metadata.name
# First stateful member is having pod name ended with -0, so
if [[ ! -z "${POD_NAME}" ]]
then
  if echo "${POD_NAME}" | grep "\\-0$" >/dev/null
  then
    export GRAYLOG_IS_LEADER="true"
  else
    export GRAYLOG_IS_LEADER="false"
  fi
fi
```
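For illustration, the entrypoint's leader check boils down to a suffix test on the pod name. This is a minimal sketch of that logic, assuming the same "pod name ends with `-0`" convention; `is_leader` is a hypothetical helper name, not part of the upstream script.

```shell
#!/bin/sh
# Sketch of the entrypoint's leader-election check: the StatefulSet
# member whose pod name ends in "-0" is treated as the leader.
is_leader() {
  case "$1" in
    *-0) echo "true" ;;   # ordinal 0, e.g. graylog-0
    *)   echo "false" ;;  # any other ordinal, e.g. graylog-1
  esac
}

is_leader "graylog-0"
is_leader "graylog-1"
```

Note that, like the `grep "\\-0$"` test in the real script, this matches only a literal `-0` suffix, so `graylog-10` is correctly not treated as the leader.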
The solution described in the doc above hard-codes pod-0 to be the master. The solution in this chart automatically promotes another pod to master if pod-0 is unable to run. Yes, it is not necessary if pod-0 is always running. The chart was originally created for Graylog version 2, when the container and application were not always stable, so a mechanism to elect the master from another pod was necessary.
I believe recent Graylog versions are stable, and we should be able to use pod-0 as the master, but I have not had a chance to test this.
@KongZ I commented out your script that determines the master node and added the suggested env var POD_NAME.
It works on my installation :)
I'm using this change as well, and it works much better, especially with the pod security settings (chown is disallowed), but I get an error that there are multiple leaders in the cluster.
@KongZ this can be closed out now.