[Bug] - 3.32.0+ - Helm Chart generates 2 'affinity' blocks in the same YAML 'spec' block
Hey,
I tested this, and the issue appears on both 3.32.0 and 3.33.1. It looks like it is caused by the additional nodeAffinity rule you added for the daemonsets.
The problem appears when we set the platform to `eks`.
The chart then wants to add this block to the daemonset YAML:
```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: eks.amazonaws.com/compute-type
          operator: NotIn
          values:
          - fargate
```
However, your recent changes add this to the same block:
```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/arch
          operator: In
          values:
          - arm64
          - amd64
```
So here is how the relevant rendered YAML looks:
```yaml
# Source: cloudguard/templates/imagescan/daemon/daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: release-name-imagescan-daemon
  namespace: argocd
  labels:
    helm.sh/chart: cloudguard-2.33.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/version: 2.33.1
    app.created.by.template: "true"
    app.kubernetes.io/name: release-name-imagescan-daemon
    app.kubernetes.io/instance: release-name
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: release-name-imagescan-daemon
      app.kubernetes.io/instance: release-name
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 50%
  template:
    metadata:
      annotations:
        checksum/cgsecret:
        checksum/config:
        checksum/regsecret:
      labels:
        helm.sh/chart: cloudguard-2.33.1
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/version: 2.33.1
        app.created.by.template: "true"
        app.kubernetes.io/name: release-name-imagescan-daemon
        app.kubernetes.io/instance: release-name
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                - arm64
                - amd64
      priorityClassName: system-node-critical
      securityContext:
        runAsUser: 17112
        runAsGroup: 17112
        seccompProfile:
          type: RuntimeDefault
      serviceAccountName: release-name-imagescan-daemon
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: eks.amazonaws.com/compute-type
                operator: NotIn
                values:
                - fargate
```
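For comparison, a valid pod spec would carry a single `affinity` key. The sketch below is only my assumption of what the merged result should look like, not what the chart actually renders; putting both expressions inside one `nodeSelectorTerms` entry gives AND semantics, i.e. schedule only on arm64/amd64 nodes that are not Fargate:

```yaml
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              # multiarch requirement
              - key: kubernetes.io/arch
                operator: In
                values:
                - arm64
                - amd64
              # EKS-only requirement
              - key: eks.amazonaws.com/compute-type
                operator: NotIn
                values:
                - fargate
```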
We can see this happening in these two templates:
- cloudguard/templates/imagescan/daemon/daemonset.yaml
- cloudguard/templates/flowlogs/daemon/daemonset.yaml
To replicate it, you need to enable those addons and set the platform to eks in the values.yaml:
```yaml
platform: eks
addons:
  flowLogs:
    enabled: true
  imageScan:
    enabled: true
```
It looks like this is caused by the `affinity:` key added in the template itself, while a second `affinity:` key also comes from the `common.pod.properties` function in `_helpers.tpl`:
```yaml
      affinity:
{{ include "common.node.affinity.multiarch" $config | indent 8 }}
{{ include "common.pod.properties" $config | indent 6 }}
```
I tried to fix it in the relevant templates, but it is tied to the `common.pod.properties` function, which is used in many places across the chart.
Hi @talron23, let me see if I got this correctly: you see that the affinities are not merged, so only the last one takes effect. I assume you didn't define a custom affinity, did you? We will check it and update.
Yes, the affinities are not merged. We use kustomize, and it fails to parse the manifest; this is the error from kustomize:

```
Error: map[string]interface {}(nil): yaml: unmarshal errors: line 57: mapping key "affinity" already defined at line 38
```
No custom affinity.
I see. I will update you on our progress. It's weird that kustomize doesn't ignore this like Kubernetes does.
@talron23 apparently I forgot to update you: this issue was fixed a few months ago in our chart version 2.34.0.
@talron23 can you please close the issue unless you have any other questions?