Using deployments.*.kubectl.patches with kustomize creates invalid YAML
**What happened?**

When using kustomize in DevSpace, multiline strings in YAML can become incorrectly formatted, causing an error when applying the manifests.

**What did you expect to happen instead?**

DevSpace should produce correctly formatted YAML.

**How can we reproduce the bug? (as minimally and precisely as possible)**

My `devspace.yaml`:
```yaml
version: v2beta1
name: edgefarm-applications

pipelines:
  deploy-vela-core: |-
    #!/bin/bash
    set -e
    create_deployments vela-system

deployments:
  vela-system:
    kubectl:
      kustomize: true
      kustomizeArgs: ["--enable-helm"]
      manifests:
        - ./manifests/vela-system
    namespace: vela-system
```
`manifests/vela-system/kustomization.yaml`:
```yaml
resources:
  - namespace.yaml

helmCharts:
  - name: vela-core
    repo: https://charts.kubevela.net/core
    version: 1.8.0
    releaseName: vela-core
    namespace: vela-system
    includeCRDs: true
```
`manifests/vela-system/namespace.yaml`:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: null
  name: vela-system
spec: {}
status: {}
```
**Local Environment:**
- DevSpace Version: [use `devspace --version`]
- Operating System: windows | linux | mac
- ARCH of the OS: AMD64 | ARM64 | i386

**Kubernetes Cluster:**
- Cloud Provider: google | aws | azure | other
- Kubernetes Version: [use `kubectl version`]
**Anything else we need to know?**
kustomize renders the Helm chart and converts the multiline string's `|` header to `|2`, which is correct. DevSpace then attempts to apply its patches, marshalling and unmarshalling the manifests with gopkg.in/yaml.v3. That version of go-yaml indents with 4 spaces by default and incorrectly converts `|2` to `|4` (a go-yaml bug), so the resulting YAML is invalid and kubectl throws an error.
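To see the round-trip in isolation, here is a minimal sketch using gopkg.in/yaml.v3 directly: it unmarshals a manifest and marshals it back with the library's default 4-space indent, decoding into a `yaml.Node` here to keep the document structure intact. The `ConfigMap` is a made-up stand-in for kustomize's output (not actual vela-core output), chosen only because the first line of its block scalar starts with spaces and therefore needs the explicit `|2` indicator:

```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// Stand-in for kustomize's rendered output: the first line of the block
// scalar starts with spaces, so the explicit |2 indicator is required.
const rendered = `apiVersion: v1
kind: ConfigMap
metadata:
  name: demo
data:
  script: |2
      first line starts with two spaces
    second line does not
`

func main() {
	// Decode the manifest into a node tree, as a patching step would...
	var doc yaml.Node
	if err := yaml.Unmarshal([]byte(rendered), &doc); err != nil {
		panic(err)
	}

	// ...then re-encode it. yaml.v3's Marshal uses a 4-space indent by
	// default, and the block scalar header is rewritten along the way.
	out, err := yaml.Marshal(&doc)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```

Printing the output shows the rewritten block scalar header, which is the same mangling that kubectl later rejects.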
FWIW, kustomize created a fork of go-yaml to deal with this issue.
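I have not verified this against DevSpace itself, but if the encoder were configurable, one conceivable mitigation would be to re-encode with a 2-space indent via yaml.v3's `Encoder.SetIndent`, matching kustomize's own output style so the indicator stays consistent with the emitted indentation. A sketch under that assumption (kustomize's fork instead fixes the emitter itself):

```go
package main

import (
	"bytes"
	"fmt"

	"gopkg.in/yaml.v3"
)

// Illustrative input with an explicit |2 indentation indicator.
const rendered = "data:\n  script: |2\n      indented first line\n    second line\n"

func main() {
	var doc yaml.Node
	if err := yaml.Unmarshal([]byte(rendered), &doc); err != nil {
		panic(err)
	}

	var buf bytes.Buffer
	enc := yaml.NewEncoder(&buf)
	// Use kustomize's 2-space style instead of yaml.v3's 4-space default.
	enc.SetIndent(2)
	if err := enc.Encode(&doc); err != nil {
		panic(err)
	}
	if err := enc.Close(); err != nil {
		panic(err)
	}
	fmt.Print(buf.String())
}
```

Whether this sidesteps the emitter bug in every case is untested here.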