Flagger Canary not syncing KEDA ScaledObject from original to primary
Describe the bug
I am not sure whether this is a missing feature or a bug.
When I create a Flagger Canary with autoscalerRef.kind=ScaledObject, the primary ScaledObject is created successfully, as described in the documentation: https://docs.flagger.app/tutorials/keda-scaledobject
The problem is that updates to the original ScaledObject do not get copied to the primary ScaledObject.
I have tailed the controller log, and no action is taken when the original ScaledObject changes.
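For reference, a minimal sketch of how the log can be tailed, assuming Flagger is installed as deploy/flagger in the flagger-system namespace (adjust to your install):

```sh
# Follow the Flagger controller log and filter for ScaledObject activity.
# Assumes the deployment is named "flagger" in the "flagger-system" namespace.
kubectl -n flagger-system logs deploy/flagger -f | grep -i scaledobject
```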
To Reproduce
- Follow the https://docs.flagger.app/tutorials/keda-scaledobject guide to create the resources:
```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  provider: kubernetes
  # deployment reference
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  # Scaler reference
  autoscalerRef:
    apiVersion: keda.sh/v1alpha1
    kind: ScaledObject
    # ScaledObject targeting the canary deployment
    name: podinfo-so
  # ...
```
- Change the original ScaledObject podinfo-so to different values: cooldownPeriod from 20 to 15, minReplicaCount from 1 to 2 (see the check after the manifest below):
```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: podinfo-so
  namespace: test
spec:
  scaleTargetRef:
    name: podinfo
  pollingInterval: 10
  cooldownPeriod: 15  # changed from 20
  minReplicaCount: 2  # changed from 1
  maxReplicaCount: 3
  triggers:
    - type: prometheus
      metadata:
        name: prom-trigger
        serverAddress: http://flagger-prometheus.flagger-system:9090
        metricName: http_requests_total
        query: sum(rate(http_requests_total{ app="podinfo" }[30s]))
        threshold: '5'
```
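A quick way to confirm that the edit did not propagate is to compare the changed field on both objects. A sketch, assuming the primary is named podinfo-so-primary per Flagger's naming convention:

```sh
# Compare minReplicaCount on the original and the primary ScaledObject.
kubectl -n test get scaledobject podinfo-so -o jsonpath='{.spec.minReplicaCount}{"\n"}'
kubectl -n test get scaledobject podinfo-so-primary -o jsonpath='{.spec.minReplicaCount}{"\n"}'
# After the edit: the original reports 2, while the primary still reports 1.
```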
Expected behavior
The Flagger controller should monitor podinfo-so and update podinfo-so-primary. Why? It is simply more elegant and means fewer moving parts to maintain when considering Flagger adoption.
KEDA is a controller, and Flagger is also a controller. So in the case above, Flagger actually gets the "original deployment" replica count wrong, because it is now controlled by KEDA:
podinfo -> count = 2 (very unexpected; expected 0)
podinfo-primary -> count = 1
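The counts above can be read directly from the two Deployments; a hypothetical check, assuming the tutorial's resource names:

```sh
# Show the replica counts of the canary and primary Deployments.
kubectl -n test get deploy podinfo podinfo-primary \
  -o custom-columns=NAME:.metadata.name,REPLICAS:.spec.replicas
```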
In fact, all settings should be deep-copied, including the Deployment/Service/Ingress/ScaledObject, and then values overwritten where needed.
Additional context
- Flagger version: 1.27.0
- Kubernetes version: 1.22
- Service Mesh provider: na
- Ingress provider: nginx-ingress
It seems the pod count = 2 issue is solved in 1.28.
I found the annotation:
```
$ kubectl get scaledobject/podinfo-so -o yaml -n test
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  annotations:
    autoscaling.keda.sh/paused-replicas: "0"
```
```
$ kubectl get -n test hpa/podinfo-so -o yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  annotations:
    autoscaling.keda.sh/paused-replicas: "0"
```
The new changes in the primary ScaledObject will be reflected after you start a new canary analysis. This is documented here for HPAs; we should probably change the language to cover scalers in general.
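For anyone following along, a new analysis is typically triggered by changing the target workload. A sketch, assuming the container name (podinfod) and image tag from the podinfo tutorial manifests:

```sh
# Trigger a new canary analysis by updating the canary Deployment's image.
# Container name and tag are taken from the podinfo tutorial; adjust to your setup.
kubectl -n test set image deployment/podinfo \
  podinfod=ghcr.io/stefanprodan/podinfo:6.0.1
```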
Does this work with KEDA? I am not using HPA directly, but KEDA/HPA.