scale-server-slots on Ingress resource is ignored
appVersion: 3.0.1
Using the following Ingress:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-thanos
  namespace: monitoring
  annotations:
    haproxy.org/scale-server-slots: "4"
spec:
  ingressClassName: haproxy-thanos
  rules:
    - host: "..."
      http:
        ...
```
the backend still scales to 42.
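A quick way to confirm what the controller actually rendered is to count the generated server lines in the running config (the label selector, namespace, and path are assumptions based on the default helm chart; adjust to your deployment):

```shell
# Find the ingress controller pod (label selector is an assumption
# based on the default helm chart values; adjust as needed).
POD=$(kubectl get pods -n monitoring \
      -l app.kubernetes.io/name=kubernetes-ingress \
      -o jsonpath='{.items[0].metadata.name}')

# Count the "server SRV_" slot lines in the rendered haproxy.cfg
# (reports 42 here, despite the annotation asking for 4).
kubectl exec -n monitoring "$POD" -- \
  grep -c 'server SRV_' /etc/haproxy/haproxy.cfg
```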
Hi @mike-code ,
Could you check if you have either of the legacy annotations `servers-increment` or `server-slots` in your Ingress Controller ConfigMap?
The ConfigMap (created by helm chart) is empty
Hi @mike-code, can you also check the same thing on the Service?
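For reference, the annotation on the Service can be inspected and set with kubectl (the Service name and namespace below are placeholders, not taken from the report):

```shell
# Show the annotations currently set on the Service
# (service name "thanos" is a placeholder; adjust to your deployment).
kubectl get service thanos -n monitoring \
  -o jsonpath='{.metadata.annotations}'

# Set (or overwrite) the scale-server-slots annotation on the Service.
kubectl annotate --overwrite service thanos -n monitoring \
  haproxy.org/scale-server-slots="4"
```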
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
@hdurand0710
Both the Service (k8s Service) and the Ingress have the haproxy.org/scale-server-slots: "4" annotation, yet the haproxy stats page shows 42 server entries.
I don't have any "legacy" configuration, i.e. this is a fresh haproxy ingress instance from the latest helm chart.
btw. Is it possible to reduce the default number of server slots per http backend from 42 to 1? The excessive number of "virtual" servers pollutes the metrics with entries that will never be running.
@mike-code
I did try to reproduce with the latest helm chart, with haproxy.org/scale-server-slots: "4" both on the Ingress and on the Service (deployment/ingress/service http-echo), and nothing in the ConfigMap.
I then scaled http-echo to 9 replicas.
Here is what I get in the stats for http-echo:
Info:
- app.kubernetes.io/version: 3.0.1
- helm.sh/chart: kubernetes-ingress-1.41.0
I was not able to reproduce.
So, in order to try to reproduce and solve your issue, could you send me:
- a screenshot of your stats?
- the sequence you performed on the faulty deployment (scale from 1 to x, then back to y, ...). What is the current number of pods? Did you at some point scale to 40 pods?
- the content of the `haproxy.cfg` file (`/etc/haproxy/haproxy.cfg` on the ingress controller pod)
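If it helps, the file can be pulled off the controller pod like this (the pod name and namespace are placeholders):

```shell
# Dump the rendered configuration from the ingress controller pod
# ("<controller-pod>" and the namespace are placeholders; adjust them).
kubectl exec -n monitoring <controller-pod> -- \
  cat /etc/haproxy/haproxy.cfg > haproxy.cfg
```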
Thanks for your help.
@hdurand0710 hm, it worked now but there are two things that are not right:
- scale-server-slots on `Ingress` has no effect
- On the `Service` resource however, if I set the `haproxy.org/scale-server-slots: "2"` annotation and restart the haproxy pod, I will see 2 SRV_x entries (as expected). Then if I change the Service resource and scale from "2" -> "5", the stats page will show 5 SRV_x (as expected). But if I now downscale to "3" ("5" -> "3"), the stats page will show 6(!) SRV_x. And if I downscale further ("3" -> "2"), the stats page won't change at all (it won't reload the config, because I can see `status` not being reset).

Is this expected?
@mike-code ,
- scale-server-slots on Ingress has no effect
I could not reproduce this.
When I set scale-server-slots on Ingress (and not on Service, neither on ConfigMap), it's working.
I guess that you removed the scale-server-slots annotation from the Service. If you did not, then the annotation on the Service takes precedence over the one on the Ingress.
- On Service resource however, ...
When you scale from "2" to "5", it's expected that you get a number of SRV_X >= the number of pods, with the number of SRV_X being a multiple of scale-server-slots. So you should get at least 6 SRV_X, which is what I get when reproducing the same scaling.
If you downscale, the number of SRV_X remains the same: it only increases, never decreases. The number of SRV_X in UP state should equal the number of pods; the other ones should be in MAINT.
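The growth rule described above can be sketched as a simplified model (not the controller's actual code): the slot count rounds the pod count up to the next multiple of scale-server-slots, and never shrinks afterwards.

```shell
#!/bin/sh
# Simplified model of slot growth: slots grow to the next multiple of
# scale-server-slots needed to cover the pod count, and never shrink.
step=2        # haproxy.org/scale-server-slots
slots=$step
for pods in 2 5 3 2; do        # the scaling sequence from the report
  needed=$(( (pods + step - 1) / step * step ))
  if [ "$needed" -gt "$slots" ]; then slots=$needed; fi
  echo "pods=$pods -> SRV slots=$slots"
done
```

Running this reproduces the reported behavior: scaling to 5 pods yields 6 slots, and the count stays at 6 through the downscales back to 3 and 2.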