
scale-server-slots on Ingress resource is ignored

Open mike-code opened this issue 1 year ago • 8 comments

appVersion: 3.0.1

Using the following Ingress

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-thanos
  namespace: monitoring
  annotations:
    haproxy.org/scale-server-slots: "4"
spec:
  ingressClassName: haproxy-thanos
  rules:
  - host: "..."
    http:
      ...

the backend still scales to 42.

mike-code avatar Aug 26 '24 22:08 mike-code

Hi @mike-code ,

Could you check whether you have either of the legacy annotations servers-increment or server-slots in your Ingress Controller ConfigMap?
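For reference, one way to check is to dump the ConfigMap and grep for those keys. A sketch, using a hypothetical ConfigMap written to /tmp/cm.yaml for illustration (in a real cluster you would produce the dump with kubectl):

```shell
# Hypothetical ConfigMap contents for illustration only; in a real cluster,
# dump yours with: kubectl get configmap <name> -n <namespace> -o yaml
cat > /tmp/cm.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-kubernetes-ingress
data:
  servers-increment: "42"
EOF

# Flag any legacy slot-sizing keys that would override the annotation
grep -En 'servers-increment|server-slots' /tmp/cm.yaml
```

If the grep prints anything, those ConfigMap keys are present and worth removing before testing the annotation again.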

hdurand0710 avatar Aug 27 '24 07:08 hdurand0710

> Hi @mike-code ,
>
> Could you check whether you have either of the legacy annotations servers-increment or server-slots in your Ingress Controller ConfigMap?

The ConfigMap (created by helm chart) is empty

mike-code avatar Aug 27 '24 13:08 mike-code

Hi @mike-code Can you also check the same thing on the Service ?

hdurand0710 avatar Aug 29 '24 08:08 hdurand0710

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale[bot] avatar Sep 29 '24 14:09 stale[bot]

@hdurand0710 Both the Service (k8s Service) and the Ingress have the haproxy.org/scale-server-slots: "4" annotation, yet the HAProxy stats page shows 42 server entries.

I don't have any "legacy" configuration, i.e. this is a fresh haproxy ingress instance from the latest helm chart.

Btw, is it possible to reduce the default number of backend servers from 42 to 1? The excessive number of "virtual" servers pollutes the metrics with entries that will never be running.

mike-code avatar Sep 30 '24 00:09 mike-code

@mike-code I did try to reproduce with the latest helm chart, with haproxy.org/scale-server-slots: "4" both on the Ingress and on the Service (deployment/ingress/service http-echo), and nothing in the ConfigMap.

I scaled http-echo to 9. Here is what I get in the stats for http-echo: [Screenshot from 2024-09-30 08-39-08]

Info:

  • app.kubernetes.io/version: 3.0.1
  • helm.sh/chart: kubernetes-ingress-1.41.0

I was not able to reproduce.

So, in order to try to reproduce and solve your issue, could you send me:

  • a screenshot of your stats
  • the sequence of scaling operations you performed on the faulty deployment (scale from 1 to x? then back to y...). What is the current number of pods? Did you at some point scale to 40 pods?
  • the content of the haproxy.cfg file (/etc/haproxy/haproxy.cfg on the ingress controller pod)

Thanks for your help.

hdurand0710 avatar Sep 30 '24 06:09 hdurand0710

@hdurand0710 hm, it worked now but there are two things that are not right:

  1. scale-server-slots on Ingress has no effect
  2. On the Service resource, however, if I set the haproxy.org/scale-server-slots: "2" annotation and restart the haproxy pod, I see 2 SRV_x (as expected). If I then change the Service annotation from "2" to "5", the stats page shows 5 SRV_x (as expected). But if I now downscale to "3" ("5" -> "3"), the stats page shows 6(!) SRV_x. And if I downscale further ("3" -> "2"), the stats page doesn't change at all (it doesn't reload the config, because I can see the status not being reset).

Is this expected?

mike-code avatar Oct 02 '24 00:10 mike-code

@mike-code ,

  1. scale-server-slots on Ingress has no effect

I could not reproduce this. When I set scale-server-slots on the Ingress only (neither on the Service nor in the ConfigMap), it works. I guess you removed scale-server-slots from the Service first? If you did not, the annotation on the Service takes precedence over the one on the Ingress.

  2. On Service resource however, ...

When you scale from "2" to "5", it's expected that you get a number of SRV_X >= the number of pods, with the number of SRV_X being a multiple of scale-server-slots. So you should get at least 6 SRV_X, which is what I get when reproducing the same scaling. If you then downscale, the number of SRV_X should remain the same: it only ever increases, never decreases. The number of SRV_X in UP state should equal the number of pods; the others should be in MAINT.
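That sizing rule can be sketched as follows. This is my own illustration of the behavior described above (round the pod count up to the next multiple of scale-server-slots, and never shrink below the slot count already provisioned), not the controller's actual source code:

```shell
# required_slots <pods> <increment> <current_slots>
# Prints the smallest multiple of <increment> that covers <pods>,
# but never less than the slot count already provisioned.
required_slots() {
  local pods=$1 inc=$2 current=$3
  local needed=$(( (pods + inc - 1) / inc * inc ))
  echo $(( needed > current ? needed : current ))
}

required_slots 5 2 2   # scale 2 -> 5 pods: 6 slots (next multiple of 2)
required_slots 3 2 6   # downscale to 3 pods: still 6 slots (never shrinks)
```

With 9 pods and scale-server-slots: "4", this gives 12 slots, which matches the 3.0.1 reproduction scenario earlier in the thread.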

hdurand0710 avatar Oct 02 '24 07:10 hdurand0710

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale[bot] avatar Nov 04 '24 01:11 stale[bot]