
Single LoadBalancer with multiple ingress resources not supported?

Open nobodyAtall opened this issue 11 months ago • 5 comments

According to the docs: https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/octavia-ingress-controller/using-octavia-ingress-controller.md

The octavia-ingress-controller could solve all the above problems in the OpenStack environment by creating a single load balancer for multiple NodePort type services in an Ingress.

However, this doesn't appear to be possible: at the moment, each Ingress resource creates its own Octavia load balancer in OpenStack. For example, see the sketch below.
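A minimal sketch of what I mean (names and namespaces are placeholders; the ingress class annotation is the one used in the linked docs, and the backend Services are NodePort type as the docs require):

```yaml
# Sketch only: two Ingress resources handled by the octavia-ingress-controller.
# Per the docs I expected them to share one load balancer; in practice each
# one currently gets its own Octavia LB.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-a
  namespace: team-a
  annotations:
    kubernetes.io/ingress.class: "openstack"
spec:
  rules:
    - host: a.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-a        # NodePort Service in team-a (placeholder)
                port:
                  number: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-b
  namespace: team-b
  annotations:
    kubernetes.io/ingress.class: "openstack"
spec:
  rules:
    - host: b.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-b        # NodePort Service in team-b (placeholder)
                port:
                  number: 8080
```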

nobodyAtall avatar Mar 06 '25 17:03 nobodyAtall

One way to work around this is to have multiple hosts in ONE Ingress, but this is hardly a solution, since it means all your resources need to be in the same namespace for it to work. Roughly, the workaround looks like the sketch below.
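Sketch of the workaround (names are placeholders): a single Ingress with one rule per host, which yields one Octavia load balancer but forces every backend Service into that Ingress's namespace, because an Ingress backend can only reference Services in its own namespace.

```yaml
# Workaround sketch: one Ingress, multiple host rules -> one Octavia LB.
# The catch: every backend Service referenced here must live in the same
# namespace as the Ingress ("shared-ns" in this placeholder example).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shared-ingress
  namespace: shared-ns
  annotations:
    kubernetes.io/ingress.class: "openstack"
spec:
  rules:
    - host: a.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-a        # must be a NodePort Service in shared-ns
                port:
                  number: 8080
    - host: b.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-b        # must be a NodePort Service in shared-ns
                port:
                  number: 8080
```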

nobodyAtall avatar Mar 06 '25 17:03 nobodyAtall

/kind enhancement

AmarNathChary avatar Mar 16 '25 11:03 AmarNathChary

@AmarNathChary: The label(s) kind/enhancement cannot be applied, because the repository doesn't have them.

In response to this:

/kind enhancement

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar Mar 16 '25 11:03 k8s-ci-robot

Can you share the logs from the octavia-ingress-controller, the Kubernetes version, and your Octavia setup details?

AmarNathChary avatar Mar 16 '25 12:03 AmarNathChary

One way to work around this is to have multiple hosts in ONE Ingress, but this is hardly a solution, since it means all your resources need to be in the same namespace for it to work.

I just ran into this issue; it feels kind of pointless trying to deploy Argo CD and Kargo, since they each appear to need their own namespace. See the sketch below for why that bites.
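To make the limitation concrete (the Service name and port are what I believe the Argo CD chart uses by default, so treat them as assumptions): an Ingress backend has no namespace field and always resolves in the Ingress's own namespace, so a single Ingress in the argocd namespace can't also point at Kargo's API Service in the kargo namespace. That means a second Ingress, and with the current behavior a second Octavia load balancer.

```yaml
# Sketch only: the backend reference below has no namespace field, so it
# resolves inside this Ingress's namespace ("argocd"). A Service in the
# "kargo" namespace can't be referenced from here, so Kargo needs its own
# Ingress and ends up with its own Octavia load balancer.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd
  namespace: argocd
  annotations:
    kubernetes.io/ingress.class: "openstack"
spec:
  rules:
    - host: argocd.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: argocd-server   # default Argo CD Service name (assumption)
                port:
                  number: 80
```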

blbergo avatar Mar 27 '25 04:03 blbergo

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jun 25 '25 07:06 k8s-triage-robot

/remove-lifecycle stale

kayrus avatar Jun 25 '25 07:06 kayrus

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Sep 23 '25 07:09 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Oct 23 '25 08:10 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Nov 22 '25 09:11 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar Nov 22 '25 09:11 k8s-ci-robot