[occm] Support `loadbalancer.openstack.org/flavor-name` instead of only `loadbalancer.openstack.org/flavor-id`
Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature
What happened:
Reading the documentation for cloud-provider-openstack, I found that it is possible to specify a load balancer flavor by ID. It would be handy and simpler if one could also be selected by flavor name (when no flavor-id is specified).
What you expected to happen:
That a load balancer flavor is selected by name when the `loadbalancer.openstack.org/flavor-name` annotation is used.
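For illustration, the proposed annotation might look like this on a Service (the flavor name `bandwidth-large` and all other values are placeholders, not anything defined by this issue):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    # Proposed annotation: select the Octavia flavor by name instead of
    # loadbalancer.openstack.org/flavor-id ("bandwidth-large" is a placeholder).
    loadbalancer.openstack.org/flavor-name: "bandwidth-large"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
```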
I think Octavia allows multiple flavors to have the same name. We could probably just fail when there is more than one flavor with that name, but that increases the complexity of debugging a bit.
Anyway I'm not totally opposed to this.
Hi @dulek
It is indeed possible to have flavors with the same name, although an OpenStack admin creates those, so I would guess the chance of duplicates is not that high. Failing the creation of the load balancer would be a good idea then, or falling back to the default by not setting any flavor in the request (since the name could not be resolved to a unique match).
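The lookup discussed here could be sketched roughly as follows. The type and function names are invented for illustration and are not the actual occm code; the idea is simply to resolve the annotation's name against the flavors returned by Octavia's list-flavors call, and to fail rather than guess when the name matches zero or several flavors:

```go
package main

import (
	"errors"
	"fmt"
)

// Flavor mirrors the minimal fields of an Octavia load balancer flavor
// needed for the lookup (illustrative, not the real occm types).
type Flavor struct {
	ID   string
	Name string
}

// resolveFlavorID resolves a flavor name to its ID. It returns an error
// when no flavor has that name, or when the name is ambiguous, matching
// the fail-on-duplicates behavior discussed above.
func resolveFlavorID(flavors []Flavor, name string) (string, error) {
	var matches []Flavor
	for _, f := range flavors {
		if f.Name == name {
			matches = append(matches, f)
		}
	}
	switch len(matches) {
	case 0:
		return "", fmt.Errorf("no load balancer flavor named %q", name)
	case 1:
		return matches[0].ID, nil
	default:
		return "", errors.New("flavor name " + name +
			" is ambiguous; use loadbalancer.openstack.org/flavor-id instead")
	}
}

func main() {
	flavors := []Flavor{
		{ID: "f1", Name: "small"},
		{ID: "f2", Name: "large"},
		{ID: "f3", Name: "large"},
	}
	id, err := resolveFlavorID(flavors, "small")
	fmt.Println(id, err) // f1 <nil>
	_, err = resolveFlavorID(flavors, "large")
	fmt.Println(err != nil) // true: duplicate names fail
}
```

In a real implementation the `flavors` slice would come from the list-flavors endpoint mentioned later in this thread.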
At least in our OpenStack, these flavors are not visible to normal users. How do we solve this issue?
Do you currently provide them with an ID? Then a name would make it easier.
Yes, we provide them with an ID. I do not see how a name could make it easier from the API perspective: https://docs.openstack.org/api-ref/load-balancer/v2/#create-a-load-balancer takes `flavor_id`, NOT `flavor_name`. So the name would somehow have to be converted to an ID first, and at least we do not have an API which provides this information.
Is it possible in your environment to list the load balancer flavors (through this endpoint: https://docs.openstack.org/api-ref/load-balancer/v2/#list-flavors)? In our environment this is supported, so that would be my best guess for making it work. Using names would make it easier and more understandable for the people defining the load balancer.
Right, I did not know about that endpoint. It works for me as well, so it should be open to normal OpenStack users too.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.