Role-based nodes
Currently a host can have many roles (run the framework, Arakoon, ASDs, volumedriver), but assigning these roles isn't very flexible. With this feature request we want to make role assignment more flexible.
This feature should tackle a few items:
- Don't automatically deploy the GUI on all master nodes. We could, for example, run the GUI in a container which can be moved or deployed across the cluster.
- Use a keepalive IP for the GUI: https://github.com/openvstorage/framework/issues/1116
- Allow multiple storagedrivers for the same vPool to be deployed on a single host: https://github.com/openvstorage/framework/issues/1290
- Allow specifying which nodes should be used for the DTL: https://github.com/openvstorage/framework/issues/993
- Allow specifying which nodes should be used for Arakoon/metadata: https://github.com/openvstorage/framework/issues/1712
  - Allow setting a preferred master
  - Allow setting a standby Arakoon host: https://github.com/openvstorage/framework/issues/1596
  - Allow spinning up a new NSM cluster; don't allow removing Arakoon clusters (e.g. NSM cluster removal as requested in https://github.com/openvstorage/alba/issues/713 is out of scope)
- Allow specifying which nodes should be used as iSCSI servers
- Allow specifying which nodes should be used for RabbitMQ
- Proxy deployment: https://github.com/openvstorage/home/issues/38
- Placement of maintenance processes: https://github.com/openvstorage/framework-alba-plugin/issues/200
- (Optionally) the monitoring stack
- Deploy the ASD packages on storage-only nodes without having to copy the cacc: https://github.com/openvstorage/framework/issues/631
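All of the items above reduce to one underlying model: a registry that maps nodes to sets of roles, which deployment logic can then query. As a minimal sketch of that idea (all identifiers here, such as the role names `"gui"`, `"dtl"`, `"rabbitmq"`, and the node IDs, are hypothetical illustrations, not the framework's actual model):

```python
def nodes_with_role(registry, role):
    """Return the node IDs carrying the given role, sorted for determinism."""
    return sorted(node for node, roles in registry.items() if role in roles)


# Hypothetical cluster: role names and node IDs are illustrative only.
registry = {
    "node-1": {"master", "gui", "rabbitmq"},
    "node-2": {"master", "dtl"},
    "node-3": {"dtl", "iscsi", "maintenance"},
}

print(nodes_with_role(registry, "dtl"))  # ['node-2', 'node-3']
```

Deployment code would then place a service only on `nodes_with_role(registry, ...)` instead of hard-coding "all master nodes", which is exactly what the GUI item above asks for.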
@saelbrec, feel free to add your input.
> Allow to specify which nodes should be used for arakoon/metadata #1712
What is the granularity of these roles? Is there one role for all Arakoon clusters, or is it subdivided by type of Arakoon cluster (e.g. ovsdb/voldrv/abm/nsm)? Or, for the Alba-specific ones, can it depend on which backend the ABM/NSM belongs to?
Placement of maintenance processes is one to add to your list too.
I would allow this to be as specific as possible, even up to the NSM backend level. It should give you full control over where each Arakoon cluster lives, so the external Arakoon cluster 'hack' is no longer needed.
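One way to get that granularity without an explosion of role names is hierarchical roles matched by prefix: `arakoon` covers every Arakoon cluster, `arakoon:nsm` covers all NSM clusters, and `arakoon:nsm:backend1` pins a single backend's NSMs. A sketch of such matching, assuming a colon-separated naming scheme that is purely illustrative:

```python
def matches(assigned_role, query):
    """True if an assigned role satisfies a query at equal or coarser granularity.

    A query of "arakoon" matches any Arakoon role; "arakoon:nsm" matches
    any NSM cluster; "arakoon:nsm:backend1" matches only that backend.
    """
    return assigned_role == query or assigned_role.startswith(query + ":")


print(matches("arakoon:nsm:backend1", "arakoon"))          # True
print(matches("arakoon:nsm:backend1", "arakoon:nsm"))      # True
print(matches("arakoon:abm:backend1", "arakoon:nsm"))      # False
```

With this shape, operators who don't care can keep assigning the coarse `arakoon` role, while the NSM-per-backend placement discussed above stays expressible.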
I might have misclicked.