Evacuate storage router per vpool
As part of Fargo we added the option to move volumes away from a node/storagedriver to another node.
If you want to update or remove a node with a storagedriver, it can be quite a task to move all volumes. Hence, let's introduce a maintenance mode:
- API
- vdisksMoveAway()
- on a storagedriver, moves its vDisks away to other storage drivers
- parameters
- dict of storage drivers to move the volumes to (default: all storagedrivers serving the vpool)
- how to move the volumes (currently one option: roundrobin)
- returns taskid
- vdisksMoveBack()
- on a storagedriver, moves the vDisks back to the storage driver
- parameters
- Dict of vDisks
- returns taskid
- setMaintenance()
- Moves all vDisks away, moves all DTL targets away, moves all MDS instances away. Sets a flag so no new vDisks can be created on the storagedriver. Once everything is moved, the status of the storage driver is set to "maintenance" (while everything is being moved away, the status is "going into maintenance").
- parameters
- state: boolean
- checkState()
- to check if a storagedriver is in maintenance
- GUI
- on the storage router detail page, add a Maintenance action (call the setMaintenance API for each vpool exposed on the storage router). Icon: http://fontawesome.io/icon/cogs/ .
- When in maintenance mode, the storage router status should be shown in orange and clearly labelled on the detail page.
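The roundrobin move strategy from the API section above could be sketched as follows. This is a minimal illustration, not the actual framework code: `plan_round_robin`, and the idea of returning a target-to-vDisks mapping, are assumptions for the sake of the example.

```python
from itertools import cycle

def plan_round_robin(vdisk_ids, target_storagedrivers):
    """Assign each vDisk to a target storagedriver in round-robin order.

    vdisk_ids: list of vDisk identifiers to move away.
    target_storagedrivers: list of storagedriver identifiers to move to
        (per the API above, this would default to all storagedrivers
        serving the vpool).
    Returns a dict mapping storagedriver id -> list of vDisk ids.
    """
    if not target_storagedrivers:
        raise ValueError('at least one target storagedriver is required')
    plan = {sd: [] for sd in target_storagedrivers}
    targets = cycle(target_storagedrivers)
    for vdisk_id in vdisk_ids:
        # cycle() walks the targets in order and wraps around,
        # which gives an even spread across the storagedrivers.
        plan[next(targets)].append(vdisk_id)
    return plan
```

For example, `plan_round_robin(['vd1', 'vd2', 'vd3'], ['sd_a', 'sd_b'])` yields `{'sd_a': ['vd1', 'vd3'], 'sd_b': ['vd2']}`. The real task would then schedule one move per (vDisk, target) pair and return a taskid.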
Question:
- maybe add domain as parameter to limit the selected storage drivers to a certain domain?
- Should we introduce something on the voldrv?
If the volumedriver is to block the creation of new vDisks, I think the flag should be supported by it as well.
Volumedriver indeed needs to know the maintenance state as well, as actions coming in via the Edge client (creation of an empty disk, qemu image import, HA, ...) are not controlled by the FWK and need to be blocked too.
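A guard along these lines would be needed on every create path (FWK and Edge alike). This is only a sketch; the status names and the exception type are assumptions, not existing volumedriver APIs.

```python
class StoragedriverInMaintenance(Exception):
    """Raised when vDisk creation is attempted on a storagedriver
    that is in (or going into) maintenance."""

# Hypothetical status values, following the states described above.
BLOCKED_STATES = ('going into maintenance', 'maintenance')

def guard_vdisk_create(storagedriver_status):
    """Reject vDisk creation while the storagedriver is not available.

    Would be called before any create, whether it comes from the FWK
    or from an Edge client (empty disk, qemu import, HA, ...).
    """
    if storagedriver_status in BLOCKED_STATES:
        raise StoragedriverInMaintenance(
            'vDisk creation blocked: storagedriver status is %r'
            % storagedriver_status)
```

The point of the guard living in the volumedriver (rather than only in the FWK) is exactly the comment above: Edge-initiated creates never pass through the framework.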
@khenderick: when preparing maintenance mode, the volume_router/vrouter_*_thresholds should be set to 0 to prevent automigration back to that node.
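Putting the pieces together, the setMaintenance() flow described above (flag, transient status, thresholds to 0, moves, final status) could look roughly like this. All names here are illustrative stand-ins, not the real FWK tasks.

```python
def set_maintenance(storagedriver, move_vdisks, move_dtl, move_mds,
                    set_thresholds):
    """Sketch of the setMaintenance() flow for one storagedriver.

    storagedriver: a plain dict standing in for the real model object.
    The four callables are hypothetical stand-ins for the real tasks.
    """
    # 1. Flag the storagedriver so no new vDisks can be created on it.
    storagedriver['accepts_new_vdisks'] = False
    # 2. Transient status while everything is being moved away.
    storagedriver['status'] = 'going into maintenance'
    # 3. Per @khenderick's remark: set the vrouter_*_thresholds to 0
    #    so volumes don't automigrate back to this node.
    set_thresholds(storagedriver, 0)
    # 4. Move all vDisks, DTL targets and MDS instances away.
    move_vdisks(storagedriver)
    move_dtl(storagedriver)
    move_mds(storagedriver)
    # 5. Everything moved: final status.
    storagedriver['status'] = 'maintenance'
    return storagedriver
```

checkState() would then simply report whether the status equals "maintenance" (or "going into maintenance").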
@saelbrec Please advise if this is OK for you.
vDisksMoveBack() needs some more thinking; at present we don't keep track of the previous storagedriver owner.
@redlicha, what about DTL and MDS? Is there a way on the volumedriver side to prevent these from being used, or is that completely up to the FWK?
MDS / DTL (unless DTLConfigMode.Automatic is configured globally) configuration is completely up to the framework.
@saelbrec should this ticket still be in state_question?
vDisksMoveBack() is not needed. For mockups & details check https://docs.google.com/document/d/1A5atdDML4W5Oao7G7L0Jhh7PfRx6i911LWGIqiuJHEA/edit .