
Documentation doubts about brick replacement

Open · vice opened this issue 3 years ago • 2 comments

Description of problem: Documentation doubts about brick replacement: gluster volume replace-brick versus gluster volume reset-brick in two steps. Both methods seem valid, and I would like to know whether there is any difference between them. The difference could be low-level, such as brick ID preservation or some other detail. gluster volume reset-brick <VOLNAME> <SOURCE-BRICK> start appears to stop the brick, so it could perhaps replace the step in the volume administration documentation that says to stop the process by looking up its PID (https://docs.gluster.org/en/latest/Administrator-Guide/Managing-Volumes/#replace-brick). It seems to be a simpler, more direct, and foolproof way to stop the correct glusterfsd process.
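For reference, a minimal sketch of the two ways to bring a brick down, assuming node1:/srv/bricks/test1 (from the volume info below) is the brick being replaced; the kill-by-PID step is the one described in the Administrator Guide linked above, and <brick-pid> is a placeholder:

# Documented approach: look up the brick PID, then stop that glusterfsd process
gluster volume status test-volume
kill <brick-pid>          # PID taken from the status output, on the brick's node

# Alternative discussed here: let glusterd stop the brick itself
gluster volume reset-brick test-volume node1:/srv/bricks/test1 start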

volume reset-brick is only mentioned in the manual page of the gluster command. Is it going to be deprecated, or is it just too new?

The exact command to reproduce the issue: gluster volume replace-brick versus gluster volume reset-brick

The full output of the command that failed: No command failed.

Expected results: Brick replacement.

Mandatory info: - The output of the gluster volume info command:

Volume Name: test-volume
Type: Distributed-Replicate
Volume ID: 602022cb-ccc0-4692-a275-68c3d6e5eb18
Status: Started
Snapshot Count: 0
Number of Bricks: 4 x (2 + 1) = 12
Transport-type: tcp
Bricks:
Brick1: node1:/srv/bricks/test1
Brick2: node2:/srv/bricks/test2
Brick3: node3:/srv/bricks/testarbiter (arbiter)
Brick4: node2:/srv/bricks/test1
Brick5: node3:/srv/bricks/test2
Brick6: node4:/srv/bricks/testarbiter (arbiter)
Brick7: node3:/srv/bricks/test1
Brick8: node4:/srv/bricks/test2
Brick9: node1:/srv/bricks/testarbiter (arbiter)
Brick10: node4:/srv/bricks/test1
Brick11: node1:/srv/bricks/test2
Brick12: node2:/srv/bricks/testarbiter (arbiter)
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
storage.fips-mode-rchecksum: on
cluster.granular-entry-heal: on

- The output of the gluster volume status command:

Status of volume: test-volume
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node1:/srv/bricks/test1            49152     0          Y       2628 
Brick node2:/srv/bricks/test2            49152     0          Y       3189 
Brick node3:/srv/bricks/testarbiter      49152     0          Y       2565 
Brick node2:/srv/bricks/test1            49153     0          Y       3198 
Brick node3:/srv/bricks/test2            49153     0          Y       2574 
Brick node4:/srv/bricks/testarbiter      49152     0          Y       2837 
Brick node3:/srv/bricks/test1            49154     0          Y       2583 
Brick node4:/srv/bricks/test2            49153     0          Y       2846 
Brick node1:/srv/bricks/testarbiter      49155     0          Y       1572841
Brick node4:/srv/bricks/test1            49154     0          Y       2853 
Brick node1:/srv/bricks/test2            49156     0          Y       2533737
Brick node2:/srv/bricks/testarbiter      49155     0          Y       2179955
Self-heal Daemon on localhost               N/A       N/A        Y       1572852
Self-heal Daemon on node2.private         N/A       N/A        Y       2304394
Self-heal Daemon on node3.private         N/A       N/A        Y       4066296
Self-heal Daemon on node4.private         N/A       N/A        Y       2878 
 
Task Status of Volume test-volume
------------------------------------------------------------------------------
Task                 : Rebalance           
ID                   : 425ba8d3-8393-4996-8260-eb9174e4c695
Status               : completed      

- The output of the gluster volume heal command:

Launching heal operation to perform index self heal on volume test-volume has been successful 
Use heal info commands to check status.

Additional info: From the gluster command manual entry:

volume reset-brick <VOLNAME> <SOURCE-BRICK>  {{start}  |  {<NEW-BRICK> commit}}
      Brings  down or replaces the specified source brick with the new brick.
volume replace-brick <VOLNAME> <SOURCE-BRICK> <NEW-BRICK> commit force
      Replace the specified source brick with a new brick.
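
To make the two forms concrete, here is a minimal sketch against test-volume, assuming node1:/srv/bricks/test1 is the brick in question and node5:/srv/bricks/test1 is a hypothetical replacement target; the trailing force on the reset-brick commit follows the downstream admin guides and may not be required on every release:

# Two-step reset-brick: bring the brick down, then commit it back (same path here)
gluster volume reset-brick test-volume node1:/srv/bricks/test1 start
gluster volume reset-brick test-volume node1:/srv/bricks/test1 node1:/srv/bricks/test1 commit force

# One-step replace-brick: point the volume at a different brick
gluster volume replace-brick test-volume node1:/srv/bricks/test1 node5:/srv/bricks/test1 commit force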

Is volume replace-brick "Only for distributed-replicate or pure replicate volumes" as stated in https://docs.gluster.org/en/latest/Administrator-Guide/Managing-Volumes/#replace-brick ?

- The operating system / glusterfs version: 9.2

vice · Feb 03 '23 12:02

Maybe replace-brick is for when both bricks are alive in replicated or distributed-replicated volumes, and it starts data healing (replication). And reset-brick would be for any other case, and it does not start any healing, since it expects the same brick to come back on another host or path.
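One way to test that hypothesis after either operation would be to look at the pending heal entries; a sketch, assuming test-volume as above:

# Pending self-heal entries per brick: after replace-brick these should list files
# queued for healing onto the new brick, while a reset-brick back to the same
# populated path should show little or nothing
gluster volume heal test-volume info
gluster volume heal test-volume info summary    # condensed counts on newer releases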

vice · Feb 03 '23 12:02

Thank you for your contributions. We noticed that this issue has not had any activity in the last ~6 months, so we are marking it as stale. It will be closed in 2 weeks if no one responds with a comment here.

stale[bot] · Sep 17 '23 06:09