Questions about replacing bricks in Replicate/Distributed Replicate volumes
Hi, I am running the GlusterFS service across two servers.
root@server1:~# gluster --version
glusterfs 3.7.6 built on Dec 25 2015 20:50:46
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
I am reading https://docs.gluster.org/en/latest/Administrator-Guide/Managing-Volumes/#replace-brick and have the following questions:
- The docs say: "Using the gluster volume fuse mount (In this example: /mnt/r2) set up metadata so that data will be synced to new brick".
Does this mean running the command "root@server1:~# mount -t glusterfs server1:/r2 /mnt/r2"?
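That is, something along these lines (the second command is just my own check that the FUSE mount is actually there, not from the docs):

root@server1:~# mount -t glusterfs server1:/r2 /mnt/r2
root@server1:~# mount | grep /mnt/r2    # should list /mnt/r2 as type fuse.glusterfs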
- The docs say: "(In this case it is from Server1:/home/gfs/r2_1 to Server1:/home/gfs/r2_5)".
Should it be Server2:/home/gfs/r2_1 instead of Server1:/home/gfs/r2_1, since Server2:/home/gfs/r2_1 is the mirror of the faulty brick Server1:/home/gfs/r2_0?
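(The way I check the pairing in my own volume is with "gluster volume info", on the assumption that with replica 2 the bricks form mirror pairs in the order they are listed, roughly:

root@server1:~# gluster volume info r2
...
Brick1: Server1:/home/gfs/r2_0
Brick2: Server2:/home/gfs/r2_1
...

so Brick1 and Brick2 would be one replica pair. Please correct me if that reading is wrong.)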
- Is the "set up metadata" step (i.e. the setfattr/getfattr commands) necessary? I made a test where I skipped this step, and in the end it still worked.
- After executing the replace-brick command there is no data in the new brick Server1:/home/gfs/r2_5, but after executing "gluster volume heal r2 full" the data does appear. So, is "gluster volume heal r2 full" necessary? docs.gluster.org does not mention it, or am I missing something?
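For reference, the sequence in my test was roughly (brick paths as in the doc's example):

root@server1:~# gluster volume replace-brick r2 Server1:/home/gfs/r2_0 Server1:/home/gfs/r2_5 commit force
root@server1:~# gluster volume heal r2 full
root@server1:~# gluster volume heal r2 info    # just to watch the heal progress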
Thank you for your reply.