
How can one get a list of the subvolumes where a file is lying?

olegkrutov opened this issue 3 years ago • 3 comments

According to the warning shown after the "gluster volume remove-brick start" command, affected files can be damaged during migration if there is active I/O on them. So I'd like to stop all VMs whose virtual disks are on the bricks that are to be removed. But I can't find a way to get the subvolumes where the files or their fragments are located. Is there a way to get such a list? Is gfid2path something like what I need?

olegkrutov · Jul 05 '22 12:07

You can run this command on any file of the volume:

# getfattr -n glusterfs.pathinfo <file>

It will return (in a slightly scrambled way) the bricks and path where the file is stored.
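
For reference, the output looks roughly like this (illustrative only, for a hypothetical 2-way replicated volume named myvol mounted at /mnt/myvol; the exact layout depends on the volume type):

# getfattr -n glusterfs.pathinfo /mnt/myvol/vm1.img
# file: mnt/myvol/vm1.img
glusterfs.pathinfo="(<DISTRIBUTE:myvol-dht> (<REPLICATE:myvol-replicate-0> <POSIX(/bricks/b0):server1:/bricks/b0/vm1.img> <POSIX(/bricks/b0):server2:/bricks/b0/vm1.img>))"

The <REPLICATE:myvol-replicate-N> token names the subvolume; the <POSIX(...)> entries are the individual bricks that hold the file.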

However, this has to be done file by file. If you just want to know all the files stored on a particular brick, you can search directly inside the brick:

# find <path/to/brick> -type f -ls

(Ignore any files inside .glusterfs and files whose permissions are ---------T; those are internal Gluster artifacts: metadata and DHT link files.)
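
A minimal sketch of a find invocation that applies those two exclusions directly (assuming GNU find, and that the link files carry mode 1000, i.e. only the sticky bit set):

# find <path/to/brick> -path '*/.glusterfs' -prune -o -type f ! -perm 1000 -ls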

Do you have sharding enabled?

xhernandez · Jul 08 '22 07:07

Thank you for your reply. Yes, I see that things are simpler than I thought. And yes, I'd like to properly handle volumes where sharded files are present.

olegkrutov · Jul 08 '22 12:07

If you have sharding enabled, then pretty much all files of a significant size (e.g. all VM disks) will have many shards on each subvolume.
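
For context, with sharding the first block stays at the file's original path on its subvolume, while the remaining blocks live under the hidden .shard directory on the bricks, named <gfid>.1, <gfid>.2, and so on. The GFID and brick path below are made up for illustration:

# ls /bricks/b0/.shard
291eb47d-8e58-4c05-92b7-1a8a5ca47e94.1
291eb47d-8e58-4c05-92b7-1a8a5ca47e94.2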

Why do you need to stop the VMs before removing a brick? With Gluster's remove-brick command, files or shards present on the subvolume being removed are automatically migrated to the other subvolumes even while the VMs are online, and the bricks are only detached once it is safe.
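
For reference, the usual decommissioning flow looks like this (volume and brick names are placeholders):

# gluster volume remove-brick <volname> <server>:<path/to/brick> start
# gluster volume remove-brick <volname> <server>:<path/to/brick> status
# gluster volume remove-brick <volname> <server>:<path/to/brick> commit

The commit should only be issued once status reports that the migration has completed.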

xhernandez · Aug 02 '22 16:08

Thank you for your contributions. We noticed that this issue has had no activity in the last ~6 months, so we are marking it as stale. It will be closed in 2 weeks if no one responds with a comment here.

stale[bot] · Mar 19 '23 22:03

Closing this issue as there has been no update since the last one. If this issue is still valid, feel free to reopen it.

stale[bot] · May 21 '23 18:05