Dangling repetitive SSH mounts?
I'm not sure this is strictly a btrfs-backup issue, but my remote server just crashed with an OOM, and as best we can tell the culprit was a ton of sshfs mounts for our offsite backups under temporary mount paths in /tmp:
fusermount -o rw,nosuid,nodev,fsname=backups:/,auto_unmount,subtype=sshfs -- /tmp/tmp83c_e6yb/mnt
sshfs -o auto_unmount -o reconnect -o cache=no backups:/ /tmp/tmp83c_e6yb/mnt
I'm taking advantage of this downtime to do OS updates, but I'll be happy to provide more details once I have backups up and running again (and hopefully not eating memory).
Hmm, the auto_unmount flag should normally unmount the filesystem when the parent process exits... It would be nice if you could verify that this isn't happening in your case.
I can definitely see a growing collection of /tmp/(mktemp)/mnt directories that are still active sshfs mounts on the system.
Does this only happen after failed transfers or always?
Seems to be every time. I can clean them up, run my script manually, and end up with new ones once that run has completed.
Then the auto_unmount option doesn't seem to work reliably... For me it does, strange.
I'll remove it and run fusermount -u explicitly as part of endpoint destruction instead, but I can't tell yet when I'll have time to implement that.
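Something along these lines, as a minimal sketch only (the function names and option set are illustrative assumptions, not the actual btrfs-backup endpoint code): mount into a fresh temporary directory, and always unmount and remove it explicitly during teardown rather than relying on auto_unmount.

    import os
    import subprocess
    import tempfile

    def mount_sshfs(remote):
        """Mount `remote` (e.g. 'backups:/') under a fresh temp dir and return the mountpoint."""
        tmpdir = tempfile.mkdtemp()
        mountpoint = os.path.join(tmpdir, "mnt")
        os.mkdir(mountpoint)
        subprocess.run(["sshfs", "-o", "reconnect", "-o", "cache=no", remote, mountpoint],
                       check=True)
        return mountpoint

    def unmount_sshfs(mountpoint):
        """Unmount explicitly instead of relying on auto_unmount."""
        try:
            subprocess.run(["fusermount", "-u", mountpoint], check=True)
        except subprocess.CalledProcessError:
            # If the connection already died, a lazy unmount usually still detaches it.
            subprocess.run(["fusermount", "-u", "-z", mountpoint], check=False)
        # Clean up the mountpoint and its temporary parent directory.
        os.rmdir(mountpoint)
        os.rmdir(os.path.dirname(mountpoint))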
Yeah, I have been running find . -maxdepth 2 -name "mnt" -exec fusermount -u -z {} \; from /tmp to lazily unmount them. This doesn't clean up the /tmp/(mktemp) spam, but that eventually gets cleaned up on its own.
It's oddly slow to iterate over, considering -maxdepth should prevent find from descending into the mounted remote systems' files...
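For what it's worth, here is a rough sketch of the same cleanup that also removes the leftover temp directories; the tmp* prefix is just an assumption based on the /tmp/tmp83c_e6yb/mnt paths above, and it only handles mounts the current user is allowed to unmount.

    import os
    import subprocess

    TMP = "/tmp"

    for entry in os.listdir(TMP):
        # Only look at mkdtemp-style directories like /tmp/tmp83c_e6yb.
        if not entry.startswith("tmp"):
            continue
        parent = os.path.join(TMP, entry)
        try:
            if not os.path.isdir(parent) or "mnt" not in os.listdir(parent):
                continue
        except OSError:
            continue  # unreadable directory owned by someone else
        mnt = os.path.join(parent, "mnt")
        # Lazy unmount, same as fusermount -u -z; harmless if nothing is mounted there.
        subprocess.run(["fusermount", "-u", "-z", mnt], check=False)
        try:
            os.rmdir(mnt)     # fails while the mount is still busy
            os.rmdir(parent)  # remove the now-empty temp directory
        except OSError:
            pass              # leave it for the next pass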
For comparison's sake, this problem was present on Ubuntu 16.04 and on 18.04 with these versions:
SSHFS version 2.8
FUSE library version: 2.9.7
fusermount version: 2.9.7
using FUSE kernel interface version 7.19