ents-hqx

Results 9 comments of ents-hqx

Only one volume server. So is the filer.sync default of 32 too high for one volume server?

Did a new test. I set concurrency to 8 and synced from another filer (four s3 buckets) and I still get a lot of orphans. Added 3 new volume servers (same server, different...

weed -logdir=/var/log/seaweedfs/ filer -defaultStoreDir=/seaweedfs/filer -ui.deleteDir=false
weed -logdir=/var/log/seaweedfs/ filer.sync -a filer1:8888 -b filer2:8888 -concurrency=8 -isActivePassive

filer1 and filer2 are connected to different masters.

Filer1 and Filer2 machines have identical hardware and ZFS config.

Filer1:
/usr/local/bin/weed -logdir=/var/log/seaweedfs/ master -disableHttp -volumeSizeLimitMB=10240 -volumePreallocate=false -metrics.address=127.0.0.1:9091 -mdir=/seaweedfs/mdir
/usr/local/bin/weed -logdir=/var/log/seaweedfs/ volume -dataCenter=dc -dir=/seaweedfs/volume -dir.idx=/seaweedfs/index -max=0
/usr/local/bin/weed -logdir=/var/log/seaweedfs/ filer -defaultStoreDir=/seaweedfs/filer...

Don't know if it covers most cases, but I was using rclone to move local-fs files to the seaweedfs filer1 s3 bucket. Rclone's default upload cutoff is 200Mi and if I remember correctly then...
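For context on the cutoff mentioned above, rclone switches to multipart upload once a file exceeds `--s3-upload-cutoff` (default 200Mi), splitting it into `--s3-chunk-size` parts (default 5Mi). A hedged sketch of pinning these explicitly; the remote name `filer1-s3:` and the paths are placeholders, not from the original comments:

```shell
# rclone uses multipart upload for files larger than --s3-upload-cutoff
# (default 200Mi), with parts of --s3-chunk-size (default 5Mi).
# "filer1-s3:" is a placeholder remote pointing at filer1's S3 endpoint.
rclone copy /local/files filer1-s3:bucket \
  --s3-upload-cutoff 200Mi \
  --s3-chunk-size 5Mi
```

Lowering the cutoff forces multipart behavior on smaller files, which can help reproduce multipart-related orphan issues deliberately.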

Is there anything I can do to help solve this problem? Right now, because of the large number of orphans, disk usage is a lot higher on filer2 than on filer1.
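One way to inspect the orphans driving that extra disk usage is SeaweedFS's `weed shell` and its `volume.fsck` command, which compares volume chunks against filer metadata. A sketch only; it needs a running cluster, and flag names can vary by version, so check `help volume.fsck` first:

```shell
# Hedged sketch: report orphan chunks on the cluster filer2 belongs to.
# Run against filer2's master; verify flags with `help volume.fsck`.
weed shell <<'EOF'
lock
volume.fsck -v
# After reviewing the report, orphans can be purged (destructive,
# flag name may differ by version):
# volume.fsck -reallyDeleteFromVolume
unlock
EOF
```

Running the read-only report first and diffing its totals between filer1 and filer2 would confirm whether orphans account for the disk-usage gap.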

I can reproduce it like this. On the machine where you have rclone, cd /path/to/weed/files and run dd if=/dev/urandom bs=1024 count=10000000 | split -a 4 -b 600M - file. Then use rclone to...
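The reproduction above can be sketched at a much smaller scale so it runs in seconds (1 MiB of random data split into ~100K pieces instead of ~10 GB into 600M chunks); the rclone remote name in the final comment is a placeholder:

```shell
# Smaller-scale sketch of the dd|split reproduction above.
workdir=$(mktemp -d)
cd "$workdir"
# 1 MiB of random data, split into 100K pieces with 4-char suffixes.
dd if=/dev/urandom bs=1024 count=1024 2>/dev/null | split -a 4 -b 100K - file
ls file* | wc -l    # 11 pieces: ten full 100K chunks plus the remainder
# Then upload with rclone, e.g. (remote name is a placeholder):
#   rclone copy "$workdir" remote:bucket
```

Scaling the chunk size back up past rclone's 200Mi upload cutoff (as in the original 600M split) is what triggers the multipart path being discussed.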

You can temporarily solve it with Lua :) Last year I was in the same place, and in the end I used lua-cjson.