Support multi-threaded downloads when downloading a file to the cache
What is your current rclone version (output from rclone version)?
rclone v1.54.0-beta.4889.45e8bea8d
- os/arch: linux/amd64
- go version: go1.15.3
What problem are you trying to solve?
With rclone copy and --multi-thread-streams set to 4, I get speeds of up to 40 MB/s, whereas the same transfer via rclone mount gives me only 10 MB/s (equal to what rclone copy achieves with the number of streams set to 1).
How do you think rclone should be changed to solve that?
Add support for multi-threaded downloads when downloading a file to the vfs cache. Per @ncw and this thread (https://forum.rclone.org/t/multi-threaded-downloads-comments-and-testers-needed/9721/18), this was present in the previous iteration of the full-mode cache but was removed to add support for partial downloads/streams.
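For reference, the comparison I'm describing can be reproduced with something along these lines (remote name and file path are just placeholders):

rclone copy remote:path/bigfile /tmp/ --multi-thread-streams 4 -P
rclone copy remote:path/bigfile /tmp/ --multi-thread-streams 1 -P
rclone mount remote: /mnt/remote --vfs-cache-mode full &
cp /mnt/remote/path/bigfile /tmp/

The first copy uses multi-threaded download, the second is single-threaded, and the cp through the mount currently behaves like the single-threaded case even though the data ends up in the vfs cache.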
How to use GitHub
- Please use the 👍 reaction to show that you are affected by the same issue.
- Please don't comment if you have no relevant information to add. It's just extra noise for everyone subscribed to this issue.
- Subscribe to receive notifications on status change and new comments.
I too am very much interested in this feature. Any chance we can have this implemented?
Hear, hear! This would be a very welcome addition to rclone.
I concur as well. Please add this asap.
another vote :D
+1
Is there any current solution for this?
This feature is under consideration at the moment - watch this space!
Is this still planned for 1.56?
Marked as important after 18 total thumbs up
I ran across this issue as well and noted that the rclone documentation mentions "Multi thread downloads will be used with rclone mount and rclone serve if --vfs-cache-mode is set to writes or above" here.
Is this issue still current, or is the documentation outdated? It appears (according to the comments in this issue) that multi-threaded downloads used to work before. If the documentation is outdated, it might help other users to note in the documentation that multi-thread downloads do not work for mount/serve operations.
I see the documentation now. It doesn't appear to be correct from my testing.
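For anyone who wants to check this on their own setup, one way is to run the mount with debug logging, copy a large file off the mount, and then look for any multi-thread activity in the log (the exact wording of any such log lines is a guess on my part, so grep loosely):

rclone mount remote: /mnt/test --vfs-cache-mode writes -vv --log-file /tmp/mount.log &
cp /mnt/test/path/bigfile /tmp/
grep -i "multi.thread" /tmp/mount.log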
I could really use this if it's implemented.
Edit: I have been thinking about this for a few hours now. Since rclone can already read a file from the remote in chunks, why not allow each chunk itself to be downloaded using multiple threads?
This would greatly improve performance when streaming directly from a remote and would solve the current problem (at the expense of extra API requests). If one doesn't want to increase the API requests, they can simply fine-tune the --vfs-read-chunk-size flag to better suit their needs.
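As a concrete example of the tuning mentioned above (values are only illustrative), the chunked reading behaviour is controlled with flags like:

rclone mount remote: /mnt/remote --vfs-cache-mode full --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 2G

Larger chunks mean fewer API requests for a sequential read; what this issue asks for is that the download into the cache could additionally use multiple parallel streams, as rclone copy already does with --multi-thread-streams.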
One feature I would love to see is the ability to download different parts of the file based on the requests coming into the VFS, e.g. if a file is being read from two different places, the read-ahead would run for both locations in the file. Currently it sort of feels like it cancels one read-ahead to go to the other location, reads a bit there, and then flips back. That said, I could be wrong, as I am still investigating my issue.
up
I would like to use the VFS cache instead of the old cache backend, but multi-threaded download is very important to me.
Hi, after seeing many posts on the forum, and this thread, I see you were interested in the use cases for implementing this.
After a lot of testing and trying to compare someone else's mount with mine, I see a massive issue with peering. In my particular example, I am using Dropbox and connecting to my files from Europe. After asking Dropbox where their servers are located, it turns out they are primarily in the US; the article can be found here.
I'm referring to this ticket here, which shows more information about speeds. Thanks again animosity22 for the support!
Multi-threading gets a lot closer to utilising the full transfer speed; in my instance this is a Gbit server and doesn't have I/O bottlenecks. I've not had issues with Google (but they have a beefier backbone with more geo-located servers).
I'd also seen a few remarks about bringing this back, and would be keen to know if this is the aim :)
Thanks in advance!
I would also love to see this feature. Currently I don't get fast transfers using the StorJ hosted gateway as a backend for rclone... If I'm lucky I get 15 MB/s per connection... yeah... and the server has a 5 Gbit/s link...
I can only refer to this thread: https://forum.rclone.org/t/rclone-mount-random-slow-speeds/31417/9
Or this highly voted question on Stack Overflow:
https://stackoverflow.com/questions/4794420/i-need-multi-part-downloads-from-amazon-s3-for-huge-files
I would also love to see this feature. Downloading large files is just a pain.
One more vote for this.
I'm running a VPS overseas, and would like to use 'rclone mount' on that VPS to an SFTP server at home. The VPS has a >10Gbps connection, my home connection is about 1.4 Gbps, but the latency between the two is ~150ms. So a classic 'high latency fat pipe' where the latency constrains the throughput, not the bandwidth.
Pulling from home, a single-threaded 'move' of a single large file manages about 25-30 MB/sec, while a multi-threaded 'move' can manage around 150 MB/sec on average (one file or many). Being able to obtain the latter throughput would make it practical to move to a less expensive VPS with SSD storage, which I expect will be well-suited for 'vfs-cache-mode full'.
- Paul
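To add some context on why the single stream tops out so low on that kind of link: a single TCP connection is roughly limited to window size divided by round-trip time. Assuming, purely for illustration, an effective window of about 4 MiB over a 150 ms RTT:

4 MiB / 0.150 s ≈ 27 MiB/s

which is in the same ballpark as the 25-30 MB/sec reported above. Parallel streams each get their own window, which is why the multi-threaded transfer gets much closer to the 1.4 Gbps line rate.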
I would also love to see this feature. After exposing a cloud drive over WebDAV and mounting that WebDAV remote locally with rclone, copying files can be slow, especially for rate-limited cloud drives that download at woefully low speeds. Multi-threaded downloaders like BitComet and Internet Download Manager are very fast when downloading the same files over WebDAV, because they both support multi-threaded, chunked downloads.
Mounting the WebDAV remote with rclone with the multi-thread flags enabled still has no effect, so I hope cache multi-threaded downloads are supported soon. My mount command:
/extdisk01/software/rclone/rclone mount thunderData: /extdisk02/minioData/thunderdata \
  --config /extdisk01/software/rclone/rclone.conf \
  --vfs-cache-mode writes --allow-non-empty --no-modtime \
  --umask 0000 --default-permissions --dir-perms 777 --file-perms 777 --allow-other \
  --multi-thread-streams 32 --multi-thread-cutoff 128M \
  --buffer-size 2G --transfers 32 --checkers 64 \
  --cache-chunk-size 300M --cache-chunk-total-size 4G \
  --vfs-read-chunk-size 300M --vfs-read-chunk-size-limit 2G \
  --vfs-cache-max-size 2G --vfs-disk-space-total-size 6G \
  --log-file=/extdisk01/software/rclone/log/rclone-thunderdata02.log &
I think I am also running into this limitation.
With rclone mount, a cp of the file will hit 20-30 Mbps; rclone copy of the same file will hit 100 Mbps (the connection limit).
+1 vote for this to be added
I also feel this is a significant issue for users wanting to download large files via rclone mount. When is this planned? I hope multi-threaded downloads are supported soon.
I know that for other features @ncw can ask for sponsorship in order to tackle (or prioritise) them (especially big ones).
Given the number of thumbs up on this issue (and its age), would it be acceptable for you, @ncw, to open a crowdfunded sponsorship for this issue? I'm willing to participate, but I can't say I am able to sponsor the whole thing on my own :)
Edit: if others are willing to participate, thumbs up this comment; this will help @ncw consider it (or not).
+1 would love this to be fixed.
Yes please, I'd love to have this feature
This has been a feature request since 2020, which is ridiculous. Start a funding process and I'll donate... On the one hand the devs are asking for payment, on the other hand they don't offer you a way to donate. Clever...
This feature is planned for 1.67. We have a sponsor for the work!
Can't wait for it :D 👯 👍 👍 👍
+1 👍👍👍🤞🙏
@ncw, is there a PR/branch where we can follow progress on this?