Partial files are read from the beginning every time a resume is attempted
Describe the bug
Whenever I try to resume a partially downloaded file, megadl reads every single byte from the beginning.
This becomes unbearable when the download is a multi-gigabyte file residing on slow storage (at a few MB/s it can take hours per file). It is especially painful when the resume operation fails due to quota errors and the file must be re-read on every invocation of megadl, again and again.
To Reproduce
Start downloading a file and quit megadl before the download finishes. Then launch the same megadl command and let it resume the download of the same file.
Expected behavior
I can't say I understand how the resume function works, but from what I gather, download_from is determined and the CBC-MAC value must be calculated up to the point where the partial file ends, right?
Would it be possible to store whatever is necessary alongside the partial file and use that information if it's available? Ideally it should fail gracefully and read from the start if that extra info is not available (to accommodate downloads started with older versions).
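To illustrate what I mean, here is a rough sketch in Python. Everything here is hypothetical, not part of megatools: the `.resume` sidecar name, both helpers, and the use of CRC32 as a trivially serializable stand-in for the real CBC-MAC state. The point is only that a small state file next to the partial download could let resume skip the full re-read, while its absence falls back to current behavior.

```python
import json
import os
import zlib


def save_state(partial_path, offset, crc_state):
    """Persist resume info next to the partial file.

    Hypothetical ".resume" sidecar; CRC32 stands in for whatever
    integrity-check state megatools would actually need to save.
    """
    with open(partial_path + ".resume", "w") as f:
        json.dump({"offset": offset, "crc": crc_state}, f)


def load_state(partial_path):
    """Return (offset, crc) if a valid sidecar exists, else None.

    Returning None means "fall back to re-reading the whole partial
    file", so downloads started by older versions still work.
    """
    try:
        with open(partial_path + ".resume") as f:
            state = json.load(f)
        # Never trust an offset past the end of the actual file.
        if state["offset"] <= os.path.getsize(partial_path):
            return state["offset"], state["crc"]
    except (OSError, ValueError, KeyError):
        pass
    return None
```

The downloader would call `save_state` every few chunks (and once on clean shutdown), and `load_state` at startup; a `None` result triggers the existing from-the-start verification.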
Environment:
- OS and version: Debian 9
- Megatools version: git revision 2ca92 (but it's an old behavior so I don't think the version is relevant)
Yes, that's how it works and how it can be optimized.
So it does support resuming downloads? I thought it just corrupts anything that can't be completed because it triggered the 509 paywall error.
It supports resume, and does not corrupt anything. Megatools ensures the file is not corrupted by checking the already downloaded data first, before continuing the download. This makes resuming slower, but makes sure the data is correctly downloaded even when resuming.
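For anyone following along, the slow part can be sketched like this (a Python illustration, not the actual C code; `verify_and_find_offset` and the `mac_update` callback are made-up names, and a plain hash stands in for MEGA's CBC-MAC):

```python
CHUNK = 1 << 20  # read the existing partial file in 1 MiB pieces


def verify_and_find_offset(path, mac_update):
    """Re-read an existing partial file from byte zero, feeding every
    byte into the integrity check (stand-in for the CBC-MAC pass).

    Returns the offset the download can safely continue from. The
    full re-read of the file is exactly what makes resume slow on
    multi-gigabyte files sitting on slow storage.
    """
    offset = 0
    with open(path, "rb") as f:
        while True:
            block = f.read(CHUNK)
            if not block:
                break
            mac_update(block)
            offset += len(block)
    return offset
```

So the data is never trusted blindly; the cost is one sequential pass over everything already on disk each time megadl restarts.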