Re-downloads the page instead of using the currently loaded content
Wallabagger re-downloads the page instead of using the content currently loaded, making it unable to save paywalled articles.
It should probably parse the DOM instead of re-downloading, at least by default.
While I can see how this could be helpful, Wallabag already manages credentials for paywalled sites (https://doc.wallabag.org/en/user/articles/restricted.html), though I know the list of compatible sites isn't big.
On a technical note, I would guess this is less a Wallabagger issue than a Wallabag issue: the server would need to store submitted text and/or HTML instead of fetching a URL, which I suspect would require rewriting part of the engine.
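For illustration, a client-side sketch of what the extension half could look like: capture the DOM as currently rendered (paywalled content included) and submit it alongside the URL. The `content` and `title` parameters of `POST /api/entries`, and the `apiBase`/`token` variables, are assumptions here, not verified against the Wallabag API.

```javascript
// Sketch only. Assumes a Wallabag instance at `apiBase`, an OAuth
// `token` already obtained, and that POST /api/entries accepts
// `content`/`title` fields -- none of this is verified here.

// Build the request payload from the page as currently loaded,
// so content already visible in the DOM is preserved as-is.
function buildEntryPayload(url, title, html) {
  return {
    url: url,      // original URL, kept for deduplication/metadata
    title: title,  // page <title> at capture time
    content: html, // serialized DOM instead of a server-side refetch
  };
}

// In a content script, one would then call something like:
//   const payload = buildEntryPayload(
//     location.href,
//     document.title,
//     document.documentElement.outerHTML
//   );
//   fetch(`${apiBase}/api/entries`, {
//     method: "POST",
//     headers: {
//       "Content-Type": "application/json",
//       Authorization: `Bearer ${token}`,
//     },
//     body: JSON.stringify(payload),
//   });
```

The point of keeping `url` in the payload is that the server could still use it for metadata and duplicate detection even when it no longer fetches the page itself.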
On the other hand, by re-fetching, Wallabag is better able to strip useless content, whereas in the browser you need countless clicks to dismiss those annoying GDPR screens and "subscribe to our newsletter" popups.
Although the motivation is different, I think the technical implementation for this request would be similar to #105 (here the motivation is to avoid a double download and to capture paywalled content, whereas there it is to clip a subsection of the content or save extra content such as comments).