Arun Srinivasan
At [this](https://github.com/HenrikBengtsson/future.apply/blob/dc81adae768b78726684edddbd77b38fe3126eb6/R/future_lapply.R#L269) point, the chunk `ii` is run entirely on the same node. All the entries within the chunk therefore never get a chance to see whether other nodes are free. You...
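A minimal sketch of the pre-assignment issue, using base R's `parallel::splitIndices` rather than future.apply's actual chunking code: the indices are partitioned up front, so each chunk is bound to one worker no matter how the workloads turn out.

```r
library(parallel)

# Indices 1..10 are partitioned into 3 chunks before any work starts.
# Chunk i runs entirely on worker i; if worker 1 finishes early, it
# cannot pick up leftover entries from a slower worker's chunk.
chunks <- splitIndices(10, 3)
str(chunks)
```

With dynamic (work-stealing) scheduling, a free worker would instead pull the next unprocessed entry; with this static split, an unlucky chunk of slow entries serializes on one node.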
The problem is that as you increase the `future.scheduling` value, the overhead also seems to increase by a huge amount. In the same example as above, having `x
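For intuition, a simplified model (not future.apply's exact formula): with `w` workers, `future.scheduling = s` yields roughly `w * s` chunks, so a large `s` means many more futures, and each future pays its own setup and resolution cost.

```r
# Approximate number of futures created for n elements, w workers and
# future.scheduling = s (a simplification; the real code rounds and
# caps differently).
n_futures <- function(n, w, s) min(n, w * s)

n_futures(100, 4, 1)   # 4 chunks: one future per worker
n_futures(100, 4, 25)  # 100 chunks: one element per future
```

Finer chunking improves load balance but multiplies the per-future overhead, which is why cranking `future.scheduling` up can make the total runtime worse.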
(I noticed that I didn't have doFuture installed. Just installed the CRAN version.) Aha, good catch. The one with `doFuture::registerDoFuture()` takes 12s.

### registerDoSNOW()

```r
require(parallel)
require(doSNOW)
require(foreach)
require(future)
require(future.apply)
nodes
```
@HenrikBengtsson, is there a timeline for when this will be done? The problem is this: the chunks are pre-assigned to the node they'll run on. Imagine there are about...
Just setting `options(future.wait.interval=0L)` reduces the runtime from 21s to 3.8s (approximately the same as that of `foreach`):

```r
require(parallel)
require(doSNOW)
require(foreach)
require(future)
require(future.apply)
options(future.wait.interval=0L)
##
```
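A rough back-of-the-envelope model of where that time goes (an assumption about the mechanism, not future's actual internals): if the main R process sleeps `interval` seconds between checks for finished futures, each future's completion is noticed about `interval / 2` seconds late on average, so with many tiny futures the idle polling time can dominate the runtime.

```r
# Toy model: expected idle time spent polling when resolving many
# short-lived futures.  The interval values below are illustrative,
# not the package's documented defaults.
expected_idle <- function(n_futures, interval) n_futures * interval / 2
expected_idle(200, 0.2)  # 20s of idle time for 200 tiny futures
expected_idle(200, 0)    # 0s when polling without sleeping
```

This is consistent with the observation above: shrinking the wait interval mostly removes fixed per-future waiting, which is why the speedup is so dramatic when the individual tasks are short.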
Passing `future.globals="MY_GLOBALS"` works. But I have a feeling that this is a case where it should work out of the box.
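For reference, a self-contained sketch of the workaround (the body of `f` is made up for illustration; only the `MY_GLOBALS$x`-as-default pattern comes from the issue):

```r
library(future.apply)
plan(multisession, workers = 2)

MY_GLOBALS <- new.env()
MY_GLOBALS$x <- 10

# The global is referenced only in a default argument, which automatic
# globals detection may miss:
f <- function(i, x = MY_GLOBALS$x) i + x

# Naming the global explicitly ensures it is exported to the workers:
res <- future_lapply(1:3, f, future.globals = "MY_GLOBALS")
unlist(res)

plan(sequential)  # shut the background workers down
```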
Apologies for the late reply. I agree that the design of a function argument with a default like `MY_GLOBALS$x` is very unusual, and I'm working on fixing such code that I inherited. It'd...
Mark, the explanation is quite clear and makes sense. It's also quite nice that new algorithms can simply be plugged in, and that different compressors can be used on different...
> On the other hand, because the user already had the intention of overwriting the previous file, that might not be a very big issue for the user, what do...