Amit Murthy
I noticed that the default `show` of a `DArray` (i.e., it falls back to the one in Base) results in 4 round-trip calls for each element. We ought to define `...
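A minimal sketch of one possible `show` definition, assuming current DistributedArrays.jl (illustrative, not the actual fix): fetch the data in bulk and print locally, rather than issuing remote calls per element.

```julia
using Distributed, DistributedArrays

# Illustrative only: materialize the DArray with a single bulk fetch per
# chunk and reuse the ordinary Array printing, instead of issuing several
# round-trip remote calls per element.
function Base.show(io::IO, ::MIME"text/plain", d::DArray)
    print(io, summary(d), ":\n")
    show(io, MIME"text/plain"(), convert(Array, d))
end
```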
Need to implement the equivalent of https://github.com/JuliaLang/julia/pull/6768 and https://github.com/JuliaLang/julia/pull/10073 when using MPI for transport.
Currently we busy-wait in an `MPI.IProbe` / `yield()` loop, which consumes CPU cycles unnecessarily.
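For context, a rough sketch of the polling pattern being described (not the actual transport code); the positional `MPI.Iprobe(src, tag, comm)` form returning `(flag, status)` is an assumption based on older MPI.jl releases, and newer releases use a different signature:

```julia
using MPI

# Busy-wait: keep probing for an incoming message, yielding to other Julia
# tasks between probes. The loop still spins and burns CPU while idle.
function wait_for_message(src, tag, comm)
    while true
        flag, _ = MPI.Iprobe(src, tag, comm)
        flag && return        # a matching message is ready to be received
        yield()               # let other tasks run, then poll again
    end
end
```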
A few related issues:
- warnings printed when using MPI transport
- different finalization procedures when using MPI_ON_WORKERS, MPI_TRANSPORT_ALL, and TCP_TRANSPORT_ALL; implement a standard "close" function
- warning printed...
`pmap` in batch mode uses a local `asyncmap` to process each batch (https://github.com/JuliaLang/julia/blob/9e3318c9840e7a9e387582ba861408cefe5a4f75/base/distributed/pmap.jl#L198). Considering that each computation in `pmap` is fairly large and batch sizes are small, an asyncmap would...
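As a hedged illustration of the structure in question (the `run_batch` helper below is made up, not the actual pmap internals): the master ships a whole batch to a worker, and the worker maps the function over that batch with a local `asyncmap`, even though each element is itself a sizeable computation.

```julia
using Distributed
addprocs(2)

# Hypothetical stand-in for what a worker does with one batch in batch mode:
# it maps f over the batch concurrently with a local asyncmap.
run_batch(f, batch) = asyncmap(f, batch)

# Batch mode itself is enabled via pmap's batch_size keyword:
results = pmap(x -> sum(rand(10^6)) + x, 1:20; batch_size=5)
```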
The default `all_to_all` topology connects all processes to each other. While this is fine for small clusters, the total number of TCP connections grows rapidly, as roughly (N^2)/2. Considering that a...
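For a sense of scale (illustrative numbers, not from the issue): 100 processes in an all-to-all topology need roughly 100*99/2 ≈ 5,000 TCP connections, and 1,000 processes need roughly 500,000. Current Julia exposes a `topology` keyword on `addprocs` that avoids worker-to-worker connections entirely:

```julia
using Distributed

# Workers connect only to the master; no worker-to-worker connections are
# set up, so the connection count grows linearly with the number of workers.
addprocs(8; topology=:master_worker)
```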
```
julia> rr=RemoteRef(2)
RemoteRef(2,1,1)

julia> take!(rr)
^CERROR: InterruptException:
 in process_events at ./stream.jl:642
 in wait at ./task.jl:302
 in wait at ./task.jl:228
 in wait_full at ./multi.jl:631
 in remotecall_fetch at multi.jl:731
 in call_on_owner...
```
Starting a discussion to cleanly specify a Julia message. This will help with:
- swappable transports at a "message" level, for example 0MQ or MPI (see https://github.com/JuliaParallel/MPI.jl/issues/60)
- messages that...
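Purely as a strawman for the discussion (none of these names exist in Base; they are hypothetical), a self-contained message could carry its own kind, addressing information, and an opaque serialized payload, so that any byte transport - TCP, 0MQ, or MPI - can carry it unchanged:

```julia
# Hypothetical message envelope; field names and types are illustrative only.
struct JuliaMessage
    kind::Symbol              # e.g. :call, :call_fetch, :result
    sender::Int               # originating worker id
    response_id::UInt64       # id the receiver should respond to, 0 if none
    payload::Vector{UInt8}    # serialized arguments, transport-agnostic
end
```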
Scenario:
- Master creates a RemoteRef on worker 2
- The RemoteRef is sent to worker 3 as part of a message, but is not used/assigned/stored on 3
- Reference to RemoteRef is...
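A sketch of the first two steps of this scenario under the modern Distributed API (`RemoteRef` has since become `Future`/`RemoteChannel`; worker ids assume a fresh session):

```julia
using Distributed
addprocs(2)                        # workers 2 and 3

rr = remotecall(rand, 2)           # master holds a reference to a value on worker 2

# rr travels to worker 3 as part of the call message, but worker 3 never
# assigns or stores it:
remotecall_fetch(_ -> nothing, 3, rr)
```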
The following sequence resulted in the worker segfaulting and terminating:
- `using PTools`
- `addprocs(1)`
- call a method in PTools. The method creates an anonymous function (in PTools code)...