Mathias Louboutin
#### Problem Description
Some environment variables are hard-coded for Intel MPI (on the supported HB and HC VMs). I ran into this issue running MPI jobs on 2 nodes with intel...
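One direction a fix could take, sketched with illustrative names only (the detection helper and the chosen variable value are my assumptions, not Devito's actual code): set Intel-MPI-specific environment variables only when Intel MPI is actually detected, rather than hard-coding them for particular VM types.

```python
import os
import subprocess

def running_intel_mpi():
    # Illustrative detection: Intel MPI identifies itself in the
    # `mpirun --version` banner.
    try:
        out = subprocess.run(["mpirun", "--version"],
                             capture_output=True, text=True).stdout
    except FileNotFoundError:
        return False
    return "Intel" in out

if running_intel_mpi():
    # I_MPI_FABRICS is a real Intel MPI variable; the value here is
    # only an example, not Devito's actual setting.
    os.environ.setdefault("I_MPI_FABRICS", "shm:ofi")
```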
The `glb_to_rank` function in `distributed` is one of the remaining computational bottlenecks for distributing the coordinates of sparse objects.
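A possible way to attack the bottleneck, sketched under assumptions (the function name and the decomposition description below are hypothetical, not Devito's API): vectorize the coordinate-to-rank lookup with `np.searchsorted` instead of looping over points in Python.

```python
import numpy as np

def glb_to_rank_vectorized(coords, upper_bounds):
    """Map each point's global coordinates to its owning MPI rank.

    coords: (npoints, ndim) integer array of global grid indices.
    upper_bounds: per-dimension 1D arrays holding each block's exclusive
        upper bound (a hypothetical description of the decomposition).
    """
    ndim = coords.shape[1]
    # For each dimension, find which block the coordinate falls into;
    # searchsorted replaces a per-point Python loop.
    block = [np.searchsorted(upper_bounds[d], coords[:, d], side="right")
             for d in range(ndim)]
    # Collapse per-dimension block indices into a single rank id,
    # assuming a row-major Cartesian process grid.
    shape = [len(b) for b in upper_bounds]
    return np.ravel_multi_index(block, shape, mode="clip")
```

For instance, a 2x2 process grid over a 10x10 domain would be described by `upper_bounds = [np.array([5, 10]), np.array([5, 10])]`.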
The factorizer `collect_nested` lacks tests for the new "robust" factorization.
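A minimal sketch of such a test, assuming `collect_nested` is importable (the import path is a guess; it has moved between modules across Devito versions):

```python
from sympy import expand, symbols
# Assumed import path, may differ across Devito versions.
from devito.dse import collect_nested

def test_collect_nested_robust():
    a, b, c = symbols("a b c")
    expr = expand(a*b + a*c + 2*a*b*c)
    # Whatever form the "robust" factorization produces, it must stay
    # mathematically equivalent to the input expression.
    assert expand(collect_nested(expr) - expr) == 0
```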
Operator cannot handle multiple data carriers with the same name. For example

```python
import numpy as np
from devito import Grid, inner, Function

grid = Grid((10, 10))
u = Function(name="u", grid=grid)
u.data[:] = np.random.rand(*u.shape)
u1...
```
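The snippet is truncated at `u1...`; a plausible continuation, purely as a sketch of what presumably triggers the clash (the second `Function` reusing the name `"u"` is my assumption, not from the issue text):

```python
# Hypothetical continuation of the truncated snippet: a second Function
# deliberately reusing the name "u".
u1 = Function(name="u", grid=grid)
u1.data[:] = np.random.rand(*u1.shape)
# `inner` builds an Operator internally; with two data carriers both
# named "u", that Operator cannot tell them apart.
print(inner(u, u1))
```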
The MPI AlltoAll calls make Devito crash for large problems.
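A workaround sketch under an assumed failure mode (the issue does not state the cause; 32-bit count overflow is only a common culprit for large AlltoAll crashes): split one large `Alltoall` into several smaller ones so per-message counts stay small. The function name and chunk size are illustrative.

```python
from mpi4py import MPI
import numpy as np

def chunked_alltoall(comm, sendbuf, recvbuf, max_chunk=2**20):
    """Exchange sendbuf in slices to keep per-call counts small.

    Assumes sendbuf and recvbuf are flat arrays of size
    comm.size * per_rank, contiguous in memory.
    """
    size = comm.Get_size()
    per_rank = sendbuf.size // size
    send2d = sendbuf.reshape(size, per_rank)
    recv2d = recvbuf.reshape(size, per_rank)
    for start in range(0, per_rank, max_chunk):
        n = min(max_chunk, per_rank - start)
        s = np.ascontiguousarray(send2d[:, start:start + n])
        r = np.empty_like(s)
        comm.Alltoall(s, r)
        recv2d[:, start:start + n] = r
    return recvbuf
```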
Move aperture combination into `PhysicalParameter` arithmetic to avoid large zero padding and communication. Currently, the arithmetic of `m1 + m2` for two different `PhysicalParameter`s processes different sizes and origins to...
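A minimal sketch of the idea on plain NumPy arrays (function name and the integer-origin convention are assumptions, not the `PhysicalParameter` API): align the two patches by their origins and allocate only their common bounding box, instead of zero-padding both to the full domain.

```python
import numpy as np

def combine_patches(m1, o1, m2, o2):
    """Add two model patches with different shapes and integer origins.

    m1, m2: ndarrays; o1, o2: their grid origins (integer offsets).
    Returns the summed patch and the origin of the bounding box.
    """
    ndim = m1.ndim
    lo = [min(o1[d], o2[d]) for d in range(ndim)]
    hi = [max(o1[d] + m1.shape[d], o2[d] + m2.shape[d]) for d in range(ndim)]
    out = np.zeros([h - l for l, h in zip(lo, hi)], dtype=m1.dtype)
    # Scatter each patch into the bounding box at its own offset.
    s1 = tuple(slice(o1[d] - lo[d], o1[d] - lo[d] + m1.shape[d])
               for d in range(ndim))
    s2 = tuple(slice(o2[d] - lo[d], o2[d] - lo[d] + m2.shape[d])
               for d in range(ndim))
    out[s1] += m1
    out[s2] += m2
    return out, tuple(lo)
```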
Most tests pass locally; opening to run the full CI. TODO:
- Add tests
- Getting messy, needs cleanup
- Add a nice tutorial
While `remove_out_of_bounds_reciever` works correctly, the resulting data has missing traces instead of zeroed-out traces, leading to incorrectly sized output with respect to the linear operator's size. Some post-process step...
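A sketch of what such a post-processing step could look like (function and parameter names are hypothetical): scatter the kept traces back into a full-size array, leaving the removed out-of-bounds receivers as zero traces so the output matches the linear operator's size.

```python
import numpy as np

def reinsert_zero_traces(data, kept_idx, ntraces_full):
    """Restore removed receivers as zero traces.

    data: (nt, nkept) array of traces that survived the bounds check.
    kept_idx: indices of those traces in the original receiver layout.
    ntraces_full: total number of receivers expected by the operator.
    """
    nt = data.shape[0]
    full = np.zeros((nt, ntraces_full), dtype=data.dtype)
    full[:, kept_idx] = data
    return full
```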
Re-opening once again.
The `print_Add` in `printing/str.py` makes very strong and incorrect assumptions, leading to incorrectly printed strings (and C/LaTeX/...) in many cases (and this breaks our code generation in...
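Not a fix for the printer itself, but a small sketch of how those sign-extraction assumptions can be sidestepped in a custom SymPy printer: print an `Add` by joining the printed arguments verbatim instead of guessing leading signs.

```python
from sympy import Symbol
from sympy.printing.str import StrPrinter

class SafeStrPrinter(StrPrinter):
    """Illustrative workaround: join printed Add arguments as-is,
    avoiding the default _print_Add sign handling."""
    def _print_Add(self, expr, order=None):
        return " + ".join(self._print(a) for a in expr.args)

x, y = Symbol("x"), Symbol("y")
print(SafeStrPrinter().doprint(x - y))  # "x + -y" -- explicit, if ugly
```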