Results 75 comments of Craig Warren

@Mark-Dunscomb I've only had a chance to have a quick look at this so far, but my suspicion is that this is some sort of issue with ParaView reading the...

@Mark-Dunscomb so a quick test suggests that the material IDs from the Peplinski model and fractal distribution (#fractal_box) are being correctly written to the geometry file. Do you have an...

@Mark-Dunscomb A quick check: the spatial resolutions you are using on the geometry views do not match the spatial resolutions of the models. Was this intended? Whilst, in theory, this should...

@LaanstraGJ Interesting... I assumed (perhaps incorrectly) that the Slurm scheduler set $CUDA_VISIBLE_DEVICES to whatever GPUs were available solely for that user's job. Therefore the GPU resource couldn't be in conflict with...

@LaanstraGJ pycuda.driver.Device(number) just takes the PCI bus ID of the device you want to run on. This should be what is given in CUDA_VISIBLE_DEVICES. I am still confused by what...

@LaanstraGJ I don't think so. pycuda.driver.Device will take whatever integer number you give it (but it should be a valid PCI bus ID); see https://documen.tician.de/pycuda/driver.html#pycuda.driver.Device & https://github.com/gprMax/gprMax/blob/856afd4689c6a169fe1eb160c6c88ed13c158d63/gprMax/model_build_run.py#L501
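A minimal sketch of how CUDA_VISIBLE_DEVICES interacts with the device numbering discussed above. The parsing helper here is illustrative, not gprMax code; note that in standard CUDA semantics, Device(n) indexes into the list of *visible* devices, which the runtime renumbers from zero:

```python
import os

def visible_device_ids(env_value=None):
    """Parse CUDA_VISIBLE_DEVICES into the list of physical device IDs
    that the CUDA runtime will expose to this process."""
    if env_value is None:
        env_value = os.environ.get("CUDA_VISIBLE_DEVICES", "")
    return [int(d) for d in env_value.split(",") if d.strip()]

# With CUDA_VISIBLE_DEVICES="2,3" the runtime renumbers the visible
# devices: ordinal 0 maps to physical GPU 2, ordinal 1 to physical GPU 3.
print(visible_device_ids("2,3"))  # [2, 3]

# A GPU-side call would then look something like (requires pycuda + a GPU,
# so it is commented out here):
#   import pycuda.driver as drv
#   drv.init()
#   dev = drv.Device(0)  # first *visible* device, i.e. physical GPU 2 above
```

This is why two jobs can both call Device(0) without conflict: if Slurm gives each job a different CUDA_VISIBLE_DEVICES value, ordinal 0 resolves to a different physical GPU in each job.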

Yes, I think we are saying the same thing here. So I'm wondering if the problem you are seeing is related to the following: > When possible, Slurm automatically determines...

@LaanstraGJ I've been thinking about this some more. A couple of questions: 1. Are you using the MPI task farm with GPUs? i.e. requesting multiple GPUs? 2. How are you...

When I checked how I'd been using gprMax on a HPC with GPUs I found the code below (for 8 traces and 2 GPUs):

```
devIDs="$(srun echo $CUDA_VISIBLE_DEVICES | sed...
```
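The sed pipeline above is truncated, but its apparent intent, splitting $CUDA_VISIBLE_DEVICES into individual device IDs and spreading the traces across them, can be sketched as follows. The round-robin assignment and the helper name are assumptions for illustration, not the original job script:

```python
import itertools

def assign_traces_to_gpus(n_traces, cuda_visible_devices):
    """Round-robin trace numbers (1..n_traces) over the device IDs
    listed in a CUDA_VISIBLE_DEVICES-style string, e.g. "0,1"."""
    dev_ids = [d.strip() for d in cuda_visible_devices.split(",") if d.strip()]
    # Cycle through the available GPUs so traces are spread evenly.
    return dict(zip(range(1, n_traces + 1), itertools.cycle(dev_ids)))

# 8 traces across 2 GPUs, matching the job above:
print(assign_traces_to_gpus(8, "0,1"))
# {1: '0', 2: '1', 3: '0', 4: '1', 5: '0', 6: '1', 7: '0', 8: '1'}
```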

@Mark-Dunscomb Thanks for the info. So, I don't think it relates to the initial reading of the file (lines 41-42), as this method should read the file line-by-line, so the...