PLM:SLURM doesn't work right for HPE Slingshot when VNI enabled
The slurm PLM component sets --mpi=none as part of the srun command used to launch the prted daemons.
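For reference, the generated launch line looks roughly like the following (a sketch only; the exact set of srun options varies by PRRTE version, and the node list and prted arguments are elided here):

```sh
# Approximate shape of the daemon launch built by the slurm PLM (illustrative)
srun --mpi=none --nodes=<N> --ntasks-per-node=1 [other srun options] prted [prted options]
```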
On HPE Slingshot 11 networks where VNI credentials are enforced, this effectively results in a failure to launch for multi-node jobs.
Turning on FI_LOG_LEVEL=debug shows a characteristic signature for this:
Request dest_addr: 32 caddr.nic: 0X19D1 caddr.pid: 1 rxc_id: 0 error: 0x26c450f0 (err: 5, VNI_NOT_FOUND)
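One way to capture that signature when reproducing (a sketch; assumes Open MPI/PRRTE's mpirun, and the application name and process count are placeholders):

```sh
# Enable libfabric debug logging and filter for the VNI failure
export FI_LOG_LEVEL=debug
mpirun -n 2 --map-by node ./my_app 2>&1 | grep -i vni
```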
This addition to the srun command line options for prted launch needs to be suppressed on systems using HPE Slingshot.
Problematic: the issue here is that specifying an MPI for srun will automatically make Slurm think that the daemons are MPI procs, which has implications for how they are run. What "mpi" option are you thinking of trying?
Bottom line is that the VNI allocation system is broken for indirect launch - been hearing that from other libraries. Only thing I can come up with is to find a non-srun solution, though I'm open to hearing how to get around it.
Problematic: the issue here is that specifying an MPI for srun will automatically make Slurm think that the daemons are MPI procs, which has implications for how they are run. What "mpi" option are you thinking of trying?
just not specifying anything about mpi.
I plan to open a PR to not insert this option into the srun cmd line.
An easy workaround for a user who finds this problematic is to set
SLURM_MPI_TYPE=none
in their shell before using mpirun.
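For example (a sketch; the application name and mpirun arguments are placeholders):

```sh
# Force Slurm's default MPI plugin type to "none" for this shell,
# then launch through mpirun as usual.
export SLURM_MPI_TYPE=none
mpirun -n 4 ./my_app
```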
Ah, but it is necessary to have that option on non-HPE systems, especially when they set a default MPI type. You could wind up breaking all the non-HPE installations, and the HPE installations that have disabled VNI. Requiring everyone in those situations (which greatly outnumber those with Slingshot) to apply a workaround seems backwards to me. Perhaps finding a more generalized solution might be best?
Also, remember that Slurm now injects its own cmd line options, so we need to figure out a solution that accounts for that as well.
Looking back, it appears we may have had to add this option to avoid having the daemon automatically bound, which then forced the procs it started to share that binding. Other options could probably be used for that purpose as well. However, there may be additional reasons why we added it, so some further investigation is needed to be sure we don't cause problems.
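For instance, if avoiding daemon binding was the only motivation, one alternative worth checking (an assumption, not verified against Slurm's behavior) would be to request no CPU binding explicitly rather than implying it via --mpi=none:

```sh
# Sketch: keep the daemon unbound without touching Slurm's MPI plugin selection
srun --nodes=<N> --ntasks-per-node=1 --cpu-bind=none prted [prted options]
```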
The real issue isn't caused by the VNI itself - that's just an integer that is easily generated. The problem is the requirement that the VNI be "loaded" into CXI at a privilege level the PRRTE daemon isn't running at, so the daemon is blocked from doing it.
One solution is to create a setuid script that takes only one argument (the VNI) and executes the required operation at the CXI user's level. You might check and see if anyone has an issue with that, and what can be done to minimize any concerns. Ultimately, that's probably the correct solution - if one can make it acceptable.
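A minimal sketch of what such a helper might look like (note that Linux ignores the setuid bit on interpreted scripts, so in practice this would be a small compiled wrapper or be invoked through sudo; the privileged command at the end is a hypothetical placeholder, not an actual CXI CLI):

```sh
#!/bin/sh
# Hypothetical privileged VNI loader -- sketch only.
# Accepts exactly one argument: the VNI, a small integer.
set -eu

if [ "$#" -ne 1 ]; then
    echo "usage: $0 <vni>" >&2
    exit 1
fi

vni="$1"

# Reject anything that is not a plain decimal integer.
case "$vni" in
    ''|*[!0-9]*) echo "invalid VNI: $vni" >&2; exit 1 ;;
esac

# Placeholder for the real privileged operation that registers the VNI
# with the CXI service; the actual command/API is site-specific.
exec /opt/site/bin/load-vni-into-cxi "$vni"
```

Keeping the interface to a single validated integer limits the attack surface, which is presumably what would make a privileged helper acceptable to sites.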
@hppritcha what version of SLURM are you using on this machine that experiences the issue?
This came up on the PMIx call today, and I'm a bit lost on how --mpi=none in the Slurm PLM might be improving anything? The switch plugin in Slurm will kick in regardless, and should be setting up the VNIs.
@hppritcha I'm guessing that this resolved itself - perhaps some odd situation that generated a foobar result? In absence of any further input, I'll just close this issue as "not reproducible", so please let us know if this is real.
Let's chat about this on the next devel call. From what I can recall, this was observed on some new system that may not have been in a stable state, or may be in some unusual configuration. Might be that a simple MCA param is the right solution so that the rest of the world is fine - having the odd system set a param in the default MCA param file seems a reasonable burden.
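If that route is taken, the handful of affected systems would drop something like this into PRRTE's default MCA param file (the file path and the param name below are hypothetical placeholders, shown only to illustrate the shape of the fix):

```sh
# e.g. in <prefix>/etc/prte-mca-params.conf (exact path depends on the install)
# hypothetical param name -- whatever the eventual PR calls it:
plm_slurm_disable_mpi_none = 1
```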
First let's see if I can still reproduce...
I can't reproduce this problem now on the system that exhibited this VNI_NOT_FOUND issue.
I'm fine with closing this issue.
Okay - thanks!