
<LMON BE API> (ERROR): read_lmonp return a neg value

antonl321 opened this issue 3 years ago • 58 comments

Hi,

I did some tests with the latest stat-cl.

It works fine on runs with 45 nodes x 16 MPI tasks, but it fails with the error below on a run with 880 nodes x 8 MPI tasks:

[atosla@ac1-1001 ~]$ stat-cl  `pgrep -o srun` 2>~/stat-cl.err
STAT started at 2022-07-08-13:44:15
Attaching to job launcher (null):690121 and launching tool daemons...
<Jul 08 13:44:15> <Launchmon> (INFO): Just continued the RM process out of the first trap
<Jul 08 13:44:18> <LMON BE API> (ERROR): read_lmonp return a neg value

In stat-cl.err I see the following:

graph is not connected, found 0 potential roots
<Jul 08 13:44:18> <STAT_FrontEnd.C: 642> STAT returned error type STAT_MRNET_ERROR: MRNet reported a Network error with message: MRNet: Topology error: file has incorrect format
<Jul 08 13:44:18> <STAT.C: 181> STAT returned error type STAT_MRNET_ERROR: Failed to launch MRNet tree()
<Jul 08 13:44:18> <STAT_lmonBackEnd.C:146> ac3-2041.bullx: STAT returned error type STAT_LMON_ERROR: Failed to receive data from FE
<Jul 08 13:44:18> <STATD.C:230> ac3-2041.bullx: STAT returned error type STAT_LMON_ERROR: Failed to connect BE
<Jul 08 13:44:18> <STAT_lmonBackEnd.C:146> ac3-1087.bullx: STAT returned error type STAT_LMON_ERROR: Failed to receive data from FE
<Jul 08 13:44:18> <STATD.C:230> ac3-1087.bullx: STAT returned error type STAT_LMON_ERROR: Failed to connect BE
<Jul 08 13:44:18> <STAT_lmonBackEnd.C:146> ac1-4064.bullx: STAT returned error type STAT_LMON_ERROR: Failed to receive data from FE
<Jul 08 13:44:18> <STATD.C:230> ac1-4064.bullx: STAT returned error type STAT_LMON_ERROR: Failed to connect BE
<Jul 08 13:44:18> <STAT_lmonBackEnd.C:146> ac3-2012.bullx: STAT returned error type STAT_LMON_ERROR: Failed to receive data from FE
<Jul 08 13:44:18> <STATD.C:230> ac3-2012.bullx: STAT returned error type STAT_LMON_ERROR: Failed to connect BE
<Jul 08 13:44:18> <STAT_lmonBackEnd.C:146> ac1-4008.bullx: STAT returned error type STAT_LMON_ERROR: Failed to receive data from FE
<Jul 08 13:44:18> <STATD.C:230> ac1-4008.bullx: STAT returned error type STAT_LMON_ERROR: Failed to connect BE
<Jul 08 13:44:18> <STAT_lmonBackEnd.C:146> ac3-2013.bullx: STAT returned error type STAT_LMON_ERROR: Failed to receive data from FE
<Jul 08 13:44:18> <STATD.C:230> ac3-2013.bullx: STAT returned error type STAT_LMON_ERROR: Failed to connect BE
<Jul 08 13:44:18> <STAT_lmonBackEnd.C:146> ac3-3013.bullx: STAT returned error type STAT_LMON_ERROR: Failed to receive data from FE
<Jul 08 13:44:18> <STATD.C:230> ac3-3013.bullx: STAT returned error type STAT_LMON_ERROR: Failed to connect BE
<Jul 08 13:44:18> <STAT_lmonBackEnd.C:146> ac3-2033.bullx: STAT returned error type STAT_LMON_ERROR: Failed to receive data from FE
<Jul 08 13:44:18> <STATD.C:230> ac3-2033.bullx: STAT returned error type STAT_LMON_ERROR: Failed to connect BE
...

antonl321 avatar Jul 08 '22 13:07 antonl321

Can you attach the .top file in your stat_results directory?

lee218llnl avatar Jul 08 '22 16:07 lee218llnl

I got the following, which is obviously an incomplete list of hosts:

$ cat ~/stat_results/ifsMASTER.sp.x.0016/*top
ac1-1001.bullx:0 =>
	ac1-1001:0
	ac1-1002:0
	ac1-1003:0
	ac1-1006:0
	ac1-1007:0
	ac1-1008:0
	ac1-1009:0
	ac1-1012:0
	ac1-1013:0
	ac1-1014:0
	ac1-1016:0
	ac1-1017:0
	ac1-1018:0
	ac1-1019:0
	ac1-1020:0
	ac1-1021:0
	ac1-1022:0
	ac1-1023:0
	ac1-1030:0
	ac1-1031:0
	ac1-1032:0
	ac1-1033:0
	ac1-1034:0
	ac1-1035:0
	ac1-1036:0
	ac1-1037:0
	ac1-1038:0
	ac1-1039:0
	ac1-1040:0
	ac1-1041:0;

antonl321 avatar Jul 09 '22 07:07 antonl321

As a workaround, you can set the topology to "depth" with a value of "1". This will connect the STAT FE directly to all the STAT BEs. The hosts listed in the topology file are actually just MRNet communication processes that STAT uses to make its communication more scalable. The number of hosts you see in the topology file is probably the square root of the number of nodes in your job.

That aside, I will have to do some debugging to see why it doesn't like the topology file. It might not like that your hostnames have "-" characters in them.

Anyway, let me know if the depth 1 workaround suffices until I can fix this.
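Concretely, with the same pgrep invocation from your report, the workaround would look like:

stat-cl -d 1 `pgrep -o srun`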

lee218llnl avatar Jul 11 '22 16:07 lee218llnl

I'm actually thinking that MRNet does not like the fact that there is the ".bullx" in your hostname. It may be the case that when it sees ".something" it thinks "something" is your domain name and then tries to resolve the network address using that. I'm guessing that in your case, ".bullx" is not the domain, but rather just part of the node name. Is this accurate?

lee218llnl avatar Jul 11 '22 16:07 lee218llnl

I think so. We have discussed this a while back in https://github.com/LLNL/STAT/issues/33#issuecomment-919433850

antonl321 avatar Jul 12 '22 13:07 antonl321

The depth 1 option works (no crash), but stat-cl stays forever in "Sampling traces..." when I try to sample an 880 nodes x 8 MPI/node job. I have a suggestion here: if it were easy to implement, it would be useful to see the progress of the sampling every second or so. That way one could decide quickly whether it is worth waiting or better to cancel.

antonl321 avatar Jul 21 '22 13:07 antonl321

How long did you wait? It could take a few minutes when running with this flat topology rather than using a scalable tree.

Regarding progress sampling, there is no easy way to poll the daemons for progress, so unfortunately your suggestion cannot be implemented.

Ultimately, I'm hoping that we can get a fix in MRNet so you can use a scalable tree and then in theory the operation should be fairly quick.

lee218llnl avatar Jul 21 '22 17:07 lee218llnl

Up to 30 minutes or so. If I use a very simple test code (a few lines), I can get a stack trace in a couple of minutes even at a larger scale. As I said in my previous message, stat-cl gets stuck in "Sampling traces...". The launch of the monitors goes OK. The app I'm interested in is rather complex, with a deep stack trace. I wonder if some timeouts block the whole process.

antonl321 avatar Jul 21 '22 20:07 antonl321

Can you try running the same complex app with the deep stack at a smaller scale? One potential issue is contention on the file system, since all the STATDs will be trying to parse your executable's debug information.

lee218llnl avatar Jul 21 '22 20:07 lee218llnl

I have run it with 300 nodes, but it still takes a long time. I think you are right about STATD being stuck waiting for the debug info from the file system. I looked at one node with the bpf profile and offcpu tools, and most of the time STATD is off-CPU while stack traces are being collected. Also, a large number of Lustre RPCs are in flight. Can this be improved? I need to think and ask around.
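For reference, this is the sort of bcc invocation that shows it (assuming the bcc tools are installed on the node and that the daemon process is named STATD; the tool's path and name can vary by distro, and the trailing 30 is the trace duration in seconds):

offcputime -p `pgrep -o STATD` 30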

antonl321 avatar Jul 25 '22 15:07 antonl321

I have run stat-cl with -l BE -l SW -l CP on the case that hangs (880 nodes x 8 MPI/node). I noticed that after some reported progress, all log files stop being updated at about the same time (to the second). The last line in each *.STATD.log is <STAT_FilterDefinitions.C:661> File request down stream filter end, EXCEPT for one file, which ends with <STAT_BackEnd.C:1268> sending ackowledgement with tag 103 (first = 100) and return value 0.

I wonder if the hang is in STAT itself. Can one run stat on stat? :)

One more note: after I killed the hanging job, I found 880 files in the stat_results subfolder with the name pattern ifsMASTER.sp.x.0050.BE__.dot

antonl321 avatar Jul 27 '22 16:07 antonl321

Yes, you can run STAT on STAT, but the latter instance is likely to hang for the same reason. I think the problem is that without the MRNet tree, the STAT FE is trying to connect to 880 BEs and send data. That is clearly not scalable and will either take too long or hang. I'm hoping we can get the fix in MRNet to tolerate the "." in your system's node hostnames so we can let STAT/MRNet deploy a more scalable tree.

lee218llnl avatar Jul 27 '22 16:07 lee218llnl

Perhaps this helps: internally, on our cluster the short node names are enough for comms. So if you can get them with something like hostname -s, it should be a quick fix.
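For example, on one of our nodes (the output here is illustrative):

$ hostname
ac1-1001.bullx
$ hostname -s
ac1-1001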

antonl321 avatar Jul 27 '22 16:07 antonl321

I looked at the STAT source code and found that we already have a STAT_FE_HOSTNAME environment variable. @antonl321, in your case from above, can you set this variable to "ac1-1001" or whatever the short hostname is of the node from where you invoke STAT? If you look at the .top file that it produces, the value that you specify should be on the first line, rather than the hostname with ".bullx". You might want to test at a smaller scale, say 4 nodes, and then when you run stat-cl, also add "-d 2" to the stat-cl args.
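For example, using the login node from your first report (substitute your own short hostname):

export STAT_FE_HOSTNAME=ac1-1001
stat-cl -d 2 `pgrep -o srun`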

lee218llnl avatar Jul 27 '22 17:07 lee218llnl

I'm a bit confused, what exactly should be in STAT_FE_HOSTNAME? The name of the head node, or hostname pattern, or ...?

antonl321 avatar Jul 27 '22 17:07 antonl321

It should be the short hostname of the node from where you run stat-cl or stat-gui. Does that make sense?
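In other words, something like this (assuming a POSIX shell) on the node where you invoke STAT:

export STAT_FE_HOSTNAME=$(hostname -s)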

lee218llnl avatar Jul 27 '22 17:07 lee218llnl

Yes, it does. I'll test this when I get some free nodes on the system. Hopefully this evening or tomorrow morning.

antonl321 avatar Jul 27 '22 18:07 antonl321

With STAT_PE_HOSTNAME defined and without the -d option, stat-cl fails quickly with the sort of message shown below. Note that STATD reports a -d 1 flag although I didn't pass one. With -d 1, stat-cl hangs as before.

<Jul 27 20:45:09> <STAT_BackEnd.C:655> registering signal handlers
<Jul 27 20:45:09> <STATD.C:196> STATD invoked with: /lus/h2resw01/hpcperm/atosla/Tools/spack/opt/spack/linux-rhel8-zen/gcc-8.4.1/stat-develop-xq35kogkta6vrg4jonmm4sblhegoxpf3/bin/STATD -d 1 -L /home/atosla/R3/tmp/stat-logs/42873620 -l BE -l SW --lmonsharedsec=393846926 --lmonsecchk=1241494145
<Jul 27 20:45:09> <STAT_lmonBackEnd.C:50> Registering unpack function with Launchmon
<Jul 27 20:45:09> <STAT_lmonBackEnd.C:58> Launchmon handshaking
<Jul 27 20:45:09> <STAT_lmonBackEnd.C:66> Waiting for launchmon to be ready
<Jul 27 20:45:09> <STAT_lmonBackEnd.C:74> Allocating process table
<Jul 27 20:45:09> <STAT_lmonBackEnd.C:89> Getting my process table entries
<Jul 27 20:45:09> <STAT_lmonBackEnd.C:115> Launchmon successfully initialized
<Jul 27 20:45:09> <STAT_lmonBackEnd.C:132> Prepearing to connect to MRNet
<Jul 27 20:45:09> <STAT_lmonBackEnd.C:142> Receiving connection information from FE
<Jul 27 20:45:09> <STAT_lmonBackEnd.C:146> ad1-1036.bullx: STAT returned error type STAT_LMON_ERROR: Failed to receive data from FE
<Jul 27 20:45:09> <STATD.C:230> ad1-1036.bullx: STAT returned error type STAT_LMON_ERROR: Failed to connect BE
<Jul 27 20:45:09> <STAT_BackEnd.C:650> de-registering signal handlers

antonl321 avatar Jul 27 '22 21:07 antonl321

Perhaps it's just a typo in your message, but you said you set "STAT_PE_HOSTNAME" when it should be "STAT_FE_HOSTNAME"; that is, it looks like you put a "P" where it was supposed to be an "F". Regarding the "-d" option, that has a different meaning for the STATD process than it does for stat-cl, and "-d 1" is the correct option for STATD. That aside, can you send the .top file in the stat_results directory and also the debug log files that STAT generated?

lee218llnl avatar Jul 27 '22 21:07 lee218llnl

I have checked: in my script I have the right variable name; I mistyped it in my message. Attached are the debug logs and the .top file: stat-debug.tar.gz

antonl321 avatar Jul 28 '22 08:07 antonl321

Do you happen to have the STAT FE log file too? This would be enabled with -l FE. It looks like this is a different error in the STATD daemons when they are trying to receive their connection info from the STAT FE, and it would help to see where the FE is in the process.

That aside, it looks like there is a potential "fix" in MRNet. Can you modify your var/spack/repos/builtin/packages/mrnet/package.py in Spack and add:

version('hntest', commit='c7d029b03fd6fea932f83c6fe6a2fd9b063956a3')

After that you will have to spack uninstall --dependents mrnet, and then when you reinstall STAT in Spack, add "^mrnet@hntest" to your install spec.
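That is, roughly (the plain "stat" below stands in for whatever spec you normally install with):

spack uninstall --dependents mrnet
spack install stat ^mrnet@hntest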

lee218llnl avatar Jul 28 '22 16:07 lee218llnl

I should add that with the "hntest" mrnet, please add XPLAT_USE_FQDN=1 when you run and please do not specify STAT_FE_HOSTNAME.

lee218llnl avatar Jul 28 '22 16:07 lee218llnl

And when you test this, try at small scale again, perhaps 4 nodes, and add the "-d 2" option to stat-cl.
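Putting these together with the previous comment, the small-scale test run would look roughly like:

export XPLAT_USE_FQDN=1
unset STAT_FE_HOSTNAME
stat-cl -d 2 `pgrep -o srun`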

lee218llnl avatar Jul 28 '22 16:07 lee218llnl

stat logs with -l FE attached.

I have rebuilt mrnet and stat. stat-cl -d 2 on a small test (4 nodes, 128 MPI ranks) fails; see the attached file (-d-2). The behaviour at large scale is unchanged, that is, -d 1 hangs. -d 1 also hung once out of 3 runs on the small test. stat-FE-debug.tar.gz 42875702-d-2.tar.gz

antonl321 avatar Jul 29 '22 14:07 antonl321

In both cases, I still see this in the STAT FE log:

<Jul 29 13:11:33> <STAT_FrontEnd.C:642> STAT returned error type STAT_MRNET_ERROR: MRNet reported a Network error with message: MRNet: Topology error: file has incorrect format

Just to confirm, did you set XPLAT_USE_FQDN=1 in your environment? Also, can you cd to the MRNet installation prefix and grep for that environment variable to make sure you picked up the changes:

[lee218@rzwiz2:spack]$ cd opt/spack/linux-rhel7-broadwell/gcc-8.3.1/mrnet-hntest-kbsbtzcerm7qlrlxankxxhvmzr7mpcgy/
[lee218@rzwiz2:mrnet-hntest-kbsbtzcerm7qlrlxankxxhvmzr7mpcgy]$ grep XPLAT_USE_FQDN -nr *
Binary file bin/mrnet_commnode matches
Binary file bin/mrnet_topgen matches
Binary file lib/libxplat.a.5.0.1 matches
Binary file lib/libxplat.so.5.0.1 matches

lee218llnl avatar Jul 29 '22 15:07 lee218llnl

Also did you get any error output to the terminal when you ran?

lee218llnl avatar Jul 29 '22 15:07 lee218llnl

I noticed in the topology file:

ad2-2051.bullx:0 =>
	ad2-2051:0
	ad2-2055:0;

Note that ad2-2051 appears twice with the :0 rank. This is because the string comparison treats ad2-2051.bullx as a different host than ad2-2051 without the .bullx, and this may cause a conflict in MRNet. I pushed a change to the STAT develop branch that may fix this. Can you uninstall STAT with spack and rebuild with spack install stat@develop...? If you see failures, please send the debug logs, topology file, and terminal output as before. Thanks for being patient with me... I don't have an easy way to test this myself, since we don't have any systems with "." in the hostname.

lee218llnl avatar Jul 29 '22 16:07 lee218llnl

I tested my mrnet installation; it matches the variable as in your example. I launch stat-* from a script which has the line export XPLAT_USE_FQDN=1, so it should be fine. If possible, stat in debug mode should dump the values of all environment variables of interest, just to be sure that they have the expected values. For the new stat installation, should I keep mrnet@hntest?
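In the meantime, a plain-shell check like this in the launch script should confirm what the tools will see:

env | grep -E 'XPLAT|STAT_'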

antonl321 avatar Jul 29 '22 16:07 antonl321

STAT doesn't dump the values of env variables, but that is a good idea.

Please keep using mrnet@hntest from here on out. This is what I think will be needed moving forward, and if/when we get it to work, it will hopefully be merged into the MRNet master and then a release.

lee218llnl avatar Jul 29 '22 17:07 lee218llnl

I have installed the new version. I'm afraid we are back to the negative-value error when using -d 2:

+ ssh ab1-3004 /lus/h2resw01/hpcperm/atosla/Tools/spack/opt/spack/linux-rhel8-zen/gcc-8.4.1/stat-develop-gshuszoyutctc5xapk2dvjhv54hezfcg/bin/stat-cl -d 2 -l FE -l BE -l SW -l CP -L /home/atosla/R3/tmp/stat-logs/29932705 426346
STAT started at 2022-07-29-18:09:42
Attaching to job launcher (null):426346 and launching tool daemons...
<Jul 29 18:09:44> <STAT_FrontEnd.C: 2108> STAT returned error type STAT_ARG_ERROR: Not enough resources for specified topology.  Please specify more resources with --nodes and --procs.
<Jul 29 18:09:44> <STAT_FrontEnd.C: 535> STAT returned error type STAT_ARG_ERROR: Failed to create topology file
<Jul 29 18:09:44> <STAT.C: 181> STAT returned error type STAT_ARG_ERROR: Failed to launch MRNet tree()
<Jul 29 18:09:44> <LMON BE API> (ERROR): read_lmonp return a neg value
<Jul 29 18:09:44> <STAT_lmonBackEnd.C:146> ab1-3005.bullx: STAT returned error type STAT_LMON_ERROR: Failed to receive data from FE
<Jul 29 18:09:44> <STATD.C:230> ab1-3005.bullx: STAT returned error type STAT_LMON_ERROR: Failed to connect BE
<Jul 29 18:09:44> <STAT_lmonBackEnd.C:146> ab1-3007.bullx: STAT returned error type STAT_LMON_ERROR: Failed to receive data from FE
<Jul 29 18:09:44> <STATD.C:230> ab1-3007.bullx: STAT returned error type STAT_LMON_ERROR: Failed to connect BE
<Jul 29 18:09:44> <STAT_lmonBackEnd.C:146> ab1-3009.bullx: STAT returned error type STAT_LMON_ERROR: Failed to receive data from FE
<Jul 29 18:09:44> <STATD.C:230> ab1-3009.bullx: STAT returned error type STAT_LMON_ERROR: Failed to connect BE
<Jul 29 18:09:42> <Launchmon> (INFO): Just continued the RM process out of the first trap
srun: error: ab1-3005: task 1: Exited with exit code 12
srun: launch/slurm: _step_signal: Terminating StepId=29932705.5
srun: error: ab1-3004: task 0: Exited with exit code 12
srun: error: ab1-3007: task 2: Exited with exit code 12
srun: error: ab1-3009: task 3: Exited with exit code 12
<Jul 29 18:09:44> <STAT_lmonBackEnd.C:146> ab1-3004.bullx: STAT returned error type STAT_LMON_ERROR: Failed to receive data from FE
<Jul 29 18:09:44> <STATD.C:230> ab1-3004.bullx: STAT returned error type STAT_LMON_ERROR: Failed to connect BE

Logs attached: 42875786.tar.gz

antonl321 avatar Jul 29 '22 18:07 antonl321