Increase granularity of halo-exchange timing info
Description
Previously, the NVTX range measuring the so-called 'MPI' time also included the time to pack and unpack the contiguous buffers exchanged during the MPI_SENDRECV operation. While grouping these together can be reasonable, it obscures the pure communication time. To avoid confusion and always be able to recover the proper communication time, I renamed the 'RHS-MPI' NVTX range to 'RHS-MPI+BufPack' and added a new range, 'RHS-MPI_SENDRECV', around only the MPI_SENDRECV call.
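As a rough sketch of the resulting range layout (assuming the common Fortran NVTX wrapper providing `nvtxStartRange`/`nvtxEndRange`; the pack/unpack helpers and buffer names below are hypothetical, not MFC's actual identifiers):

```fortran
! Sketch only: illustrates the new range split, not MFC's exact source.
subroutine s_exchange_halo_sketch(q_send_buf, q_recv_buf, buf_count, dest_rank, src_rank)

    use mpi
    use nvtx   ! assumed Fortran NVTX wrapper (nvtxStartRange/nvtxEndRange)

    implicit none

    real(8), intent(inout) :: q_send_buf(:), q_recv_buf(:)
    integer, intent(in)    :: buf_count, dest_rank, src_rank
    integer :: ierr

    ! Outer range: buffer packing + communication + unpacking
    call nvtxStartRange("RHS-MPI+BufPack")

    ! ... pack non-contiguous halo data into q_send_buf ...

    ! Inner range: only the MPI_SENDRECV call, so profiles report the pure
    ! communication time separately from the (un)packing work
    call nvtxStartRange("RHS-MPI_SENDRECV")
    call MPI_SENDRECV(q_send_buf, buf_count, MPI_DOUBLE_PRECISION, dest_rank, 0, &
                      q_recv_buf, buf_count, MPI_DOUBLE_PRECISION, src_rank, 0, &
                      MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)
    call nvtxEndRange   ! closes RHS-MPI_SENDRECV

    ! ... unpack q_recv_buf into the halo cells ...

    call nvtxEndRange   ! closes RHS-MPI+BufPack

end subroutine s_exchange_halo_sketch
```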
Type of change
- [x] New feature (non-breaking change which adds functionality)
How Has This Been Tested?
I ran an example case under Nsight Systems (nsys) with and without this change. The total time reported by the new RHS-MPI_SENDRECV NVTX range was within 5% of the time reported by the nsys MPI trace for this example.
See below for screenshots from the nsys reports. In this example, the MPI_SENDRECV time is only ~1.4% of the total 'MPI' time, i.e., most of what was previously reported as 'MPI' time is actually buffer packing and unpacking.
This screenshot shows the nsys MPI trace timing info. Note the highlighted line's 'total time':
This screenshot shows the NVTX range timing information. Note that the RHS-MPI_SENDRECV range's total time closely matches the MPI trace result above:
Test Configuration: 4 V100 nodes on Phoenix running the 2D shockbubble case for 700 timesteps.
Checklist
- [x] I ran `./mfc.sh format` before committing my code
- [x] This PR does not introduce any repeated code (it follows the DRY principle)
- [x] I cannot think of a way to condense this code and reduce any introduced additional line count
If your code changes any code source files (anything in src/simulation)
To make sure the code is performing as expected on GPU devices, I have:
- [x] Checked that the code compiles using NVHPC compilers
- [ ] Checked that the code compiles using CRAY compilers
- [x] Ran the code on either V100, A100, or H100 GPUs and ensured the new feature performed as expected (the GPU results match the CPU results)
- [ ] Ran the code on MI200+ GPUs and ensured the new feature performed as expected (the GPU results match the CPU results)
- [x] Enclosed the new feature via `nvtx` ranges so that they can be identified in profiles
- [x] Ran an Nsight Systems profile using `./mfc.sh run XXXX --gpu -t simulation --nsys`, and have attached the output file (.nsys-rep) and plain text results to this PR
- [ ] Ran an Omniperf profile using `./mfc.sh run XXXX --gpu -t simulation --omniperf`, and have attached the output file and plain text results to this PR
- [ ] Ran my code using various numbers of different GPUs (1, 2, and 8, for example) in parallel and made sure that the results scale similarly to what happens if you run without the new code/feature
Codecov Report
Attention: Patch coverage is 80.59701% with 13 lines in your changes missing coverage. Please review.
Project coverage is 42.96%. Comparing base (efc9d67) to head (022f593). Report is 3 commits behind head on master.
Additional details and impacted files
| Coverage Diff | master | #639   | +/-    |
|---------------|--------|--------|--------|
| Coverage      | 42.85% | 42.96% | +0.11% |
| Files         | 61     | 61     |        |
| Lines         | 16280  | 16314  | +34    |
| Branches      | 1891   | 1882   | -9     |
| Hits          | 6976   | 7010   | +34    |
| Misses        | 8259   | 8260   | +1     |
| Partials      | 1045   | 1044   | -1     |
Is it possible to adjust the NVTX range naming to make the hierarchy of the ranges more obvious? For example, TimeStep > RHS > Communication > MPI/SendRecv, or something similar, with this style used for all ranges. Right now, it is hard to discern which calls nest inside others. I do realize that other parts of the Nsys GUI make this more obvious.
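For illustration only, the suggested prefix-based hierarchy could look like the following sketch (hypothetical range names; same assumed Fortran NVTX wrapper as above, not MFC's actual code):

```fortran
! Hypothetical sketch of prefix-based, hierarchical range names.
subroutine s_hierarchical_ranges_sketch()
    use nvtx   ! assumed Fortran NVTX wrapper (nvtxStartRange/nvtxEndRange)
    implicit none

    call nvtxStartRange("TIMESTEP")
    call nvtxStartRange("TIMESTEP-RHS")
    call nvtxStartRange("TIMESTEP-RHS-COMM")
    call nvtxStartRange("TIMESTEP-RHS-COMM-SENDRECV")
    ! ... MPI_SENDRECV here ...
    call nvtxEndRange   ! TIMESTEP-RHS-COMM-SENDRECV
    call nvtxEndRange   ! TIMESTEP-RHS-COMM
    call nvtxEndRange   ! TIMESTEP-RHS
    call nvtxEndRange   ! TIMESTEP
end subroutine s_hierarchical_ranges_sketch
```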
Just updated things. Here's the output viewed in NSYS (table and graph views):
Nice. I think a 'TSTEP-SUBSTEP' range makes sense (for RK3 you have 3 such substeps). This helps consolidate things. Related to #631
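For context, a per-substep range of the kind suggested here could be sketched roughly as below (hypothetical loop and routine names; MFC's actual time stepper differs):

```fortran
! Hypothetical sketch: one NVTX range per RK3 substep.
subroutine s_rk3_substep_ranges_sketch()
    use nvtx   ! assumed Fortran NVTX wrapper (nvtxStartRange/nvtxEndRange)
    implicit none
    integer :: i_substep

    do i_substep = 1, 3
        call nvtxStartRange("TSTEP-SUBSTEP")
        ! ... compute the RHS and update the state for this substep ...
        call nvtxEndRange   ! TSTEP-SUBSTEP
    end do
end subroutine s_rk3_substep_ranges_sketch
```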
@max-Hawkins would you mind updating/finishing this for merge?
@henryleberre Ready for your evaluation.
needs `./mfc.sh format -j 4`
edit: nvm
Thanks! A beauty. Merging.