
Increase granularity of halo-exchange timing info

max-Hawkins opened this pull request 1 year ago • 4 comments

Description

Previously, the NVTX range measuring the so-called 'MPI' time included the time spent packing and unpacking the contiguous buffers that are actually exchanged during the MPI_SENDRECV operation. While that may be a reasonable definition, to avoid confusion and to always be able to read off the pure communication time, I renamed the 'RHS-MPI' NVTX range to 'RHS-MPI+BufPack' and added a new NVTX range, 'RHS-MPI_SENDRECV', around only the MPI_SENDRECV operation.
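For reference, a rough sketch of the resulting structure (helper steps are only indicated by comments, and the nvtxStartRange/nvtxEndRange wrappers are assumed to come from MFC's m_nvtx module; this is illustrative, not the actual diff):

```fortran
! Rough sketch only: packing/unpacking are indicated by comments;
! nvtxStartRange/nvtxEndRange are assumed from MFC's m_nvtx module.
subroutine s_halo_exchange_sketch(send_buf, recv_buf, cnt, dst, src)
    use mpi
    use m_nvtx
    implicit none
    real(kind(0d0)), dimension(*), intent(inout) :: send_buf, recv_buf
    integer, intent(in) :: cnt, dst, src
    integer :: ierr

    call nvtxStartRange("RHS-MPI+BufPack")       ! outer range: pack + exchange + unpack

    ! ... pack halo cells into the contiguous send_buf here ...

    call nvtxStartRange("RHS-MPI_SENDRECV")      ! inner range: communication only
    call MPI_SENDRECV(send_buf, cnt, MPI_DOUBLE_PRECISION, dst, 0, &
                      recv_buf, cnt, MPI_DOUBLE_PRECISION, src, 0, &
                      MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)
    call nvtxEndRange                            ! closes RHS-MPI_SENDRECV

    ! ... unpack recv_buf into the halo cells here ...

    call nvtxEndRange                            ! closes RHS-MPI+BufPack
end subroutine s_halo_exchange_sketch
```

With this split, RHS-MPI_SENDRECV reports just the communication time, while RHS-MPI+BufPack still captures the full pack/exchange/unpack cost.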

Type of change

  • [x] New feature (non-breaking change which adds functionality)

How Has This Been Tested?

I ran an example case under nsys with and without this change. The timing reported by the new RHS-MPI_SENDRECV NVTX range was within 5% of the MPI trace timing for this example.

See below for screenshots from the NSYS reports. In this example, the MPI_SENDRECV time is ~1.4% of the total 'MPI' time.

This shows the NSYS MPI trace timing info; note the highlighted line's total time.
[Screenshot 2024-09-30 at 5 47 37 PM]
This is the NVTX range timing information; note that the RHS-MPI_SENDRECV range total time is similar to the MPI trace total time:
[Screenshot 2024-09-30 at 5 50 07 PM]
[Screenshot 2024-09-30 at 5 50 35 PM]

Test Configuration: 4 V100 nodes on Phoenix running the 2D shockbubble case for 700 timesteps.

Checklist

  • [x] I ran ./mfc.sh format before committing my code
  • [x] This PR does not introduce any repeated code (it follows the DRY principle)
  • [x] I cannot think of a way to condense this code and reduce any introduced additional line count

If your code changes any code source files (anything in src/simulation)

To make sure the code is performing as expected on GPU devices, I have:

  • [x] Checked that the code compiles using NVHPC compilers
  • [ ] Checked that the code compiles using CRAY compilers
  • [x] Ran the code on either V100, A100, or H100 GPUs and ensured the new feature performed as expected (the GPU results match the CPU results)
  • [ ] Ran the code on MI200+ GPUs and ensured the new feature performed as expected (the GPU results match the CPU results)
  • [x] Enclosed the new feature via nvtx ranges so that it can be identified in profiles
  • [x] Ran a Nsight Systems profile using ./mfc.sh run XXXX --gpu -t simulation --nsys, and have attached the output file (.nsys-rep) and plain text results to this PR
  • [ ] Ran an Omniperf profile using ./mfc.sh run XXXX --gpu -t simulation --omniperf, and have attached the output file and plain text results to this PR.
  • [ ] Ran my code using various numbers of different GPUs (1, 2, and 8, for example) in parallel and made sure that the results scale similarly to what happens if you run without the new code/feature

max-Hawkins commented on Oct 01 '24

Codecov Report

Attention: Patch coverage is 80.59701% with 13 lines in your changes missing coverage. Please review.

Project coverage is 42.96%. Comparing base (efc9d67) to head (022f593). Report is 3 commits behind head on master.

Files with missing lines            Patch %   Lines
src/simulation/m_rhs.fpp            73.68%    5 Missing and 5 partials :warning:
src/simulation/m_time_steppers.fpp  66.66%    2 Missing :warning:
src/simulation/m_mpi_proxy.fpp      90.00%    1 Missing :warning:
Additional details and impacted files
@@            Coverage Diff             @@
##           master     #639      +/-   ##
==========================================
+ Coverage   42.85%   42.96%   +0.11%     
==========================================
  Files          61       61              
  Lines       16280    16314      +34     
  Branches     1891     1882       -9     
==========================================
+ Hits         6976     7010      +34     
- Misses       8259     8260       +1     
+ Partials     1045     1044       -1     


codecov[bot] commented on Oct 01 '24

Is it possible to adjust the NVTX range naming to make the hierarchy of the ranges more obvious? For example, TimeStep — RHS — Communication — MPI/SendRecv, or something similar, with this style used for all ranges. Right now, it's hard to discern which calls nest inside which. I do realize that other parts of the Nsys GUI make this more obvious.
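Sketching it out (purely illustrative; these names are not the ones in this PR), each range name would carry its parent's prefix so the nesting stays visible even in flat summary tables:

```fortran
! Illustrative naming sketch: the prefix encodes the parent range.
call nvtxStartRange("TIMESTEP")
call nvtxStartRange("TIMESTEP-RHS")
call nvtxStartRange("TIMESTEP-RHS-COMM")           ! buffer pack/unpack + exchange
call nvtxStartRange("TIMESTEP-RHS-COMM-SENDRECV")  ! MPI_SENDRECV only
! ... MPI_SENDRECV call ...
call nvtxEndRange   ! TIMESTEP-RHS-COMM-SENDRECV
call nvtxEndRange   ! TIMESTEP-RHS-COMM
call nvtxEndRange   ! TIMESTEP-RHS
call nvtxEndRange   ! TIMESTEP
```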

sbryngelson commented on Oct 01 '24

Just updated things. Here's the output viewed in NSYS (table and graph views):

[Screenshot 2024-10-17 at 11 06 23 AM]

max-Hawkins commented on Oct 17 '24

Nice. I think a 'TSTEP-SUBSTEP' range makes sense (for RK3 you have 3 such substeps). This helps consolidate things. Related to #631
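Roughly something like this, as a sketch (loop variable and structure hypothetical):

```fortran
! Hypothetical sketch: one 'TSTEP-SUBSTEP' range per RK3 stage.
do rk_stage = 1, 3                       ! three substeps for RK3
    call nvtxStartRange("TSTEP-SUBSTEP")
    ! ... RHS evaluation and stage update for this substep ...
    call nvtxEndRange
end do
```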

sbryngelson commented on Oct 17 '24

@max-Hawkins would you mind updating/finishing this for merge?

sbryngelson commented on Nov 08 '24

@henryleberre Ready for your evaluation.

[Screenshot 2024-11-11 at 7 33 54 PM]

max-Hawkins commented on Nov 12 '24

needs ./mfc.sh format -j 4

edit: nvm

sbryngelson commented on Nov 12 '24

Thanks! A beauty. Merging.

sbryngelson commented on Nov 12 '24