
[QST] How to apply StreamK to hopper warp specialized GEMM

Open Hongbosherlock opened this issue 1 year ago • 10 comments

What is your question? I'm trying to apply stream-K or split-K to a Hopper warp-specialized GEMM. You can see the full code here; the GEMM is declared as follows:

using CollectiveEpilogue = typename cutlass::epilogue::collective::CollectiveBuilder<
    ArchTag, OperatorClass,
    TileShape, ClusterShape,
    EpilogueTileType,
    ElementAccumulator, ElementCompute,
    ElementC, LayoutC, AlignmentC,
    ElementD, LayoutD, AlignmentD,
    EpilogueSchedule,
    FusionOperation
  >::CollectiveOp;

using CollectiveMainloopWithBlockWiseScaling = typename cutlass::gemm::collective::CollectiveBuilder<
    ArchTag, OperatorClass,
    ElementA, LayoutA, AlignmentA,
    ElementB, LayoutB, AlignmentB,
    ElementAccumulator,
    TileShape, ClusterShape,
    cutlass::gemm::collective::StageCountAutoCarveout<
      static_cast<int>(sizeof(typename CollectiveEpilogue::SharedStorage))
    >,
    KernelSchedule
  >::CollectiveOp;

using GemmKernel = cutlass::gemm::kernel::GemmUniversal<
    Shape<int,int,int,int>, // Indicates ProblemShape
    CollectiveMainloopWithBlockWiseScaling,
    CollectiveEpilogue
>;
using Gemm = cutlass::gemm::device::GemmUniversalAdapter<GemmKernel>;

First, I tried simply replacing GemmUniversal with GemmSplitKParallel to apply split-K, but that didn't work.

Now I'm trying to apply stream-K/split-K to the GEMM in another way, based on this example:

  using GemmKernel = cutlass::gemm::kernel::GemmUniversal<
      Shape<int,int,int,int>,
      CollectiveOp,
      EpilogueOp,
      cutlass::gemm::StreamKScheduler // <--- Change needed to enable the stream-K scheduler
  >;
using Gemm = cutlass::gemm::device::GemmUniversalAdapter<GemmKernel>;

But I'm not sure what to do next. Does this method support FP8 GEMM on Hopper? Which CUDA version should I use?

@jackkosaian I would be very grateful for your help!

Hongbosherlock avatar Feb 03 '25 16:02 Hongbosherlock

Yes, it is supported for all Hopper GEMMs. And yes, that's the right scheduler to use.

thakkarV avatar Feb 03 '25 16:02 thakkarV

As Vijay mentioned, that is the right scheduler to use. Here's a diff that I just used to adapt example 67 (groupwise) to use the stream-K scheduler:

index d6de7f89..556e74c7 100644
--- a/examples/67_hopper_fp8_warp_specialized_gemm_with_blockwise_scaling/67_hopper_fp8_warp_specialized_gemm_with_groupwise_scaling.cu
+++ b/examples/67_hopper_fp8_warp_specialized_gemm_with_blockwise_scaling/67_hopper_fp8_warp_specialized_gemm_with_groupwise_scaling.cu
@@ -168,7 +168,8 @@ using CollectiveMainloopWithBlockWiseScaling = typename cutlass::gemm::collectiv
 using GemmKernel = cutlass::gemm::kernel::GemmUniversal<
     Shape<int,int,int,int>, // Indicates ProblemShape
     CollectiveMainloopWithBlockWiseScaling,
-    CollectiveEpilogue
+    CollectiveEpilogue,
+    cutlass::gemm::StreamKScheduler
 >;

 using Gemm = cutlass::gemm::device::GemmUniversalAdapter<GemmKernel>;
@@ -691,6 +692,7 @@ int run(Options<RasterOrderOptions> &options)
     GpuTimer timer;
     timer.start();
     for (int iter = 0; iter < options.iterations; ++iter) {
+      CUTLASS_CHECK(gemm.initialize(arguments, workspace.get()));
       CUTLASS_CHECK(gemm.run());
     }
     timer.stop();

jackkosaian avatar Feb 03 '25 17:02 jackkosaian

Are there any parameters we can use to get the best performance out of stream-K, like a split-K factor?

LeeDadao avatar Feb 04 '25 04:02 LeeDadao

If you'd like to use a split-K decomposition, you can set the splits argument as done in the Blackwell stream-K example here.

You can also consider using non-deterministic reduction, which may help performance at the expense of losing the guarantee of deterministic reduction order. See how to set this here and further description here.
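For concreteness, here is a minimal sketch of how those knobs are typically set through the scheduler member of the kernel arguments. The field and enum names below follow the SM90 stream-K tile scheduler (PersistentTileSchedulerSm90StreamKParams) in recent CUTLASS 3.x releases; treat them as assumptions and check the headers and stream-K examples for your version.

// Sketch only: reuses the `Gemm` type from the original post and assumes the
// SM90 stream-K scheduler exposes these fields (see the stream-K examples and
// PersistentTileSchedulerSm90StreamKParams for your CUTLASS version).
using StreamKParams =
    cutlass::gemm::kernel::detail::PersistentTileSchedulerSm90StreamKParams;

typename Gemm::Arguments arguments{};
// ... fill mode, problem_shape, mainloop, and epilogue as in the example above ...

// Option 1: force a split-K style decomposition with a fixed split count.
arguments.scheduler.splits = 4;  // hypothetical value; tune per problem shape
arguments.scheduler.decomposition_mode = StreamKParams::DecompositionMode::SplitK;

// Option 2: keep stream-K but allow a non-deterministic reduction order,
// which may improve performance at the cost of bitwise reproducibility.
arguments.scheduler.reduction_mode = StreamKParams::ReductionMode::Nondeterministic;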

jackkosaian avatar Feb 04 '25 11:02 jackkosaian

Thanks! It works for me now.

Hongbosherlock avatar Feb 04 '25 12:02 Hongbosherlock

Hi @jackkosaian, I observed in Nsight that using stream-K introduces a Memset op, which results in a lot of gaps between GEMM kernels in CUDA graph mode.

Could you please explain why this happens? Is there any way to optimize it?

Thanks for your help!

[Screenshot]

We can see gaps between GEMM kernels. [Screenshot]

Hongbosherlock avatar Feb 10 '25 09:02 Hongbosherlock

I guess it's related to this code:

// From struct PersistentTileSchedulerSm90StreamKParams

      if (barrier_workspace_size > 0) {
        if (workspace == nullptr) {
          return Status::kErrorWorkspaceNull;
        }

        // Only the barrier workspace needs to be cleared for stream-K.
        // Barrier workspace follows reduction workspace.
        uint8_t* barrier_workspace = reinterpret_cast<uint8_t*>(workspace) + reduction_workspace_size;
        return zero_workspace(static_cast<void*>(barrier_workspace), barrier_workspace_size, stream, cuda_adapter);
      }

then

https://github.com/NVIDIA/cutlass/blob/833f6990e031b48b4cd2fcf55e0849c51ef6bac2/include/cutlass/workspace.h#L69-L73
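For context, the zero_workspace call at those lines boils down to an asynchronous memset of the barrier region of the workspace, roughly like the following paraphrased sketch (not the exact CUTLASS code):

#include <cstddef>
#include <cstdint>
#include <cuda_runtime.h>

// Paraphrased sketch of the scheduler's workspace clearing (not the exact
// CUTLASS code): the barrier counters sit right after the reduction workspace,
// and zeroing them is the Memset that shows up in the Nsight timeline.
cudaError_t zero_barrier_workspace(void* workspace,
                                   size_t reduction_workspace_size,
                                   size_t barrier_workspace_size,
                                   cudaStream_t stream) {
  uint8_t* barrier_workspace =
      reinterpret_cast<uint8_t*>(workspace) + reduction_workspace_size;
  // Issued asynchronously on the provided stream, so in CUDA graph mode it is
  // captured as a Memset node before each GEMM launch.
  return cudaMemsetAsync(barrier_workspace, 0, barrier_workspace_size, stream);
}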

Is this necessary, and how can it be optimized? @jackkosaian

Hongbosherlock avatar Feb 10 '25 12:02 Hongbosherlock

Yes, the memset is necessary. Stream-K uses counters in global memory to determine the order in which CTAs can accumulate their partial results. These counters need to be initialized to zero before each invocation of the kernel.
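In practice, this is why the diff above moves gemm.initialize(arguments, workspace.get()) inside the loop. A minimal sketch of the host-side pattern, reusing the names from the example code (the allocation helper comes from cutlass/util/device_memory.h):

// Sketch reusing `Gemm`, `arguments`, and CUTLASS_CHECK from the example code.
// initialize() re-zeroes the stream-K barrier workspace, which is the Memset
// visible before each kernel launch.
Gemm gemm;
size_t workspace_size = Gemm::get_workspace_size(arguments);
cutlass::device_memory::allocation<uint8_t> workspace(workspace_size);

CUTLASS_CHECK(gemm.can_implement(arguments));

int iterations = 10;  // number of profiling iterations
for (int iter = 0; iter < iterations; ++iter) {
  // Skipping this between launches would leave stale barrier counters.
  CUTLASS_CHECK(gemm.initialize(arguments, workspace.get()));
  CUTLASS_CHECK(gemm.run());
}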

jackkosaian avatar Feb 10 '25 15:02 jackkosaian

This issue has been labeled inactive-30d due to no recent activity in the past 30 days. Please close this issue if no further response or action is needed. Otherwise, please respond with a comment indicating any updates or changes to the original issue and/or confirm this issue still needs to be addressed. This issue will be labeled inactive-90d if there is no activity in the next 60 days.

github-actions[bot] avatar Mar 12 '25 16:03 github-actions[bot]

This issue has been labeled inactive-90d due to no recent activity in the past 90 days. Please close this issue if no further response or action is needed. Otherwise, please respond with a comment indicating any updates or changes to the original issue and/or confirm this issue still needs to be addressed.

github-actions[bot] avatar Jun 10 '25 16:06 github-actions[bot]