KernelIntrinsics
The goal is to allow kernels to be written without relying on the KernelAbstractions macros
cc: @maleadt @pxl-th
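For context, the sketch below contrasts today's macro-based KernelAbstractions kernel with what a macro-free, intrinsics-style kernel could look like. The `KernelIntrinsics.get_global_index` name and the explicit bounds check are illustrative assumptions, not the final API:

```julia
using KernelAbstractions

# Today: the macro-based KernelAbstractions DSL.
@kernel function saxpy!(Y, a, @Const(X))
    I = @index(Global)
    @inbounds Y[I] = a * X[I] + Y[I]
end

# Sketch: a macro-free, intrinsics-style equivalent. The intrinsic name
# and module are hypothetical; the index guard is now explicit.
function saxpy_intrinsics!(Y, a, X)
    i = KernelIntrinsics.get_global_index()  # hypothetical intrinsic
    if i <= length(Y)                        # explicit, no implicit validindex
        @inbounds Y[i] = a * X[i] + Y[i]
    end
    return nothing
end
```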
Codecov Report
:x: Patch coverage is 0% with 18 lines in your changes missing coverage. Please review.
:white_check_mark: Project coverage is 0.00%. Comparing base (9741962) to head (8ea6ad2).
:warning: Report is 37 commits behind head on main.
| Files with missing lines | Patch % | Lines |
|---|---|---|
| src/KernelAbstractions.jl | 0.00% | 12 Missing :warning: |
| src/pocl/backend.jl | 0.00% | 6 Missing :warning: |
```
@@            Coverage Diff            @@
##            main    #562      +/-   ##
========================================
  Coverage   0.00%   0.00%
========================================
  Files         21      21
  Lines       1575    1581      +6
========================================
- Misses      1575    1581      +6
```
Benchmark Results
| Benchmark | main | 8ea6ad2457a1cf... | main/8ea6ad2457a1cf... |
|---|---|---|---|
| saxpy/default/Float16/1024 | 0.0496 ± 0.026 ms | 0.0612 ± 0.026 ms | 0.811 |
| saxpy/default/Float16/1048576 | 0.896 ± 0.024 ms | 0.899 ± 0.023 ms | 0.997 |
| saxpy/default/Float16/16384 | 0.0639 ± 0.028 ms | 0.0658 ± 0.028 ms | 0.971 |
| saxpy/default/Float16/2048 | 0.05 ± 0.023 ms | 0.0583 ± 0.024 ms | 0.858 |
| saxpy/default/Float16/256 | 0.0603 ± 0.027 ms | 0.064 ± 0.027 ms | 0.943 |
| saxpy/default/Float16/262144 | 0.273 ± 0.026 ms | 0.273 ± 0.026 ms | 0.998 |
| saxpy/default/Float16/32768 | 0.0772 ± 0.028 ms | 0.0779 ± 0.028 ms | 0.991 |
| saxpy/default/Float16/4096 | 0.0619 ± 0.025 ms | 0.0642 ± 0.024 ms | 0.964 |
| saxpy/default/Float16/512 | 0.0597 ± 0.027 ms | 0.0599 ± 0.026 ms | 0.997 |
| saxpy/default/Float16/64 | 0.062 ± 0.027 ms | 0.063 ± 0.026 ms | 0.984 |
| saxpy/default/Float16/65536 | 0.108 ± 0.028 ms | 0.11 ± 0.028 ms | 0.98 |
| saxpy/default/Float32/1024 | 0.0574 ± 0.026 ms | 0.0585 ± 0.026 ms | 0.982 |
| saxpy/default/Float32/1048576 | 0.484 ± 0.035 ms | 0.481 ± 0.031 ms | 1.01 |
| saxpy/default/Float32/16384 | 0.0563 ± 0.026 ms | 0.0576 ± 0.026 ms | 0.978 |
| saxpy/default/Float32/2048 | 0.0502 ± 0.024 ms | 0.0545 ± 0.024 ms | 0.921 |
| saxpy/default/Float32/256 | 0.0572 ± 0.026 ms | 0.0616 ± 0.026 ms | 0.927 |
| saxpy/default/Float32/262144 | 0.167 ± 0.035 ms | 0.168 ± 0.034 ms | 0.994 |
| saxpy/default/Float32/32768 | 0.0622 ± 0.027 ms | 0.0636 ± 0.027 ms | 0.977 |
| saxpy/default/Float32/4096 | 0.0594 ± 0.025 ms | 0.0625 ± 0.025 ms | 0.95 |
| saxpy/default/Float32/512 | 0.0573 ± 0.027 ms | 0.0638 ± 0.026 ms | 0.898 |
| saxpy/default/Float32/64 | 0.0591 ± 0.026 ms | 0.0635 ± 0.026 ms | 0.93 |
| saxpy/default/Float32/65536 | 0.0827 ± 0.029 ms | 0.0821 ± 0.028 ms | 1.01 |
| saxpy/default/Float64/1024 | 0.0509 ± 0.026 ms | 0.0607 ± 0.026 ms | 0.838 |
| saxpy/default/Float64/1048576 | 0.516 ± 0.042 ms | 0.517 ± 0.038 ms | 0.998 |
| saxpy/default/Float64/16384 | 0.0582 ± 0.026 ms | 0.0583 ± 0.026 ms | 1 |
| saxpy/default/Float64/2048 | 0.048 ± 0.023 ms | 0.0519 ± 0.024 ms | 0.925 |
| saxpy/default/Float64/256 | 0.061 ± 0.027 ms | 0.061 ± 0.026 ms | 1 |
| saxpy/default/Float64/262144 | 0.176 ± 0.028 ms | 0.177 ± 0.029 ms | 0.99 |
| saxpy/default/Float64/32768 | 0.0666 ± 0.026 ms | 0.0666 ± 0.026 ms | 1 |
| saxpy/default/Float64/4096 | 0.0537 ± 0.025 ms | 0.0629 ± 0.025 ms | 0.854 |
| saxpy/default/Float64/512 | 0.0551 ± 0.027 ms | 0.0611 ± 0.027 ms | 0.901 |
| saxpy/default/Float64/64 | 0.0566 ± 0.027 ms | 0.0644 ± 0.027 ms | 0.878 |
| saxpy/default/Float64/65536 | 0.0897 ± 0.027 ms | 0.0915 ± 0.027 ms | 0.981 |
| saxpy/static workgroup=(1024,)/Float16/1024 | 0.0482 ± 0.026 ms | 0.0549 ± 0.026 ms | 0.877 |
| saxpy/static workgroup=(1024,)/Float16/1048576 | 0.913 ± 0.027 ms | 0.904 ± 0.028 ms | 1.01 |
| saxpy/static workgroup=(1024,)/Float16/16384 | 0.0626 ± 0.026 ms | 0.0616 ± 0.026 ms | 1.02 |
| saxpy/static workgroup=(1024,)/Float16/2048 | 0.0515 ± 0.024 ms | 0.0577 ± 0.024 ms | 0.892 |
| saxpy/static workgroup=(1024,)/Float16/256 | 0.0594 ± 0.026 ms | 0.0598 ± 0.026 ms | 0.993 |
| saxpy/static workgroup=(1024,)/Float16/262144 | 0.274 ± 0.028 ms | 0.272 ± 0.027 ms | 1.01 |
| saxpy/static workgroup=(1024,)/Float16/32768 | 0.0771 ± 0.026 ms | 0.0761 ± 0.026 ms | 1.01 |
| saxpy/static workgroup=(1024,)/Float16/4096 | 0.0565 ± 0.027 ms | 0.0587 ± 0.026 ms | 0.962 |
| saxpy/static workgroup=(1024,)/Float16/512 | 0.0569 ± 0.026 ms | 0.0586 ± 0.026 ms | 0.971 |
| saxpy/static workgroup=(1024,)/Float16/64 | 0.0592 ± 0.026 ms | 0.062 ± 0.025 ms | 0.956 |
| saxpy/static workgroup=(1024,)/Float16/65536 | 0.108 ± 0.027 ms | 0.106 ± 0.026 ms | 1.02 |
| saxpy/static workgroup=(1024,)/Float32/1024 | 0.0538 ± 0.026 ms | 0.0586 ± 0.027 ms | 0.919 |
| saxpy/static workgroup=(1024,)/Float32/1048576 | 0.472 ± 0.043 ms | 0.472 ± 0.038 ms | 1 |
| saxpy/static workgroup=(1024,)/Float32/16384 | 0.0551 ± 0.025 ms | 0.0561 ± 0.026 ms | 0.98 |
| saxpy/static workgroup=(1024,)/Float32/2048 | 0.0481 ± 0.023 ms | 0.0548 ± 0.023 ms | 0.878 |
| saxpy/static workgroup=(1024,)/Float32/256 | 0.0615 ± 0.026 ms | 0.0634 ± 0.026 ms | 0.97 |
| saxpy/static workgroup=(1024,)/Float32/262144 | 0.162 ± 0.035 ms | 0.164 ± 0.035 ms | 0.986 |
| saxpy/static workgroup=(1024,)/Float32/32768 | 0.0595 ± 0.026 ms | 0.0614 ± 0.026 ms | 0.969 |
| saxpy/static workgroup=(1024,)/Float32/4096 | 0.0584 ± 0.026 ms | 0.0602 ± 0.026 ms | 0.971 |
| saxpy/static workgroup=(1024,)/Float32/512 | 0.0581 ± 0.026 ms | 0.0601 ± 0.026 ms | 0.968 |
| saxpy/static workgroup=(1024,)/Float32/64 | 0.0576 ± 0.026 ms | 0.0599 ± 0.025 ms | 0.961 |
| saxpy/static workgroup=(1024,)/Float32/65536 | 0.0806 ± 0.029 ms | 0.0827 ± 0.028 ms | 0.975 |
| saxpy/static workgroup=(1024,)/Float64/1024 | 0.057 ± 0.026 ms | 0.0536 ± 0.026 ms | 1.06 |
| saxpy/static workgroup=(1024,)/Float64/1048576 | 0.501 ± 0.04 ms | 0.517 ± 0.041 ms | 0.969 |
| saxpy/static workgroup=(1024,)/Float64/16384 | 0.0572 ± 0.026 ms | 0.0595 ± 0.025 ms | 0.962 |
| saxpy/static workgroup=(1024,)/Float64/2048 | 0.0486 ± 0.023 ms | 0.0539 ± 0.023 ms | 0.901 |
| saxpy/static workgroup=(1024,)/Float64/256 | 0.0575 ± 0.026 ms | 0.0608 ± 0.026 ms | 0.946 |
| saxpy/static workgroup=(1024,)/Float64/262144 | 0.175 ± 0.031 ms | 0.174 ± 0.029 ms | 1.01 |
| saxpy/static workgroup=(1024,)/Float64/32768 | 0.0665 ± 0.026 ms | 0.0665 ± 0.025 ms | 1 |
| saxpy/static workgroup=(1024,)/Float64/4096 | 0.0546 ± 0.026 ms | 0.0589 ± 0.026 ms | 0.927 |
| saxpy/static workgroup=(1024,)/Float64/512 | 0.0527 ± 0.026 ms | 0.0593 ± 0.026 ms | 0.889 |
| saxpy/static workgroup=(1024,)/Float64/64 | 0.0584 ± 0.026 ms | 0.0585 ± 0.026 ms | 0.998 |
| saxpy/static workgroup=(1024,)/Float64/65536 | 0.0883 ± 0.027 ms | 0.0888 ± 0.027 ms | 0.994 |
| time_to_load | 1.19 ± 0.003 s | 1.15 ± 0.01 s | 1.04 |
Benchmark Plots
A plot of the benchmark results has been uploaded as an artifact to the workflow run for this PR. Go to "Actions" -> "Benchmark a pull request" -> [the most recent run] -> "Artifacts" (at the bottom).
So the idea is to decouple the back-ends from KA.jl, instead implementing KernelIntrinsics.jl? What's the advantage; do you envision packages other than KA.jl to build their kernel DSL on top of KernelIntrinsics.jl?
I am very unsure whether I can pull off #558, and as such I want to remove the implicit `if validindex(ctx)` check
and make #559 possible.
My goal is to allow for a graceful transition from the macro DSL to something more similar to "just" OpenCL, removing the extra overhead introduced by the arbitrary dimensions, etc.
This would allow us to write performance critical kernels directly and solve the issue of how to write kernels that use barriers correctly.
If I can figure out #558, the macro-based DSL can stick around; otherwise I will encourage folks to move their kernels to KernelIntrinsics.
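To illustrate the barrier issue: under the implicit `validindex` guard, out-of-range work-items skip the kernel body entirely, so a barrier in the body is not reached by all work-items in the group, which is undefined behavior. With explicit indexing, every work-item reaches the barrier. The intrinsic names in this sketch are hypothetical:

```julia
# Hypothetical intrinsics; names are illustrative, not the final API.
function shift_block!(A, smem)
    lid = KernelIntrinsics.get_local_index()
    gid = KernelIntrinsics.get_global_index()
    # Every work-item executes up to this point, so all of them reach the
    # barrier, even those whose global index is out of range.
    smem[lid] = gid <= length(A) ? A[gid] : zero(eltype(A))
    KernelIntrinsics.barrier()
    if gid <= length(A)  # guard only the out-of-range memory accesses
        A[gid] = smem[mod1(lid + 1, KernelIntrinsics.get_local_size())]
    end
    return nothing
end
```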
Will KA/KI still be a greatest common denominator of the GPU backends, or are you looking to introduce optional intrinsics? How will the groupreduce API do in terms of portability?
> Will KA/KI still be a greatest common denominator of the GPU backends
The intrinsics proposed here are the greatest common denominator. I could see us adding some more intrinsics for reductions, but that is TBD.
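For reference, a groupreduce can be built from just such common-denominator intrinsics (local memory, local index, barrier), which is what would make it portable across backends. The tree reduction below is a sketch with hypothetical intrinsic names, and it assumes a power-of-two workgroup size:

```julia
# Sketch of a workgroup-level tree reduction using only common-denominator
# intrinsics (hypothetical names). Assumes the workgroup size is a power
# of two and that smem is workgroup-local memory of that size.
function groupreduce(op, val, smem)
    lid = KernelIntrinsics.get_local_index()
    smem[lid] = val
    KernelIntrinsics.barrier()
    stride = KernelIntrinsics.get_local_size() ÷ 2
    while stride > 0
        if lid <= stride
            smem[lid] = op(smem[lid], smem[lid + stride])
        end
        KernelIntrinsics.barrier()  # all work-items must reach this
        stride ÷= 2
    end
    return smem[1]  # the group-wide result
end
```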
Replaced by #635