Implementation of LU decomposition using data flow tasks
This is a simple parallelization of the LU decomposition using the DataFlowTasks library. To see the DAG and the performance plot, run `lu_test_3`.
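For reference, here is a minimal sketch of what a task-based tiled LU can look like with DataFlowTasks' `@dspawn` and its `@R`/`@W`/`@RW` access annotations. The tiling helper, the `NoPivot` choice, and the task labels are illustrative, not the exact code in this PR:

```julia
using LinearAlgebra, DataFlowTasks

# Right-looking tiled LU (no pivoting) on square tiles of size `ts`.
# Dependencies between tasks are declared with @R/@W/@RW; DataFlowTasks
# infers the DAG from the memory overlap of the annotated views.
function tiled_lu!(A::Matrix, ts::Int)
    n = size(A, 1) ÷ ts
    tile(i, j) = view(A, (i-1)*ts+1:i*ts, (j-1)*ts+1:j*ts)
    for k in 1:n
        @dspawn lu!(@RW(tile(k, k)), NoPivot()) label = "lu"
        for j in k+1:n   # row panel: A[k,j] ← L[k,k] \ A[k,j]
            @dspawn ldiv!(UnitLowerTriangular(@R(tile(k, k))), @RW(tile(k, j))) label = "ldiv"
        end
        for i in k+1:n   # column panel: A[i,k] ← A[i,k] / U[k,k]
            @dspawn rdiv!(@RW(tile(i, k)), UpperTriangular(@R(tile(k, k)))) label = "rdiv"
        end
        for j in k+1:n, i in k+1:n   # Schur complement: A[i,j] -= A[i,k] * A[k,j]
            @dspawn mul!(@RW(tile(i, j)), @R(tile(i, k)), @R(tile(k, j)), -1, 1) label = "mul"
        end
    end
    # A task reading all of A depends on every tile update; fetching it
    # therefore acts as a barrier before returning.
    final = @dspawn @R(A) label = "result"
    return fetch(final)
end
```

The final task reads the whole matrix, so fetching it waits for every tile update; this is the barrier pattern used in the DataFlowTasks tiled-Cholesky example.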
Codecov Report
:x: Patch coverage is 98.30508% with 1 line in your changes missing coverage. Please review.
:white_check_mark: Project coverage is 76.47%. Comparing base (fbebdb2) to head (4bfb148).
:warning: Report is 32 commits behind head on main.
| Files with missing lines | Patch % | Lines |
|---|---|---|
| src/hmatrix.jl | 96.42% | 1 Missing :warning: |
Additional details and impacted files
```diff
@@            Coverage Diff             @@
##             main      #65      +/-   ##
==========================================
+ Coverage   75.64%   76.47%   +0.83%
==========================================
  Files          14       14
  Lines        1466     1522      +56
==========================================
+ Hits         1109     1164      +55
- Misses        357      358       +1
```
I see that we are getting the same error, but with a missing key, when AirSpeedVelocity tries to generate the plots. Do you think it is a bug on their side?
Here is a performance plot for this kind of implementation. As can be seen, we should try to split the computation of `hmul`, `rdiv`, and `ldiv`. I used only two threads because that is the maximum number of tasks that can run at the same time.
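For context, a trace like this can be collected with DataFlowTasks' logger and its Makie plotting extension. This is a sketch based on the `tiled_lu!` above, not the exact benchmark script, and the `categories` have to match the task labels:

```julia
# Start Julia with two threads, e.g. `julia -t 2`.
using DataFlowTasks, CairoMakie

A = rand(2048, 2048)
# Record task start/stop times and dependencies while the factorization runs.
log_info = DataFlowTasks.@log tiled_lu!(A, 256)
# Parallel trace / profiling plot, with tasks grouped by label.
plot(log_info; categories = ["lu", "ldiv", "rdiv", "mul"])
```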
Here is the DAG of the HLU:

![HLU DAG](https://github.com/IntegralEquations/HMatrices.jl/assets/72526361/ee138b14-53fa-42ad-af1f-c2798c9ffa89)
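Such a DAG can be rendered from the same log through the GraphViz extension. Again a sketch; `savedag` comes from the DataFlowTasks docs and may depend on the package version:

```julia
using DataFlowTasks, GraphViz

log_info = DataFlowTasks.@log tiled_lu!(rand(1024, 1024), 256)
# Build the task dependency graph from the recorded log and save it.
dag = GraphViz.Graph(log_info)
DataFlowTasks.savedag("hlu_dag.svg", dag)
```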
> I see that we are getting the same error, but with a missing key, when AirSpeedVelocity tries to generate the plots. Do you think it is a bug on their side?
I think it is a bug on their side, because I have never seen this problem when running benchmarks locally or running workflows in the Docker container. I will check it carefully again and try to find a solution.