CompatHelper: bump compat for ContinuumArrays to 0.16, (keep existing compat)
This pull request changes the compat entry for the ContinuumArrays package from `0.10, 0.11, 0.12` to `0.10, 0.11, 0.12, 0.16`.
This keeps the compat entries for earlier versions.
Note: I have not tested your package with this new compat entry. It is your responsibility to make sure that your package tests pass before you merge this pull request.
Closes #69, #71, #72 & #73
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Comparison is base (`4b817a5`) 67.98% compared to head (`a1b977f`) 67.79%.
Additional details and impacted files
```
@@            Coverage Diff             @@
##           master     #70      +/-   ##
==========================================
- Coverage   67.98%   67.79%   -0.20%
==========================================
  Files          24       24
  Lines        1690     1621      -69
==========================================
- Hits         1149     1099      -50
+ Misses        541      522      -19
```
@dlfivefifty I finally got around to fixing the compat bounds. A lot of tests fail (related to `diff` and `grammatrix`). What do I need to do?
The changes are to avoid relying on `@simplify`, which makes compile times very slow. Some examples:
- Overload `grammatrix(::MyBasis)` instead of `@simplify *(::Adjoint{<:Any,<:MyBasis}, ::MyBasis)` for creating the mass/gram matrix
- Overload `diff(::MyBasis; dims=1)` instead of `@simplify *(::Derivative, ::MyBasis)` for creating the derivative
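A minimal sketch of what these overloads might look like for a hypothetical basis type (`MyBasis`, its field, and the returned matrix entries are placeholders, not part of any real package; only the `grammatrix`/`diff` overload pattern follows the comment above):

```julia
using ContinuumArrays, LinearAlgebra

# Hypothetical basis type with n functions on 0..1; purely illustrative.
struct MyBasis{T} <: Basis{T}
    n::Int
end

Base.axes(B::MyBasis{T}) where T =
    (Inclusion(zero(T)..one(T)), Base.OneTo(B.n))

# Instead of `@simplify *(::Adjoint{<:Any,<:MyBasis}, ::MyBasis)`,
# return the mass/gram matrix directly.
function ContinuumArrays.grammatrix(B::MyBasis{T}) where T
    Diagonal(ones(T, B.n))  # placeholder entries
end

# Instead of `@simplify *(::Derivative, ::MyBasis)`,
# return the derivative expanded in a basis.
function Base.diff(B::MyBasis{T}; dims=1) where T
    # placeholder coefficients: a skew tridiagonal stencil
    B * Tridiagonal(-ones(T, B.n-1), zeros(T, B.n), ones(T, B.n-1))
end
```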
@dlfivefifty I seem to be hitting a few snags:
- How do I implement the Laplacian `A'D'D*B`? I have specific formulae for the Laplacian, i.e. it is not the weak Laplacian formed from the multiplication of two gradient operators. Note that I need to consider both the left and right bases, since I explicitly take the individual restrictions into account.
- How do I implement any other operator that mixes derivative and potential operators, i.e. `A'D'V*D*B`, where `V <: QuasiDiagonal`?
In short, I need all arguments to the multiplication passed to my materialization routine. Incidentally, some of the operators implemented through the `@materialize` macro still work, e.g. https://github.com/JuliaApproximation/CompactBases.jl/blob/4b817a59e052f4480ae6db5ae037954ae889a848/src/fd_derivatives.jl#L222 but not https://github.com/JuliaApproximation/CompactBases.jl/blob/4b817a59e052f4480ae6db5ae037954ae889a848/src/fd_derivatives.jl#L253
Quick test example:
```julia
julia> N = 10
10

julia> ρ = 1.0
1.0

julia> R = StaggeredFiniteDifferences(N, ρ)
Staggered finite differences basis {Float64} on 0.0 .. 10.5 with 10 points spaced by ρ = 1.0

julia> r = axes(R, 1)
Inclusion(0.0 .. 10.5)

julia> D = Derivative(r)
Derivative(Inclusion(0.0 .. 10.5))

julia> ∇ = R'D*R
10×10 LinearAlgebra.Tridiagonal{Float64, Vector{Float64}}:
  0.0        0.666667    ⋅          ⋅          ⋅          ⋅          ⋅          ⋅          ⋅         ⋅
 -0.666667   0.0        0.533333    ⋅          ⋅          ⋅          ⋅          ⋅          ⋅         ⋅
   ⋅        -0.533333   0.0        0.514286    ⋅          ⋅          ⋅          ⋅          ⋅         ⋅
   ⋅          ⋅        -0.514286   0.0        0.507937    ⋅          ⋅          ⋅          ⋅         ⋅
   ⋅          ⋅          ⋅        -0.507937   0.0        0.505051    ⋅          ⋅          ⋅         ⋅
   ⋅          ⋅          ⋅          ⋅        -0.505051   0.0        0.503497    ⋅          ⋅         ⋅
   ⋅          ⋅          ⋅          ⋅          ⋅        -0.503497   0.0        0.502564    ⋅         ⋅
   ⋅          ⋅          ⋅          ⋅          ⋅          ⋅        -0.502564   0.0        0.501961   ⋅
   ⋅          ⋅          ⋅          ⋅          ⋅          ⋅          ⋅        -0.501961   0.0       0.501548
   ⋅          ⋅          ⋅          ⋅          ⋅          ⋅          ⋅          ⋅        -0.501548  0.0

julia> ∇² = R'D'D*R
ERROR: Overload diff(::StaggeredFiniteDifferences{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}})
Stacktrace:
 [1] error(s::String)
   @ Base ./error.jl:35
 [2] diff_layout(#unused#::ContinuumArrays.BasisLayout, Vm::StaggeredFiniteDifferences{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, dims::Int64)
   @ ContinuumArrays ~/.julia/packages/ContinuumArrays/ZkEzA/src/bases/bases.jl:602
 [3] #diff#89
   @ ~/.julia/packages/QuasiArrays/5wpon/src/calculus.jl:50 [inlined]
 [4] diff
   @ ~/.julia/packages/QuasiArrays/5wpon/src/calculus.jl:50 [inlined]
 [5] mul
   @ ~/.julia/packages/ContinuumArrays/ZkEzA/src/operators.jl:139 [inlined]
 [6] mul
   @ ~/.julia/packages/ContinuumArrays/ZkEzA/src/operators.jl:42 [inlined]
 [7] *(A::QuasiArrays.QuasiAdjoint{Float64, StaggeredFiniteDifferences{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}}, B::QuasiArrays.QuasiAdjoint{Float64, Derivative{Float64, IntervalSets.ClosedInterval{Float64}}})
   @ QuasiArrays ~/.julia/packages/QuasiArrays/5wpon/src/matmul.jl:23
 [8] *(::QuasiArrays.QuasiAdjoint{Float64, StaggeredFiniteDifferences{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}}, ::QuasiArrays.QuasiAdjoint{Float64, Derivative{Float64, IntervalSets.ClosedInterval{Float64}}}, ::Derivative{Float64, IntervalSets.ClosedInterval{Float64}})
   @ Base ./operators.jl:578
 [9] top-level scope
   @ REPL[35]:1
```
Probably the best approach is to add a `DerivativeStaggeredFiniteDifferences` to represent `D*R`.
But then I would need such a proxy type for each basis, and also make it restriction-aware? That does not seem like a scalable solution.
Just have `diff` return `ApplyQuasiArray(*, D, P)`.
You might need to set `simplifiable` to be false.
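A rough sketch of that suggestion for the staggered finite-difference basis. Note the `simplifiable` method signature here is an assumption about the trait machinery and should be checked against the ContinuumArrays/QuasiArrays source; the `diff` overload just follows the lazy-product idea above:

```julia
using ContinuumArrays, QuasiArrays

# Have `diff` return a lazy product D*R instead of materializing it,
# so that later products like R'D'D*R still see all the factors.
function Base.diff(R::StaggeredFiniteDifferences; dims=1)
    D = Derivative(axes(R, 1))
    ApplyQuasiArray(*, D, R)
end

# Mark the product as non-simplifiable so the lazy representation
# survives instead of being eagerly rewritten.
# (Assumed signature; verify against the package before relying on it.)
ContinuumArrays.simplifiable(::typeof(*), ::Derivative, ::StaggeredFiniteDifferences) =
    Val(false)
```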