Support other abstract array backends
The proposal is to template out the coefficient storage:
struct Fun{S,T,V}
    space::S
    coefficients::V
end
This will allow, for example, Fun{Chebyshev,Float64,GPUArray{Float64}}. It will also allow views of coefficients and BlockVector coefficient storage.
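For instance (a hypothetical usage sketch, assuming the new outer constructors accept any AbstractVector), the coefficients could be a view into a larger array without any copying:
using ApproxFun
c = rand(100)
f = Fun(Chebyshev(), view(c, 1:10))   # a Fun{Chebyshev, Float64, <:SubArray}: shares storage with c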
Big effort, no doubt, but this would be great!
This is implemented now in development (0.6 only)
This is so cool! Now a huge Fun can be initialized with O(1) storage using a range:
julia> f = Fun(Chebyshev(),1.0:1_000_000_000)
Fun(Chebyshev(【-1.0,1.0】),1.0:1.0:1.0e9)
julia> Base.summarysize(f)
64
julia> f(0.5)
-5.000000015e8
It would be interesting to create Weierstrass's function via a SparseVector. This might favour specializing a "direct" evaluation over Clenshaw summation since the nonzero coefficients are exponentially spaced.
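As a sketch (assuming Fun can take a sparse coefficient vector), the Weierstrass-type series sum_n 2^(-n) T_{2^n}(x), i.e. sum_n 2^(-n) cos(2^n θ) with x = cos θ, has nonzero Chebyshev coefficients only at indices 2^n + 1:
using SparseArrays   # sparsevec (in Base on Julia 0.6)
nmax = 20
c = sparsevec([2^n + 1 for n in 0:nmax], [2.0^-n for n in 0:nmax])
# f = Fun(Chebyshev(), c); f(0.3)   # only nmax+1 stored coefficients out of 2^nmax + 1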
Another extension would be an array whose entries are defined by a function, which would be truly infinite-dimensional!
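A minimal sketch of that idea (a hypothetical type, not part of ApproxFun): a vector whose k-th entry is computed from a function on demand and never stored.
struct FunctionVector{T,F} <: AbstractVector{T}
    f::F
end
FunctionVector(f) = FunctionVector{typeof(f(1)),typeof(f)}(f)
Base.size(::FunctionVector) = (typemax(Int),)          # stand-in for ∞
Base.getindex(v::FunctionVector, k::Int) = v.f(k)

decay = FunctionVector(k -> 1.0/k^2);
decay[10^9]   # 1.0e-18, computed on demand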
Another extension would be an array whose entries are defined by a function, which would be truly infinite-dimensional!
I was thinking about doing Brownian motion this way, where the length can be dynamically grown with random samples. The catch is there's not much you could do with it: there's no way to know where to start Clenshaw, or to know when \ has converged.
Though perhaps it's possible to sample the tail by first sampling a uniform bound, and then sampling each coefficient subject to knowing the uniform bound.
Having uniform bounds on the tails (and on the tails scaled by k^p) would allow most algorithms to be reimplemented in infinite dimensions subject to a tolerance.
Even cooler would be allowing distributions, i.e. infinitely many coefficients that don't even decay.
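As a sketch of the tolerance idea (a hypothetical helper, assuming a known tail bound |c_k| ≤ C*k^(-p) with p > 1): the integral estimate sum_{k>N} C*k^(-p) ≤ C*N^(1-p)/(p-1) gives a truncation length directly.
# choose N so the neglected tail is below tol, given |c_k| ≤ C*k^(-p)
function truncation_length(C, p, tol)
    p > 1 || throw(ArgumentError("need p > 1 for a summable tail"))
    ceil(Int, (C/(tol*(p - 1)))^(1/(p - 1)))
end

truncation_length(1.0, 2.0, 1e-8)   # 100000000: slow decay needs ~10^8 coefficients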
The catch is there's not much you could do with it: there's no way to know where to start Clenshaw
But forward recurrence is only a constant factor more expensive.
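For example, here is a sketch of forward-recurrence evaluation of a Chebyshev series using T_{k+1}(x) = 2x*T_k(x) - T_{k-1}(x); unlike Clenshaw it runs from the lowest coefficient upward, so it does not need to know the length in advance and can stop once a tolerance is met.
function forward_cheb(c::AbstractVector, x)
    isempty(c) && return zero(x)
    Tkm1, Tk = one(x), x                 # T_0(x), T_1(x)
    s = c[1]*Tkm1
    length(c) ≥ 2 && (s += c[2]*Tk)
    for k = 3:length(c)
        Tkm1, Tk = Tk, 2x*Tk - Tkm1      # advance the recurrence
        s += c[k]*Tk
    end
    s
end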
I just checked and directly calling the inner constructor works:
julia> sp = Chebyshev()⊗Chebyshev()
Chebyshev(【-1.0,1.0】)⊗Chebyshev(【-1.0,1.0】)
julia> coeff = rand(3,3)
3×3 Array{Float64,2}:
0.435006 0.466824 0.950714
0.616575 0.299437 0.450425
0.0160957 0.710504 0.970954
julia> Fun{typeof(sp),eltype(coeff),typeof(coeff)}(sp, coeff)
Fun(Chebyshev(【-1.0,1.0】)⊗Chebyshev(【-1.0,1.0】),[0.435006 0.466824 0.950714; 0.616575 0.299437 0.450425; 0.0160957 0.710504 0.970954])
Any conceptual reason that a Fun couldn't be the universal function container, usurping LowRankFun and ProductFun?
I don't know if anything is lost: in the case of a ProductFun in spherical harmonics, where the basis is separable but not a tensor product (i.e. the associated Legendre functions differ depending on both quantum numbers), it's still reasonable to store the coefficients in a matrix. All the fast transforms of ProductFuns are essentially compatible with matrices. For a LowRankFun, the current storage is a pair of vectors of Funs, but storing a low-rank matrix would de-clutter all those extra pointers.
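For concreteness, a minimal low-rank storage type could look like the following (a hypothetical sketch, not an existing ApproxFun type): two factor matrices instead of a pair of Vector{Fun}s.
# rank-r coefficient storage; A[k,j] = sum_i U[k,i]*V[j,i]
struct LowRankCoefficients{T} <: AbstractMatrix{T}
    U::Matrix{T}   # m × r left factor
    V::Matrix{T}   # n × r right factor
end
Base.size(A::LowRankCoefficients) = (size(A.U, 1), size(A.V, 1))
Base.getindex(A::LowRankCoefficients, k::Int, j::Int) =
    sum(A.U[k, i]*A.V[j, i] for i in 1:size(A.U, 2))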
ProductFun, sure. LowRankFun, probably not: it's not linear.
Also, ProductFun is different from storing in a Matrix. A Matrix won't work because even if the coefficients are an Array, they are treated like a Vector for padding.
Also, ProductFun is different from storing in a Matrix. A Matrix won't work because even if the coefficients are an Array, they are treated like a Vector for padding.
But aren't those particularities specific to the current implementation?
ProductFun, sure. LowRankFun, probably not: it's not linear.
Isn't the addition of two Funs still linear? It's just that + delegates to + for low-rank matrices. (Of course, it's a design decision whether or not to compress.)
Yes. But then the current implementation would have to be changed.
I don't know if I understand correctly: the example I created above, of a Fun{Chebyshev(【-1.0,1.0】)⊗Chebyshev(【-1.0,1.0】), Matrix{Float64}}, is not accessible via the present outer constructors. So why would anything already implemented need to change? Wouldn't it simply require dispatching to new methods?
The issue is with the padding. But I think what we want is to move that to the coefficient storage type. And depending on the order you want the coefficients stored in memory, one can use different storage types.
So we would have the following:
import Base: length, size, eltype, getindex, setindex!, +   # extend Base; ∞ and pad! as in ApproxFun

abstract type AbstractInfiniteArray{T,p} end

struct InfiniteArray{T,p,VT} <: AbstractInfiniteArray{T,p}
    data::VT
end

InfiniteArray(data::AbstractArray{T,p}) where {T,p} = InfiniteArray{T,p,typeof(data)}(data)

const InfiniteVector{T} = InfiniteArray{T,1}
const InfiniteMatrix{T} = InfiniteArray{T,2}

eltype(::InfiniteArray{T}) where {T} = T
length(A::InfiniteArray) = ∞
size(A::InfiniteVector) = (∞,)
size(A::InfiniteMatrix) = (∞,∞)

function getindex(A::InfiniteArray, k...)
    # entries beyond the stored data are implicitly zero
    checkbounds(Bool, A.data, k...) && return A.data[k...]
    zero(eltype(A))
end

function setindex!(A::InfiniteArray, v, k...)
    pad!(A.data, k...)   # pad the backing array (ApproxFun's pad!) so the index is in bounds
    A.data[k...] = v
end

function +(A::InfiniteVector, B::InfiniteVector)
    n = max(length(A.data), length(B.data))
    ret = InfiniteArray(zeros(promote_type(eltype(A), eltype(B)), n))
    if n == length(A.data)
        ret.data[:] = A.data
        ret.data[1:length(B.data)] .+= B.data
    else
        ret.data[:] = B.data
        ret.data[1:length(A.data)] .+= A.data
    end
    ret
end
Fun(S::Chebyshev, v::AbstractVector) = Fun(S, InfiniteArray(v))
Fun(S::Fourier, v::AbstractVector) = Fun(S, InfiniteBlockArray(v, blocklengths(S)))
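A usage sketch of the above (assuming the definitions compile as written): out-of-range reads give zero, and + takes care of the padding.
a = InfiniteArray([1.0, 2.0, 3.0])
b = InfiniteArray([1.0])
a[10]          # 0.0: entries beyond the stored data are implicitly zero
(a + b).data   # [2.0, 2.0, 3.0]: no manual padding of the shorter vector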
For TensorSpace we would need to support both standard matrix storage and the block matrix storage that stores by polynomial degree, as needed in .
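As a sketch of what degree-graded ("block") storage could mean (hypothetical; the ordering within each anti-diagonal is a design choice): all (k,j) with total degree d = k + j - 2 are stored contiguously, after the d*(d+1)/2 entries of lower degree, so an InfiniteMatrix could wrap a degree-ordered vector via a mapping like
degree_index(k::Int, j::Int) = (d = k + j - 2; d*(d + 1) ÷ 2 + k)

degree_index(1, 1), degree_index(1, 2), degree_index(2, 1), degree_index(1, 3)   # (1, 2, 3, 4)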