
Support other abstract array backends

Open dlfivefifty opened this issue 8 years ago • 12 comments

The proposal is to template out the coefficient storage:

struct Fun{S,T,V}
    space::S          # the function space, e.g. Chebyshev
    coefficients::V   # coefficient storage with eltype T, e.g. Vector{T}, GPUArray{T}, a view, ...
end

This will allow, for example, Fun{Chebyshev,Float64,GPUArray{Float64}}. It will also allow the coefficients to be views or a BlockVector.
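
For instance, once the constructor accepts an arbitrary AbstractVector of coefficients, something like the following sketch should be possible (hypothetical usage, not run here):

c = rand(1000)
f = Fun(Chebyshev(), view(c, 1:100))   # coefficients held as a SubArray, no copy
f(0.5)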

dlfivefifty avatar May 08 '17 05:05 dlfivefifty

Big effort, no doubt, but this would be great!

daanhb avatar May 08 '17 06:05 daanhb

This is now implemented in development (0.6 only)

dlfivefifty avatar May 22 '17 00:05 dlfivefifty

This is so cool! Now a huge Fun can be initialized with O(1) storage using a range:

julia> f = Fun(Chebyshev(),1.0:1_000_000_000)
Fun(Chebyshev(【-1.0,1.0】),1.0:1.0:1.0e9)

julia> Base.summarysize(f)
64

julia> f(0.5)
-5.000000015e8

It would be interesting to create Weierstrass's function via a SparseVector. This might favour specializing a "direct" evaluation over Clenshaw summation since the nonzero coefficients are exponentially spaced.
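
For example, a Weierstrass-type function Σₙ aⁿ T_{bⁿ}(x) has nonzero Chebyshev coefficients only at exponentially spaced indices, so a SparseVector captures it exactly. A rough sketch (assuming Fun(space, coefficients) accepts any AbstractVector):

using SparseArrays

a, b, N = 0.5, 3, 10
inds = [b^n + 1 for n in 0:N]      # 1-based index of the T_{bⁿ} coefficient
vals = [a^n for n in 0:N]          # coefficient values aⁿ
c = sparsevec(inds, vals, b^N + 1)
f = Fun(Chebyshev(), c)

# "direct" evaluation over the nonzeros only, using T_m(x) = cos(m*acos(x)) on [-1,1]
w(x) = sum(v*cos((i - 1)*acos(x)) for (i, v) in zip(inds, vals))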

Another extension would be an array whose entries are defined by a function, which would be truly infinite-dimensional!
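
A toy sketch (hypothetical type, not part of ApproxFun) of an array whose entries are computed on demand; the real question is what "length" should mean:

struct FunctionVector{T,F} <: AbstractVector{T}
    f::F     # k -> k-th coefficient
    n::Int   # truncation used for size(); conceptually the vector is infinite
end
FunctionVector(f, n) = FunctionVector{typeof(f(1)),typeof(f)}(f, n)

Base.size(v::FunctionVector) = (v.n,)
Base.getindex(v::FunctionVector, k::Int) = v.f(k)

v = FunctionVector(k -> 1/k^2, 1_000)   # coefficients 1/k², never stored
v[10]                                   # 0.01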

MikaelSlevinsky avatar May 31 '17 16:05 MikaelSlevinsky

Another extension would be an array whose entries are defined by a function, which would be truly infinite-dimensional!

I was thinking about doing Brownian motion this way, where the length can be dynamically grown with random samples. The catch is there's not much you could do with it: there's no way to know where to start Clenshaw, or to know when \ has converged.

Though perhaps it's possible to sample the tail by first sampling a uniform bound, and then sampling each coefficient subject to knowing the uniform bound.

Having uniform bounds on the tails (and the tails scaled by k^p) would allow most algorithms to be reimplemented in infinite dimensions, subject to a tolerance.

Even cooler would be allowing distributions, i.e. infinite coefficient sequences that don't even decay.

dlfivefifty avatar May 31 '17 18:05 dlfivefifty

The catch is there's not much you could do with it: there's no way to know where to start Clenshaw

But forward recurrence is only a constant factor more expensive.
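
For reference, a bare-bones forward-recurrence evaluation of a Chebyshev expansion (a sketch, not ApproxFun's implementation), which touches coefficients only in increasing order and so works with sparse or lazily defined storage:

function chebeval_forward(c::AbstractVector, x)
    # evaluate Σₖ c[k] T_{k-1}(x) via the recurrence T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x)
    isempty(c) && return zero(x)
    Tkm1, Tk = one(x), x
    s = c[1]*Tkm1
    length(c) ≥ 2 && (s += c[2]*Tk)
    for k in 3:length(c)
        Tkm1, Tk = Tk, 2x*Tk - Tkm1
        s += c[k]*Tk
    end
    s
end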

MikaelSlevinsky avatar Jun 01 '17 19:06 MikaelSlevinsky

I just checked and directly calling the inner constructor works:

julia> sp = Chebyshev()⊗Chebyshev()
Chebyshev(【-1.0,1.0】)⊗Chebyshev(【-1.0,1.0】)

julia> coeff = rand(3,3)
3×3 Array{Float64,2}:
 0.435006   0.466824  0.950714
 0.616575   0.299437  0.450425
 0.0160957  0.710504  0.970954

julia> Fun{typeof(sp),eltype(coeff),typeof(coeff)}(sp, coeff)
Fun(Chebyshev(【-1.0,1.0】)⊗Chebyshev(【-1.0,1.0】),[0.435006 0.466824 0.950714; 0.616575 0.299437 0.450425; 0.0160957 0.710504 0.970954])

Any conceptual reason that a Fun couldn't be the universal function container, usurping LowRankFun and ProductFun?

I don't know if anything is lost: in the case of a ProductFun in spherical harmonics, where the basis is separable but not tensor-product (i.e. the associated Legendre functions depend on both quantum numbers), it's still reasonable to store the coefficients in a matrix. All the fast transforms of ProductFuns are essentially compatible with matrices. For a LowRankFun, the current storage is a pair of vectors of Funs, but storing a low-rank matrix would de-clutter all those extra pointers.
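
A minimal sketch of what matrix-like low-rank coefficient storage could look like (hypothetical type, not the existing LowRankFun layout):

struct LowRankCoefficients{T} <: AbstractMatrix{T}
    U::Matrix{T}   # m × r column factors
    V::Matrix{T}   # n × r row factors, so the stored matrix is U*transpose(V)
end
Base.size(L::LowRankCoefficients) = (size(L.U, 1), size(L.V, 1))
Base.getindex(L::LowRankCoefficients, i::Int, j::Int) =
    sum(L.U[i, k]*L.V[j, k] for k in 1:size(L.U, 2))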

MikaelSlevinsky avatar Sep 06 '17 15:09 MikaelSlevinsky

ProductFun, sure. LowRankFun, probably not: it's not linear.

dlfivefifty avatar Sep 06 '17 15:09 dlfivefifty

Also, ProductFun is different from storing in a Matrix. A Matrix won't work because even if the coefficients are an Array, they are treated like a Vector for padding.

dlfivefifty avatar Sep 06 '17 15:09 dlfivefifty

Also, ProductFun is different from storing in a Matrix. A Matrix won't work because even if the coefficients are an Array, they are treated like a Vector for padding.

But aren't those particularities specific to the current implementation?

ProductFun, sure. LowRankFun, probably not: it's not linear.

Isn't the addition of two Funs still linear? It's just that + delegates to + for low-rank matrices. (Of course, it's a design decision whether or not to compress.)

MikaelSlevinsky avatar Sep 06 '17 15:09 MikaelSlevinsky

Yes. But then the current implementation would have to be changed.

dlfivefifty avatar Sep 06 '17 15:09 dlfivefifty

I don't know if I understand correctly: the example I created above, of a Fun{Chebyshev(【-1.0,1.0】)⊗Chebyshev(【-1.0,1.0】), Matrix{Float64}}, is not accessible by the present outer constructors. So why would anything already implemented need to change? Wouldn't it simply require dispatching to new methods?

MikaelSlevinsky avatar Sep 08 '17 01:09 MikaelSlevinsky

The issue is with the padding. But I think what we want is to move that to the coefficient storage type. And depending on the order in which you want the coefficients stored in memory, one can use different storage types.

So we would have the following:

import Base: getindex, setindex!, length, size, eltype, +

abstract type AbstractInfiniteArray{T,p} end

struct InfiniteArray{T,p,VT} <: AbstractInfiniteArray{T,p}
    data::VT
end

InfiniteArray(data::AbstractArray{T,p}) where {T,p} = InfiniteArray{T,p,typeof(data)}(data)

const InfiniteVector{T} = InfiniteArray{T,1}
const InfiniteMatrix{T} = InfiniteArray{T,2}

# convenience constructor so the alias can be called directly (as in + below)
InfiniteVector(data::AbstractVector) = InfiniteArray(data)

eltype(::InfiniteArray{T}) where T = T

# conceptually infinite, even though only finitely many entries are stored
length(A::InfiniteArray) = ∞
size(A::InfiniteVector) = (∞,)
size(A::InfiniteMatrix) = (∞,∞)

function getindex(A::InfiniteArray, k...)
   # entries beyond the stored data read as zero
   checkbounds(Bool, A.data, k...) && return A.data[k...]
   zero(eltype(A))
end

function setindex!(A::InfiniteArray, v, k...)
   # grow the stored data so that index k is valid before assigning
   pad!(A.data, k...)
   A.data[k...] = v
end

function +(A::InfiniteVector, B::InfiniteVector) 
   n = max(length(A.data), length(B.data))
   ret = InfiniteVector(Array{promote_type(eltype(A),eltype(B))}(n))

   if n == length(A.data) 
       ret.data[:] = A.data
       ret.data[1:length(B.data)] .+= B.data
   else
       ret.data[:] = B.data
       ret.data[1:length(A.data)] .+= A.data     
   end

   ret
end

Fun(S::Chebyshev, v::AbstractVector) = Fun(S, InfiniteArray(v))
Fun(S::Fourier, v::AbstractVector) = Fun(S, InfiniteBlockArray(v, blocklengths(S)))
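
A hypothetical usage of the sketch above, just to illustrate the intended padding semantics:

a = InfiniteVector(Float64[1, 2, 3])
b = InfiniteVector(Float64[1, 1])
c = a + b      # stored data becomes [2.0, 3.0, 3.0]
c[10]          # 0.0: entries beyond the stored data read as zero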

For TensorSpace we would need to support both standard matrix storage and the block matrix storage that stores by polynomial degree, as needed in .

dlfivefifty avatar Sep 08 '17 09:09 dlfivefifty