Fix automated conversion in adaptive solve
@ChrisRackauckas, I'd like your comment here. Currently, StepRangeLen promotes its ref type to FP64, as done here: https://github.com/JuliaLang/julia/blob/2fb06a7c25fa2b770a8f6e8a45fec48c002268e4/base/twiceprecision.jl#L369
From the comment in that file itself:
# Necessary for creating nicely-behaved ranges like r = 0.1:0.1:0.3
# that return r[3] == 0.3. Otherwise, we have roundoff error due to
# 0.1 + 2*0.1 = 0.30000000000000004
So by default it creates FP64 types inside the StepRangeLen, like this:
julia> saveat = 0.1f0:0.1f0:10.0f0
0.1f0:0.1f0:10.0f0
julia> typeof(saveat)
StepRangeLen{Float32, Float64, Float64, Int64}
This causes issues with backends that do not fully support double precision (Apple, Intel).
This PR explicitly constructs the StepRangeLen entirely from the types of the range arguments, so the whole range stays in FP32 when the endpoints and step are FP32 (see the sketch below). The tests fail because they still expect the FP64-backed ranges, which produce slightly different values due to roundoff. What should we do in this case: update the tests, or drop our explicit cast and instead warn when saveat is given as a range on backends with limited double-precision support?
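For reference, a minimal sketch of one way to keep the range entirely in Float32, using the explicit StepRangeLen constructor; this is an illustration of the idea, not the exact code in the PR:

```julia
# Store the reference and step as Float32, avoiding the TwicePrecision/Float64
# promotion that the colon syntax applies.
saveat32 = StepRangeLen{Float32,Float32,Float32}(0.1f0, 0.1f0, 100)

typeof(saveat32)  # StepRangeLen{Float32, Float32, Float32, Int64}
saveat32[3]       # indexing is done in pure Float32 arithmetic, so values can
                  # differ slightly from the FP64-backed range
```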
Give nice errors when using saveat ranges on backends with limited double-precision support.
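Something along these lines could work; a rough sketch, with hypothetical names (`assert_saveat_precision`, `fp64_supported`) that are not from the actual codebase:

```julia
# Hypothetical guard: reject saveat ranges that carry Float64 (or TwicePrecision{Float64})
# reference/step types when the target backend lacks FP64 support.
function assert_saveat_precision(saveat::StepRangeLen{T,R,S},
                                 fp64_supported::Bool) where {T,R,S}
    if !fp64_supported && (R <: Union{Float64,Base.TwicePrecision{Float64}} ||
                           S <: Union{Float64,Base.TwicePrecision{Float64}})
        throw(ArgumentError(
            "saveat is backed by $R/$S, but this backend has no Float64 support. " *
            "Construct the range explicitly in Float32, e.g. " *
            "StepRangeLen{Float32,Float32,Float32}(start, step, length)."))
    end
    return saveat
end
assert_saveat_precision(saveat, fp64_supported::Bool) = saveat  # non-range saveat: nothing to check
```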
@utkarsh530 were you going to revive and finish this?
Oh yeah, this is pretty much complete; let me finish it.
Bump; I'd like to test https://github.com/JuliaGPU/Metal.jl/issues/214
Sorry, I've been a bit busy with the internship; I'll have a look ASAP.