Precision of literals cannot be refined
Hi, thank you for this great repo! Very cool (I'm working on a similar implementation in Julia).
I find the following behavior a bit strange, but I'm not entirely sure whether it's a bug or a design choice:
```
# let a = 0.1 ;;
a : real = 0.099999999999999992
# let b = 0.100000000000000000000000000000000000000000000000000000000000000000000001 ;;
b : real = 0.0999999999999999986
# b - a > 1e-60 ;;
- : prop = True
# #precision 1e-300 ;;
Target precision set to 9.9999999999999986e-301
# b - a > 1e-60 ;;
- : prop = True
```
It's worth highlighting that if one reads `0.1` as the rational $$\frac{1}{10}$$, and the literal for `b` similarly, then `b - a` is actually $$10^{-72} < 10^{-60}$$, so the comparison should come out False. (The result doesn't change even if the higher precision is set before `a` and `b` are defined.)
This happens because `of_string` in `dyadic_bigint` uses a hard-coded 53-bit precision. Is this a bug?
The current behavior matches mainstream programming languages, where a literal like `0.1` parses to an IEEE-754 approximation of the rational. Here, though, wouldn't it be better to parse a literal to a cut (or to a dedicated type that lazily represents dyadic approximations of a rational), so that it can be refined as the working precision increases?
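A minimal sketch of that idea, in Python rather than the repo's OCaml for brevity (the names `parse_decimal` and `dyadic_approx` are hypothetical, not part of this repo): keep the literal as an exact rational, and compute a dyadic approximation only when a precision is requested, so the same literal can later be re-approximated at higher precision.

```python
from fractions import Fraction

def parse_decimal(s: str) -> Fraction:
    # Fraction's constructor parses a decimal string exactly: "0.1" -> 1/10.
    return Fraction(s)

def dyadic_approx(q: Fraction, bits: int) -> Fraction:
    # Nearest dyadic rational n / 2**bits to q; can be re-called at any precision.
    return Fraction(round(q * 2 ** bits), 2 ** bits)

a = parse_decimal("0.1")
b = parse_decimal("0.1" + "0" * 70 + "1")  # the literal for b from the transcript

# The exact difference survives parsing: b - a == 10**-72 < 10**-60.
exact_diff = b - a

# At a hard-coded 53 bits (as in of_string now), both literals collapse
# to the same dyadic, so the difference is lost at parse time.
collapsed_at_53 = dyadic_approx(a, 53) == dyadic_approx(b, 53)

# At ~300 bits (cf. #precision 1e-300), the approximations do differ,
# because 10**-72 is about 2**-239.
distinct_at_300 = dyadic_approx(a, 300) != dyadic_approx(b, 300)
```

The point is that rounding happens per-request instead of once at parse time, so `#precision` can actually refine what a literal denotes.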