RFC: add `nextafter`
This RFC proposes adding a `nextafter` function, which returns the next representable floating-point value after `x1` in the direction of `x2`.
## Overview
Based on array comparison data, the API is available in most array libraries. The main exception is MXNet, which doesn't implement it.
## Prior art
- NumPy: https://numpy.org/doc/stable/reference/generated/numpy.nextafter.html#numpy.nextafter
- PyTorch: https://pytorch.org/docs/stable/generated/torch.nextafter.html
- TensorFlow: https://www.tensorflow.org/api_docs/python/tf/math/nextafter
- CuPy: https://docs.cupy.dev/en/stable/reference/generated/cupy.nextafter.html
- Dask: https://docs.dask.org/en/stable/generated/dask.array.nextafter.html
- JAX: https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.nextafter.html
- MXNet: no implementation of `nextafter`
## Proposal
This proposal follows similar element-wise APIs:

```python
def nextafter(x1: array, x2: array, /) -> array
```
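For illustration, NumPy's existing `numpy.nextafter` already matches this signature; a quick sketch of the expected element-wise semantics (using NumPy only as a stand-in for a conforming library):

```python
import numpy as np

# Next representable float64 after 1.0, moving toward 2.0:
# this is exactly 1.0 + machine epsilon for float64.
up = np.nextafter(1.0, 2.0)
assert up == 1.0 + np.finfo(np.float64).eps

# Moving toward -inf gives the next representable value below 1.0.
down = np.nextafter(1.0, -np.inf)
assert down < 1.0

# When x1 == x2, the value is returned unchanged (C99 semantics).
assert np.nextafter(1.0, 1.0) == 1.0
```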
## Related
A few questions come to mind:
- Is C99 naming preferred here (`nextafter`/`nexttoward`), or would following IEEE 754 (`nextUp`/`nextDown`) be better?
  - In the SciPy use case, IEEE 754 `nextDown` is what is desired: https://github.com/scipy/scipy/blob/422ae7150123feb11b1deb83c6bad61ed3055251/scipy/stats/_axis_nan_policy.py#L203
- How should type promotion behave? Should `x1` and `x2` participate in type promotion? Should `x2` be converted to `x1`'s dtype? If not converted, then `nextafter` is actually more like C99's `nexttoward`.
- What should happen when an integer dtype is provided for `x1`?
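To make the type-promotion question concrete, this is what ordinary promotion does today in NumPy (behavior observed there; other libraries may differ):

```python
import numpy as np

x1 = np.ones(3, dtype=np.float32)
x2 = np.full(3, np.inf, dtype=np.float64)

# Ordinary type promotion applies: float32 + float64 -> float64,
# so the result has float64's dtype rather than x1's dtype.
out = np.nextafter(x1, x2)
assert out.dtype == np.float64
```

Under the `nexttoward`-style alternative, the result above would instead keep `x1`'s `float32` dtype, with `x2` only supplying the direction.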
My thoughts:
- The Julia API is nicest and usually what you want. However, I don't really have much of a desire to see a new function here that doesn't have precedent in all existing Python libraries. It's a pretty simple API with no keywords, so it doesn't really matter that it's not completely optimal.
- The proposal says `array`, not `array | scalar`. This function is used with scalars a lot. `array` is consistent with our whole API design though, and PyTorch does not support scalars. On the other hand, this is going to cause a lot of churn for array-consuming code, where for example `np.nextafter(x, -np.inf)` has to become `np.nextafter(x, np.array(-np.inf, dtype=x.dtype))` or some such thing. This is the trickiest thing to decide. I think we may consider allowing `x2` to be a Python scalar.
- Type promotion: normal type promotion rules between `x1` and `x2` for floating-point dtypes - that's what I expect, and I just checked that NumPy, JAX, and PyTorch all support it.
- Integer `x1` dtype: it's ambiguous and at least PyTorch does not support it, so undefined behavior (may raise an exception).
- Integer `x2` dtype should be allowed, since it's just indicating the direction.
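A small NumPy sketch of the integer-`x2` point (NumPy's current behavior, used here only for illustration):

```python
import numpy as np

x = np.float64(1.0)

# An integer x2 only indicates the direction; NumPy promotes it to float.
assert np.nextafter(x, 2) > 1.0   # step up toward 2
assert np.nextafter(x, 0) < 1.0   # step down toward 0
```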
> Type promotion: normal type promotion rules between `x1` and `x2` for floating-point dtypes - that's what I expect, and I just checked that NumPy, JAX, and PyTorch all support it.
To my eyes, that is weird behavior. I would think that what you usually want is the next value before/after an element in `x1` with the same dtype. To provide a `float32` array for `x1` and, e.g., a `float64` array for `x2` and get back a `float64` array strikes me as not what is usually wanted, as the entire point of `nextafter` is to get the nearest floating-point value to an input. If you want the next `float64` value, IMO, you should provide a `float64` input array for `x1`.
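The surprise described above can be shown concretely in NumPy (assuming its current promotion behavior): the size of the step taken depends on the promoted dtype, not on `x1`'s dtype.

```python
import numpy as np

x = np.ones(1, dtype=np.float32)

# Same-dtype call: the step is float32's machine epsilon (~1.19e-07).
step32 = np.nextafter(x, np.float32(2.0))[0] - np.float32(1.0)

# Mixed-dtype call promotes to float64: the step shrinks to
# float64's epsilon (~2.22e-16), likely not what was intended.
step64 = np.nextafter(x, np.full(1, 2.0, dtype=np.float64))[0] - 1.0

assert step32 == np.finfo(np.float32).eps
assert step64 == np.finfo(np.float64).eps
```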
> Integer `x2` dtype should be allowed, since it's just indicating the direction.
This seems in conflict with supporting type promotion, as mixed-kind promotion semantics are undefined.
> To my eyes, that is weird behavior. I would think that what you usually want is the next value before/after an element in `x1` with the same dtype.
On second thought, I agree with you there.