Bad instance selection for default_cut_minus
I was working on a development that uses cut_minus with Z and ran into cut_minus returning negative integers. I was able to reduce it to a small test case and narrowed it down to peculiar typeclass resolution: the Decision argument of default_cut_minus is resolved before the Le argument, and the instance found is the one for deciding equiv on Z, so equiv ends up being used as the order inside default_cut_minus. I'm not anywhere near enough of a typeclass wizard to know what, if anything, can be done to force the intended instances to be selected.
The script below demonstrates the issue on Coq 8.19.1 and Rocq 9.0.1 with math-classes master.
From MathClasses Require Import
  interfaces.canonical_names
  interfaces.additional_operations
  implementations.stdlib_binary_integers
  orders.integers
  theory.cut_minus.
From Coq Require Import BinIntDef.
(* cut_minus 1 2 is supposed to evaluate to 0, but it results in -1 *)
Compute (cut_minus 1%Z 2%Z).
Type (@equiv Z _ : Le Z).
(* Typeclass debug logs show that instance resolution is done for
   Decision before it is done for Le, and the first instance found
   is the instance for deciding equiv. As we can see above, equiv
   is typeable as an Le instance, so this solution is accepted. Thus,
   the cut_minus above is equivalent to the following. *)
Compute (default_cut_minus (R:=Z) (o:=Z_equiv) 1%Z 2%Z).
Example cm_selected_instance : cut_minus (A:=Z) ≡ default_cut_minus (R:=Z) (o:=Z_equiv) := eq_refl.
(* cut_minus 1 2 is supposed to evaluate to 0, so this should hold by
   eq_refl, but with the instance selected above it evaluates to -1 and
   the Example is rejected *)
Example bad_cut_minus : (cut_minus 1%Z 2%Z ≡ 0) := eq_refl.
I thought to try changing the hint mode for Decision to +, reasoning that the available Decision instances shouldn't be used to determine what is being decided. I expected this to suspend resolution of Decision instances until any existentials in the argument were resolved (unshelving subgoals if necessary), but it seems to just block resolving any Decision instance for an ungrounded proposition.
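For reference, what I tried was roughly the following (a sketch only; Decision takes a single Prop argument, so the mode is a single flag, and where the declaration should live would have to follow the library's conventions):

Global Hint Mode Decision + : typeclass_instances.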
One fix might be to replace forall x y, Decision (x ≤ y) with a more specific class Decidable_le indexed by Le instances, so it can't accidentally pick equiv.
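A sketch of the kind of class I mean, in the definitional-class style the library already uses (untested, and the names Decidable_le and decide_le are illustrative):

Class Decidable_le (A : Type) (Ale : Le A) : Type :=
  decide_le : ∀ x y : A, Decision (Ale x y).

Since instances of Decidable_le would be keyed on a concrete Le instance, resolving the decision could no longer fix the order to equiv by accident.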
If I add an explicit `{!SemiRingOrder o} argument to default_cut_minus in theory/cut_minus.v, the correct instances are found: the SemiRingOrder and Decision goals are then added to the instance search together, and the SemiRingOrder instance grounds the Le instance to something that actually means less-than-or-equal-to.
The SemiRingOrder already in the section context is not currently among the arguments of default_cut_minus because it is not used within the definition. By adding it there, the CutMinusSpec instance should always apply to an inferred default_cut_minus.
I also duplicated the Decision instance in the arguments so that the decision is resolved after the order. The diff below shows the change.
Are there downsides to this change that are beyond my knowledge and experience? Are there perhaps other fragile default instances that can be resolved in ways that do not correctly implement the operation?
diff --git a/theory/cut_minus.v b/theory/cut_minus.v
index 0d3fb96..25a2b38 100644
--- a/theory/cut_minus.v
+++ b/theory/cut_minus.v
@@ -217,7 +217,7 @@ End cut_minus_properties.
Section cut_minus_default.
Context `{Ring R} `{!SemiRingOrder o} `{∀ x y : R, Decision (x ≤ y)}.
- Global Instance default_cut_minus: CutMinus R | 10 := λ x y, if decide_rel (≤) x y then 0 else x - y.
+ Global Instance default_cut_minus `{!SemiRingOrder o} `{∀ x y : R, Decision (x ≤ y)}: CutMinus R | 10 := λ x y, if decide_rel (≤) x y then 0 else x - y.
Add Ring R2: (rings.stdlib_ring_theory R).
I was also thinking that perhaps the Ring argument should be added as well, to ensure that (-) has the proper meaning. The fact that these constraints were introduced into the context before default_cut_minus is defined seems to support the reading that they are meant as restrictions on the applicability of default_cut_minus.
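Roughly what that fuller variant would look like (an untested sketch; I have not checked how the repeated binders interact with the section Context that already declares them):

Global Instance default_cut_minus `{Ring R} `{!SemiRingOrder o}
    `{∀ x y : R, Decision (x ≤ y)} : CutMinus R | 10 :=
  λ x y, if decide_rel (≤) x y then 0 else x - y.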