
[BUG]Unexpected behavior when using Task with scalar

Open cprudhom opened this issue 1 year ago • 4 comments

The following test fails:

@Test(groups = "1s", timeOut = 60000)
public void testScalarAndTaskBug1() {
    int[] coeffs = new int[]{3, -2, -1, -2};
    IntVar[] vars = new IntVar[4];
    vars[0] = model.intVar(8, 9);
    vars[1] = model.intVar(2, 4);
    vars[2] = model.intVar(new int[]{1, 2, 4, 5, 6});
    vars[3] = model.intVar(3, 5);
    model.scalar(vars, coeffs, "<=", 1).post();
    IntVar ee = model.intVar(4, 9);
    new Task(vars[3], vars[2], ee);
    model.getSolver().solve();
    Assert.assertEquals(model.getSolver().getSolutionCount(), 0);
}

When the scalar constraint filters, it first reduces `vars[0]` to 8, then `vars[1]` to 4 and `vars[2]` to {5,6}. At this point the task automatically reduces the domain of `vars[3]` to {3,4}. Then the scalar constraint keeps on filtering, fixes `vars[3]` to 4 and passivates itself. But with `vars[3] = 4` there is no solution anymore to this model, and `scalar` is not able to detect it because it does not react to modifications made during its own filtering step. The combination of task filtering and passivation leads to an inconsistency.
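A quick exhaustive check, written in plain Java and independent of choco-solver, confirms that the model in the failing test indeed has no solution: it enumerates every combination of domain values and tests both the scalar constraint and the task relation start + duration = end.

```java
// Brute-force enumeration of the model in testScalarAndTaskBug1,
// independent of choco-solver, to confirm that no solution exists.
public class NoSolutionCheck {
    public static void main(String[] args) {
        int[] v2Dom = {1, 2, 4, 5, 6};                 // domain of vars[2]
        int count = 0;
        for (int v0 = 8; v0 <= 9; v0++)                // vars[0]
            for (int v1 = 2; v1 <= 4; v1++)            // vars[1]
                for (int v2 : v2Dom)                   // vars[2]
                    for (int v3 = 3; v3 <= 5; v3++)    // vars[3]
                        for (int ee = 4; ee <= 9; ee++)
                            // scalar: 3*v0 - 2*v1 - v2 - 2*v3 <= 1
                            // task:   v3 + v2 == ee (start + duration = end)
                            if (3 * v0 - 2 * v1 - v2 - 2 * v3 <= 1
                                    && v3 + v2 == ee) {
                                count++;
                            }
        System.out.println("solutions: " + count);     // prints "solutions: 0"
    }
}
```

The only assignments satisfying the scalar constraint force `vars[3] = 5` with `vars[2]` in {5,6}, so `ee` would have to be 10 or 11, outside its domain [4,9]; hence the expected solution count is 0.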

So IMO, the way tasks (or, more generally, monitors) reduce domains is buggy: constraints cannot be expected to be aware of reductions made by monitors during their own filtering step. I'm afraid other constraints will produce the same bug...

cprudhom avatar Nov 15 '24 16:11 cprudhom

It would be interesting to run a benchmark comparing monitors against a classical arithm constraint. I wonder if the monitors are worth it: I am not sure the gain is significant (there are cases of performance loss), and the hack may have side effects on search and learning as well...

jgFages avatar Nov 15 '24 16:11 jgFages

There is (at least) one case where monitors are mandatory: when the cumulative constraint is declared.

cprudhom avatar Nov 15 '24 16:11 cprudhom

It was the motivation long ago but is it really still necessary? For which algorithm?

jgFages avatar Nov 15 '24 16:11 jgFages

Yes, filtering done by monitors (and thus by TaskMonitor) can lead to very buggy behaviours inside propagators, as not all of them have been designed to adapt to such filtering. In the case of PropScalar, storing the lb and ub values of each variable in the prepare() method would lead to correct behaviour (it would also be more consistent, as the algorithm relies on pre-computed values such as the I array).

That said, I think we should not rely on monitor filtering (unless we want to possibly re-do all propagators' code).

In the case of TaskMonitor, its main interest is in updating the variables' domains w.r.t. the relation start + duration = end, which might help the cumulative's propagators to fail sooner (without waiting for the arithm's filtering). However, such filtering can be done inside the cumulative's propagator, as is already done in PropCumulative.propIni(), which is called every time the propagator is propagated, since PropCumulative does not react to fine events (propIni() is only called inside a full propagation). The problem is that PropGraphCumulative does react to fine events, and therefore does not call propIni().
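For reference, the bound filtering that the task performs for start + duration = end can be sketched in plain Java. The names below are illustrative only (this is NOT the choco-solver API); the sketch just mirrors the arithmetic, and reproduces the reduction of vars[3] to {3,4} described in the issue (bounds only, holes ignored).

```java
// Illustrative fixed-point bound filtering of start + duration = end,
// mimicking what a task/monitor does. Hypothetical names, not choco API.
public class TaskFilterSketch {
    static final class Bounds {
        int lb, ub;
        Bounds(int lb, int ub) { this.lb = lb; this.ub = ub; }
    }

    // Intersect b with [lb, ub]; report whether anything changed.
    static boolean tighten(Bounds b, int lb, int ub) {
        boolean changed = false;
        if (lb > b.lb) { b.lb = lb; changed = true; }
        if (ub < b.ub) { b.ub = ub; changed = true; }
        return changed;
    }

    // Filter start + duration = end to a fixed point.
    // Returns false on failure (an empty domain).
    static boolean filter(Bounds s, Bounds d, Bounds e) {
        boolean changed = true;
        while (changed) {
            changed = false;
            changed |= tighten(e, s.lb + d.lb, s.ub + d.ub); // end = start + duration
            changed |= tighten(s, e.lb - d.ub, e.ub - d.lb); // start = end - duration
            changed |= tighten(d, e.lb - s.ub, e.ub - s.lb); // duration = end - start
            if (s.lb > s.ub || d.lb > d.ub || e.lb > e.ub) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // State from the issue once scalar has reduced vars[2] to {5,6}:
        Bounds start = new Bounds(3, 5);    // vars[3]
        Bounds duration = new Bounds(5, 6); // vars[2], bounds only
        Bounds end = new Bounds(4, 9);      // ee
        filter(start, duration, end);
        // start is reduced to [3,4] -- the reduction performed behind
        // PropScalar's back in the failing test.
        System.out.println("start: [" + start.lb + "," + start.ub + "]");
    }
}
```

Running the sketch prints `start: [3,4]`, matching the step where the task reduces vars[3] while the scalar constraint is still mid-filtering.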

Since most (if not all) of the cumulative's filtering algorithms rely on pre-computed values (such as the profile, for instance) and filtering is not helped by previous filtering (unless the pre-computed values are recomputed), enforcing the relation start + duration = end should be done right before computing those pre-computed values. The only problem with this "solution" is that the relation start + duration = end would then be filtered twice (once in the cumulative, once in the arithm), whereas the solution relying on TaskMonitor enforces it once. However, since the cumulative's filtering algorithms are quite costly to apply, I still think that applying the propagation twice is the best implementation.

ArthurGodet avatar Nov 16 '24 15:11 ArthurGodet