
Discrepancy between the outputs of `ot.sinkhorn` and `ot.optim.gcg` with `reg2=0`

Open · MatDag opened this issue 4 months ago • 1 comment

Describe the bug

In `ot.optim.gcg`, the inner solver is `ot.sinkhorn`. Thus, if I am not mistaken, I would expect the outputs of

ot.optim.gcg(a, b, M, reg, 0., lambda x: 0., lambda x: np.zeros(x.shape), numInnerItermax=1000)

and

ot.sinkhorn(a, b, M, reg, numItermax=1000)

to be the same. However, this is not always the case, even when using the same tolerances.

Am I missing something?
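
For reference, my understanding (from the docstring) is that `gcg` targets

$$\min_{\gamma \ge 0,\ \gamma \mathbf{1} = a,\ \gamma^T \mathbf{1} = b} \ \langle \gamma, M \rangle_F + \mathrm{reg1} \cdot \Omega(\gamma) + \mathrm{reg2} \cdot f(\gamma), \qquad \Omega(\gamma) = \sum_{i,j} \gamma_{i,j} \log \gamma_{i,j},$$

so with `reg2 = 0` and `f = 0` it should reduce to exactly the entropic OT problem solved by `ot.sinkhorn`.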

To Reproduce

import numpy as np
import ot

n = 100
X = ot.datasets.make_2D_samples_gauss(n, m=[0, 0], sigma=np.eye(2))
Y = ot.datasets.make_2D_samples_gauss(n, m=[1, 1], sigma=np.eye(2))

M = ot.dist(X, Y)
M /= M.max()

a = ot.unif(n)
b = ot.unif(n)

reg = 1e-2

pi1 = ot.optim.gcg(a, b, M, reg, 0., lambda x: 0., lambda x: np.zeros(x.shape), numInnerItermax=1000)
pi2 = ot.sinkhorn(a, b, M, reg, numItermax=1000)

assert np.allclose(pi1, pi2, atol=1e-5)  # fails

MatDag · Sep 13 '25 14:09

I think this is due to the fact that in `generic_conditional_gradient` the gradient of the entropic regularization is added to the cost. But this operation is not justified when `sinkhorn` is used as the inner solver, is it? I will open a PR.
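
For what it's worth, here is a rough sketch of why this would break the equivalence. The linearized cost `Mi = M + reg * (1 + log(G))` below is my reading of what `generic_conditional_gradient` builds, not a copy of the actual code: if the entropy gradient is folded into the cost passed to `sinkhorn`, the plain Sinkhorn plan is no longer a fixed point of the inner step, so the two solvers cannot agree in general.

import numpy as np
import ot

n = 50
X = ot.datasets.make_2D_samples_gauss(n, m=[0, 0], sigma=np.eye(2))
Y = ot.datasets.make_2D_samples_gauss(n, m=[1, 1], sigma=np.eye(2))
M = ot.dist(X, Y)
M /= M.max()
a, b = ot.unif(n), ot.unif(n)
reg = 1e-2

# Plain entropic OT plan
G_star = ot.sinkhorn(a, b, M, reg, numItermax=1000)

# If G_star were a fixed point of the gcg inner step with reg2 = 0,
# running the inner sinkhorn on the linearized cost should return G_star.
# (The extra reg * (1 + log(G)) term is my reading of the linearization.)
Mi = M + reg * (1 + np.log(G_star))
G_step = ot.sinkhorn(a, b, Mi, reg, numItermax=1000)

print(np.allclose(G_star, G_step, atol=1e-5))  # expected False if the extra term shifts the fixed point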

MatDag · Sep 13 '25 15:09