
Returning KL divergence

Open Wilco17 opened this issue 6 years ago • 4 comments

Thank you for this fantastic work!

Would it be possible for the fit_transform() method to also return the KL divergence of the run?
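For reference, scikit-learn's TSNE exposes the final objective as a `kl_divergence_` attribute after fitting, which is roughly the kind of access I have in mind:

```python
from sklearn.manifold import TSNE

tsne = TSNE(n_components=2, perplexity=30)
Y = tsne.fit_transform(X)   # X: (n_samples, n_features) array
print(tsne.kl_divergence_)  # KL divergence of the final embedding
```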

Thx!

Wilco17 avatar Jan 28 '20 08:01 Wilco17

Related question, @DavidMChan: is the avg. gradient norm printed to the log the same as, or analogous to, the KL divergence?

stu-blair avatar Jul 22 '21 20:07 stu-blair

The average gradient norm is essentially the norm of the gradient of the KL divergence with respect to the particle positions. It can serve as a proxy for how stable the optimization process is, but it is not the same as the KL divergence itself.
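Concretely, t-SNE minimizes the KL divergence between the high-dimensional affinities P and the low-dimensional affinities Q; the standard objective and per-point gradient (van der Maaten & Hinton, 2008) are:

```latex
C = \mathrm{KL}(P \,\Vert\, Q) = \sum_{i \neq j} p_{ij} \log \frac{p_{ij}}{q_{ij}},
\qquad
\frac{\partial C}{\partial y_i} = 4 \sum_{j \neq i} \frac{(p_{ij} - q_{ij})(y_i - y_j)}{1 + \lVert y_i - y_j \rVert^{2}}
```

so the logged average gradient norm is roughly the mean of \lVert \partial C / \partial y_i \rVert over the points: it tells you how close the optimizer is to a stationary point, not the value of C itself.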

DavidMChan avatar Jul 23 '21 17:07 DavidMChan

Thanks for the explanation! That makes sense.

So, just to confirm: there's currently no way at all to see the KL divergence value?

stu-blair avatar Jul 23 '21 20:07 stu-blair

Currently, no, but I'll consider working it into the next version, and we always welcome PRs if anyone wants to contribute!

Here would be a good place in the code to start looking: https://github.com/CannyLab/tsne-cuda/blob/b740a7d46a07ca9415f072001839fb66a582a3fa/src/fit_tsne.cu#L513
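In the meantime, if an approximate value is enough, the objective can be recomputed post hoc outside the library. Here's a minimal NumPy sketch, with simplifying assumptions: it uses a single fixed Gaussian bandwidth `sigma` for P instead of the per-point perplexity calibration t-SNE actually performs, and it builds dense O(n²) distance matrices, so it is only practical for small datasets:

```python
import numpy as np

def approx_kl_divergence(X, Y, sigma=1.0):
    """Approximate KL(P || Q) for an embedding Y of data X.

    Simplification: P uses one fixed Gaussian bandwidth `sigma`
    rather than the per-point bandwidths found by perplexity search.
    """
    # High-dimensional affinities P (Gaussian kernel, fixed bandwidth).
    d_hi = np.square(X[:, None, :] - X[None, :, :]).sum(-1)
    P = np.exp(-d_hi / (2.0 * sigma ** 2))
    np.fill_diagonal(P, 0.0)
    P /= P.sum()

    # Low-dimensional affinities Q (Student-t kernel with one d.o.f.).
    d_lo = np.square(Y[:, None, :] - Y[None, :, :]).sum(-1)
    Q = 1.0 / (1.0 + d_lo)
    np.fill_diagonal(Q, 0.0)
    Q /= Q.sum()

    eps = 1e-12  # guard against log(0) and division by zero
    mask = P > eps
    return float(np.sum(P[mask] * np.log(P[mask] / (Q[mask] + eps))))

# Usage with a tsne-cuda embedding:
# from tsnecuda import TSNE
# Y = TSNE(n_components=2).fit_transform(X)
# print(approx_kl_divergence(X, Y))
```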

DavidMChan avatar Jul 23 '21 23:07 DavidMChan