Returning KL divergence
Thank you for this fantastic work!
Would it be possible for the fit_transform() method to return the KL divergence of the run?
Thanks!
Related question, @DavidMChan: is the Avg. Gradient Norm printed to the log the same as, or analogous to, the KL divergence?
The average gradient norm is essentially the norm of the gradient of the KL divergence with respect to the particle positions. It can serve as a proxy for how stable the optimization is, but it is not the same quantity as the KL divergence itself.
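For intuition, here is a minimal NumPy sketch (not the library's actual code) that computes both quantities for a small dataset: the exact KL(P‖Q) objective and the average norm of its gradient with respect to the embedding points. It uses a fixed-bandwidth Gaussian for the input affinities rather than the usual perplexity calibration, and an O(N²) exact computation, so treat it as illustrative only.

```python
# Illustrative sketch only: exact t-SNE quantities with a fixed-bandwidth
# Gaussian (real t-SNE calibrates per-point bandwidths to a target perplexity).
import numpy as np

def tsne_kl_and_avg_grad_norm(X, Y, sigma=1.0):
    """Return (KL divergence, average gradient norm) for embedding Y of data X."""
    # High-dimensional affinities P.
    d2_hi = np.sum((X[:, None] - X[None, :]) ** 2, axis=-1)
    P = np.exp(-d2_hi / (2 * sigma ** 2))
    np.fill_diagonal(P, 0.0)
    P /= P.sum()

    # Low-dimensional affinities Q (Student-t kernel, one degree of freedom).
    d2_lo = np.sum((Y[:, None] - Y[None, :]) ** 2, axis=-1)
    inv = 1.0 / (1.0 + d2_lo)
    np.fill_diagonal(inv, 0.0)
    Q = inv / inv.sum()

    # KL(P || Q), skipping zero entries.
    mask = P > 0
    kl = np.sum(P[mask] * np.log(P[mask] / np.maximum(Q[mask], 1e-12)))

    # Per-point gradient: dC/dy_i = 4 * sum_j (p_ij - q_ij)(y_i - y_j) / (1 + ||y_i - y_j||^2)
    grad = 4.0 * np.einsum('ij,ijk->ik', (P - Q) * inv, Y[:, None] - Y[None, :])
    avg_grad_norm = np.linalg.norm(grad, axis=1).mean()

    return kl, avg_grad_norm
```

The gradient is built from the same P and Q as the KL itself, which is why its norm tracks the optimization's progress, but a small gradient norm only tells you the embedding has stopped moving, not how small the final KL value is.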
Thanks for the explanation! That makes sense.
So, just to confirm: there's currently no way to see the KL divergence value?
Currently, no - but I'll consider working it into the next version, and PRs are always welcome if anyone wants to contribute!
Here would be a good place in the code to start looking: https://github.com/CannyLab/tsne-cuda/blob/b740a7d46a07ca9415f072001839fb66a582a3fa/src/fit_tsne.cu#L513
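In the meantime, one possible workaround is to compute the KL divergence post hoc from the embedding that fit_transform returns, e.g. by reusing the helper sketched above. Note that the fixed-sigma P in that sketch will differ from the perplexity-calibrated P that tsne-cuda actually optimizes, so the number is only an approximation of the run's true objective value.

```python
# Sketch of a post-hoc workaround, assuming tsnecuda's sklearn-style API
# and the tsne_kl_and_avg_grad_norm helper above; exact O(N^2) computation,
# so only feasible for small N.
import numpy as np
from tsnecuda import TSNE

X = np.random.RandomState(0).randn(500, 50).astype(np.float32)
Y = TSNE(n_components=2, perplexity=30, learning_rate=200).fit_transform(X)

kl, g = tsne_kl_and_avg_grad_norm(X, Y)
print(f"approx. post-hoc KL: {kl:.4f}, avg gradient norm: {g:.4f}")
```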