Gustavo Landfried

Results: 7 comments by Gustavo Landfried

Hi all. We have implemented [1] and documented [2] the first TrueSkill Through Time packages for Python, Julia, and R. I wish to express my gratitude to Heungsub Lee for...

Hi, and thanks Tom. The reference has been very helpful. It allowed us to verify that the exact message 7 we had calculated was in fact correct, since when we...

I still don't understand why the approximate message 7 does not minimize the KL divergence with respect to the exact message 7.
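To make the premise of the question concrete, here is a small numerical check (my own toy example; the two-component mixture below is just a stand-in for a non-Gaussian exact quantity and has nothing to do with the actual message 7): among Gaussians, $KL(p\|q)$ is minimized by the Gaussian that matches the mean and variance of $p$, so perturbing those moments increases the divergence.

```python
import numpy as np

# Grid for numerical integration.
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]

def gaussian(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

def normalize(f):
    return f / (np.sum(f) * dx)

def kl(p, q):
    # KL(p||q) for two densities tabulated on the grid.
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask])) * dx

# A non-Gaussian "exact" distribution: a two-component mixture.
p = normalize(0.7 * gaussian(x, -1.0, 1.0) + 0.3 * gaussian(x, 2.0, 0.5))
mu = np.sum(x * p) * dx
var = np.sum((x - mu) ** 2 * p) * dx

# The moment-matched Gaussian beats any perturbed candidate.
best = kl(p, gaussian(x, mu, var))
for dmu, dvar in [(0.3, 0.0), (-0.3, 0.0), (0.0, 0.5), (0.0, -0.5)]:
    assert best < kl(p, gaussian(x, mu + dmu, var + dvar))
print(mu, var, best)
```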

Our original goal was to implement the Matchbox model from scratch, as an exercise to learn as much as possible from the methods you created. Matchbox is particularly interesting...

The mathematical definition of the KL divergence is $$KL(p\|q) = - \sum_{x \in X} p(x) \log\left( \frac{q(x)}{p(x)} \right)$$ with $p$ the true distribution and $q$ the approximating distribution. Then, the...
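As a numerical companion to the definition (a small sketch of my own; `p` and `q` are arbitrary discrete distributions), note in particular that the divergence is not symmetric:

```python
import numpy as np

def kl_divergence(p, q):
    """KL(p||q) = -sum_x p(x) * log(q(x)/p(x)) for discrete distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0  # terms with p(x) == 0 contribute nothing, by convention
    return -np.sum(p[mask] * np.log(q[mask] / p[mask]))

p = [0.5, 0.4, 0.1]
q = [0.3, 0.3, 0.4]
print(kl_divergence(p, q))  # ~0.232
print(kl_divergence(q, p))  # ~0.315, so KL(p||q) != KL(q||p)
```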

Yes, you are right: messages 8 (or 7) are likelihoods, so they are not distributions. It is my homework to verify the KL divergence correctly. Thank you very much....
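To spell out why that matters (again a toy sketch of my own: the $N(1,1)$ prior and the Laplace-shaped likelihood are arbitrary stand-ins, not factors of the actual model), since a message is a likelihood, the divergence has to be compared between the resulting marginals, context $\times$ exact message versus context $\times$ approximate message, and it is at that level that moment matching is optimal:

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]

def gaussian(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

def normalize(f):
    return f / (np.sum(f) * dx)

def moments(f):
    p = normalize(f)
    mu = np.sum(x * p) * dx
    return mu, np.sum((x - mu) ** 2 * p) * dx

def kl(p, q):
    p, q = normalize(p), normalize(q)
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask])) * dx

prior = gaussian(x, 1.0, 1.0)      # context (cavity) distribution
message = np.exp(-np.abs(x))       # toy exact message: a likelihood, not a distribution
exact_marginal = prior * message

# Moment-matching the marginal ...
q_marginal = gaussian(x, *moments(exact_marginal))
# ... versus moment-matching the normalized message on its own.
q_message = gaussian(x, *moments(message))

print(kl(exact_marginal, q_marginal))         # smaller: optimal at the marginal level
print(kl(exact_marginal, prior * q_message))  # larger: matching the message alone is not optimal here
```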

Hello again. Here we will follow your technical report "Divergence measures and message passing" [1], rephrasing it as closely as possible. To recap, we want the best approximation $q$ that...
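For completeness (this step is standard and not a quote from the report): restricting $q$ to Gaussians $q(x) = \mathcal{N}(x; m, v)$, the inclusive divergence reduces to

$$KL(p\|q) = \text{const} + \frac{1}{2}\log v + \frac{\mathbb{E}_p[(x-m)^2]}{2v}$$

and setting its derivatives to zero gives moment matching,

$$\frac{\partial KL}{\partial m} = \frac{m - \mathbb{E}_p[x]}{v} = 0 \;\Rightarrow\; m = \mathbb{E}_p[x], \qquad \frac{\partial KL}{\partial v} = \frac{1}{2v} - \frac{\mathbb{E}_p[(x-m)^2]}{2v^2} = 0 \;\Rightarrow\; v = \mathrm{Var}_p[x],$$

which is the sense in which the approximation minimizes the divergence at the level of marginals.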