Wu Lin
This issue is partially solved by https://github.com/shogun-toolbox/shogun/pull/2484. TODO: add the implicit gradient w.r.t. hyper-parameters.
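For context, my reading of "implicit gradient" here is the standard Laplace-approximation argument (the PR may differ): the posterior mode $\hat{f}$ depends on a hyper-parameter $\theta$ implicitly through the mode-finding equation, so the gradient of the approximate log marginal likelihood $\log Z$ splits into an explicit and an implicit term:

```latex
\frac{\mathrm{d}\log Z(\theta)}{\mathrm{d}\theta}
  = \underbrace{\frac{\partial \log Z(\theta)}{\partial \theta}}_{\text{explicit}}
  + \underbrace{\sum_i \frac{\partial \log Z(\theta)}{\partial \hat{f}_i}\,
      \frac{\partial \hat{f}_i}{\partial \theta}}_{\text{implicit}}
```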
You may have a look at this paper about exponential family embeddings: https://arxiv.org/abs/1608.00778
@karlnapf I will work on this issue once the `notebooks` are done.
@karlnapf Currently, `pthread` is used to compute the gradients in parallel.
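A minimal sketch of that parallelisation pattern, assuming one worker thread per hyper-parameter; `compute_gradient_entry` is a hypothetical stand-in for the real per-parameter computation, not Shogun's API:

```cpp
// Sketch: compute one gradient entry per hyper-parameter, each on its own
// pthread. Compile with: g++ -pthread sketch.cpp
#include <pthread.h>
#include <vector>
#include <cstdio>

struct GradientTask
{
    int param_index;   // which hyper-parameter this thread handles
    double result;     // gradient entry written by the worker
};

// Hypothetical stand-in for the per-parameter gradient computation,
// e.g. something like tr(A * dK/dtheta_i) in a GP marginal-likelihood gradient.
static double compute_gradient_entry(int param_index)
{
    return -0.5 * param_index; // placeholder value
}

static void* worker(void* arg)
{
    GradientTask* task = static_cast<GradientTask*>(arg);
    task->result = compute_gradient_entry(task->param_index);
    return nullptr;
}

int main()
{
    const int num_params = 4;
    std::vector<GradientTask> tasks(num_params);
    std::vector<pthread_t> threads(num_params);

    for (int i = 0; i < num_params; ++i)
    {
        tasks[i].param_index = i;
        pthread_create(&threads[i], nullptr, worker, &tasks[i]);
    }
    for (int i = 0; i < num_params; ++i)
        pthread_join(threads[i], nullptr);

    for (int i = 0; i < num_params; ++i)
        std::printf("dL/dtheta_%d = %f\n", i, tasks[i].result);
    return 0;
}
```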
About moving all GP parameters to the log domain:
- not all parameters are positive (e.g., for `GaussianARDKernel`, where the weights are a `FULL_Matrix` whose entries can be negative)
- adding a gradient transformation layer is not easy, since we have...
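For the positive parameters the transformation layer would just be the chain rule; a sketch, writing the log-domain parameter as $\eta$:

```latex
\theta = e^{\eta} > 0, \qquad
\frac{\partial \mathcal{L}}{\partial \eta}
  = \frac{\partial \mathcal{L}}{\partial \theta}\,
    \frac{\partial \theta}{\partial \eta}
  = \theta\,\frac{\partial \mathcal{L}}{\partial \theta}
```

But this mapping is undefined for parameters that can be zero or negative, such as the `FULL_Matrix` ARD weights, which is why a single log-domain convention does not cover everything.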
@karlnapf Let me know if you agree with my idea so that I can implement it.
@karlnapf The existing model selection (`CGradientModelSelection`) does not do gradient transformation, which means there is a bug whenever a parameter and its gradient are in different domains.
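To make the bug concrete, a toy sketch (hypothetical code, not `CGradientModelSelection`'s actual API): if a parameter is stored as $\eta = \log\theta$ but the model returns $\partial L/\partial\theta$, the update must rescale the gradient by $\theta$ first:

```cpp
// Sketch: updating a log-domain parameter with a standard-domain gradient.
#include <cmath>
#include <cstdio>

int main()
{
    double theta = 2.0;              // parameter in the standard domain
    double eta = std::log(theta);    // same parameter in the log domain
    double grad_theta = -0.3;        // dL/dtheta as computed by the model

    // Buggy: domains mixed, dL/dtheta used as if it were dL/deta.
    double eta_buggy = eta - 0.1 * grad_theta;

    // Correct: chain rule, dL/deta = theta * dL/dtheta.
    double grad_eta = theta * grad_theta;
    double eta_fixed = eta - 0.1 * grad_eta;

    std::printf("buggy eta update:   %f\n", eta_buggy);
    std::printf("correct eta update: %f\n", eta_fixed);
    return 0;
}
```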
@karlnapf Any feedback on the model selection?
GPML keeps parameters in the log domain if they are positive, and in the standard domain if they can be negative or zero. UPDATED: (`unconstrained representation`) if we can break...
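A sketch of that GPML-style convention with assumed names (nothing here is actual Shogun or GPML code): each parameter carries a flag saying which domain it lives in, and the transform/inverse pair is chosen per parameter:

```cpp
// Sketch: per-parameter unconstrained representation.
#include <cmath>
#include <cstdio>

enum class Domain { Log, Standard };

// Map a parameter to its unconstrained representation.
double to_unconstrained(double theta, Domain d)
{
    return d == Domain::Log ? std::log(theta) : theta;
}

// Map back from the unconstrained representation.
double from_unconstrained(double eta, Domain d)
{
    return d == Domain::Log ? std::exp(eta) : eta;
}

int main()
{
    double kernel_width = 1.5;  // positive -> log domain
    double ard_weight = -0.7;   // may be negative -> standard domain

    double eta_w = to_unconstrained(kernel_width, Domain::Log);
    double eta_a = to_unconstrained(ard_weight, Domain::Standard);

    std::printf("width: %f <-> %f\n", eta_w, from_unconstrained(eta_w, Domain::Log));
    std::printf("ard:   %f <-> %f\n", eta_a, from_unconstrained(eta_a, Domain::Standard));
    return 0;
}
```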
If you think adding a new field to `TParameter` is possible, I may need some help from the authors of the class. Adding a new field to `TParameter` is not trivial...