ivan-marroquin
Hi @JanRhoKa Many thanks for the suggestion. I don't have a Conda installation, just a plain Python installation. I tried your approach, but unfortunately I still got the same issue. Then,...
Hi Greg, Thanks for the prompt answer and explanation. So, I assume that if I only have continuous data, the normalized mutual information can be computed as I(X;Z) / H(Z)...
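As a minimal sketch of that normalization on continuous data (the equal-width binning, bin count, and helper names are my own choices for illustration, not from the thread; a k-NN estimator such as Kraskov's would avoid binning altogether):

```python
import math
from collections import Counter

def entropy(labels):
    # Shannon entropy (in nats) of a sequence of discrete labels
    n = len(labels)
    return -sum((c / n) * math.log(c / n) for c in Counter(labels).values())

def mutual_information(x, z):
    # I(X;Z) = H(X) + H(Z) - H(X,Z) for discrete sequences
    return entropy(x) + entropy(z) - entropy(list(zip(x, z)))

def normalized_mi(x, z, bins=10):
    # Discretize continuous values into equal-width bins, then
    # normalize the mutual information by H(Z), i.e. I(X;Z) / H(Z)
    def discretize(v):
        lo, hi = min(v), max(v)
        width = (hi - lo) / bins or 1.0
        return [min(int((a - lo) / width), bins - 1) for a in v]
    xd, zd = discretize(x), discretize(z)
    return mutual_information(xd, zd) / entropy(zd)
```

With this convention, a variable compared against itself gives a normalized score of 1, since I(Z;Z) = H(Z).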
Hi @akshayka Thanks for asking for feedback. With my datasets, it seems that the Minkowski distance with p < 1 is a better choice for dealing with the issue of distance concentration....
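A minimal sketch of the fractional Minkowski "distance" mentioned above (the function name is my own; for 0 < p < 1 it is not a true metric, since the triangle inequality fails, but it spreads out pairwise distances in high dimensions, which is why it can mitigate distance concentration):

```python
def minkowski(u, v, p):
    # Minkowski distance of order p between two equal-length vectors.
    # For p >= 1 this is the usual L_p metric; for 0 < p < 1 it is a
    # fractional dissimilarity that exaggerates small coordinate-wise
    # differences relative to the Euclidean (p = 2) case.
    return sum(abs(a - b) ** p for a, b in zip(u, v)) ** (1.0 / p)
```

For example, the pair (0, 0) and (3, 4) is 5 apart under p = 2, 7 apart under p = 1, and farther still under p = 0.5.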
Hi @akshayka Correct, I believe that it will be beneficial to use PyNNDescent to find the k-nearest neighbors, so that PyMDE can then compute a lower-dimensional embedding while...
Hi @vtraag Thanks for taking into consideration my enhancement request and for providing the link with instructions to implement a new modularity. Unfortunately, I am not a good C++ programmer....
Any comments?
Hi @nickkunz Thanks for looking into this issue. The background data consist of zeros, while the outliers are values higher than 0.50 (see attached plot). Hope this helps, Ivan
Hi @nickkunz Hoping that you are doing well. I was wondering if you had the chance to look into this issue? Kind regards, Ivan
Hi @rkrishna116 Thanks for the workaround! I will give it a try. I found another approach to address the need for minority values in continuous data, and it is "data discretization"....
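One way to sketch that discretization idea is equal-frequency (quantile) binning, which turns a continuous target into class labels so that minority regions become explicit classes (the function name and bin scheme are illustrative assumptions, not from the thread):

```python
def equal_frequency_bins(values, n_bins):
    # Assign each continuous value to one of n_bins quantile-based
    # classes: sort the values, then give each rank-block of roughly
    # equal size the same integer label.
    order = sorted(range(len(values)), key=lambda i: values[i])
    labels = [0] * len(values)
    for rank, idx in enumerate(order):
        labels[idx] = min(rank * n_bins // len(values), n_bins - 1)
    return labels
```

Equal-frequency bins (rather than equal-width ones) guarantee that no class is empty, which matters when the continuous values are heavily skewed.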
Hi @avati Thanks for your prompt answer. I made the change to the code, in which the XGBoost n_estimators = 1 while the NGBoost n_estimators = 300. Unfortunately, I still get the...