Tom Dupré la Tour
A workaround can be done with `SelectKBest` and a custom dummy scoring function that ranks features in decreasing order, combined for instance with `PCA` (see the sketch below).
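A minimal sketch of this workaround; the `decreasing_scores` name, the `PCA` step, and the synthetic dataset are only illustrative assumptions:

```py
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest
from sklearn.pipeline import make_pipeline


def decreasing_scores(X, y):
    # Dummy scores in decreasing order, so SelectKBest(k) simply keeps
    # the first k features (here, the first k PCA components).
    return -np.arange(X.shape[1])


X, y = make_classification(n_samples=100, n_features=20, random_state=0)

pipe = make_pipeline(PCA(n_components=10), SelectKBest(decreasing_scores, k=3))
X_reduced = pipe.fit_transform(X, y)
print(X_reduced.shape)  # (100, 3)
```

With this trick, the number of kept components becomes a regular `k` hyperparameter of `SelectKBest`, which can be tuned like any other pipeline parameter.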
For k-NN, a workaround is to precompute the graph with the largest number of neighbors considered, and give the precomputed graph to a subsequent estimator. For example, see the sketch below.
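A minimal sketch of this pattern using scikit-learn's `KNeighborsTransformer`; the synthetic data, the `max_n_neighbors` value, and the parameter grid are only illustrative:

```py
from tempfile import TemporaryDirectory

from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier, KNeighborsTransformer
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Precompute the k-NN graph once, with the largest number of neighbors
# considered, then let the classifier reuse it via metric="precomputed".
max_n_neighbors = 30

with TemporaryDirectory() as tmpdir:
    pipe = make_pipeline(
        KNeighborsTransformer(n_neighbors=max_n_neighbors, mode="distance"),
        KNeighborsClassifier(metric="precomputed"),
        memory=tmpdir,  # cache the transformer so the graph is not recomputed
    )
    # Only values <= max_n_neighbors are valid here.
    param_grid = {"kneighborsclassifier__n_neighbors": [5, 10, 20]}
    grid = GridSearchCV(pipe, param_grid)
    grid.fit(X, y)
    print(grid.best_params_)
```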
> This looks very good. Do you use it for speed up or regularization? Both, but I use it mostly for speed, since it goes from O(N * N) to...
There are [several other links](https://github.com/scikit-learn-contrib/skope-rules/search?p=1&q=skope-rules%2Fskope-rules&type=&utf8=%E2%9C%93) to the old repo.
Hi, have you tried rescaling your input data, for example dividing the arrays by their standard deviation?
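For example (just an illustration with random data; `sklearn.preprocessing.StandardScaler` does the same thing plus mean-centering):

```py
import numpy as np

rng = np.random.RandomState(0)
# Features with very different scales.
X = rng.randn(100, 4) * np.array([1e-3, 1.0, 1e2, 1e5])

# Rescale each column by its standard deviation.
X_scaled = X / X.std(axis=0)
print(X_scaled.std(axis=0))  # all equal to 1
```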
What values did you use for regularization? Adequate regularization can vary by several orders of magnitude depending on the problem. See for example the default range of regularization in...
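As an illustration (the exact range is problem-dependent, and `RidgeCV` is only one example of an estimator with a built-in regularization search), a log-spaced grid spanning many orders of magnitude is a reasonable starting point:

```py
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import RidgeCV

X, y = make_regression(n_samples=100, n_features=50, noise=10.0, random_state=0)

# Search the regularization strength over many orders of magnitude.
alphas = np.logspace(-5, 5, 11)
model = RidgeCV(alphas=alphas).fit(X, y)
print(model.alpha_)  # best regularization found by leave-one-out CV
```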
I agree it would be nice to have the legend on the side. Plus we could do with a narrower plot.
A variant of option **3** would be to write a custom R function that only loads the input arrays and returns an empty coefficient array. Timing its call would give...
I don't have a precise implementation in mind, maybe just a single R function that takes (`X`, `y`, `shape`) and returns an empty array of shape `shape`. But it might...