MLJTuning.jl
Hyperparameter optimization algorithms for use in the MLJ machine learning framework
No new release required.
Some context: https://github.com/JuliaML/TableTransforms.jl/issues/67 I don't think this would be too bad, and useful preparation for making the MLJ model interface more flexible later. The MLJTuning API doesn't really touch on...
```julia
X, y = make_blobs()
model = (@load RandomForestClassifier pkg=DecisionTree)()
mach = machine(model, X, y)
r = range(model, :n_trees, lower=10, upper=70, scale=:log10)
many_curves = learning_curve(mach, range=r, resampling=Holdout(),
                             measure=cross_entropy, rng_name=:rng, rngs=1)
```
...
Hi everyone, While benchmarking some toy grid searches, I obtained odd results, and it seemed to me that performing a grid search using a `TunedModel` is slower than it should...
Julia HP optimization packages:
- [ ] [Hyperopt.jl](https://github.com/baggepinnen/Hyperopt.jl) @baggepinnen (random search, Latin hypercube sampling, Bayesian optimization)
- [ ] [TreeParzen.jl](https://github.com/IQVIA-ML/TreeParzen.jl) (port of Hyperopt.py to Julia) @IQVIA-ML @iqml
- [ ] ...
Currently in MLJ acceleration with `CPUThreads` is implemented using `@distributed`. This effectively splits up the given range (`1:nfolds` or `1:nmetamodels`) into equal chunks and sends them off to all workers...
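As a rough sketch only (not MLJ's actual implementation), splitting a range such as `1:nfolds` or `1:nmetamodels` into equal chunks, one per worker, might look like this; `chunk_range` is a hypothetical helper:

```julia
# Hypothetical illustration of the chunking described above: divide a range
# into `nchunks` nearly equal contiguous subranges, one per worker.
function chunk_range(r::UnitRange{Int}, nchunks::Int)
    len, rem = divrem(length(r), nchunks)
    chunks = UnitRange{Int}[]
    start = first(r)
    for i in 1:nchunks
        # the first `rem` chunks get one extra element
        stop = start + len - 1 + (i <= rem ? 1 : 0)
        push!(chunks, start:stop)
        start = stop + 1
    end
    return chunks
end

chunk_range(1:10, 3)  # [1:4, 5:7, 8:10]
```

Each chunk would then be dispatched to a worker, e.g. via `Distributed.@distributed` or `remotecall`, rather than scheduling the ten items individually.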
One dimensional range in MLJBase, how does that fit with MLJTuning and with the generalisation where you may want to specify “spaces” for sets of parameters. It might be interesting...
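For context, a one-dimensional range and a "space" built from several such ranges might look like the sketch below; `DecisionTreeClassifier` and the particular hyperparameters are chosen only for illustration:

```julia
using MLJ

model = (@load DecisionTreeClassifier pkg=DecisionTree verbosity=0)()

# a one-dimensional numeric range
r1 = range(model, :max_depth, lower=1, upper=10)

# a one-dimensional nominal range, given by explicit values
r2 = range(model, :post_prune, values=[true, false])

# a "space" over a set of parameters could then be represented as a
# vector of one-dimensional ranges:
space = [r1, r2]
```

Whether such a vector is a rich enough representation, or whether a dedicated "space" abstraction is needed, is the open question here.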
See https://github.com/JuliaAI/MLJTuning.jl/pull/183#issuecomment-1238709651
Because of #210 we have redundant keys in the history entries (all keys except `:metadata` and the new `:evaluation` are redundant).
Details in alan-turing-institute/MLJ.jl#1029.
- Adding parametric type `L` for loggers (detailed implementation in MLJBase.jl).