Eddie Bell
Thank you for your quick reply. Yes, I think the config, logs and temp dir would have to be split out. Our other option is to build a custom docker...
I will compare; the original paper did some comparisons with SGD (not sklearn's implementation) and found that the projection step and adaptive learning rate improved performance.
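For reference, the per-step update from the original paper looks roughly like this (a minimal sketch with illustrative names, not the repo's implementation):

```python
import numpy as np

def pegasos_step(w, x, y, lam, t):
    """One Pegasos update: hinge-loss subgradient step with the adaptive
    learning rate eta_t = 1 / (lam * t), followed by projection onto the
    ball of radius 1/sqrt(lam)."""
    eta = 1.0 / (lam * t)
    if y * np.dot(w, x) < 1:      # margin violated: loss term contributes
        w = (1 - eta * lam) * w + eta * y * x
    else:                         # only the regularisation term contributes
        w = (1 - eta * lam) * w
    # projection step: keep ||w|| <= 1/sqrt(lam)
    norm = np.linalg.norm(w)
    if norm > 0:
        w = min(1.0, (1.0 / np.sqrt(lam)) / norm) * w
    return w
```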
Here are some benchmarks with identical learning rates: https://raw.github.com/ejlb/pegasos/master/benchmarks/benchmarks.png Pegasos seems to be slightly more accurate (1%). The only two differences I know of are: 1) pegasos projection 2) pegasos...
@amueller SGDClassifier trains on the whole data set at each iteration, I assume? That is probably where the speed increase comes from. Edit: yes, true, that would be a good...
Yeah, I will run some benchmarks with equal weight updates
This makes much more sense: https://raw.github.com/ejlb/pegasos/master/benchmarks/weight_updates/benchmarks.png Perhaps batching the Pegasos weight updates would retain the slight accuracy boost and improve the training time (rough sketch below).
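By "batching" I mean something like the mini-batch update from the paper; a rough sketch of what that might look like (not code from the repo):

```python
import numpy as np

def pegasos_minibatch_step(w, X_batch, y_batch, lam, t):
    """One mini-batch Pegasos update: average the hinge-loss subgradient
    over the examples in the batch that violate the margin, then project."""
    eta = 1.0 / (lam * t)
    k = len(y_batch)
    margins = y_batch * (X_batch @ w)
    violators = margins < 1          # only these contribute to the subgradient
    grad = (y_batch[violators, None] * X_batch[violators]).sum(axis=0)
    w = (1 - eta * lam) * w + (eta / k) * grad
    # same projection as the single-example update
    norm = np.linalg.norm(w)
    if norm > 0:
        w = min(1.0, (1.0 / np.sqrt(lam)) / norm) * w
    return w
```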
I used this: SGDClassifier(power_t=1, learning_rate='invscaling', n_iter=sample_coef, eta0=0.01). The full benchmark is here: https://github.com/ejlb/pegasos/blob/master/benchmarks/weight_updates/benchmark.py
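A stripped-down version of that setup, for anyone who doesn't want to read the linked script (toy data stands in for the benchmark set, a fixed iteration count stands in for sample_coef, and newer scikit-learn spells n_iter as max_iter):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

# toy data standing in for the benchmark data set
X, y = make_classification(n_samples=5000, n_features=50, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# same settings as above; max_iter=10 is a placeholder for sample_coef
clf = SGDClassifier(power_t=1, learning_rate='invscaling',
                    max_iter=10, eta0=0.01)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))
```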
Sorry to raise this from the dead, but is there an easy way to map hp.choice indices to the hyper-parameter values? At the moment I'm doing it myself but it...
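For context, a toy example of the index-to-value mapping I mean; hyperopt's space_eval appears to do this directly (the space and objective below are just placeholders):

```python
from hyperopt import hp, fmin, tpe, space_eval

# toy search space: hp.choice reports the *index* of the chosen option
space = {
    'kernel': hp.choice('kernel', ['linear', 'rbf', 'poly']),
    'C': hp.uniform('C', 0.1, 10.0),
}

def objective(params):
    # dummy objective just to make the example runnable
    return params['C']

best = fmin(objective, space, algo=tpe.suggest, max_evals=10)
print(best)                      # e.g. {'kernel': 2, 'C': 0.37} -- index, not value
print(space_eval(space, best))   # maps the index back, e.g. {'kernel': 'poly', ...}
```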