Eben Olson
I wanted something like this in order to implement [batchwise dropout](http://arxiv.org/abs/1502.02478). I think what I was considering was pretty similar to your case 2 - layers would return a mask...
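For reference, a minimal NumPy sketch of the batch-wise dropout idea from that paper (the function name and inverted-dropout scaling are my own, not from any existing implementation): one mask is sampled per minibatch, so applying it is equivalent to slicing the weight matrix.

```python
import numpy as np

def batchwise_dropout_dense(x, W, p=0.5, rng=np.random):
    # Sample ONE keep-mask for the whole minibatch (shape: n_in,),
    # instead of an independent mask per sample.
    keep = rng.rand(x.shape[1]) > p
    # Because every sample shares the mask, masking is equivalent to
    # slicing out the kept columns of x and rows of W, which is where
    # the speedup described in the paper comes from.
    return x[:, keep].dot(W[keep]) / (1.0 - p)

# 32 samples through a 100 -> 50 dense layer
x = np.random.randn(32, 100)
W = np.random.randn(100, 50)
y = batchwise_dropout_dense(x, W, p=0.5)
```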
> We should try to collect more use cases. @ebenolson, how would you implement batch-wise dropout with multi-output layers?

I'm not sure. Part of the reason I stopped working on...
Can you post a minimal script to replicate the error?
Looks like the pylearn2 cuda_convnet wrapper might need a fix like this one? https://github.com/Theano/Theano/pull/3774
I've found I get better throughput for loading + data augmentation by using processes and writing results as pickles in shared memory (`/run/shm` on Ubuntu). I think that `joblib` is...
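In case it's useful, here's a rough sketch of that setup, assuming a user-supplied `load_and_augment` function; only filenames travel through the queues, while the actual arrays go through tmpfs:

```python
import os
import pickle
import uuid
from multiprocessing import Process, Queue

SHM_DIR = '/run/shm'  # tmpfs on Ubuntu; /dev/shm on most other distros

def worker(job_queue, result_queue):
    # Each worker loads + augments a batch, pickles it into shared
    # memory, and sends back only the path.
    for indices in iter(job_queue.get, None):
        batch = load_and_augment(indices)  # hypothetical user function
        path = os.path.join(SHM_DIR, uuid.uuid4().hex + '.pkl')
        with open(path, 'wb') as f:
            pickle.dump(batch, f, protocol=pickle.HIGHEST_PROTOCOL)
        result_queue.put(path)

def get_batch(result_queue):
    # The training process unpickles a finished batch and cleans up.
    path = result_queue.get()
    with open(path, 'rb') as f:
        batch = pickle.load(f)
    os.remove(path)
    return batch

jobs, results = Queue(), Queue()
procs = [Process(target=worker, args=(jobs, results)) for _ in range(4)]
for p in procs:
    p.start()
```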
I think it would be good to have a separation between model container / training loop and dataset container / batch generator. The model would have `train` and `predict` (+`evaluate`?)...
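Something like this is what I have in mind (just a sketch, the names are placeholders):

```python
import numpy as np

class Dataset(object):
    # Owns the data and knows how to produce batches.
    def __init__(self, X, y, batch_size=128):
        self.X, self.y, self.batch_size = X, y, batch_size

    def batches(self, shuffle=True):
        order = (np.random.permutation(len(self.X)) if shuffle
                 else np.arange(len(self.X)))
        for start in range(0, len(order), self.batch_size):
            idx = order[start:start + self.batch_size]
            yield self.X[idx], self.y[idx]

class Model(object):
    # Owns parameters and compiled functions; knows nothing about
    # where batches come from.
    def train(self, Xb, yb):
        raise NotImplementedError

    def predict(self, Xb):
        raise NotImplementedError

    def evaluate(self, Xb, yb):
        raise NotImplementedError

# The training loop is then just glue between the two:
def fit(model, dataset, epochs=10):
    for _ in range(epochs):
        for Xb, yb in dataset.batches():
            model.train(Xb, yb)
```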
```
net = NeuralNet(
    # bunch of stuff...
    on_epoch_finished=[
        SaveWeights(
            'models/model-{epoch}-{timestamp}.pkl',
            only_best=True,
        ),
    ],
)
```

This is sorta what I think should be avoided. Now there needs to be...
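For contrast, the same behavior with an explicit loop, so that `only_best` becomes a plain if-statement (my own illustration, with hypothetical `train_one_epoch` and `validate` helpers, not a proposed API):

```python
best_loss = float('inf')
for epoch in range(num_epochs):
    train_one_epoch(net)   # hypothetical helpers, not nolearn API
    loss = validate(net)
    if loss < best_loss:
        best_loss = loss
        net.save_params_to('models/model-best.pkl')
```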
I made something along those lines a while back, although it probably needs updating to work with the current code: https://gist.github.com/ebenolson/1682625dc9823e27d771