Nicholas Léonard

Results 54 comments of Nicholas Léonard

Are you sure Caffe also updates the parameters (or accumulates the parameter gradients) in that code?

Yeah, your link seems to confirm it. Yet the difference between forward and backward is so much smaller for Caffe. I wonder what their secret is. Isn't SpatialConvolutionMM supposed to...

@f0k I think a call to `cudaDeviceSynchronize` after the calls to forward/backward would do it. Simpler than events, and it conforms to the other benchmarks.
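A minimal sketch of that timing pattern, in pure Python: `synchronize` stands in for `cudaDeviceSynchronize` (GPU kernel launches return immediately, so the clock must only be read after the device has drained its queue), and `run_pass` is a hypothetical forward/backward callable — neither name is from the actual benchmark code.

```python
import time

def timed_pass(run_pass, synchronize=lambda: None, iterations=10):
    """Average the wall-clock time of `run_pass` over `iterations` runs,
    calling `synchronize` (a stand-in for cudaDeviceSynchronize) before
    reading the clock on either side, so no queued work is missed."""
    synchronize()                        # drain any pending work first
    start = time.perf_counter()
    for _ in range(iterations):
        run_pass()
    synchronize()                        # wait for the last pass to finish
    return (time.perf_counter() - start) / iterations
```

On a real GPU backend, `synchronize` would be the device-wide barrier; with events you would instead record a start/stop event pair, which is why the comment calls this the simpler option.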

@nouiz @kyunghyuncho good job on that LSTM tutorial guys. Looks great.

Can it sample double digits from random latent variables instead of an input?

Okay so you want one timing for the entirety of the `forward/backward`.

Oh, and just so we are clear, does this include an in-place parameter update? (In Torch: `backward` or `backwardUpdate` vs just `updateGradInput`.)

(Originally in French.) Okay, so Fred, it will be a benchmark that includes the forward pass, as well as a parameter update using the gradients. And the update can be done in-place.
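The protocol described above — forward, backward, then an in-place parameter update using the gradients — could be sketched as follows. All names here are illustrative stand-ins, not the actual benchmark code; `forward` and `backward` are caller-supplied callables, and `backward` is assumed to fill `grads`.

```python
def train_step(params, grads, forward, backward, lr=0.01):
    """One benchmark step: forward pass, backward pass, then a plain
    in-place SGD update (p -= lr * g) on the parameter lists."""
    forward()
    backward()                       # assumed to fill `grads`
    for p, g in zip(params, grads):
        for i in range(len(p)):
            p[i] -= lr * g[i]        # in-place parameter update
    return params
```

Timing the whole of `train_step` gives one number covering forward, backward, and the update, which matches the protocol being agreed on in these comments.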

Okay, so the Theano backward is complete: https://github.com/soumith/convnet-benchmarks/pull/11. Thanks @f0k.

@soumith It would be a nice feature request for the next NVIDIA cuDNN release. The lack of zero-masking is the only reason I am still not using cuDNN LSTMs.