Erfan Noury
Hi, what's the status of this PR? Was there a problem that kept you from merging it into the main code? Are you getting correct results? Also, which script should one use...
Hi @fmassa, Yes, I understand. I'm looking for a working implementation of **Fast** R-CNN in either Torch or PyTorch, but unfortunately there aren't any other working implementations out there except...
I have a working implementation of Layer Normalization for LSTM [(LN-LSTM)](https://github.com/erfannoury/seq2seq-lasagne/blob/master/CustomLSTMLayer.py#L17-L650) that you may want to take a look at. We can further improve it if you notice anything that needs to be...
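For reference, layer normalization normalizes each sample over its hidden units rather than over the batch. A minimal NumPy sketch of the core computation (an illustration only, not the linked Lasagne code; the gain/bias names `g` and `b` are hypothetical):

```python
import numpy as np

def layer_norm(a, g, b, eps=1e-5):
    """Normalize pre-activations `a` of shape (batch, n_hidden) per sample,
    then rescale with gain `g` and shift with bias `b`, both shape (n_hidden,)."""
    mu = a.mean(axis=1, keepdims=True)       # per-sample mean over hidden units
    sigma = a.std(axis=1, keepdims=True)     # per-sample std over hidden units
    return g * (a - mu) / (sigma + eps) + b

# In an LN-LSTM this would be applied to each gate's pre-activations
# (input, forget, cell, output) before the nonlinearities.
a = np.random.randn(4, 8).astype('float32')  # toy batch of pre-activations
g = np.ones(8, dtype='float32')
b = np.zeros(8, dtype='float32')
print(layer_norm(a, g, b).shape)             # (4, 8)
```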
It is mentioned in Section 6.7 of the Layer Normalization paper that this normalization technique won't work as well as Batch Normalization for convolutional layers. So I think...
Would something like this work? [create_param.py](https://gist.github.com/erfannoury/6f42b6098e4cbbabd1f114766c212506)
Yes, Ok. I'm working on it.
See the PR https://github.com/Lasagne/Lasagne/pull/695
I think using the default implementation of Hierarchical Softmax (`theano.tensor.nnet.h_softmax`) won't work when the first dimension (the batch size) is `None`. Therefore a somewhat different implementation is required.
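For context, a sketch of the call that exposes the issue. Parameter shapes follow the Theano documentation for `h_softmax`; all sizes here are hypothetical, and with the batch dimension symbolic the graph construction can fail inside the function's internal `reshape`s, per the comment above:

```python
import numpy as np
import theano
import theano.tensor as T
from theano.tensor.nnet import h_softmax

n_in, n_outputs = 16, 64             # hypothetical sizes
n_classes = n_outputs_per_class = 8  # two-level factorization of the output

x = T.matrix('x')                    # batch size is None / symbolic

floatX = theano.config.floatX
W1 = theano.shared(np.zeros((n_in, n_classes), dtype=floatX))
b1 = theano.shared(np.zeros((n_classes,), dtype=floatX))
W2 = theano.shared(np.zeros((n_classes, n_in, n_outputs_per_class), dtype=floatX))
b2 = theano.shared(np.zeros((n_classes, n_outputs_per_class), dtype=floatX))

# batch_size is documented as an int; with an unknown batch size the only
# thing one can pass is the symbolic x.shape[0], so the internal reshapes
# receive symbolic shapes, which is where the implementation breaks down.
probs = h_softmax(x, x.shape[0], n_outputs, n_classes,
                  n_outputs_per_class, W1, b1, W2, b2)
```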
I think these lines cause the problem: [nnet.py#L2315-L2316](https://github.com/Theano/Theano/blob/master/theano/tensor/nnet/nnet.py#L2315-L2316). Adding `ndim=2` and `ndim=3` to these `reshape` calls, respectively, should fix the problem.
I passed the `ndim` values as arguments to the `reshape` calls.
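The underlying trick: Theano's `reshape` accepts an explicit `ndim` argument, so the output rank doesn't have to be inferred from a shape tuple that contains symbolic values. A minimal illustration, with hypothetical sizes (not the actual `h_softmax` internals):

```python
import theano.tensor as T

x = T.matrix('x')   # shape (None, n_in): batch size unknown at compile time
n_classes = 8       # hypothetical size

# Making the output rank explicit sidesteps the inference that can fail
# when the target shape contains symbolic entries such as x.shape[0]:
y3 = x.reshape((x.shape[0], n_classes, -1), ndim=3)
y2 = y3.reshape((x.shape[0] * n_classes, -1), ndim=2)
```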