
Input shape issue and lack of bias.

Open mikumeow opened this issue 6 years ago • 9 comments

The first problem is that in ConvLSTM.forward, the code uses the same x = input at every timestep. I guess the input shape of the forward function should be changed to

[sequence, bsize, channel, x, y] 

instead of the original

[bsize, channel, x, y]

And the x = input line should be changed to

x = input[step]

for each step. I am still studying whether it's appropriate to loop over layers inside the loop over timesteps, but after training your current code (with the change mentioned above), I can get decent outcomes.
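A minimal sketch of the proposed fix. Note this is illustrative, not the repo's code: the toy cell below fuses all four gates into a single convolution (Keras ConvLSTM2D-style) instead of the repo's separate Wxi/Whi convolutions and Hadamard peephole Wci, and `forward_sequence`, `ToyConvLSTMCell`, and their parameters are hypothetical names:

```python
import torch
import torch.nn as nn

class ToyConvLSTMCell(nn.Module):
    """Minimal stand-in cell (illustrative; not the repo's ConvLSTMCell)."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        # bias=True supplies the per-gate bias terms in one place
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k,
                               padding=k // 2, bias=True)

    def forward(self, x, h, c):
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

def forward_sequence(cells, input):
    # input: [sequence, bsize, channel, x, y] -- the proposed shape
    seq_len, bsize, _, height, width = input.shape
    states = [(torch.zeros(bsize, cell.hid_ch, height, width),
               torch.zeros(bsize, cell.hid_ch, height, width))
              for cell in cells]
    outputs = []
    for step in range(seq_len):            # outer loop: timesteps
        x = input[step]                    # the suggested x = input[step] fix
        for i, cell in enumerate(cells):   # inner loop: layers
            h, c = cell(x, *states[i])
            states[i] = (h, c)
            x = h                          # layer i's output feeds layer i+1
        outputs.append(x)
    return torch.stack(outputs)            # [sequence, bsize, hidden, x, y]
```

This is the "loop layers within loops of timesteps" ordering discussed below: the time loop is outermost and each layer's output feeds the next layer at the same step.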

The second problem is that in ConvLSTMCell, there are no biases. For example, in

ci = torch.sigmoid(self.Wxi(x) + self.Whi(h) + c * self.Wci)

While it should be something like

ci = torch.sigmoid(self.Wxi(x) + self.Whi(h) + c * self.Wci + self.Bci)

But I don't know whether such constants would affect the backward pass.
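On the question of whether an added bias would affect the backward pass: autograd treats an extra learnable term like any other parameter. A tiny hedged sketch (the names `Wci` and `Bci` and the shapes are illustrative, matching the formula above, not the repo's actual tensors):

```python
import torch

# Hypothetical names/shapes mirroring the gate formula above
Wci = torch.randn(4, 8, 8, requires_grad=True)   # peephole weight
Bci = torch.zeros(4, 8, 8, requires_grad=True)   # the proposed added bias
c = torch.randn(2, 4, 8, 8)                      # cell state, batch of 2

ci = torch.sigmoid(c * Wci + Bci)                # input gate with bias
ci.sum().backward()
print(Bci.grad is not None)                      # True: gradients reach Bci
```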

P.S. I'm a beginner myself, so maybe I'm wrong. Please reply :)

mikumeow avatar May 09 '19 10:05 mikumeow

By the way, this is the result of the code after the x = input[step] change. I'm training on a moving-squares dataset adapted from Keras's ConvLSTM2D example code here. After 1 epoch * 5000 batches * 6 sequences per batch, here is a random result and the ground truth: image

So great, it worked!! Cheers to the author! I'll try to add bias to ConvLSTMCell sometime later.

mikumeow avatar May 09 '19 10:05 mikumeow

@mikumeow Hi, could you please share your code which works on the Keras's example? Many thanks :-)

yaorong0921 avatar May 27 '19 11:05 yaorong0921

@yaorong0921 Hello! Thanks for replying. But sorry, I am still adjusting this code, because my later experiments with it revealed some issues.

I am currently checking things like the loss function and how this model handles batches, in accordance with the implementation of ConvLSTM in TensorFlow. Also, I was wrong about the bias, because the model already adds bias here:

self.Wxi = nn.Conv2d(self.input_channels, self.hidden_channels, self.kernel_size, 1, self.padding, bias=True)
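As the line above shows, each gate convolution is created with `bias=True`, so a separate `Bci`-style constant is redundant. A quick check that `nn.Conv2d(..., bias=True)` already carries a learnable bias that receives gradients (channel counts here are arbitrary, just for illustration):

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 8, kernel_size=3, padding=1, bias=True)
print(conv.bias.shape)               # torch.Size([8]): one bias per output channel

out = conv(torch.randn(1, 3, 5, 5)).sum()
out.backward()
print(conv.bias.grad is not None)    # True: the bias is trained like any weight
```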

mikumeow avatar May 28 '19 11:05 mikumeow

@mikumeow I ran into a similar problem to yours, with the same x being reused (the sequence dimension is missing). I think your method should be right; I will try it and report back. And indeed it doesn't lack bias. Also, ConvLSTM doesn't seem to need a step parameter, since it can be obtained from x.size()[0].

EthanHe001 avatar Jul 25 '19 02:07 EthanHe001

@mikumeow Is it appropriate to loop over layers inside the loop over timesteps?

jhhuang96 avatar Aug 05 '19 13:08 jhhuang96

I think iterating over timesteps seems reasonable

emjay73 avatar Aug 08 '19 23:08 emjay73

@mikumeow Is it appropriate to loop over layers inside the loop over timesteps?

It seems OK, since any hidden state is independent of future hidden states, so there is no need to compute all the hidden states across time before looping over layers. @mikumeow also mentioned that decent results were obtained with this code once he used x = input[step].

tianfudhe avatar Nov 12 '19 03:11 tianfudhe


Hi, I agree with your question about the lack of bias...

But I am still only a beginner with ConvLSTM; I can understand the principle but cannot yet use it. Since you have successfully used the author's Convlstm_pytorch, could you please send me the code for this successful prediction image (the one adapted from Keras)? I would be very grateful, because learning ConvLSTM is really painful.

ghost avatar Jan 03 '20 07:01 ghost

Could you please send me the code for this successful prediction image (from Keras)? Thank you

to19851985 avatar Jun 01 '20 15:06 to19851985