Input shape issue and lack of bias.
The first problem is that in ConvLSTM.forward, the code uses the same x = input at every timestep. I guess the input shape of the forward function should be changed to
[sequence, bsize, channel, x, y]
instead of the original
[bsize, channel, x, y]
and the x = input line should be changed to
x = input[step]
so that each step gets its own frame. I am still studying whether it's appropriate to loop over layers inside the loop over timesteps, but after training your current code (with the change mentioned above), I can get decent outcomes.
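For reference, here is a minimal sketch of the change I mean. The attribute names (self.num_layers, self.cells, self.hidden_channels, init_hidden) are placeholders of mine, not necessarily the author's exact ones:

def forward(self, input):
    # input: [sequence, bsize, channel, x, y] instead of [bsize, channel, x, y]
    seq_len = input.size(0)
    internal_state = []
    outputs = []
    for step in range(seq_len):
        x = input[step]  # was: x = input, i.e. the same frame at every step
        for layer in range(self.num_layers):
            if step == 0:
                # lazily initialize (h, c) from the first frame's shape
                h, c = self.cells[layer].init_hidden(
                    x.size(0), self.hidden_channels[layer], x.shape[-2:])
                internal_state.append((h, c))
            h, c = self.cells[layer](x, *internal_state[layer])
            internal_state[layer] = (h, c)
            x = h  # this layer's output feeds the next layer at the same step
        outputs.append(x)
    return torch.stack(outputs)  # [sequence, bsize, hidden_channels, x, y]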
The second problem is that in ConvLSTMCell there are no biases. For example,
ci = torch.sigmoid(self.Wxi(x) + self.Whi(h) + c * self.Wci)
should arguably be something like
ci = torch.sigmoid(self.Wxi(x) + self.Whi(h) + c * self.Wci + self.Bci)
though I don't know whether such constants would affect the backward phase.
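If a separate per-gate bias were wanted, it could simply be registered as a learnable parameter; autograd should handle the extra addition in backward like any other op. A sketch (Bci is my own name for it, not from the repo):

# in ConvLSTMCell.__init__, broadcast over batch and spatial dims
self.Bci = nn.Parameter(torch.zeros(1, self.hidden_channels, 1, 1))

# in ConvLSTMCell.forward
ci = torch.sigmoid(self.Wxi(x) + self.Whi(h) + c * self.Wci + self.Bci)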
P.S. I'm a beginner myself, so maybe I'm wrong. Please reply :)
By the way, this is the result of the code after the x = input[step] change. I'm training on a moving-squares dataset adapted from Keras's ConvLSTM2D example code here (a rough sketch of the generator is below). After 1 epoch * 5000 batches * 6 seqs per batch, here is a random result next to the ground truth:
(image: predicted frame vs. ground truth)
So great, it worked!! Cheers to the author! I'll try adding biases to ConvLSTMCell sometime later.
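For anyone asking about the data, the generator is nothing special; it is roughly along these lines (my own simplification of the Keras example's generate_movies, not its exact code):

import numpy as np
import torch

def generate_moving_squares(n_samples=6, n_frames=15, size=40):
    # small squares drifting across the frame with constant velocity
    data = np.zeros((n_frames, n_samples, 1, size, size), dtype=np.float32)
    for i in range(n_samples):
        x0, y0 = np.random.randint(10, size - 10, size=2)
        dx, dy = np.random.randint(-2, 3, size=2)
        w = np.random.randint(2, 4)
        for t in range(n_frames):
            x = int(np.clip(x0 + dx * t, w, size - w))
            y = int(np.clip(y0 + dy * t, w, size - w))
            data[t, i, 0, x - w:x + w, y - w:y + w] = 1.0
    return torch.from_numpy(data)  # [sequence, bsize, channel, x, y]

seqs = generate_moving_squares()
inputs, targets = seqs[:-1], seqs[1:]  # train to predict the next frame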
@mikumeow Hi, could you please share your code that works on the Keras example? Many thanks :-)
@yaorong0921 Hello! Thanks for replying. But sorry, I am still adjusting this code, because my later experiments with it revealed some issues.
I am currently checking things like the loss function and how the model handles batches, in accordance with the implementation of ConvLSTM in TensorFlow. Also, I was wrong about the bias: the model already adds a bias here:
self.Wxi = nn.Conv2d(self.input_channels, self.hidden_channels, self.kernel_size, 1, self.padding, bias=True)
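So the gates already receive a learnable bias through the convolution itself: nn.Conv2d with bias=True creates one bias per output channel. For example:

import torch.nn as nn

conv = nn.Conv2d(3, 16, kernel_size=3, stride=1, padding=1, bias=True)
print(conv.bias.shape)  # torch.Size([16]), one learnable bias per hidden channel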
@mikumeow I get the same problem as you, about the same x (the absence of a sequence dimension). I think your method should be right; I will try it and report back. And indeed, it doesn't lack bias. Also, ConvLSTM seems not to need the step parameter anymore, since the sequence length can be obtained from x.size()[0].
@mikumeow Is it appropriate to loop over layers within the loop over timesteps?
I think iterating over timesteps, with the layer loop inside, seems reasonable.
@mikumeow Is it appropriate to loop over layers within the loop over timesteps?
It seems OK, since any hidden state is independent of future hidden states, so there is no need to compute all the hidden states across the whole time loop ahead of time. @mikumeow also mentioned getting decent results from this code once he changed x = input to x = input[step].
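To spell out the dependency (my notation, with Cell^(l) denoting layer l's ConvLSTMCell):

h_t^{(l)},\, c_t^{(l)} = \mathrm{Cell}^{(l)}\!\left(h_t^{(l-1)},\, h_{t-1}^{(l)},\, c_{t-1}^{(l)}\right), \qquad h_t^{(0)} = \mathrm{input}[t]

Step t needs only step t-1's states and the current lower layer's output, so nothing from future timesteps is required and the step-major loop is safe.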
Hi, I agree with your point about the lack of bias...
But I am only a beginning student of ConvLSTM; I understand the principle but cannot yet use it. Since you have successfully used the author's ConvLSTM_pytorch, could you please send me the code behind this successful prediction image (the one adapted from Keras)? I'd be very grateful, because learning ConvLSTM is really painful.
Could you please send me the code for this successful prediction image (adapted from Keras)? Thank you.