Great code! Have you tried scaling the image data ([0, 255] -> [0, 1])?
Thanks for posting this code; it has been very helpful. Maybe I missed it in the code, but have you tried rescaling the image pixels for better results?
I also noticed that the model produced rather large errors. I was able to get them into the expected range (around 0.2) by scaling the image data:
return self.X[loc:loc+self.nt] / 255.0 # line 27 in kitti_data.py
However, the model is currently not converging on anything better than the trivial baseline of just using the previous video frame as the prediction for the current one. Working on it...
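For anyone following along, the scaling fix and the trivial previous-frame baseline can be sketched like this (a minimal standalone sketch with hypothetical helper names, assuming frames are uint8 arrays of shape (T, H, W, C); it is not taken from the repo itself):

```python
import numpy as np

def scale_frames(frames):
    # Rescale uint8 pixel values from [0, 255] to floats in [0, 1].
    return frames.astype(np.float32) / 255.0

def previous_frame_baseline_mse(seq):
    # Trivial baseline: predict each frame as a copy of the previous frame.
    # seq has shape (T, H, W, C); a learned model should beat this MSE.
    pred = seq[:-1]
    target = seq[1:]
    return float(np.mean((pred - target) ** 2))

# Example on random stand-in data (real KITTI frames would go here):
seq = scale_frames(np.random.randint(0, 256, size=(10, 64, 64, 3), dtype=np.uint8))
print(previous_frame_baseline_mse(seq))
```

Comparing the model's test error against this baseline makes it easy to see whether it has learned anything beyond copying the last frame.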
Great info @wmpauli, thank you!
It seems to be running a lot slower than the Keras implementation, and it doesn't seem to be learning correctly either.
Were you able to reproduce the results from the paper? And why do the test predictions look black and white? Thank you for your reply.
Have you gotten any good results? @wmpauli
Has anybody gotten this to reproduce the paper's results? If so, can you please share what the problem was? I am getting black-and-white predictions. Does anybody know why?
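One common cause of black-and-white (or garbled) predictions is a visualization bug rather than a model bug: saving a (C, H, W) float tensor directly, without transposing to (H, W, C) and converting back to uint8. A minimal sketch of the conversion, assuming predictions are floats in [0, 1] (the helper name is hypothetical, not from the repo):

```python
import numpy as np

def to_displayable(pred_chw):
    # Convert a (C, H, W) float prediction in [0, 1] to an (H, W, C)
    # uint8 image for saving with PIL/matplotlib. Without this transpose
    # and rescale, many viewers render the array as grayscale or garbage.
    img = np.transpose(pred_chw, (1, 2, 0))
    img = np.clip(img, 0.0, 1.0)
    return (img * 255).astype(np.uint8)
```

Worth checking whether your save path does this before digging into the model; if it already does, the problem is likely elsewhere.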