Disentangled-Sequential-Autoencoder

Prior is not modelled using an LSTM

Open blackPython opened this issue 6 years ago • 0 comments

In the appendix of the paper, the authors mention that they use a prior modeled by an LSTM, but in the code here I see that only a standard normal prior is used for all dynamic latent variables. Can I send you a pull request to incorporate the LSTM prior?
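For reference, something along these lines is what I have in mind: an autoregressive prior p(z_t | z_{<t}) whose mean and variance at each step are produced by an LSTM conditioned on the previously sampled latent. This is only a sketch in PyTorch; the class name, dimensions, and layer sizes are my own assumptions, not taken from this repo.

```python
import torch
import torch.nn as nn

class LSTMPrior(nn.Module):
    """Sketch of an LSTM-parameterized prior p(z_t | z_{<t}) over the
    dynamic latents. z_dim and hidden_dim are illustrative defaults,
    not values from this repository."""

    def __init__(self, z_dim=32, hidden_dim=256):
        super().__init__()
        self.lstm = nn.LSTMCell(z_dim, hidden_dim)
        self.mean = nn.Linear(hidden_dim, z_dim)
        self.logvar = nn.Linear(hidden_dim, z_dim)
        self.z_dim = z_dim
        self.hidden_dim = hidden_dim

    def forward(self, batch_size, seq_len, device="cpu"):
        # Autoregressively sample z_1..z_T: each step conditions on the
        # previously sampled z through the LSTM hidden state.
        z_t = torch.zeros(batch_size, self.z_dim, device=device)
        h = torch.zeros(batch_size, self.hidden_dim, device=device)
        c = torch.zeros(batch_size, self.hidden_dim, device=device)
        zs, means, logvars = [], [], []
        for _ in range(seq_len):
            h, c = self.lstm(z_t, (h, c))
            mu, lv = self.mean(h), self.logvar(h)
            # Reparameterized sample from N(mu, exp(lv))
            z_t = mu + torch.randn_like(mu) * (0.5 * lv).exp()
            zs.append(z_t)
            means.append(mu)
            logvars.append(lv)
        # Shapes: (batch, seq_len, z_dim) each
        return (torch.stack(zs, 1),
                torch.stack(means, 1),
                torch.stack(logvars, 1))
```

The per-step means and log-variances returned here would replace the fixed N(0, I) parameters when computing the KL term against the dynamic posterior.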

blackPython avatar Oct 31 '19 13:10 blackPython