rainyrainyguo
Can you give an explanation of the KL divergence term? I am a little bit confused by kl_loss = torch.mean(0.5 * torch.sum(torch.exp(logvar) + mu**2 - 1 - logvar, 1)). Thank you...
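Not an answer from the original author, but as a sketch: that expression is the closed-form KL divergence between the approximate posterior N(mu, exp(logvar)) and the standard normal prior N(0, I), summed over the latent dimensions and averaged over the batch. A minimal check against torch.distributions, assuming mu and logvar both have shape (batch, latent_dim):

```python
import torch
from torch.distributions import Normal, kl_divergence

# Hypothetical shapes: a batch of 4 samples with an 8-dimensional latent space.
mu = torch.randn(4, 8)
logvar = torch.randn(4, 8)

# Closed-form KL(N(mu, sigma^2) || N(0, 1)) per latent dimension is
#   0.5 * (sigma^2 + mu^2 - 1 - log sigma^2), with sigma^2 = exp(logvar).
# Summing over latent dims (dim 1) and averaging over the batch gives kl_loss.
kl_loss = torch.mean(0.5 * torch.sum(torch.exp(logvar) + mu**2 - 1 - logvar, 1))

# The same quantity computed via torch.distributions, as a sanity check.
std = torch.exp(0.5 * logvar)
kl_ref = kl_divergence(Normal(mu, std), Normal(0.0, 1.0)).sum(1).mean()

print(kl_loss.item(), kl_ref.item())  # the two values should match
```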
resolving padding issue
Can this be applied to a regression problem?
Is there a way to load the model and reuse it? I didn't find where to feed my test sentences in the code. If I only have source...
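Not from the original code, but a minimal sketch of the usual PyTorch save/load pattern; the MyModel class below is a placeholder for whatever model the notebook actually defines, and real test sentences would still need to be tokenized and numericalized with the same vocabulary used during training:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the model defined in the notebook; replace with
# the actual model class and the same constructor arguments used for training.
class MyModel(nn.Module):
    def __init__(self, input_dim=10, hidden_dim=32, output_dim=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, output_dim),
        )

    def forward(self, x):
        return self.net(x)

model = MyModel()

# After training: save only the learned weights.
torch.save(model.state_dict(), 'model.pt')

# Later, to reuse the model: rebuild it with the same arguments,
# load the saved weights, and switch to evaluation mode.
model = MyModel()
model.load_state_dict(torch.load('model.pt'))
model.eval()

# Run inference on new data without tracking gradients.
with torch.no_grad():
    x = torch.randn(1, 10)   # placeholder for a preprocessed test example
    prediction = model(x)
print(prediction)
```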