Dan Antoshchenko
Hi @junyanz. I believe that your latest [commit](https://github.com/junyanz/BicycleGAN/commit/6f0eec8ef0a147a80f64f25103089feb33553a06) introduced an error. This [comment](https://github.com/junyanz/BicycleGAN/blob/6f0eec8ef0a147a80f64f25103089feb33553a06/models/bicycle_gan_model.py#L200) is no longer valid, because `self.backward_G_alone()` also computes gradients for the encoder. You must keep the...
Hi. Sorry, I can't help you with this. You need to do the debugging by yourself (I don't have enough information to help you).
I just tested the code and didn't get any errors. Have you changed any parameters in the code? Do you have an up-to-date `pytorch` installed?
Hi. Have you changed any parameters in the code?
Let's debug together. In `utils/alias_multinomial.py`, add `print(q)` just before line 57 (the line with `b = torch.bernoulli(q)`).
Now try `print(q.min(), q.max())`.
Add `q = q.clamp(0.0, 1.0)` before `b = torch.bernoulli(q)`. That should solve the problem.
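To show why the clamp helps: the alias-table construction can leave tiny floating-point drift in `q` (values just outside `[0, 1]`), and `torch.bernoulli` rejects such inputs. A minimal pure-Python sketch of the same fix (the drifted values here are hypothetical; in the actual code it is just `q.clamp(0.0, 1.0)` on the tensor):

```python
def clamp(x, lo, hi):
    """Clip x into [lo, hi], same idea as torch.Tensor.clamp."""
    return max(lo, min(hi, x))

# Hypothetical probabilities with floating-point drift:
# one slightly above 1.0, one slightly below 0.0.
q = [0.3, 1.0 + 2**-52, -1e-17]
q_fixed = [clamp(v, 0.0, 1.0) for v in q]

# Every entry is now a valid Bernoulli parameter.
assert all(0.0 <= v <= 1.0 for v in q_fixed)
```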
@Oliveche I don't recommend running this on CPU because it will be very slow. But yes, if you delete `cuda` in the right places, the code will run on CPU.
For inference, you need to freeze the existing topic vectors and word vectors and train only on the new documents. But this is a research problem; there is no clear answer.
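The idea above can be sketched with a toy example: keep the trained topic matrix fixed and update only the new document's weights. This is a hypothetical illustration with a made-up squared-error objective, not the model's actual loss:

```python
# Frozen: trained topic vectors (2 topics over a 2-dim space, toy values).
topics = [[0.9, 0.1], [0.1, 0.9]]
# Toy "representation" of the new document we want to reconstruct.
target = [0.5, 0.5]
# Only these weights are trained during inference.
doc_w = [0.0, 0.0]

lr = 0.1
for _ in range(200):
    # Reconstruction: doc_w @ topics
    recon = [sum(doc_w[k] * topics[k][j] for k in range(2)) for j in range(2)]
    # Gradient of sum((recon - target)^2) w.r.t. doc_w ONLY;
    # topics receives no update, which is what "freezing" means.
    grad = [sum(2.0 * (recon[j] - target[j]) * topics[k][j] for j in range(2))
            for k in range(2)]
    doc_w = [w - lr * g for w, g in zip(doc_w, grad)]
```

In PyTorch the same effect is usually achieved by setting `requires_grad = False` on the frozen parameters (or passing only the document weights to the optimizer).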
Hi! This is an unfinished implementation of this paper: `https://arxiv.org/abs/1605.02019`. To run it: 1. `preprocess_data.ipynb` 2. `get_windows.ipynb` 3. `train.ipynb`, but the gradient explodes for some reason. I hope,...