Soon-Yau Cheong
@yarri-oss the line `ds_train = ds_train.map(lambda x: x / 255.0)` was wrongly inserted in your last commit. You can create a PR to remove it.
Got this issue too; reinstalling `pytorch-lightning==1.0.8` and `omegaconf==2.0.0` fixed the problem. But those versions differ from the ones in `requirements.txt`.
You'll need to set a distributed strategy to stop multiple GPUs from accessing the same files. The following arguments worked for me: `trainer = pl.Trainer(strategy="ddp", accelerator="gpu", devices=2, ...)`
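For context, here is a minimal sketch of that setup (the `model` and `train_loader` names are hypothetical placeholders, not objects from this repo):

```python
# Minimal multi-GPU Trainer sketch; `model` and `train_loader` are
# hypothetical stand-ins for the project's own model and dataloader.
import pytorch_lightning as pl

trainer = pl.Trainer(
    strategy="ddp",      # one process per GPU via DistributedDataParallel
    accelerator="gpu",
    devices=2,           # number of GPUs to shard the run across
)
# trainer.fit(model, train_dataloaders=train_loader)
```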
I see that in your code, you freeze the temporal block by disabling its gradients. Won't that stop gradients from flowing to the other, non-frozen blocks during backpropagation?
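To illustrate the situation I mean, here is a standalone toy sketch (not the repo's code) that freezes a middle block with `requires_grad = False` and checks where gradients end up:

```python
# Toy sketch: freeze a middle block's parameters and check whether
# gradients still reach the earlier, non-frozen block.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 4), nn.Linear(4, 4), nn.Linear(4, 1))

# Freeze the middle block, analogous to freezing the temporal block.
for p in model[1].parameters():
    p.requires_grad = False

loss = model(torch.randn(2, 4)).sum()
loss.backward()

print(model[0].weight.grad is None)  # False: gradients still reach block 0
print(model[1].weight.grad is None)  # True: the frozen block gets no grads
```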
To do pose transfer, you select all the style images from a source image, then the pose from a target image. However, the current app doesn't support pose import apart from the given...
Can you provide your generated test samples for download?