question regarding preprocessed_images
Hi,
Thanks for sharing this work.
I've got a question regarding this line:
https://github.com/pender/stylegan-encoder/blob/46605b23756078345cf1d544017d2ff24d0e5b2f/encoder/perceptual_model.py#L21
I guess the purpose is to normalize the reference image to [-1, 1], because the discriminator requires that range.

The bug described in https://github.com/pender/stylegan-encoder/blob/46605b23756078345cf1d544017d2ff24d0e5b2f/encoder/perceptual_model.py#L22 doesn't have any impact, because it is generator_output_tensor, not generated_image_tensor, that is fed into the discriminator, as specified in https://github.com/pender/stylegan-encoder/blob/46605b23756078345cf1d544017d2ff24d0e5b2f/encoder/perceptual_model.py#L50.

So I guess the correct way to normalize is simply preprocessed_images * ((drange[1] - drange[0]) / 255) + drange[0].
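To make the proposed fix concrete, here is a minimal NumPy sketch of that linear transform (the function name `normalize_to_drange` is mine, not from the repo); with drange = [-1, 1] it maps 0 to -1 and 255 to 1:

```python
import numpy as np

def normalize_to_drange(images, drange=(-1.0, 1.0)):
    # Linearly rescale pixel values from [0, 255] into drange,
    # matching the proposed expression:
    # preprocessed_images * ((drange[1] - drange[0]) / 255) + drange[0]
    images = np.asarray(images, dtype=np.float32)
    return images * ((drange[1] - drange[0]) / 255.0) + drange[0]

pixels = np.array([0.0, 127.5, 255.0])
print(normalize_to_drange(pixels))  # endpoints land at -1 and 1
```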
Please correct me if I'm missing something.