Yiqian Wu

75 comments by Yiqian Wu

> What about your results for the first step (inversion)? Do you just use the LPIPS loss, following PTI?

Yes, my code is based on the w projector of PTI....
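
For context, a minimal sketch of what that w-projector step looks like in my setup. The `mapping`/`synthesis` calls follow the EG3D API; the `lpips` package stands in for the VGG perceptual loss the official projector ships with, and `G`, `target`, and `c` are assumed inputs:

```python
import torch
import lpips  # stand-in for the VGG16 feature loss in the official projector

# Assumptions: `G` is a loaded EG3D generator, `target` is a (1, 3, H, W)
# image in [-1, 1], and `c` is the (1, 25) camera label of the target view.
def project_w(G, target, c, num_steps=500, device='cuda'):
    percept = lpips.LPIPS(net='vgg').to(device)

    # Start from w_avg, estimated by sampling the mapping network.
    with torch.no_grad():
        z = torch.randn(10000, G.z_dim, device=device)
        w = G.mapping(z, c.repeat(10000, 1))           # (N, num_ws, 512)
        w_avg = w[:, :1, :].mean(dim=0, keepdim=True)  # (1, 1, 512)

    w_opt = w_avg.clone().requires_grad_(True)
    opt = torch.optim.Adam([w_opt], lr=0.01)

    for _ in range(num_steps):
        ws = w_opt.repeat([1, w.shape[1], 1])  # broadcast one w over all layers
        synth = G.synthesis(ws, c)['image']
        loss = percept(synth, target).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w_opt.detach()
```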

@jiaxinxie97 Hi, I used the original eg3d checkpoints to generate a video for the latent code; it seems that the eyeglasses are reconstructed successfully, which indicates that I got...
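
A hedged sketch of that check, assuming the eg3d repo's `camera_utils` is importable and that `G` and the latent `ws` come from the inversion step (the pivot point, radius, and FOV below are the FFHQ defaults in eg3d's generation scripts):

```python
import numpy as np
import torch
from camera_utils import LookAtPoseSampler, FOV_to_intrinsics  # from the eg3d repo

# Render the inverted latent `ws` from a sweep of yaw angles: if details such
# as eyeglasses persist across views, they were captured by the latent rather
# than overfit to a single view.
def render_yaw_sweep(G, ws, num_frames=30, device='cuda'):
    cam_pivot = torch.tensor([0.0, 0.0, 0.2], device=device)
    intrinsics = FOV_to_intrinsics(18.837, device=device).reshape(-1, 9)
    frames = []
    for yaw in np.linspace(-0.5, 0.5, num_frames):
        cam2world = LookAtPoseSampler.sample(np.pi / 2 + yaw, np.pi / 2,
                                             cam_pivot, radius=2.7, device=device)
        c = torch.cat([cam2world.reshape(-1, 16), intrinsics], dim=1)
        with torch.no_grad():
            frames.append(G.synthesis(ws, c)['image'])
    return frames
```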

> Thanks! I also use the PTI repo, but it is strange that I can't reconstruct eyeglasses using w or w+ space optimization. I will check!

Since the original eg3d checkpoint...

> Hi, @oneThousand1000
>
> Did you set both the `z` and `c` as trainable parameters during the GAN inversion? I guess fixing the `c` (which can be obtained...
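
For reference, a minimal sketch of the fixed-camera variant the question asks about, under the same assumptions about the EG3D API as above: only the latent is trainable, while `c` comes from preprocessing and is held constant.

```python
import torch

# Minimal sketch: only the latent is trainable; the camera label `c` comes
# from preprocessing and is kept fixed (it conditions both mapping and
# synthesis). `G`, `target`, and `loss_fn` (e.g. LPIPS) are assumed inputs.
def invert_with_fixed_camera(G, target, c, loss_fn, num_steps=500):
    z = torch.randn(1, G.z_dim, device=target.device, requires_grad=True)
    c = c.detach()                       # no gradients flow into the camera
    opt = torch.optim.Adam([z], lr=0.01)
    for _ in range(num_steps):
        ws = G.mapping(z, c)
        img = G.synthesis(ws, c)['image']
        loss = loss_fn(img, target).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()
```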

> > follow the FFHQ preprocessing steps in EG3D
>
> Got it, thanks for your reply.
>
> BTW, did you follow the FFHQ preprocessing steps in EG3D (i.e., realign to...

Hi, please follow "Preparing datasets" in the readme to get realigned images. According to https://github.com/NVlabs/eg3d/issues/16#issuecomment-1151563364, the original FFHQ dataset does not work with the camera parameters in dataset.json; you should...
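
Concretely, the labels in dataset.json are 25-dim camera vectors (the flattened 4x4 cam2world extrinsics followed by the flattened 3x3 intrinsics), keyed by the realigned image paths. A minimal sketch of reading one (the image path below is a made-up example):

```python
import json
import numpy as np

# Each entry in dataset.json maps a realigned image path to its 25-dim camera
# label. The path below is a hypothetical example.
with open('dataset.json') as f:
    labels = dict(json.load(f)['labels'])

c = np.asarray(labels['00000/img00000000.png'], dtype=np.float32)  # (25,)
extrinsics = c[:16].reshape(4, 4)   # cam2world pose
intrinsics = c[16:].reshape(3, 3)   # normalized intrinsics
```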

> @oneThousand1000
>
> Yeah, I agree. For those who want to directly use the well-aligned FFHQ 1024 images, you have to predict the camera parameters with Deep3DFace_pytorch yourself. You...
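
If you go that route, the last step is packing your predicted pose into the 25-dim label EG3D expects. A sketch, assuming `cam2world` is the 4x4 matrix from your own pose estimation; the intrinsics below are the normalized values used with the FFHQ checkpoints:

```python
import numpy as np

# Pack a self-predicted pose into the 25-dim conditioning vector. `cam2world`
# (4x4) is assumed to come from your own pose estimation pipeline.
def make_camera_label(cam2world: np.ndarray) -> np.ndarray:
    intrinsics = np.array([[4.2647, 0.0, 0.5],
                           [0.0, 4.2647, 0.5],
                           [0.0, 0.0, 1.0]], dtype=np.float32)
    return np.concatenate([cam2world.reshape(16).astype(np.float32),
                           intrinsics.reshape(9)])
```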

> @oneThousand1000 Do you use the noise regularization loss in the first GAN inversion step?

See https://github.com/NVlabs/eg3d/issues/28#issuecomment-1161560077
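
For anyone unfamiliar with that loss: it is the noise regularization from the StyleGAN2 projector (which PTI reuses), penalizing spatial self-correlation of each per-layer noise buffer at multiple scales so the optimized noise stays close to i.i.d. Gaussian. A sketch of that term; how you collect `noise_bufs` from the EG3D backbone is an assumption on my side:

```python
import torch
import torch.nn.functional as F

# `noise_bufs` would be the per-layer noise buffers of the synthesis network,
# e.g. {name: buf for name, buf in G.synthesis.named_buffers()
#       if 'noise_const' in name} in vanilla StyleGAN2 (assumed here).
def noise_regularization(noise_bufs):
    reg_loss = 0.0
    for buf in noise_bufs.values():
        noise = buf[None, None, :, :]
        while True:
            # Penalize correlation with one-pixel shifts in both directions.
            reg_loss += (noise * torch.roll(noise, shifts=1, dims=3)).mean() ** 2
            reg_loss += (noise * torch.roll(noise, shifts=1, dims=2)).mean() ** 2
            if noise.shape[2] <= 8:
                break
            noise = F.avg_pool2d(noise, kernel_size=2)  # repeat at coarser scale
    return reg_loss
```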

> Hi, @oneThousand1000,
>
> I tried to use PTI to get the pivot of an image; then, in the gen_video.py file, I used the pivot to set zs, which original...
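
One thing worth noting there: the pivot PTI returns lives in W/W+ space, not Z space, so assigning it to zs would push it through the mapping network a second time. A minimal sketch of the intended path, under the same assumptions about the EG3D API as above:

```python
import torch

# `w_pivot` is the (1, num_ws, 512) latent saved by PTI; `G` is the
# (optionally fine-tuned) generator. Feed the pivot straight to synthesis
# for each camera `c` along the video path.
def render_pivot_frame(G, w_pivot, c):
    with torch.no_grad():
        # Bypass G.mapping entirely; do not treat w_pivot as a z.
        return G.synthesis(w_pivot, c)['image']
```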

> FYI, we added additional scripts that can preprocess in-the-wild images compatible with the FFHQ checkpoints. Hope that is useful. [#18 (comment)](https://github.com/NVlabs/eg3d/issues/18#issuecomment-1200366872)

Hi! I found that all the faces in...