roundchuan
> @lubovbyc Hi. I have tried to alleviate this. You can add more MLP layers for the APNet and use the PDC loss to improve robustness. Can you provide the...
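The APNet architecture itself is not shown in this thread, so as a rough illustration of the "add more MLP layers" suggestion above, a deeper MLP head in PyTorch might look like the sketch below. All names, dimensions, and the layer count are assumptions, not the project's actual code.

```python
import torch.nn as nn

# Hypothetical sketch only: a deeper MLP head, illustrating the
# "add more MLP layers" suggestion above. Layer names and sizes are
# assumptions, not taken from the actual APNet implementation.
def make_mlp_head(in_dim=256, hidden_dim=256, out_dim=3, num_layers=4):
    layers = []
    dim = in_dim
    for _ in range(num_layers - 1):
        layers += [nn.Linear(dim, hidden_dim), nn.ReLU(inplace=True)]
        dim = hidden_dim
    layers.append(nn.Linear(dim, out_dim))  # final projection
    return nn.Sequential(*layers)
```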
> @cassiePython Thanks for your reply! I will give it a shot and check whether it works. Did you get correct results from APNet by following the author's advice? I meet...
> @lubovbyc I am still confused by the problem. I did not find this problem on my dataset. I attached the dataset with 4K images and the corresponding checkpoint. Please...
> > @roundchuan Can you get results like this: [image] > No, all the rendered faces are the same. And...
> > I also struggle to get good results from APNet. Should I use the renderer and landmark loss as well? > > 1. Before using the pseudo gt to...
> A100-40G or A100-80G? It can be trained on an A100-80G with the following settings: > > batch size: 1, resolution: 256, frames: 25 Hello, I can't find the ckpt...
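For reference, the A100-80G settings quoted above can be gathered in one place; the sketch below is only an illustration of those values, and the key names are assumptions rather than Open-Sora-Plan's actual CLI arguments or config schema.

```python
# Illustration only: the A100-80G settings quoted above, collected into a
# plain config dict. Key names are assumptions, not Open-Sora-Plan's
# actual argument names.
train_config = {
    "train_batch_size": 1,   # per-GPU batch size
    "resolution": 256,       # spatial resolution of each frame
    "num_frames": 25,        # frames per video clip
}
```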
> > Hi, > > When launching the t2v training, it also requires specifying an image data path as [here](https://github.com/PKU-YuanGroup/Open-Sora-Plan/blob/main/scripts/train_data/image_data.txt). However, in the HuggingFace dataset repo there is...