ch1998

15 comments

> Yes, actually the model is trained with 4 random views. You could check this https://github.com/3DTopia/LGM/blob/main/core/models.py#L61 to specify the camera poses. I used the specified camera pose for 3D generation...
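Since the reply above says the model is trained with 4 random views and that camera poses can be specified, here is a minimal sketch of how 4 camera-to-world poses on a circle around the object could be constructed with a standard look-at convention. This is an illustration only, not the LGM code path; the radius, up-axis, and the OpenGL-style "camera looks down -z" convention are assumptions that must be matched to what `core/models.py` expects.

```python
import numpy as np

def lookat_pose(cam_pos, target=np.zeros(3), up=np.array([0.0, 1.0, 0.0])):
    """Build a 4x4 camera-to-world pose looking from cam_pos at target."""
    forward = target - cam_pos
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right = right / np.linalg.norm(right)
    true_up = np.cross(right, forward)
    pose = np.eye(4)
    pose[:3, 0] = right
    pose[:3, 1] = true_up
    pose[:3, 2] = -forward  # assumed OpenGL convention: camera looks down -z
    pose[:3, 3] = cam_pos
    return pose

# 4 views evenly spaced on a circle of radius 1.5 (illustrative values)
angles = np.linspace(0, 2 * np.pi, 4, endpoint=False)
poses = [lookat_pose(np.array([1.5 * np.cos(a), 0.0, 1.5 * np.sin(a)]))
         for a in angles]
```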

I used the enerf-outsider dataset, re-ran COLMAP, and used the conversion code in EasyMocap to get intri/extri.yml. I tested on actor5, and the exported mesh results are as shown below. But...

> @520jz I've overcome this problem. This is because you could not connect to Hugging Face to get the zero123 pretrained model. You can try to download the pretrained model...

Hi, have you successfully used custom data to get good results? I also used imgs2poses.py for data calibration, but the training results were very bad.

Hi, I used EasyMocap to extract the bbox in vhull, but the bbox that EasyMocap extracts has the layout [l, t, r, b, conf], while the shape of the data in...
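For the box-layout mismatch described above, converting between the two conventions is a one-liner; this is a hedged sketch that assumes the consuming code wants [x, y, w, h, conf] (top-left corner plus width/height) rather than the [l, t, r, b, conf] (corner pair) layout that EasyMocap emits. Check the target repo's loader before relying on it.

```python
def ltrb_to_xywh(bbox):
    """Convert an EasyMocap-style [l, t, r, b, conf] box to [x, y, w, h, conf].

    The target layout is an assumption -- verify what the consuming code
    actually expects before using this.
    """
    l, t, r, b, conf = bbox
    return [l, t, r - l, b - t, conf]

print(ltrb_to_xywh([10, 20, 110, 220, 0.99]))  # -> [10, 20, 100, 200, 0.99]
```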

> Hi! You can use animation sequences from the AMASS Dataset: https://amass.is.tue.mpg.de/ They have SMPL-X animated with motion capture. But you have to reorder the file structure so it matches...

Thanks for your reply, I will try it!

Hi! I tried to get the SMPL-X parameters of the animation sequences, but in the testseq_azure_amass_merged_poses.pickle you provided there is a K, and the shape of K is (3, 3)....
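A (3, 3) K is almost certainly a pinhole camera intrinsic matrix. As a point of reference (not taken from the pickle in question), this is how such a K is typically laid out and used to project a camera-space point to pixels; the focal lengths and principal point below are made-up illustrative values.

```python
import numpy as np

def make_K(fx, fy, cx, cy):
    """Pinhole intrinsic matrix K (3, 3): maps camera-space points to pixels."""
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])

# illustrative values; real fx/fy/cx/cy come from calibration
K = make_K(fx=1111.0, fy=1111.0, cx=512.0, cy=512.0)

point_cam = np.array([0.1, -0.2, 2.0])  # a point in camera coordinates
p = K @ point_cam
uv = p[:2] / p[2]                       # perspective divide -> pixel coordinates
```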

> I pack RGB together with the mask into png images. The program then treats the 4th channel as the mask used for bounded TSDF fusion. In this way, can I...
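The packing described in the quote above can be sketched as follows: write the binary mask into the alpha channel of an RGBA png so a downstream reader can take channel 4 as the mask. The file paths and the use of Pillow are assumptions for illustration; the actual pipeline may read/write images differently.

```python
import numpy as np
from PIL import Image

def pack_rgba(rgb_path, mask_path, out_path):
    """Pack an RGB image and a single-channel mask into one RGBA png.

    Downstream code that does bounded TSDF fusion can then treat the
    4th (alpha) channel as the foreground mask. Sketch only; paths and
    I/O library are illustrative assumptions.
    """
    rgb = np.asarray(Image.open(rgb_path).convert("RGB"))
    mask = np.asarray(Image.open(mask_path).convert("L"))
    rgba = np.dstack([rgb, mask]).astype(np.uint8)  # (H, W, 4)
    Image.fromarray(rgba, mode="RGBA").save(out_path)
```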

Maybe you should adjust the COLMAP parameters in colmap2nerf.py? When I used colmap2nerf.py to generate train_meta.json, all the K matrices came out identical, which I think is wrong,...
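A quick sanity check for the symptom above: with a multi-camera rig each physical camera usually has its own intrinsics, so identical K matrices across all views suggest COLMAP fit a single shared camera model (e.g. it was run with a one-camera assumption). This helper is an illustration with made-up matrices, not part of colmap2nerf.py.

```python
import numpy as np

def intrinsics_all_identical(Ks, tol=1e-6):
    """Return True if every per-camera K matrix equals the first one.

    Identical intrinsics across a multi-camera rig usually mean the
    reconstruction shared one camera model instead of fitting one per
    camera (sketch; tolerance is an arbitrary assumption).
    """
    Ks = np.asarray(Ks, dtype=float)
    return bool(np.all(np.abs(Ks - Ks[0]) < tol))

# illustrative check on two hypothetical cameras
K0 = np.array([[1000.0, 0.0, 500.0],
               [0.0, 1000.0, 500.0],
               [0.0, 0.0, 1.0]])
K1 = K0.copy()
K1[0, 0] = 1005.0  # slightly different focal length

print(intrinsics_all_identical([K0, K0]))  # True  -> suspicious for a rig
print(intrinsics_all_identical([K0, K1]))  # False -> per-camera intrinsics
```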