Results: 12 comments of Andyen512

I just ran [this](https://github.com/Arthur151/ROMP/blob/master/scripts/video.sh) to get the npz file. Is there anything else I should do?
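For context, this is roughly how I inspect the file afterwards (a minimal sketch; the path and the `results` key are my assumptions, the actual keys may differ):

```python
import numpy as np

# Load the .npz written by the video script (path is a placeholder).
data = np.load('/path/to/output/video_results.npz', allow_pickle=True)
print(data.files)  # list the arrays stored in the archive

# If the archive holds a pickled dict of per-frame results, unwrap it.
if 'results' in data.files:
    results = data['results'][()]
    for frame_name in list(results)[:3]:
        print(frame_name, type(results[frame_name]))
```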

Ah, I found that [line 83](https://github.com/Arthur151/ROMP/blob/master/romp/predict/base_predictor.py) was commented out with '#'.

> I found that Mixamo's X-Bot could also be driven with the plugin. In fact, the key for the character to be driven is that the initial pose must be...

OK, thanks. I'll contact you if I need to.

@SDNAFIO Hi, could you share the weight visualization code? I'd like to visualize my own trained model.

Maybe you are using headless rendering. I've also run into this problem, and I solved it by referring to this [issue](https://github.com/shunsukesaito/PIFu/issues/49).
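For anyone else hitting this, the workaround from that issue is to force an off-screen OpenGL backend before importing the renderer; a minimal sketch (using `egl` is my assumption, `osmesa` is the CPU fallback):

```python
import os

# Pick an off-screen OpenGL backend *before* pyrender/OpenGL is imported.
os.environ['PYOPENGL_PLATFORM'] = 'egl'   # or 'osmesa' on machines without EGL

import pyrender  # noqa: E402  (must come after setting the platform)

# Off-screen rendering now works without an X display.
renderer = pyrender.OffscreenRenderer(viewport_width=512, viewport_height=512)
```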

So the intrinsic matrix is ([443.4, 0, 512//2], [0, 443.4, 512//2], [0, 0, 1]), and extrinsics[:3, 3] = cam_trans, right? @MoyGcc
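To make the question concrete, this is how I am building the matrices (a sketch with my own numbers: 443.4 is the focal length and 512 the image size I used; `cam_trans` stands for the predicted camera translation, and the variable names are mine, not from the repo):

```python
import numpy as np

focal_length = 443.4    # focal length in pixels (my value, not from the repo)
img_size = 512          # square image resolution (my value)

# Pinhole intrinsics with the principal point at the image center.
K = np.array([
    [focal_length, 0.0, img_size / 2],
    [0.0, focal_length, img_size / 2],
    [0.0, 0.0, 1.0],
])

# Extrinsics: identity rotation, translation set to the predicted cam_trans.
cam_trans = np.array([0.0, 0.0, 2.0])  # placeholder; use the value from the output
E = np.eye(4)
E[:3, 3] = cam_trans
```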

@Arthur151 Sorry, why use the humannerf cam_intrinsics? I was using `romp --mode=video --calc_smpl --render_mesh -i=/path/to/video.mp4 -o=/path/to/output/folder/results.mp4 --save_video` to run inference on my own video, and I see that args.focal_length in https://github.com/Arthur151/ROMP/blob/91dac0172c4dc0685b97f96eda9a3a53c626da47/romp/lib/config.py#L60 is...

> Yes, you can refer to the "Training with your own datasets" section in [this guide](https://github.com/Arthur151/ROMP/blob/master/docs/train.md).

Sorry, I still don't fully understand. Since your method uses SMPL internally and the final output is an N×72 pose parameter, does that mean I can only annotate 24 joints following the SMPL format? What I ultimately want is the rotation parameters of 26 joints on my own dataset; is that achievable?
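To spell out what I mean by the N×72 output (just the standard SMPL convention as I understand it, not code from this repo):

```python
import numpy as np

# SMPL pose parameters: 24 joints x 3 axis-angle values = 72 numbers per frame.
n_frames = 10
poses = np.zeros((n_frames, 72))                   # shape of the predicted poses
poses_per_joint = poses.reshape(n_frames, 24, 3)   # (frame, joint, axis-angle)
print(poses_per_joint.shape)                       # -> (10, 24, 3)
```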