ChenYutongTHU
Gotcha, thanks :)
> I got all the models by running:
> `path/to/azcopy copy https://biglmdiag.blob.core.windows.net/vinvl/model_ckpts/* --recursive`
> nocaps models are located in `model_ckpts/image_captioning/`
> but it is surely not a good way to...
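For anyone else copying this, a minimal sketch of the corrected call: the flag is `--recursive` (not `--resursive`), and `azcopy copy` also takes a destination argument. The `./model_ckpts` destination below is my own placeholder, not part of the original command.

```python
import subprocess

# Hypothetical sketch: invoke azcopy with the corrected --recursive flag
# and a local destination directory ("./model_ckpts" is a placeholder).
subprocess.run(
    [
        "path/to/azcopy", "copy",
        "https://biglmdiag.blob.core.windows.net/vinvl/model_ckpts/*",
        "./model_ckpts",
        "--recursive",
    ],
    check=True,
)
```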
I see `aligned_aninerf_lbw_${sub}.yaml` uses `data/light_stage`, while `configs/aninerf_${sub}.yaml` uses `data/zju_mocap`. Is there any difference between `light_stage` and `zju_mocap`?
Thanks! Are the rendered images corresponding to the reported scores available? If the released model cannot reproduce the reported scores on my machine, it would be great to have access to the rendered results...
Also, could you provide the `init_sdf` model, which seems to be needed for the Ani-SDF method? Many thanks!
In fact, I found that subject 313 has only 21 cameras; cameras (20) and (21) are missing. Subjects 387 and 377 have 23 cameras. Another distinction from the paper...
I see. So there are 617 frames in total (I checked the dataset). I guess that with 300 frames for training, the remaining 317 frames are used for testing, which are then...
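Just to be explicit about the split I am assuming (first 300 frames for training, the rest for testing; the repo's actual indexing may differ):

```python
total_frames = 617  # frames counted in the dataset

# Assumed split: first 300 frames for training, the remainder for testing.
train_frames = list(range(300))
test_frames = list(range(300, total_frames))

assert len(test_frames) == 317
```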
@yaseryacoob Thanks for sharing this practice! On the ETH3D dataset, I followed your suggestions, using the camera extrinsics and FoV output from the width=518 input to unproject the high-resolution depth prediction of...
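For concreteness, here is a minimal sketch of the unprojection step I mean, assuming a pinhole camera, a horizontal FoV predicted at the width=518 input, and a 4x4 camera-to-world extrinsic. The function `unproject_depth` and its argument names are hypothetical, not from any released API.

```python
import numpy as np

def unproject_depth(depth, fov_x_rad, cam_to_world):
    """Unproject an (H, W) z-depth map to world-space points.

    Hypothetical helper: assumes a pinhole camera with the horizontal
    FoV (radians) taken from the model's prediction, square pixels,
    a centered principal point, and a 4x4 camera-to-world matrix.
    """
    H, W = depth.shape
    # Focal length from horizontal FoV; principal point at image center.
    fx = 0.5 * W / np.tan(0.5 * fov_x_rad)
    fy = fx  # square pixels assumed
    cx, cy = 0.5 * W, 0.5 * H

    # Pixel grid -> camera-space coordinates.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=-1)  # (H, W, 4)

    # Camera space -> world space via the extrinsic.
    pts_world = pts_cam.reshape(-1, 4) @ cam_to_world.T
    return pts_world[:, :3].reshape(H, W, 3)
```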
@yaseryacoob Many thanks. @jytime Yes, I used the undistorted images, which were released [here](https://www.eth3d.net/datasets) as `scenename_dslr_undistorted.7z`. The download link for the office scene I used is https://www.eth3d.net/data/office_dslr_undistorted.7z.
Awesome! May I know when the DINOv3-backboned version will be released? Thanks!