zkaiWu
I notice that the network_grid ignores the ray direction, which is different from your other repo, torch-ngp. Is there a reason for ignoring it?
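For context, a minimal sketch of the pattern being asked about, assuming an Instant-NGP-style architecture; the module name, feature sizes, and the random stand-in encodings below are illustrative and not taken from either repo. In the torch-ngp style, the color head receives an encoded view direction; a direction-agnostic variant simply drops that input.

```python
import torch
import torch.nn as nn

class ColorHead(nn.Module):
    """Sketch of a view-dependent vs. view-independent color head.

    geo_feat is the feature vector from the hash-grid/density branch;
    dir_enc would normally be an SH-encoded view direction. All sizes
    here are placeholders.
    """
    def __init__(self, geo_feat_dim=15, dir_dim=16, use_view_dirs=True):
        super().__init__()
        self.use_view_dirs = use_view_dirs
        in_dim = geo_feat_dim + (dir_dim if use_view_dirs else 0)
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(inplace=True),
            nn.Linear(64, 3), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, geo_feat, dir_enc=None):
        if self.use_view_dirs:
            # torch-ngp style: color depends on the (encoded) ray direction
            x = torch.cat([geo_feat, dir_enc], dim=-1)
        else:
            # direction-agnostic variant: the ray direction is simply ignored
            x = geo_feat
        return self.mlp(x)

# Toy usage with random features standing in for the real encoders
geo_feat = torch.randn(1024, 15)
dir_enc = torch.randn(1024, 16)
rgb_view_dep = ColorHead(use_view_dirs=True)(geo_feat, dir_enc)
rgb_view_indep = ColorHead(use_view_dirs=False)(geo_feat)
```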
Thanks for your great work! Is it correct that the llff/flower dataset still does not work in this repo? Thanks.
I use the command "ns-process-data images --data /data5/wuzhongkai/data/dreamfusion_data/llff/nerf_llff_data/flower --output-dir /data5/wuzhongkai/data/dreamfusion_data/llff/nerf_llff_data/flower --skip_colmap --colmap_model_path sparse/0 --skip_image_processing" to generate transforms.json, and then run CUDA_VISIBLE_DEVICES=$1 ns-train nerfacto --data /data/nerf_llff_data/flower/transforms.json \ --experiment-name llff/flower --vis...
When I run on a GPU, there is an error: how can I fix this without changing the code?
Hi. Can you tell me the hyperparameters you used? I trained the model but it does not converge.
I tested DDNM on arbitrary ImageNet images but did not get results as good as the demo. The command I use is: CUDA_VISIBLE_DEVICES=3 python main.py --resize_y --config confs/inet256.yml --path_y...
When using the script render_llff_video.py to render the video, the output looks very dark. The video is also discontinuous across views; is that expected?
How to test my own low-light images
DDIM
Is there any way to apply DDIM sampling to this pipeline? Thanks.
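For reference, a minimal sketch of a single DDIM update, assuming the usual epsilon-prediction parameterization; `model`, `alphas_cumprod`, and the timestep pairing are placeholders and not tied to this repo's code. With eta = 0 the step is deterministic, which is what usually makes DDIM sampling faster than ancestral DDPM sampling.

```python
import torch

@torch.no_grad()
def ddim_step(model, x_t, t, t_prev, alphas_cumprod, eta=0.0):
    """One DDIM step from timestep t to t_prev (eta=0 gives the deterministic sampler)."""
    a_t = alphas_cumprod[t]
    a_prev = alphas_cumprod[t_prev] if t_prev >= 0 else torch.tensor(1.0)

    eps = model(x_t, t)                                     # predicted noise
    x0_pred = (x_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()   # predicted clean image

    sigma = eta * ((1 - a_prev) / (1 - a_t)).sqrt() * (1 - a_t / a_prev).sqrt()
    dir_xt = (1 - a_prev - sigma ** 2).sqrt() * eps         # direction pointing back to x_t
    noise = sigma * torch.randn_like(x_t) if eta > 0 else 0.0
    return a_prev.sqrt() * x0_pred + dir_xt + noise
```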
How can I download metadata only?