Jing Zeng
I have a multi-room scenario, using 400 images for reconstruction with MonoSDF. The rendered novel viewpoints only achieve a PSNR of 21. How can I improve this?
Thanks for your nice work. When I run it in the scene office.pcd, the exploration trajectory encounters a collision. Why does this happen, and how can we resolve it?...
When I test on the scans_test split (not val, and without labels), `python visualization.py --prediction_path ../results/ --room_name scene0707_00 --task instant_pred --out ../results/scene0707_00_instant_pred.ply` outputs … but `python visualization.py --prediction_path ../results/...
When I run it on the room0 scene, the speed is about 0.5 fps, which is much lower than reported in the paper. Why is that?
I have a question about rendered image quality versus MonoSDF and other NeRF-based methods. On the Replica room0 dataset, is it possible to achieve a PSNR > 30 such...
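For context, PSNR is a direct function of the mean squared error, PSNR = 10·log10(MAX²/MSE), so PSNR 21 versus PSNR 30 is roughly an 8× difference in MSE. A minimal sketch of the computation (the `psnr` helper is illustrative, not from any of these repositories):

```python
import numpy as np

def psnr(rendered: np.ndarray, reference: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio between two images with values in [0, max_val]."""
    mse = np.mean((rendered.astype(np.float64) - reference.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

# PSNR = 21 corresponds to MSE ~ 7.9e-3 on [0, 1] images;
# PSNR = 30 requires MSE <= 1e-3.
```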
Thanks for your work, but I cannot find the code that computes the scale of the Omnidata depth described in the paper.
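For what it's worth, monocular depth priors such as Omnidata are only defined up to an unknown scale and shift, and MonoSDF-style pipelines align them to the rendered depth with a closed-form per-batch least-squares solve. A minimal sketch of that alignment, assuming flattened depth arrays; `align_depth` is a hypothetical helper, not the repository's actual code:

```python
import numpy as np

def align_depth(mono_depth: np.ndarray, rendered_depth: np.ndarray):
    """Solve min_{w, q} || w * mono_depth + q - rendered_depth ||^2 in closed form.

    Returns the scale w and shift q mapping the monocular (Omnidata) depth
    onto the rendered depth, e.g. over the pixels of one ray batch.
    """
    d = mono_depth.reshape(-1)
    t = rendered_depth.reshape(-1)
    # Least-squares system: [d, 1] @ [w, q]^T = t
    A = np.stack([d, np.ones_like(d)], axis=1)
    (w, q), *_ = np.linalg.lstsq(A, t, rcond=None)
    return w, q

# Usage: align the prior before computing a depth-consistency loss.
# w, q = align_depth(omnidata_depth, rendered_depth)
# aligned = w * omnidata_depth + q
```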
Does it support indoor scenes? Or are there limitations, such as network forgetting, when using larger multi-room scenes like the Matterport3D dataset shown below?
```
using scene_id: 0616_00 train_suffix: 0616_00_pretrained vis_suffix: 0616_00_pretrained mvs_suffix: mvs **kwargs: --casting
Traceback (most recent call last):
  File "scripts/train_pretrained.py", line 17, in <module>
    from scripts.generate_normal import ray_casting_depth_normal
  File "/mnt/dataset/zengjing/HelixSurf/scripts/generate_normal.py", line 9, in ...
```