Xi Chen
Hi, thanks for your interest in our work! It looks like the cache folder is missing; some pre-processing steps may have failed or been skipped. Could you post the structure of your...
Hi, it seems you didn't follow our data preprocessing instructions for your own dataset. You can simply use [this script](https://github.com/zju3dv/NeuralRecon-W/blob/main/scripts/preprocess_data.sh) to process your data.
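For reference, a minimal invocation might look like the sketch below; the arguments shown are assumptions, so check the header of `preprocess_data.sh` for the usage it actually expects.

```bash
# Hedged sketch: the arguments below are assumptions, not the script's documented
# interface -- see scripts/preprocess_data.sh in the repo for its real usage.
bash scripts/preprocess_data.sh /path/to/your/dataset your_scene_name
```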
> The `val_check_interval` error means that your epoch (9636 steps) is shorter than the interval at which you would like to run validation (10000).
>
> To fix this, change the `VAL_FREQ:...
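As a rough illustration of that fix (the surrounding structure in `config/train.yaml` is an assumption here; only the `VAL_FREQ` key comes from the reply above), the idea is to make the validation interval no larger than the number of training steps in one epoch:

```yaml
# Hedged sketch of config/train.yaml -- the TRAINER nesting is an assumption.
TRAINER:
  VAL_FREQ: 5000   # was 10000; must not exceed the ~9636 steps in one epoch
```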
Hi, thanks for your interest in our work! We did use depth supervision, but not the ground truth depth from LiDAR scans. The depth used for supervision is from SfM...
Hi, thanks for your interest in our work! The problem seems to be that no SfM keypoint exists in that image. The latest code has fixed this issue.
Hi, there are two things you can do to reduce GPU memory usage. First, set a smaller batch size [here](https://github.com/zju3dv/NeuralRecon-W/blob/main/scripts/train.sh#L18). Second, enlarge the voxel size in the [config](https://github.com/zju3dv/NeuralRecon-W/blob/main/config/train.yaml#L18); this can reduce...
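A rough sketch of both changes follows; the variable and key names are assumptions (only "batch size" and "voxel size" come from the reply), so match them to whatever appears at the linked lines.

```bash
# Hedged sketch -- names below are hypothetical; adapt them to the linked lines.

# 1) scripts/train.sh: lower the per-iteration ray batch size
batch_size=1024            # hypothetical variable name; e.g. halve the current value

# 2) config/train.yaml: enlarge the voxel size so the sparse volume has fewer voxels
#    VOXEL_SIZE: 0.5       # hypothetical key; a larger value reduces memory use
```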
Hi, thanks for your interest in our work! The folder "neuralsfm" is generated by LoFTR and is used for evaluation; it's not needed for your own datasets. As for the...
Hi, the error during training indicates that no surface has been reconstructed. If you are reconstructing an indoor scene, you should use [this](https://github.com/zju3dv/NeuralRecon-W/blob/main/config/train_indoor.yaml) config. If not, you can post your data...
Thanks for the reply. The SPC I use has over 10,000 points, and the ray origin is not inside any AABB, but it is inside the normalization range. For example,...
I found the same thing: each worker will endlessly repeat its share of the shards unless we set `with_epoch` to specify the epoch length.
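For anyone hitting this, here is a minimal sketch of the workaround, assuming this refers to WebDataset's `with_epoch`; the shard pattern, sample keys, and epoch length below are made up for illustration.

```python
import webdataset as wds

# Without with_epoch(), each DataLoader worker cycles over its share of the shards
# indefinitely; with_epoch(n) declares a nominal epoch of n samples so iteration stops.
dataset = (
    wds.WebDataset("shards/train-{000000..000099}.tar")  # hypothetical shard pattern
    .shuffle(1000)
    .decode("torchrgb")
    .to_tuple("jpg", "cls")      # hypothetical keys inside each sample
    .with_epoch(10000)           # nominal epoch length in samples (counted per worker)
)

loader = wds.WebLoader(dataset, batch_size=None, num_workers=4)

for image, label in loader:      # terminates after the nominal epoch instead of looping forever
    ...
```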