PLNet
The PyTorch implementation of our paper:
PLNet: Plane and Line Priors for Unsupervised Indoor Depth Estimation, 3DV 2021 (pdf)
Hualie Jiang, Laiyan Ding, Junjie Hu and Rui Huang

Preparation
Installation
Install PyTorch first by running
conda install pytorch=1.5.1 torchvision=0.6.1 cudatoolkit=10.1 -c pytorch
Then install the other requirements
pip install -r requirements.txt
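You can sanity-check the environment with a quick Python snippet (a minimal sketch; the expected versions are the ones pinned above):

import torch
import torchvision

# Expect 1.5.1 and 0.6.1, matching the pinned install above.
print(torch.__version__, torchvision.__version__)
# Should print True if the CUDA 10.1 build is installed correctly.
print(torch.cuda.is_available())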
Datasets & Preprocessing
Please download the preprocessed NYU-Depth-V2 dataset (sampled every 5 frames) provided by Junjie Hu and extract it.
Extract the superpixels and line segments by executing
python extract_superpixel.py --data_path $DATA_PATH
python extract_lineseg.py --data_path $DATA_PATH
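For reference, the two scripts roughly do the following. This is a hedged sketch, not the repo's exact code: the use of Felzenszwalb superpixels and OpenCV's LSD detector is an assumption (following P^2Net), and the actual scripts may use different parameters and output formats.

import cv2
import numpy as np
from skimage.segmentation import felzenszwalb

img = cv2.imread("frame.jpg")  # one NYU frame

# Graph-based superpixel segmentation (parameters are illustrative)
segments = felzenszwalb(img, scale=100, sigma=0.5, min_size=50)
np.save("frame_superpixel.npy", segments)

# Line segments via OpenCV's LSD detector (assumed)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
lsd = cv2.createLineSegmentDetector()
lines, _, _, _ = lsd.detect(gray)  # (N, 1, 4) arrays of x1, y1, x2, y2
np.save("frame_lineseg.npy", lines)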
Try an image
Run depth_prediction_example.ipynb with Jupyter Notebook.
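In essence, the notebook does the following. This is a minimal sketch assuming the Monodepth2-style networks module and checkpoint layout that this repo builds on; the file and key names are assumptions.

import torch
from PIL import Image
from torchvision import transforms
import networks  # Monodepth2-style encoder/decoder (assumption)

encoder = networks.ResnetEncoder(18, False)
decoder = networks.DepthDecoder(num_ch_enc=encoder.num_ch_enc)

# Monodepth2-style checkpoints store extra metadata keys (height, width), so filter them.
enc_dict = torch.load("models/plnet_5f/encoder.pth", map_location="cpu")
encoder.load_state_dict({k: v for k, v in enc_dict.items() if k in encoder.state_dict()})
decoder.load_state_dict(torch.load("models/plnet_5f/depth.pth", map_location="cpu"))
encoder.eval(); decoder.eval()

img = Image.open("example.jpg").convert("RGB")
x = transforms.ToTensor()(img.resize((enc_dict["width"], enc_dict["height"]))).unsqueeze(0)
with torch.no_grad():
    disp = decoder(encoder(x))[("disp", 0)]  # sigmoid disparity; the notebook rescales it to depth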
Training
Using 3 Frames
Here --frame_ids 0 -2 2 specifies the target frame (0) and the source frames at offsets -2 and +2:
python train.py --data_path $DATA_PATH --model_name plnet_3f --frame_ids 0 -2 2
Using 5 Frames
Initializing from the pretrained 3-frame model gives better results:
python train.py --data_path $DATA_PATH --model_name plnet_5f --load_weights_folder models/plnet_3f --frame_ids 0 -4 -2 2 4
Evaluation
The pretrained models of our paper are available on Google Drive.
NYU Depth Estimation
python evaluate_nyu_depth.py --data_path $DATA_PATH --load_weights_folder $MODEL_PATH
ScanNet Depth Estimation
python evaluate_scannet_depth.py --data_path $DATA_PATH --load_weights_folder $MODEL_PATH
ScanNet Pose Estimation
python evaluate_scannet_pose.py --data_path $DATA_PATH --load_weights_folder $MODEL_PATH --frame_ids 0 1
Note: to evaluate on ScanNet, one has to download the data preprocessed by P^2Net.
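For reference, depth evaluation follows the standard protocol for unsupervised models: predictions are first median-scaled to the ground truth (scale is ambiguous without supervision), then the usual error and accuracy metrics are computed. A sketch (the function name is illustrative, not the repo's exact code):

import numpy as np

def compute_depth_metrics(gt, pred):
    # Median scaling resolves the scale ambiguity of unsupervised models.
    pred = pred * np.median(gt) / np.median(pred)
    # Accuracy under thresholds 1.25, 1.25^2, 1.25^3
    thresh = np.maximum(gt / pred, pred / gt)
    a1, a2, a3 = [(thresh < 1.25 ** i).mean() for i in (1, 2, 3)]
    abs_rel = np.mean(np.abs(gt - pred) / gt)
    log10 = np.mean(np.abs(np.log10(gt) - np.log10(pred)))
    rmse = np.sqrt(np.mean((gt - pred) ** 2))
    return abs_rel, log10, rmse, a1, a2, a3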
Acknowledgements
The project borrows code from Monodepth2 and P^2Net. Many thanks to their authors.
Citation
Please cite our paper if you find our work useful in your research.
@inproceedings{jiang2021plnet,
title={PLNet: Plane and Line Priors for Unsupervised Indoor Depth Estimation},
author={Jiang, Hualie and Ding, Laiyan and Hu, Junjie and Huang, Rui},
booktitle={IEEE International Conference on 3D Vision (3DV)},
year={2021}
}