DIR

[ICCV 2023 Oral] Decoupled Iterative Refinement Framework for Interacting Hands Reconstruction from a Single RGB Image
Our method, DIR, achieves accurate and robust reconstruction of interacting hands. :open_book: For more visual results, check out our project page.
[Project Page] • [arXiv]
:mega: Updates
[10/2023] Released the pre-trained models 👏!
[07/2023] DIR is accepted to ICCV 2023 (Oral) :partying_face:!
:love_you_gesture: Citation
If you find our work useful for your research, please consider citing the paper:
```bibtex
@inproceedings{ren2023decoupled,
  title={Decoupled Iterative Refinement Framework for Interacting Hands Reconstruction from a Single RGB Image},
  author={Ren, Pengfei and Wen, Chao and Zheng, Xiaozheng and Xue, Zhou and Sun, Haifeng and Qi, Qi and Wang, Jingyu and Liao, Jianxin},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2023}
}
```
:desktop_computer: Data Preparation
- Download the necessary assets misc.tar.gz and unzip them.
- Download the InterHand2.6M dataset and unzip it.
- Process the dataset with the code provided by IntagHand:

```bash
python dataset/interhand.py --data_path PATH_OF_INTERHAND2.6M --save_path ./data/interhand2.6m/
```
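
After processing, you can sanity-check the output directory with a minimal sketch like the one below (the layout under `./data/interhand2.6m/` is an assumption, not a documented contract):

```python
# Hypothetical sanity check: count the files produced under the save path.
import os

save_path = "./data/interhand2.6m/"
for entry in sorted(os.listdir(save_path)):
    full = os.path.join(save_path, entry)
    if os.path.isdir(full):
        n_files = sum(len(files) for _, _, files in os.walk(full))
        print(f"{entry}: {n_files} files")
```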
:desktop_computer: Installation
Requirements
- Python >= 3.8
- PyTorch >= 1.10
- pytorch3d >= 0.7.0
- scikit-image==0.17.1
- timm==0.6.11
- trimesh==3.9.29
- openmesh==1.1.3
- pymeshlab==2021.7
- chumpy
- einops
- imgaug
- manopth
Setup with Conda
```bash
# create conda env
conda create -n dir python=3.8
conda activate dir
# install torch
pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 --extra-index-url https://download.pytorch.org/whl/cu113
# install pytorch3d
pip install fvcore iopath
pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/py38_cu113_pyt1110/download.html
# install other requirements
cd DIR
pip install -r ./requirements.txt
# install manopth
cd manopth
pip install -e .
```
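
To confirm the environment is usable, a quick check (assuming the installs above succeeded) is:

```python
# Verify the pinned PyTorch build, CUDA availability, and that
# pytorch3d and manopth import cleanly.
import torch
import pytorch3d
from manopth.manolayer import ManoLayer  # noqa: F401

print("torch:", torch.__version__)        # expect 1.11.0+cu113
print("cuda available:", torch.cuda.is_available())
print("pytorch3d:", pytorch3d.__version__)
```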
:train: Training
```bash
python train.py
```
:running_woman: Evaluation
Download the pre-trained models from Google Drive, then run:

```bash
python apps/eval_interhand.py --data_path ./interhand2.6m/ --model ./checkpoint/xxx
```
You can align to a different root joint by setting `root_joint` (0: wrist, 9: MCP); the sketch below illustrates what root alignment computes.
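
As a rough illustration (not the repository's actual evaluation code; array shapes and names here are assumptions), root-relative joint error subtracts the chosen root joint from both prediction and ground truth before averaging:

```python
# Sketch of root-aligned mean per-joint position error (MPJPE).
# Assumed shapes: (num_joints, 3) arrays in millimetres.
import numpy as np

def root_aligned_mpjpe(pred, gt, root_joint=0):
    """Mean Euclidean joint error after root alignment."""
    pred_rel = pred - pred[root_joint]  # make predictions root-relative
    gt_rel = gt - gt[root_joint]        # same for ground truth
    return np.linalg.norm(pred_rel - gt_rel, axis=-1).mean()

# Toy usage: two poses differing only by a global offset have zero
# root-aligned error.
gt = np.random.rand(21, 3) * 100.0
pred = gt + 50.0
print(root_aligned_mpjpe(pred, gt, root_joint=0))  # ~0.0
```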
With `root_joint=0` (wrist alignment), you should get the following output:
```text
joint mean error:
left: 10.74602734297514 mm, right: 9.60523635149002 mm
all: 10.17563184723258 mm
vert mean error:
left: 10.49137581139803 mm, right: 9.40467044711113 mm
all: 9.94802312925458 mm
pixel joint mean error:
left: 6.332123279571533 mm, right: 5.808280944824219 mm
all: 6.070201873779297 mm
pixel vert mean error:
left: 6.235969543457031 mm, right: 5.725381851196289 mm
all: 5.98067569732666 mm
root error: 28.983158990740776 mm
```
(We fixed some minor bugs, so the performance is higher than the values reported in the paper.)
:newspaper_roll: License
Distributed under the MIT License. See LICENSE for more information.
:raised_hands: Acknowledgements
The PyTorch implementation of MANO is based on manopth, and we use parts of the code from IntagHand. We thank the authors for their great work! A minimal manopth usage sketch follows.
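
As a hedged example of what manopth provides (the `mano_root` path and parameter values are assumptions; the MANO pickle files must be downloaded separately):

```python
# Minimal manopth sketch: generate a MANO hand mesh from random
# pose/shape parameters.
import torch
from manopth.manolayer import ManoLayer

ncomps = 6  # number of PCA pose components
mano_layer = ManoLayer(mano_root='mano/models', use_pca=True,
                       ncomps=ncomps, side='right')

batch_size = 1
pose = torch.rand(batch_size, ncomps + 3)  # +3 for global rotation
shape = torch.rand(batch_size, 10)         # MANO shape coefficients

verts, joints = mano_layer(pose, shape)    # (1, 778, 3), (1, 21, 3), in mm
print(verts.shape, joints.shape)
```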