SSAN
Semantically Self-Aligned Network for Text-to-Image Part-aware Person Re-identification
We provide the code for reproducing the results of our paper *Semantically Self-Aligned Network for Text-to-Image Part-aware Person Re-identification*.
Getting Started
Dataset Preparation
- CUHK-PEDES

  Organize them in the `dataset` folder as follows:

  ```
  |-- dataset/
  |   |-- <CUHK-PEDES>/
  |       |-- imgs
  |           |-- cam_a
  |           |-- cam_b
  |           |-- ...
  |       |-- reid_raw.json
  ```

  Download the CUHK-PEDES dataset from here, then run `process_CUHK_data.py` as follows:

  ```
  cd SSAN
  python ./dataset/process_CUHK_data.py
  ```

- ICFG-PEDES

  Organize them in the `dataset` folder as follows:

  ```
  |-- dataset/
  |   |-- <ICFG-PEDES>/
  |       |-- imgs
  |           |-- test
  |           |-- train
  |       |-- ICFG_PEDES.json
  ```

  Note that our ICFG-PEDES is collected from MSMT17, so we keep its storage structure to avoid losing information such as camera labels and shooting times. The `test` and `train` folders here therefore do not reflect the split of ICFG-PEDES; the exact split is defined by `ICFG_PEDES.json`, which is organized like `reid_raw.json` in CUHK-PEDES. Please request the ICFG-PEDES database from [email protected], then run `process_ICFG_data.py` as follows:

  ```
  cd SSAN
  python ./dataset/process_ICFG_data.py
  ```
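Before running the preprocessing scripts, it can help to verify that the folder layout matches the trees above. Below is a minimal sketch (not part of this repository) that checks for the expected files; the `check_dataset` helper and its `root` default are assumptions, so adjust them if your dataset root differs:

```python
import os

# Expected entries for each dataset, taken from the directory trees above.
EXPECTED = {
    "CUHK-PEDES": ["imgs", "reid_raw.json"],
    "ICFG-PEDES": ["imgs", "ICFG_PEDES.json"],
}

def check_dataset(root="dataset"):
    """Return a list of expected paths missing under `root` for both datasets."""
    missing = []
    for name, entries in EXPECTED.items():
        for entry in entries:
            path = os.path.join(root, name, entry)
            if not os.path.exists(path):
                missing.append(path)
    return missing

if __name__ == "__main__":
    missing = check_dataset()
    if missing:
        print("Missing paths:")
        for path in missing:
            print("  " + path)
    else:
        print("Dataset layout looks correct.")
```

If the script reports missing paths, re-check the organization above before running `process_CUHK_data.py` or `process_ICFG_data.py`.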
Training and Testing
```
sh experiments/CUHK-PEDES/train.sh
sh experiments/ICFG-PEDES/train.sh
```
Evaluation
```
sh experiments/CUHK-PEDES/test.sh
sh experiments/ICFG-PEDES/test.sh
```
Results on CUHK-PEDES and ICFG-PEDES
Our Results on the CUHK-PEDES dataset
Our Results on the ICFG-PEDES dataset
Citation
If this work is helpful for your research, please cite our paper:
@article{ding2021semantically,
title={Semantically Self-Aligned Network for Text-to-Image Part-aware Person Re-identification},
author={Ding, Zefeng and Ding, Changxing and Shao, Zhiyin and Tao, Dacheng},
journal={arXiv preprint arXiv:2107.12666},
year={2021}
}