RAS_ECCV18
Code for the ECCV 2018 paper "Reverse Attention for Salient Object Detection"
RAS
This code is for the paper "Reverse Attention for Salient Object Detection".
PyTorch Version
A PyTorch version is available here.
Citation
@inproceedings{chen2018eccv,
  author={Chen, Shuhan and Tan, Xiuli and Wang, Ben and Hu, Xuelong},
  booktitle={European Conference on Computer Vision},
  title={Reverse Attention for Salient Object Detection},
  year={2018}
}

@article{chen2020tip,
  author={Chen, Shuhan and Tan, Xiuli and Wang, Ben and Lu, Huchuan and Hu, Xuelong and Fu, Yun},
  journal={IEEE Transactions on Image Processing},
  title={Reverse Attention Based Residual Network for Salient Object Detection},
  volume={29},
  pages={3763-3776},
  year={2020}
}
Installing
- Install prerequisites for Caffe (http://caffe.berkeleyvision.org/installation.html#prerequisites).
- Build DSS [1] with cuDNN v5.1 for acceleration by enabling it in Makefile.config before compiling:
USE_CUDNN := 1
In the following, the root directory of DSS is denoted $DSS.
- Copy the folder RAS to $DSS/example/.
Training
- Prepare the training dataset and its corresponding data list (a sketch for generating the list is given after the commands below).
- Download the pre-trained VGG model (VGG-16) and copy it to $DSS/example/RAS/.
- Change the dataset path in $DSS/example/RAS/train.prototxt.
- Run solve.py in a shell (or use an IDE such as Eclipse):
cd $DSS/example/RAS/
python solve.py
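The data list is a plain text file that pairs each training image with its ground-truth mask. Below is a minimal sketch for generating it, assuming the DSS-style format of one "image_path label_path" pair per line; the directory names are hypothetical, and the exact format should be verified against the data layer in train.prototxt.

import os

# Hypothetical layout; adjust to your dataset (e.g. MSRA-B).
IMG_DIR = "MSRA-B/images"      # input images (*.jpg)
GT_DIR = "MSRA-B/annotation"   # ground-truth masks (*.png)
LIST_FILE = "MSRA-B/train_list.txt"

with open(LIST_FILE, "w") as f:
    for name in sorted(os.listdir(IMG_DIR)):
        stem, ext = os.path.splitext(name)
        if ext.lower() != ".jpg":
            continue
        gt_path = os.path.join(GT_DIR, stem + ".png")
        if os.path.exists(gt_path):
            # One "image_path label_path" pair per line (DSS-style);
            # verify against the data layer in train.prototxt.
            f.write("{} {}\n".format(os.path.join(IMG_DIR, name), gt_path))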
Testing
- Change the dataset path in $DSS/example/RAS/RAS-tutorial_save.ipynb.
- Run jupyter notebook RAS-tutorial_save.ipynb.
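For reference, the notebook essentially performs standard Caffe inference and writes each predicted map to disk. A minimal sketch is below; the deploy/weights file names and the output blob name 'sigmoid-fuse' are assumptions that should be checked against the actual files in $DSS/example/RAS/.

import numpy as np
from PIL import Image
import caffe

caffe.set_mode_gpu()

# Hypothetical file names; use the actual deploy prototxt and
# trained weights shipped with RAS.
net = caffe.Net('deploy.prototxt', 'ras.caffemodel', caffe.TEST)

im = np.array(Image.open('example.jpg'), dtype=np.float32)
im = im[:, :, ::-1]                                  # RGB -> BGR
im -= np.array((104.00699, 116.66877, 122.67892))    # ImageNet mean (BGR)
im = im.transpose((2, 0, 1))                         # HWC -> CHW

net.blobs['data'].reshape(1, *im.shape)
net.blobs['data'].data[...] = im
net.forward()

# 'sigmoid-fuse' is a guess at the fused prediction blob; check the
# blob names in the deploy prototxt.
sal = net.blobs['sigmoid-fuse'].data[0][0]
Image.fromarray((sal * 255).astype(np.uint8)).save('example_sal.png')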
Evaluation
We use the code of [1] for evaluation.
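The reported metrics (MAE and F-measure) are standard in salient object detection; if you only need a quick sanity check rather than the full toolbox of [1], a minimal NumPy sketch of the conventional definitions is below (this is not the evaluation code of [1]).

import numpy as np

def mae(sal, gt):
    # Mean absolute error; both maps normalized to [0, 1].
    return np.abs(sal - gt).mean()

def adaptive_fmeasure(sal, gt, beta2=0.3):
    # F-measure at the adaptive threshold (twice the mean saliency),
    # with beta^2 = 0.3 as is conventional in the SOD literature.
    thresh = min(2.0 * sal.mean(), 1.0)
    pred = sal >= thresh
    mask = gt >= 0.5
    tp = np.logical_and(pred, mask).sum()
    precision = tp / (pred.sum() + 1e-8)
    recall = tp / (mask.sum() + 1e-8)
    return (1 + beta2) * precision * recall / (beta2 * precision + recall + 1e-8)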
Pre-trained RAS model
Pre-trained RAS model on MSRA-B: Baidu drive (fetch code: h7qj) and Google drive.
Note that this released model is newly trained and is slightly different from the one reported in our paper.
Saliency Maps
ECCV 2018: the saliency maps on 7 datasets are available at Baidu drive (fetch code: zin5) and Google drive.
TIP 2020: the saliency maps on 6 datasets are available at Google drive.
Reference
[1] Hou, Q., Cheng, M.M., Hu, X., Borji, A., Tu, Z., Torr, P.: Deeply supervised salient object detection with short connections. In: CVPR. (2017) 5300–5309.