DATFuse: Infrared and Visible Image Fusion via Dual Attention Transformer (IEEE TCSVT 2023)
This is the official PyTorch implementation of the DATFuse model proposed in the paper "DATFuse: Infrared and Visible Image Fusion via Dual Attention Transformer".
Comparison with SOTA methods
Fusion results on TNO dataset

Fusion results on RoadScene dataset

Ablation study on network structure

Ablation study on the number of TRMs

Ablation study on the second DARM

Impact of weight parameters in the loss function
Impact of weight parameter α on fusion performance, with λ and γ fixed at 100 and 10, respectively.

Impact of weight parameter λ on fusion performance, with α and γ fixed at 1 and 10, respectively.

Impact of weight parameter γ on fusion performance, with α and λ fixed at 1 and 100, respectively.

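For orientation, below is a minimal, hypothetical PyTorch sketch of how a total training loss could combine three terms with the weights α, λ, and γ studied above (defaults α = 1, λ = 100, γ = 10). The specific term definitions (pixel intensity, gradient, and a simple contrast term) are illustrative assumptions only; see the paper for the actual loss formulation.

```python
import torch
import torch.nn.functional as F

def grad_map(img):
    # Finite-difference gradient magnitude, padded back to the input size (illustrative).
    dx = F.pad(img[..., :, 1:] - img[..., :, :-1], (0, 1))
    dy = F.pad(img[..., 1:, :] - img[..., :-1, :], (0, 0, 0, 1))
    return dx.abs() + dy.abs()

def total_loss(fused, ir, vis, alpha=1.0, lam=100.0, gamma=10.0):
    # NOTE: the three terms below are assumptions for illustration, not the paper's exact losses.
    # Intensity term: keep the fused image close to the element-wise max of the inputs.
    loss_pixel = F.l1_loss(fused, torch.max(ir, vis))
    # Texture term: preserve the stronger edge response of the two source images.
    loss_grad = F.l1_loss(grad_map(fused), torch.max(grad_map(ir), grad_map(vis)))
    # Contrast term: a crude stand-in for a structural-similarity-style loss.
    loss_struct = F.l1_loss(
        fused.flatten(start_dim=-2).var(dim=-1),
        torch.max(ir.flatten(start_dim=-2).var(dim=-1), vis.flatten(start_dim=-2).var(dim=-1)),
    )
    return alpha * loss_pixel + lam * loss_grad + gamma * loss_struct

# Example with dummy single-channel inputs (batch of 4, 128x128 images):
fused = torch.rand(4, 1, 128, 128, requires_grad=True)
ir, vis = torch.rand(4, 1, 128, 128), torch.rand(4, 1, 128, 128)
total_loss(fused, ir, vis).backward()
```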
Computational efficiency comparisons
Average running time for generating a fused image (Unit: seconds)
| Method | TNO Dataset | RoadScene Dataset |
|---|---|---|
| MDLatLRR | 26.0727 | 11.7310 |
| AUIF | 0.1119 | 0.0726 |
| DenseFuse | 0.5663 | 0.3190 |
| FusionGAN | 2.6796 | 1.1442 |
| GANMcC | 5.6752 | 2.3813 |
| RFN_Nest | 2.3096 | 0.9423 |
| CSF | 10.3311 | 5.5395 |
| MFEIF | 0.0793 | 0.0494 |
| PPTFusion | 1.4150 | 0.8656 |
| SwinFuse | 3.2687 | 1.6478 |
| DATFuse | 0.0257 | 0.0141 |
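For context on how per-image running times like these are typically measured, here is a small, hypothetical timing sketch in PyTorch. The single-convolution model and random image pairs are stand-ins, not part of this repository, and the CUDA synchronization calls only matter when measuring on a GPU.

```python
import time
import torch

# Stand-in "fusion model" and dummy infrared/visible pairs (not DATFuse or its test data).
model = torch.nn.Conv2d(2, 1, kernel_size=3, padding=1).eval()
test_pairs = [(torch.rand(1, 1, 256, 256), torch.rand(1, 1, 256, 256)) for _ in range(10)]

times = []
with torch.no_grad():
    for ir, vis in test_pairs:
        if torch.cuda.is_available():
            torch.cuda.synchronize()                  # finish pending GPU work before timing
        start = time.time()
        fused = model(torch.cat([ir, vis], dim=1))    # produce one fused image per pair
        if torch.cuda.is_available():
            torch.cuda.synchronize()                  # wait for the forward pass to complete
        times.append(time.time() - start)

print(f"Average running time per fused image: {sum(times) / len(times):.4f} s")
```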
Cite the paper
If this work is helpful to you, please cite it as:
@ARTICLE{Tang_2023_DATFuse,
  author={Tang, Wei and He, Fazhi and Liu, Yu and Duan, Yansong and Si, Tongzhen},
  journal={IEEE Transactions on Circuits and Systems for Video Technology},
  title={DATFuse: Infrared and Visible Image Fusion via Dual Attention Transformer},
  year={2023},
  volume={33},
  number={7},
  pages={3159-3172},
  doi={10.1109/TCSVT.2023.3234340}
}
If you have any questions, feel free to contact me ([email protected]).