Infrared and Visible Image Fusion Method Based on Multi-Scale Features and Multi-Head Attention
Abstract
To address detail loss and the imbalance between visible detail features and infrared (IR) target features in fused infrared and visible images, this study proposes a fusion method that combines multiscale feature fusion with efficient multi-head self-attention (EMSA). The method comprises three key components. 1) Multiscale encoding network: a multiscale encoder extracts multilevel features, enhancing the descriptive capability of the scene representation. 2) Fusion strategy: transformer-based EMSA is combined with dense residual blocks to balance local details against the overall structure during fusion. 3) Nested-connection decoding network: the multilevel fused feature maps are fed into a decoder with nested connections to reconstruct the fused result, yielding prominent IR targets and rich scene details. Extensive experiments on the public TNO and M3FD datasets demonstrate the efficacy of the proposed method, which achieves superior results in both quantitative metrics and visual comparisons and delivers state-of-the-art performance on downstream target detection. By effectively preserving detail information and balancing visible and IR features, the proposed approach improves fusion quality and serves as a strong reference for infrared and visible image fusion.
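The abstract does not spell out the fusion block, so the following is a minimal PyTorch sketch of what an EMSA-style fusion module at a single encoder scale might look like. Everything here is an assumption for illustration: the class name EMSAFusion, the sr_ratio parameter, the element-wise summing of IR and visible features before attention, and the strided-convolution downsampling of keys/values (borrowed from the general efficient-attention idea) are not the authors' implementation.

```python
import torch
import torch.nn as nn


class EMSAFusion(nn.Module):
    """Hypothetical efficient multi-head self-attention fusion block.

    Sketch only: keys/values are spatially downsampled with a strided
    convolution before attention to reduce cost, in the spirit of EMSA;
    the paper's exact block (and its coupling with dense residual
    blocks) may differ.
    """

    def __init__(self, channels: int, heads: int = 4, sr_ratio: int = 2):
        super().__init__()
        # Strided conv shrinks the key/value spatial grid by sr_ratio.
        self.sr = nn.Conv2d(channels, channels, kernel_size=sr_ratio, stride=sr_ratio)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm_q = nn.LayerNorm(channels)
        self.norm_kv = nn.LayerNorm(channels)

    def forward(self, ir_feat: torch.Tensor, vis_feat: torch.Tensor) -> torch.Tensor:
        # Naive pre-fusion of the two modalities (assumption).
        x = ir_feat + vis_feat
        b, c, h, w = x.shape
        # Queries at full resolution: (B, H*W, C).
        q = x.flatten(2).transpose(1, 2)
        # Downsampled keys/values: (B, H/r * W/r, C).
        kv = self.sr(x).flatten(2).transpose(1, 2)
        out, _ = self.attn(self.norm_q(q), self.norm_kv(kv), self.norm_kv(kv))
        # Restore the spatial layout and add a residual connection.
        return out.transpose(1, 2).reshape(b, c, h, w) + x


if __name__ == "__main__":
    # Toy usage: fuse one scale of IR/visible feature maps.
    fuse = EMSAFusion(channels=64, heads=4)
    ir = torch.randn(1, 64, 32, 32)
    vis = torch.randn(1, 64, 32, 32)
    fused = fuse(ir, vis)
    print(fused.shape)  # torch.Size([1, 64, 32, 32])
```

In a full multiscale pipeline, one such block would sit at each encoder level, and the per-level fused maps would then feed the nested-connection decoder described above.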