Abstract:
The purpose of infrared and visible (Inf-Vis) image fusion is to merge images from two cameras operating at different wavelengths into a single fused image, yielding more information and richer visual content than either image alone can provide. To better extract local key information and reduce redundancy, this article proposes a hybrid attention-based fusion algorithm for illumination-aware Inf-Vis images (HAIAFusion). Specifically, we introduce a DenseNet-201-based illumination-aware subnetwork, and we propose a multimodal differential perception fusion module based on hybrid attention (MDP-HA) that cascades channel attention, position attention, and corner attention (COA). Extensive experiments show that our algorithm outperforms state-of-the-art (SOTA) methods: the fused images are richer in information, have higher contrast, are closer to human visual perception, and better preserve the source images' edge information. These results demonstrate the significant potential of the HAIAFusion algorithm in the domain of illumination-aware Inf-Vis image fusion, offering notable performance gains for related visual tasks. Our code will be available at https://github.com/sunyichen1994/HAIAFusion.git.
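The abstract describes cascading channel attention into position (spatial) attention. The paper's exact MDP-HA module is not reproduced here; the following is a minimal NumPy sketch of the general cascade idea only, with simplified squeeze-and-excitation-style gates, and all function names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x):
    # x: (C, H, W) feature map; gate each channel by its
    # global-average-pooled response (SE-style, simplified).
    w = sigmoid(x.mean(axis=(1, 2)))       # (C,) channel weights in (0, 1)
    return x * w[:, None, None]

def position_attention(x):
    # Gate each spatial location (h, w) by its mean activation
    # across channels (a simplified spatial attention map).
    m = sigmoid(x.mean(axis=0))            # (H, W) spatial weights
    return x * m[None, :, :]

def cascaded_attention(x):
    # Cascade: channel-refined features feed the position stage,
    # mirroring the sequential (not parallel) arrangement described.
    return position_attention(channel_attention(x))

feat = np.random.rand(4, 8, 8)             # toy (C, H, W) feature map
out = cascaded_attention(feat)             # same shape, re-weighted
```

Since every gate lies in (0, 1), the cascade re-weights rather than amplifies activations; a learned version would replace the fixed pooling with trainable projections.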
Source: IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT
ISSN: 0018-9456
Year: 2025
Volume: 74
Impact Factor: 5.600 (JCR 2022)
Cited Count:
WoS CC Cited Count: 1
SCOPUS Cited Count: 1
ESI Highly Cited Papers on the List: 0