An Infrared and Visible Image Fusion Network Based on Res2Net and Multiscale Transformer.

Authors: Binxi Tan, Bin Yang

Language: English

Classification: 171.8 Systems based on altruism

Published: Switzerland: Sensors (Basel, Switzerland), 2025

Collection: NCBI

ID: 79603

The aim of infrared and visible image fusion is to produce a composite image that highlights infrared targets while preserving rich texture details. Despite the promising fusion performance of current deep-learning-based algorithms, most depend heavily on convolution operations, which limits their ability to represent long-range contextual information. To overcome this challenge, we design a novel infrared and visible image fusion network based on Res2Net and a multiscale Transformer, called RMTFuse. Specifically, we devise a local feature extraction module based on Res2Net (LFE-RN), in which dense connections are adopted to reuse information that might otherwise be lost in convolution operations, and a global feature extraction module based on a multiscale Transformer (GFE-MT), composed of a Transformer module and a global feature integration module (GFIM). The Transformer module extracts coarse-to-fine semantic features of the source images, while the GFIM further aggregates the hierarchical features to strengthen contextual feature representations. Furthermore, we employ the pre-trained VGG-16 network to compute the loss over features at different depths. Extensive experiments on mainstream datasets indicate that RMTFuse is superior to state-of-the-art methods in both subjective and objective assessments.
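
The abstract names the building blocks but gives no implementation details. Below is a minimal PyTorch sketch of how such a two-branch design (a Res2Net-style local branch, a multiscale-attention global branch with a small aggregation head, and a multi-depth VGG-16 feature loss) might be wired. All class names, channel widths, scale counts, and layer indices are illustrative assumptions, not the authors' actual RMTFuse implementation.

# Illustrative sketch only: every module below (channel widths, scale
# counts, VGG layer indices) is assumed, not taken from the paper.
import torch
import torch.nn as nn
import torchvision.models as models


class LocalBranch(nn.Module):
    """Stand-in for LFE-RN: Res2Net-style channel splits with a dense
    (concatenative) skip so stem features are reused downstream."""
    def __init__(self, ch=32, scales=4):
        super().__init__()
        self.stem = nn.Conv2d(1, ch, 3, padding=1)
        split = ch // scales
        self.convs = nn.ModuleList(
            nn.Conv2d(split, split, 3, padding=1) for _ in range(scales - 1))
        self.fuse = nn.Conv2d(ch * 2, ch, 1)  # dense reuse: stem + splits

    def forward(self, x):
        s = self.stem(x)
        parts = list(torch.chunk(s, len(self.convs) + 1, dim=1))
        out, y = [parts[0]], parts[0]
        for conv, p in zip(self.convs, parts[1:]):
            y = conv(p + y)  # hierarchical residual-style scale mixing
            out.append(y)
        return self.fuse(torch.cat([s, torch.cat(out, dim=1)], dim=1))


class GlobalBranch(nn.Module):
    """Stand-in for GFE-MT: shared self-attention at two scales, then a
    1x1 conv playing the role of GFIM to aggregate the hierarchy."""
    def __init__(self, ch=32):
        super().__init__()
        self.attn = nn.TransformerEncoderLayer(
            d_model=ch, nhead=4, dim_feedforward=ch * 2, batch_first=True)
        self.pool = nn.AvgPool2d(2)
        self.gfim = nn.Conv2d(ch * 2, ch, 1)

    def _tokens(self, f):
        b, c, h, w = f.shape
        t = self.attn(f.flatten(2).transpose(1, 2))  # pixels as tokens
        return t.transpose(1, 2).reshape(b, c, h, w)

    def forward(self, f):
        fine = self._tokens(f)
        coarse = nn.functional.interpolate(
            self._tokens(self.pool(f)), size=f.shape[-2:],
            mode="bilinear", align_corners=False)
        return self.gfim(torch.cat([fine, coarse], dim=1))


class RMTFuseSketch(nn.Module):
    """Two-branch fusion: one local CNN branch per modality, a shared
    global branch, and a small reconstruction head."""
    def __init__(self, ch=32):
        super().__init__()
        self.local_ir, self.local_vis = LocalBranch(ch), LocalBranch(ch)
        self.glob = GlobalBranch(ch)
        self.recon = nn.Sequential(
            nn.Conv2d(ch * 3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, ir, vis):
        li, lv = self.local_ir(ir), self.local_vis(vis)
        g = self.glob(li + lv)
        return self.recon(torch.cat([li, lv, g], dim=1))


def vgg16_feature_loss(fused, target, layers=(3, 8, 15)):
    """Feature loss at several VGG-16 depths (relu1_2/2_2/3_3); the
    backbone is rebuilt per call only for brevity, cache it in practice."""
    vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
    for p in vgg.parameters():
        p.requires_grad_(False)
    f, t = fused.repeat(1, 3, 1, 1), target.repeat(1, 3, 1, 1)
    loss = 0.0
    for i, layer in enumerate(vgg):
        f, t = layer(f), layer(t)
        if i in layers:
            loss = loss + nn.functional.mse_loss(f, t)
    return loss


# Quick shape check on random single-channel inputs:
net = RMTFuseSketch()
ir, vis = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
fused = net(ir, vis)
print(fused.shape, vgg16_feature_loss(fused, vis).item())

The summation over several VGG-16 depths mirrors the abstract's idea of supervising features at different depths: shallow layers constrain texture detail while deeper layers constrain semantic content.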