Self-attention fusion and adaptive continual updating for multimodal federated learning with heterogeneous data.


Authors: Zhen Ding, Xinhui Ji, Zhiguo Wang, Kangning Yin

Language: English

Classification: 133.594 Types or schools of astrology originating in or associated with a

Publication information: United States : Neural networks : the official journal of the International Neural Network Society, 2025

Physical description:

Collection: NCBI

ID: 718421

Federated learning (FL) enables collaborative model training without direct data sharing, facilitating knowledge exchange while ensuring data privacy. Multimodal federated learning (MFL) is particularly advantageous for decentralized multimodal data, effectively managing heterogeneous information across modalities. However, the diversity in environments and data collection methods among participating devices introduces substantial challenges due to non-independent and identically distributed (non-IID) data. Our experiments reveal that, despite the theoretical benefits of multimodal data, MFL under non-IID conditions often performs poorly, even trailing traditional unimodal FL approaches. Additionally, MFL frequently encounters missing-modality issues, further complicating the training process. To address these challenges, we propose several improvements: the federated self-attention multimodal (FSM) feature fusion method and the multimodal federated learning adaptive continual update (FedMAC) algorithm. Moreover, we utilize a Stable Diffusion model to mitigate the impact of a missing image modality. Extensive experimental results demonstrate that our proposed methods outperform other state-of-the-art FL algorithms, enhancing both accuracy and robustness in MFL.
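The abstract does not specify how FSM fuses modality features; the paper's method is not reproduced here. As a hedged illustration of the general idea behind self-attention-based multimodal fusion, the sketch below treats each modality's embedding as a token, applies scaled dot-product self-attention across the tokens (with identity projections standing in for learned Q/K/V weights, an assumption for brevity), and mean-pools the attended tokens into one fused representation. The feature vectors `image_feat` and `text_feat` are hypothetical values, not from the paper.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention_fuse(tokens):
    """Scaled dot-product self-attention over modality tokens.

    Each token is a feature vector from one modality. Identity
    projections stand in for learned Q/K/V weights, so this is
    only a structural sketch of attention-based fusion.
    """
    d = len(tokens[0])
    scale = math.sqrt(d)
    attended = []
    for q in tokens:
        # Similarity of this modality's query to every modality's key.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / scale for k in tokens]
        weights = softmax(scores)
        # Weighted sum of value vectors per feature dimension.
        out = [sum(w * v[j] for w, v in zip(weights, tokens)) for j in range(d)]
        attended.append(out)
    # Mean-pool attended tokens into a single multimodal representation.
    return [sum(tok[j] for tok in attended) / len(attended) for j in range(d)]

image_feat = [0.2, 0.8, 0.1, 0.5]   # hypothetical image embedding
text_feat = [0.6, 0.1, 0.9, 0.3]    # hypothetical text embedding
fused = self_attention_fuse([image_feat, text_feat])
```

In a trained model the Q/K/V projections would be learned per modality, letting the attention weights adapt to non-IID client data rather than being fixed by raw feature similarity.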
