DenseFormer-MoE: A Dense Transformer Foundation Model with Mixture of Experts for Multi-Task Brain Image Analysis.


Authors: Rizhi Ding, Manhua Liu, Hui Lu

Language: English

Classification: 691.99 Adhesives and sealants

Publication information: United States: IEEE Transactions on Medical Imaging, 2025

Physical description:

Collection: NCBI

ID: 707174

Deep learning models have been widely investigated for computing and analyzing brain images across various downstream tasks such as disease diagnosis and age regression. Most existing models are tailored to specific tasks and diseases, which makes it challenging to develop a foundation model that serves diverse tasks. This paper proposes a Dense Transformer Foundation Model with Mixture of Experts (DenseFormer-MoE), which integrates a dense convolutional network, a Vision Transformer, and a Mixture of Experts (MoE) to progressively learn and consolidate local and global features from T1-weighted magnetic resonance images (sMRI) for multiple tasks, including diagnosing multiple brain diseases and predicting brain age. First, a foundation model is built by combining the Vision Transformer with DenseNet, which are pre-trained with a Masked Autoencoder and self-supervised learning to enhance the generalization of feature representations. Then, to mitigate optimization conflicts in multi-task learning, the MoE is designed to dynamically select the most appropriate experts for each task. Finally, the method is evaluated on multiple well-known brain imaging datasets, including the UK Biobank (UKB), the Alzheimer's Disease Neuroimaging Initiative (ADNI), and the Parkinson's Progression Markers Initiative (PPMI). Experimental results and comparisons demonstrate that the method achieves promising performance in brain age prediction and brain disease diagnosis.
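To make the described architecture concrete, below is a minimal PyTorch sketch of the general idea in the abstract: a dense convolutional encoder produces local features, a Transformer encoder consolidates them globally, and a Mixture-of-Experts layer routes the pooled representation to task-specific heads (e.g., disease diagnosis and brain age regression). All layer sizes, the soft gating scheme, the pooling strategy, and the class/module names here are illustrative assumptions, not the authors' implementation, and the MAE/self-supervised pre-training stage is omitted.

```python
# Illustrative sketch only: assumed sizes and modules, not the published DenseFormer-MoE code.
import torch
import torch.nn as nn


class DenseBlock(nn.Module):
    """Simplified dense block: each layer receives all previous feature maps."""
    def __init__(self, in_ch, growth=16, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Sequential(
                nn.BatchNorm3d(in_ch + i * growth),
                nn.ReLU(inplace=True),
                nn.Conv3d(in_ch + i * growth, growth, kernel_size=3, padding=1),
            )
            for i in range(n_layers)
        )

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)  # dense concatenation of all features


class MoEHead(nn.Module):
    """Soft gate over a small pool of experts, with one output head per task (assumed design)."""
    def __init__(self, dim, n_experts=4, out_dims=(2, 1)):
        super().__init__()
        self.gate = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.GELU()) for _ in range(n_experts)
        )
        # e.g. task 0: 2-way disease diagnosis; task 1: brain age regression
        self.task_heads = nn.ModuleList(nn.Linear(dim, d) for d in out_dims)

    def forward(self, x, task_id):
        weights = torch.softmax(self.gate(x), dim=-1)                   # (B, E)
        expert_out = torch.stack([e(x) for e in self.experts], dim=1)   # (B, E, D)
        mixed = (weights.unsqueeze(-1) * expert_out).sum(dim=1)         # (B, D)
        return self.task_heads[task_id](mixed)


class DenseFormerMoESketch(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, stride=2, padding=1),
            DenseBlock(16),
            nn.AdaptiveAvgPool3d(4),   # coarse 4x4x4 grid -> 64 tokens
        )
        cnn_ch = 16 + 3 * 16           # channels after the dense block
        self.proj = nn.Linear(cnn_ch, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.moe = MoEHead(dim)

    def forward(self, mri, task_id=0):
        f = self.cnn(mri)                          # (B, C, 4, 4, 4)
        tokens = f.flatten(2).transpose(1, 2)      # (B, 64, C)
        tokens = self.transformer(self.proj(tokens))
        pooled = tokens.mean(dim=1)                # global average over tokens
        return self.moe(pooled, task_id)


if __name__ == "__main__":
    model = DenseFormerMoESketch()
    dummy_mri = torch.randn(2, 1, 64, 64, 64)      # toy stand-in for a T1-weighted volume
    print(model(dummy_mri, task_id=0).shape)       # diagnosis logits: (2, 2)
    print(model(dummy_mri, task_id=1).shape)       # brain-age output: (2, 1)
```

Routing each task through a shared gate and expert pool, as sketched above, is one way the MoE component can reduce gradient conflicts between heterogeneous objectives such as classification and regression; the paper's exact gating and expert-selection strategy may differ.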
