Dual-Uncertainty Guided Multimodal MRI-Based Visual Pathway Extraction.


Authors: Zan Chen, Alou Diakite, Cheng Li, Yousuf Babiker M Osman, Yiang Pan, Tao Tan, Shanshan Wang, Jiawei Zhang, Hairong Zheng

Language: English

Classification: 342.085 *Rights and activities of individuals

Publication: United States : IEEE Transactions on Biomedical Engineering, 2025

Physical description:

Collection: NCBI

ID: 753660

OBJECTIVE: This study aims to accurately extract the visual pathway (VP) from multimodal MR images while minimizing reliance on extensive labeled data and enhancing extraction performance.

METHOD: We propose a novel approach that incorporates a Modality-Relevant Feature Extraction Module (MRFEM) to effectively extract essential features from T1-weighted and fractional anisotropy (FA) images. Additionally, we implement a mean-teacher model integrated with dual uncertainty-aware ambiguity identification (DUAI) to enhance the reliability of the VP extraction process.

RESULTS: Experiments on the Human Connectome Project (HCP) and Multi-Shell Diffusion MRI (MDM) datasets demonstrate that our method reduces annotation effort by at least one-third compared to fully supervised techniques while outperforming six state-of-the-art semi-supervised methods.

CONCLUSION: The proposed label-efficient approach alleviates the burden of manual annotation and improves the accuracy of multimodal MRI-based VP extraction.

SIGNIFICANCE: This work facilitates more efficient and accurate visual pathway extraction, thereby improving the analysis and understanding of complex brain structures with reduced reliance on expert annotation.
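To make the semi-supervised mechanism described in the abstract concrete, the sketch below shows a generic mean-teacher setup: the teacher is an exponential moving average (EMA) of the student, and an uncertainty map from Monte Carlo dropout is used to mask the consistency loss on unlabeled volumes. This is an illustrative sketch only, not the paper's MRFEM or DUAI implementation; the toy network `StudentNet`, the `ema_decay` and `n_mc_samples` values, and the entropy threshold of 0.5 are assumptions made for the example, and the paper's dual-uncertainty criterion is more elaborate than the single entropy measure used here.

```python
# Minimal mean-teacher sketch with MC-dropout uncertainty masking (illustrative only).
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class StudentNet(nn.Module):
    """Toy 3D segmentation network standing in for the full VP extraction model."""
    def __init__(self, in_ch=2, n_classes=2):
        super().__init__()
        self.conv = nn.Conv3d(in_ch, 16, kernel_size=3, padding=1)
        self.drop = nn.Dropout3d(p=0.2)       # dropout kept active for MC sampling
        self.head = nn.Conv3d(16, n_classes, kernel_size=1)

    def forward(self, x):
        return self.head(self.drop(F.relu(self.conv(x))))

@torch.no_grad()
def ema_update(teacher, student, ema_decay=0.99):
    """Teacher weights track an exponential moving average of the student."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(ema_decay).add_(s, alpha=1.0 - ema_decay)

@torch.no_grad()
def mc_uncertainty(teacher, x, n_mc_samples=8):
    """Mean prediction and predictive entropy from Monte Carlo dropout passes
    (one simple uncertainty cue; the paper's DUAI combines two such cues)."""
    teacher.train()                           # keep dropout stochastic during sampling
    probs = torch.stack([F.softmax(teacher(x), dim=1) for _ in range(n_mc_samples)])
    mean_p = probs.mean(dim=0)
    entropy = -(mean_p * torch.log(mean_p + 1e-8)).sum(dim=1, keepdim=True)
    return mean_p, entropy

# Unlabeled multimodal patch: channel 0 = T1w, channel 1 = FA (random stand-in data).
student = StudentNet()
teacher = copy.deepcopy(student)
x_unlabeled = torch.randn(1, 2, 16, 16, 16)

pseudo, unc = mc_uncertainty(teacher, x_unlabeled)
mask = (unc < 0.5).float()                    # placeholder threshold on entropy
consistency = (mask * (F.softmax(student(x_unlabeled), dim=1) - pseudo) ** 2).mean()
consistency.backward()                        # only low-uncertainty voxels supervise the student
ema_update(teacher, student)
```

The design point this illustrates is the label-efficiency claim: labeled volumes train the student with an ordinary supervised loss, while unlabeled volumes contribute only through a consistency term that is down-weighted wherever the teacher's predictions are ambiguous, which is the role the dual uncertainty-aware ambiguity identification plays in the proposed method.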