Explainable artificial intelligence for neuroimaging-based dementia diagnosis and prognosis.


Author(s): Frederik Barkhof, James H Cole, Phoebe Imms, Andrei Irimia, Sophie A Martin, Jiongqi Qu, An Zhao

Language: eng

Classification: 363 Other social problems and services

Publication info: United States : medRxiv : the preprint server for health sciences, 2025

Physical description:

Collection: NCBI

ID: 218620

INTRODUCTION: Artificial intelligence and neuroimaging enable accurate dementia prediction, but 'black box' models can be difficult to trust. Explainable artificial intelligence (XAI) describes techniques for understanding model behaviour and the influence of input features; however, deciding which method is most appropriate is non-trivial. Vision transformers (ViT) have also gained popularity, providing a self-explainable alternative to traditional convolutional neural networks (CNN). METHODS: We used T1-weighted MRI to train models on two tasks: Alzheimer's disease (AD) classification (diagnosis) and predicting conversion from mild cognitive impairment (MCI) to AD (prognosis). We compared ten XAI methods across CNN and ViT architectures. RESULTS: Models achieved balanced accuracies of 81% for diagnosis and 67% for prognosis. XAI outputs highlighted brain regions relevant to AD and contained useful information for MCI prognosis. DISCUSSION: XAI can be used to verify that models rely on relevant features and to generate valuable measures for further analysis.
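The abstract reports balanced accuracy rather than plain accuracy, which matters for imbalanced cohorts such as MCI converters versus non-converters: balanced accuracy is the mean of per-class recall, so a model cannot score well by favouring the majority class. A minimal sketch of the metric (the function name and pure-Python implementation are illustrative, not taken from the paper):

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recall; for binary tasks this is
    (sensitivity + specificity) / 2."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        # indices of samples whose true label is c
        idx = [i for i, t in enumerate(y_true) if t == c]
        correct = sum(1 for i in idx if y_pred[i] == c)
        recalls.append(correct / len(idx))
    return sum(recalls) / len(recalls)

# e.g. with 3 true negatives and 1 true positive, one false positive:
# class-0 recall = 2/3, class-1 recall = 1/1, balanced accuracy = 5/6
print(balanced_accuracy([0, 0, 0, 1], [0, 0, 1, 1]))
```

In practice a library implementation such as scikit-learn's `balanced_accuracy_score` would typically be used instead.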

LIBRARY - Ho Chi Minh City University of Technology (HUTECH)
